Jan 31 01:09:23 np0005603609 kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Jan 31 01:09:23 np0005603609 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 31 01:09:23 np0005603609 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 01:09:23 np0005603609 kernel: BIOS-provided physical RAM map:
Jan 31 01:09:23 np0005603609 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 31 01:09:23 np0005603609 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 31 01:09:23 np0005603609 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 31 01:09:23 np0005603609 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 31 01:09:23 np0005603609 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 31 01:09:23 np0005603609 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 31 01:09:23 np0005603609 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 31 01:09:23 np0005603609 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 31 01:09:23 np0005603609 kernel: NX (Execute Disable) protection: active
Jan 31 01:09:23 np0005603609 kernel: APIC: Static calls initialized
Jan 31 01:09:23 np0005603609 kernel: SMBIOS 2.8 present.
Jan 31 01:09:23 np0005603609 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 31 01:09:23 np0005603609 kernel: Hypervisor detected: KVM
Jan 31 01:09:23 np0005603609 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 31 01:09:23 np0005603609 kernel: kvm-clock: using sched offset of 11165307892 cycles
Jan 31 01:09:23 np0005603609 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 31 01:09:23 np0005603609 kernel: tsc: Detected 2799.998 MHz processor
Jan 31 01:09:23 np0005603609 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 31 01:09:23 np0005603609 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 31 01:09:23 np0005603609 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 31 01:09:23 np0005603609 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 31 01:09:23 np0005603609 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 31 01:09:23 np0005603609 kernel: Using GB pages for direct mapping
Jan 31 01:09:23 np0005603609 kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Jan 31 01:09:23 np0005603609 kernel: ACPI: Early table checksum verification disabled
Jan 31 01:09:23 np0005603609 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 31 01:09:23 np0005603609 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 01:09:23 np0005603609 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 01:09:23 np0005603609 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 01:09:23 np0005603609 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 31 01:09:23 np0005603609 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 01:09:23 np0005603609 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 01:09:23 np0005603609 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 31 01:09:23 np0005603609 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 31 01:09:23 np0005603609 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 31 01:09:23 np0005603609 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 31 01:09:23 np0005603609 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 31 01:09:23 np0005603609 kernel: No NUMA configuration found
Jan 31 01:09:23 np0005603609 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 31 01:09:23 np0005603609 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 31 01:09:23 np0005603609 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 31 01:09:23 np0005603609 kernel: Zone ranges:
Jan 31 01:09:23 np0005603609 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 31 01:09:23 np0005603609 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 31 01:09:23 np0005603609 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 01:09:23 np0005603609 kernel:  Device   empty
Jan 31 01:09:23 np0005603609 kernel: Movable zone start for each node
Jan 31 01:09:23 np0005603609 kernel: Early memory node ranges
Jan 31 01:09:23 np0005603609 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 31 01:09:23 np0005603609 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 31 01:09:23 np0005603609 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 01:09:23 np0005603609 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 31 01:09:23 np0005603609 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 31 01:09:23 np0005603609 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 31 01:09:23 np0005603609 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 31 01:09:23 np0005603609 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 31 01:09:23 np0005603609 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 31 01:09:23 np0005603609 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 31 01:09:23 np0005603609 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 31 01:09:23 np0005603609 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 31 01:09:23 np0005603609 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 31 01:09:23 np0005603609 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 31 01:09:23 np0005603609 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 31 01:09:23 np0005603609 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 31 01:09:23 np0005603609 kernel: TSC deadline timer available
Jan 31 01:09:23 np0005603609 kernel: CPU topo: Max. logical packages:   8
Jan 31 01:09:23 np0005603609 kernel: CPU topo: Max. logical dies:       8
Jan 31 01:09:23 np0005603609 kernel: CPU topo: Max. dies per package:   1
Jan 31 01:09:23 np0005603609 kernel: CPU topo: Max. threads per core:   1
Jan 31 01:09:23 np0005603609 kernel: CPU topo: Num. cores per package:     1
Jan 31 01:09:23 np0005603609 kernel: CPU topo: Num. threads per package:   1
Jan 31 01:09:23 np0005603609 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 31 01:09:23 np0005603609 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 31 01:09:23 np0005603609 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 31 01:09:23 np0005603609 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 31 01:09:23 np0005603609 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 31 01:09:23 np0005603609 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 31 01:09:23 np0005603609 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 31 01:09:23 np0005603609 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 31 01:09:23 np0005603609 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 31 01:09:23 np0005603609 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 31 01:09:23 np0005603609 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 31 01:09:23 np0005603609 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 31 01:09:23 np0005603609 kernel: Booting paravirtualized kernel on KVM
Jan 31 01:09:23 np0005603609 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 31 01:09:23 np0005603609 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 31 01:09:23 np0005603609 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 31 01:09:23 np0005603609 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 31 01:09:23 np0005603609 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 01:09:23 np0005603609 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Jan 31 01:09:23 np0005603609 kernel: random: crng init done
Jan 31 01:09:23 np0005603609 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 31 01:09:23 np0005603609 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 31 01:09:23 np0005603609 kernel: Fallback order for Node 0: 0 
Jan 31 01:09:23 np0005603609 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 31 01:09:23 np0005603609 kernel: Policy zone: Normal
Jan 31 01:09:23 np0005603609 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 31 01:09:23 np0005603609 kernel: software IO TLB: area num 8.
Jan 31 01:09:23 np0005603609 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 31 01:09:23 np0005603609 kernel: ftrace: allocating 49438 entries in 194 pages
Jan 31 01:09:23 np0005603609 kernel: ftrace: allocated 194 pages with 3 groups
Jan 31 01:09:23 np0005603609 kernel: Dynamic Preempt: voluntary
Jan 31 01:09:23 np0005603609 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 31 01:09:23 np0005603609 kernel: rcu: 	RCU event tracing is enabled.
Jan 31 01:09:23 np0005603609 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 31 01:09:23 np0005603609 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 31 01:09:23 np0005603609 kernel: 	Rude variant of Tasks RCU enabled.
Jan 31 01:09:23 np0005603609 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 31 01:09:23 np0005603609 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 31 01:09:23 np0005603609 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 31 01:09:23 np0005603609 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 01:09:23 np0005603609 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 01:09:23 np0005603609 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 01:09:23 np0005603609 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 31 01:09:23 np0005603609 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 31 01:09:23 np0005603609 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 31 01:09:23 np0005603609 kernel: Console: colour VGA+ 80x25
Jan 31 01:09:23 np0005603609 kernel: printk: console [ttyS0] enabled
Jan 31 01:09:23 np0005603609 kernel: ACPI: Core revision 20230331
Jan 31 01:09:23 np0005603609 kernel: APIC: Switch to symmetric I/O mode setup
Jan 31 01:09:23 np0005603609 kernel: x2apic enabled
Jan 31 01:09:23 np0005603609 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 31 01:09:23 np0005603609 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 31 01:09:23 np0005603609 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 31 01:09:23 np0005603609 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 31 01:09:23 np0005603609 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 31 01:09:23 np0005603609 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 31 01:09:23 np0005603609 kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Jan 31 01:09:23 np0005603609 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 31 01:09:23 np0005603609 kernel: Spectre V2 : Mitigation: Retpolines
Jan 31 01:09:23 np0005603609 kernel: RETBleed: Mitigation: untrained return thunk
Jan 31 01:09:23 np0005603609 kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Jan 31 01:09:23 np0005603609 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 31 01:09:23 np0005603609 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 31 01:09:23 np0005603609 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 31 01:09:23 np0005603609 kernel: active return thunk: retbleed_return_thunk
Jan 31 01:09:23 np0005603609 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 31 01:09:23 np0005603609 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 31 01:09:23 np0005603609 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 31 01:09:23 np0005603609 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 31 01:09:23 np0005603609 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 31 01:09:23 np0005603609 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 31 01:09:23 np0005603609 kernel: Freeing SMP alternatives memory: 40K
Jan 31 01:09:23 np0005603609 kernel: pid_max: default: 32768 minimum: 301
Jan 31 01:09:23 np0005603609 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 31 01:09:23 np0005603609 kernel: landlock: Up and running.
Jan 31 01:09:23 np0005603609 kernel: Yama: becoming mindful.
Jan 31 01:09:23 np0005603609 kernel: SELinux:  Initializing.
Jan 31 01:09:23 np0005603609 kernel: LSM support for eBPF active
Jan 31 01:09:23 np0005603609 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 01:09:23 np0005603609 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 01:09:23 np0005603609 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 31 01:09:23 np0005603609 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 31 01:09:23 np0005603609 kernel: ... version:                0
Jan 31 01:09:23 np0005603609 kernel: ... bit width:              48
Jan 31 01:09:23 np0005603609 kernel: ... generic registers:      6
Jan 31 01:09:23 np0005603609 kernel: ... value mask:             0000ffffffffffff
Jan 31 01:09:23 np0005603609 kernel: ... max period:             00007fffffffffff
Jan 31 01:09:23 np0005603609 kernel: ... fixed-purpose events:   0
Jan 31 01:09:23 np0005603609 kernel: ... event mask:             000000000000003f
Jan 31 01:09:23 np0005603609 kernel: signal: max sigframe size: 1776
Jan 31 01:09:23 np0005603609 kernel: rcu: Hierarchical SRCU implementation.
Jan 31 01:09:23 np0005603609 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 31 01:09:23 np0005603609 kernel: smp: Bringing up secondary CPUs ...
Jan 31 01:09:23 np0005603609 kernel: smpboot: x86: Booting SMP configuration:
Jan 31 01:09:23 np0005603609 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 31 01:09:23 np0005603609 kernel: smp: Brought up 1 node, 8 CPUs
Jan 31 01:09:23 np0005603609 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 31 01:09:23 np0005603609 kernel: node 0 deferred pages initialised in 9ms
Jan 31 01:09:23 np0005603609 kernel: Memory: 7763820K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618408K reserved, 0K cma-reserved)
Jan 31 01:09:23 np0005603609 kernel: devtmpfs: initialized
Jan 31 01:09:23 np0005603609 kernel: x86/mm: Memory block size: 128MB
Jan 31 01:09:23 np0005603609 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 31 01:09:23 np0005603609 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 31 01:09:23 np0005603609 kernel: pinctrl core: initialized pinctrl subsystem
Jan 31 01:09:23 np0005603609 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 31 01:09:23 np0005603609 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 31 01:09:23 np0005603609 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 31 01:09:23 np0005603609 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 31 01:09:23 np0005603609 kernel: audit: initializing netlink subsys (disabled)
Jan 31 01:09:23 np0005603609 kernel: audit: type=2000 audit(1769839761.310:1): state=initialized audit_enabled=0 res=1
Jan 31 01:09:23 np0005603609 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 31 01:09:23 np0005603609 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 31 01:09:23 np0005603609 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 31 01:09:23 np0005603609 kernel: cpuidle: using governor menu
Jan 31 01:09:23 np0005603609 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 31 01:09:23 np0005603609 kernel: PCI: Using configuration type 1 for base access
Jan 31 01:09:23 np0005603609 kernel: PCI: Using configuration type 1 for extended access
Jan 31 01:09:23 np0005603609 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 31 01:09:23 np0005603609 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 31 01:09:23 np0005603609 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 31 01:09:23 np0005603609 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 31 01:09:23 np0005603609 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 31 01:09:23 np0005603609 kernel: Demotion targets for Node 0: null
Jan 31 01:09:23 np0005603609 kernel: cryptd: max_cpu_qlen set to 1000
Jan 31 01:09:23 np0005603609 kernel: ACPI: Added _OSI(Module Device)
Jan 31 01:09:23 np0005603609 kernel: ACPI: Added _OSI(Processor Device)
Jan 31 01:09:23 np0005603609 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 31 01:09:23 np0005603609 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 31 01:09:23 np0005603609 kernel: ACPI: Interpreter enabled
Jan 31 01:09:23 np0005603609 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 31 01:09:23 np0005603609 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 31 01:09:23 np0005603609 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 31 01:09:23 np0005603609 kernel: PCI: Using E820 reservations for host bridge windows
Jan 31 01:09:23 np0005603609 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 31 01:09:23 np0005603609 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 31 01:09:23 np0005603609 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [3] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [4] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [5] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [6] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [7] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [8] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [9] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [10] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [11] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [12] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [13] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [14] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [15] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [16] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [17] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [18] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [19] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [20] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [21] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [22] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [23] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [24] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [25] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [26] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [27] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [28] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [29] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [30] registered
Jan 31 01:09:23 np0005603609 kernel: acpiphp: Slot [31] registered
Jan 31 01:09:23 np0005603609 kernel: PCI host bridge to bus 0000:00
Jan 31 01:09:23 np0005603609 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 31 01:09:23 np0005603609 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 31 01:09:23 np0005603609 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 31 01:09:23 np0005603609 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 31 01:09:23 np0005603609 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 31 01:09:23 np0005603609 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 31 01:09:23 np0005603609 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 31 01:09:23 np0005603609 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 31 01:09:23 np0005603609 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 31 01:09:23 np0005603609 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 31 01:09:23 np0005603609 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 31 01:09:23 np0005603609 kernel: iommu: Default domain type: Translated
Jan 31 01:09:23 np0005603609 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 31 01:09:23 np0005603609 kernel: SCSI subsystem initialized
Jan 31 01:09:23 np0005603609 kernel: ACPI: bus type USB registered
Jan 31 01:09:23 np0005603609 kernel: usbcore: registered new interface driver usbfs
Jan 31 01:09:23 np0005603609 kernel: usbcore: registered new interface driver hub
Jan 31 01:09:23 np0005603609 kernel: usbcore: registered new device driver usb
Jan 31 01:09:23 np0005603609 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 31 01:09:23 np0005603609 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 31 01:09:23 np0005603609 kernel: PTP clock support registered
Jan 31 01:09:23 np0005603609 kernel: EDAC MC: Ver: 3.0.0
Jan 31 01:09:23 np0005603609 kernel: NetLabel: Initializing
Jan 31 01:09:23 np0005603609 kernel: NetLabel:  domain hash size = 128
Jan 31 01:09:23 np0005603609 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 31 01:09:23 np0005603609 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 31 01:09:23 np0005603609 kernel: PCI: Using ACPI for IRQ routing
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 31 01:09:23 np0005603609 kernel: vgaarb: loaded
Jan 31 01:09:23 np0005603609 kernel: clocksource: Switched to clocksource kvm-clock
Jan 31 01:09:23 np0005603609 kernel: VFS: Disk quotas dquot_6.6.0
Jan 31 01:09:23 np0005603609 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 31 01:09:23 np0005603609 kernel: pnp: PnP ACPI init
Jan 31 01:09:23 np0005603609 kernel: pnp: PnP ACPI: found 5 devices
Jan 31 01:09:23 np0005603609 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 31 01:09:23 np0005603609 kernel: NET: Registered PF_INET protocol family
Jan 31 01:09:23 np0005603609 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 31 01:09:23 np0005603609 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 31 01:09:23 np0005603609 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 31 01:09:23 np0005603609 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 31 01:09:23 np0005603609 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 31 01:09:23 np0005603609 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 31 01:09:23 np0005603609 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 31 01:09:23 np0005603609 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 01:09:23 np0005603609 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 01:09:23 np0005603609 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 31 01:09:23 np0005603609 kernel: NET: Registered PF_XDP protocol family
Jan 31 01:09:23 np0005603609 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 31 01:09:23 np0005603609 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 31 01:09:23 np0005603609 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 31 01:09:23 np0005603609 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 31 01:09:23 np0005603609 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 31 01:09:23 np0005603609 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 31 01:09:23 np0005603609 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 46840 usecs
Jan 31 01:09:23 np0005603609 kernel: PCI: CLS 0 bytes, default 64
Jan 31 01:09:23 np0005603609 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 31 01:09:23 np0005603609 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 31 01:09:23 np0005603609 kernel: ACPI: bus type thunderbolt registered
Jan 31 01:09:23 np0005603609 kernel: Trying to unpack rootfs image as initramfs...
Jan 31 01:09:23 np0005603609 kernel: Initialise system trusted keyrings
Jan 31 01:09:23 np0005603609 kernel: Key type blacklist registered
Jan 31 01:09:23 np0005603609 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 31 01:09:23 np0005603609 kernel: zbud: loaded
Jan 31 01:09:23 np0005603609 kernel: integrity: Platform Keyring initialized
Jan 31 01:09:23 np0005603609 kernel: integrity: Machine keyring initialized
Jan 31 01:09:23 np0005603609 kernel: Freeing initrd memory: 88000K
Jan 31 01:09:23 np0005603609 kernel: NET: Registered PF_ALG protocol family
Jan 31 01:09:23 np0005603609 kernel: xor: automatically using best checksumming function   avx
Jan 31 01:09:23 np0005603609 kernel: Key type asymmetric registered
Jan 31 01:09:23 np0005603609 kernel: Asymmetric key parser 'x509' registered
Jan 31 01:09:23 np0005603609 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 31 01:09:23 np0005603609 kernel: io scheduler mq-deadline registered
Jan 31 01:09:23 np0005603609 kernel: io scheduler kyber registered
Jan 31 01:09:23 np0005603609 kernel: io scheduler bfq registered
Jan 31 01:09:23 np0005603609 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 31 01:09:23 np0005603609 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 31 01:09:23 np0005603609 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 31 01:09:23 np0005603609 kernel: ACPI: button: Power Button [PWRF]
Jan 31 01:09:23 np0005603609 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 31 01:09:23 np0005603609 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 31 01:09:23 np0005603609 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 31 01:09:23 np0005603609 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 31 01:09:23 np0005603609 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 31 01:09:23 np0005603609 kernel: Non-volatile memory driver v1.3
Jan 31 01:09:23 np0005603609 kernel: rdac: device handler registered
Jan 31 01:09:23 np0005603609 kernel: hp_sw: device handler registered
Jan 31 01:09:23 np0005603609 kernel: emc: device handler registered
Jan 31 01:09:23 np0005603609 kernel: alua: device handler registered
Jan 31 01:09:23 np0005603609 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 31 01:09:23 np0005603609 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 31 01:09:23 np0005603609 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 31 01:09:23 np0005603609 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 31 01:09:23 np0005603609 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 31 01:09:23 np0005603609 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 31 01:09:23 np0005603609 kernel: usb usb1: Product: UHCI Host Controller
Jan 31 01:09:23 np0005603609 kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Jan 31 01:09:23 np0005603609 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 31 01:09:23 np0005603609 kernel: hub 1-0:1.0: USB hub found
Jan 31 01:09:23 np0005603609 kernel: hub 1-0:1.0: 2 ports detected
Jan 31 01:09:23 np0005603609 kernel: usbcore: registered new interface driver usbserial_generic
Jan 31 01:09:23 np0005603609 kernel: usbserial: USB Serial support registered for generic
Jan 31 01:09:23 np0005603609 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 31 01:09:23 np0005603609 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 31 01:09:23 np0005603609 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 31 01:09:23 np0005603609 kernel: mousedev: PS/2 mouse device common for all mice
Jan 31 01:09:23 np0005603609 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 31 01:09:23 np0005603609 kernel: rtc_cmos 00:04: registered as rtc0
Jan 31 01:09:23 np0005603609 kernel: rtc_cmos 00:04: setting system clock to 2026-01-31T06:09:22 UTC (1769839762)
Jan 31 01:09:23 np0005603609 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 31 01:09:23 np0005603609 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 31 01:09:23 np0005603609 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 31 01:09:23 np0005603609 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 31 01:09:23 np0005603609 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 31 01:09:23 np0005603609 kernel: usbcore: registered new interface driver usbhid
Jan 31 01:09:23 np0005603609 kernel: usbhid: USB HID core driver
Jan 31 01:09:23 np0005603609 kernel: drop_monitor: Initializing network drop monitor service
Jan 31 01:09:23 np0005603609 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 31 01:09:23 np0005603609 kernel: Initializing XFRM netlink socket
Jan 31 01:09:23 np0005603609 kernel: NET: Registered PF_INET6 protocol family
Jan 31 01:09:23 np0005603609 kernel: Segment Routing with IPv6
Jan 31 01:09:23 np0005603609 kernel: NET: Registered PF_PACKET protocol family
Jan 31 01:09:23 np0005603609 kernel: mpls_gso: MPLS GSO support
Jan 31 01:09:23 np0005603609 kernel: IPI shorthand broadcast: enabled
Jan 31 01:09:23 np0005603609 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 31 01:09:23 np0005603609 kernel: AES CTR mode by8 optimization enabled
Jan 31 01:09:23 np0005603609 kernel: sched_clock: Marking stable (1455001763, 155346331)->(1751901211, -141553117)
Jan 31 01:09:23 np0005603609 kernel: registered taskstats version 1
Jan 31 01:09:23 np0005603609 kernel: Loading compiled-in X.509 certificates
Jan 31 01:09:23 np0005603609 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 01:09:23 np0005603609 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 31 01:09:23 np0005603609 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 31 01:09:23 np0005603609 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 31 01:09:23 np0005603609 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 31 01:09:23 np0005603609 kernel: Demotion targets for Node 0: null
Jan 31 01:09:23 np0005603609 kernel: page_owner is disabled
Jan 31 01:09:23 np0005603609 kernel: Key type .fscrypt registered
Jan 31 01:09:23 np0005603609 kernel: Key type fscrypt-provisioning registered
Jan 31 01:09:23 np0005603609 kernel: Key type big_key registered
Jan 31 01:09:23 np0005603609 kernel: Key type encrypted registered
Jan 31 01:09:23 np0005603609 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 31 01:09:23 np0005603609 kernel: Loading compiled-in module X.509 certificates
Jan 31 01:09:23 np0005603609 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 01:09:23 np0005603609 kernel: ima: Allocated hash algorithm: sha256
Jan 31 01:09:23 np0005603609 kernel: ima: No architecture policies found
Jan 31 01:09:23 np0005603609 kernel: evm: Initialising EVM extended attributes:
Jan 31 01:09:23 np0005603609 kernel: evm: security.selinux
Jan 31 01:09:23 np0005603609 kernel: evm: security.SMACK64 (disabled)
Jan 31 01:09:23 np0005603609 kernel: evm: security.SMACK64EXEC (disabled)
Jan 31 01:09:23 np0005603609 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 31 01:09:23 np0005603609 kernel: evm: security.SMACK64MMAP (disabled)
Jan 31 01:09:23 np0005603609 kernel: evm: security.apparmor (disabled)
Jan 31 01:09:23 np0005603609 kernel: evm: security.ima
Jan 31 01:09:23 np0005603609 kernel: evm: security.capability
Jan 31 01:09:23 np0005603609 kernel: evm: HMAC attrs: 0x1
Jan 31 01:09:23 np0005603609 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 31 01:09:23 np0005603609 kernel: Running certificate verification RSA selftest
Jan 31 01:09:23 np0005603609 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 31 01:09:23 np0005603609 kernel: Running certificate verification ECDSA selftest
Jan 31 01:09:23 np0005603609 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 31 01:09:23 np0005603609 kernel: clk: Disabling unused clocks
Jan 31 01:09:23 np0005603609 kernel: Freeing unused decrypted memory: 2028K
Jan 31 01:09:23 np0005603609 kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 31 01:09:23 np0005603609 kernel: Write protecting the kernel read-only data: 30720k
Jan 31 01:09:23 np0005603609 kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Jan 31 01:09:23 np0005603609 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 31 01:09:23 np0005603609 kernel: Run /init as init process
Jan 31 01:09:23 np0005603609 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 01:09:23 np0005603609 systemd: Detected virtualization kvm.
Jan 31 01:09:23 np0005603609 systemd: Detected architecture x86-64.
Jan 31 01:09:23 np0005603609 systemd: Running in initrd.
Jan 31 01:09:23 np0005603609 systemd: No hostname configured, using default hostname.
Jan 31 01:09:23 np0005603609 systemd: Hostname set to <localhost>.
Jan 31 01:09:23 np0005603609 systemd: Initializing machine ID from VM UUID.
Jan 31 01:09:23 np0005603609 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 31 01:09:23 np0005603609 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 31 01:09:23 np0005603609 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 31 01:09:23 np0005603609 kernel: usb 1-1: Manufacturer: QEMU
Jan 31 01:09:23 np0005603609 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 31 01:09:23 np0005603609 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 31 01:09:23 np0005603609 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 31 01:09:23 np0005603609 systemd: Queued start job for default target Initrd Default Target.
Jan 31 01:09:23 np0005603609 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 01:09:23 np0005603609 systemd: Reached target Local Encrypted Volumes.
Jan 31 01:09:23 np0005603609 systemd: Reached target Initrd /usr File System.
Jan 31 01:09:23 np0005603609 systemd: Reached target Local File Systems.
Jan 31 01:09:23 np0005603609 systemd: Reached target Path Units.
Jan 31 01:09:23 np0005603609 systemd: Reached target Slice Units.
Jan 31 01:09:23 np0005603609 systemd: Reached target Swaps.
Jan 31 01:09:23 np0005603609 systemd: Reached target Timer Units.
Jan 31 01:09:23 np0005603609 systemd: Listening on D-Bus System Message Bus Socket.
Jan 31 01:09:23 np0005603609 systemd: Listening on Journal Socket (/dev/log).
Jan 31 01:09:23 np0005603609 systemd: Listening on Journal Socket.
Jan 31 01:09:23 np0005603609 systemd: Listening on udev Control Socket.
Jan 31 01:09:23 np0005603609 systemd: Listening on udev Kernel Socket.
Jan 31 01:09:23 np0005603609 systemd: Reached target Socket Units.
Jan 31 01:09:23 np0005603609 systemd: Starting Create List of Static Device Nodes...
Jan 31 01:09:23 np0005603609 systemd: Starting Journal Service...
Jan 31 01:09:23 np0005603609 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 01:09:23 np0005603609 systemd: Starting Apply Kernel Variables...
Jan 31 01:09:23 np0005603609 systemd: Starting Create System Users...
Jan 31 01:09:23 np0005603609 systemd: Starting Setup Virtual Console...
Jan 31 01:09:23 np0005603609 systemd: Finished Create List of Static Device Nodes.
Jan 31 01:09:23 np0005603609 systemd: Finished Apply Kernel Variables.
Jan 31 01:09:23 np0005603609 systemd-journald[306]: Journal started
Jan 31 01:09:23 np0005603609 systemd-journald[306]: Runtime Journal (/run/log/journal/231927d41ded4b84843c456d697af567) is 8.0M, max 153.6M, 145.6M free.
Jan 31 01:09:23 np0005603609 systemd: Started Journal Service.
Jan 31 01:09:23 np0005603609 systemd-sysusers[310]: Creating group 'users' with GID 100.
Jan 31 01:09:23 np0005603609 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Jan 31 01:09:23 np0005603609 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 31 01:09:23 np0005603609 systemd[1]: Finished Create System Users.
Jan 31 01:09:23 np0005603609 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 01:09:23 np0005603609 systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 01:09:23 np0005603609 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 01:09:23 np0005603609 systemd[1]: Finished Setup Virtual Console.
Jan 31 01:09:23 np0005603609 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 31 01:09:23 np0005603609 systemd[1]: Starting dracut cmdline hook...
Jan 31 01:09:23 np0005603609 systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 01:09:23 np0005603609 dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Jan 31 01:09:23 np0005603609 dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 01:09:23 np0005603609 systemd[1]: Finished dracut cmdline hook.
Jan 31 01:09:23 np0005603609 systemd[1]: Starting dracut pre-udev hook...
Jan 31 01:09:23 np0005603609 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 31 01:09:23 np0005603609 kernel: device-mapper: uevent: version 1.0.3
Jan 31 01:09:23 np0005603609 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 31 01:09:23 np0005603609 kernel: RPC: Registered named UNIX socket transport module.
Jan 31 01:09:23 np0005603609 kernel: RPC: Registered udp transport module.
Jan 31 01:09:23 np0005603609 kernel: RPC: Registered tcp transport module.
Jan 31 01:09:23 np0005603609 kernel: RPC: Registered tcp-with-tls transport module.
Jan 31 01:09:23 np0005603609 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 31 01:09:23 np0005603609 rpc.statd[441]: Version 2.5.4 starting
Jan 31 01:09:23 np0005603609 rpc.statd[441]: Initializing NSM state
Jan 31 01:09:23 np0005603609 rpc.idmapd[446]: Setting log level to 0
Jan 31 01:09:23 np0005603609 systemd[1]: Finished dracut pre-udev hook.
Jan 31 01:09:23 np0005603609 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 01:09:23 np0005603609 systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 01:09:23 np0005603609 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 01:09:23 np0005603609 systemd[1]: Starting dracut pre-trigger hook...
Jan 31 01:09:23 np0005603609 systemd[1]: Finished dracut pre-trigger hook.
Jan 31 01:09:23 np0005603609 systemd[1]: Starting Coldplug All udev Devices...
Jan 31 01:09:23 np0005603609 systemd[1]: Created slice Slice /system/modprobe.
Jan 31 01:09:23 np0005603609 systemd[1]: Starting Load Kernel Module configfs...
Jan 31 01:09:23 np0005603609 systemd[1]: Finished Coldplug All udev Devices.
Jan 31 01:09:23 np0005603609 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 01:09:23 np0005603609 systemd[1]: Finished Load Kernel Module configfs.
Jan 31 01:09:23 np0005603609 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 01:09:23 np0005603609 systemd[1]: Reached target Network.
Jan 31 01:09:23 np0005603609 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 01:09:23 np0005603609 systemd[1]: Starting dracut initqueue hook...
Jan 31 01:09:23 np0005603609 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 31 01:09:23 np0005603609 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 31 01:09:23 np0005603609 kernel: vda: vda1
Jan 31 01:09:23 np0005603609 kernel: scsi host0: ata_piix
Jan 31 01:09:23 np0005603609 kernel: scsi host1: ata_piix
Jan 31 01:09:23 np0005603609 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 31 01:09:23 np0005603609 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 31 01:09:23 np0005603609 systemd-udevd[495]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 01:09:23 np0005603609 systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 01:09:24 np0005603609 kernel: ata1: found unknown device (class 0)
Jan 31 01:09:24 np0005603609 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 31 01:09:24 np0005603609 systemd[1]: Reached target Initrd Root Device.
Jan 31 01:09:24 np0005603609 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 31 01:09:24 np0005603609 systemd[1]: Mounting Kernel Configuration File System...
Jan 31 01:09:24 np0005603609 systemd[1]: Mounted Kernel Configuration File System.
Jan 31 01:09:24 np0005603609 systemd[1]: Reached target System Initialization.
Jan 31 01:09:24 np0005603609 systemd[1]: Reached target Basic System.
Jan 31 01:09:24 np0005603609 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 31 01:09:24 np0005603609 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 31 01:09:24 np0005603609 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 31 01:09:24 np0005603609 systemd[1]: Finished dracut initqueue hook.
Jan 31 01:09:24 np0005603609 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 01:09:24 np0005603609 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 31 01:09:24 np0005603609 systemd[1]: Reached target Remote File Systems.
Jan 31 01:09:24 np0005603609 systemd[1]: Starting dracut pre-mount hook...
Jan 31 01:09:24 np0005603609 systemd[1]: Finished dracut pre-mount hook.
Jan 31 01:09:24 np0005603609 systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Jan 31 01:09:24 np0005603609 systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Jan 31 01:09:24 np0005603609 systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 01:09:24 np0005603609 systemd[1]: Mounting /sysroot...
Jan 31 01:09:24 np0005603609 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 31 01:09:24 np0005603609 kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Jan 31 01:09:25 np0005603609 kernel: XFS (vda1): Ending clean mount
Jan 31 01:09:25 np0005603609 systemd[1]: Mounted /sysroot.
Jan 31 01:09:25 np0005603609 systemd[1]: Reached target Initrd Root File System.
Jan 31 01:09:25 np0005603609 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 31 01:09:25 np0005603609 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 31 01:09:25 np0005603609 systemd[1]: Reached target Initrd File Systems.
Jan 31 01:09:25 np0005603609 systemd[1]: Reached target Initrd Default Target.
Jan 31 01:09:25 np0005603609 systemd[1]: Starting dracut mount hook...
Jan 31 01:09:25 np0005603609 systemd[1]: Finished dracut mount hook.
Jan 31 01:09:25 np0005603609 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 31 01:09:25 np0005603609 rpc.idmapd[446]: exiting on signal 15
Jan 31 01:09:25 np0005603609 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 31 01:09:25 np0005603609 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped target Network.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped target Timer Units.
Jan 31 01:09:25 np0005603609 systemd[1]: dbus.socket: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 31 01:09:25 np0005603609 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped target Initrd Default Target.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped target Basic System.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped target Initrd Root Device.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped target Initrd /usr File System.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped target Path Units.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped target Remote File Systems.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped target Slice Units.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped target Socket Units.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped target System Initialization.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped target Local File Systems.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped target Swaps.
Jan 31 01:09:25 np0005603609 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped dracut mount hook.
Jan 31 01:09:25 np0005603609 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped dracut pre-mount hook.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 31 01:09:25 np0005603609 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 31 01:09:25 np0005603609 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped dracut initqueue hook.
Jan 31 01:09:25 np0005603609 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 01:09:25 np0005603609 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 31 01:09:25 np0005603609 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped Coldplug All udev Devices.
Jan 31 01:09:25 np0005603609 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped dracut pre-trigger hook.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 31 01:09:25 np0005603609 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped Setup Virtual Console.
Jan 31 01:09:25 np0005603609 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 31 01:09:25 np0005603609 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Closed udev Control Socket.
Jan 31 01:09:25 np0005603609 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Closed udev Kernel Socket.
Jan 31 01:09:25 np0005603609 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped dracut pre-udev hook.
Jan 31 01:09:25 np0005603609 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped dracut cmdline hook.
Jan 31 01:09:25 np0005603609 systemd[1]: Starting Cleanup udev Database...
Jan 31 01:09:25 np0005603609 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 31 01:09:25 np0005603609 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 31 01:09:25 np0005603609 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Stopped Create System Users.
Jan 31 01:09:25 np0005603609 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 31 01:09:25 np0005603609 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 31 01:09:25 np0005603609 systemd[1]: Finished Cleanup udev Database.
Jan 31 01:09:25 np0005603609 systemd[1]: Reached target Switch Root.
Jan 31 01:09:25 np0005603609 systemd[1]: Starting Switch Root...
Jan 31 01:09:25 np0005603609 systemd[1]: Switching root.
Jan 31 01:09:25 np0005603609 systemd-journald[306]: Journal stopped
Jan 31 01:09:27 np0005603609 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 31 01:09:27 np0005603609 kernel: audit: type=1404 audit(1769839765.941:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 31 01:09:27 np0005603609 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 01:09:27 np0005603609 kernel: SELinux:  policy capability open_perms=1
Jan 31 01:09:27 np0005603609 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 01:09:27 np0005603609 kernel: SELinux:  policy capability always_check_network=0
Jan 31 01:09:27 np0005603609 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 01:09:27 np0005603609 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 01:09:27 np0005603609 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 01:09:27 np0005603609 kernel: audit: type=1403 audit(1769839766.092:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 31 01:09:27 np0005603609 systemd: Successfully loaded SELinux policy in 164.096ms.
Jan 31 01:09:27 np0005603609 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 36.615ms.
Jan 31 01:09:27 np0005603609 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 01:09:27 np0005603609 systemd: Detected virtualization kvm.
Jan 31 01:09:27 np0005603609 systemd: Detected architecture x86-64.
Jan 31 01:09:27 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:09:27 np0005603609 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 31 01:09:27 np0005603609 systemd: Stopped Switch Root.
Jan 31 01:09:27 np0005603609 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 31 01:09:27 np0005603609 systemd: Created slice Slice /system/getty.
Jan 31 01:09:27 np0005603609 systemd: Created slice Slice /system/serial-getty.
Jan 31 01:09:27 np0005603609 systemd: Created slice Slice /system/sshd-keygen.
Jan 31 01:09:27 np0005603609 systemd: Created slice User and Session Slice.
Jan 31 01:09:27 np0005603609 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 01:09:27 np0005603609 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 31 01:09:27 np0005603609 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 31 01:09:27 np0005603609 systemd: Reached target Local Encrypted Volumes.
Jan 31 01:09:27 np0005603609 systemd: Stopped target Switch Root.
Jan 31 01:09:27 np0005603609 systemd: Stopped target Initrd File Systems.
Jan 31 01:09:27 np0005603609 systemd: Stopped target Initrd Root File System.
Jan 31 01:09:27 np0005603609 systemd: Reached target Local Integrity Protected Volumes.
Jan 31 01:09:27 np0005603609 systemd: Reached target Path Units.
Jan 31 01:09:27 np0005603609 systemd: Reached target rpc_pipefs.target.
Jan 31 01:09:27 np0005603609 systemd: Reached target Slice Units.
Jan 31 01:09:27 np0005603609 systemd: Reached target Swaps.
Jan 31 01:09:27 np0005603609 systemd: Reached target Local Verity Protected Volumes.
Jan 31 01:09:27 np0005603609 systemd: Listening on RPCbind Server Activation Socket.
Jan 31 01:09:27 np0005603609 systemd: Reached target RPC Port Mapper.
Jan 31 01:09:27 np0005603609 systemd: Listening on Process Core Dump Socket.
Jan 31 01:09:27 np0005603609 systemd: Listening on initctl Compatibility Named Pipe.
Jan 31 01:09:27 np0005603609 systemd: Listening on udev Control Socket.
Jan 31 01:09:27 np0005603609 systemd: Listening on udev Kernel Socket.
Jan 31 01:09:27 np0005603609 systemd: Mounting Huge Pages File System...
Jan 31 01:09:27 np0005603609 systemd: Mounting POSIX Message Queue File System...
Jan 31 01:09:27 np0005603609 systemd: Mounting Kernel Debug File System...
Jan 31 01:09:27 np0005603609 systemd: Mounting Kernel Trace File System...
Jan 31 01:09:27 np0005603609 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 01:09:27 np0005603609 systemd: Starting Create List of Static Device Nodes...
Jan 31 01:09:27 np0005603609 systemd: Starting Load Kernel Module configfs...
Jan 31 01:09:27 np0005603609 systemd: Starting Load Kernel Module drm...
Jan 31 01:09:27 np0005603609 systemd: Starting Load Kernel Module efi_pstore...
Jan 31 01:09:27 np0005603609 systemd: Starting Load Kernel Module fuse...
Jan 31 01:09:27 np0005603609 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 31 01:09:27 np0005603609 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 31 01:09:27 np0005603609 systemd: Stopped File System Check on Root Device.
Jan 31 01:09:27 np0005603609 systemd: Stopped Journal Service.
Jan 31 01:09:27 np0005603609 systemd: Starting Journal Service...
Jan 31 01:09:27 np0005603609 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 01:09:27 np0005603609 systemd: Starting Generate network units from Kernel command line...
Jan 31 01:09:27 np0005603609 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 01:09:27 np0005603609 systemd: Starting Remount Root and Kernel File Systems...
Jan 31 01:09:27 np0005603609 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 31 01:09:27 np0005603609 systemd: Starting Apply Kernel Variables...
Jan 31 01:09:27 np0005603609 kernel: fuse: init (API version 7.37)
Jan 31 01:09:27 np0005603609 systemd: Starting Coldplug All udev Devices...
Jan 31 01:09:27 np0005603609 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 31 01:09:27 np0005603609 systemd-journald[677]: Journal started
Jan 31 01:09:27 np0005603609 systemd-journald[677]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 01:09:27 np0005603609 systemd[1]: Queued start job for default target Multi-User System.
Jan 31 01:09:27 np0005603609 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 31 01:09:27 np0005603609 systemd: Started Journal Service.
Jan 31 01:09:27 np0005603609 systemd[1]: Mounted Huge Pages File System.
Jan 31 01:09:27 np0005603609 systemd[1]: Mounted POSIX Message Queue File System.
Jan 31 01:09:27 np0005603609 systemd[1]: Mounted Kernel Debug File System.
Jan 31 01:09:27 np0005603609 systemd[1]: Mounted Kernel Trace File System.
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Create List of Static Device Nodes.
Jan 31 01:09:27 np0005603609 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Load Kernel Module configfs.
Jan 31 01:09:27 np0005603609 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 31 01:09:27 np0005603609 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Load Kernel Module fuse.
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Generate network units from Kernel command line.
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Apply Kernel Variables.
Jan 31 01:09:27 np0005603609 kernel: ACPI: bus type drm_connector registered
Jan 31 01:09:27 np0005603609 systemd[1]: Mounting FUSE Control File System...
Jan 31 01:09:27 np0005603609 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 01:09:27 np0005603609 systemd[1]: Starting Rebuild Hardware Database...
Jan 31 01:09:27 np0005603609 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 31 01:09:27 np0005603609 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 31 01:09:27 np0005603609 systemd[1]: Starting Load/Save OS Random Seed...
Jan 31 01:09:27 np0005603609 systemd[1]: Starting Create System Users...
Jan 31 01:09:27 np0005603609 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Load Kernel Module drm.
Jan 31 01:09:27 np0005603609 systemd-journald[677]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 01:09:27 np0005603609 systemd-journald[677]: Received client request to flush runtime journal.
Jan 31 01:09:27 np0005603609 systemd[1]: Mounted FUSE Control File System.
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Coldplug All udev Devices.
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Create System Users.
Jan 31 01:09:27 np0005603609 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Load/Save OS Random Seed.
Jan 31 01:09:27 np0005603609 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 01:09:27 np0005603609 systemd[1]: Reached target Preparation for Local File Systems.
Jan 31 01:09:27 np0005603609 systemd[1]: Reached target Local File Systems.
Jan 31 01:09:27 np0005603609 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 31 01:09:27 np0005603609 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 31 01:09:27 np0005603609 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 31 01:09:27 np0005603609 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 31 01:09:27 np0005603609 systemd[1]: Starting Automatic Boot Loader Update...
Jan 31 01:09:27 np0005603609 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 31 01:09:27 np0005603609 systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 01:09:27 np0005603609 bootctl[696]: Couldn't find EFI system partition, skipping.
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Automatic Boot Loader Update.
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 01:09:27 np0005603609 systemd[1]: Starting Security Auditing Service...
Jan 31 01:09:27 np0005603609 systemd[1]: Starting RPC Bind...
Jan 31 01:09:27 np0005603609 systemd[1]: Starting Rebuild Journal Catalog...
Jan 31 01:09:27 np0005603609 auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 31 01:09:27 np0005603609 auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Rebuild Journal Catalog.
Jan 31 01:09:27 np0005603609 systemd[1]: Started RPC Bind.
Jan 31 01:09:27 np0005603609 augenrules[707]: /sbin/augenrules: No change
Jan 31 01:09:27 np0005603609 augenrules[722]: No rules
Jan 31 01:09:27 np0005603609 augenrules[722]: enabled 1
Jan 31 01:09:27 np0005603609 augenrules[722]: failure 1
Jan 31 01:09:27 np0005603609 augenrules[722]: pid 702
Jan 31 01:09:27 np0005603609 augenrules[722]: rate_limit 0
Jan 31 01:09:27 np0005603609 augenrules[722]: backlog_limit 8192
Jan 31 01:09:27 np0005603609 augenrules[722]: lost 0
Jan 31 01:09:27 np0005603609 augenrules[722]: backlog 0
Jan 31 01:09:27 np0005603609 augenrules[722]: backlog_wait_time 60000
Jan 31 01:09:27 np0005603609 augenrules[722]: backlog_wait_time_actual 0
Jan 31 01:09:27 np0005603609 systemd[1]: Started Security Auditing Service.
Jan 31 01:09:27 np0005603609 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 31 01:09:27 np0005603609 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 31 01:09:28 np0005603609 systemd[1]: Finished Rebuild Hardware Database.
Jan 31 01:09:28 np0005603609 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 01:09:28 np0005603609 systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 01:09:28 np0005603609 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 01:09:28 np0005603609 systemd[1]: Starting Load Kernel Module configfs...
Jan 31 01:09:28 np0005603609 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 01:09:28 np0005603609 systemd[1]: Finished Load Kernel Module configfs.
Jan 31 01:09:28 np0005603609 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 31 01:09:28 np0005603609 systemd-udevd[735]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 01:09:28 np0005603609 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 31 01:09:28 np0005603609 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 31 01:09:28 np0005603609 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 31 01:09:28 np0005603609 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 31 01:09:28 np0005603609 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 31 01:09:28 np0005603609 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 31 01:09:28 np0005603609 kernel: Console: switching to colour dummy device 80x25
Jan 31 01:09:28 np0005603609 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 31 01:09:28 np0005603609 kernel: [drm] features: -context_init
Jan 31 01:09:28 np0005603609 kernel: [drm] number of scanouts: 1
Jan 31 01:09:28 np0005603609 kernel: [drm] number of cap sets: 0
Jan 31 01:09:28 np0005603609 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 31 01:09:28 np0005603609 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 31 01:09:28 np0005603609 kernel: Console: switching to colour frame buffer device 128x48
Jan 31 01:09:28 np0005603609 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 31 01:09:28 np0005603609 kernel: kvm_amd: TSC scaling supported
Jan 31 01:09:28 np0005603609 kernel: kvm_amd: Nested Virtualization enabled
Jan 31 01:09:28 np0005603609 kernel: kvm_amd: Nested Paging enabled
Jan 31 01:09:28 np0005603609 kernel: kvm_amd: LBR virtualization supported
Jan 31 01:09:28 np0005603609 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 31 01:09:28 np0005603609 systemd[1]: Starting Update is Completed...
Jan 31 01:09:28 np0005603609 systemd[1]: Finished Update is Completed.
Jan 31 01:09:28 np0005603609 systemd[1]: Reached target System Initialization.
Jan 31 01:09:28 np0005603609 systemd[1]: Started dnf makecache --timer.
Jan 31 01:09:28 np0005603609 systemd[1]: Started Daily rotation of log files.
Jan 31 01:09:28 np0005603609 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 31 01:09:28 np0005603609 systemd[1]: Reached target Timer Units.
Jan 31 01:09:28 np0005603609 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 01:09:28 np0005603609 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 31 01:09:28 np0005603609 systemd[1]: Reached target Socket Units.
Jan 31 01:09:28 np0005603609 systemd[1]: Starting D-Bus System Message Bus...
Jan 31 01:09:28 np0005603609 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 01:09:28 np0005603609 systemd[1]: Started D-Bus System Message Bus.
Jan 31 01:09:28 np0005603609 systemd[1]: Reached target Basic System.
Jan 31 01:09:28 np0005603609 dbus-broker-lau[800]: Ready
Jan 31 01:09:28 np0005603609 systemd[1]: Starting NTP client/server...
Jan 31 01:09:28 np0005603609 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 31 01:09:28 np0005603609 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 31 01:09:28 np0005603609 systemd[1]: Starting IPv4 firewall with iptables...
Jan 31 01:09:28 np0005603609 systemd[1]: Started irqbalance daemon.
Jan 31 01:09:28 np0005603609 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 31 01:09:28 np0005603609 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 01:09:28 np0005603609 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 01:09:28 np0005603609 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 01:09:28 np0005603609 systemd[1]: Reached target sshd-keygen.target.
Jan 31 01:09:28 np0005603609 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 31 01:09:28 np0005603609 systemd[1]: Reached target User and Group Name Lookups.
Jan 31 01:09:28 np0005603609 systemd[1]: Starting User Login Management...
Jan 31 01:09:28 np0005603609 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 31 01:09:28 np0005603609 chronyd[831]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 01:09:28 np0005603609 systemd-logind[823]: New seat seat0.
Jan 31 01:09:28 np0005603609 systemd-logind[823]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 01:09:28 np0005603609 systemd-logind[823]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 01:09:28 np0005603609 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 31 01:09:28 np0005603609 systemd[1]: Started User Login Management.
Jan 31 01:09:28 np0005603609 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 31 01:09:28 np0005603609 chronyd[831]: Loaded 0 symmetric keys
Jan 31 01:09:28 np0005603609 chronyd[831]: Using right/UTC timezone to obtain leap second data
Jan 31 01:09:28 np0005603609 chronyd[831]: Loaded seccomp filter (level 2)
Jan 31 01:09:28 np0005603609 systemd[1]: Started NTP client/server.
Jan 31 01:09:29 np0005603609 iptables.init[817]: iptables: Applying firewall rules: [  OK  ]
Jan 31 01:09:29 np0005603609 systemd[1]: Finished IPv4 firewall with iptables.
Jan 31 01:09:30 np0005603609 cloud-init[840]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sat, 31 Jan 2026 06:09:30 +0000. Up 9.28 seconds.
Jan 31 01:09:30 np0005603609 systemd[1]: run-cloud\x2dinit-tmp-tmpvsd62n0f.mount: Deactivated successfully.
Jan 31 01:09:30 np0005603609 systemd[1]: Starting Hostname Service...
Jan 31 01:09:30 np0005603609 systemd[1]: Started Hostname Service.
Jan 31 01:09:30 np0005603609 systemd-hostnamed[854]: Hostname set to <np0005603609.novalocal> (static)
Jan 31 01:09:31 np0005603609 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 31 01:09:31 np0005603609 systemd[1]: Reached target Preparation for Network.
Jan 31 01:09:31 np0005603609 systemd[1]: Starting Network Manager...
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.2914] NetworkManager (version 1.54.3-2.el9) is starting... (boot:ce650b02-d8ad-4f14-899a-0e3fc9a18049)
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.2919] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3374] manager[0x55bff0f0e000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3438] hostname: hostname: using hostnamed
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3439] hostname: static hostname changed from (none) to "np0005603609.novalocal"
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3445] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3607] manager[0x55bff0f0e000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3607] manager[0x55bff0f0e000]: rfkill: WWAN hardware radio set enabled
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3746] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3747] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3747] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3747] manager: Networking is enabled by state file
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3749] settings: Loaded settings plugin: keyfile (internal)
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3792] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3816] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3882] dhcp: init: Using DHCP client 'internal'
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3909] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3923] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 01:09:31 np0005603609 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3949] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3961] device (lo): Activation: starting connection 'lo' (4702302b-2207-4077-8516-154f1b85f390)
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3969] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.3972] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:09:31 np0005603609 systemd[1]: Started Network Manager.
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4004] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4009] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 01:09:31 np0005603609 systemd[1]: Reached target Network.
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4022] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4024] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4026] device (eth0): carrier: link connected
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4028] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4035] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4041] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 01:09:31 np0005603609 systemd[1]: Starting Network Manager Wait Online...
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4125] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4128] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4130] manager: NetworkManager state is now CONNECTING
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4131] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4138] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4141] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:09:31 np0005603609 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 31 01:09:31 np0005603609 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4214] dhcp4 (eth0): state changed new lease, address=38.129.56.60
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4224] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4248] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4256] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4258] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4263] device (lo): Activation: successful, device activated.
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4305] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4307] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4311] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4314] device (eth0): Activation: successful, device activated.
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4321] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 01:09:31 np0005603609 NetworkManager[858]: <info>  [1769839771.4325] manager: startup complete
Jan 31 01:09:31 np0005603609 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 31 01:09:31 np0005603609 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 31 01:09:31 np0005603609 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 01:09:31 np0005603609 systemd[1]: Reached target NFS client services.
Jan 31 01:09:31 np0005603609 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 01:09:31 np0005603609 systemd[1]: Reached target Remote File Systems.
Jan 31 01:09:31 np0005603609 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 01:09:31 np0005603609 systemd[1]: Finished Network Manager Wait Online.
Jan 31 01:09:31 np0005603609 systemd[1]: Starting Cloud-init: Network Stage...
Jan 31 01:09:32 np0005603609 cloud-init[924]: Cloud-init v. 24.4-8.el9 running 'init' at Sat, 31 Jan 2026 06:09:32 +0000. Up 11.17 seconds.
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: |  eth0  | True |         38.129.56.60         | 255.255.255.0 | global | fa:16:3e:0c:8a:55 |
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fe0c:8a55/64 |       .       |  link  | fa:16:3e:0c:8a:55 |
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 31 01:09:32 np0005603609 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 01:09:35 np0005603609 chronyd[831]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Jan 31 01:09:35 np0005603609 chronyd[831]: System clock wrong by -1.170438 seconds
Jan 31 01:09:35 np0005603609 chronyd[831]: System clock was stepped by -1.170438 seconds
Jan 31 01:09:35 np0005603609 chronyd[831]: System clock TAI offset set to 37 seconds
Jan 31 01:09:37 np0005603609 chronyd[831]: Selected source 209.227.173.244 (2.centos.pool.ntp.org)
Jan 31 01:09:37 np0005603609 irqbalance[818]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 31 01:09:37 np0005603609 irqbalance[818]: IRQ 25 affinity is now unmanaged
Jan 31 01:09:37 np0005603609 irqbalance[818]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 31 01:09:37 np0005603609 irqbalance[818]: IRQ 31 affinity is now unmanaged
Jan 31 01:09:37 np0005603609 irqbalance[818]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 31 01:09:37 np0005603609 irqbalance[818]: IRQ 28 affinity is now unmanaged
Jan 31 01:09:37 np0005603609 irqbalance[818]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 31 01:09:37 np0005603609 irqbalance[818]: IRQ 32 affinity is now unmanaged
Jan 31 01:09:37 np0005603609 irqbalance[818]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 31 01:09:37 np0005603609 irqbalance[818]: IRQ 30 affinity is now unmanaged
Jan 31 01:09:37 np0005603609 irqbalance[818]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 31 01:09:37 np0005603609 irqbalance[818]: IRQ 29 affinity is now unmanaged
Jan 31 01:09:38 np0005603609 cloud-init[924]: Generating public/private rsa key pair.
Jan 31 01:09:38 np0005603609 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 31 01:09:38 np0005603609 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 31 01:09:38 np0005603609 cloud-init[924]: The key fingerprint is:
Jan 31 01:09:38 np0005603609 cloud-init[924]: SHA256:KyyG9efxX215Tkm7hyb1aGXRCyoz8BgePnCCCvh+nDA root@np0005603609.novalocal
Jan 31 01:09:38 np0005603609 cloud-init[924]: The key's randomart image is:
Jan 31 01:09:38 np0005603609 cloud-init[924]: +---[RSA 3072]----+
Jan 31 01:09:38 np0005603609 cloud-init[924]: |                 |
Jan 31 01:09:38 np0005603609 cloud-init[924]: |                 |
Jan 31 01:09:38 np0005603609 cloud-init[924]: |.   .           .|
Jan 31 01:09:38 np0005603609 cloud-init[924]: |o  . o =     . ..|
Jan 31 01:09:38 np0005603609 cloud-init[924]: |... . * S   . ..o|
Jan 31 01:09:38 np0005603609 cloud-init[924]: | .Eo o = * .  oo*|
Jan 31 01:09:38 np0005603609 cloud-init[924]: | ..+o.+ = +  ..X=|
Jan 31 01:09:38 np0005603609 cloud-init[924]: |  ..+. + o  ..=+=|
Jan 31 01:09:38 np0005603609 cloud-init[924]: |   .    . ...+ .o|
Jan 31 01:09:38 np0005603609 cloud-init[924]: +----[SHA256]-----+
Jan 31 01:09:38 np0005603609 cloud-init[924]: Generating public/private ecdsa key pair.
Jan 31 01:09:38 np0005603609 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 31 01:09:38 np0005603609 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 31 01:09:38 np0005603609 cloud-init[924]: The key fingerprint is:
Jan 31 01:09:38 np0005603609 cloud-init[924]: SHA256:LN+AJBnuhYrWUcZCQpawcS52AT1s2Ud5TdrmLDrLqlc root@np0005603609.novalocal
Jan 31 01:09:38 np0005603609 cloud-init[924]: The key's randomart image is:
Jan 31 01:09:38 np0005603609 cloud-init[924]: +---[ECDSA 256]---+
Jan 31 01:09:38 np0005603609 cloud-init[924]: |==Bo=o... o.     |
Jan 31 01:09:38 np0005603609 cloud-init[924]: |.*.O+* o .o.     |
Jan 31 01:09:38 np0005603609 cloud-init[924]: |o.ooB + .. o     |
Jan 31 01:09:38 np0005603609 cloud-init[924]: |.ooo.+ o  +      |
Jan 31 01:09:38 np0005603609 cloud-init[924]: |..... o S. o     |
Jan 31 01:09:38 np0005603609 cloud-init[924]: |.      E.o.      |
Jan 31 01:09:38 np0005603609 cloud-init[924]: |      .o. .      |
Jan 31 01:09:38 np0005603609 cloud-init[924]: |     .. o        |
Jan 31 01:09:38 np0005603609 cloud-init[924]: |   .o..o         |
Jan 31 01:09:38 np0005603609 cloud-init[924]: +----[SHA256]-----+
Jan 31 01:09:38 np0005603609 cloud-init[924]: Generating public/private ed25519 key pair.
Jan 31 01:09:38 np0005603609 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 31 01:09:38 np0005603609 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 31 01:09:38 np0005603609 cloud-init[924]: The key fingerprint is:
Jan 31 01:09:38 np0005603609 cloud-init[924]: SHA256:Sl6BmKtvX9MljM0hxkJkc7d41f3BKsRlj9UL1LUo/a8 root@np0005603609.novalocal
Jan 31 01:09:38 np0005603609 cloud-init[924]: The key's randomart image is:
Jan 31 01:09:38 np0005603609 cloud-init[924]: +--[ED25519 256]--+
Jan 31 01:09:38 np0005603609 cloud-init[924]: |     .= . o +=oo+|
Jan 31 01:09:38 np0005603609 cloud-init[924]: |     = = o =o.*++|
Jan 31 01:09:38 np0005603609 cloud-init[924]: |    o o * =. +oo+|
Jan 31 01:09:38 np0005603609 cloud-init[924]: |     . o O o.....|
Jan 31 01:09:38 np0005603609 cloud-init[924]: |    . . S = o  . |
Jan 31 01:09:38 np0005603609 cloud-init[924]: |   . o o . o    .|
Jan 31 01:09:38 np0005603609 cloud-init[924]: |  .   o o .     .|
Jan 31 01:09:38 np0005603609 cloud-init[924]: |   ..  . .     . |
Jan 31 01:09:38 np0005603609 cloud-init[924]: |   ....       E  |
Jan 31 01:09:38 np0005603609 cloud-init[924]: +----[SHA256]-----+
Jan 31 01:09:38 np0005603609 systemd[1]: Finished Cloud-init: Network Stage.
Jan 31 01:09:38 np0005603609 systemd[1]: Reached target Cloud-config availability.
Jan 31 01:09:38 np0005603609 systemd[1]: Reached target Network is Online.
Jan 31 01:09:38 np0005603609 systemd[1]: Starting Cloud-init: Config Stage...
Jan 31 01:09:38 np0005603609 systemd[1]: Starting Crash recovery kernel arming...
Jan 31 01:09:38 np0005603609 systemd[1]: Starting Notify NFS peers of a restart...
Jan 31 01:09:38 np0005603609 systemd[1]: Starting System Logging Service...
Jan 31 01:09:38 np0005603609 systemd[1]: Starting OpenSSH server daemon...
Jan 31 01:09:38 np0005603609 sm-notify[1006]: Version 2.5.4 starting
Jan 31 01:09:38 np0005603609 systemd[1]: Starting Permit User Sessions...
Jan 31 01:09:38 np0005603609 systemd[1]: Started Notify NFS peers of a restart.
Jan 31 01:09:38 np0005603609 systemd[1]: Finished Permit User Sessions.
Jan 31 01:09:38 np0005603609 systemd[1]: Started Command Scheduler.
Jan 31 01:09:38 np0005603609 rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Jan 31 01:09:38 np0005603609 rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 31 01:09:38 np0005603609 systemd[1]: Started Getty on tty1.
Jan 31 01:09:38 np0005603609 systemd[1]: Started Serial Getty on ttyS0.
Jan 31 01:09:38 np0005603609 systemd[1]: Reached target Login Prompts.
Jan 31 01:09:38 np0005603609 systemd[1]: Started OpenSSH server daemon.
Jan 31 01:09:38 np0005603609 systemd[1]: Started System Logging Service.
Jan 31 01:09:38 np0005603609 systemd[1]: Reached target Multi-User System.
Jan 31 01:09:38 np0005603609 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 31 01:09:38 np0005603609 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 31 01:09:38 np0005603609 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 31 01:09:38 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 01:09:38 np0005603609 cloud-init[1080]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sat, 31 Jan 2026 06:09:38 +0000. Up 18.92 seconds.
Jan 31 01:09:38 np0005603609 systemd[1]: Finished Cloud-init: Config Stage.
Jan 31 01:09:38 np0005603609 kdumpctl[1022]: kdump: No kdump initial ramdisk found.
Jan 31 01:09:38 np0005603609 kdumpctl[1022]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Jan 31 01:09:39 np0005603609 systemd[1]: Starting Cloud-init: Final Stage...
Jan 31 01:09:39 np0005603609 cloud-init[1218]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sat, 31 Jan 2026 06:09:39 +0000. Up 19.37 seconds.
Jan 31 01:09:39 np0005603609 cloud-init[1265]: #############################################################
Jan 31 01:09:39 np0005603609 cloud-init[1268]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 31 01:09:39 np0005603609 cloud-init[1275]: 256 SHA256:LN+AJBnuhYrWUcZCQpawcS52AT1s2Ud5TdrmLDrLqlc root@np0005603609.novalocal (ECDSA)
Jan 31 01:09:39 np0005603609 cloud-init[1278]: 256 SHA256:Sl6BmKtvX9MljM0hxkJkc7d41f3BKsRlj9UL1LUo/a8 root@np0005603609.novalocal (ED25519)
Jan 31 01:09:39 np0005603609 cloud-init[1282]: 3072 SHA256:KyyG9efxX215Tkm7hyb1aGXRCyoz8BgePnCCCvh+nDA root@np0005603609.novalocal (RSA)
Jan 31 01:09:39 np0005603609 cloud-init[1283]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 31 01:09:39 np0005603609 cloud-init[1284]: #############################################################
Jan 31 01:09:39 np0005603609 cloud-init[1218]: Cloud-init v. 24.4-8.el9 finished at Sat, 31 Jan 2026 06:09:39 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 19.60 seconds
Jan 31 01:09:39 np0005603609 systemd[1]: Finished Cloud-init: Final Stage.
Jan 31 01:09:39 np0005603609 systemd[1]: Reached target Cloud-init target.
Jan 31 01:09:39 np0005603609 dracut[1303]: dracut-057-102.git20250818.el9
Jan 31 01:09:39 np0005603609 dracut[1305]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 01:09:40 np0005603609 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 01:09:40 np0005603609 dracut[1305]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: memstrack is not available
Jan 31 01:09:41 np0005603609 dracut[1305]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 01:09:41 np0005603609 dracut[1305]: memstrack is not available
Jan 31 01:09:41 np0005603609 dracut[1305]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 01:09:41 np0005603609 dracut[1305]: *** Including module: systemd ***
Jan 31 01:09:41 np0005603609 dracut[1305]: *** Including module: fips ***
Jan 31 01:09:42 np0005603609 dracut[1305]: *** Including module: systemd-initrd ***
Jan 31 01:09:42 np0005603609 dracut[1305]: *** Including module: i18n ***
Jan 31 01:09:42 np0005603609 dracut[1305]: *** Including module: drm ***
Jan 31 01:09:42 np0005603609 dracut[1305]: *** Including module: prefixdevname ***
Jan 31 01:09:42 np0005603609 dracut[1305]: *** Including module: kernel-modules ***
Jan 31 01:09:43 np0005603609 kernel: block vda: the capability attribute has been deprecated.
Jan 31 01:09:43 np0005603609 dracut[1305]: *** Including module: kernel-modules-extra ***
Jan 31 01:09:43 np0005603609 dracut[1305]: *** Including module: qemu ***
Jan 31 01:09:43 np0005603609 dracut[1305]: *** Including module: fstab-sys ***
Jan 31 01:09:43 np0005603609 dracut[1305]: *** Including module: rootfs-block ***
Jan 31 01:09:43 np0005603609 dracut[1305]: *** Including module: terminfo ***
Jan 31 01:09:43 np0005603609 dracut[1305]: *** Including module: udev-rules ***
Jan 31 01:09:44 np0005603609 dracut[1305]: Skipping udev rule: 91-permissions.rules
Jan 31 01:09:44 np0005603609 dracut[1305]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 31 01:09:44 np0005603609 dracut[1305]: *** Including module: virtiofs ***
Jan 31 01:09:44 np0005603609 dracut[1305]: *** Including module: dracut-systemd ***
Jan 31 01:09:44 np0005603609 dracut[1305]: *** Including module: usrmount ***
Jan 31 01:09:44 np0005603609 dracut[1305]: *** Including module: base ***
Jan 31 01:09:44 np0005603609 dracut[1305]: *** Including module: fs-lib ***
Jan 31 01:09:44 np0005603609 dracut[1305]: *** Including module: kdumpbase ***
Jan 31 01:09:44 np0005603609 dracut[1305]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 31 01:09:44 np0005603609 dracut[1305]:  microcode_ctl module: mangling fw_dir
Jan 31 01:09:44 np0005603609 dracut[1305]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 31 01:09:44 np0005603609 dracut[1305]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 31 01:09:44 np0005603609 dracut[1305]:    microcode_ctl: configuration "intel" is ignored
Jan 31 01:09:44 np0005603609 dracut[1305]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 31 01:09:45 np0005603609 dracut[1305]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 31 01:09:45 np0005603609 dracut[1305]: *** Including module: openssl ***
Jan 31 01:09:45 np0005603609 dracut[1305]: *** Including module: shutdown ***
Jan 31 01:09:45 np0005603609 dracut[1305]: *** Including module: squash ***
Jan 31 01:09:45 np0005603609 dracut[1305]: *** Including modules done ***
Jan 31 01:09:45 np0005603609 dracut[1305]: *** Installing kernel module dependencies ***
Jan 31 01:09:46 np0005603609 dracut[1305]: *** Installing kernel module dependencies done ***
Jan 31 01:09:46 np0005603609 dracut[1305]: *** Resolving executable dependencies ***
Jan 31 01:09:48 np0005603609 dracut[1305]: *** Resolving executable dependencies done ***
Jan 31 01:09:48 np0005603609 dracut[1305]: *** Generating early-microcode cpio image ***
Jan 31 01:09:48 np0005603609 dracut[1305]: *** Store current command line parameters ***
Jan 31 01:09:48 np0005603609 dracut[1305]: Stored kernel commandline:
Jan 31 01:09:48 np0005603609 dracut[1305]: No dracut internal kernel commandline stored in the initramfs
Jan 31 01:09:48 np0005603609 dracut[1305]: *** Install squash loader ***
Jan 31 01:09:49 np0005603609 dracut[1305]: *** Squashing the files inside the initramfs ***
Jan 31 01:09:50 np0005603609 dracut[1305]: *** Squashing the files inside the initramfs done ***
Jan 31 01:09:50 np0005603609 dracut[1305]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Jan 31 01:09:50 np0005603609 dracut[1305]: *** Hardlinking files ***
Jan 31 01:09:50 np0005603609 dracut[1305]: *** Hardlinking files done ***
Jan 31 01:09:52 np0005603609 dracut[1305]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Jan 31 01:09:52 np0005603609 kdumpctl[1022]: kdump: kexec: loaded kdump kernel
Jan 31 01:09:52 np0005603609 kdumpctl[1022]: kdump: Starting kdump: [OK]
Jan 31 01:09:52 np0005603609 systemd[1]: Finished Crash recovery kernel arming.
Jan 31 01:09:52 np0005603609 systemd[1]: Startup finished in 1.807s (kernel) + 3.021s (initrd) + 28.142s (userspace) = 32.971s.
Jan 31 01:09:53 np0005603609 systemd[1]: Created slice User Slice of UID 1000.
Jan 31 01:09:53 np0005603609 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 31 01:09:53 np0005603609 systemd-logind[823]: New session 1 of user zuul.
Jan 31 01:09:53 np0005603609 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 31 01:09:53 np0005603609 systemd[1]: Starting User Manager for UID 1000...
Jan 31 01:09:53 np0005603609 systemd[4309]: Queued start job for default target Main User Target.
Jan 31 01:09:53 np0005603609 systemd[4309]: Created slice User Application Slice.
Jan 31 01:09:53 np0005603609 systemd[4309]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 01:09:53 np0005603609 systemd[4309]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 01:09:53 np0005603609 systemd[4309]: Reached target Paths.
Jan 31 01:09:53 np0005603609 systemd[4309]: Reached target Timers.
Jan 31 01:09:53 np0005603609 systemd[4309]: Starting D-Bus User Message Bus Socket...
Jan 31 01:09:53 np0005603609 systemd[4309]: Starting Create User's Volatile Files and Directories...
Jan 31 01:09:53 np0005603609 systemd[4309]: Finished Create User's Volatile Files and Directories.
Jan 31 01:09:53 np0005603609 systemd[4309]: Listening on D-Bus User Message Bus Socket.
Jan 31 01:09:53 np0005603609 systemd[4309]: Reached target Sockets.
Jan 31 01:09:53 np0005603609 systemd[4309]: Reached target Basic System.
Jan 31 01:09:53 np0005603609 systemd[4309]: Reached target Main User Target.
Jan 31 01:09:53 np0005603609 systemd[4309]: Startup finished in 164ms.
Jan 31 01:09:53 np0005603609 systemd[1]: Started User Manager for UID 1000.
Jan 31 01:09:53 np0005603609 systemd[1]: Started Session 1 of User zuul.
Jan 31 01:09:53 np0005603609 python3[4391]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:10:00 np0005603609 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 01:10:00 np0005603609 python3[4421]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:10:14 np0005603609 python3[4479]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:10:16 np0005603609 python3[4519]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 31 01:10:18 np0005603609 python3[4545]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDchShM99lH6I0ER4M5bdTKJqBTEwI+oB9SwUKCFnfSFe+YXdwGln/ZQz1oTQoc7uHsosGAjxkBLnzurBq9QuoyCLJfHlIRMt33udq87cbS+4TPUzX86YzbvCdjL2JcQ7HQdT/t4eiTsq/T6rUG6NN8sZSab/kVk1sT3I1DEnUGPGWr5xAUZ/TMosNE9wHhXQsHXN13G6YeYDfG/h+84mm6kTISBC+8M8Ne+jGn4udnhGcj24MjbKqS4l405WKsvB7IHwjnkEFFSQ0MXxcPMC+W1PqE0JQeoE6StfGL1kcrrAyVCz+t2vX8dRWY/nDCcOyEiXPb/tEW8ddykqk/ZgDlBYlNimaPvgLPoGTr6XZHfSGRjrYhiPwQI9xa+AHOZOXJsMdEBoZk1VMty2FQTwJgfV/t7gi5q5lagFfQ44wTy8HBcOiaQ08p2URDYYWoqWLaBV2TDvVmjvuCHKZJiWKb9vRE0G1BBNIVjIvkTPeu+7RayIoYTevDiRJsPJ0pJi0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:19 np0005603609 python3[4569]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:10:19 np0005603609 python3[4668]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:10:19 np0005603609 python3[4739]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839819.2216733-252-34045430794663/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=ef956508a7f94c3dbffb4bb08b3ee84f_id_rsa follow=False checksum=38004d5aef71e7771fda9a333b3cee8e58c249ca backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:10:20 np0005603609 python3[4862]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:10:20 np0005603609 python3[4933]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839820.2402697-307-264538983940316/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=ef956508a7f94c3dbffb4bb08b3ee84f_id_rsa.pub follow=False checksum=aabe17515195de3c7c885977f74a37a021ba1e01 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:10:22 np0005603609 python3[4981]: ansible-ping Invoked with data=pong
Jan 31 01:10:23 np0005603609 python3[5005]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:10:25 np0005603609 python3[5063]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 31 01:10:27 np0005603609 python3[5095]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:10:27 np0005603609 python3[5119]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:10:27 np0005603609 python3[5143]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:10:27 np0005603609 python3[5167]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:10:28 np0005603609 python3[5191]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:10:28 np0005603609 python3[5215]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:10:30 np0005603609 python3[5241]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:10:30 np0005603609 python3[5319]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:10:31 np0005603609 python3[5392]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839830.2541537-32-13405915430/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:10:31 np0005603609 python3[5440]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:32 np0005603609 python3[5464]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:32 np0005603609 python3[5488]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:32 np0005603609 python3[5512]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:32 np0005603609 python3[5536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:33 np0005603609 python3[5560]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:33 np0005603609 python3[5584]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:33 np0005603609 python3[5608]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:34 np0005603609 python3[5632]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:34 np0005603609 python3[5656]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:34 np0005603609 python3[5680]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:35 np0005603609 python3[5704]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:35 np0005603609 python3[5728]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:35 np0005603609 python3[5752]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:35 np0005603609 python3[5776]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:36 np0005603609 python3[5800]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:36 np0005603609 python3[5824]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:36 np0005603609 python3[5848]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:37 np0005603609 python3[5872]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:38 np0005603609 python3[5896]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:38 np0005603609 python3[5920]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:39 np0005603609 python3[5944]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:39 np0005603609 python3[5968]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:39 np0005603609 python3[5992]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:39 np0005603609 python3[6016]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:40 np0005603609 python3[6040]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:10:41 np0005603609 python3[6066]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 01:10:41 np0005603609 systemd[1]: Starting Time & Date Service...
Jan 31 01:10:41 np0005603609 systemd[1]: Started Time & Date Service.
Jan 31 01:10:41 np0005603609 systemd-timedated[6068]: Changed time zone to 'UTC' (UTC).
Jan 31 01:10:41 np0005603609 python3[6097]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:10:42 np0005603609 python3[6173]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:10:42 np0005603609 python3[6244]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769839842.210176-252-275409538132112/source _original_basename=tmpbpz2_g05 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:10:43 np0005603609 python3[6344]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:10:43 np0005603609 python3[6415]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769839843.1103375-302-3506561948947/source _original_basename=tmpze7p19k6 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:10:45 np0005603609 python3[6517]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:10:45 np0005603609 python3[6590]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769839845.1929255-382-66585125099298/source _original_basename=tmpo4lngmmw follow=False checksum=5f07403bd9a90067fdaa6ee64964c23399e3e919 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:10:46 np0005603609 python3[6638]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:10:46 np0005603609 python3[6664]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:10:47 np0005603609 python3[6744]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:10:47 np0005603609 python3[6817]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839847.2677548-453-193665534495368/source _original_basename=tmp37gwaa95 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:10:48 np0005603609 python3[6868]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-a598-46a5-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:10:49 np0005603609 python3[6895]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-a598-46a5-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 31 01:10:50 np0005603609 python3[6924]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:11:11 np0005603609 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 01:11:12 np0005603609 python3[6952]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:12:04 np0005603609 systemd[4309]: Starting Mark boot as successful...
Jan 31 01:12:04 np0005603609 systemd[4309]: Finished Mark boot as successful.
Jan 31 01:12:12 np0005603609 systemd-logind[823]: Session 1 logged out. Waiting for processes to exit.
Jan 31 01:12:21 np0005603609 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 01:12:21 np0005603609 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 31 01:12:21 np0005603609 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 31 01:12:21 np0005603609 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 31 01:12:21 np0005603609 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 31 01:12:21 np0005603609 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 31 01:12:21 np0005603609 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 31 01:12:21 np0005603609 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 31 01:12:21 np0005603609 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 31 01:12:21 np0005603609 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 31 01:12:21 np0005603609 NetworkManager[858]: <info>  [1769839941.2860] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 01:12:21 np0005603609 systemd-udevd[6954]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 01:12:21 np0005603609 NetworkManager[858]: <info>  [1769839941.3035] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:12:21 np0005603609 NetworkManager[858]: <info>  [1769839941.3055] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 31 01:12:21 np0005603609 NetworkManager[858]: <info>  [1769839941.3057] device (eth1): carrier: link connected
Jan 31 01:12:21 np0005603609 NetworkManager[858]: <info>  [1769839941.3059] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 01:12:21 np0005603609 NetworkManager[858]: <info>  [1769839941.3063] policy: auto-activating connection 'Wired connection 1' (2a2c2ab2-5ce1-3781-8b22-8c858885c334)
Jan 31 01:12:21 np0005603609 NetworkManager[858]: <info>  [1769839941.3066] device (eth1): Activation: starting connection 'Wired connection 1' (2a2c2ab2-5ce1-3781-8b22-8c858885c334)
Jan 31 01:12:21 np0005603609 NetworkManager[858]: <info>  [1769839941.3067] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:12:21 np0005603609 NetworkManager[858]: <info>  [1769839941.3068] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:12:21 np0005603609 NetworkManager[858]: <info>  [1769839941.3071] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:12:21 np0005603609 NetworkManager[858]: <info>  [1769839941.3074] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:12:22 np0005603609 systemd-logind[823]: New session 3 of user zuul.
Jan 31 01:12:22 np0005603609 systemd[1]: Started Session 3 of User zuul.
Jan 31 01:12:22 np0005603609 python3[6985]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-8b99-369b-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:12:32 np0005603609 python3[7065]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:12:32 np0005603609 python3[7138]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839952.2800863-155-97083655010946/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=0804628a1240a3e73ab5eb3b58392bd4ebbf56dd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:12:33 np0005603609 python3[7188]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 01:12:33 np0005603609 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 01:12:33 np0005603609 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 01:12:33 np0005603609 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 01:12:33 np0005603609 systemd[1]: Stopping Network Manager...
Jan 31 01:12:33 np0005603609 NetworkManager[858]: <info>  [1769839953.4333] caught SIGTERM, shutting down normally.
Jan 31 01:12:33 np0005603609 NetworkManager[858]: <info>  [1769839953.4344] dhcp4 (eth0): canceled DHCP transaction
Jan 31 01:12:33 np0005603609 NetworkManager[858]: <info>  [1769839953.4344] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:12:33 np0005603609 NetworkManager[858]: <info>  [1769839953.4345] dhcp4 (eth0): state changed no lease
Jan 31 01:12:33 np0005603609 NetworkManager[858]: <info>  [1769839953.4348] manager: NetworkManager state is now CONNECTING
Jan 31 01:12:33 np0005603609 NetworkManager[858]: <info>  [1769839953.4460] dhcp4 (eth1): canceled DHCP transaction
Jan 31 01:12:33 np0005603609 NetworkManager[858]: <info>  [1769839953.4460] dhcp4 (eth1): state changed no lease
Jan 31 01:12:33 np0005603609 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 01:12:33 np0005603609 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 01:12:33 np0005603609 NetworkManager[858]: <info>  [1769839953.6270] exiting (success)
Jan 31 01:12:33 np0005603609 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 01:12:33 np0005603609 systemd[1]: Stopped Network Manager.
Jan 31 01:12:33 np0005603609 systemd[1]: NetworkManager.service: Consumed 1.661s CPU time, 10.0M memory peak.
Jan 31 01:12:33 np0005603609 systemd[1]: Starting Network Manager...
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.6878] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:ce650b02-d8ad-4f14-899a-0e3fc9a18049)
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.6879] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.6915] manager[0x557defb1c000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 01:12:33 np0005603609 systemd[1]: Starting Hostname Service...
Jan 31 01:12:33 np0005603609 systemd[1]: Started Hostname Service.
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7448] hostname: hostname: using hostnamed
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7449] hostname: static hostname changed from (none) to "np0005603609.novalocal"
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7457] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7465] manager[0x557defb1c000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7466] manager[0x557defb1c000]: rfkill: WWAN hardware radio set enabled
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7513] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7514] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7515] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7516] manager: Networking is enabled by state file
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7523] settings: Loaded settings plugin: keyfile (internal)
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7530] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7601] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7621] dhcp: init: Using DHCP client 'internal'
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7626] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7640] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7653] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7669] device (lo): Activation: starting connection 'lo' (4702302b-2207-4077-8516-154f1b85f390)
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7680] device (eth0): carrier: link connected
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7685] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7696] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7696] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7709] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7722] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7730] device (eth1): carrier: link connected
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7736] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7746] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (2a2c2ab2-5ce1-3781-8b22-8c858885c334) (indicated)
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7747] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7758] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7771] device (eth1): Activation: starting connection 'Wired connection 1' (2a2c2ab2-5ce1-3781-8b22-8c858885c334)
Jan 31 01:12:33 np0005603609 systemd[1]: Started Network Manager.
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7784] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7793] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7797] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7801] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7805] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7810] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7818] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7824] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7831] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7843] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7853] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7866] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7871] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7895] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 01:12:33 np0005603609 systemd[1]: Starting Network Manager Wait Online...
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7903] dhcp4 (eth0): state changed new lease, address=38.129.56.60
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7911] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7921] device (lo): Activation: successful, device activated.
Jan 31 01:12:33 np0005603609 NetworkManager[7205]: <info>  [1769839953.7939] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 01:12:34 np0005603609 python3[7253]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-8b99-369b-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:12:34 np0005603609 NetworkManager[7205]: <info>  [1769839954.0863] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 01:12:34 np0005603609 NetworkManager[7205]: <info>  [1769839954.0888] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 01:12:34 np0005603609 NetworkManager[7205]: <info>  [1769839954.0890] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 01:12:34 np0005603609 NetworkManager[7205]: <info>  [1769839954.0893] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 01:12:34 np0005603609 NetworkManager[7205]: <info>  [1769839954.0895] device (eth0): Activation: successful, device activated.
Jan 31 01:12:34 np0005603609 NetworkManager[7205]: <info>  [1769839954.0901] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 01:12:44 np0005603609 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 01:12:50 np0005603609 chronyd[831]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Jan 31 01:13:03 np0005603609 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <info>  [1769839998.9065] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 01:13:18 np0005603609 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 01:13:18 np0005603609 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <info>  [1769839998.9396] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <info>  [1769839998.9399] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <info>  [1769839998.9405] device (eth1): Activation: successful, device activated.
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <info>  [1769839998.9411] manager: startup complete
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <info>  [1769839998.9412] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <warn>  [1769839998.9417] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <info>  [1769839998.9425] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 31 01:13:18 np0005603609 systemd[1]: Finished Network Manager Wait Online.
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <info>  [1769839998.9520] dhcp4 (eth1): canceled DHCP transaction
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <info>  [1769839998.9520] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <info>  [1769839998.9520] dhcp4 (eth1): state changed no lease
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <info>  [1769839998.9533] policy: auto-activating connection 'ci-private-network' (6e8a572d-ed1d-574b-8dec-718c9179119c)
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <info>  [1769839998.9537] device (eth1): Activation: starting connection 'ci-private-network' (6e8a572d-ed1d-574b-8dec-718c9179119c)
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <info>  [1769839998.9538] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <info>  [1769839998.9541] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <info>  [1769839998.9547] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:13:18 np0005603609 NetworkManager[7205]: <info>  [1769839998.9556] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:13:19 np0005603609 NetworkManager[7205]: <info>  [1769839999.0028] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:13:19 np0005603609 NetworkManager[7205]: <info>  [1769839999.0031] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:13:19 np0005603609 NetworkManager[7205]: <info>  [1769839999.0037] device (eth1): Activation: successful, device activated.
Jan 31 01:13:29 np0005603609 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 01:13:34 np0005603609 systemd[1]: session-3.scope: Deactivated successfully.
Jan 31 01:13:34 np0005603609 systemd[1]: session-3.scope: Consumed 1.358s CPU time.
Jan 31 01:13:34 np0005603609 systemd-logind[823]: Session 3 logged out. Waiting for processes to exit.
Jan 31 01:13:34 np0005603609 systemd-logind[823]: Removed session 3.
Jan 31 01:14:12 np0005603609 systemd-logind[823]: New session 4 of user zuul.
Jan 31 01:14:12 np0005603609 systemd[1]: Started Session 4 of User zuul.
Jan 31 01:14:12 np0005603609 python3[7381]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:14:13 np0005603609 python3[7454]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769840052.7106752-373-253948479233263/source _original_basename=tmpdu40zl3u follow=False checksum=9316f16cb173efcd9fb25ed4519cb4ede33986d5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:14:16 np0005603609 systemd[1]: session-4.scope: Deactivated successfully.
Jan 31 01:14:16 np0005603609 systemd-logind[823]: Session 4 logged out. Waiting for processes to exit.
Jan 31 01:14:16 np0005603609 systemd-logind[823]: Removed session 4.
Jan 31 01:15:04 np0005603609 systemd[4309]: Created slice User Background Tasks Slice.
Jan 31 01:15:04 np0005603609 systemd[4309]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 01:15:04 np0005603609 systemd[4309]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 01:25:04 np0005603609 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 31 01:25:04 np0005603609 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 31 01:25:04 np0005603609 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 31 01:25:04 np0005603609 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 31 01:27:24 np0005603609 systemd-logind[823]: New session 5 of user zuul.
Jan 31 01:27:24 np0005603609 systemd[1]: Started Session 5 of User zuul.
Jan 31 01:27:25 np0005603609 python3[7520]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-879a-7837-000000000cd6-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:27:25 np0005603609 python3[7548]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:27:25 np0005603609 python3[7575]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:27:26 np0005603609 python3[7601]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:27:26 np0005603609 python3[7627]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:27:26 np0005603609 python3[7653]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:27:27 np0005603609 python3[7731]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:27:27 np0005603609 python3[7804]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769840847.048459-394-1387017586312/source _original_basename=tmprtb82tl2 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:27:29 np0005603609 python3[7854]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 01:27:29 np0005603609 systemd[1]: Reloading.
Jan 31 01:27:29 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:27:30 np0005603609 python3[7911]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 31 01:27:31 np0005603609 python3[7937]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:27:31 np0005603609 python3[7967]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:27:31 np0005603609 python3[7995]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:27:31 np0005603609 python3[8023]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:27:32 np0005603609 python3[8050]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-879a-7837-000000000cdd-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:27:33 np0005603609 python3[8080]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 01:27:35 np0005603609 systemd[1]: session-5.scope: Deactivated successfully.
Jan 31 01:27:35 np0005603609 systemd[1]: session-5.scope: Consumed 3.795s CPU time.
Jan 31 01:27:35 np0005603609 systemd-logind[823]: Session 5 logged out. Waiting for processes to exit.
Jan 31 01:27:35 np0005603609 systemd-logind[823]: Removed session 5.
Jan 31 01:27:37 np0005603609 systemd-logind[823]: New session 6 of user zuul.
Jan 31 01:27:37 np0005603609 systemd[1]: Started Session 6 of User zuul.
Jan 31 01:27:38 np0005603609 python3[8114]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 01:27:48 np0005603609 setsebool[8156]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 31 01:27:48 np0005603609 setsebool[8156]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 31 01:28:07 np0005603609 kernel: SELinux:  Converting 385 SID table entries...
Jan 31 01:28:07 np0005603609 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 01:28:07 np0005603609 kernel: SELinux:  policy capability open_perms=1
Jan 31 01:28:07 np0005603609 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 01:28:07 np0005603609 kernel: SELinux:  policy capability always_check_network=0
Jan 31 01:28:07 np0005603609 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 01:28:07 np0005603609 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 01:28:07 np0005603609 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 01:28:20 np0005603609 kernel: SELinux:  Converting 388 SID table entries...
Jan 31 01:28:20 np0005603609 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 01:28:20 np0005603609 kernel: SELinux:  policy capability open_perms=1
Jan 31 01:28:20 np0005603609 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 01:28:20 np0005603609 kernel: SELinux:  policy capability always_check_network=0
Jan 31 01:28:20 np0005603609 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 01:28:20 np0005603609 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 01:28:20 np0005603609 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 01:28:27 np0005603609 irqbalance[818]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 31 01:28:27 np0005603609 irqbalance[818]: IRQ 27 affinity is now unmanaged
Jan 31 01:28:44 np0005603609 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 01:28:44 np0005603609 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 01:28:45 np0005603609 systemd[1]: Starting man-db-cache-update.service...
Jan 31 01:28:45 np0005603609 systemd[1]: Reloading.
Jan 31 01:28:45 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:28:45 np0005603609 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 01:28:52 np0005603609 python3[13022]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-ca6c-e883-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:28:53 np0005603609 kernel: evm: overlay not supported
Jan 31 01:28:53 np0005603609 systemd[4309]: Starting D-Bus User Message Bus...
Jan 31 01:28:53 np0005603609 dbus-broker-launch[13868]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 31 01:28:53 np0005603609 dbus-broker-launch[13868]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 31 01:28:53 np0005603609 systemd[4309]: Started D-Bus User Message Bus.
Jan 31 01:28:53 np0005603609 dbus-broker-lau[13868]: Ready
Jan 31 01:28:53 np0005603609 systemd[4309]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 01:28:53 np0005603609 systemd[4309]: Created slice Slice /user.
Jan 31 01:28:53 np0005603609 systemd[4309]: podman-13670.scope: unit configures an IP firewall, but not running as root.
Jan 31 01:28:53 np0005603609 systemd[4309]: (This warning is only shown for the first unit using IP firewalling.)
Jan 31 01:28:53 np0005603609 systemd[4309]: Started podman-13670.scope.
Jan 31 01:28:54 np0005603609 systemd[4309]: Started podman-pause-59c4b3ce.scope.
Jan 31 01:28:56 np0005603609 python3[14340]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.80:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.80:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:28:56 np0005603609 python3[14340]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 31 01:28:56 np0005603609 systemd[1]: session-6.scope: Deactivated successfully.
Jan 31 01:28:56 np0005603609 systemd[1]: session-6.scope: Consumed 41.905s CPU time.
Jan 31 01:28:56 np0005603609 systemd-logind[823]: Session 6 logged out. Waiting for processes to exit.
Jan 31 01:28:56 np0005603609 systemd-logind[823]: Removed session 6.
Jan 31 01:29:36 np0005603609 systemd-logind[823]: New session 7 of user zuul.
Jan 31 01:29:36 np0005603609 systemd[1]: Started Session 7 of User zuul.
Jan 31 01:29:36 np0005603609 python3[28683]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL0FMDZV22TCmyOw/BdKuDUy2cGp4BfiRzOwx/JLqMraff8LZ9BOqpkfzg5VkWRHTAeVSl/Uyb0smrpcknokWxg= zuul@np0005603607.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:29:38 np0005603609 python3[29058]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL0FMDZV22TCmyOw/BdKuDUy2cGp4BfiRzOwx/JLqMraff8LZ9BOqpkfzg5VkWRHTAeVSl/Uyb0smrpcknokWxg= zuul@np0005603607.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:29:39 np0005603609 python3[29386]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005603609.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 31 01:29:40 np0005603609 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 01:29:40 np0005603609 systemd[1]: Finished man-db-cache-update.service.
Jan 31 01:29:40 np0005603609 systemd[1]: man-db-cache-update.service: Consumed 42.626s CPU time.
Jan 31 01:29:40 np0005603609 systemd[1]: run-rba00e1de6e684099b405c74e05bb5bef.service: Deactivated successfully.
Jan 31 01:29:41 np0005603609 python3[29778]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL0FMDZV22TCmyOw/BdKuDUy2cGp4BfiRzOwx/JLqMraff8LZ9BOqpkfzg5VkWRHTAeVSl/Uyb0smrpcknokWxg= zuul@np0005603607.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:29:41 np0005603609 python3[29856]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:29:42 np0005603609 python3[29929]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769840981.569169-168-32931351885313/source _original_basename=tmp71tkxe34 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:29:43 np0005603609 python3[29979]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Jan 31 01:29:43 np0005603609 systemd[1]: Starting Hostname Service...
Jan 31 01:29:43 np0005603609 systemd[1]: Started Hostname Service.
Jan 31 01:29:43 np0005603609 systemd-hostnamed[29983]: Changed pretty hostname to 'compute-1'
Jan 31 01:29:43 np0005603609 systemd-hostnamed[29983]: Hostname set to <compute-1> (static)
Jan 31 01:29:43 np0005603609 NetworkManager[7205]: <info>  [1769840983.2506] hostname: static hostname changed from "np0005603609.novalocal" to "compute-1"
Jan 31 01:29:43 np0005603609 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 01:29:43 np0005603609 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 01:29:43 np0005603609 systemd[1]: session-7.scope: Deactivated successfully.
Jan 31 01:29:43 np0005603609 systemd[1]: session-7.scope: Consumed 2.092s CPU time.
Jan 31 01:29:43 np0005603609 systemd-logind[823]: Session 7 logged out. Waiting for processes to exit.
Jan 31 01:29:43 np0005603609 systemd-logind[823]: Removed session 7.
Jan 31 01:29:53 np0005603609 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 01:30:13 np0005603609 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 01:34:58 np0005603609 systemd-logind[823]: New session 8 of user zuul.
Jan 31 01:34:58 np0005603609 systemd[1]: Started Session 8 of User zuul.
Jan 31 01:34:58 np0005603609 python3[30086]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:35:00 np0005603609 python3[30202]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:35:01 np0005603609 python3[30275]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2879992-34135-211314812811327/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:35:01 np0005603609 python3[30301]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:35:01 np0005603609 python3[30374]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2879992-34135-211314812811327/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:35:01 np0005603609 python3[30400]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:35:02 np0005603609 python3[30473]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2879992-34135-211314812811327/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:35:02 np0005603609 python3[30499]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:35:02 np0005603609 python3[30572]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2879992-34135-211314812811327/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:35:02 np0005603609 python3[30598]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:35:03 np0005603609 python3[30671]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2879992-34135-211314812811327/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:35:03 np0005603609 python3[30697]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:35:03 np0005603609 python3[30770]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2879992-34135-211314812811327/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:35:03 np0005603609 python3[30796]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:35:04 np0005603609 python3[30869]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2879992-34135-211314812811327/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:35:15 np0005603609 python3[30917]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:40:14 np0005603609 systemd[1]: session-8.scope: Deactivated successfully.
Jan 31 01:40:14 np0005603609 systemd[1]: session-8.scope: Consumed 4.123s CPU time.
Jan 31 01:40:14 np0005603609 systemd-logind[823]: Session 8 logged out. Waiting for processes to exit.
Jan 31 01:40:14 np0005603609 systemd-logind[823]: Removed session 8.
Jan 31 01:56:05 np0005603609 systemd-logind[823]: New session 9 of user zuul.
Jan 31 01:56:05 np0005603609 systemd[1]: Started Session 9 of User zuul.
Jan 31 01:56:06 np0005603609 python3.9[31104]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:56:08 np0005603609 python3.9[31285]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:56:20 np0005603609 systemd[1]: session-9.scope: Deactivated successfully.
Jan 31 01:56:20 np0005603609 systemd[1]: session-9.scope: Consumed 7.854s CPU time.
Jan 31 01:56:20 np0005603609 systemd-logind[823]: Session 9 logged out. Waiting for processes to exit.
Jan 31 01:56:20 np0005603609 systemd-logind[823]: Removed session 9.
Jan 31 01:56:40 np0005603609 systemd-logind[823]: New session 10 of user zuul.
Jan 31 01:56:40 np0005603609 systemd[1]: Started Session 10 of User zuul.
Jan 31 01:56:41 np0005603609 python3.9[31496]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 01:56:42 np0005603609 python3.9[31670]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:56:43 np0005603609 python3.9[31822]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:56:44 np0005603609 python3.9[31975]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:56:45 np0005603609 python3.9[32127]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:56:45 np0005603609 python3.9[32279]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:56:46 np0005603609 python3.9[32402]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842605.3401217-178-54852019450685/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:56:47 np0005603609 python3.9[32555]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:56:47 np0005603609 python3.9[32711]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:56:48 np0005603609 python3.9[32863]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:56:49 np0005603609 python3.9[33013]: ansible-ansible.builtin.service_facts Invoked
Jan 31 01:56:52 np0005603609 python3.9[33266]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:56:53 np0005603609 python3.9[33416]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:56:54 np0005603609 python3.9[33570]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:56:55 np0005603609 python3.9[33728]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 01:56:56 np0005603609 python3.9[33812]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:57:39 np0005603609 systemd[1]: Reloading.
Jan 31 01:57:39 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:57:39 np0005603609 systemd[1]: Starting dnf makecache...
Jan 31 01:57:39 np0005603609 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 31 01:57:39 np0005603609 dnf[34023]: Failed determining last makecache time.
Jan 31 01:57:39 np0005603609 dnf[34023]: delorean-openstack-barbican-42b4c41831408a8e323 127 kB/s | 3.0 kB     00:00
Jan 31 01:57:39 np0005603609 dnf[34023]: delorean-python-glean-642fffe0203a8ffcc2443db52 153 kB/s | 3.0 kB     00:00
Jan 31 01:57:39 np0005603609 dnf[34023]: delorean-openstack-cinder-1c00d6490d88e436f26ef 153 kB/s | 3.0 kB     00:00
Jan 31 01:57:39 np0005603609 dnf[34023]: delorean-python-stevedore-c4acc5639fd2329372142 162 kB/s | 3.0 kB     00:00
Jan 31 01:57:39 np0005603609 dnf[34023]: delorean-python-cloudkitty-tests-tempest-783703 165 kB/s | 3.0 kB     00:00
Jan 31 01:57:39 np0005603609 dnf[34023]: delorean-diskimage-builder-61b717cc45660834fe9a 185 kB/s | 3.0 kB     00:00
Jan 31 01:57:39 np0005603609 dnf[34023]: delorean-openstack-nova-eaa65f0b85123a4ee343246 168 kB/s | 3.0 kB     00:00
Jan 31 01:57:39 np0005603609 systemd[1]: Reloading.
Jan 31 01:57:40 np0005603609 dnf[34023]: delorean-python-designate-tests-tempest-347fdbc 164 kB/s | 3.0 kB     00:00
Jan 31 01:57:40 np0005603609 dnf[34023]: delorean-openstack-glance-1fd12c29b339f30fe823e 171 kB/s | 3.0 kB     00:00
Jan 31 01:57:40 np0005603609 dnf[34023]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 144 kB/s | 3.0 kB     00:00
Jan 31 01:57:40 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:57:40 np0005603609 dnf[34023]: delorean-openstack-manila-d783d10e75495b73866db 141 kB/s | 3.0 kB     00:00
Jan 31 01:57:40 np0005603609 dnf[34023]: delorean-openstack-neutron-95cadbd379667c8520c8 143 kB/s | 3.0 kB     00:00
Jan 31 01:57:40 np0005603609 dnf[34023]: delorean-openstack-octavia-5975097dd4b021385178 150 kB/s | 3.0 kB     00:00
Jan 31 01:57:40 np0005603609 dnf[34023]: delorean-openstack-watcher-c014f81a8647287f6dcc 170 kB/s | 3.0 kB     00:00
Jan 31 01:57:40 np0005603609 dnf[34023]: delorean-python-tcib-78032d201b02cee27e8e644c61 122 kB/s | 3.0 kB     00:00
Jan 31 01:57:40 np0005603609 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 31 01:57:40 np0005603609 dnf[34023]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 173 kB/s | 3.0 kB     00:00
Jan 31 01:57:40 np0005603609 dnf[34023]: delorean-openstack-swift-dc98a8463506ac520c469a 172 kB/s | 3.0 kB     00:00
Jan 31 01:57:40 np0005603609 dnf[34023]: delorean-python-tempestconf-8515371b7cceebd4282 153 kB/s | 3.0 kB     00:00
Jan 31 01:57:40 np0005603609 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 31 01:57:40 np0005603609 dnf[34023]: delorean-openstack-heat-ui-013accbfd179753bc3f0 176 kB/s | 3.0 kB     00:00
Jan 31 01:57:40 np0005603609 systemd[1]: Reloading.
Jan 31 01:57:40 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:57:40 np0005603609 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 31 01:57:40 np0005603609 dnf[34023]: CentOS Stream 9 - BaseOS                         25 kB/s | 6.1 kB     00:00
Jan 31 01:57:40 np0005603609 dbus-broker-launch[800]: Noticed file-system modification, trigger reload.
Jan 31 01:57:40 np0005603609 dbus-broker-launch[800]: Noticed file-system modification, trigger reload.
Jan 31 01:57:40 np0005603609 dbus-broker-launch[800]: Noticed file-system modification, trigger reload.
Jan 31 01:57:40 np0005603609 dnf[34023]: CentOS Stream 9 - AppStream                      57 kB/s | 6.5 kB     00:00
Jan 31 01:57:40 np0005603609 dnf[34023]: CentOS Stream 9 - CRB                            61 kB/s | 6.0 kB     00:00
Jan 31 01:57:41 np0005603609 dnf[34023]: CentOS Stream 9 - Extras packages                32 kB/s | 7.3 kB     00:00
Jan 31 01:57:41 np0005603609 dnf[34023]: dlrn-antelope-testing                           171 kB/s | 3.0 kB     00:00
Jan 31 01:57:41 np0005603609 dnf[34023]: dlrn-antelope-build-deps                        145 kB/s | 3.0 kB     00:00
Jan 31 01:57:41 np0005603609 dnf[34023]: centos9-rabbitmq                                 95 kB/s | 3.0 kB     00:00
Jan 31 01:57:41 np0005603609 dnf[34023]: centos9-storage                                 124 kB/s | 3.0 kB     00:00
Jan 31 01:57:41 np0005603609 dnf[34023]: centos9-opstools                                133 kB/s | 3.0 kB     00:00
Jan 31 01:57:41 np0005603609 dnf[34023]: NFV SIG OpenvSwitch                             145 kB/s | 3.0 kB     00:00
Jan 31 01:57:41 np0005603609 dnf[34023]: repo-setup-centos-appstream                     211 kB/s | 4.4 kB     00:00
Jan 31 01:57:41 np0005603609 dnf[34023]: repo-setup-centos-baseos                        159 kB/s | 3.9 kB     00:00
Jan 31 01:57:41 np0005603609 dnf[34023]: repo-setup-centos-highavailability              169 kB/s | 3.9 kB     00:00
Jan 31 01:57:41 np0005603609 dnf[34023]: repo-setup-centos-powertools                    169 kB/s | 4.3 kB     00:00
Jan 31 01:57:41 np0005603609 dnf[34023]: Extra Packages for Enterprise Linux 9 - x86_64  243 kB/s |  31 kB     00:00
Jan 31 01:57:42 np0005603609 dnf[34023]: Metadata cache created.
Jan 31 01:57:42 np0005603609 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 31 01:57:42 np0005603609 systemd[1]: Finished dnf makecache.
Jan 31 01:57:42 np0005603609 systemd[1]: dnf-makecache.service: Consumed 1.878s CPU time.
Jan 31 01:58:39 np0005603609 kernel: SELinux:  Converting 2727 SID table entries...
Jan 31 01:58:39 np0005603609 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 01:58:39 np0005603609 kernel: SELinux:  policy capability open_perms=1
Jan 31 01:58:39 np0005603609 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 01:58:39 np0005603609 kernel: SELinux:  policy capability always_check_network=0
Jan 31 01:58:39 np0005603609 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 01:58:39 np0005603609 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 01:58:39 np0005603609 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 01:58:40 np0005603609 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 31 01:58:40 np0005603609 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 01:58:40 np0005603609 systemd[1]: Starting man-db-cache-update.service...
Jan 31 01:58:40 np0005603609 systemd[1]: Reloading.
Jan 31 01:58:40 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:58:40 np0005603609 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 01:58:41 np0005603609 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 01:58:41 np0005603609 systemd[1]: Finished man-db-cache-update.service.
Jan 31 01:58:41 np0005603609 systemd[1]: run-r7378242570634aa1a58691b7131a034f.service: Deactivated successfully.
Jan 31 01:58:51 np0005603609 python3.9[35377]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:58:54 np0005603609 python3.9[35658]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 01:58:55 np0005603609 python3.9[35810]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 01:58:58 np0005603609 python3.9[35965]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:58:59 np0005603609 python3.9[36117]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 01:59:05 np0005603609 python3.9[36269]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:07 np0005603609 python3.9[36422]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:08 np0005603609 python3.9[36545]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842746.2079332-668-201552092126397/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:59:10 np0005603609 python3.9[36697]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:59:10 np0005603609 python3.9[36849]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:59:11 np0005603609 python3.9[37002]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:59:13 np0005603609 python3.9[37154]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 01:59:13 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 01:59:14 np0005603609 python3.9[37308]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 01:59:15 np0005603609 python3.9[37466]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 01:59:15 np0005603609 python3.9[37626]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 01:59:16 np0005603609 python3.9[37779]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 01:59:17 np0005603609 python3.9[37937]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 01:59:18 np0005603609 python3.9[38089]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:59:21 np0005603609 python3.9[38242]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:22 np0005603609 python3.9[38394]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:22 np0005603609 python3.9[38517]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842761.7451851-1024-121656221209664/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:23 np0005603609 python3.9[38669]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 01:59:23 np0005603609 systemd[1]: Starting Load Kernel Modules...
Jan 31 01:59:23 np0005603609 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 31 01:59:23 np0005603609 kernel: Bridge firewalling registered
Jan 31 01:59:23 np0005603609 systemd-modules-load[38673]: Inserted module 'br_netfilter'
Jan 31 01:59:23 np0005603609 systemd[1]: Finished Load Kernel Modules.
Jan 31 01:59:24 np0005603609 python3.9[38828]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 01:59:25 np0005603609 python3.9[38951]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842764.178157-1094-80265584817175/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 01:59:26 np0005603609 python3.9[39103]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 01:59:29 np0005603609 dbus-broker-launch[800]: Noticed file-system modification, trigger reload.
Jan 31 01:59:29 np0005603609 dbus-broker-launch[800]: Noticed file-system modification, trigger reload.
Jan 31 01:59:29 np0005603609 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 01:59:29 np0005603609 systemd[1]: Starting man-db-cache-update.service...
Jan 31 01:59:29 np0005603609 systemd[1]: Reloading.
Jan 31 01:59:29 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:59:30 np0005603609 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 01:59:31 np0005603609 python3.9[40751]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:59:32 np0005603609 python3.9[41894]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 01:59:32 np0005603609 python3.9[42729]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 01:59:33 np0005603609 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 01:59:33 np0005603609 systemd[1]: Finished man-db-cache-update.service.
Jan 31 01:59:33 np0005603609 systemd[1]: man-db-cache-update.service: Consumed 4.240s CPU time.
Jan 31 01:59:33 np0005603609 systemd[1]: run-r379ca031c776431c9d4e017233dc2ba7.service: Deactivated successfully.
Jan 31 01:59:33 np0005603609 python3.9[43335]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:59:33 np0005603609 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 01:59:34 np0005603609 systemd[1]: Starting Authorization Manager...
Jan 31 01:59:34 np0005603609 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 01:59:34 np0005603609 polkitd[43552]: Started polkitd version 0.117
Jan 31 01:59:34 np0005603609 systemd[1]: Started Authorization Manager.
Jan 31 01:59:35 np0005603609 python3.9[43722]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:59:35 np0005603609 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 01:59:35 np0005603609 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 01:59:35 np0005603609 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 01:59:35 np0005603609 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 01:59:35 np0005603609 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 01:59:36 np0005603609 python3.9[43884]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 01:59:41 np0005603609 python3.9[44036]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:59:41 np0005603609 systemd[1]: Reloading.
Jan 31 01:59:41 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:59:42 np0005603609 python3.9[44226]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 01:59:42 np0005603609 systemd[1]: Reloading.
Jan 31 01:59:42 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:59:43 np0005603609 python3.9[44415]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:59:43 np0005603609 python3.9[44568]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:59:43 np0005603609 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 31 01:59:44 np0005603609 python3.9[44721]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:59:46 np0005603609 python3.9[44883]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:59:47 np0005603609 python3.9[45036]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 01:59:47 np0005603609 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 01:59:47 np0005603609 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 01:59:47 np0005603609 systemd[1]: Stopping Apply Kernel Variables...
Jan 31 01:59:47 np0005603609 systemd[1]: Starting Apply Kernel Variables...
Jan 31 01:59:47 np0005603609 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 01:59:47 np0005603609 systemd[1]: Finished Apply Kernel Variables.
Jan 31 01:59:48 np0005603609 systemd[1]: session-10.scope: Deactivated successfully.
Jan 31 01:59:48 np0005603609 systemd[1]: session-10.scope: Consumed 2min 4.378s CPU time.
Jan 31 01:59:48 np0005603609 systemd-logind[823]: Session 10 logged out. Waiting for processes to exit.
Jan 31 01:59:48 np0005603609 systemd-logind[823]: Removed session 10.
Jan 31 01:59:56 np0005603609 systemd-logind[823]: New session 11 of user zuul.
Jan 31 01:59:56 np0005603609 systemd[1]: Started Session 11 of User zuul.
Jan 31 01:59:57 np0005603609 python3.9[45220]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:59:58 np0005603609 python3.9[45376]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 01:59:59 np0005603609 python3.9[45529]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 02:00:00 np0005603609 python3.9[45687]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 02:00:01 np0005603609 python3.9[45847]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:00:02 np0005603609 python3.9[45931]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 02:00:05 np0005603609 python3.9[46094]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:00:16 np0005603609 kernel: SELinux:  Converting 2739 SID table entries...
Jan 31 02:00:16 np0005603609 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:00:16 np0005603609 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:00:16 np0005603609 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:00:16 np0005603609 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:00:16 np0005603609 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:00:16 np0005603609 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:00:16 np0005603609 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 02:00:16 np0005603609 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 31 02:00:16 np0005603609 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 31 02:00:17 np0005603609 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:00:17 np0005603609 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:00:17 np0005603609 systemd[1]: Reloading.
Jan 31 02:00:17 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:00:17 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:00:17 np0005603609 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:00:18 np0005603609 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:00:18 np0005603609 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:00:18 np0005603609 systemd[1]: run-re4b2a9fe57e84f72bc72c291ebc1d35c.service: Deactivated successfully.
Jan 31 02:00:22 np0005603609 python3.9[47192]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:00:23 np0005603609 systemd[1]: Reloading.
Jan 31 02:00:23 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:00:23 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:00:23 np0005603609 systemd[1]: Starting Open vSwitch Database Unit...
Jan 31 02:00:23 np0005603609 chown[47235]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 31 02:00:23 np0005603609 ovs-ctl[47240]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 31 02:00:23 np0005603609 ovs-ctl[47240]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 31 02:00:23 np0005603609 ovs-ctl[47240]: Starting ovsdb-server [  OK  ]
Jan 31 02:00:23 np0005603609 ovs-vsctl[47289]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 31 02:00:23 np0005603609 ovs-vsctl[47305]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"a8402939-fce1-46a9-9749-88c4c6334003\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 31 02:00:23 np0005603609 ovs-ctl[47240]: Configuring Open vSwitch system IDs [  OK  ]
Jan 31 02:00:23 np0005603609 ovs-ctl[47240]: Enabling remote OVSDB managers [  OK  ]
Jan 31 02:00:23 np0005603609 ovs-vsctl[47315]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Jan 31 02:00:23 np0005603609 systemd[1]: Started Open vSwitch Database Unit.
Jan 31 02:00:23 np0005603609 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 31 02:00:23 np0005603609 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 31 02:00:23 np0005603609 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 31 02:00:23 np0005603609 kernel: openvswitch: Open vSwitch switching datapath
Jan 31 02:00:23 np0005603609 ovs-ctl[47359]: Inserting openvswitch module [  OK  ]
Jan 31 02:00:23 np0005603609 ovs-ctl[47328]: Starting ovs-vswitchd [  OK  ]
Jan 31 02:00:23 np0005603609 ovs-vsctl[47376]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Jan 31 02:00:23 np0005603609 ovs-ctl[47328]: Enabling remote OVSDB managers [  OK  ]
Jan 31 02:00:23 np0005603609 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 31 02:00:23 np0005603609 systemd[1]: Starting Open vSwitch...
Jan 31 02:00:23 np0005603609 systemd[1]: Finished Open vSwitch.
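Annotation: the ovs-vsctl lines in the startup sequence above record the database version, daemon version, and the generated system-id. When reading such journals it can help to pull those key/value pairs out of the logged command line; this is an illustrative parser for the exact line format shown above, not part of any deployment tooling.

```python
import re

# One of the ovs-vsctl journal lines from the OVS first-start sequence above
# (backslash-escaped quotes appear literally in the journal text).
line = r'ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"a8402939-fce1-46a9-9749-88c4c6334003\""'

# The version value runs up to the next whitespace.
version = re.search(r'ovs-version=(\S+)', line).group(1)

# The system-id sits behind a literal backslash-escaped quote in the log text.
sysid = re.search(r'system-id=\\"([0-9a-f-]+)', line).group(1)

print(version, sysid)
```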
Jan 31 02:00:24 np0005603609 python3.9[47528]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:00:25 np0005603609 python3.9[47680]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 02:00:26 np0005603609 kernel: SELinux:  Converting 2753 SID table entries...
Jan 31 02:00:26 np0005603609 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:00:26 np0005603609 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:00:26 np0005603609 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:00:26 np0005603609 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:00:26 np0005603609 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:00:26 np0005603609 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:00:26 np0005603609 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
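Annotation: the SELinux policy reload above is triggered by the community.general.sefcontext task, which adds a file-context mapping for `/var/lib/edpm-config(/.*)?`. Under the hood this is roughly what `semanage fcontext -a` does from a shell. The helper below is a hypothetical sketch of that mapping from task arguments to a command line; the exact flags the module uses internally are an assumption.

```python
# Hypothetical helper: build the semanage command roughly equivalent to the
# sefcontext task above (target, setype, selevel are the module's arguments).
def semanage_fcontext_cmd(target, setype, selevel="s0", present=True):
    action = "-a" if present else "-d"  # add vs delete the mapping
    return ["semanage", "fcontext", action, "-t", setype, "-r", selevel, target]

cmd = semanage_fcontext_cmd(r"/var/lib/edpm-config(/.*)?", "container_file_t")
print(" ".join(cmd))
```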
Jan 31 02:00:28 np0005603609 python3.9[47835]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:00:28 np0005603609 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 31 02:00:29 np0005603609 python3.9[47993]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:00:31 np0005603609 python3.9[48146]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:00:32 np0005603609 python3.9[48433]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 02:00:33 np0005603609 python3.9[48583]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:00:34 np0005603609 python3.9[48737]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:00:35 np0005603609 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:00:35 np0005603609 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:00:35 np0005603609 systemd[1]: Reloading.
Jan 31 02:00:35 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:00:35 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:00:35 np0005603609 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:00:36 np0005603609 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:00:36 np0005603609 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:00:36 np0005603609 systemd[1]: run-r748a993cc7ec4d71afed5903e23a544d.service: Deactivated successfully.
Jan 31 02:00:37 np0005603609 python3.9[49054]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:00:37 np0005603609 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 02:00:37 np0005603609 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 02:00:37 np0005603609 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 02:00:37 np0005603609 systemd[1]: Stopping Network Manager...
Jan 31 02:00:37 np0005603609 NetworkManager[7205]: <info>  [1769842837.7764] caught SIGTERM, shutting down normally.
Jan 31 02:00:37 np0005603609 NetworkManager[7205]: <info>  [1769842837.7783] dhcp4 (eth0): canceled DHCP transaction
Jan 31 02:00:37 np0005603609 NetworkManager[7205]: <info>  [1769842837.7783] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 02:00:37 np0005603609 NetworkManager[7205]: <info>  [1769842837.7783] dhcp4 (eth0): state changed no lease
Jan 31 02:00:37 np0005603609 NetworkManager[7205]: <info>  [1769842837.7786] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 02:00:37 np0005603609 NetworkManager[7205]: <info>  [1769842837.7854] exiting (success)
Jan 31 02:00:37 np0005603609 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 02:00:37 np0005603609 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 02:00:37 np0005603609 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 02:00:37 np0005603609 systemd[1]: Stopped Network Manager.
Jan 31 02:00:37 np0005603609 systemd[1]: NetworkManager.service: Consumed 24.521s CPU time, 4.1M memory peak, read 0B from disk, written 34.0K to disk.
Jan 31 02:00:37 np0005603609 systemd[1]: Starting Network Manager...
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.8349] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:ce650b02-d8ad-4f14-899a-0e3fc9a18049)
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.8352] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.8410] manager[0x55df2ae50000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 02:00:37 np0005603609 systemd[1]: Starting Hostname Service...
Jan 31 02:00:37 np0005603609 systemd[1]: Started Hostname Service.
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9176] hostname: hostname: using hostnamed
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9177] hostname: static hostname changed from (none) to "compute-1"
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9180] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9184] manager[0x55df2ae50000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9185] manager[0x55df2ae50000]: rfkill: WWAN hardware radio set enabled
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9202] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9210] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9211] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9211] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9212] manager: Networking is enabled by state file
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9213] settings: Loaded settings plugin: keyfile (internal)
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9216] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9237] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9244] dhcp: init: Using DHCP client 'internal'
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9246] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9250] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9254] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9260] device (lo): Activation: starting connection 'lo' (4702302b-2207-4077-8516-154f1b85f390)
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9267] device (eth0): carrier: link connected
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9270] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9274] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9275] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9279] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9285] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9289] device (eth1): carrier: link connected
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9292] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9295] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (6e8a572d-ed1d-574b-8dec-718c9179119c) (indicated)
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9295] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9298] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9303] device (eth1): Activation: starting connection 'ci-private-network' (6e8a572d-ed1d-574b-8dec-718c9179119c)
Jan 31 02:00:37 np0005603609 systemd[1]: Started Network Manager.
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9312] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9322] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9329] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9332] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9336] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9341] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9344] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9347] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9351] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9359] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9362] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9376] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9400] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9410] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9411] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 02:00:37 np0005603609 systemd[1]: Starting Network Manager Wait Online...
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9416] device (lo): Activation: successful, device activated.
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9422] dhcp4 (eth0): state changed new lease, address=38.129.56.60
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9427] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9503] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9511] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9512] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9515] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9517] device (eth1): Activation: successful, device activated.
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9526] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9528] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9530] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9532] device (eth0): Activation: successful, device activated.
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9539] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 02:00:37 np0005603609 NetworkManager[49064]: <info>  [1769842837.9542] manager: startup complete
Jan 31 02:00:37 np0005603609 systemd[1]: Finished Network Manager Wait Online.
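Annotation: NetworkManager prefixes each message with a Unix epoch timestamp in brackets (e.g. `[1769842837.9542]`). Converting one of the values above shows the journal's local timestamps run five hours behind UTC; a quick illustrative conversion:

```python
from datetime import datetime, timezone

# Epoch taken from a NetworkManager line above; the journal column shows
# 02:00:37 local, so the host clock is at UTC-5.
ts = 1769842837.9542
utc = datetime.fromtimestamp(ts, tz=timezone.utc)
stamp = utc.strftime("%Y-%m-%d %H:%M:%S")
print(stamp)  # 2026-01-31 07:00:37
```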
Jan 31 02:00:38 np0005603609 python3.9[49280]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:00:42 np0005603609 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:00:43 np0005603609 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:00:43 np0005603609 systemd[1]: Reloading.
Jan 31 02:00:43 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:00:43 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:00:43 np0005603609 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:00:44 np0005603609 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:00:44 np0005603609 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:00:44 np0005603609 systemd[1]: run-rdd12a3248eb340209017ea4f93d7a790.service: Deactivated successfully.
Jan 31 02:00:46 np0005603609 python3.9[49740]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:00:47 np0005603609 python3.9[49892]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:48 np0005603609 python3.9[50046]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:48 np0005603609 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 02:00:48 np0005603609 python3.9[50198]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:49 np0005603609 python3.9[50350]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:49 np0005603609 python3.9[50502]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
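Annotation: the ini_file tasks above set `no-auto-default=*` in the `[main]` section of NetworkManager.conf and ensure `dns` and `rc-manager` are absent. A minimal stand-in for that edit using Python's configparser (illustrative only, not the module's implementation; the starting fragment is a made-up example):

```python
import configparser

# Start from a NetworkManager.conf-like fragment (contents are hypothetical).
conf = configparser.ConfigParser()
conf.read_string("[main]\nplugins=ifcfg-rh\ndns=none\nrc-manager=unmanaged\n")

# state=present: ensure the option exists with the desired value.
conf.set("main", "no-auto-default", "*")

# state=absent: drop the options the tasks above remove.
for opt in ("dns", "rc-manager"):
    conf.remove_option("main", opt)

main = dict(conf["main"])
print(main)
```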
Jan 31 02:00:50 np0005603609 python3.9[50654]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:00:51 np0005603609 python3.9[50777]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842849.9228084-648-126641998397598/.source _original_basename=._im8v0ou follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:51 np0005603609 python3.9[50929]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:52 np0005603609 python3.9[51081]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 31 02:00:53 np0005603609 python3.9[51233]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:00:55 np0005603609 python3.9[51660]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
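Annotation: the slurp task above reads `/etc/os-net-config/config.yaml` back to the controller; ansible.builtin.slurp returns the file content base64-encoded, so the caller must decode it. An illustrative round-trip with a stand-in payload (the real config contents are not shown in this journal):

```python
import base64

# Stand-in for the slurped file body; the actual config.yaml is not logged here.
raw = b"network_config: []\n"

# slurp returns {"content": <base64>, "encoding": "base64", ...}
encoded = base64.b64encode(raw).decode("ascii")

# Consumer side: decode back to the original text.
content = base64.b64decode(encoded).decode("utf-8")
print(content, end="")
```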
Jan 31 02:00:56 np0005603609 ansible-async_wrapper.py[51835]: Invoked with j179080268923 300 /home/zuul/.ansible/tmp/ansible-tmp-1769842855.582884-846-227220787398859/AnsiballZ_edpm_os_net_config.py _
Jan 31 02:00:56 np0005603609 ansible-async_wrapper.py[51838]: Starting module and watcher
Jan 31 02:00:56 np0005603609 ansible-async_wrapper.py[51838]: Start watching 51839 (300)
Jan 31 02:00:56 np0005603609 ansible-async_wrapper.py[51839]: Start module (51839)
Jan 31 02:00:56 np0005603609 ansible-async_wrapper.py[51835]: Return async_wrapper task started.
Jan 31 02:00:56 np0005603609 python3.9[51840]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 31 02:00:57 np0005603609 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 31 02:00:57 np0005603609 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 31 02:00:57 np0005603609 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 31 02:00:57 np0005603609 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 31 02:00:57 np0005603609 kernel: cfg80211: failed to load regulatory.db
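Annotation: the edpm_os_net_config module invocation above (cleanup=True, debug=True, detailed_exit_codes=True) drives the os-net-config tool, whose CLI does expose `-c/--config-file`, `--debug`, `--detailed-exit-codes`, and `--cleanup` flags. The mapping below from module arguments to a command line is a hedged sketch, not the role's actual code path:

```python
# Hypothetical sketch: translate the module arguments logged above into the
# os-net-config command line they roughly correspond to.
def build_cmd(config_file, debug=False, detailed_exit_codes=False, cleanup=False):
    cmd = ["os-net-config", "-c", config_file]
    if debug:
        cmd.append("--debug")
    if detailed_exit_codes:
        cmd.append("--detailed-exit-codes")
    if cleanup:
        cmd.append("--cleanup")
    return cmd

cmd = build_cmd("/etc/os-net-config/config.yaml",
                debug=True, detailed_exit_codes=True, cleanup=True)
print(" ".join(cmd))
```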
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.3766] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.3790] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4397] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4399] audit: op="connection-add" uuid="1e6b7b38-4a9f-40bf-8177-52f3e1f85bdf" name="br-ex-br" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4415] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4417] audit: op="connection-add" uuid="3d35f505-b814-4343-b808-33cfd9a45232" name="br-ex-port" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4431] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4432] audit: op="connection-add" uuid="ab83a260-2770-4c05-8da3-294f391b4a74" name="eth1-port" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4445] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4447] audit: op="connection-add" uuid="b1a7abf6-671c-4f0b-b1a0-7b918726fb68" name="vlan20-port" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4460] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4462] audit: op="connection-add" uuid="6bea02c5-0f71-4092-a35b-14a054c9fcfb" name="vlan21-port" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4477] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4480] audit: op="connection-add" uuid="2cfccad5-202c-43d3-bb87-a0410f16d8e7" name="vlan22-port" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4492] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4493] audit: op="connection-add" uuid="1aa529eb-f21e-4504-a16b-352eb4ffb018" name="vlan23-port" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4515] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4534] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4535] audit: op="connection-add" uuid="344cb185-d84d-4425-8a71-86fc4c0d490c" name="br-ex-if" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4580] audit: op="connection-update" uuid="6e8a572d-ed1d-574b-8dec-718c9179119c" name="ci-private-network" args="ovs-interface.type,connection.controller,connection.timestamp,connection.port-type,connection.master,connection.slave-type,ovs-external-ids.data,ipv6.addr-gen-mode,ipv6.dns,ipv6.routes,ipv6.routing-rules,ipv6.method,ipv6.addresses,ipv4.dns,ipv4.never-default,ipv4.routes,ipv4.routing-rules,ipv4.method,ipv4.addresses" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4599] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4601] audit: op="connection-add" uuid="c16ffd8b-c4b1-41f5-b695-d7b52794f227" name="vlan20-if" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4617] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4620] audit: op="connection-add" uuid="52990689-32f0-4151-959f-673dfc0823f6" name="vlan21-if" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4635] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4638] audit: op="connection-add" uuid="0b1f2bc6-ca5c-4471-bf7d-ea68a5df147a" name="vlan22-if" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4653] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4656] audit: op="connection-add" uuid="28aa8fb5-4e3e-4c8a-b5f8-655762ff2d96" name="vlan23-if" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4667] audit: op="connection-delete" uuid="2a2c2ab2-5ce1-3781-8b22-8c858885c334" name="Wired connection 1" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4681] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <warn>  [1769842858.4684] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4693] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4698] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (1e6b7b38-4a9f-40bf-8177-52f3e1f85bdf)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4699] audit: op="connection-activate" uuid="1e6b7b38-4a9f-40bf-8177-52f3e1f85bdf" name="br-ex-br" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4702] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <warn>  [1769842858.4703] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Success
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4710] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4715] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (3d35f505-b814-4343-b808-33cfd9a45232)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4718] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <warn>  [1769842858.4720] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4725] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4730] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (ab83a260-2770-4c05-8da3-294f391b4a74)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4733] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <warn>  [1769842858.4734] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4741] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4746] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (b1a7abf6-671c-4f0b-b1a0-7b918726fb68)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4749] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <warn>  [1769842858.4750] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4757] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4762] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (6bea02c5-0f71-4092-a35b-14a054c9fcfb)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4765] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <warn>  [1769842858.4766] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4774] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4781] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (2cfccad5-202c-43d3-bb87-a0410f16d8e7)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4784] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <warn>  [1769842858.4786] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4793] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4797] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (1aa529eb-f21e-4504-a16b-352eb4ffb018)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4799] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4802] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4806] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4814] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <warn>  [1769842858.4816] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4822] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4832] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (344cb185-d84d-4425-8a71-86fc4c0d490c)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4833] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4840] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4844] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4845] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4848] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4864] device (eth1): disconnecting for new activation request.
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4865] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4868] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4870] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4871] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4874] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <warn>  [1769842858.4875] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4879] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4885] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (c16ffd8b-c4b1-41f5-b695-d7b52794f227)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4886] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4890] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4892] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4894] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4897] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <warn>  [1769842858.4898] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4902] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4905] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (52990689-32f0-4151-959f-673dfc0823f6)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4906] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4910] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4912] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4913] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4917] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <warn>  [1769842858.4918] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4921] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4927] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (0b1f2bc6-ca5c-4471-bf7d-ea68a5df147a)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4927] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4931] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4933] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4935] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4938] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <warn>  [1769842858.4938] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4942] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4947] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (28aa8fb5-4e3e-4c8a-b5f8-655762ff2d96)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4948] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4951] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4952] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4953] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4955] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4970] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4972] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4975] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4977] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4983] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4986] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4989] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4992] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4994] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.4998] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5003] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5006] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5007] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 kernel: ovs-system: entered promiscuous mode
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5012] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5016] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5018] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5019] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5023] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5030] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5033] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 systemd-udevd[51847]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:00:58 np0005603609 kernel: Timeout policy base is empty
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5036] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5041] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5045] dhcp4 (eth0): canceled DHCP transaction
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5045] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5045] dhcp4 (eth0): state changed no lease
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5047] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5059] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 31 02:00:58 np0005603609 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5063] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51841 uid=0 result="fail" reason="Device is not activated"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5105] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5109] dhcp4 (eth0): state changed new lease, address=38.129.56.60
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5152] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5164] device (eth1): disconnecting for new activation request.
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5165] audit: op="connection-activate" uuid="6e8a572d-ed1d-574b-8dec-718c9179119c" name="ci-private-network" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5167] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5174] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 31 02:00:58 np0005603609 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5221] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51841 uid=0 result="success"
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5221] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 31 02:00:58 np0005603609 kernel: br-ex: entered promiscuous mode
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5348] device (eth1): Activation: starting connection 'ci-private-network' (6e8a572d-ed1d-574b-8dec-718c9179119c)
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5354] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5362] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5365] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5373] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5378] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5389] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5391] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5392] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5394] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5396] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5398] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 kernel: vlan22: entered promiscuous mode
Jan 31 02:00:58 np0005603609 systemd-udevd[51845]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:00:58 np0005603609 kernel: vlan20: entered promiscuous mode
Jan 31 02:00:58 np0005603609 systemd-udevd[51846]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5506] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 kernel: vlan21: entered promiscuous mode
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5523] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5529] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5535] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5564] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5570] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5577] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5583] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 kernel: vlan23: entered promiscuous mode
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5590] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5596] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5603] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 02:00:58 np0005603609 systemd-udevd[51944]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5610] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5616] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 02:00:58 np0005603609 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5668] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5682] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5727] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5739] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5746] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5756] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5760] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5770] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5786] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5791] device (eth1): Activation: successful, device activated.
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5799] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5824] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5832] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5841] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5848] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5859] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5864] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5869] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5875] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5877] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5878] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5879] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5882] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5886] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5892] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5896] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5901] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5905] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5912] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:00:58 np0005603609 NetworkManager[49064]: <info>  [1769842858.5919] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 02:00:59 np0005603609 NetworkManager[49064]: <info>  [1769842859.7237] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51841 uid=0 result="success"
Jan 31 02:00:59 np0005603609 NetworkManager[49064]: <info>  [1769842859.8956] checkpoint[0x55df2ae25950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 31 02:00:59 np0005603609 NetworkManager[49064]: <info>  [1769842859.8958] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51841 uid=0 result="success"
Jan 31 02:01:00 np0005603609 python3.9[52200]: ansible-ansible.legacy.async_status Invoked with jid=j179080268923.51835 mode=status _async_dir=/root/.ansible_async
Jan 31 02:01:00 np0005603609 NetworkManager[49064]: <info>  [1769842860.2609] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51841 uid=0 result="success"
Jan 31 02:01:00 np0005603609 NetworkManager[49064]: <info>  [1769842860.2621] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51841 uid=0 result="success"
Jan 31 02:01:00 np0005603609 NetworkManager[49064]: <info>  [1769842860.4909] audit: op="networking-control" arg="global-dns-configuration" pid=51841 uid=0 result="success"
Jan 31 02:01:00 np0005603609 NetworkManager[49064]: <info>  [1769842860.4946] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 31 02:01:00 np0005603609 NetworkManager[49064]: <info>  [1769842860.4990] audit: op="networking-control" arg="global-dns-configuration" pid=51841 uid=0 result="success"
Jan 31 02:01:00 np0005603609 NetworkManager[49064]: <info>  [1769842860.5011] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51841 uid=0 result="success"
Jan 31 02:01:00 np0005603609 NetworkManager[49064]: <info>  [1769842860.6558] checkpoint[0x55df2ae25a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 31 02:01:00 np0005603609 NetworkManager[49064]: <info>  [1769842860.6564] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51841 uid=0 result="success"
Jan 31 02:01:00 np0005603609 ansible-async_wrapper.py[51839]: Module complete (51839)
Jan 31 02:01:01 np0005603609 ansible-async_wrapper.py[51838]: Done in kid B.
Jan 31 02:01:03 np0005603609 python3.9[52320]: ansible-ansible.legacy.async_status Invoked with jid=j179080268923.51835 mode=status _async_dir=/root/.ansible_async
Jan 31 02:01:04 np0005603609 python3.9[52420]: ansible-ansible.legacy.async_status Invoked with jid=j179080268923.51835 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 02:01:04 np0005603609 python3.9[52572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:01:05 np0005603609 python3.9[52695]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842864.338127-927-153581574438027/.source.returncode _original_basename=.1w7gztd8 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:01:06 np0005603609 python3.9[52847]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:01:06 np0005603609 python3.9[52970]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842865.550722-975-57003369282439/.source.cfg _original_basename=.faep1gw5 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:01:07 np0005603609 python3.9[53123]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:01:07 np0005603609 systemd[1]: Reloading Network Manager...
Jan 31 02:01:07 np0005603609 NetworkManager[49064]: <info>  [1769842867.5136] audit: op="reload" arg="0" pid=53127 uid=0 result="success"
Jan 31 02:01:07 np0005603609 NetworkManager[49064]: <info>  [1769842867.5150] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 31 02:01:07 np0005603609 systemd[1]: Reloaded Network Manager.
Jan 31 02:01:07 np0005603609 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 02:01:07 np0005603609 systemd[1]: session-11.scope: Deactivated successfully.
Jan 31 02:01:07 np0005603609 systemd[1]: session-11.scope: Consumed 45.451s CPU time.
Jan 31 02:01:07 np0005603609 systemd-logind[823]: Session 11 logged out. Waiting for processes to exit.
Jan 31 02:01:07 np0005603609 systemd-logind[823]: Removed session 11.
Jan 31 02:01:13 np0005603609 systemd-logind[823]: New session 12 of user zuul.
Jan 31 02:01:13 np0005603609 systemd[1]: Started Session 12 of User zuul.
Jan 31 02:01:14 np0005603609 python3.9[53315]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:01:15 np0005603609 python3.9[53469]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:01:17 np0005603609 python3.9[53663]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:01:17 np0005603609 systemd[1]: session-12.scope: Deactivated successfully.
Jan 31 02:01:17 np0005603609 systemd[1]: session-12.scope: Consumed 2.212s CPU time.
Jan 31 02:01:17 np0005603609 systemd-logind[823]: Session 12 logged out. Waiting for processes to exit.
Jan 31 02:01:17 np0005603609 systemd-logind[823]: Removed session 12.
Jan 31 02:01:17 np0005603609 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 02:01:24 np0005603609 systemd-logind[823]: New session 13 of user zuul.
Jan 31 02:01:24 np0005603609 systemd[1]: Started Session 13 of User zuul.
Jan 31 02:01:25 np0005603609 python3.9[53845]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:01:26 np0005603609 python3.9[53999]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:01:27 np0005603609 python3.9[54156]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:01:27 np0005603609 python3.9[54240]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:01:30 np0005603609 python3.9[54394]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:01:31 np0005603609 python3.9[54589]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:01:32 np0005603609 python3.9[54741]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:01:32 np0005603609 systemd[1]: var-lib-containers-storage-overlay-compat79752020-merged.mount: Deactivated successfully.
Jan 31 02:01:32 np0005603609 podman[54742]: 2026-01-31 07:01:32.273368179 +0000 UTC m=+0.043001941 system refresh
Jan 31 02:01:33 np0005603609 python3.9[54903]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:01:33 np0005603609 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:01:33 np0005603609 python3.9[55026]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842892.4707012-198-253616444959023/.source.json follow=False _original_basename=podman_network_config.j2 checksum=572e1bf98b0adbd3b5abbaae733816d6baf3d63c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:01:34 np0005603609 python3.9[55178]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:01:35 np0005603609 python3.9[55301]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842894.1073985-243-77964450847521/.source.conf follow=False _original_basename=registries.conf.j2 checksum=a4fd3ca7d18166099562a65af8d6da655db34efc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:01:35 np0005603609 python3.9[55453]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:01:36 np0005603609 python3.9[55605]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:01:37 np0005603609 python3.9[55757]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:01:37 np0005603609 python3.9[55909]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:01:38 np0005603609 python3.9[56061]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:01:41 np0005603609 python3.9[56214]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:01:41 np0005603609 python3.9[56368]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:01:42 np0005603609 python3.9[56520]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:01:43 np0005603609 python3.9[56672]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:01:44 np0005603609 python3.9[56825]: ansible-service_facts Invoked
Jan 31 02:01:44 np0005603609 network[56842]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:01:44 np0005603609 network[56843]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:01:44 np0005603609 network[56844]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:01:48 np0005603609 python3.9[57298]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:01:51 np0005603609 python3.9[57451]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 02:01:52 np0005603609 python3.9[57603]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:01:53 np0005603609 python3.9[57728]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842912.3689241-676-56549678964493/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:01:54 np0005603609 python3.9[57882]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:01:54 np0005603609 python3.9[58007]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842913.601284-721-212848563493689/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:01:56 np0005603609 python3.9[58161]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:01:58 np0005603609 python3.9[58315]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:01:59 np0005603609 python3.9[58399]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:02:00 np0005603609 python3.9[58553]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:02:01 np0005603609 python3.9[58637]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:02:01 np0005603609 chronyd[831]: chronyd exiting
Jan 31 02:02:01 np0005603609 systemd[1]: Stopping NTP client/server...
Jan 31 02:02:01 np0005603609 systemd[1]: chronyd.service: Deactivated successfully.
Jan 31 02:02:01 np0005603609 systemd[1]: Stopped NTP client/server.
Jan 31 02:02:01 np0005603609 systemd[1]: Starting NTP client/server...
Jan 31 02:02:01 np0005603609 chronyd[58646]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 02:02:01 np0005603609 chronyd[58646]: Frequency -23.695 +/- 0.096 ppm read from /var/lib/chrony/drift
Jan 31 02:02:01 np0005603609 chronyd[58646]: Loaded seccomp filter (level 2)
Jan 31 02:02:01 np0005603609 systemd[1]: Started NTP client/server.
Jan 31 02:02:01 np0005603609 systemd[1]: session-13.scope: Deactivated successfully.
Jan 31 02:02:01 np0005603609 systemd[1]: session-13.scope: Consumed 23.215s CPU time.
Jan 31 02:02:01 np0005603609 systemd-logind[823]: Session 13 logged out. Waiting for processes to exit.
Jan 31 02:02:01 np0005603609 systemd-logind[823]: Removed session 13.
Jan 31 02:02:07 np0005603609 systemd-logind[823]: New session 14 of user zuul.
Jan 31 02:02:07 np0005603609 systemd[1]: Started Session 14 of User zuul.
Jan 31 02:02:08 np0005603609 python3.9[58828]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:09 np0005603609 python3.9[58980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:09 np0005603609 python3.9[59103]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842928.5628293-63-238190443801174/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:10 np0005603609 systemd[1]: session-14.scope: Deactivated successfully.
Jan 31 02:02:10 np0005603609 systemd[1]: session-14.scope: Consumed 1.471s CPU time.
Jan 31 02:02:10 np0005603609 systemd-logind[823]: Session 14 logged out. Waiting for processes to exit.
Jan 31 02:02:10 np0005603609 systemd-logind[823]: Removed session 14.
Jan 31 02:02:15 np0005603609 systemd-logind[823]: New session 15 of user zuul.
Jan 31 02:02:15 np0005603609 systemd[1]: Started Session 15 of User zuul.
Jan 31 02:02:16 np0005603609 python3.9[59281]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:02:17 np0005603609 python3.9[59437]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:18 np0005603609 python3.9[59612]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:19 np0005603609 python3.9[59735]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769842937.959878-84-224735998629791/.source.json _original_basename=.05znet21 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:20 np0005603609 python3.9[59887]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:20 np0005603609 python3.9[60010]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842939.8036075-153-29267995870818/.source _original_basename=.94rca_ll follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:21 np0005603609 python3.9[60162]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:02:22 np0005603609 python3.9[60314]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:23 np0005603609 python3.9[60437]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842942.0564694-225-32647012748292/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:02:23 np0005603609 python3.9[60589]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:24 np0005603609 python3.9[60712]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842943.1702979-225-41309022969038/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:02:24 np0005603609 python3.9[60864]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:25 np0005603609 python3.9[61016]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:25 np0005603609 python3.9[61139]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842944.8891482-336-45229410076836/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:26 np0005603609 python3.9[61291]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:27 np0005603609 python3.9[61414]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842946.17883-381-96692935906902/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:28 np0005603609 python3.9[61566]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:02:28 np0005603609 systemd[1]: Reloading.
Jan 31 02:02:28 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:02:28 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:02:28 np0005603609 systemd[1]: Reloading.
Jan 31 02:02:28 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:02:28 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:02:28 np0005603609 systemd[1]: Starting EDPM Container Shutdown...
Jan 31 02:02:28 np0005603609 systemd[1]: Finished EDPM Container Shutdown.
Jan 31 02:02:29 np0005603609 python3.9[61793]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:30 np0005603609 python3.9[61916]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842948.8947916-450-85523486946563/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:30 np0005603609 python3.9[62068]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:31 np0005603609 python3.9[62191]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842950.2000527-495-67366849235681/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:31 np0005603609 python3.9[62343]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:02:31 np0005603609 systemd[1]: Reloading.
Jan 31 02:02:31 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:02:31 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:02:32 np0005603609 systemd[1]: Reloading.
Jan 31 02:02:32 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:02:32 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:02:32 np0005603609 systemd[1]: Starting Create netns directory...
Jan 31 02:02:32 np0005603609 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 02:02:32 np0005603609 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 02:02:32 np0005603609 systemd[1]: Finished Create netns directory.
Jan 31 02:02:33 np0005603609 python3.9[62568]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:02:33 np0005603609 network[62585]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:02:33 np0005603609 network[62586]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:02:33 np0005603609 network[62587]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:02:36 np0005603609 python3.9[62849]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:02:36 np0005603609 systemd[1]: Reloading.
Jan 31 02:02:36 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:02:36 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:02:36 np0005603609 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 31 02:02:37 np0005603609 iptables.init[62888]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 31 02:02:37 np0005603609 iptables.init[62888]: iptables: Flushing firewall rules: [  OK  ]
Jan 31 02:02:37 np0005603609 systemd[1]: iptables.service: Deactivated successfully.
Jan 31 02:02:37 np0005603609 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 31 02:02:37 np0005603609 irqbalance[818]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 31 02:02:37 np0005603609 irqbalance[818]: IRQ 26 affinity is now unmanaged
Jan 31 02:02:37 np0005603609 python3.9[63085]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:02:39 np0005603609 python3.9[63239]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:02:39 np0005603609 systemd[1]: Reloading.
Jan 31 02:02:39 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:02:39 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:02:39 np0005603609 systemd[1]: Starting Netfilter Tables...
Jan 31 02:02:39 np0005603609 systemd[1]: Finished Netfilter Tables.
Jan 31 02:02:40 np0005603609 python3.9[63431]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:02:41 np0005603609 python3.9[63584]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:41 np0005603609 python3.9[63709]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842960.7149487-702-56347119765504/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:42 np0005603609 python3.9[63862]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:02:42 np0005603609 systemd[1]: Reloading OpenSSH server daemon...
Jan 31 02:02:42 np0005603609 systemd[1]: Reloaded OpenSSH server daemon.
Jan 31 02:02:43 np0005603609 python3.9[64018]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:43 np0005603609 python3.9[64170]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:44 np0005603609 python3.9[64293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842963.3714395-796-115097561339188/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:45 np0005603609 python3.9[64445]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 02:02:45 np0005603609 systemd[1]: Starting Time & Date Service...
Jan 31 02:02:45 np0005603609 systemd[1]: Started Time & Date Service.
Jan 31 02:02:46 np0005603609 python3.9[64601]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:46 np0005603609 python3.9[64753]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:47 np0005603609 python3.9[64876]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842966.4334502-900-75342008291676/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:48 np0005603609 python3.9[65028]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:48 np0005603609 python3.9[65151]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842967.6259406-945-203796536180646/.source.yaml _original_basename=.z63xft_k follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:49 np0005603609 python3.9[65303]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:49 np0005603609 python3.9[65426]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842968.87949-991-30060247223787/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:50 np0005603609 python3.9[65578]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:02:51 np0005603609 python3.9[65731]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:02:51 np0005603609 python3[65884]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 02:02:52 np0005603609 python3.9[66036]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:53 np0005603609 python3.9[66159]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842972.2231472-1107-99685128345495/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:54 np0005603609 python3.9[66311]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:54 np0005603609 python3.9[66434]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842973.6548636-1152-178240585482698/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:55 np0005603609 python3.9[66586]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:55 np0005603609 python3.9[66709]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842974.862169-1197-147560926900480/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:56 np0005603609 python3.9[66861]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:57 np0005603609 python3.9[66984]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842976.0130928-1242-45992380487767/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:57 np0005603609 python3.9[67136]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:02:58 np0005603609 python3.9[67259]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842977.2083478-1287-9353041811512/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:58 np0005603609 python3.9[67411]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:02:59 np0005603609 python3.9[67563]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:03:00 np0005603609 python3.9[67722]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
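The `block=` parameter above appears on a single line because rsyslog escapes embedded control characters octally: `#012` is a line feed (octal 012). A small helper (illustrative only, not part of the deployment) to decode such fields when reading these logs:

```python
import re

def decode_octal_escapes(field: str) -> str:
    """Decode rsyslog-style #NNN octal control-character escapes
    (e.g. #012 -> newline, #011 -> tab) in a logged field."""
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), field)

# The logged block= value, decoded, is the managed include list
# written into /etc/sysconfig/nftables.conf:
block = ('include "/etc/nftables/iptables.nft"#012'
         'include "/etc/nftables/edpm-chains.nft"#012'
         'include "/etc/nftables/edpm-rules.nft"#012'
         'include "/etc/nftables/edpm-jumps.nft"#012')
print(decode_octal_escapes(block))
```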
Jan 31 02:03:01 np0005603609 python3.9[67875]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:01 np0005603609 python3.9[68027]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:02 np0005603609 python3.9[68179]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 02:03:03 np0005603609 python3.9[68332]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
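The two `ansible.posix.mount` tasks above run with `boot=True`, so besides mounting immediately they persist the hugetlbfs mounts in fstab. Reconstructed from the logged parameters (`src=none`, `fstype=hugetlbfs`, `dump=0`, `passno=0`), the resulting entries would look like the following; this is an inference from the module arguments, not a copy of the host's fstab:

```
none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
```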
Jan 31 02:03:03 np0005603609 systemd[1]: session-15.scope: Deactivated successfully.
Jan 31 02:03:03 np0005603609 systemd[1]: session-15.scope: Consumed 31.226s CPU time.
Jan 31 02:03:03 np0005603609 systemd-logind[823]: Session 15 logged out. Waiting for processes to exit.
Jan 31 02:03:03 np0005603609 systemd-logind[823]: Removed session 15.
Jan 31 02:03:08 np0005603609 systemd-logind[823]: New session 16 of user zuul.
Jan 31 02:03:08 np0005603609 systemd[1]: Started Session 16 of User zuul.
Jan 31 02:03:09 np0005603609 python3.9[68513]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 02:03:10 np0005603609 python3.9[68665]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:03:11 np0005603609 python3.9[68817]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:03:12 np0005603609 python3.9[68969]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlvTGYGifalEmozttYlZ79wRHZPo6p3FfxUn+H8fCt//gLYJvHB9ygqCWO8F06xZhwaSJlU3R5k49AFtcq6rCaf4D9FuDYpYU5B1qGxpqY2S/6r/PmC9TmJJe6DJfuIf95os5YrDLR82BbT8dLFvu76PfZiMt0+kvm9gj1Q6XCUTgIsIvY9pyPySu0V4JDeT8EBgROR7WA5Fev80wO2/RlFXH9xVIupO8rswjwWPuIXoua1w44d35HWWHBdMAFXeZZMopWHWwY+fIlyz4B8y/TWDow7KZxG9GHKZ04e73/RA972Gub2LC0SlBFsBqaSnub8ooOcA3jZ3R2bjHAVkZvLgCK9UFSgwvvfyOWxtkJgj5KalAy9vZeGQ02ndAPNkQ6B1GnnRHaR5yGPG78q9Nd8RDmzhTr1iwnYLHhup04nAUnUDw5ubZFyF9bW1KQWvDv+4cfFeT8mhARMCxu7Imzne5FDq9OZAA9VLfnA26YFT0MpGjGl332cx20iz3Z4IU=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBM/OyT9HQGjLM76vSXpTFer+lkr//u0v4BsUk+Rcai#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPttGgqMF5HnqNXeajmhgAAhQFj1yReXfFmUGT6cv24PcfDX+VeASpBgDGWJKvbu1EgrSPUu2R8sDzajVI5+ETk=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwK9tbwI1sVhVFn3RGaEAgpi2689y9VdIyBp+cw+RWFupGnK46xr4HB/N67Aw+A+3FJtEl1Zq1cnt3Gy8PYb6XnLd4xH/NFtUI3ukhekrtKvSmysEjpRGIamjt1BkH4Lxh79PNkk13AVMQN92Wo271/fHEvcV7HaC0Q5VypZMd+77ZvI9NuEG1nofpvI8+32YECZBLpoC5KQK7EibqD9MUR2OmapGZhV+5B5jdb0ZvNb966Q0kwAGV8E+xgHSVnh5eCWC8oxgWkycmQd2co9E79fiIHEioABE9aDUGKw0+nsZ7HrvjG/ENeg5C6fjdJE4MsPq3FNHAiTCQPZ7QZgv/CSudt7WYyLTztGL9ksWqaTUeDocKVKPlJlzGrn/TXgMoix8+qbFzxVixIROb2nqElyEy6mo0Xxt2b4aisil9ZQhWVMQY0hGX5vtVv0E6+svzjSTfkyZolbjyRsolJF4pH7+klLEmlWGDlgSoCDZeK/XEi7xq3yaCymuWtX2fAX8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICzJ5+1VSPloOqHhejNen2lHjfV4Hvj7nbRbNJjS6dtd#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBECc0+1u2G3haTNDUnwK7F3+bqZqLNjR6ayEsOJcH6U6RkqhSd2eAlivxlw9dfPuir2TFrYzGTtSXuJ8iauDAtQ=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDir5Ux7IUuKTsqwrpZRFpieFX7Hi9Bsaw7N3jCiMd+vuHlEKHLX54HbyTIVnox1XbNjeYynLRRz7VKBfder8IEerGmST/uWuX5FOdve7vDdY++9J6qYkj1Gf6v6BGp8BT97bbPdvaQdLP6YS2jFEfOz4s0oJkgr8dsHjPU70e1P0b7vKxqo3z/E/XCe2BUGEv5j/z9GTl2oQ9/KoTvahfr6qfonnQK9E0gsJKDB9S1UPNFkJUxvVPfKfEao207dmT8EmQL2ZdwDwecA2Mg0SneGaNmEFWDW4CWQjdbHuikc3vsZ1do7kzq2+tz+WLEXqdb4Ig4S0OfV/MAcaC/C1DRfZHxZN3vSayrm99nFc8oPaLnRtT8Jz1dVonMOpwLm3xMm6nAeGNTzM0ImTrJTusVmKNRQI3x6VPiEcWdKNvN5sVcrN9uyINDMuzpXIxc1LmpmR/338EfP4HYhfsTqdM0worzzewvh2XhAVxQAiNYRRUbLvR4/EE5SjXTjSA4ID0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ8oYZpZvdB1n917+wvTxetgtueloCox+7yBQBW8LHZX#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCEH62xmPSqzu7EFth8e8ITel7fLvoU9FKlxQN/eSXzUuR/7sZGPhcgLzjrJmEcn4Za0K2VNu6+z559d/AEJY2U=#012 create=True mode=0644 path=/tmp/ansible.unkp16fm state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBECc0+1u2G3haTNDUnwK7F3+bqZqLNjR6ayEsOJcH6U6RkqhSd2eAlivxlw9dfPuir2TFrYzGTtSXuJ8iauDAtQ=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDir5Ux7IUuKTsqwrpZRFpieFX7Hi9Bsaw7N3jCiMd+vuHlEKHLX54HbyTIVnox1XbNjeYynLRRz7VKBfder8IEerGmST/uWuX5FOdve7vDdY++9J6qYkj1Gf6v6BGp8BT97bbPdvaQdLP6YS2jFEfOz4s0oJkgr8dsHjPU70e1P0b7vKxqo3z/E/XCe2BUGEv5j/z9GTl2oQ9/KoTvahfr6qfonnQK9E0gsJKDB9S1UPNFkJUxvVPfKfEao207dmT8EmQL2ZdwDwecA2Mg0SneGaNmEFWDW4CWQjdbHuikc3vsZ1do7kzq2+tz+WLEXqdb4Ig4S0OfV/MAcaC/C1DRfZHxZN3vSayrm99nFc8oPaLnRtT8Jz1dVonMOpwLm3xMm6nAeGNTzM0ImTrJTusVmKNRQI3x6VPiEcWdKNvN5sVcrN9uyINDMuzpXIxc1LmpmR/338EfP4HYhfsTqdM0worzzewvh2XhAVxQAiNYRRUbLvR4/EE5SjXTjSA4ID0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ8oYZpZvdB1n917+wvTxetgtueloCox+7yBQBW8LHZX#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCEH62xmPSqzu7EFth8e8ITel7fLvoU9FKlxQN/eSXzUuR/7sZGPhcgLzjrJmEcn4Za0K2VNu6+z559d/AEJY2U=#012 create=True mode=0644 path=/tmp/ansible.unkp16fm state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:12 np0005603609 python3.9[69121]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.unkp16fm' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:03:13 np0005603609 python3.9[69275]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.unkp16fm state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:14 np0005603609 systemd[1]: session-16.scope: Deactivated successfully.
Jan 31 02:03:14 np0005603609 systemd[1]: session-16.scope: Consumed 3.040s CPU time.
Jan 31 02:03:14 np0005603609 systemd-logind[823]: Session 16 logged out. Waiting for processes to exit.
Jan 31 02:03:14 np0005603609 systemd-logind[823]: Removed session 16.
Jan 31 02:03:15 np0005603609 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 02:03:19 np0005603609 systemd-logind[823]: New session 17 of user zuul.
Jan 31 02:03:19 np0005603609 systemd[1]: Started Session 17 of User zuul.
Jan 31 02:03:20 np0005603609 python3.9[69455]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:03:21 np0005603609 python3.9[69611]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 02:03:22 np0005603609 python3.9[69765]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:03:23 np0005603609 python3.9[69918]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:03:24 np0005603609 python3.9[70071]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:03:24 np0005603609 python3.9[70225]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:03:25 np0005603609 python3.9[70380]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:26 np0005603609 systemd-logind[823]: Session 17 logged out. Waiting for processes to exit.
Jan 31 02:03:26 np0005603609 systemd[1]: session-17.scope: Deactivated successfully.
Jan 31 02:03:26 np0005603609 systemd[1]: session-17.scope: Consumed 4.061s CPU time.
Jan 31 02:03:26 np0005603609 systemd-logind[823]: Removed session 17.
Jan 31 02:03:31 np0005603609 systemd-logind[823]: New session 18 of user zuul.
Jan 31 02:03:31 np0005603609 systemd[1]: Started Session 18 of User zuul.
Jan 31 02:03:32 np0005603609 python3.9[70558]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:03:33 np0005603609 python3.9[70714]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:03:34 np0005603609 python3.9[70798]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 02:03:36 np0005603609 python3.9[70949]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:03:37 np0005603609 python3.9[71100]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 02:03:38 np0005603609 python3.9[71250]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:03:38 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:03:39 np0005603609 python3.9[71401]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:03:39 np0005603609 systemd-logind[823]: Session 18 logged out. Waiting for processes to exit.
Jan 31 02:03:39 np0005603609 systemd[1]: session-18.scope: Deactivated successfully.
Jan 31 02:03:39 np0005603609 systemd[1]: session-18.scope: Consumed 5.280s CPU time.
Jan 31 02:03:39 np0005603609 systemd-logind[823]: Removed session 18.
Jan 31 02:03:48 np0005603609 systemd-logind[823]: New session 19 of user zuul.
Jan 31 02:03:48 np0005603609 systemd[1]: Started Session 19 of User zuul.
Jan 31 02:03:53 np0005603609 python3[72167]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:03:55 np0005603609 python3[72262]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 02:03:57 np0005603609 python3[72289]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 02:03:57 np0005603609 python3[72315]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:03:57 np0005603609 kernel: loop: module loaded
Jan 31 02:03:57 np0005603609 kernel: loop3: detected capacity change from 0 to 14680064
Jan 31 02:03:57 np0005603609 python3[72350]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:03:57 np0005603609 lvm[72353]: PV /dev/loop3 not used.
Jan 31 02:03:58 np0005603609 lvm[72355]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 02:03:58 np0005603609 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 31 02:03:58 np0005603609 lvm[72361]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 31 02:03:58 np0005603609 lvm[72365]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 02:03:58 np0005603609 lvm[72365]: VG ceph_vg0 finished
Jan 31 02:03:58 np0005603609 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 31 02:03:58 np0005603609 python3[72443]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:03:58 np0005603609 python3[72516]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843038.346838-37138-959740383278/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:03:59 np0005603609 python3[72566]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:04:00 np0005603609 systemd[1]: Reloading.
Jan 31 02:04:00 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:04:00 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:04:00 np0005603609 systemd[1]: Starting Ceph OSD losetup...
Jan 31 02:04:00 np0005603609 bash[72607]: /dev/loop3: [64513]:4194937 (/var/lib/ceph-osd-0.img)
Jan 31 02:04:00 np0005603609 systemd[1]: Finished Ceph OSD losetup.
Jan 31 02:04:00 np0005603609 lvm[72608]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 02:04:00 np0005603609 lvm[72608]: VG ceph_vg0 finished
Jan 31 02:04:02 np0005603609 python3[72632]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:04:11 np0005603609 chronyd[58646]: Selected source 207.34.48.31 (pool.ntp.org)
Jan 31 02:05:48 np0005603609 systemd-logind[823]: New session 20 of user ceph-admin.
Jan 31 02:05:48 np0005603609 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 02:05:48 np0005603609 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 02:05:48 np0005603609 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 02:05:48 np0005603609 systemd[1]: Starting User Manager for UID 42477...
Jan 31 02:05:48 np0005603609 systemd[72682]: Queued start job for default target Main User Target.
Jan 31 02:05:48 np0005603609 systemd[72682]: Created slice User Application Slice.
Jan 31 02:05:48 np0005603609 systemd[72682]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 02:05:48 np0005603609 systemd[72682]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 02:05:48 np0005603609 systemd[72682]: Reached target Paths.
Jan 31 02:05:48 np0005603609 systemd[72682]: Reached target Timers.
Jan 31 02:05:48 np0005603609 systemd-logind[823]: New session 22 of user ceph-admin.
Jan 31 02:05:48 np0005603609 systemd[72682]: Starting D-Bus User Message Bus Socket...
Jan 31 02:05:48 np0005603609 systemd[72682]: Starting Create User's Volatile Files and Directories...
Jan 31 02:05:48 np0005603609 systemd[72682]: Finished Create User's Volatile Files and Directories.
Jan 31 02:05:48 np0005603609 systemd[72682]: Listening on D-Bus User Message Bus Socket.
Jan 31 02:05:48 np0005603609 systemd[72682]: Reached target Sockets.
Jan 31 02:05:48 np0005603609 systemd[72682]: Reached target Basic System.
Jan 31 02:05:48 np0005603609 systemd[72682]: Reached target Main User Target.
Jan 31 02:05:48 np0005603609 systemd[72682]: Startup finished in 118ms.
Jan 31 02:05:48 np0005603609 systemd[1]: Started User Manager for UID 42477.
Jan 31 02:05:48 np0005603609 systemd[1]: Started Session 20 of User ceph-admin.
Jan 31 02:05:48 np0005603609 systemd[1]: Started Session 22 of User ceph-admin.
Jan 31 02:05:49 np0005603609 systemd-logind[823]: New session 23 of user ceph-admin.
Jan 31 02:05:49 np0005603609 systemd[1]: Started Session 23 of User ceph-admin.
Jan 31 02:05:49 np0005603609 systemd-logind[823]: New session 24 of user ceph-admin.
Jan 31 02:05:49 np0005603609 systemd[1]: Started Session 24 of User ceph-admin.
Jan 31 02:05:50 np0005603609 systemd-logind[823]: New session 25 of user ceph-admin.
Jan 31 02:05:50 np0005603609 systemd[1]: Started Session 25 of User ceph-admin.
Jan 31 02:05:50 np0005603609 systemd-logind[823]: New session 26 of user ceph-admin.
Jan 31 02:05:50 np0005603609 systemd[1]: Started Session 26 of User ceph-admin.
Jan 31 02:05:50 np0005603609 systemd-logind[823]: New session 27 of user ceph-admin.
Jan 31 02:05:50 np0005603609 systemd[1]: Started Session 27 of User ceph-admin.
Jan 31 02:05:51 np0005603609 systemd-logind[823]: New session 28 of user ceph-admin.
Jan 31 02:05:51 np0005603609 systemd[1]: Started Session 28 of User ceph-admin.
Jan 31 02:05:51 np0005603609 systemd-logind[823]: New session 29 of user ceph-admin.
Jan 31 02:05:51 np0005603609 systemd[1]: Started Session 29 of User ceph-admin.
Jan 31 02:05:51 np0005603609 systemd-logind[823]: New session 30 of user ceph-admin.
Jan 31 02:05:51 np0005603609 systemd[1]: Started Session 30 of User ceph-admin.
Jan 31 02:05:52 np0005603609 systemd-logind[823]: New session 31 of user ceph-admin.
Jan 31 02:05:52 np0005603609 systemd[1]: Started Session 31 of User ceph-admin.
Jan 31 02:05:52 np0005603609 systemd-logind[823]: New session 32 of user ceph-admin.
Jan 31 02:05:52 np0005603609 systemd[1]: Started Session 32 of User ceph-admin.
Jan 31 02:05:53 np0005603609 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:05:53 np0005603609 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:05:54 np0005603609 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:05:54 np0005603609 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:05:54 np0005603609 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73655 (sysctl)
Jan 31 02:05:54 np0005603609 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 31 02:05:54 np0005603609 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 31 02:05:55 np0005603609 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:05:55 np0005603609 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:05:55 np0005603609 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:05:57 np0005603609 systemd[1]: var-lib-containers-storage-overlay-compat1329151782-lower\x2dmapped.mount: Deactivated successfully.
Jan 31 02:06:09 np0005603609 podman[73930]: 2026-01-31 07:06:09.87713763 +0000 UTC m=+14.238779409 container create 3a1cfb4b8a31e957a18dad41b8d8177268e77e217e43b38e7a69407c4482e2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 02:06:09 np0005603609 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck2192604017-merged.mount: Deactivated successfully.
Jan 31 02:06:09 np0005603609 podman[73930]: 2026-01-31 07:06:09.862192851 +0000 UTC m=+14.223834640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:09 np0005603609 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 31 02:06:09 np0005603609 systemd[1]: Started libpod-conmon-3a1cfb4b8a31e957a18dad41b8d8177268e77e217e43b38e7a69407c4482e2c3.scope.
Jan 31 02:06:09 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:06:09 np0005603609 podman[73930]: 2026-01-31 07:06:09.972634652 +0000 UTC m=+14.334276461 container init 3a1cfb4b8a31e957a18dad41b8d8177268e77e217e43b38e7a69407c4482e2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 02:06:09 np0005603609 podman[73930]: 2026-01-31 07:06:09.978992378 +0000 UTC m=+14.340634137 container start 3a1cfb4b8a31e957a18dad41b8d8177268e77e217e43b38e7a69407c4482e2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wilbur, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:06:09 np0005603609 podman[73930]: 2026-01-31 07:06:09.982450067 +0000 UTC m=+14.344091886 container attach 3a1cfb4b8a31e957a18dad41b8d8177268e77e217e43b38e7a69407c4482e2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wilbur, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:06:09 np0005603609 confident_wilbur[73991]: 167 167
Jan 31 02:06:09 np0005603609 systemd[1]: libpod-3a1cfb4b8a31e957a18dad41b8d8177268e77e217e43b38e7a69407c4482e2c3.scope: Deactivated successfully.
Jan 31 02:06:09 np0005603609 conmon[73991]: conmon 3a1cfb4b8a31e957a18d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a1cfb4b8a31e957a18dad41b8d8177268e77e217e43b38e7a69407c4482e2c3.scope/container/memory.events
Jan 31 02:06:10 np0005603609 podman[73996]: 2026-01-31 07:06:10.027495577 +0000 UTC m=+0.027165876 container died 3a1cfb4b8a31e957a18dad41b8d8177268e77e217e43b38e7a69407c4482e2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wilbur, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:06:10 np0005603609 systemd[1]: var-lib-containers-storage-overlay-fa264da38f40eaa54b1218ebbb182bf0d084f75df9cc51168d184add9b6c278f-merged.mount: Deactivated successfully.
Jan 31 02:06:10 np0005603609 podman[73996]: 2026-01-31 07:06:10.058804252 +0000 UTC m=+0.058474571 container remove 3a1cfb4b8a31e957a18dad41b8d8177268e77e217e43b38e7a69407c4482e2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:06:10 np0005603609 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:06:10 np0005603609 systemd[1]: libpod-conmon-3a1cfb4b8a31e957a18dad41b8d8177268e77e217e43b38e7a69407c4482e2c3.scope: Deactivated successfully.
Jan 31 02:06:10 np0005603609 podman[74018]: 2026-01-31 07:06:10.198478861 +0000 UTC m=+0.048306196 container create e194c5fcf2643afaa49c7ebeaaf43902687bb30881621b459da5e5468ad11b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_haibt, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 02:06:10 np0005603609 systemd[1]: Started libpod-conmon-e194c5fcf2643afaa49c7ebeaaf43902687bb30881621b459da5e5468ad11b25.scope.
Jan 31 02:06:10 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:06:10 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b237662e81503b7ddb8edf61658aa1f7523c344b697c3202f551aa8b041942/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:10 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b237662e81503b7ddb8edf61658aa1f7523c344b697c3202f551aa8b041942/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:10 np0005603609 podman[74018]: 2026-01-31 07:06:10.183074041 +0000 UTC m=+0.032901376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:10 np0005603609 podman[74018]: 2026-01-31 07:06:10.278222354 +0000 UTC m=+0.128049719 container init e194c5fcf2643afaa49c7ebeaaf43902687bb30881621b459da5e5468ad11b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:06:10 np0005603609 podman[74018]: 2026-01-31 07:06:10.28810625 +0000 UTC m=+0.137933585 container start e194c5fcf2643afaa49c7ebeaaf43902687bb30881621b459da5e5468ad11b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 02:06:10 np0005603609 podman[74018]: 2026-01-31 07:06:10.291674954 +0000 UTC m=+0.141502529 container attach e194c5fcf2643afaa49c7ebeaaf43902687bb30881621b459da5e5468ad11b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]: [
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:    {
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:        "available": false,
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:        "ceph_device": false,
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:        "lsm_data": {},
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:        "lvs": [],
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:        "path": "/dev/sr0",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:        "rejected_reasons": [
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "Has a FileSystem",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "Insufficient space (<5GB)"
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:        ],
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:        "sys_api": {
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "actuators": null,
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "device_nodes": "sr0",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "devname": "sr0",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "human_readable_size": "482.00 KB",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "id_bus": "ata",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "model": "QEMU DVD-ROM",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "nr_requests": "2",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "parent": "/dev/sr0",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "partitions": {},
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "path": "/dev/sr0",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "removable": "1",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "rev": "2.5+",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "ro": "0",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "rotational": "1",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "sas_address": "",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "sas_device_handle": "",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "scheduler_mode": "mq-deadline",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "sectors": 0,
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "sectorsize": "2048",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "size": 493568.0,
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "support_discard": "2048",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "type": "disk",
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:            "vendor": "QEMU"
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:        }
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]:    }
Jan 31 02:06:11 np0005603609 admiring_haibt[74035]: ]
Jan 31 02:06:11 np0005603609 systemd[1]: libpod-e194c5fcf2643afaa49c7ebeaaf43902687bb30881621b459da5e5468ad11b25.scope: Deactivated successfully.
Jan 31 02:06:11 np0005603609 systemd[1]: libpod-e194c5fcf2643afaa49c7ebeaaf43902687bb30881621b459da5e5468ad11b25.scope: Consumed 1.215s CPU time.
Jan 31 02:06:11 np0005603609 podman[74981]: 2026-01-31 07:06:11.545617562 +0000 UTC m=+0.025348559 container died e194c5fcf2643afaa49c7ebeaaf43902687bb30881621b459da5e5468ad11b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 02:06:11 np0005603609 systemd[1]: var-lib-containers-storage-overlay-d4b237662e81503b7ddb8edf61658aa1f7523c344b697c3202f551aa8b041942-merged.mount: Deactivated successfully.
Jan 31 02:06:11 np0005603609 podman[74981]: 2026-01-31 07:06:11.59821188 +0000 UTC m=+0.077942867 container remove e194c5fcf2643afaa49c7ebeaaf43902687bb30881621b459da5e5468ad11b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_haibt, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:06:11 np0005603609 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:06:11 np0005603609 systemd[1]: libpod-conmon-e194c5fcf2643afaa49c7ebeaaf43902687bb30881621b459da5e5468ad11b25.scope: Deactivated successfully.
Jan 31 02:06:15 np0005603609 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:06:15 np0005603609 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:06:15 np0005603609 podman[76828]: 2026-01-31 07:06:15.802392274 +0000 UTC m=+0.040443492 container create 3682c1c0623163245623adf23adf124fe6275f490edfc62d15d7dbd9884dba1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:06:15 np0005603609 systemd[1]: Started libpod-conmon-3682c1c0623163245623adf23adf124fe6275f490edfc62d15d7dbd9884dba1d.scope.
Jan 31 02:06:15 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:06:15 np0005603609 podman[76828]: 2026-01-31 07:06:15.869851187 +0000 UTC m=+0.107902435 container init 3682c1c0623163245623adf23adf124fe6275f490edfc62d15d7dbd9884dba1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:06:15 np0005603609 podman[76828]: 2026-01-31 07:06:15.875600027 +0000 UTC m=+0.113651255 container start 3682c1c0623163245623adf23adf124fe6275f490edfc62d15d7dbd9884dba1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 02:06:15 np0005603609 determined_rosalind[76844]: 167 167
Jan 31 02:06:15 np0005603609 systemd[1]: libpod-3682c1c0623163245623adf23adf124fe6275f490edfc62d15d7dbd9884dba1d.scope: Deactivated successfully.
Jan 31 02:06:15 np0005603609 podman[76828]: 2026-01-31 07:06:15.787391404 +0000 UTC m=+0.025442642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:15 np0005603609 podman[76828]: 2026-01-31 07:06:15.888156673 +0000 UTC m=+0.126207901 container attach 3682c1c0623163245623adf23adf124fe6275f490edfc62d15d7dbd9884dba1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rosalind, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:06:15 np0005603609 podman[76828]: 2026-01-31 07:06:15.888666586 +0000 UTC m=+0.126717824 container died 3682c1c0623163245623adf23adf124fe6275f490edfc62d15d7dbd9884dba1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 02:06:15 np0005603609 podman[76828]: 2026-01-31 07:06:15.926082238 +0000 UTC m=+0.164133456 container remove 3682c1c0623163245623adf23adf124fe6275f490edfc62d15d7dbd9884dba1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rosalind, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:06:15 np0005603609 systemd[1]: libpod-conmon-3682c1c0623163245623adf23adf124fe6275f490edfc62d15d7dbd9884dba1d.scope: Deactivated successfully.
Jan 31 02:06:15 np0005603609 systemd[1]: Reloading.
Jan 31 02:06:16 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:06:16 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:06:16 np0005603609 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:06:16 np0005603609 systemd[1]: Reloading.
Jan 31 02:06:16 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:06:16 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:06:16 np0005603609 systemd[1]: Reached target All Ceph clusters and services.
Jan 31 02:06:16 np0005603609 systemd[1]: Reloading.
Jan 31 02:06:16 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:06:16 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:06:16 np0005603609 systemd[1]: Reached target Ceph cluster f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 02:06:16 np0005603609 systemd[1]: Reloading.
Jan 31 02:06:16 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:06:16 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:06:16 np0005603609 systemd[1]: Reloading.
Jan 31 02:06:16 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:06:16 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:06:17 np0005603609 systemd[1]: Created slice Slice /system/ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 02:06:17 np0005603609 systemd[1]: Reached target System Time Set.
Jan 31 02:06:17 np0005603609 systemd[1]: Reached target System Time Synchronized.
Jan 31 02:06:17 np0005603609 systemd[1]: Starting Ceph crash.compute-1 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 02:06:17 np0005603609 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:06:17 np0005603609 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:06:17 np0005603609 podman[77103]: 2026-01-31 07:06:17.211819884 +0000 UTC m=+0.037090935 container create 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:06:17 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aafa76cbbfc639cad89af1a44dee0120188aab64461ec69d0b945ebe2ff544c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:17 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aafa76cbbfc639cad89af1a44dee0120188aab64461ec69d0b945ebe2ff544c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:17 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aafa76cbbfc639cad89af1a44dee0120188aab64461ec69d0b945ebe2ff544c8/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:17 np0005603609 podman[77103]: 2026-01-31 07:06:17.259264587 +0000 UTC m=+0.084535678 container init 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:06:17 np0005603609 podman[77103]: 2026-01-31 07:06:17.264097393 +0000 UTC m=+0.089368454 container start 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 02:06:17 np0005603609 bash[77103]: 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b
Jan 31 02:06:17 np0005603609 podman[77103]: 2026-01-31 07:06:17.195755027 +0000 UTC m=+0.021026108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:17 np0005603609 systemd[1]: Started Ceph crash.compute-1 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 02:06:17 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1[77118]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 31 02:06:17 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1[77118]: 2026-01-31T07:06:17.643+0000 7fe18c748640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 02:06:17 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1[77118]: 2026-01-31T07:06:17.643+0000 7fe18c748640 -1 AuthRegistry(0x7fe184067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 02:06:17 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1[77118]: 2026-01-31T07:06:17.644+0000 7fe18c748640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 02:06:17 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1[77118]: 2026-01-31T07:06:17.644+0000 7fe18c748640 -1 AuthRegistry(0x7fe18c747000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 02:06:17 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1[77118]: 2026-01-31T07:06:17.646+0000 7fe18a4bd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 31 02:06:17 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1[77118]: 2026-01-31T07:06:17.646+0000 7fe18c748640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 31 02:06:17 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1[77118]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 31 02:06:17 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1[77118]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 31 02:06:17 np0005603609 podman[77275]: 2026-01-31 07:06:17.879365333 +0000 UTC m=+0.039989920 container create 2df09ac1e304b6c111b86f9eb3640a3494dc587420a83a9b22c6a275f122d6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chebyshev, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:06:17 np0005603609 systemd[1]: Started libpod-conmon-2df09ac1e304b6c111b86f9eb3640a3494dc587420a83a9b22c6a275f122d6de.scope.
Jan 31 02:06:17 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:06:17 np0005603609 podman[77275]: 2026-01-31 07:06:17.861108708 +0000 UTC m=+0.021733345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:17 np0005603609 podman[77275]: 2026-01-31 07:06:17.963882799 +0000 UTC m=+0.124507386 container init 2df09ac1e304b6c111b86f9eb3640a3494dc587420a83a9b22c6a275f122d6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 02:06:17 np0005603609 podman[77275]: 2026-01-31 07:06:17.969966758 +0000 UTC m=+0.130591345 container start 2df09ac1e304b6c111b86f9eb3640a3494dc587420a83a9b22c6a275f122d6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 02:06:17 np0005603609 podman[77275]: 2026-01-31 07:06:17.972627107 +0000 UTC m=+0.133251714 container attach 2df09ac1e304b6c111b86f9eb3640a3494dc587420a83a9b22c6a275f122d6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:06:17 np0005603609 cranky_chebyshev[77291]: 167 167
Jan 31 02:06:17 np0005603609 systemd[1]: libpod-2df09ac1e304b6c111b86f9eb3640a3494dc587420a83a9b22c6a275f122d6de.scope: Deactivated successfully.
Jan 31 02:06:17 np0005603609 podman[77275]: 2026-01-31 07:06:17.975881471 +0000 UTC m=+0.136506058 container died 2df09ac1e304b6c111b86f9eb3640a3494dc587420a83a9b22c6a275f122d6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 02:06:17 np0005603609 systemd[1]: var-lib-containers-storage-overlay-0d88895de7c2a32f60a35b5f3c3392987358ac82dbe16a5346177ff07846e6fa-merged.mount: Deactivated successfully.
Jan 31 02:06:18 np0005603609 podman[77275]: 2026-01-31 07:06:18.010290866 +0000 UTC m=+0.170915453 container remove 2df09ac1e304b6c111b86f9eb3640a3494dc587420a83a9b22c6a275f122d6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:06:18 np0005603609 systemd[1]: libpod-conmon-2df09ac1e304b6c111b86f9eb3640a3494dc587420a83a9b22c6a275f122d6de.scope: Deactivated successfully.
Jan 31 02:06:18 np0005603609 podman[77313]: 2026-01-31 07:06:18.143941479 +0000 UTC m=+0.036357326 container create 77489cb4ae91dbf05d27a0fab991226ef3cdeda06c553e6d575213cf8e3ae1c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 02:06:18 np0005603609 systemd[1]: Started libpod-conmon-77489cb4ae91dbf05d27a0fab991226ef3cdeda06c553e6d575213cf8e3ae1c1.scope.
Jan 31 02:06:18 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:06:18 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b55e334721dc6e268c1a130eb1f61aef52883177f780a7cb1a983c340b656ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:18 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b55e334721dc6e268c1a130eb1f61aef52883177f780a7cb1a983c340b656ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:18 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b55e334721dc6e268c1a130eb1f61aef52883177f780a7cb1a983c340b656ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:18 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b55e334721dc6e268c1a130eb1f61aef52883177f780a7cb1a983c340b656ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:18 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b55e334721dc6e268c1a130eb1f61aef52883177f780a7cb1a983c340b656ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:18 np0005603609 podman[77313]: 2026-01-31 07:06:18.128660152 +0000 UTC m=+0.021076019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:18 np0005603609 podman[77313]: 2026-01-31 07:06:18.24634072 +0000 UTC m=+0.138756597 container init 77489cb4ae91dbf05d27a0fab991226ef3cdeda06c553e6d575213cf8e3ae1c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 02:06:18 np0005603609 podman[77313]: 2026-01-31 07:06:18.253935818 +0000 UTC m=+0.146351665 container start 77489cb4ae91dbf05d27a0fab991226ef3cdeda06c553e6d575213cf8e3ae1c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:06:18 np0005603609 podman[77313]: 2026-01-31 07:06:18.258388954 +0000 UTC m=+0.150804821 container attach 77489cb4ae91dbf05d27a0fab991226ef3cdeda06c553e6d575213cf8e3ae1c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:06:19 np0005603609 quirky_ride[77330]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:06:19 np0005603609 quirky_ride[77330]: --> relative data size: 1.0
Jan 31 02:06:19 np0005603609 quirky_ride[77330]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 02:06:19 np0005603609 quirky_ride[77330]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ea64e94f-e08a-40f8-8a79-f0fb7a401afe
Jan 31 02:06:19 np0005603609 quirky_ride[77330]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 02:06:19 np0005603609 quirky_ride[77330]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 31 02:06:19 np0005603609 lvm[77378]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 02:06:19 np0005603609 lvm[77378]: VG ceph_vg0 finished
Jan 31 02:06:19 np0005603609 quirky_ride[77330]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 31 02:06:19 np0005603609 quirky_ride[77330]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 02:06:19 np0005603609 quirky_ride[77330]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 31 02:06:19 np0005603609 quirky_ride[77330]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 31 02:06:20 np0005603609 quirky_ride[77330]: stderr: got monmap epoch 1
Jan 31 02:06:20 np0005603609 quirky_ride[77330]: --> Creating keyring file for osd.1
Jan 31 02:06:20 np0005603609 quirky_ride[77330]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 31 02:06:20 np0005603609 quirky_ride[77330]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 31 02:06:20 np0005603609 quirky_ride[77330]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid ea64e94f-e08a-40f8-8a79-f0fb7a401afe --setuser ceph --setgroup ceph
Jan 31 02:06:22 np0005603609 quirky_ride[77330]: stderr: 2026-01-31T07:06:20.184+0000 7f48ebc59740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 02:06:22 np0005603609 quirky_ride[77330]: stderr: 2026-01-31T07:06:20.184+0000 7f48ebc59740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 02:06:22 np0005603609 quirky_ride[77330]: stderr: 2026-01-31T07:06:20.184+0000 7f48ebc59740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 02:06:22 np0005603609 quirky_ride[77330]: stderr: 2026-01-31T07:06:20.184+0000 7f48ebc59740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 31 02:06:22 np0005603609 quirky_ride[77330]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 31 02:06:22 np0005603609 quirky_ride[77330]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 02:06:22 np0005603609 quirky_ride[77330]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 02:06:22 np0005603609 quirky_ride[77330]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 31 02:06:22 np0005603609 quirky_ride[77330]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 02:06:22 np0005603609 quirky_ride[77330]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 02:06:22 np0005603609 quirky_ride[77330]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 02:06:22 np0005603609 quirky_ride[77330]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 31 02:06:22 np0005603609 quirky_ride[77330]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 31 02:06:22 np0005603609 systemd[1]: libpod-77489cb4ae91dbf05d27a0fab991226ef3cdeda06c553e6d575213cf8e3ae1c1.scope: Deactivated successfully.
Jan 31 02:06:22 np0005603609 systemd[1]: libpod-77489cb4ae91dbf05d27a0fab991226ef3cdeda06c553e6d575213cf8e3ae1c1.scope: Consumed 2.304s CPU time.
Jan 31 02:06:22 np0005603609 podman[77313]: 2026-01-31 07:06:22.774157003 +0000 UTC m=+4.666572870 container died 77489cb4ae91dbf05d27a0fab991226ef3cdeda06c553e6d575213cf8e3ae1c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:06:22 np0005603609 systemd[1]: var-lib-containers-storage-overlay-2b55e334721dc6e268c1a130eb1f61aef52883177f780a7cb1a983c340b656ba-merged.mount: Deactivated successfully.
Jan 31 02:06:22 np0005603609 podman[77313]: 2026-01-31 07:06:22.826242565 +0000 UTC m=+4.718658412 container remove 77489cb4ae91dbf05d27a0fab991226ef3cdeda06c553e6d575213cf8e3ae1c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:06:22 np0005603609 systemd[1]: libpod-conmon-77489cb4ae91dbf05d27a0fab991226ef3cdeda06c553e6d575213cf8e3ae1c1.scope: Deactivated successfully.
Jan 31 02:06:23 np0005603609 podman[78433]: 2026-01-31 07:06:23.4018703 +0000 UTC m=+0.044268630 container create 734372ac34bdf771e79b2c77c91ebfaf26a98fcfeda4a2185e6326885e995055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 02:06:23 np0005603609 systemd[1]: Started libpod-conmon-734372ac34bdf771e79b2c77c91ebfaf26a98fcfeda4a2185e6326885e995055.scope.
Jan 31 02:06:23 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:06:23 np0005603609 podman[78433]: 2026-01-31 07:06:23.473386174 +0000 UTC m=+0.115784534 container init 734372ac34bdf771e79b2c77c91ebfaf26a98fcfeda4a2185e6326885e995055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 02:06:23 np0005603609 podman[78433]: 2026-01-31 07:06:23.379603532 +0000 UTC m=+0.022001972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:23 np0005603609 podman[78433]: 2026-01-31 07:06:23.478918803 +0000 UTC m=+0.121317143 container start 734372ac34bdf771e79b2c77c91ebfaf26a98fcfeda4a2185e6326885e995055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:06:23 np0005603609 suspicious_brattain[78449]: 167 167
Jan 31 02:06:23 np0005603609 podman[78433]: 2026-01-31 07:06:23.483380468 +0000 UTC m=+0.125778948 container attach 734372ac34bdf771e79b2c77c91ebfaf26a98fcfeda4a2185e6326885e995055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:06:23 np0005603609 systemd[1]: libpod-734372ac34bdf771e79b2c77c91ebfaf26a98fcfeda4a2185e6326885e995055.scope: Deactivated successfully.
Jan 31 02:06:23 np0005603609 podman[78433]: 2026-01-31 07:06:23.48477611 +0000 UTC m=+0.127174470 container died 734372ac34bdf771e79b2c77c91ebfaf26a98fcfeda4a2185e6326885e995055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 02:06:23 np0005603609 systemd[1]: var-lib-containers-storage-overlay-e34e1c3bf548ce83007798c8386bdf4db1a3b164b9c769998ce2a80e81de20ea-merged.mount: Deactivated successfully.
Jan 31 02:06:23 np0005603609 podman[78433]: 2026-01-31 07:06:23.520503581 +0000 UTC m=+0.162901921 container remove 734372ac34bdf771e79b2c77c91ebfaf26a98fcfeda4a2185e6326885e995055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:06:23 np0005603609 systemd[1]: libpod-conmon-734372ac34bdf771e79b2c77c91ebfaf26a98fcfeda4a2185e6326885e995055.scope: Deactivated successfully.
Jan 31 02:06:23 np0005603609 podman[78474]: 2026-01-31 07:06:23.637236097 +0000 UTC m=+0.043601085 container create 701603b42773fa6a89eb8228c8d7b9f3bb9a9991282dea524605e7cbf8b96e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jennings, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 02:06:23 np0005603609 systemd[1]: Started libpod-conmon-701603b42773fa6a89eb8228c8d7b9f3bb9a9991282dea524605e7cbf8b96e34.scope.
Jan 31 02:06:23 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:06:23 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622c9917421298d549558a980ab257209b99f8ff23dbff27a5e9780898f72a26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:23 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622c9917421298d549558a980ab257209b99f8ff23dbff27a5e9780898f72a26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:23 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622c9917421298d549558a980ab257209b99f8ff23dbff27a5e9780898f72a26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:23 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622c9917421298d549558a980ab257209b99f8ff23dbff27a5e9780898f72a26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:23 np0005603609 podman[78474]: 2026-01-31 07:06:23.616544887 +0000 UTC m=+0.022909905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:23 np0005603609 podman[78474]: 2026-01-31 07:06:23.721618302 +0000 UTC m=+0.127983290 container init 701603b42773fa6a89eb8228c8d7b9f3bb9a9991282dea524605e7cbf8b96e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jennings, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:06:23 np0005603609 podman[78474]: 2026-01-31 07:06:23.727276293 +0000 UTC m=+0.133641281 container start 701603b42773fa6a89eb8228c8d7b9f3bb9a9991282dea524605e7cbf8b96e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jennings, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 02:06:23 np0005603609 podman[78474]: 2026-01-31 07:06:23.732371841 +0000 UTC m=+0.138736919 container attach 701603b42773fa6a89eb8228c8d7b9f3bb9a9991282dea524605e7cbf8b96e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]: {
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:    "1": [
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:        {
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:            "devices": [
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:                "/dev/loop3"
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:            ],
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:            "lv_name": "ceph_lv0",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:            "lv_size": "7511998464",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=83ggCo-i4wd-2Lhy-c1Bo-zH58-OdeG-KCfGbc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ea64e94f-e08a-40f8-8a79-f0fb7a401afe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:            "lv_uuid": "83ggCo-i4wd-2Lhy-c1Bo-zH58-OdeG-KCfGbc",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:            "name": "ceph_lv0",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:            "tags": {
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:                "ceph.block_uuid": "83ggCo-i4wd-2Lhy-c1Bo-zH58-OdeG-KCfGbc",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:                "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:                "ceph.cluster_name": "ceph",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:                "ceph.crush_device_class": "",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:                "ceph.encrypted": "0",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:                "ceph.osd_fsid": "ea64e94f-e08a-40f8-8a79-f0fb7a401afe",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:                "ceph.osd_id": "1",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:                "ceph.type": "block",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:                "ceph.vdo": "0"
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:            },
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:            "type": "block",
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:            "vg_name": "ceph_vg0"
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:        }
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]:    ]
Jan 31 02:06:24 np0005603609 jolly_jennings[78490]: }
Jan 31 02:06:24 np0005603609 systemd[1]: libpod-701603b42773fa6a89eb8228c8d7b9f3bb9a9991282dea524605e7cbf8b96e34.scope: Deactivated successfully.
Jan 31 02:06:24 np0005603609 conmon[78490]: conmon 701603b42773fa6a89eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-701603b42773fa6a89eb8228c8d7b9f3bb9a9991282dea524605e7cbf8b96e34.scope/container/memory.events
Jan 31 02:06:24 np0005603609 podman[78474]: 2026-01-31 07:06:24.498631374 +0000 UTC m=+0.904996362 container died 701603b42773fa6a89eb8228c8d7b9f3bb9a9991282dea524605e7cbf8b96e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jennings, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 02:06:24 np0005603609 systemd[1]: var-lib-containers-storage-overlay-622c9917421298d549558a980ab257209b99f8ff23dbff27a5e9780898f72a26-merged.mount: Deactivated successfully.
Jan 31 02:06:24 np0005603609 podman[78474]: 2026-01-31 07:06:24.561189619 +0000 UTC m=+0.967554607 container remove 701603b42773fa6a89eb8228c8d7b9f3bb9a9991282dea524605e7cbf8b96e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:06:24 np0005603609 systemd[1]: libpod-conmon-701603b42773fa6a89eb8228c8d7b9f3bb9a9991282dea524605e7cbf8b96e34.scope: Deactivated successfully.
Jan 31 02:06:25 np0005603609 podman[78649]: 2026-01-31 07:06:25.070474351 +0000 UTC m=+0.037126435 container create 9a49512a28931c475847b87cfeffcc9d2388c570ad3f36c6dd9c1306a313baf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:06:25 np0005603609 systemd[1]: Started libpod-conmon-9a49512a28931c475847b87cfeffcc9d2388c570ad3f36c6dd9c1306a313baf7.scope.
Jan 31 02:06:25 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:06:25 np0005603609 podman[78649]: 2026-01-31 07:06:25.133530198 +0000 UTC m=+0.100182312 container init 9a49512a28931c475847b87cfeffcc9d2388c570ad3f36c6dd9c1306a313baf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pasteur, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:06:25 np0005603609 podman[78649]: 2026-01-31 07:06:25.142525348 +0000 UTC m=+0.109177432 container start 9a49512a28931c475847b87cfeffcc9d2388c570ad3f36c6dd9c1306a313baf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pasteur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 02:06:25 np0005603609 podman[78649]: 2026-01-31 07:06:25.146467649 +0000 UTC m=+0.113119743 container attach 9a49512a28931c475847b87cfeffcc9d2388c570ad3f36c6dd9c1306a313baf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:06:25 np0005603609 zealous_pasteur[78665]: 167 167
Jan 31 02:06:25 np0005603609 systemd[1]: libpod-9a49512a28931c475847b87cfeffcc9d2388c570ad3f36c6dd9c1306a313baf7.scope: Deactivated successfully.
Jan 31 02:06:25 np0005603609 podman[78649]: 2026-01-31 07:06:25.149276154 +0000 UTC m=+0.115928248 container died 9a49512a28931c475847b87cfeffcc9d2388c570ad3f36c6dd9c1306a313baf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pasteur, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:06:25 np0005603609 podman[78649]: 2026-01-31 07:06:25.054042648 +0000 UTC m=+0.020694762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:25 np0005603609 systemd[1]: var-lib-containers-storage-overlay-a20b5c995b8ac1419725c9a7505531975d60d04510629aa97ccc881b4ef416a9-merged.mount: Deactivated successfully.
Jan 31 02:06:25 np0005603609 podman[78649]: 2026-01-31 07:06:25.184542885 +0000 UTC m=+0.151194979 container remove 9a49512a28931c475847b87cfeffcc9d2388c570ad3f36c6dd9c1306a313baf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 02:06:25 np0005603609 systemd[1]: libpod-conmon-9a49512a28931c475847b87cfeffcc9d2388c570ad3f36c6dd9c1306a313baf7.scope: Deactivated successfully.
Jan 31 02:06:25 np0005603609 podman[78697]: 2026-01-31 07:06:25.399410705 +0000 UTC m=+0.057402476 container create ab425508dcbe4108d2af00907204a74eb4329c786bd8bd5b3a9375c74ad3bae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 02:06:25 np0005603609 systemd[1]: Started libpod-conmon-ab425508dcbe4108d2af00907204a74eb4329c786bd8bd5b3a9375c74ad3bae2.scope.
Jan 31 02:06:25 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:06:25 np0005603609 podman[78697]: 2026-01-31 07:06:25.37552959 +0000 UTC m=+0.033521401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:25 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3666dc88555bebffb3f702e6fd6751f1a5413075a3db1ea04bd0a4f9b0627367/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:25 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3666dc88555bebffb3f702e6fd6751f1a5413075a3db1ea04bd0a4f9b0627367/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:25 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3666dc88555bebffb3f702e6fd6751f1a5413075a3db1ea04bd0a4f9b0627367/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:25 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3666dc88555bebffb3f702e6fd6751f1a5413075a3db1ea04bd0a4f9b0627367/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:25 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3666dc88555bebffb3f702e6fd6751f1a5413075a3db1ea04bd0a4f9b0627367/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:25 np0005603609 podman[78697]: 2026-01-31 07:06:25.490006914 +0000 UTC m=+0.147998725 container init ab425508dcbe4108d2af00907204a74eb4329c786bd8bd5b3a9375c74ad3bae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:06:25 np0005603609 podman[78697]: 2026-01-31 07:06:25.498297687 +0000 UTC m=+0.156289428 container start ab425508dcbe4108d2af00907204a74eb4329c786bd8bd5b3a9375c74ad3bae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:06:25 np0005603609 podman[78697]: 2026-01-31 07:06:25.502110145 +0000 UTC m=+0.160101876 container attach ab425508dcbe4108d2af00907204a74eb4329c786bd8bd5b3a9375c74ad3bae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:06:26 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate-test[78713]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 31 02:06:26 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate-test[78713]:                            [--no-systemd] [--no-tmpfs]
Jan 31 02:06:26 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate-test[78713]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 02:06:26 np0005603609 systemd[1]: libpod-ab425508dcbe4108d2af00907204a74eb4329c786bd8bd5b3a9375c74ad3bae2.scope: Deactivated successfully.
Jan 31 02:06:26 np0005603609 podman[78697]: 2026-01-31 07:06:26.143761908 +0000 UTC m=+0.801753649 container died ab425508dcbe4108d2af00907204a74eb4329c786bd8bd5b3a9375c74ad3bae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:06:26 np0005603609 systemd[1]: var-lib-containers-storage-overlay-3666dc88555bebffb3f702e6fd6751f1a5413075a3db1ea04bd0a4f9b0627367-merged.mount: Deactivated successfully.
Jan 31 02:06:26 np0005603609 podman[78697]: 2026-01-31 07:06:26.196182007 +0000 UTC m=+0.854173738 container remove ab425508dcbe4108d2af00907204a74eb4329c786bd8bd5b3a9375c74ad3bae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:06:26 np0005603609 systemd[1]: libpod-conmon-ab425508dcbe4108d2af00907204a74eb4329c786bd8bd5b3a9375c74ad3bae2.scope: Deactivated successfully.
Jan 31 02:06:26 np0005603609 systemd[1]: Reloading.
Jan 31 02:06:26 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:06:26 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:06:26 np0005603609 systemd[1]: Reloading.
Jan 31 02:06:26 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:06:26 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:06:26 np0005603609 systemd[1]: Starting Ceph osd.1 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 02:06:27 np0005603609 podman[78875]: 2026-01-31 07:06:27.085160164 +0000 UTC m=+0.048776426 container create 2ff22a7a4fd5ef97c691dec65e45ca96c0abf50ea38fa9558344dee700c86b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:06:27 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:06:27 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c447eb5951d52ba3ea68bc3a4ccb61ce5798ec823eba07096ab2790ae872a7b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:27 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c447eb5951d52ba3ea68bc3a4ccb61ce5798ec823eba07096ab2790ae872a7b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:27 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c447eb5951d52ba3ea68bc3a4ccb61ce5798ec823eba07096ab2790ae872a7b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:27 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c447eb5951d52ba3ea68bc3a4ccb61ce5798ec823eba07096ab2790ae872a7b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:27 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c447eb5951d52ba3ea68bc3a4ccb61ce5798ec823eba07096ab2790ae872a7b7/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:27 np0005603609 podman[78875]: 2026-01-31 07:06:27.065657651 +0000 UTC m=+0.029273893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:27 np0005603609 podman[78875]: 2026-01-31 07:06:27.174804881 +0000 UTC m=+0.138421173 container init 2ff22a7a4fd5ef97c691dec65e45ca96c0abf50ea38fa9558344dee700c86b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:06:27 np0005603609 podman[78875]: 2026-01-31 07:06:27.187606088 +0000 UTC m=+0.151222350 container start 2ff22a7a4fd5ef97c691dec65e45ca96c0abf50ea38fa9558344dee700c86b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:06:27 np0005603609 podman[78875]: 2026-01-31 07:06:27.192319569 +0000 UTC m=+0.155935801 container attach 2ff22a7a4fd5ef97c691dec65e45ca96c0abf50ea38fa9558344dee700c86b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 02:06:28 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate[78890]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 02:06:28 np0005603609 bash[78875]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 02:06:28 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate[78890]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 02:06:28 np0005603609 bash[78875]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 02:06:28 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate[78890]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 02:06:28 np0005603609 bash[78875]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 02:06:28 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate[78890]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 02:06:28 np0005603609 bash[78875]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 02:06:28 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate[78890]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 31 02:06:28 np0005603609 bash[78875]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 31 02:06:28 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate[78890]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 02:06:28 np0005603609 bash[78875]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 02:06:28 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate[78890]: --> ceph-volume raw activate successful for osd ID: 1
Jan 31 02:06:28 np0005603609 bash[78875]: --> ceph-volume raw activate successful for osd ID: 1
Jan 31 02:06:28 np0005603609 systemd[1]: libpod-2ff22a7a4fd5ef97c691dec65e45ca96c0abf50ea38fa9558344dee700c86b7f.scope: Deactivated successfully.
Jan 31 02:06:28 np0005603609 podman[78875]: 2026-01-31 07:06:28.120003156 +0000 UTC m=+1.083619378 container died 2ff22a7a4fd5ef97c691dec65e45ca96c0abf50ea38fa9558344dee700c86b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:06:28 np0005603609 systemd[1]: var-lib-containers-storage-overlay-c447eb5951d52ba3ea68bc3a4ccb61ce5798ec823eba07096ab2790ae872a7b7-merged.mount: Deactivated successfully.
Jan 31 02:06:28 np0005603609 podman[78875]: 2026-01-31 07:06:28.169770795 +0000 UTC m=+1.133387017 container remove 2ff22a7a4fd5ef97c691dec65e45ca96c0abf50ea38fa9558344dee700c86b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 02:06:28 np0005603609 podman[79065]: 2026-01-31 07:06:28.330724621 +0000 UTC m=+0.038079148 container create 286672a875d69b98bf4a118a9fc59e80f24b4ae0a5f9c807cd6b8eec08d04fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 02:06:28 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc38c55da07aa33ea41d370c3adb78d2a111db6cc3296c48bf837491188793f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:28 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc38c55da07aa33ea41d370c3adb78d2a111db6cc3296c48bf837491188793f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:28 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc38c55da07aa33ea41d370c3adb78d2a111db6cc3296c48bf837491188793f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:28 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc38c55da07aa33ea41d370c3adb78d2a111db6cc3296c48bf837491188793f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:28 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc38c55da07aa33ea41d370c3adb78d2a111db6cc3296c48bf837491188793f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:28 np0005603609 podman[79065]: 2026-01-31 07:06:28.394429523 +0000 UTC m=+0.101784040 container init 286672a875d69b98bf4a118a9fc59e80f24b4ae0a5f9c807cd6b8eec08d04fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:06:28 np0005603609 podman[79065]: 2026-01-31 07:06:28.405721985 +0000 UTC m=+0.113076502 container start 286672a875d69b98bf4a118a9fc59e80f24b4ae0a5f9c807cd6b8eec08d04fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:06:28 np0005603609 podman[79065]: 2026-01-31 07:06:28.309842234 +0000 UTC m=+0.017196751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:28 np0005603609 bash[79065]: 286672a875d69b98bf4a118a9fc59e80f24b4ae0a5f9c807cd6b8eec08d04fa4
Jan 31 02:06:28 np0005603609 systemd[1]: Started Ceph osd.1 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: pidfile_write: ignore empty --pid-file
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bdev(0x55f1fee99800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bdev(0x55f1fee99800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bdev(0x55f1fee99800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bdev(0x55f1fee99800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bdev(0x55f1ffcd1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bdev(0x55f1ffcd1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bdev(0x55f1ffcd1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bdev(0x55f1ffcd1800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bdev(0x55f1ffcd1800 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bdev(0x55f1fee99800 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: load: jerasure load: lrc 
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd52c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd52c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd52c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd52c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 02:06:28 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd52c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 02:06:29 np0005603609 podman[79240]: 2026-01-31 07:06:29.054501784 +0000 UTC m=+0.058614446 container create b4451720dca0fdf33753e5b6af3cb2f38a196de9fb850830410ebf43cbab8e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_aryabhata, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:06:29 np0005603609 systemd[1]: Started libpod-conmon-b4451720dca0fdf33753e5b6af3cb2f38a196de9fb850830410ebf43cbab8e75.scope.
Jan 31 02:06:29 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:06:29 np0005603609 podman[79240]: 2026-01-31 07:06:29.03160491 +0000 UTC m=+0.035717622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:29 np0005603609 podman[79240]: 2026-01-31 07:06:29.135297243 +0000 UTC m=+0.139409965 container init b4451720dca0fdf33753e5b6af3cb2f38a196de9fb850830410ebf43cbab8e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:06:29 np0005603609 podman[79240]: 2026-01-31 07:06:29.146075484 +0000 UTC m=+0.150188156 container start b4451720dca0fdf33753e5b6af3cb2f38a196de9fb850830410ebf43cbab8e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_aryabhata, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:06:29 np0005603609 podman[79240]: 2026-01-31 07:06:29.149931184 +0000 UTC m=+0.154043906 container attach b4451720dca0fdf33753e5b6af3cb2f38a196de9fb850830410ebf43cbab8e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_aryabhata, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 02:06:29 np0005603609 determined_aryabhata[79261]: 167 167
Jan 31 02:06:29 np0005603609 systemd[1]: libpod-b4451720dca0fdf33753e5b6af3cb2f38a196de9fb850830410ebf43cbab8e75.scope: Deactivated successfully.
Jan 31 02:06:29 np0005603609 podman[79240]: 2026-01-31 07:06:29.15278256 +0000 UTC m=+0.156895232 container died b4451720dca0fdf33753e5b6af3cb2f38a196de9fb850830410ebf43cbab8e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 02:06:29 np0005603609 systemd[1]: var-lib-containers-storage-overlay-b9c59cf5b3f31a3a1a9b77037b21960454ffcec11a613b6369a1bf44f6b3f86a-merged.mount: Deactivated successfully.
Jan 31 02:06:29 np0005603609 podman[79240]: 2026-01-31 07:06:29.192863293 +0000 UTC m=+0.196975965 container remove b4451720dca0fdf33753e5b6af3cb2f38a196de9fb850830410ebf43cbab8e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 02:06:29 np0005603609 systemd[1]: libpod-conmon-b4451720dca0fdf33753e5b6af3cb2f38a196de9fb850830410ebf43cbab8e75.scope: Deactivated successfully.
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd52c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd52c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd52c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd52c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd52c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 02:06:29 np0005603609 podman[79290]: 2026-01-31 07:06:29.404672932 +0000 UTC m=+0.079417599 container create d8eba3dace9bf481e87ee5b31e28e05afa039c47b98a0b8c042c8efe33ecc9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mayer, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:06:29 np0005603609 systemd[1]: Started libpod-conmon-d8eba3dace9bf481e87ee5b31e28e05afa039c47b98a0b8c042c8efe33ecc9b6.scope.
Jan 31 02:06:29 np0005603609 podman[79290]: 2026-01-31 07:06:29.369257268 +0000 UTC m=+0.044001985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:29 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:06:29 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234f7514215f58440c9db9218ad0714704f9b83d9b1181af8c58f3a84f62d9bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:29 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234f7514215f58440c9db9218ad0714704f9b83d9b1181af8c58f3a84f62d9bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:29 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234f7514215f58440c9db9218ad0714704f9b83d9b1181af8c58f3a84f62d9bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:29 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234f7514215f58440c9db9218ad0714704f9b83d9b1181af8c58f3a84f62d9bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:29 np0005603609 podman[79290]: 2026-01-31 07:06:29.50771482 +0000 UTC m=+0.182459457 container init d8eba3dace9bf481e87ee5b31e28e05afa039c47b98a0b8c042c8efe33ecc9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 02:06:29 np0005603609 podman[79290]: 2026-01-31 07:06:29.516515794 +0000 UTC m=+0.191260431 container start d8eba3dace9bf481e87ee5b31e28e05afa039c47b98a0b8c042c8efe33ecc9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mayer, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 02:06:29 np0005603609 podman[79290]: 2026-01-31 07:06:29.520059627 +0000 UTC m=+0.194804304 container attach d8eba3dace9bf481e87ee5b31e28e05afa039c47b98a0b8c042c8efe33ecc9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mayer, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd52c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd52c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd52c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd52c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd53400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd53400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd53400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd53400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluefs mount
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluefs mount shared_bdev_used = 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: RocksDB version: 7.9.2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Git sha 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: DB SUMMARY
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: DB Session ID:  PYL9NX7R0OG5GQ6BSB2N
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: CURRENT file:  CURRENT
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                         Options.error_if_exists: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.create_if_missing: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                                     Options.env: 0x55f1ffd23c70
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                                Options.info_log: 0x55f1fef16ba0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                              Options.statistics: (nil)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.use_fsync: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                              Options.db_log_dir: 
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.write_buffer_manager: 0x55f1ffe2c460
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.unordered_write: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.row_cache: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                              Options.wal_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.two_write_queues: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.wal_compression: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.atomic_flush: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.max_background_jobs: 4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.max_background_compactions: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.max_subcompactions: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.max_open_files: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Compression algorithms supported:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: 	kZSTD supported: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: 	kXpressCompression supported: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: 	kBZip2Compression supported: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: 	kLZ4Compression supported: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: 	kZlibCompression supported: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: 	kSnappyCompression supported: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef16600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f1fef0cdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef16600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f1fef0cdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef16600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f1fef0cdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef16600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f1fef0cdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef16600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f1fef0cdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef16600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f1fef0cdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef16600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f1fef0cdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef165c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f1fef0c430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef165c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f1fef0c430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef165c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f1fef0c430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5291e3ee-9b14-4fc6-a9dc-27756bcac029
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843189566308, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843189566526, "job": 1, "event": "recovery_finished"}
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: freelist init
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: freelist _read_cfg
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluefs umount
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd53400 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd53400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd53400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd53400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bdev(0x55f1ffd53400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluefs mount
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluefs mount shared_bdev_used = 4718592
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: RocksDB version: 7.9.2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Git sha 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: DB SUMMARY
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: DB Session ID:  PYL9NX7R0OG5GQ6BSB2M
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: CURRENT file:  CURRENT
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                         Options.error_if_exists: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.create_if_missing: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                                     Options.env: 0x55f1fef583f0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                                Options.info_log: 0x55f1feef3580
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                              Options.statistics: (nil)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.use_fsync: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                              Options.db_log_dir: 
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.write_buffer_manager: 0x55f1ffe2c960
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.unordered_write: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.row_cache: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                              Options.wal_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.two_write_queues: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.wal_compression: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.atomic_flush: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.max_background_jobs: 4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.max_background_compactions: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.max_subcompactions: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.max_open_files: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Compression algorithms supported:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: #011kZSTD supported: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: #011kXpressCompression supported: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: #011kZlibCompression supported: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef17220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f1fef0cf30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef17220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f1fef0cf30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef17220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f1fef0cf30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef17220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f1fef0cf30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef17220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f1fef0cf30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef17220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f1fef0cf30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef17220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f1fef0cf30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef17100)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   cache_index_and_filter_blocks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   pin_top_level_index_and_filter: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   index_type: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   data_block_index_type: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   index_shortening: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   checksum: 4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   no_block_cache: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_cache: 0x55f1fef0d610
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_cache_name: BinnedLRUCache
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_cache_options:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     capacity : 536870912
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     num_shard_bits : 4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     strict_capacity_limit : 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     high_pri_pool_ratio: 0.000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_cache_compressed: (nil)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   persistent_cache: (nil)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_size: 4096
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_size_deviation: 10
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_restart_interval: 16
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   index_block_restart_interval: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   metadata_block_size: 4096
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   partition_filters: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   use_delta_encoding: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   filter_policy: bloomfilter
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   whole_key_filtering: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   verify_compression: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   read_amp_bytes_per_bit: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   format_version: 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   enable_index_compression: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_align: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   max_auto_readahead_size: 262144
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   prepopulate_block_cache: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   initial_auto_readahead_size: 8192
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef17100)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   cache_index_and_filter_blocks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   pin_top_level_index_and_filter: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   index_type: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   data_block_index_type: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   index_shortening: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   checksum: 4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   no_block_cache: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_cache: 0x55f1fef0d610
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_cache_name: BinnedLRUCache
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_cache_options:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     capacity : 536870912
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     num_shard_bits : 4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     strict_capacity_limit : 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     high_pri_pool_ratio: 0.000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_cache_compressed: (nil)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   persistent_cache: (nil)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_size: 4096
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_size_deviation: 10
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_restart_interval: 16
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   index_block_restart_interval: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   metadata_block_size: 4096
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   partition_filters: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   use_delta_encoding: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   filter_policy: bloomfilter
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   whole_key_filtering: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   verify_compression: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   read_amp_bytes_per_bit: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   format_version: 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   enable_index_compression: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_align: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   max_auto_readahead_size: 262144
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   prepopulate_block_cache: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   initial_auto_readahead_size: 8192
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:           Options.merge_operator: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f1fef17100)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   cache_index_and_filter_blocks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   pin_top_level_index_and_filter: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   index_type: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   data_block_index_type: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   index_shortening: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   checksum: 4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   no_block_cache: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_cache: 0x55f1fef0d610
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_cache_name: BinnedLRUCache
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_cache_options:
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     capacity : 536870912
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     num_shard_bits : 4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     strict_capacity_limit : 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     high_pri_pool_ratio: 0.000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_cache_compressed: (nil)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   persistent_cache: (nil)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_size: 4096
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_size_deviation: 10
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_restart_interval: 16
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   index_block_restart_interval: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   metadata_block_size: 4096
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   partition_filters: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   use_delta_encoding: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   filter_policy: bloomfilter
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   whole_key_filtering: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   verify_compression: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   read_amp_bytes_per_bit: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   format_version: 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   enable_index_compression: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   block_align: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   max_auto_readahead_size: 262144
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   prepopulate_block_cache: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   initial_auto_readahead_size: 8192
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   num_file_reads_for_auto_readahead: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.compression: LZ4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.num_levels: 7
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5291e3ee-9b14-4fc6-a9dc-27756bcac029
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843189833505, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843189840220, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843189, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5291e3ee-9b14-4fc6-a9dc-27756bcac029", "db_session_id": "PYL9NX7R0OG5GQ6BSB2M", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843189842685, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843189, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5291e3ee-9b14-4fc6-a9dc-27756bcac029", "db_session_id": "PYL9NX7R0OG5GQ6BSB2M", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843189847230, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843189, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5291e3ee-9b14-4fc6-a9dc-27756bcac029", "db_session_id": "PYL9NX7R0OG5GQ6BSB2M", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843189848593, "job": 1, "event": "recovery_finished"}
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f1fefde700
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: DB pointer 0x55f1ffe15a00
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f1fef0cf30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f1fef0cf30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f1fef0cf30#2 capacity: 460.80 MB usag
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: _get_class not permitted to load lua
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: _get_class not permitted to load sdk
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: _get_class not permitted to load test_remote_reads
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: osd.1 0 load_pgs
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: osd.1 0 load_pgs opened 0 pgs
Jan 31 02:06:29 np0005603609 ceph-osd[79083]: osd.1 0 log_to_monitors true
Jan 31 02:06:29 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1[79079]: 2026-01-31T07:06:29.882+0000 7f7a615a7740 -1 osd.1 0 log_to_monitors true
Jan 31 02:06:30 np0005603609 confident_mayer[79307]: {
Jan 31 02:06:30 np0005603609 confident_mayer[79307]:    "ea64e94f-e08a-40f8-8a79-f0fb7a401afe": {
Jan 31 02:06:30 np0005603609 confident_mayer[79307]:        "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 02:06:30 np0005603609 confident_mayer[79307]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:06:30 np0005603609 confident_mayer[79307]:        "osd_id": 1,
Jan 31 02:06:30 np0005603609 confident_mayer[79307]:        "osd_uuid": "ea64e94f-e08a-40f8-8a79-f0fb7a401afe",
Jan 31 02:06:30 np0005603609 confident_mayer[79307]:        "type": "bluestore"
Jan 31 02:06:30 np0005603609 confident_mayer[79307]:    }
Jan 31 02:06:30 np0005603609 confident_mayer[79307]: }
Jan 31 02:06:30 np0005603609 systemd[1]: libpod-d8eba3dace9bf481e87ee5b31e28e05afa039c47b98a0b8c042c8efe33ecc9b6.scope: Deactivated successfully.
Jan 31 02:06:30 np0005603609 podman[79737]: 2026-01-31 07:06:30.334990491 +0000 UTC m=+0.030936010 container died d8eba3dace9bf481e87ee5b31e28e05afa039c47b98a0b8c042c8efe33ecc9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mayer, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:06:30 np0005603609 systemd[1]: var-lib-containers-storage-overlay-234f7514215f58440c9db9218ad0714704f9b83d9b1181af8c58f3a84f62d9bf-merged.mount: Deactivated successfully.
Jan 31 02:06:30 np0005603609 podman[79737]: 2026-01-31 07:06:30.384722509 +0000 UTC m=+0.080668008 container remove d8eba3dace9bf481e87ee5b31e28e05afa039c47b98a0b8c042c8efe33ecc9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mayer, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 02:06:30 np0005603609 systemd[1]: libpod-conmon-d8eba3dace9bf481e87ee5b31e28e05afa039c47b98a0b8c042c8efe33ecc9b6.scope: Deactivated successfully.
Jan 31 02:06:30 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 02:06:30 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 02:06:31 np0005603609 podman[79972]: 2026-01-31 07:06:31.399283089 +0000 UTC m=+0.070723727 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 02:06:31 np0005603609 ceph-osd[79083]: osd.1 0 done with init, starting boot process
Jan 31 02:06:31 np0005603609 ceph-osd[79083]: osd.1 0 start_boot
Jan 31 02:06:31 np0005603609 ceph-osd[79083]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 02:06:31 np0005603609 ceph-osd[79083]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 02:06:31 np0005603609 ceph-osd[79083]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 02:06:31 np0005603609 ceph-osd[79083]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 02:06:31 np0005603609 ceph-osd[79083]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 31 02:06:31 np0005603609 podman[79972]: 2026-01-31 07:06:31.634438471 +0000 UTC m=+0.305879039 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:06:32 np0005603609 podman[80163]: 2026-01-31 07:06:32.433759472 +0000 UTC m=+0.077500354 container create cf640d9f2a13b6dedfc447ad95cd680587d4cae3dff24d472759be10bd0da8a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kare, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:06:32 np0005603609 podman[80163]: 2026-01-31 07:06:32.375206389 +0000 UTC m=+0.018947301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:32 np0005603609 systemd[1]: Started libpod-conmon-cf640d9f2a13b6dedfc447ad95cd680587d4cae3dff24d472759be10bd0da8a7.scope.
Jan 31 02:06:32 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:06:32 np0005603609 podman[80163]: 2026-01-31 07:06:32.532722676 +0000 UTC m=+0.176463598 container init cf640d9f2a13b6dedfc447ad95cd680587d4cae3dff24d472759be10bd0da8a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kare, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:06:32 np0005603609 podman[80163]: 2026-01-31 07:06:32.539417291 +0000 UTC m=+0.183158213 container start cf640d9f2a13b6dedfc447ad95cd680587d4cae3dff24d472759be10bd0da8a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kare, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 02:06:32 np0005603609 keen_kare[80180]: 167 167
Jan 31 02:06:32 np0005603609 systemd[1]: libpod-cf640d9f2a13b6dedfc447ad95cd680587d4cae3dff24d472759be10bd0da8a7.scope: Deactivated successfully.
Jan 31 02:06:32 np0005603609 podman[80163]: 2026-01-31 07:06:32.565391826 +0000 UTC m=+0.209132728 container attach cf640d9f2a13b6dedfc447ad95cd680587d4cae3dff24d472759be10bd0da8a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:06:32 np0005603609 podman[80163]: 2026-01-31 07:06:32.565747564 +0000 UTC m=+0.209488456 container died cf640d9f2a13b6dedfc447ad95cd680587d4cae3dff24d472759be10bd0da8a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kare, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:06:32 np0005603609 systemd[1]: var-lib-containers-storage-overlay-94b341941ef208d897960fc9846ca68e07787f9987b9dd1c88b3d294d6390893-merged.mount: Deactivated successfully.
Jan 31 02:06:32 np0005603609 podman[80163]: 2026-01-31 07:06:32.758465209 +0000 UTC m=+0.402206121 container remove cf640d9f2a13b6dedfc447ad95cd680587d4cae3dff24d472759be10bd0da8a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kare, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:06:32 np0005603609 systemd[1]: libpod-conmon-cf640d9f2a13b6dedfc447ad95cd680587d4cae3dff24d472759be10bd0da8a7.scope: Deactivated successfully.
Jan 31 02:06:32 np0005603609 podman[80206]: 2026-01-31 07:06:32.896154503 +0000 UTC m=+0.061441661 container create c7c207c5f5323224ac42b722a4f73755fd2127028192dfebbe828af75f745a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 02:06:32 np0005603609 podman[80206]: 2026-01-31 07:06:32.857276459 +0000 UTC m=+0.022563707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:06:32 np0005603609 systemd[1]: Started libpod-conmon-c7c207c5f5323224ac42b722a4f73755fd2127028192dfebbe828af75f745a0a.scope.
Jan 31 02:06:33 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:06:33 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8afe8e6b74e46606f8e0cd77dbd3569dc7f0cd9dae941be2b6183e62d1f953d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:33 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8afe8e6b74e46606f8e0cd77dbd3569dc7f0cd9dae941be2b6183e62d1f953d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:33 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8afe8e6b74e46606f8e0cd77dbd3569dc7f0cd9dae941be2b6183e62d1f953d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:33 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8afe8e6b74e46606f8e0cd77dbd3569dc7f0cd9dae941be2b6183e62d1f953d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:06:33 np0005603609 podman[80206]: 2026-01-31 07:06:33.045018448 +0000 UTC m=+0.210305606 container init c7c207c5f5323224ac42b722a4f73755fd2127028192dfebbe828af75f745a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kare, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:06:33 np0005603609 podman[80206]: 2026-01-31 07:06:33.051037087 +0000 UTC m=+0.216324245 container start c7c207c5f5323224ac42b722a4f73755fd2127028192dfebbe828af75f745a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kare, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 02:06:33 np0005603609 podman[80206]: 2026-01-31 07:06:33.090496565 +0000 UTC m=+0.255783723 container attach c7c207c5f5323224ac42b722a4f73755fd2127028192dfebbe828af75f745a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kare, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:06:34 np0005603609 youthful_kare[80221]: [
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:    {
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:        "available": false,
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:        "ceph_device": false,
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:        "lsm_data": {},
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:        "lvs": [],
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:        "path": "/dev/sr0",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:        "rejected_reasons": [
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "Insufficient space (<5GB)",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "Has a FileSystem"
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:        ],
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:        "sys_api": {
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "actuators": null,
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "device_nodes": "sr0",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "devname": "sr0",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "human_readable_size": "482.00 KB",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "id_bus": "ata",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "model": "QEMU DVD-ROM",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "nr_requests": "2",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "parent": "/dev/sr0",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "partitions": {},
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "path": "/dev/sr0",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "removable": "1",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "rev": "2.5+",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "ro": "0",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "rotational": "1",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "sas_address": "",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "sas_device_handle": "",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "scheduler_mode": "mq-deadline",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "sectors": 0,
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "sectorsize": "2048",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "size": 493568.0,
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "support_discard": "2048",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "type": "disk",
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:            "vendor": "QEMU"
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:        }
Jan 31 02:06:34 np0005603609 youthful_kare[80221]:    }
Jan 31 02:06:34 np0005603609 youthful_kare[80221]: ]
Jan 31 02:06:34 np0005603609 systemd[1]: libpod-c7c207c5f5323224ac42b722a4f73755fd2127028192dfebbe828af75f745a0a.scope: Deactivated successfully.
Jan 31 02:06:34 np0005603609 systemd[1]: libpod-c7c207c5f5323224ac42b722a4f73755fd2127028192dfebbe828af75f745a0a.scope: Consumed 1.202s CPU time.
Jan 31 02:06:34 np0005603609 podman[80206]: 2026-01-31 07:06:34.304726822 +0000 UTC m=+1.470013980 container died c7c207c5f5323224ac42b722a4f73755fd2127028192dfebbe828af75f745a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kare, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:06:38 np0005603609 systemd[1]: var-lib-containers-storage-overlay-d8afe8e6b74e46606f8e0cd77dbd3569dc7f0cd9dae941be2b6183e62d1f953d-merged.mount: Deactivated successfully.
Jan 31 02:06:38 np0005603609 podman[80206]: 2026-01-31 07:06:38.128178758 +0000 UTC m=+5.293465936 container remove c7c207c5f5323224ac42b722a4f73755fd2127028192dfebbe828af75f745a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:06:38 np0005603609 systemd[1]: libpod-conmon-c7c207c5f5323224ac42b722a4f73755fd2127028192dfebbe828af75f745a0a.scope: Deactivated successfully.
Jan 31 02:06:39 np0005603609 ceph-osd[79083]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 33.054 iops: 8461.933 elapsed_sec: 0.355
Jan 31 02:06:39 np0005603609 ceph-osd[79083]: log_channel(cluster) log [WRN] : OSD bench result of 8461.932853 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 02:06:39 np0005603609 ceph-osd[79083]: osd.1 0 waiting for initial osdmap
Jan 31 02:06:39 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1[79079]: 2026-01-31T07:06:39.646+0000 7f7a5dd3e640 -1 osd.1 0 waiting for initial osdmap
Jan 31 02:06:39 np0005603609 ceph-osd[79083]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 02:06:39 np0005603609 ceph-osd[79083]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 31 02:06:39 np0005603609 ceph-osd[79083]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 02:06:39 np0005603609 ceph-osd[79083]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Jan 31 02:06:39 np0005603609 ceph-osd[79083]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 02:06:39 np0005603609 ceph-osd[79083]: osd.1 11 set_numa_affinity not setting numa affinity
Jan 31 02:06:39 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-1[79079]: 2026-01-31T07:06:39.674+0000 7f7a58b4f640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 02:06:39 np0005603609 ceph-osd[79083]: osd.1 11 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 31 02:06:40 np0005603609 ceph-osd[79083]: osd.1 12 state: booting -> active
Jan 31 02:06:40 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:06:41 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:00 np0005603609 podman[81430]: 2026-01-31 07:06:59.986368261 +0000 UTC m=+0.019063104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:07:00 np0005603609 podman[81430]: 2026-01-31 07:07:00.102495604 +0000 UTC m=+0.135190417 container create 614dd832d0570bec0407b4930e1839346a89b21b3b172bc94b7115c2331d9531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_grothendieck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 02:07:00 np0005603609 systemd[1]: Started libpod-conmon-614dd832d0570bec0407b4930e1839346a89b21b3b172bc94b7115c2331d9531.scope.
Jan 31 02:07:00 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:07:00 np0005603609 podman[81430]: 2026-01-31 07:07:00.182456506 +0000 UTC m=+0.215151339 container init 614dd832d0570bec0407b4930e1839346a89b21b3b172bc94b7115c2331d9531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 02:07:00 np0005603609 podman[81430]: 2026-01-31 07:07:00.192794606 +0000 UTC m=+0.225489429 container start 614dd832d0570bec0407b4930e1839346a89b21b3b172bc94b7115c2331d9531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:07:00 np0005603609 romantic_grothendieck[81446]: 167 167
Jan 31 02:07:00 np0005603609 systemd[1]: libpod-614dd832d0570bec0407b4930e1839346a89b21b3b172bc94b7115c2331d9531.scope: Deactivated successfully.
Jan 31 02:07:00 np0005603609 podman[81430]: 2026-01-31 07:07:00.249981857 +0000 UTC m=+0.282676690 container attach 614dd832d0570bec0407b4930e1839346a89b21b3b172bc94b7115c2331d9531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_grothendieck, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:07:00 np0005603609 podman[81430]: 2026-01-31 07:07:00.250654792 +0000 UTC m=+0.283349605 container died 614dd832d0570bec0407b4930e1839346a89b21b3b172bc94b7115c2331d9531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_grothendieck, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 02:07:00 np0005603609 systemd[1]: var-lib-containers-storage-overlay-49378e9a5e97018a8ed849e505aaec74c103268af0479a8dd87a012dc0f2fd4e-merged.mount: Deactivated successfully.
Jan 31 02:07:00 np0005603609 podman[81430]: 2026-01-31 07:07:00.35755179 +0000 UTC m=+0.390246633 container remove 614dd832d0570bec0407b4930e1839346a89b21b3b172bc94b7115c2331d9531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:07:00 np0005603609 systemd[1]: libpod-conmon-614dd832d0570bec0407b4930e1839346a89b21b3b172bc94b7115c2331d9531.scope: Deactivated successfully.
Jan 31 02:07:00 np0005603609 podman[81466]: 2026-01-31 07:07:00.437770877 +0000 UTC m=+0.052986604 container create 30c236a949306fb55481ae89499a8806a1586ba467d394f2c98856ec84da0eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_edison, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:07:00 np0005603609 systemd[1]: Started libpod-conmon-30c236a949306fb55481ae89499a8806a1586ba467d394f2c98856ec84da0eb8.scope.
Jan 31 02:07:00 np0005603609 podman[81466]: 2026-01-31 07:07:00.412015228 +0000 UTC m=+0.027230975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:07:00 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:07:00 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e209235ca78b79e0b4aa47bc6fd2a3b41fdca9d83a22ccdb1293a35771999f9/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:00 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e209235ca78b79e0b4aa47bc6fd2a3b41fdca9d83a22ccdb1293a35771999f9/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:00 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e209235ca78b79e0b4aa47bc6fd2a3b41fdca9d83a22ccdb1293a35771999f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:00 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e209235ca78b79e0b4aa47bc6fd2a3b41fdca9d83a22ccdb1293a35771999f9/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:00 np0005603609 podman[81466]: 2026-01-31 07:07:00.545812931 +0000 UTC m=+0.161028698 container init 30c236a949306fb55481ae89499a8806a1586ba467d394f2c98856ec84da0eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_edison, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:07:00 np0005603609 podman[81466]: 2026-01-31 07:07:00.552026565 +0000 UTC m=+0.167242292 container start 30c236a949306fb55481ae89499a8806a1586ba467d394f2c98856ec84da0eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 02:07:00 np0005603609 podman[81466]: 2026-01-31 07:07:00.558046006 +0000 UTC m=+0.173261763 container attach 30c236a949306fb55481ae89499a8806a1586ba467d394f2c98856ec84da0eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_edison, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:07:00 np0005603609 systemd[1]: libpod-30c236a949306fb55481ae89499a8806a1586ba467d394f2c98856ec84da0eb8.scope: Deactivated successfully.
Jan 31 02:07:00 np0005603609 podman[81466]: 2026-01-31 07:07:00.779404267 +0000 UTC m=+0.394619974 container died 30c236a949306fb55481ae89499a8806a1586ba467d394f2c98856ec84da0eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:07:00 np0005603609 systemd[1]: var-lib-containers-storage-overlay-3e209235ca78b79e0b4aa47bc6fd2a3b41fdca9d83a22ccdb1293a35771999f9-merged.mount: Deactivated successfully.
Jan 31 02:07:00 np0005603609 podman[81466]: 2026-01-31 07:07:00.912152456 +0000 UTC m=+0.527368153 container remove 30c236a949306fb55481ae89499a8806a1586ba467d394f2c98856ec84da0eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_edison, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:07:00 np0005603609 systemd[1]: libpod-conmon-30c236a949306fb55481ae89499a8806a1586ba467d394f2c98856ec84da0eb8.scope: Deactivated successfully.
Jan 31 02:07:01 np0005603609 systemd[1]: Reloading.
Jan 31 02:07:01 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:07:01 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:07:01 np0005603609 systemd[1]: Reloading.
Jan 31 02:07:01 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:07:01 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:07:01 np0005603609 systemd[1]: Starting Ceph mon.compute-1 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 02:07:02 np0005603609 podman[81648]: 2026-01-31 07:07:01.909587668 +0000 UTC m=+0.021469440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:07:02 np0005603609 podman[81648]: 2026-01-31 07:07:02.061534025 +0000 UTC m=+0.173415817 container create 26cb4dc9845a822c251a96c4a936d285df97bd9bdb9dbed61c1ee160fb495f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:07:02 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/675532b0f00b4ad065bd1590202c6cc34df5e7be0d51e60ae8ea6725e06d40c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:02 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/675532b0f00b4ad065bd1590202c6cc34df5e7be0d51e60ae8ea6725e06d40c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:02 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/675532b0f00b4ad065bd1590202c6cc34df5e7be0d51e60ae8ea6725e06d40c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:02 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/675532b0f00b4ad065bd1590202c6cc34df5e7be0d51e60ae8ea6725e06d40c8/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:02 np0005603609 podman[81648]: 2026-01-31 07:07:02.277378399 +0000 UTC m=+0.389260201 container init 26cb4dc9845a822c251a96c4a936d285df97bd9bdb9dbed61c1ee160fb495f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-1, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:07:02 np0005603609 podman[81648]: 2026-01-31 07:07:02.28347519 +0000 UTC m=+0.395356942 container start 26cb4dc9845a822c251a96c4a936d285df97bd9bdb9dbed61c1ee160fb495f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-1, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 02:07:02 np0005603609 bash[81648]: 26cb4dc9845a822c251a96c4a936d285df97bd9bdb9dbed61c1ee160fb495f4a
Jan 31 02:07:02 np0005603609 systemd[1]: Started Ceph mon.compute-1 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: pidfile_write: ignore empty --pid-file
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: load: jerasure load: lrc 
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: RocksDB version: 7.9.2
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Git sha 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: DB SUMMARY
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: DB Session ID:  GJ8KZ27N7JDEP6ITHIOI
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: CURRENT file:  CURRENT
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                         Options.error_if_exists: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                       Options.create_if_missing: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                                     Options.env: 0x5619ab0a3c40
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                                Options.info_log: 0x5619ad8f4fc0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                              Options.statistics: (nil)
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                               Options.use_fsync: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                              Options.db_log_dir: 
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                                 Options.wal_dir: 
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                    Options.write_buffer_manager: 0x5619ad904b40
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                  Options.unordered_write: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                               Options.row_cache: None
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                              Options.wal_filter: None
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.two_write_queues: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.wal_compression: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.atomic_flush: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.max_background_jobs: 2
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.max_background_compactions: -1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.max_subcompactions: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                          Options.max_open_files: -1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Compression algorithms supported:
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: #011kZSTD supported: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: #011kXpressCompression supported: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: #011kZlibCompression supported: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:           Options.merge_operator: 
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:        Options.compaction_filter: None
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5619ad8f4c00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5619ad8ed1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:          Options.compression: NoCompression
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.num_levels: 7
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4bf585ed-5b97-4c56-83f8-35fa96479e70
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843222332911, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843222364910, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843222365109, "job": 1, "event": "recovery_finished"}
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5619ad916e00
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: DB pointer 0x5619ad9a0000
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.3 total, 0.3 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012 Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619ad8ed1f0#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 02:07:02 np0005603609 ceph-mon[81667]: mon.compute-1@-1(???) e0 preinit fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).mds e1 new map
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 0 up, 2 in
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 1 up, 2 in
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 1 up, 2 in
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e14 crush map has features 3314933000852226048, adjusting msgr requires
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e14 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e14 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).osd e14 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Updating compute-1:/etc/ceph/ceph.conf
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Updating compute-1:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Updating compute-1:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Deploying daemon crash.compute-1 on compute-1
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/2157664835' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d19aa227-e399-4341-9824-b20a6ddbc903"}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/2157664835' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d19aa227-e399-4341-9824-b20a6ddbc903"}]': finished
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.101:0/2276134975' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ea64e94f-e08a-40f8-8a79-f0fb7a401afe"}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.101:0/2276134975' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ea64e94f-e08a-40f8-8a79-f0fb7a401afe"}]': finished
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Deploying daemon osd.0 on compute-0
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Deploying daemon osd.1 on compute-1
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Unable to set osd_memory_target on compute-0 to 134197657: error parsing value: Value '134197657' is below minimum 939524096
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: OSD bench result of 1880.086077 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438] boot
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Adjusting osd_memory_target on compute-1 to  5247M
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: OSD bench result of 8461.932853 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795] boot
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Updating compute-2:/etc/ceph/ceph.conf
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Updating compute-2:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Updating compute-2:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Deploying daemon mon.compute-2 on compute-2
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: Cluster is now healthy
Jan 31 02:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Jan 31 02:07:07 np0005603609 ceph-mon[81667]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Jan 31 02:07:07 np0005603609 ceph-mon[81667]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 31 02:07:07 np0005603609 ceph-mon[81667]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 31 02:07:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: Deploying daemon mon.compute-1 on compute-1
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: mon.compute-0 calling monitor election
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: mon.compute-2 calling monitor election
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2026-01-31T07:07:00.595476Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864292,os=Linux}
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: mon.compute-0 calling monitor election
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: mon.compute-2 calling monitor election
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 02:07:10 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 02:07:11 np0005603609 podman[81849]: 2026-01-31 07:07:11.536092267 +0000 UTC m=+0.064103992 container create 1b14ca6dc3c428e008b99de1f351b8a3f4583ecbd884e8f3f743c175c0ab0f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hellman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:07:11 np0005603609 podman[81849]: 2026-01-31 07:07:11.492260258 +0000 UTC m=+0.020272023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:07:11 np0005603609 systemd[1]: Started libpod-conmon-1b14ca6dc3c428e008b99de1f351b8a3f4583ecbd884e8f3f743c175c0ab0f02.scope.
Jan 31 02:07:11 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:07:11 np0005603609 podman[81849]: 2026-01-31 07:07:11.749884553 +0000 UTC m=+0.277896288 container init 1b14ca6dc3c428e008b99de1f351b8a3f4583ecbd884e8f3f743c175c0ab0f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hellman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:07:11 np0005603609 podman[81849]: 2026-01-31 07:07:11.761113424 +0000 UTC m=+0.289125179 container start 1b14ca6dc3c428e008b99de1f351b8a3f4583ecbd884e8f3f743c175c0ab0f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hellman, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:07:11 np0005603609 bold_hellman[81866]: 167 167
Jan 31 02:07:11 np0005603609 systemd[1]: libpod-1b14ca6dc3c428e008b99de1f351b8a3f4583ecbd884e8f3f743c175c0ab0f02.scope: Deactivated successfully.
Jan 31 02:07:11 np0005603609 podman[81849]: 2026-01-31 07:07:11.814902917 +0000 UTC m=+0.342914732 container attach 1b14ca6dc3c428e008b99de1f351b8a3f4583ecbd884e8f3f743c175c0ab0f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hellman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 02:07:11 np0005603609 podman[81849]: 2026-01-31 07:07:11.816430002 +0000 UTC m=+0.344441747 container died 1b14ca6dc3c428e008b99de1f351b8a3f4583ecbd884e8f3f743c175c0ab0f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:07:11 np0005603609 ceph-mon[81667]: mon.compute-1 calling monitor election
Jan 31 02:07:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:11 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/1344602823' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:07:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hodsiu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 02:07:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hodsiu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 02:07:12 np0005603609 systemd[1]: var-lib-containers-storage-overlay-0b8572bd762d62d159221ab9dd4e1190ab5e1bd6347dce753405de9ffc0fcbad-merged.mount: Deactivated successfully.
Jan 31 02:07:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e15 e15: 2 total, 2 up, 2 in
Jan 31 02:07:12 np0005603609 podman[81849]: 2026-01-31 07:07:12.197931669 +0000 UTC m=+0.725943384 container remove 1b14ca6dc3c428e008b99de1f351b8a3f4583ecbd884e8f3f743c175c0ab0f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 02:07:12 np0005603609 systemd[1]: libpod-conmon-1b14ca6dc3c428e008b99de1f351b8a3f4583ecbd884e8f3f743c175c0ab0f02.scope: Deactivated successfully.
Jan 31 02:07:12 np0005603609 systemd[1]: Reloading.
Jan 31 02:07:12 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:07:12 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:07:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e15 _set_new_cache_sizes cache_size:1019937054 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:12 np0005603609 systemd[1]: Reloading.
Jan 31 02:07:12 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:07:12 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:07:12 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 15 pg[2.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:12 np0005603609 ceph-mon[81667]: Deploying daemon mgr.compute-1.hodsiu on compute-1
Jan 31 02:07:12 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/1344602823' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:07:12 np0005603609 systemd[1]: Starting Ceph mgr.compute-1.hodsiu for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 02:07:13 np0005603609 podman[82014]: 2026-01-31 07:07:13.158151645 +0000 UTC m=+0.048391557 container create 479824a71b73d925f8543c76cbc3a44ba071ab71790405797946eac8548051e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:07:13 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9cba04d3b0474c2b97b5d76aa7d985d7ac86bf6e63bf84fb8bde67fcd3513f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:13 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9cba04d3b0474c2b97b5d76aa7d985d7ac86bf6e63bf84fb8bde67fcd3513f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:13 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9cba04d3b0474c2b97b5d76aa7d985d7ac86bf6e63bf84fb8bde67fcd3513f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:13 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9cba04d3b0474c2b97b5d76aa7d985d7ac86bf6e63bf84fb8bde67fcd3513f/merged/var/lib/ceph/mgr/ceph-compute-1.hodsiu supports timestamps until 2038 (0x7fffffff)
Jan 31 02:07:13 np0005603609 podman[82014]: 2026-01-31 07:07:13.134206838 +0000 UTC m=+0.024446750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:07:13 np0005603609 podman[82014]: 2026-01-31 07:07:13.235863943 +0000 UTC m=+0.126103865 container init 479824a71b73d925f8543c76cbc3a44ba071ab71790405797946eac8548051e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:07:13 np0005603609 podman[82014]: 2026-01-31 07:07:13.239797205 +0000 UTC m=+0.130037107 container start 479824a71b73d925f8543c76cbc3a44ba071ab71790405797946eac8548051e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 02:07:13 np0005603609 bash[82014]: 479824a71b73d925f8543c76cbc3a44ba071ab71790405797946eac8548051e2
Jan 31 02:07:13 np0005603609 systemd[1]: Started Ceph mgr.compute-1.hodsiu for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 02:07:13 np0005603609 ceph-mgr[82033]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 02:07:13 np0005603609 ceph-mgr[82033]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 31 02:07:13 np0005603609 ceph-mgr[82033]: pidfile_write: ignore empty --pid-file
Jan 31 02:07:13 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'alerts'
Jan 31 02:07:13 np0005603609 ceph-mgr[82033]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 02:07:13 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'balancer'
Jan 31 02:07:13 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:13.722+0000 7fb094432140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 02:07:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e16 e16: 2 total, 2 up, 2 in
Jan 31 02:07:13 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 16 pg[2.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:14 np0005603609 ceph-mgr[82033]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 02:07:14 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:14.003+0000 7fb094432140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 02:07:14 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'cephadm'
Jan 31 02:07:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:14 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/4258413547' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:07:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 02:07:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 02:07:14 np0005603609 ceph-mon[81667]: Deploying daemon crash.compute-2 on compute-2
Jan 31 02:07:14 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/4258413547' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:07:14 np0005603609 ceph-mon[81667]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 02:07:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e17 e17: 2 total, 2 up, 2 in
Jan 31 02:07:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:16 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/2911166990' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:07:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:07:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:07:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:16 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'crash'
Jan 31 02:07:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e18 e18: 2 total, 2 up, 2 in
Jan 31 02:07:16 np0005603609 ceph-mgr[82033]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 02:07:16 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'dashboard'
Jan 31 02:07:16 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:16.429+0000 7fb094432140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 02:07:16 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/2911166990' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:07:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e19 e19: 3 total, 2 up, 3 in
Jan 31 02:07:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e19 _set_new_cache_sizes cache_size:1020053334 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:18 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'devicehealth'
Jan 31 02:07:18 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.102:0/2266493088' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d108d581-3dd3-4742-941a-f201ff187649"}]: dispatch
Jan 31 02:07:18 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d108d581-3dd3-4742-941a-f201ff187649"}]: dispatch
Jan 31 02:07:18 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d108d581-3dd3-4742-941a-f201ff187649"}]': finished
Jan 31 02:07:18 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/331184130' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:07:18 np0005603609 ceph-mgr[82033]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 02:07:18 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 02:07:18 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:18.304+0000 7fb094432140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 02:07:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e20 e20: 3 total, 2 up, 3 in
Jan 31 02:07:18 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 02:07:18 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 02:07:18 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]:  from numpy import show_config as show_numpy_config
Jan 31 02:07:18 np0005603609 ceph-mgr[82033]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 02:07:18 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:18.883+0000 7fb094432140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 02:07:18 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'influx'
Jan 31 02:07:19 np0005603609 ceph-mgr[82033]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 02:07:19 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:19.200+0000 7fb094432140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 02:07:19 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'insights'
Jan 31 02:07:19 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'iostat'
Jan 31 02:07:19 np0005603609 ceph-mgr[82033]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 02:07:19 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:19.686+0000 7fb094432140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 02:07:19 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'k8sevents'
Jan 31 02:07:19 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/331184130' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:07:19 np0005603609 ceph-mon[81667]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 02:07:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e21 e21: 3 total, 2 up, 3 in
Jan 31 02:07:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:07:21 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'localpool'
Jan 31 02:07:21 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 02:07:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e22 e22: 3 total, 2 up, 3 in
Jan 31 02:07:22 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'mirroring'
Jan 31 02:07:22 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/4225669901' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:07:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e22 _set_new_cache_sizes cache_size:1020054715 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:22 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'nfs'
Jan 31 02:07:23 np0005603609 ceph-mgr[82033]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 02:07:23 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:23.639+0000 7fb094432140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 02:07:23 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'orchestrator'
Jan 31 02:07:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e23 e23: 3 total, 2 up, 3 in
Jan 31 02:07:24 np0005603609 ceph-mgr[82033]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 02:07:24 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:24.334+0000 7fb094432140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 02:07:24 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 02:07:24 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 23 pg[2.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=23 pruub=13.183480263s) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active pruub 67.882278442s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:24 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 23 pg[2.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=23 pruub=13.183480263s) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown pruub 67.882278442s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:24 np0005603609 ceph-mgr[82033]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 02:07:24 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:24.608+0000 7fb094432140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 02:07:24 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'osd_support'
Jan 31 02:07:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:07:24 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/4225669901' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:07:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:07:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 31 02:07:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:07:24 np0005603609 ceph-mgr[82033]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 02:07:24 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:24.879+0000 7fb094432140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 02:07:24 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 02:07:25 np0005603609 ceph-mgr[82033]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 02:07:25 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:25.196+0000 7fb094432140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 02:07:25 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'progress'
Jan 31 02:07:25 np0005603609 ceph-mgr[82033]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 02:07:25 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:25.481+0000 7fb094432140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 02:07:25 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'prometheus'
Jan 31 02:07:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e24 e24: 3 total, 2 up, 3 in
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.1f( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.1c( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.1d( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.1e( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.b( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.9( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.a( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.8( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.7( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.6( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.5( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.4( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.2( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.1( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.c( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.3( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.d( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.e( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.11( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.10( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.12( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.f( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.14( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.13( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.16( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.15( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.17( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.18( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.19( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.1a( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.1b( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.7( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.2( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.0( empty local-lis/les=23/24 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.8( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.3( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.11( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.14( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.16( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.17( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.1a( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 24 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:26 np0005603609 ceph-mon[81667]: Deploying daemon osd.2 on compute-2
Jan 31 02:07:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:07:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:07:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:07:26 np0005603609 ceph-mon[81667]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 02:07:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:07:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:07:26 np0005603609 ceph-mgr[82033]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 02:07:26 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:26.590+0000 7fb094432140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 02:07:26 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'rbd_support'
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 31 02:07:26 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 31 02:07:26 np0005603609 ceph-mgr[82033]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 02:07:26 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:26.910+0000 7fb094432140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 02:07:26 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'restful'
Jan 31 02:07:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:27 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'rgw'
Jan 31 02:07:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e25 e25: 3 total, 2 up, 3 in
Jan 31 02:07:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:07:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:07:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:07:28 np0005603609 ceph-mgr[82033]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 02:07:28 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'rook'
Jan 31 02:07:28 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:28.474+0000 7fb094432140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 02:07:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e26 e26: 3 total, 2 up, 3 in
Jan 31 02:07:29 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 26 pg[7.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [1] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:07:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:07:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:29 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/386957885' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:07:29 np0005603609 ceph-mon[81667]: from='osd.2 [v2:192.168.122.102:6800/2867694174,v1:192.168.122.102:6801/2867694174]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 31 02:07:29 np0005603609 ceph-mon[81667]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 31 02:07:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:07:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:07:29 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/386957885' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:07:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:30 np0005603609 ceph-mgr[82033]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 02:07:30 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:30.777+0000 7fb094432140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 02:07:30 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'selftest'
Jan 31 02:07:30 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 31 02:07:30 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 31 02:07:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e27 e27: 3 total, 2 up, 3 in
Jan 31 02:07:31 np0005603609 ceph-mgr[82033]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 02:07:31 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:31.033+0000 7fb094432140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 02:07:31 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'snap_schedule'
Jan 31 02:07:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:07:31 np0005603609 ceph-mon[81667]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 02:07:31 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 27 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [1] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:31 np0005603609 ceph-mgr[82033]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 02:07:31 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:31.298+0000 7fb094432140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 02:07:31 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'stats'
Jan 31 02:07:31 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'status'
Jan 31 02:07:31 np0005603609 ceph-mgr[82033]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 02:07:31 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:31.886+0000 7fb094432140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 02:07:31 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'telegraf'
Jan 31 02:07:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e28 e28: 3 total, 2 up, 3 in
Jan 31 02:07:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:31 np0005603609 ceph-mon[81667]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 02:07:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:07:31 np0005603609 ceph-mon[81667]: from='osd.2 [v2:192.168.122.102:6800/2867694174,v1:192.168.122.102:6801/2867694174]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 31 02:07:31 np0005603609 ceph-mon[81667]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 31 02:07:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:32 np0005603609 ceph-mgr[82033]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 02:07:32 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:32.167+0000 7fb094432140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 02:07:32 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'telemetry'
Jan 31 02:07:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:32 np0005603609 podman[82289]: 2026-01-31 07:07:32.81282614 +0000 UTC m=+0.092240955 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:07:32 np0005603609 ceph-mgr[82033]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 02:07:32 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:32.936+0000 7fb094432140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 02:07:32 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 02:07:32 np0005603609 podman[82289]: 2026-01-31 07:07:32.940477127 +0000 UTC m=+0.219891942 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:07:33 np0005603609 ceph-mon[81667]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 31 02:07:33 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/3259085705' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 31 02:07:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e29 e29: 3 total, 2 up, 3 in
Jan 31 02:07:33 np0005603609 ceph-mgr[82033]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 02:07:33 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:33.706+0000 7fb094432140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 02:07:33 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'volumes'
Jan 31 02:07:33 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 31 02:07:33 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 31 02:07:34 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/3259085705' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 02:07:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:34 np0005603609 ceph-mgr[82033]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 02:07:34 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:34.473+0000 7fb094432140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 02:07:34 np0005603609 ceph-mgr[82033]: mgr[py] Loading python module 'zabbix'
Jan 31 02:07:34 np0005603609 ceph-mgr[82033]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 02:07:34 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-1-hodsiu[82029]: 2026-01-31T07:07:34.738+0000 7fb094432140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 02:07:34 np0005603609 ceph-mgr[82033]: ms_deliver_dispatch: unhandled message 0x55dcdc4291e0 mon_map magic: 0 v1 from mon.2 v2:192.168.122.101:3300/0
Jan 31 02:07:34 np0005603609 ceph-mgr[82033]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 02:07:35 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:35 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:35 np0005603609 ceph-mgr[82033]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 02:07:35 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.4 deep-scrub starts
Jan 31 02:07:35 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.4 deep-scrub ok
Jan 31 02:07:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e30 e30: 3 total, 2 up, 3 in
Jan 31 02:07:36 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/501340508' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 31 02:07:36 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/501340508' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 02:07:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e31 e31: 3 total, 2 up, 3 in
Jan 31 02:07:37 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/1530384074' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 31 02:07:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:38 np0005603609 ceph-mon[81667]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 02:07:38 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/1530384074' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 02:07:38 np0005603609 ceph-mon[81667]: OSD bench result of 6665.721838 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 02:07:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e32 e32: 3 total, 3 up, 3 in
Jan 31 02:07:38 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.5 deep-scrub starts
Jan 31 02:07:38 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.5 deep-scrub ok
Jan 31 02:07:39 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/2153731720' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 31 02:07:39 np0005603609 ceph-mon[81667]: osd.2 [v2:192.168.122.102:6800/2867694174,v1:192.168.122.102:6801/2867694174] boot
Jan 31 02:07:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:07:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:07:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e33 e33: 3 total, 3 up, 3 in
Jan 31 02:07:40 np0005603609 ceph-mon[81667]: Adjusting osd_memory_target on compute-2 to 127.9M
Jan 31 02:07:40 np0005603609 ceph-mon[81667]: Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Jan 31 02:07:40 np0005603609 ceph-mon[81667]: Updating compute-0:/etc/ceph/ceph.conf
Jan 31 02:07:40 np0005603609 ceph-mon[81667]: Updating compute-1:/etc/ceph/ceph.conf
Jan 31 02:07:40 np0005603609 ceph-mon[81667]: Updating compute-2:/etc/ceph/ceph.conf
Jan 31 02:07:40 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/2153731720' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 02:07:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e34 e34: 3 total, 3 up, 3 in
Jan 31 02:07:41 np0005603609 ceph-mon[81667]: Updating compute-2:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 02:07:41 np0005603609 ceph-mon[81667]: Updating compute-0:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 02:07:41 np0005603609 ceph-mon[81667]: Updating compute-1:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 02:07:41 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/4176655533' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 31 02:07:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:07:41 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 31 02:07:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e35 e35: 3 total, 3 up, 3 in
Jan 31 02:07:41 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 31 02:07:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:42 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/4176655533' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 02:07:43 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 31 02:07:43 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 31 02:07:44 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/1645037556' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 31 02:07:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e36 e36: 3 total, 3 up, 3 in
Jan 31 02:07:45 np0005603609 ceph-mon[81667]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 02:07:45 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/1645037556' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 02:07:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e37 e37: 3 total, 3 up, 3 in
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[5.1c( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[5.1f( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[5.10( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[3.16( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[5.11( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[3.14( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[3.13( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[5.15( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[5.16( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[3.10( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[3.f( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[5.9( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[3.c( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[3.d( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[5.7( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[3.3( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[3.5( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[5.2( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[5.1( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[5.f( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[3.a( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[5.1b( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[3.1c( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[5.18( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.200383186s) [0] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.406875610s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.200145721s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.406768799s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.200255394s) [0] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.406875610s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.200098038s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.406768799s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.141762733s) [0] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.348449707s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.141630173s) [0] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.348449707s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.199757576s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.406723022s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.199743271s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.406639099s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.199644089s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.406639099s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.199797630s) [0] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.406906128s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.199601173s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.406738281s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.199576378s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.406723022s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.199765205s) [0] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.406906128s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.199547768s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.406738281s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.199408531s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.406753540s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.199366570s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.406753540s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[4.13( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.199273109s) [0] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.406745911s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.199110985s) [0] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.406745911s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.199026108s) [0] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.406745911s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.199227333s) [0] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.406745911s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[4.a( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[4.c( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.198324203s) [0] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.406852722s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.198293686s) [0] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.406852722s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[4.d( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.198054314s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.406997681s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.198000908s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.406974792s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.198028564s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.406997681s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.197955132s) [0] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.406860352s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.198073387s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.407127380s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.197976112s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.406974792s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.197973251s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.407058716s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.197939873s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.407058716s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.198040009s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.407127380s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[4.5( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.197731018s) [0] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.406860352s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.197430611s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.407020569s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.197605133s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.407226562s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.197363853s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.407020569s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.197544098s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.407226562s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.197670937s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.407379150s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.197632790s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.407379150s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.197304726s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.407127380s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[4.e( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.197259903s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.407127380s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.197310448s) [0] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.407302856s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[4.1a( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.197195053s) [0] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.407302856s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.196729660s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active pruub 88.407157898s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37 pruub=12.196693420s) [2] r=-1 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.407157898s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[4.18( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 37 pg[4.1b( empty local-lis/les=0/0 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:07:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:07:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:07:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:07:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:07:46 np0005603609 ceph-mon[81667]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 02:07:46 np0005603609 ceph-mon[81667]: Cluster is now healthy
Jan 31 02:07:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:07:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:07:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:07:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 31 02:07:46 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 31 02:07:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e38 e38: 3 total, 3 up, 3 in
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[5.18( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[5.1b( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[5.1( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[5.2( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[5.f( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[5.7( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[5.16( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[5.15( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[5.9( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[5.10( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[5.1c( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[5.1f( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[4.13( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[5.11( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [1] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[4.a( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[4.5( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[4.e( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[4.c( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[4.1a( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[4.d( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[4.1b( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 38 pg[4.18( empty local-lis/les=37/38 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:07:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:48 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 31 02:07:48 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 31 02:07:49 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.14 deep-scrub starts
Jan 31 02:07:49 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/3678408126' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 02:07:49 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/3678408126' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 02:07:49 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.14 deep-scrub ok
Jan 31 02:07:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:50 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/3677494513' entity='client.admin' 
Jan 31 02:07:51 np0005603609 ceph-mon[81667]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 31 02:07:51 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 02:07:51 np0005603609 ceph-mon[81667]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 02:07:51 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:51 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:51 np0005603609 ceph-mon[81667]: Reconfiguring mgr.compute-0.hhuoua (monmap changed)...
Jan 31 02:07:51 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhuoua", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 02:07:51 np0005603609 ceph-mon[81667]: Reconfiguring daemon mgr.compute-0.hhuoua on compute-0
Jan 31 02:07:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:52 np0005603609 ceph-mon[81667]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 02:07:52 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:52 np0005603609 ceph-mon[81667]: Saving service ingress.rgw.default spec with placement count:2
Jan 31 02:07:52 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:52 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:52 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:52 np0005603609 ceph-mon[81667]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 31 02:07:52 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 02:07:52 np0005603609 ceph-mon[81667]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 31 02:07:54 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:54 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:54 np0005603609 ceph-mon[81667]: Reconfiguring osd.0 (monmap changed)...
Jan 31 02:07:54 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 02:07:54 np0005603609 ceph-mon[81667]: Reconfiguring daemon osd.0 on compute-0
Jan 31 02:07:55 np0005603609 podman[83519]: 2026-01-31 07:07:55.204671082 +0000 UTC m=+0.020824817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:07:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).mds e2 new map
Jan 31 02:07:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:07:55.106295+0000#012modified#0112026-01-31T07:07:55.106335+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Jan 31 02:07:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e39 e39: 3 total, 3 up, 3 in
Jan 31 02:07:55 np0005603609 podman[83519]: 2026-01-31 07:07:55.408637027 +0000 UTC m=+0.224790742 container create 3ea5f9ecb7447e10f1d37b2274fb342530762aec0e852b5e9bc872b40ac0b339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:07:55 np0005603609 systemd[72682]: Starting Mark boot as successful...
Jan 31 02:07:55 np0005603609 systemd[72682]: Finished Mark boot as successful.
Jan 31 02:07:55 np0005603609 systemd[1]: Started libpod-conmon-3ea5f9ecb7447e10f1d37b2274fb342530762aec0e852b5e9bc872b40ac0b339.scope.
Jan 31 02:07:55 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:07:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:55 np0005603609 ceph-mon[81667]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 31 02:07:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 02:07:55 np0005603609 ceph-mon[81667]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 31 02:07:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 31 02:07:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 31 02:07:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 31 02:07:55 np0005603609 ceph-mon[81667]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 02:07:55 np0005603609 ceph-mon[81667]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 02:07:55 np0005603609 podman[83519]: 2026-01-31 07:07:55.692605776 +0000 UTC m=+0.508759571 container init 3ea5f9ecb7447e10f1d37b2274fb342530762aec0e852b5e9bc872b40ac0b339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_faraday, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:07:55 np0005603609 podman[83519]: 2026-01-31 07:07:55.703470438 +0000 UTC m=+0.519624183 container start 3ea5f9ecb7447e10f1d37b2274fb342530762aec0e852b5e9bc872b40ac0b339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:07:55 np0005603609 elated_faraday[83537]: 167 167
Jan 31 02:07:55 np0005603609 systemd[1]: libpod-3ea5f9ecb7447e10f1d37b2274fb342530762aec0e852b5e9bc872b40ac0b339.scope: Deactivated successfully.
Jan 31 02:07:55 np0005603609 podman[83519]: 2026-01-31 07:07:55.78291825 +0000 UTC m=+0.599072045 container attach 3ea5f9ecb7447e10f1d37b2274fb342530762aec0e852b5e9bc872b40ac0b339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:07:55 np0005603609 podman[83519]: 2026-01-31 07:07:55.783475002 +0000 UTC m=+0.599628747 container died 3ea5f9ecb7447e10f1d37b2274fb342530762aec0e852b5e9bc872b40ac0b339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_faraday, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 02:07:55 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Jan 31 02:07:55 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Jan 31 02:07:55 np0005603609 systemd[1]: var-lib-containers-storage-overlay-c78d9bea479d3a7272c122d8fa790a34a21f04d8276471aa6667540f946dd61e-merged.mount: Deactivated successfully.
Jan 31 02:07:56 np0005603609 podman[83519]: 2026-01-31 07:07:56.013160192 +0000 UTC m=+0.829313917 container remove 3ea5f9ecb7447e10f1d37b2274fb342530762aec0e852b5e9bc872b40ac0b339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 02:07:56 np0005603609 systemd[1]: libpod-conmon-3ea5f9ecb7447e10f1d37b2274fb342530762aec0e852b5e9bc872b40ac0b339.scope: Deactivated successfully.
Jan 31 02:07:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 02:07:56 np0005603609 ceph-mon[81667]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 02:07:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 02:07:56 np0005603609 podman[83674]: 2026-01-31 07:07:56.654916527 +0000 UTC m=+0.063501597 container create c3696d3f69695ca4c86fa053f03d97b7ac77dd2e36f7f1217ebb357fa4b2fbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:07:56 np0005603609 systemd[1]: Started libpod-conmon-c3696d3f69695ca4c86fa053f03d97b7ac77dd2e36f7f1217ebb357fa4b2fbb5.scope.
Jan 31 02:07:56 np0005603609 podman[83674]: 2026-01-31 07:07:56.626508257 +0000 UTC m=+0.035093417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:07:56 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:07:56 np0005603609 podman[83674]: 2026-01-31 07:07:56.740575256 +0000 UTC m=+0.149160356 container init c3696d3f69695ca4c86fa053f03d97b7ac77dd2e36f7f1217ebb357fa4b2fbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:07:56 np0005603609 podman[83674]: 2026-01-31 07:07:56.746981397 +0000 UTC m=+0.155566507 container start c3696d3f69695ca4c86fa053f03d97b7ac77dd2e36f7f1217ebb357fa4b2fbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:07:56 np0005603609 objective_roentgen[83690]: 167 167
Jan 31 02:07:56 np0005603609 systemd[1]: libpod-c3696d3f69695ca4c86fa053f03d97b7ac77dd2e36f7f1217ebb357fa4b2fbb5.scope: Deactivated successfully.
Jan 31 02:07:56 np0005603609 podman[83674]: 2026-01-31 07:07:56.753239605 +0000 UTC m=+0.161824705 container attach c3696d3f69695ca4c86fa053f03d97b7ac77dd2e36f7f1217ebb357fa4b2fbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 02:07:56 np0005603609 podman[83674]: 2026-01-31 07:07:56.754846387 +0000 UTC m=+0.163431467 container died c3696d3f69695ca4c86fa053f03d97b7ac77dd2e36f7f1217ebb357fa4b2fbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:07:56 np0005603609 systemd[1]: var-lib-containers-storage-overlay-664e3f1a2a4e4233e6918642ff7903773bd653bdfb69151c02175e0296d5771d-merged.mount: Deactivated successfully.
Jan 31 02:07:56 np0005603609 podman[83674]: 2026-01-31 07:07:56.826611074 +0000 UTC m=+0.235196174 container remove c3696d3f69695ca4c86fa053f03d97b7ac77dd2e36f7f1217ebb357fa4b2fbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 31 02:07:56 np0005603609 systemd[1]: libpod-conmon-c3696d3f69695ca4c86fa053f03d97b7ac77dd2e36f7f1217ebb357fa4b2fbb5.scope: Deactivated successfully.
Jan 31 02:07:57 np0005603609 podman[83832]: 2026-01-31 07:07:57.594378611 +0000 UTC m=+0.043926217 container create e5bd5facc2788c86bede62904dfc7da236a685def34cfa493d137b479710bcc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 02:07:57 np0005603609 ceph-mon[81667]: Reconfiguring osd.1 (monmap changed)...
Jan 31 02:07:57 np0005603609 ceph-mon[81667]: Reconfiguring daemon osd.1 on compute-1
Jan 31 02:07:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 02:07:57 np0005603609 systemd[1]: Started libpod-conmon-e5bd5facc2788c86bede62904dfc7da236a685def34cfa493d137b479710bcc8.scope.
Jan 31 02:07:57 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:07:57 np0005603609 podman[83832]: 2026-01-31 07:07:57.571636847 +0000 UTC m=+0.021184483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:07:57 np0005603609 podman[83832]: 2026-01-31 07:07:57.676174521 +0000 UTC m=+0.125722157 container init e5bd5facc2788c86bede62904dfc7da236a685def34cfa493d137b479710bcc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:07:57 np0005603609 podman[83832]: 2026-01-31 07:07:57.681755955 +0000 UTC m=+0.131303561 container start e5bd5facc2788c86bede62904dfc7da236a685def34cfa493d137b479710bcc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 02:07:57 np0005603609 compassionate_mestorf[83849]: 167 167
Jan 31 02:07:57 np0005603609 systemd[1]: libpod-e5bd5facc2788c86bede62904dfc7da236a685def34cfa493d137b479710bcc8.scope: Deactivated successfully.
Jan 31 02:07:57 np0005603609 conmon[83849]: conmon e5bd5facc2788c86bede <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5bd5facc2788c86bede62904dfc7da236a685def34cfa493d137b479710bcc8.scope/container/memory.events
Jan 31 02:07:57 np0005603609 podman[83832]: 2026-01-31 07:07:57.688186647 +0000 UTC m=+0.137734283 container attach e5bd5facc2788c86bede62904dfc7da236a685def34cfa493d137b479710bcc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:07:57 np0005603609 podman[83832]: 2026-01-31 07:07:57.688553104 +0000 UTC m=+0.138100710 container died e5bd5facc2788c86bede62904dfc7da236a685def34cfa493d137b479710bcc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:07:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:07:57 np0005603609 systemd[1]: var-lib-containers-storage-overlay-9da28962a04480854d124919735a699afb7d3f4866ce10bb9d4b7f72d10a0bfa-merged.mount: Deactivated successfully.
Jan 31 02:07:57 np0005603609 podman[83832]: 2026-01-31 07:07:57.747319014 +0000 UTC m=+0.196866640 container remove e5bd5facc2788c86bede62904dfc7da236a685def34cfa493d137b479710bcc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:07:57 np0005603609 systemd[1]: libpod-conmon-e5bd5facc2788c86bede62904dfc7da236a685def34cfa493d137b479710bcc8.scope: Deactivated successfully.
Jan 31 02:07:57 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 31 02:07:57 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 31 02:07:58 np0005603609 ceph-mon[81667]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 02:07:58 np0005603609 ceph-mon[81667]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 31 02:07:58 np0005603609 ceph-mon[81667]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 31 02:07:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 02:07:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.wmgest", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 02:07:59 np0005603609 ceph-mon[81667]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 31 02:07:59 np0005603609 ceph-mon[81667]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 31 02:07:59 np0005603609 ceph-mon[81667]: Reconfiguring mgr.compute-2.wmgest (monmap changed)...
Jan 31 02:07:59 np0005603609 ceph-mon[81667]: Reconfiguring daemon mgr.compute-2.wmgest on compute-2
Jan 31 02:07:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:07:59 np0005603609 podman[84041]: 2026-01-31 07:07:59.791290124 +0000 UTC m=+0.060748361 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:07:59 np0005603609 podman[84041]: 2026-01-31 07:07:59.892090452 +0000 UTC m=+0.161548669 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 02:08:00 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/1696344557' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 31 02:08:00 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/1696344557' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 02:08:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:08:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:08:00 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.1a deep-scrub starts
Jan 31 02:08:00 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 2.1a deep-scrub ok
Jan 31 02:08:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:04 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/1362729985' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 31 02:08:04 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 31 02:08:04 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 31 02:08:06 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:06 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:06 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.kddbks", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 02:08:06 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.kddbks", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 02:08:06 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:07 np0005603609 ceph-mon[81667]: Deploying daemon rgw.rgw.compute-2.kddbks on compute-2
Jan 31 02:08:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zjvjex", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 02:08:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zjvjex", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 02:08:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:08 np0005603609 podman[84263]: 2026-01-31 07:08:08.036940767 +0000 UTC m=+0.039772903 container create 917ae052e49c6a7d720767574e15d7ed6d8385156819772210394c2d7186d71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_grothendieck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:08:08 np0005603609 systemd[1]: Started libpod-conmon-917ae052e49c6a7d720767574e15d7ed6d8385156819772210394c2d7186d71f.scope.
Jan 31 02:08:08 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:08:08 np0005603609 podman[84263]: 2026-01-31 07:08:08.018866927 +0000 UTC m=+0.021699083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:08:08 np0005603609 podman[84263]: 2026-01-31 07:08:08.120225238 +0000 UTC m=+0.123057394 container init 917ae052e49c6a7d720767574e15d7ed6d8385156819772210394c2d7186d71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 02:08:08 np0005603609 podman[84263]: 2026-01-31 07:08:08.126074677 +0000 UTC m=+0.128906813 container start 917ae052e49c6a7d720767574e15d7ed6d8385156819772210394c2d7186d71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_grothendieck, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 02:08:08 np0005603609 brave_grothendieck[84280]: 167 167
Jan 31 02:08:08 np0005603609 systemd[1]: libpod-917ae052e49c6a7d720767574e15d7ed6d8385156819772210394c2d7186d71f.scope: Deactivated successfully.
Jan 31 02:08:08 np0005603609 conmon[84280]: conmon 917ae052e49c6a7d7207 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-917ae052e49c6a7d720767574e15d7ed6d8385156819772210394c2d7186d71f.scope/container/memory.events
Jan 31 02:08:08 np0005603609 podman[84263]: 2026-01-31 07:08:08.132803195 +0000 UTC m=+0.135635361 container attach 917ae052e49c6a7d720767574e15d7ed6d8385156819772210394c2d7186d71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_grothendieck, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 02:08:08 np0005603609 podman[84263]: 2026-01-31 07:08:08.133929188 +0000 UTC m=+0.136761364 container died 917ae052e49c6a7d720767574e15d7ed6d8385156819772210394c2d7186d71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_grothendieck, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:08:08 np0005603609 systemd[1]: var-lib-containers-storage-overlay-854327111bde1483921aaaca13731b1ce19576cf867868d28d35cf6b6cad9549-merged.mount: Deactivated successfully.
Jan 31 02:08:08 np0005603609 podman[84263]: 2026-01-31 07:08:08.195792381 +0000 UTC m=+0.198624517 container remove 917ae052e49c6a7d720767574e15d7ed6d8385156819772210394c2d7186d71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 02:08:08 np0005603609 systemd[1]: libpod-conmon-917ae052e49c6a7d720767574e15d7ed6d8385156819772210394c2d7186d71f.scope: Deactivated successfully.
Jan 31 02:08:08 np0005603609 systemd[1]: Reloading.
Jan 31 02:08:08 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:08:08 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:08:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Jan 31 02:08:08 np0005603609 systemd[1]: Reloading.
Jan 31 02:08:08 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:08:08 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:08:08 np0005603609 systemd[1]: Starting Ceph rgw.rgw.compute-1.zjvjex for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 02:08:08 np0005603609 ceph-mon[81667]: Deploying daemon rgw.rgw.compute-1.zjvjex on compute-1
Jan 31 02:08:08 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.102:0/2780650631' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 31 02:08:08 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 31 02:08:08 np0005603609 podman[84424]: 2026-01-31 07:08:08.927925482 +0000 UTC m=+0.037656200 container create 22b34e2c4a834803c8846bcb6f5ecaeb22a77050e342fdf6064e05785771d96e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-rgw-rgw-compute-1-zjvjex, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:08:08 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6161eabc3946f280086af2cde0d594cfcfc1676bb6308de510f390ff0086d6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:08 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6161eabc3946f280086af2cde0d594cfcfc1676bb6308de510f390ff0086d6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:08 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6161eabc3946f280086af2cde0d594cfcfc1676bb6308de510f390ff0086d6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:08 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6161eabc3946f280086af2cde0d594cfcfc1676bb6308de510f390ff0086d6a/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.zjvjex supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:08 np0005603609 podman[84424]: 2026-01-31 07:08:08.983295902 +0000 UTC m=+0.093026640 container init 22b34e2c4a834803c8846bcb6f5ecaeb22a77050e342fdf6064e05785771d96e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-rgw-rgw-compute-1-zjvjex, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 02:08:08 np0005603609 podman[84424]: 2026-01-31 07:08:08.987437257 +0000 UTC m=+0.097167975 container start 22b34e2c4a834803c8846bcb6f5ecaeb22a77050e342fdf6064e05785771d96e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-rgw-rgw-compute-1-zjvjex, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 02:08:08 np0005603609 bash[84424]: 22b34e2c4a834803c8846bcb6f5ecaeb22a77050e342fdf6064e05785771d96e
Jan 31 02:08:08 np0005603609 podman[84424]: 2026-01-31 07:08:08.911369814 +0000 UTC m=+0.021100552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:08:08 np0005603609 systemd[1]: Started Ceph rgw.rgw.compute-1.zjvjex for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 02:08:09 np0005603609 radosgw[84443]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 31 02:08:09 np0005603609 radosgw[84443]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 31 02:08:09 np0005603609 radosgw[84443]: framework: beast
Jan 31 02:08:09 np0005603609 radosgw[84443]: framework conf key: endpoint, val: 192.168.122.101:8082
Jan 31 02:08:09 np0005603609 radosgw[84443]: init_numa not setting numa affinity
Jan 31 02:08:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Jan 31 02:08:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.njduba", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 02:08:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.njduba", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 02:08:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:10 np0005603609 ceph-mon[81667]: Deploying daemon rgw.rgw.compute-0.njduba on compute-0
Jan 31 02:08:10 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 02:08:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Jan 31 02:08:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 31 02:08:10 np0005603609 ceph-mon[81667]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1217523012' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 02:08:11 np0005603609 ceph-mon[81667]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 02:08:11 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 02:08:11 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.101:0/1217523012' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 02:08:11 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.102:0/2780650631' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 02:08:11 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 02:08:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:11 np0005603609 ceph-mon[81667]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 02:08:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.ihffma", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 02:08:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.ihffma", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 02:08:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Jan 31 02:08:12 np0005603609 ceph-mon[81667]: Deploying daemon mds.cephfs.compute-2.ihffma on compute-2
Jan 31 02:08:12 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 02:08:12 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 02:08:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.voybui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 02:08:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.voybui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 02:08:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Jan 31 02:08:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 31 02:08:12 np0005603609 ceph-mon[81667]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1217523012' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 02:08:12 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 44 pg[10.0( empty local-lis/les=0/0 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [1] r=0 lpr=44 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).mds e3 new map
Jan 31 02:08:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:07:55.106295+0000#012modified#0112026-01-31T07:07:55.106335+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.ihffma{-1:24157} state up:standby seq 1 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 02:08:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).mds e4 new map
Jan 31 02:08:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:07:55.106295+0000#012modified#0112026-01-31T07:08:13.111386+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.ihffma{0:24157} state up:creating seq 1 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Jan 31 02:08:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Jan 31 02:08:13 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 45 pg[10.0( empty local-lis/les=44/45 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [1] r=0 lpr=44 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:13 np0005603609 ceph-mon[81667]: Deploying daemon mds.cephfs.compute-0.voybui on compute-0
Jan 31 02:08:13 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/1455749916' entity='client.rgw.rgw.compute-0.njduba' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 02:08:13 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.102:0/2780650631' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 02:08:13 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 02:08:13 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 02:08:13 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.101:0/1217523012' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 02:08:13 np0005603609 ceph-mon[81667]: daemon mds.cephfs.compute-2.ihffma assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 02:08:13 np0005603609 ceph-mon[81667]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 02:08:13 np0005603609 ceph-mon[81667]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 02:08:13 np0005603609 ceph-mon[81667]: daemon mds.cephfs.compute-2.ihffma is now active in filesystem cephfs as rank 0
Jan 31 02:08:13 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 31 02:08:13 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 31 02:08:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).mds e5 new map
Jan 31 02:08:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:07:55.106295+0000#012modified#0112026-01-31T07:08:14.127061+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.ihffma{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.voybui{-1:14406} state up:standby seq 1 addr [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 02:08:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).mds e6 new map
Jan 31 02:08:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).mds e6 print_map#012e6#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:07:55.106295+0000#012modified#0112026-01-31T07:08:14.127061+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.ihffma{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.voybui{-1:14406} state up:standby seq 1 addr [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 02:08:14 np0005603609 podman[84655]: 2026-01-31 07:08:14.663857803 +0000 UTC m=+0.105517816 container create 7ccf58851bfe114bc38709234c7f061d3234b98d36353db11de6cdf103642d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:08:14 np0005603609 podman[84655]: 2026-01-31 07:08:14.578374907 +0000 UTC m=+0.020034950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:08:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Jan 31 02:08:14 np0005603609 systemd[1]: Started libpod-conmon-7ccf58851bfe114bc38709234c7f061d3234b98d36353db11de6cdf103642d37.scope.
Jan 31 02:08:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 31 02:08:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1299930726' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 02:08:14 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:08:14 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/1455749916' entity='client.rgw.rgw.compute-0.njduba' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 02:08:14 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 02:08:14 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 02:08:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.dqeaqy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 02:08:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.dqeaqy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 02:08:14 np0005603609 podman[84655]: 2026-01-31 07:08:14.847910752 +0000 UTC m=+0.289570795 container init 7ccf58851bfe114bc38709234c7f061d3234b98d36353db11de6cdf103642d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:08:14 np0005603609 podman[84655]: 2026-01-31 07:08:14.854547828 +0000 UTC m=+0.296207861 container start 7ccf58851bfe114bc38709234c7f061d3234b98d36353db11de6cdf103642d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:08:14 np0005603609 distracted_rhodes[84671]: 167 167
Jan 31 02:08:14 np0005603609 systemd[1]: libpod-7ccf58851bfe114bc38709234c7f061d3234b98d36353db11de6cdf103642d37.scope: Deactivated successfully.
Jan 31 02:08:14 np0005603609 podman[84655]: 2026-01-31 07:08:14.873579336 +0000 UTC m=+0.315239349 container attach 7ccf58851bfe114bc38709234c7f061d3234b98d36353db11de6cdf103642d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 02:08:14 np0005603609 podman[84655]: 2026-01-31 07:08:14.874465074 +0000 UTC m=+0.316125087 container died 7ccf58851bfe114bc38709234c7f061d3234b98d36353db11de6cdf103642d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:08:14 np0005603609 systemd[1]: var-lib-containers-storage-overlay-2acbc1683683f7d3747f70417bd59d3269fd17d160fe2cac44049b8fa57226c3-merged.mount: Deactivated successfully.
Jan 31 02:08:14 np0005603609 podman[84655]: 2026-01-31 07:08:14.979209443 +0000 UTC m=+0.420869466 container remove 7ccf58851bfe114bc38709234c7f061d3234b98d36353db11de6cdf103642d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 02:08:14 np0005603609 systemd[1]: libpod-conmon-7ccf58851bfe114bc38709234c7f061d3234b98d36353db11de6cdf103642d37.scope: Deactivated successfully.
Jan 31 02:08:15 np0005603609 systemd[1]: Reloading.
Jan 31 02:08:15 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:08:15 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:08:15 np0005603609 systemd[1]: Reloading.
Jan 31 02:08:15 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:08:15 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Jan 31 02:08:15 np0005603609 systemd[1]: Starting Ceph mds.cephfs.compute-1.dqeaqy for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1299930726' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: Deploying daemon mds.cephfs.compute-1.dqeaqy on compute-1
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.102:0/616184540' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/1494637509' entity='client.rgw.rgw.compute-0.njduba' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.101:0/1299930726' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/1494637509' entity='client.rgw.rgw.compute-0.njduba' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.102:0/616184540' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/1494637509' entity='client.rgw.rgw.compute-0.njduba' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 02:08:15 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.101:0/1299930726' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 02:08:16 np0005603609 podman[84818]: 2026-01-31 07:08:16.037589955 +0000 UTC m=+0.052154455 container create 409bd6abd95502cdf9d3eefe2ab5eb5d6909eb601a9d35cb8dc30536ecec42f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mds-cephfs-compute-1-dqeaqy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:08:16 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6e0415b7f6fcce54dbe47c3459e8c777a480ecf6bb467a9f5b7d3c7a5b26b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:16 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6e0415b7f6fcce54dbe47c3459e8c777a480ecf6bb467a9f5b7d3c7a5b26b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:16 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6e0415b7f6fcce54dbe47c3459e8c777a480ecf6bb467a9f5b7d3c7a5b26b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:16 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6e0415b7f6fcce54dbe47c3459e8c777a480ecf6bb467a9f5b7d3c7a5b26b1/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.dqeaqy supports timestamps until 2038 (0x7fffffff)
Jan 31 02:08:16 np0005603609 podman[84818]: 2026-01-31 07:08:16.00792187 +0000 UTC m=+0.022486450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:08:16 np0005603609 podman[84818]: 2026-01-31 07:08:16.142158141 +0000 UTC m=+0.156722651 container init 409bd6abd95502cdf9d3eefe2ab5eb5d6909eb601a9d35cb8dc30536ecec42f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mds-cephfs-compute-1-dqeaqy, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Jan 31 02:08:16 np0005603609 podman[84818]: 2026-01-31 07:08:16.148049131 +0000 UTC m=+0.162613621 container start 409bd6abd95502cdf9d3eefe2ab5eb5d6909eb601a9d35cb8dc30536ecec42f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mds-cephfs-compute-1-dqeaqy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 02:08:16 np0005603609 ceph-mds[84837]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 02:08:16 np0005603609 ceph-mds[84837]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 31 02:08:16 np0005603609 ceph-mds[84837]: main not setting numa affinity
Jan 31 02:08:16 np0005603609 ceph-mds[84837]: pidfile_write: ignore empty --pid-file
Jan 31 02:08:16 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mds-cephfs-compute-1-dqeaqy[84833]: starting mds.cephfs.compute-1.dqeaqy at 
Jan 31 02:08:16 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Updating MDS map to version 6 from mon.2
Jan 31 02:08:16 np0005603609 bash[84818]: 409bd6abd95502cdf9d3eefe2ab5eb5d6909eb601a9d35cb8dc30536ecec42f7
Jan 31 02:08:16 np0005603609 systemd[1]: Started Ceph mds.cephfs.compute-1.dqeaqy for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 02:08:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Jan 31 02:08:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:16 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 02:08:16 np0005603609 ceph-mon[81667]: from='client.? 192.168.122.100:0/1494637509' entity='client.rgw.rgw.compute-0.njduba' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 02:08:16 np0005603609 ceph-mon[81667]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 02:08:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:16 np0005603609 ceph-mon[81667]: Deploying daemon haproxy.rgw.default.compute-0.cwtxbj on compute-0
Jan 31 02:08:16 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 31 02:08:16 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 31 02:08:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).mds e7 new map
Jan 31 02:08:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).mds e7 print_map#012e7#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:07:55.106295+0000#012modified#0112026-01-31T07:08:14.127061+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.ihffma{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.voybui{-1:14406} state up:standby seq 1 addr [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.dqeaqy{-1:24170} state up:standby seq 1 addr [v2:192.168.122.101:6804/858174765,v1:192.168.122.101:6805/858174765] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 02:08:17 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Updating MDS map to version 7 from mon.2
Jan 31 02:08:17 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Monitors have assigned me to become a standby.
Jan 31 02:08:17 np0005603609 radosgw[84443]: LDAP not started since no server URIs were provided in the configuration.
Jan 31 02:08:17 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-rgw-rgw-compute-1-zjvjex[84439]: 2026-01-31T07:08:17.236+0000 7f511293a940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 31 02:08:17 np0005603609 radosgw[84443]: framework: beast
Jan 31 02:08:17 np0005603609 radosgw[84443]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 31 02:08:17 np0005603609 radosgw[84443]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 31 02:08:17 np0005603609 radosgw[84443]: starting handler: beast
Jan 31 02:08:17 np0005603609 radosgw[84443]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 02:08:17 np0005603609 radosgw[84443]: mgrc service_daemon_register rgw.24158 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.zjvjex,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864292,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=12135664-ebbb-4151-b9c5-06f6a22e158a,zone_name=default,zonegroup_id=50d9470b-8f69-45ea-ac36-a7be21625514,zonegroup_name=default}
Jan 31 02:08:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:17 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 31 02:08:17 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 31 02:08:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).mds e8 new map
Jan 31 02:08:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).mds e8 print_map#012e8#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:07:55.106295+0000#012modified#0112026-01-31T07:08:17.997329+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.ihffma{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.voybui{-1:14406} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.dqeaqy{-1:24170} state up:standby seq 1 addr [v2:192.168.122.101:6804/858174765,v1:192.168.122.101:6805/858174765] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 02:08:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Jan 31 02:08:18 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2513510612' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 31 02:08:19 np0005603609 ceph-mon[81667]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 02:08:19 np0005603609 ceph-mon[81667]: Cluster is now healthy
Jan 31 02:08:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).mds e9 new map
Jan 31 02:08:20 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Updating MDS map to version 9 from mon.2
Jan 31 02:08:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).mds e9 print_map#012e9#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:07:55.106295+0000#012modified#0112026-01-31T07:08:17.997329+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.ihffma{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.voybui{-1:14406} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.dqeaqy{-1:24170} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/858174765,v1:192.168.122.101:6805/858174765] compat {c=[1],r=[1],i=[7ff]}]
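The `print_map` entries above arrive as single log lines because the syslog pipeline octal-escapes embedded control characters: `#012` is a newline (octal 012) and `#011` a tab (octal 011). A minimal sketch for turning such a payload back into the multi-line MDSMap dump the monitor actually emitted (the function name is my own, not a Ceph utility):

```python
# Reverse syslog's octal control-character escaping: #012 -> "\n", #011 -> "\t".
def decode_syslog_escapes(payload: str) -> str:
    return payload.replace("#012", "\n").replace("#011", "\t")

# A fragment of the e9 print_map line above, decoded back into tabular form.
sample = "print_map#012e9#012fs_name#011cephfs#012epoch#0118#012max_mds#0111"
print(decode_syslog_escapes(sample))
```

Run against a full `print_map` line, this recovers the per-filesystem fields (`epoch`, `max_mds`, `up`, the standby daemon list, and so on) one per line, matching what `ceph fs dump` would show at that epoch.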
Jan 31 02:08:21 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 31 02:08:21 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 31 02:08:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000040s ======
Jan 31 02:08:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:21.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000040s
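The `beast:` lines follow a fixed access-log layout: pointer, remote address, user, bracketed timestamp, quoted request line, status, bytes, then a `latency=` field. A sketch that pulls status and latency out of such a line (the group names are my own labels, assumed from the layout above, not radosgw's):

```python
import re

# Pattern for the beast access-log layout seen in these lines:
# 'beast: <ptr>: <ip> - <user> [<ts>] "<request>" <status> <bytes> ... latency=<sec>s'
LINE = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

sample = ('beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous '
          '[31/Jan/2026:07:08:21.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
          'latency=0.002000040s')
m = LINE.search(sample)
print(m.group("ip"), m.group("status"), float(m.group("latency")))
```

Filtering on `request == "HEAD / HTTP/1.0"` and `user == "anonymous"` would isolate the load-balancer health probes that dominate this stretch of the log from real client traffic.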
Jan 31 02:08:22 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 31 02:08:22 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 31 02:08:22 np0005603609 ceph-mon[81667]: Deploying daemon haproxy.rgw.default.compute-2.envbir on compute-2
Jan 31 02:08:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:23.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:25.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:26 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 31 02:08:26 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 31 02:08:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000020s ======
Jan 31 02:08:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:27.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 31 02:08:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:27.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:27 np0005603609 ceph-mon[81667]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 02:08:27 np0005603609 ceph-mon[81667]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 02:08:27 np0005603609 ceph-mon[81667]: Deploying daemon keepalived.rgw.default.compute-2.faavbs on compute-2
Jan 31 02:08:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 31 02:08:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Jan 31 02:08:28 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 31 02:08:28 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 31 02:08:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:29.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:29.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Jan 31 02:08:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 31 02:08:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:08:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 31 02:08:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Jan 31 02:08:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:08:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 31 02:08:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:08:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:31.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:31.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:31 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 31 02:08:31 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 31 02:08:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Jan 31 02:08:32 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 52 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=52 pruub=11.072503090s) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active pruub 133.406188965s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:32 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 52 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=52 pruub=11.072503090s) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown pruub 133.406188965s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:32 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:08:32 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:08:32 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:08:32 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:08:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:33.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.16( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.1e( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.1d( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.12( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.10( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.17( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.14( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.b( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.7( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.d( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.19( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=26/27 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:08:33 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:08:33 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:08:33 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:08:33 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:33 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:33 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:33 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
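The mon audit entries above carry each mgr command as a JSON array after `cmd=`; `dispatch` entries embed it bare, while `finished` entries wrap it in single quotes. A sketch for extracting the command so dispatch/finished pairs can be matched up (the regex and variable names are my own, assumed from the line shapes above):

```python
import json
import re

# The JSON array after cmd=; 'finished' lines add surrounding single quotes.
CMD = re.compile(r"cmd='?(\[.*\])'?")

line = ("from='mgr.14132 192.168.122.100:0/828660362' "
        "entity='mgr.compute-0.hhuoua' "
        'cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", '
        '"var": "pg_num", "val": "16"}]: dispatch')
cmd = json.loads(CMD.search(line).group(1))[0]
print(cmd["prefix"], cmd["pool"], cmd["var"], cmd["val"])
```

Keying on `(prefix, pool, var, val)` lets a log reader confirm that every `pg_num` / `pg_num_actual` change the autoscaler dispatched in this window also reached `finished`, as they all do here.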
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.1d( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.1e( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.12( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.10( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.17( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.b( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.14( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.7( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.16( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.0( empty local-lis/les=52/53 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.19( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.d( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=26/26 les/c/f=27/27/0 sis=52) [1] r=0 lpr=52 pi=[26,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:33.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Jan 31 02:08:34 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 54 pg[10.0( v 45'48 (0'0,45'48] local-lis/les=44/45 n=8 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=11.346794128s) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 45'47 mlcod 45'47 active pruub 135.697341919s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:34 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 54 pg[10.0( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=11.346794128s) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 45'47 mlcod 0'0 unknown pruub 135.697341919s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:34 np0005603609 ceph-mon[81667]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 02:08:34 np0005603609 ceph-mon[81667]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 02:08:34 np0005603609 ceph-mon[81667]: Deploying daemon keepalived.rgw.default.compute-0.rwjfwq on compute-0
Jan 31 02:08:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:08:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:08:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:08:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:08:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:08:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:08:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:35.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.1b( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.11( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.18( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.1( v 45'48 (0'0,45'48] local-lis/les=44/45 n=1 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.7( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.9( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.13( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.12( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.10( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.1f( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.1e( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.1c( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.1a( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.19( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.6( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.5( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.1d( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.4( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.b( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.8( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.a( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.c( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.d( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.e( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.f( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.2( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.3( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.14( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.15( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.16( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.17( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.11( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.7( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.1( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.9( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.18( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.13( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.12( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.10( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.1b( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.1f( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.1e( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.1c( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.1a( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.19( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.6( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.1d( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.5( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.4( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.b( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.d( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.c( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.e( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.0( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 45'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.f( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.8( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.3( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.14( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.2( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.15( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.a( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.17( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 55 pg[10.16( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:35.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:36 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:08:36 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Jan 31 02:08:36 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 31 02:08:36 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 31 02:08:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:37.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Jan 31 02:08:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:08:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:37.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:38 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 31 02:08:38 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 31 02:08:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:39.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:39 np0005603609 podman[85623]: 2026-01-31 07:08:39.232792334 +0000 UTC m=+0.067567320 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:08:39 np0005603609 podman[85623]: 2026-01-31 07:08:39.353350387 +0000 UTC m=+0.188125353 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 02:08:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:39.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:41.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:41.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:43.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Jan 31 02:08:43 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:08:43 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:08:43 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 02:08:43 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:08:43 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 02:08:43 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[11.1a( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[8.19( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[11.12( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[8.10( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[8.12( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[11.1e( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[11.1c( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[11.1d( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[11.1b( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[8.18( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[8.1b( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[8.4( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[11.7( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[11.4( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[11.5( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[8.8( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[11.f( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[11.1( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[8.17( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[11.14( empty local-lis/les=0/0 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[8.14( empty local-lis/les=0/0 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.1( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991904259s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 141.420120239s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.13( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991950035s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 141.420242310s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.18( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991886139s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 141.420196533s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.1( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991782188s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.420120239s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.12( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991938114s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 141.420272827s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.13( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991897106s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.420242310s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.18( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991839409s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.420196533s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.12( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991887569s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.420272827s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.1b( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991947651s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 141.420532227s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.1b( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991919994s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.420532227s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.1e( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991869926s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 141.420562744s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.1e( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991811275s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.420562744s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.19( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991744995s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 141.420578003s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.19( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991710186s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.420578003s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.4( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991752148s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 141.420837402s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.4( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991718292s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.420837402s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.5( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991523743s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 141.420822144s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.f( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991607189s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 141.420989990s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.8( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991627216s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 141.421035767s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.5( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991454601s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.420822144s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.8( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991594315s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.421035767s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.f( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991576672s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.420989990s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.15( v 57'51 (0'0,57'51] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991353989s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=55'49 lcod 57'50 mlcod 57'50 active pruub 141.421081543s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.2( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991156101s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 141.421066284s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.14( v 57'51 (0'0,57'51] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991121292s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=55'49 lcod 57'50 mlcod 57'50 active pruub 141.421066284s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.2( v 45'48 (0'0,45'48] local-lis/les=54/55 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991117001s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.421066284s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.14( v 57'51 (0'0,57'51] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991051197s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=55'49 lcod 57'50 mlcod 0'0 unknown NOTIFY pruub 141.421066284s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.3( v 57'51 (0'0,57'51] local-lis/les=54/55 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.991313934s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=55'49 lcod 57'50 mlcod 57'50 active pruub 141.421051025s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.1b( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.932502747s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362792969s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.10( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.990633011s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 141.420318604s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.1b( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.932473183s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362792969s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.15( v 57'51 (0'0,57'51] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.990845203s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=55'49 lcod 57'50 mlcod 0'0 unknown NOTIFY pruub 141.421081543s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.10( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.989934921s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.420318604s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.18( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.932329178s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362731934s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.18( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.932292938s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362731934s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.3( v 57'51 (0'0,57'51] local-lis/les=54/55 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=7.990498066s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=55'49 lcod 57'50 mlcod 0'0 unknown NOTIFY pruub 141.421051025s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.e( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.932032585s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362747192s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.e( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.932002068s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362747192s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.f( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.931896210s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362655640s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.f( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.931856155s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362655640s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.2( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.931538582s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362640381s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.2( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.931499481s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362640381s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.3( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.931433678s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362594604s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.3( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.931376457s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362594604s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.5( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.931122780s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362548828s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.5( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.931086540s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362548828s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.8( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.930691719s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362396240s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.8( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.930661201s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362396240s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.9( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.930857658s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362609863s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.b( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.930585861s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362350464s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.6( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.930743217s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362426758s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.b( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.930544853s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362350464s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.9( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.930814743s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362609863s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.6( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.930439949s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362426758s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.11( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.930267334s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362335205s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.14( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.930319786s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362411499s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.14( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.930293083s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362411499s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.11( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.930238724s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362335205s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.10( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.929828644s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362319946s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.10( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.929780006s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362319946s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.1d( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.929489136s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362106323s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.1d( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.929391861s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362106323s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.1f( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.929898262s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362777710s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.1f( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.929860115s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362777710s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.1e( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.928991318s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362075806s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.4( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.929343224s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362411499s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.a( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.928903580s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362045288s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.4( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.929265976s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362411499s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.a( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.928873062s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362045288s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.13( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.928872108s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362152100s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.1e( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.928946495s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362075806s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.16( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.929080009s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active pruub 147.362472534s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.16( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.929050446s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362472534s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.11( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=15.975890160s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 149.409515381s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[10.11( v 45'48 (0'0,45'48] local-lis/les=54/55 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=15.975848198s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.409515381s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 58 pg[7.13( empty local-lis/les=52/53 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.928808212s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 147.362152100s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:43.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[8.14( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[6.6( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[6.2( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[6.e( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[6.a( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[8.17( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[11.5( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[11.7( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[8.4( v 41'8 (0'0,41'8] local-lis/les=58/59 n=1 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[8.8( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[8.18( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[8.1b( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=41'8 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[11.1d( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[8.10( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[8.12( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[8.19( v 41'8 (0'0,41'8] local-lis/les=58/59 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58) [1] r=0 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:44 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=58/59 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:08:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:45.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:08:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:08:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:08:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 02:08:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:08:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 02:08:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:08:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 02:08:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 02:08:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Jan 31 02:08:45 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 60 pg[6.6( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=59/60 n=2 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=48'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:45 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 60 pg[6.e( v 48'39 lc 44'19 (0'0,48'39] local-lis/les=59/60 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:45.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:45 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 60 pg[6.2( v 48'39 (0'0,48'39] local-lis/les=59/60 n=2 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:45 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 60 pg[6.a( v 48'39 (0'0,48'39] local-lis/les=59/60 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[50,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:08:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:08:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:47.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:47.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:49.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:49.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:08:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:51.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:51.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:52 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 31 02:08:52 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 31 02:08:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Jan 31 02:08:52 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 31 02:08:52 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 31 02:08:52 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=58/58 les/c/f=59/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:52 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 61 pg[6.b( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:52 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 61 pg[6.3( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:52 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 61 pg[6.7( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:53.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:53.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:53 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 02:08:53 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 02:08:53 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:53 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:53 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 31 02:08:53 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 31 02:08:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Jan 31 02:08:54 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 62 pg[6.3( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=61/62 n=2 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:54 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 62 pg[6.b( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=61/62 n=1 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:54 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 62 pg[6.f( v 48'39 lc 44'1 (0'0,48'39] local-lis/les=61/62 n=1 ec=50/22 lis/c=58/58 les/c/f=59/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:54 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 62 pg[6.7( v 48'39 lc 44'21 (0'0,48'39] local-lis/les=61/62 n=1 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:54 np0005603609 podman[86101]: 2026-01-31 07:08:54.013557084 +0000 UTC m=+0.074840768 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 02:08:54 np0005603609 podman[86101]: 2026-01-31 07:08:54.119385977 +0000 UTC m=+0.180669641 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 02:08:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 02:08:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 02:08:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Jan 31 02:08:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:55.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:55.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:55 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 31 02:08:55 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 31 02:08:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:08:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:08:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:08:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 31 02:08:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 31 02:08:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Jan 31 02:08:56 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 64 pg[6.5( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:56 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:56 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 31 02:08:56 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 31 02:08:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 02:08:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 02:08:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Jan 31 02:08:57 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 65 pg[6.d( v 48'39 lc 44'13 (0'0,48'39] local-lis/les=64/65 n=1 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=48'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:57 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 65 pg[6.5( v 48'39 lc 44'11 (0'0,48'39] local-lis/les=64/65 n=2 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=48'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:08:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:57.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:57.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 31 02:08:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 31 02:08:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Jan 31 02:08:58 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=66) [1] r=0 lpr=66 pi=[54,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:58 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=66) [1] r=0 lpr=66 pi=[54,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:58 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=66) [1] r=0 lpr=66 pi=[54,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:58 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=66) [1] r=0 lpr=66 pi=[54,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:08:58 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 66 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=59/60 n=1 ec=50/22 lis/c=59/59 les/c/f=60/60/0 sis=66 pruub=11.374151230s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=48'39 mlcod 48'39 active pruub 159.701370239s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:58 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 66 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=59/60 n=1 ec=50/22 lis/c=59/59 les/c/f=60/60/0 sis=66 pruub=11.374047279s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=48'39 mlcod 0'0 unknown NOTIFY pruub 159.701370239s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:58 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 66 pg[6.6( v 48'39 (0'0,48'39] local-lis/les=59/60 n=2 ec=50/22 lis/c=59/59 les/c/f=60/60/0 sis=66 pruub=11.373584747s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=48'39 mlcod 48'39 active pruub 159.701324463s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:58 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 66 pg[6.6( v 48'39 (0'0,48'39] local-lis/les=59/60 n=2 ec=50/22 lis/c=59/59 les/c/f=60/60/0 sis=66 pruub=11.373451233s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=48'39 mlcod 0'0 unknown NOTIFY pruub 159.701324463s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:08:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 02:08:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 02:08:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Jan 31 02:08:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:08:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:59.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:08:59 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=-1 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:59 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=-1 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:59 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=-1 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:59 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=-1 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:59 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=-1 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:59 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=-1 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:59 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=-1 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:08:59 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=-1 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:08:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:08:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:08:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:59.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:09:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Jan 31 02:09:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:01.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Jan 31 02:09:01 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 69 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:01 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 69 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:01 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 69 pg[9.e( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:01 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 69 pg[9.e( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:01 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 69 pg[9.6( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:01 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 69 pg[9.6( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:01 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 69 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:01 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 69 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:01 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:09:01 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:09:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:01.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:01 np0005603609 systemd[1]: session-19.scope: Deactivated successfully.
Jan 31 02:09:01 np0005603609 systemd[1]: session-19.scope: Consumed 7.836s CPU time.
Jan 31 02:09:01 np0005603609 systemd-logind[823]: Session 19 logged out. Waiting for processes to exit.
Jan 31 02:09:01 np0005603609 systemd-logind[823]: Removed session 19.
Jan 31 02:09:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Jan 31 02:09:02 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 70 pg[9.e( v 48'908 (0'0,48'908] local-lis/les=69/70 n=6 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:02 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 70 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=69/70 n=5 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:02 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 70 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=69/70 n=5 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:02 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 70 pg[9.6( v 48'908 (0'0,48'908] local-lis/les=69/70 n=6 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69) [1] r=0 lpr=69 pi=[54,69)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:02 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 31 02:09:02 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 31 02:09:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:09:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:03.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:09:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:03.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:05.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:05 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 31 02:09:05 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 31 02:09:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Jan 31 02:09:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:05.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:06 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 02:09:06 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 02:09:06 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 31 02:09:06 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 31 02:09:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:07.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:07.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Jan 31 02:09:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 31 02:09:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 31 02:09:07 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 72 pg[6.8( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=72) [1] r=0 lpr=72 pi=[50,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:08 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 02:09:08 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 02:09:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Jan 31 02:09:08 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 73 pg[6.8( v 48'39 (0'0,48'39] local-lis/les=72/73 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=72) [1] r=0 lpr=72 pi=[50,72)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:09.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:09.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Jan 31 02:09:09 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 31 02:09:09 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 31 02:09:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Jan 31 02:09:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 02:09:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 02:09:10 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 31 02:09:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Jan 31 02:09:10 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 31 02:09:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:11.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:11.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 31 02:09:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 31 02:09:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Jan 31 02:09:11 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 77 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=77) [1] r=0 lpr=77 pi=[54,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:11 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 77 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=77) [1] r=0 lpr=77 pi=[54,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Jan 31 02:09:12 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 78 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[54,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:12 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 78 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[54,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:09:12 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 78 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[54,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:12 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 78 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[54,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:09:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 02:09:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 02:09:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:13.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:13.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Jan 31 02:09:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 31 02:09:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 31 02:09:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 02:09:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 02:09:13 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 79 pg[6.b( v 48'39 (0'0,48'39] local-lis/les=61/62 n=1 ec=50/22 lis/c=61/61 les/c/f=62/62/0 sis=79 pruub=12.125774384s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=48'39 mlcod 48'39 active pruub 176.131134033s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:13 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 79 pg[6.b( v 48'39 (0'0,48'39] local-lis/les=61/62 n=1 ec=50/22 lis/c=61/61 les/c/f=62/62/0 sis=79 pruub=12.125587463s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=48'39 mlcod 0'0 unknown NOTIFY pruub 176.131134033s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:09:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Jan 31 02:09:14 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 80 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:14 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 80 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:14 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 80 pg[9.a( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:14 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 80 pg[9.a( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:15.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:09:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:15.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:09:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Jan 31 02:09:15 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 81 pg[9.a( v 48'908 (0'0,48'908] local-lis/les=80/81 n=6 ec=54/42 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:15 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 81 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=80/81 n=5 ec=54/42 lis/c=78/54 les/c/f=79/55/0 sis=80) [1] r=0 lpr=80 pi=[54,80)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:15 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 31 02:09:15 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 31 02:09:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 31 02:09:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 31 02:09:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 02:09:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 02:09:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:17.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:09:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:17.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:09:17 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 31 02:09:17 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 31 02:09:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:19.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:09:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:19.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:09:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:21.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:21.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Jan 31 02:09:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 31 02:09:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 31 02:09:21 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 82 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=82) [1] r=0 lpr=82 pi=[67,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:21 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 82 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=82) [1] r=0 lpr=82 pi=[67,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Jan 31 02:09:22 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 83 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[67,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:22 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 83 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[67,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:09:22 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 83 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[67,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:22 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 83 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[67,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:09:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 02:09:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 02:09:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:23.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:23.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Jan 31 02:09:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 31 02:09:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 31 02:09:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 02:09:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 02:09:24 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 84 pg[6.e( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=66/66 les/c/f=67/67/0 sis=84) [1] r=0 lpr=84 pi=[66,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Jan 31 02:09:24 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 85 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=83/67 les/c/f=84/68/0 sis=85) [1] r=0 lpr=85 pi=[67,85)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:24 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 85 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=83/67 les/c/f=84/68/0 sis=85) [1] r=0 lpr=85 pi=[67,85)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:24 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 85 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=83/67 les/c/f=84/68/0 sis=85) [1] r=0 lpr=85 pi=[67,85)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:24 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 85 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=83/67 les/c/f=84/68/0 sis=85) [1] r=0 lpr=85 pi=[67,85)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:24 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 85 pg[6.e( v 48'39 lc 44'19 (0'0,48'39] local-lis/les=84/85 n=1 ec=50/22 lis/c=66/66 les/c/f=67/67/0 sis=84) [1] r=0 lpr=84 pi=[66,84)/1 crt=48'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:25.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:09:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:25.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:09:25 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 31 02:09:25 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 31 02:09:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Jan 31 02:09:25 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 31 02:09:25 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 31 02:09:25 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 86 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=86) [1] r=0 lpr=86 pi=[64,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:25 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 86 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=86) [1] r=0 lpr=86 pi=[64,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:25 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 86 pg[6.f( v 48'39 (0'0,48'39] local-lis/les=61/62 n=1 ec=50/22 lis/c=61/61 les/c/f=62/62/0 sis=86 pruub=8.195038795s) [0] r=-1 lpr=86 pi=[61,86)/1 crt=48'39 mlcod 48'39 active pruub 184.131469727s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:25 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 86 pg[6.f( v 48'39 (0'0,48'39] local-lis/les=61/62 n=1 ec=50/22 lis/c=61/61 les/c/f=62/62/0 sis=86 pruub=8.194910049s) [0] r=-1 lpr=86 pi=[61,86)/1 crt=48'39 mlcod 0'0 unknown NOTIFY pruub 184.131469727s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:09:25 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 86 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=85/86 n=6 ec=54/42 lis/c=83/67 les/c/f=84/68/0 sis=85) [1] r=0 lpr=85 pi=[67,85)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:25 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 86 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=85/86 n=5 ec=54/42 lis/c=83/67 les/c/f=84/68/0 sis=85) [1] r=0 lpr=85 pi=[67,85)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 02:09:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 02:09:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Jan 31 02:09:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 87 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[64,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 87 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[64,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:09:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 87 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[64,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:26 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 87 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[64,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:09:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:27.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:09:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:27.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:09:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Jan 31 02:09:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Jan 31 02:09:28 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 89 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=87/64 les/c/f=88/65/0 sis=89) [1] r=0 lpr=89 pi=[64,89)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:28 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 89 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=87/64 les/c/f=88/65/0 sis=89) [1] r=0 lpr=89 pi=[64,89)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:28 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 89 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=87/64 les/c/f=88/65/0 sis=89) [1] r=0 lpr=89 pi=[64,89)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:28 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 89 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=87/64 les/c/f=88/65/0 sis=89) [1] r=0 lpr=89 pi=[64,89)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:09:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:29.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:09:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:09:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:29.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:09:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Jan 31 02:09:29 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 90 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=89/90 n=6 ec=54/42 lis/c=87/64 les/c/f=88/65/0 sis=89) [1] r=0 lpr=89 pi=[64,89)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:29 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 90 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=89/90 n=5 ec=54/42 lis/c=87/64 les/c/f=88/65/0 sis=89) [1] r=0 lpr=89 pi=[64,89)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:30 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 31 02:09:30 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 31 02:09:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:31.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:09:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:31.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:09:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Jan 31 02:09:31 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 91 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=91) [1] r=0 lpr=91 pi=[54,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 31 02:09:32 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 31 02:09:32 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 31 02:09:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Jan 31 02:09:32 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 92 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=92) [1]/[0] r=-1 lpr=92 pi=[54,92)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:32 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 92 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=92) [1]/[0] r=-1 lpr=92 pi=[54,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:09:32 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 02:09:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:33.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:33 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 31 02:09:33 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 31 02:09:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:33.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Jan 31 02:09:34 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 93 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=93) [1] r=0 lpr=93 pi=[54,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 31 02:09:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Jan 31 02:09:34 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 94 pg[9.10( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=92/54 les/c/f=93/55/0 sis=94) [1] r=0 lpr=94 pi=[54,94)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:34 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 94 pg[9.10( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=92/54 les/c/f=93/55/0 sis=94) [1] r=0 lpr=94 pi=[54,94)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:34 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 94 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:34 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 94 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:09:35 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 02:09:35 np0005603609 systemd-logind[823]: New session 33 of user zuul.
Jan 31 02:09:35 np0005603609 systemd[1]: Started Session 33 of User zuul.
Jan 31 02:09:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:09:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:35.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:09:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:35.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Jan 31 02:09:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 95 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=95) [1] r=0 lpr=95 pi=[54,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:35 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 95 pg[9.10( v 48'908 (0'0,48'908] local-lis/les=94/95 n=6 ec=54/42 lis/c=92/54 les/c/f=93/55/0 sis=94) [1] r=0 lpr=94 pi=[54,94)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:36 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 31 02:09:36 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 02:09:36 np0005603609 python3.9[86428]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:09:36 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 31 02:09:36 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 31 02:09:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Jan 31 02:09:36 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 96 pg[9.11( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:36 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 96 pg[9.11( v 48'908 (0'0,48'908] local-lis/les=0/0 n=6 ec=54/42 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:36 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 96 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[54,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:36 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 96 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[54,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:09:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:37.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:37.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Jan 31 02:09:37 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 97 pg[9.11( v 48'908 (0'0,48'908] local-lis/les=96/97 n=6 ec=54/42 lis/c=94/54 les/c/f=95/55/0 sis=96) [1] r=0 lpr=96 pi=[54,96)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:38 np0005603609 python3.9[86642]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:09:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Jan 31 02:09:38 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 98 pg[9.12( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=96/54 les/c/f=97/55/0 sis=98) [1] r=0 lpr=98 pi=[54,98)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:38 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 98 pg[9.12( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=96/54 les/c/f=97/55/0 sis=98) [1] r=0 lpr=98 pi=[54,98)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:39.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:09:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:39.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:09:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Jan 31 02:09:39 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 99 pg[9.12( v 48'908 (0'0,48'908] local-lis/les=98/99 n=5 ec=54/42 lis/c=96/54 les/c/f=97/55/0 sis=98) [1] r=0 lpr=98 pi=[54,98)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:09:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:41.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:09:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:41.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:41 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 31 02:09:41 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 31 02:09:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 31 02:09:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Jan 31 02:09:42 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 31 02:09:42 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 31 02:09:42 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 02:09:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:43.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:09:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:43.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:09:43 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 31 02:09:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Jan 31 02:09:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 02:09:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:45.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:45 np0005603609 systemd[1]: session-33.scope: Deactivated successfully.
Jan 31 02:09:45 np0005603609 systemd[1]: session-33.scope: Consumed 8.487s CPU time.
Jan 31 02:09:45 np0005603609 systemd-logind[823]: Session 33 logged out. Waiting for processes to exit.
Jan 31 02:09:45 np0005603609 systemd-logind[823]: Removed session 33.
Jan 31 02:09:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:09:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:45.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:09:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 31 02:09:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Jan 31 02:09:46 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 102 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:47 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 02:09:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Jan 31 02:09:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 103 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=103) [1]/[2] r=-1 lpr=103 pi=[67,103)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:47 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 103 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=67/67 les/c/f=68/68/0 sis=103) [1]/[2] r=-1 lpr=103 pi=[67,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:09:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:47.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:09:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:47.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:09:47 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 31 02:09:47 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 31 02:09:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Jan 31 02:09:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 31 02:09:48 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 104 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=69/70 n=5 ec=54/42 lis/c=69/69 les/c/f=70/70/0 sis=104 pruub=10.005849838s) [2] r=-1 lpr=104 pi=[69,104)/1 crt=48'908 mlcod 0'0 active pruub 208.496337891s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:48 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 104 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=69/70 n=5 ec=54/42 lis/c=69/69 les/c/f=70/70/0 sis=104 pruub=10.005738258s) [2] r=-1 lpr=104 pi=[69,104)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 208.496337891s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:09:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 02:09:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Jan 31 02:09:49 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 105 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=103/67 les/c/f=104/68/0 sis=105) [1] r=0 lpr=105 pi=[67,105)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:49 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 105 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=69/70 n=5 ec=54/42 lis/c=69/69 les/c/f=70/70/0 sis=105) [2]/[1] r=0 lpr=105 pi=[69,105)/1 crt=48'908 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:49 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 105 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=69/70 n=5 ec=54/42 lis/c=69/69 les/c/f=70/70/0 sis=105) [2]/[1] r=0 lpr=105 pi=[69,105)/1 crt=48'908 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:49 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 105 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=103/67 les/c/f=104/68/0 sis=105) [1] r=0 lpr=105 pi=[67,105)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:49.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:09:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:49.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:09:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Jan 31 02:09:50 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 106 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=105/106 n=5 ec=54/42 lis/c=103/67 les/c/f=104/68/0 sis=105) [1] r=0 lpr=105 pi=[67,105)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:50 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 106 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=105/106 n=5 ec=54/42 lis/c=69/69 les/c/f=70/70/0 sis=105) [2]/[1] async=[2] r=0 lpr=105 pi=[69,105)/1 crt=48'908 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Jan 31 02:09:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:51.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:51 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 107 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=105/106 n=5 ec=54/42 lis/c=105/69 les/c/f=106/70/0 sis=107 pruub=15.185519218s) [2] async=[2] r=-1 lpr=107 pi=[69,107)/1 crt=48'908 mlcod 48'908 active pruub 216.617065430s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:51 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 107 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=105/106 n=5 ec=54/42 lis/c=105/69 les/c/f=106/70/0 sis=107 pruub=15.185372353s) [2] r=-1 lpr=107 pi=[69,107)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 216.617065430s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:09:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:51.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:51 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 31 02:09:51 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 31 02:09:52 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 31 02:09:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Jan 31 02:09:52 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 31 02:09:52 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 31 02:09:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:53.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:53 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 02:09:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:53.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:54 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 31 02:09:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Jan 31 02:09:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:55.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Jan 31 02:09:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 02:09:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 31 02:09:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:09:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:55.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:09:55 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 31 02:09:55 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 31 02:09:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Jan 31 02:09:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 02:09:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:09:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:57.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:09:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Jan 31 02:09:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 31 02:09:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 02:09:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:57.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:58 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 112 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=80/81 n=5 ec=54/42 lis/c=80/80 les/c/f=81/81/0 sis=112 pruub=13.647308350s) [0] r=-1 lpr=112 pi=[80,112)/1 crt=48'908 mlcod 0'0 active pruub 221.942153931s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:58 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 112 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=80/81 n=5 ec=54/42 lis/c=80/80 les/c/f=81/81/0 sis=112 pruub=13.647246361s) [0] r=-1 lpr=112 pi=[80,112)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 221.942153931s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:09:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Jan 31 02:09:58 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 113 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=80/81 n=5 ec=54/42 lis/c=80/80 les/c/f=81/81/0 sis=113) [0]/[1] r=0 lpr=113 pi=[80,113)/1 crt=48'908 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:58 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 113 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=80/81 n=5 ec=54/42 lis/c=80/80 les/c/f=81/81/0 sis=113) [0]/[1] r=0 lpr=113 pi=[80,113)/1 crt=48'908 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:09:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:09:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:59.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Jan 31 02:09:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 31 02:09:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 02:09:59 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 114 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=113/114 n=5 ec=54/42 lis/c=80/80 les/c/f=81/81/0 sis=113) [0]/[1] async=[0] r=0 lpr=113 pi=[80,113)/1 crt=48'908 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:09:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:09:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:09:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:59.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:09:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Jan 31 02:09:59 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 115 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=113/114 n=5 ec=54/42 lis/c=113/80 les/c/f=114/81/0 sis=115 pruub=15.795205116s) [0] async=[0] r=-1 lpr=115 pi=[80,115)/1 crt=48'908 mlcod 48'908 active pruub 225.732254028s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:09:59 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 115 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=113/114 n=5 ec=54/42 lis/c=113/80 les/c/f=114/81/0 sis=115 pruub=15.795125961s) [0] r=-1 lpr=115 pi=[80,115)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 225.732254028s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:10:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Jan 31 02:10:00 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 02:10:01 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 31 02:10:01 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 31 02:10:01 np0005603609 systemd-logind[823]: New session 34 of user zuul.
Jan 31 02:10:01 np0005603609 systemd[1]: Started Session 34 of User zuul.
Jan 31 02:10:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:01.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:01 np0005603609 podman[86975]: 2026-01-31 07:10:01.659015951 +0000 UTC m=+0.072846491 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:10:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:01.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:01 np0005603609 podman[86975]: 2026-01-31 07:10:01.760728526 +0000 UTC m=+0.174559086 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:10:01 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 31 02:10:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Jan 31 02:10:01 np0005603609 python3.9[87045]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 02:10:02 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 31 02:10:02 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 31 02:10:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 02:10:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:10:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:10:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:10:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:10:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:10:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:10:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Jan 31 02:10:03 np0005603609 python3.9[87449]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:10:03 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 31 02:10:03 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 31 02:10:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:03.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:03.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:10:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:10:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:10:04 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 31 02:10:04 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 31 02:10:04 np0005603609 python3.9[87605]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:10:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:10:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:05.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:10:05 np0005603609 python3.9[87758]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:10:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:05.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:06 np0005603609 python3.9[87912]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:10:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:07.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:07 np0005603609 python3.9[88064]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:10:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:07.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:08 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 31 02:10:08 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 31 02:10:08 np0005603609 python3.9[88214]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:10:08 np0005603609 network[88231]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:10:08 np0005603609 network[88232]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:10:08 np0005603609 network[88233]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:10:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:09.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:10:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:09.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:10:09 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:10:09 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:10:09 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 31 02:10:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Jan 31 02:10:10 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 119 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=85/86 n=5 ec=54/42 lis/c=85/85 les/c/f=86/86/0 sis=119 pruub=11.338006973s) [2] r=-1 lpr=119 pi=[85,119)/1 crt=48'908 mlcod 0'0 active pruub 231.946899414s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:10:10 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 119 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=85/86 n=5 ec=54/42 lis/c=85/85 les/c/f=86/86/0 sis=119 pruub=11.337910652s) [2] r=-1 lpr=119 pi=[85,119)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 231.946899414s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:10:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 02:10:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Jan 31 02:10:10 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 120 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=85/86 n=5 ec=54/42 lis/c=85/85 les/c/f=86/86/0 sis=120) [2]/[1] r=0 lpr=120 pi=[85,120)/1 crt=48'908 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:10:10 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 120 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=85/86 n=5 ec=54/42 lis/c=85/85 les/c/f=86/86/0 sis=120) [2]/[1] r=0 lpr=120 pi=[85,120)/1 crt=48'908 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:10:11 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 31 02:10:11 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 31 02:10:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:11.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:11.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 31 02:10:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Jan 31 02:10:11 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 121 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=69/70 n=5 ec=54/42 lis/c=69/69 les/c/f=70/70/0 sis=121 pruub=10.487287521s) [0] r=-1 lpr=121 pi=[69,121)/1 crt=48'908 mlcod 0'0 active pruub 232.496826172s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:10:11 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 121 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=69/70 n=5 ec=54/42 lis/c=69/69 les/c/f=70/70/0 sis=121 pruub=10.487191200s) [0] r=-1 lpr=121 pi=[69,121)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 232.496826172s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:10:11 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 121 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=120/121 n=5 ec=54/42 lis/c=85/85 les/c/f=86/86/0 sis=120) [2]/[1] async=[2] r=0 lpr=120 pi=[85,120)/1 crt=48'908 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:10:12 np0005603609 python3.9[88543]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:10:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 02:10:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Jan 31 02:10:12 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 122 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=69/70 n=5 ec=54/42 lis/c=69/69 les/c/f=70/70/0 sis=122) [0]/[1] r=0 lpr=122 pi=[69,122)/1 crt=48'908 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:10:12 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 122 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=69/70 n=5 ec=54/42 lis/c=69/69 les/c/f=70/70/0 sis=122) [0]/[1] r=0 lpr=122 pi=[69,122)/1 crt=48'908 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:10:12 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 122 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=120/121 n=5 ec=54/42 lis/c=120/85 les/c/f=121/86/0 sis=122 pruub=15.000168800s) [2] async=[2] r=-1 lpr=122 pi=[85,122)/1 crt=48'908 mlcod 48'908 active pruub 238.019317627s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:10:12 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 122 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=120/121 n=5 ec=54/42 lis/c=120/85 les/c/f=121/86/0 sis=122 pruub=15.000104904s) [2] r=-1 lpr=122 pi=[85,122)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 238.019317627s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:10:13 np0005603609 python3.9[88693]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:10:13 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 31 02:10:13 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 31 02:10:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:13.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:13.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Jan 31 02:10:13 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 123 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=122/123 n=5 ec=54/42 lis/c=69/69 les/c/f=70/70/0 sis=122) [0]/[1] async=[0] r=0 lpr=122 pi=[69,122)/1 crt=48'908 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:10:14 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 31 02:10:14 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 31 02:10:14 np0005603609 python3.9[88847]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:10:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Jan 31 02:10:14 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 124 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=122/123 n=5 ec=54/42 lis/c=122/69 les/c/f=123/70/0 sis=124 pruub=15.099517822s) [0] async=[0] r=-1 lpr=124 pi=[69,124)/1 crt=48'908 mlcod 48'908 active pruub 240.049041748s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:10:14 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 124 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=122/123 n=5 ec=54/42 lis/c=122/69 les/c/f=123/70/0 sis=124 pruub=15.099387169s) [0] r=-1 lpr=124 pi=[69,124)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 240.049041748s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:10:15 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 31 02:10:15 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 31 02:10:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:15.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:15 np0005603609 python3.9[89005]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:10:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:15.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Jan 31 02:10:15 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Jan 31 02:10:15 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:10:15.966742) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:10:15 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Jan 31 02:10:15 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843415966888, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7245, "num_deletes": 255, "total_data_size": 13323245, "memory_usage": 13588448, "flush_reason": "Manual Compaction"}
Jan 31 02:10:15 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843416031150, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 7839369, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 241, "largest_seqno": 7250, "table_properties": {"data_size": 7811075, "index_size": 18579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 81286, "raw_average_key_size": 23, "raw_value_size": 7743864, "raw_average_value_size": 2244, "num_data_blocks": 822, "num_entries": 3450, "num_filter_entries": 3450, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 1769843222, "file_creation_time": 1769843415, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 64441 microseconds, and 14356 cpu microseconds.
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:10:16.031209) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 7839369 bytes OK
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:10:16.031226) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:10:16.032381) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:10:16.032393) EVENT_LOG_v1 {"time_micros": 1769843416032389, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:10:16.032407) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13285367, prev total WAL file size 13285367, number of live WAL files 2.
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:10:16.033602) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(7655KB) 8(1648B)]
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843416033709, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 7841017, "oldest_snapshot_seqno": -1}
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3199 keys, 7835876 bytes, temperature: kUnknown
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843416073890, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 7835876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7808247, "index_size": 18559, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 77108, "raw_average_key_size": 24, "raw_value_size": 7744144, "raw_average_value_size": 2420, "num_data_blocks": 822, "num_entries": 3199, "num_filter_entries": 3199, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769843416, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:10:16.074466) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 7835876 bytes
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:10:16.077743) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.6 rd, 194.5 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3455, records dropped: 256 output_compression: NoCompression
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:10:16.077766) EVENT_LOG_v1 {"time_micros": 1769843416077754, "job": 4, "event": "compaction_finished", "compaction_time_micros": 40285, "compaction_time_cpu_micros": 14407, "output_level": 6, "num_output_files": 1, "total_output_size": 7835876, "num_input_records": 3455, "num_output_records": 3199, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843416078828, "job": 4, "event": "table_file_deletion", "file_number": 14}
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843416078863, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 31 02:10:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:10:16.033513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:10:16 np0005603609 python3.9[89090]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:10:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:17.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:17.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:19.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:10:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:19.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:10:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Jan 31 02:10:20 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 126 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=89/90 n=5 ec=54/42 lis/c=89/89 les/c/f=90/90/0 sis=126 pruub=13.807893753s) [0] r=-1 lpr=126 pi=[89,126)/1 crt=48'908 mlcod 0'0 active pruub 244.021438599s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:10:20 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 126 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=89/90 n=5 ec=54/42 lis/c=89/89 les/c/f=90/90/0 sis=126 pruub=13.807791710s) [0] r=-1 lpr=126 pi=[89,126)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 244.021438599s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:10:20 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 31 02:10:20 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:10:20 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 31 02:10:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Jan 31 02:10:21 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 127 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=89/90 n=5 ec=54/42 lis/c=89/89 les/c/f=90/90/0 sis=127) [0]/[1] r=0 lpr=127 pi=[89,127)/1 crt=48'908 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:10:21 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 127 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=89/90 n=5 ec=54/42 lis/c=89/89 les/c/f=90/90/0 sis=127) [0]/[1] r=0 lpr=127 pi=[89,127)/1 crt=48'908 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:10:21 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 31 02:10:21 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 31 02:10:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:10:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:21.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:21.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Jan 31 02:10:22 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 128 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=127/128 n=5 ec=54/42 lis/c=89/89 les/c/f=90/90/0 sis=127) [0]/[1] async=[0] r=0 lpr=127 pi=[89,127)/1 crt=48'908 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:10:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Jan 31 02:10:23 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 129 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=127/128 n=5 ec=54/42 lis/c=127/89 les/c/f=128/90/0 sis=129 pruub=15.537175179s) [0] async=[0] r=-1 lpr=129 pi=[89,129)/1 crt=48'908 mlcod 48'908 active pruub 248.896728516s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:10:23 np0005603609 ceph-osd[79083]: osd.1 pg_epoch: 129 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=127/128 n=5 ec=54/42 lis/c=127/89 les/c/f=128/90/0 sis=129 pruub=15.537021637s) [0] r=-1 lpr=129 pi=[89,129)/1 crt=48'908 mlcod 0'0 unknown NOTIFY pruub 248.896728516s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:10:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:23.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:23.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:24 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 31 02:10:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Jan 31 02:10:24 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 31 02:10:25 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 31 02:10:25 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 31 02:10:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:10:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:25.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:10:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:25.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:27.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:27.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:29.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:29.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:31.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:31.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:33 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 31 02:10:33 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 31 02:10:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:33.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:33.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:35.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:35.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:37 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 31 02:10:37 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 31 02:10:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:37.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:37.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:38 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.b deep-scrub starts
Jan 31 02:10:38 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.b deep-scrub ok
Jan 31 02:10:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:39.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:39.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:41 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 31 02:10:41 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 31 02:10:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:41.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:41.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:43.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:43.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:45.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:45.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:46 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.d deep-scrub starts
Jan 31 02:10:46 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.d deep-scrub ok
Jan 31 02:10:47 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 31 02:10:47 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 31 02:10:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:47.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:47.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:49.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:49.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:50 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.16 deep-scrub starts
Jan 31 02:10:50 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.16 deep-scrub ok
Jan 31 02:10:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:51.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:10:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:51.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:10:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:53.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:53.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:55.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:55.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:57 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 31 02:10:57 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 31 02:10:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:57.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:57.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:10:59 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 31 02:10:59 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 31 02:10:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:59.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:10:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:10:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:10:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:59.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:00 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 31 02:11:00 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 31 02:11:01 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 31 02:11:01 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 31 02:11:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:01.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:01.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:03.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:03.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:04 np0005603609 python3.9[89388]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:11:04 np0005603609 systemd[72682]: Created slice User Background Tasks Slice.
Jan 31 02:11:04 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 31 02:11:04 np0005603609 systemd[72682]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 02:11:04 np0005603609 systemd[72682]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 02:11:04 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 31 02:11:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:05.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:05.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:06 np0005603609 python3.9[89676]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 02:11:07 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 31 02:11:07 np0005603609 python3.9[89828]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 02:11:07 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 31 02:11:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:07.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:07.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:07 np0005603609 python3.9[89980]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:11:08 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 31 02:11:08 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 31 02:11:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:08 np0005603609 python3.9[90132]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 02:11:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:09.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:09.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:11:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:11:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:11.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:11 np0005603609 python3.9[90416]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:11:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:11.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:12 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 31 02:11:12 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 31 02:11:12 np0005603609 python3.9[90568]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:11:13 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 31 02:11:13 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 31 02:11:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:11:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:11:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:11:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:11:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:11:13 np0005603609 python3.9[90646]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:11:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:13.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:13.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:14 np0005603609 python3.9[90798]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:11:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:15.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:15.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:16 np0005603609 python3.9[90952]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 02:11:17 np0005603609 python3.9[91105]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 02:11:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:17.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:17.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:18 np0005603609 python3.9[91258]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 02:11:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:18 np0005603609 python3.9[91460]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 02:11:18 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:11:18 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:11:19 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 31 02:11:19 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 31 02:11:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:19.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:19 np0005603609 python3.9[91612]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:11:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:19.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:21.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:21 np0005603609 python3.9[91765]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:11:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:21.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:22 np0005603609 python3.9[91917]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:11:22 np0005603609 python3.9[91995]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:11:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:23.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:23.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:25 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 31 02:11:25 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 31 02:11:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 31 02:11:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:25.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 31 02:11:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:25.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:26 np0005603609 python3.9[92147]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:11:26 np0005603609 python3.9[92225]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:11:27 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.18 deep-scrub starts
Jan 31 02:11:27 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.18 deep-scrub ok
Jan 31 02:11:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:27.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:27 np0005603609 python3.9[92377]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:11:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:11:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:27.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:11:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:29.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:29.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:30 np0005603609 python3.9[92528]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:11:30 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 31 02:11:30 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 31 02:11:30 np0005603609 python3.9[92680]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 02:11:31 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.1e deep-scrub starts
Jan 31 02:11:31 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.1e deep-scrub ok
Jan 31 02:11:31 np0005603609 python3.9[92830]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:11:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:31.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:31.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:32 np0005603609 python3.9[92982]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:11:32 np0005603609 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 02:11:32 np0005603609 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 02:11:32 np0005603609 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 02:11:32 np0005603609 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 02:11:33 np0005603609 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 02:11:33 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 31 02:11:33 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 31 02:11:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:33.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:33 np0005603609 python3.9[93144]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 02:11:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:34.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:35.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:36 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 31 02:11:36 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 31 02:11:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:36.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:37.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:38 np0005603609 python3.9[93296]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:11:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:38.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:38 np0005603609 python3.9[93450]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:11:39 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 31 02:11:39 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 31 02:11:39 np0005603609 systemd[1]: session-34.scope: Deactivated successfully.
Jan 31 02:11:39 np0005603609 systemd[1]: session-34.scope: Consumed 1min 3.949s CPU time.
Jan 31 02:11:39 np0005603609 systemd-logind[823]: Session 34 logged out. Waiting for processes to exit.
Jan 31 02:11:39 np0005603609 systemd-logind[823]: Removed session 34.
Jan 31 02:11:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:39.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:40.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:41.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:42.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:43.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:11:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:44.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:11:44 np0005603609 systemd-logind[823]: New session 35 of user zuul.
Jan 31 02:11:44 np0005603609 systemd[1]: Started Session 35 of User zuul.
Jan 31 02:11:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:45.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:46 np0005603609 python3.9[93630]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:11:46 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 31 02:11:46 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 31 02:11:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:11:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:46.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:11:47 np0005603609 python3.9[93786]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 02:11:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:47.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:48 np0005603609 python3.9[93939]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:11:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:11:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:48.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:11:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:48 np0005603609 python3.9[94023]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 02:11:49 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 31 02:11:49 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 31 02:11:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:49.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:50 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 31 02:11:50 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 31 02:11:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:50.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:51 np0005603609 python3.9[94176]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:11:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:51.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:11:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:52.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:11:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:11:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:53.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:11:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:53 np0005603609 python3.9[94329]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:11:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:11:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:54.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:11:54 np0005603609 python3.9[94482]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:11:55 np0005603609 python3.9[94634]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 02:11:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:55.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:56 np0005603609 python3.9[94784]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:11:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:56.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:57 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 31 02:11:57 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 31 02:11:57 np0005603609 python3.9[94942]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:11:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:57.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:58 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 31 02:11:58 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 31 02:11:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:58.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:11:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:11:59 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 31 02:11:59 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 31 02:11:59 np0005603609 python3.9[95095]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:11:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:11:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:11:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:59.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:00.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:00 np0005603609 python3.9[95382]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 02:12:01 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 31 02:12:01 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 31 02:12:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:01.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:01 np0005603609 python3.9[95532]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:12:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:02.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:02 np0005603609 python3.9[95686]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:12:03 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 31 02:12:03 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 31 02:12:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:03.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:12:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:04.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:12:04 np0005603609 python3.9[95839]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:12:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:12:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:05.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:12:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:06.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:06 np0005603609 python3.9[95992]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:12:07 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 31 02:12:07 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 31 02:12:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:07.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:07 np0005603609 python3.9[96146]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 31 02:12:08 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 31 02:12:08 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 31 02:12:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:08.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:08 np0005603609 systemd[1]: session-35.scope: Deactivated successfully.
Jan 31 02:12:08 np0005603609 systemd[1]: session-35.scope: Consumed 17.699s CPU time.
Jan 31 02:12:08 np0005603609 systemd-logind[823]: Session 35 logged out. Waiting for processes to exit.
Jan 31 02:12:08 np0005603609 systemd-logind[823]: Removed session 35.
Jan 31 02:12:09 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 31 02:12:09 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 31 02:12:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:09.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:10.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:11.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:12 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 31 02:12:12 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 31 02:12:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:12:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:12.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:12:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:12:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:13.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:12:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:13 np0005603609 systemd-logind[823]: New session 36 of user zuul.
Jan 31 02:12:13 np0005603609 systemd[1]: Started Session 36 of User zuul.
Jan 31 02:12:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:14.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:14 np0005603609 python3.9[96324]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:12:15 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 31 02:12:15 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 31 02:12:15 np0005603609 python3.9[96478]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:12:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:15.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:16.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:16 np0005603609 python3.9[96671]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:12:17 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 31 02:12:17 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 31 02:12:17 np0005603609 systemd[1]: session-36.scope: Deactivated successfully.
Jan 31 02:12:17 np0005603609 systemd[1]: session-36.scope: Consumed 2.010s CPU time.
Jan 31 02:12:17 np0005603609 systemd-logind[823]: Session 36 logged out. Waiting for processes to exit.
Jan 31 02:12:17 np0005603609 systemd-logind[823]: Removed session 36.
Jan 31 02:12:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:17.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:18 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 31 02:12:18 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 31 02:12:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:12:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:18.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:12:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:19.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:12:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:12:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:20.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:20 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:12:20 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:12:20 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:12:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:12:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:21.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:12:22 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 31 02:12:22 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 31 02:12:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:22.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:23 np0005603609 systemd-logind[823]: New session 37 of user zuul.
Jan 31 02:12:23 np0005603609 systemd[1]: Started Session 37 of User zuul.
Jan 31 02:12:23 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 31 02:12:23 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 31 02:12:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:23.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:24 np0005603609 python3.9[96981]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:12:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:24.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:25 np0005603609 python3.9[97135]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:12:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:25.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:26 np0005603609 python3.9[97341]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:12:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:12:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:12:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:26.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:26 np0005603609 python3.9[97425]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:12:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:27.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:28.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:29 np0005603609 python3.9[97578]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:12:29 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 31 02:12:29 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 31 02:12:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:29.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:30 np0005603609 python3.9[97773]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:12:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:12:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:30.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:12:30 np0005603609 python3.9[97925]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:12:31 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 31 02:12:31 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 31 02:12:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:31.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:31 np0005603609 python3.9[98089]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:12:32 np0005603609 python3.9[98167]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:12:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:32.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:32 np0005603609 python3.9[98319]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:12:33 np0005603609 python3.9[98397]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:12:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:33.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:34 np0005603609 python3.9[98549]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:12:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:12:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:34.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:12:35 np0005603609 python3.9[98701]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:12:35 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 31 02:12:35 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 31 02:12:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:12:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:35.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:12:35 np0005603609 python3.9[98853]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:12:36 np0005603609 python3.9[99005]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:12:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:36.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:12:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:38.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:12:38 np0005603609 python3.9[99157]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:12:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:38.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:40 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 31 02:12:40 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 31 02:12:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:40.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:40.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:41 np0005603609 python3.9[99310]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:12:41 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 31 02:12:41 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 31 02:12:41 np0005603609 python3.9[99464]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:12:42 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 31 02:12:42 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 31 02:12:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:42.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:42 np0005603609 python3.9[99616]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:12:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:42.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:43 np0005603609 python3.9[99768]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:12:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:44 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 31 02:12:44 np0005603609 python3.9[99921]: ansible-service_facts Invoked
Jan 31 02:12:44 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 31 02:12:44 np0005603609 network[99938]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:12:44 np0005603609 network[99939]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:12:44 np0005603609 network[99940]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:12:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:44.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:12:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:44.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:12:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:12:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:46.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:12:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:46.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:47 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 31 02:12:47 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 31 02:12:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:48.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:48.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:49 np0005603609 python3.9[100392]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:12:50 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 31 02:12:50 np0005603609 ceph-osd[79083]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 31 02:12:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:50.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:50.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:52.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:52 np0005603609 python3.9[100545]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 02:12:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:52.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:53 np0005603609 python3.9[100697]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:12:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:12:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:54.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:12:54 np0005603609 python3.9[100775]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:12:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:54.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:55 np0005603609 python3.9[100927]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:12:55 np0005603609 python3.9[101005]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:12:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:56.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:12:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:56.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:12:57 np0005603609 python3.9[101157]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:12:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:58.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:12:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:12:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:12:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:58.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:12:59 np0005603609 python3.9[101309]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:13:00 np0005603609 python3.9[101393]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:13:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:00.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:00.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:01 np0005603609 systemd[1]: session-37.scope: Deactivated successfully.
Jan 31 02:13:01 np0005603609 systemd[1]: session-37.scope: Consumed 22.259s CPU time.
Jan 31 02:13:01 np0005603609 systemd-logind[823]: Session 37 logged out. Waiting for processes to exit.
Jan 31 02:13:01 np0005603609 systemd-logind[823]: Removed session 37.
Jan 31 02:13:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:02.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:02.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:13:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:04.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:13:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:04.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:06.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:06.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:07 np0005603609 systemd-logind[823]: New session 38 of user zuul.
Jan 31 02:13:07 np0005603609 systemd[1]: Started Session 38 of User zuul.
Jan 31 02:13:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:08.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:08 np0005603609 python3.9[101576]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:13:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:08.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:13:09 np0005603609 python3.9[101728]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:09 np0005603609 python3.9[101806]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:10 np0005603609 systemd[1]: session-38.scope: Deactivated successfully.
Jan 31 02:13:10 np0005603609 systemd[1]: session-38.scope: Consumed 1.437s CPU time.
Jan 31 02:13:10 np0005603609 systemd-logind[823]: Session 38 logged out. Waiting for processes to exit.
Jan 31 02:13:10 np0005603609 systemd-logind[823]: Removed session 38.
Jan 31 02:13:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:10.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:10.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:12.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:12.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:14.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:14.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:15 np0005603609 systemd-logind[823]: New session 39 of user zuul.
Jan 31 02:13:15 np0005603609 systemd[1]: Started Session 39 of User zuul.
Jan 31 02:13:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:16.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:16 np0005603609 python3.9[101987]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:13:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:16.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:17 np0005603609 python3.9[102143]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:18.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:18 np0005603609 python3.9[102318]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:18.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:19 np0005603609 python3.9[102396]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.q058z89e recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:20 np0005603609 python3.9[102548]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:20.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:20 np0005603609 python3.9[102626]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.27ss_d6z recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:20.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:21 np0005603609 python3.9[102778]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:13:22 np0005603609 python3.9[102930]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:22.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:22 np0005603609 python3.9[103008]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:13:22.734781) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602734868, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2547, "num_deletes": 251, "total_data_size": 5267348, "memory_usage": 5336128, "flush_reason": "Manual Compaction"}
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602771259, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3439665, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7255, "largest_seqno": 9797, "table_properties": {"data_size": 3430016, "index_size": 5631, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 24839, "raw_average_key_size": 21, "raw_value_size": 3408362, "raw_average_value_size": 2918, "num_data_blocks": 252, "num_entries": 1168, "num_filter_entries": 1168, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843416, "oldest_key_time": 1769843416, "file_creation_time": 1769843602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 36520 microseconds, and 6301 cpu microseconds.
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:13:22.771319) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3439665 bytes OK
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:13:22.771344) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:13:22.773434) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:13:22.773452) EVENT_LOG_v1 {"time_micros": 1769843602773446, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:13:22.773480) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5255580, prev total WAL file size 5255580, number of live WAL files 2.
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:13:22.774515) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3359KB)], [15(7652KB)]
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602774555, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 11275541, "oldest_snapshot_seqno": -1}
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3846 keys, 9678596 bytes, temperature: kUnknown
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602882139, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 9678596, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9646969, "index_size": 20893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 92771, "raw_average_key_size": 24, "raw_value_size": 9571745, "raw_average_value_size": 2488, "num_data_blocks": 913, "num_entries": 3846, "num_filter_entries": 3846, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769843602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:13:22.882467) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9678596 bytes
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:13:22.884918) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.7 rd, 89.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.5 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4367, records dropped: 521 output_compression: NoCompression
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:13:22.884943) EVENT_LOG_v1 {"time_micros": 1769843602884931, "job": 6, "event": "compaction_finished", "compaction_time_micros": 107689, "compaction_time_cpu_micros": 18054, "output_level": 6, "num_output_files": 1, "total_output_size": 9678596, "num_input_records": 4367, "num_output_records": 3846, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602885663, "job": 6, "event": "table_file_deletion", "file_number": 17}
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602886828, "job": 6, "event": "table_file_deletion", "file_number": 15}
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:13:22.774397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:13:22.886917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:13:22.886923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:13:22.886926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:13:22.886928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:13:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:13:22.886930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:13:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:22.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:23 np0005603609 python3.9[103160]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:23 np0005603609 python3.9[103238]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:13:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:24 np0005603609 python3.9[103390]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:24.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:24 np0005603609 python3.9[103542]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:24.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:25 np0005603609 python3.9[103620]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:25 np0005603609 python3.9[103772]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:26 np0005603609 python3.9[103950]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:26.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:26.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:27 np0005603609 python3.9[104133]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:13:27 np0005603609 systemd[1]: Reloading.
Jan 31 02:13:27 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:13:27 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:13:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:13:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:13:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:13:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:13:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:13:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:28.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:28 np0005603609 python3.9[104324]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:28 np0005603609 python3.9[104402]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:28.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:29 np0005603609 python3.9[104554]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:29 np0005603609 python3.9[104632]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:30.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:30 np0005603609 python3.9[104784]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:13:30 np0005603609 systemd[1]: Reloading.
Jan 31 02:13:30 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:13:30 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:13:30 np0005603609 systemd[1]: Starting Create netns directory...
Jan 31 02:13:30 np0005603609 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 02:13:30 np0005603609 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 02:13:30 np0005603609 systemd[1]: Finished Create netns directory.
Jan 31 02:13:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:30.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:13:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:32.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:13:32 np0005603609 python3.9[104978]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:13:32 np0005603609 network[104995]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:13:32 np0005603609 network[104996]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:13:32 np0005603609 network[104997]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:13:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:32.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:13:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:13:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:34.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:35.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:36.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:37.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:38 np0005603609 python3.9[105309]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:38.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:38 np0005603609 python3.9[105387]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:39 np0005603609 python3.9[105539]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:39.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:39 np0005603609 python3.9[105691]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:40.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:40 np0005603609 python3.9[105769]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:41.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:41 np0005603609 python3.9[105921]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 02:13:41 np0005603609 systemd[1]: Starting Time & Date Service...
Jan 31 02:13:41 np0005603609 systemd[1]: Started Time & Date Service.
Jan 31 02:13:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:42.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:42 np0005603609 python3.9[106077]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:43 np0005603609 python3.9[106229]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:43.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:43 np0005603609 python3.9[106307]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:44.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:44 np0005603609 python3.9[106459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:44 np0005603609 python3.9[106537]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.v96bbr37 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:45.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:45 np0005603609 python3.9[106689]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:46 np0005603609 python3.9[106767]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:46.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:46 np0005603609 python3.9[106919]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:13:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:13:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:47.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:13:47 np0005603609 python3[107072]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 02:13:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:48.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:48 np0005603609 python3.9[107224]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:48 np0005603609 python3.9[107302]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:49.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:49 np0005603609 python3.9[107454]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:50.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:50 np0005603609 python3.9[107579]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843629.1850545-900-181231777261987/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:50 np0005603609 python3.9[107731]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:51.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:51 np0005603609 python3.9[107809]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:52 np0005603609 python3.9[107961]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:52.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:52 np0005603609 python3.9[108039]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:53 np0005603609 python3.9[108191]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:53.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:53 np0005603609 python3.9[108269]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:54.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:54 np0005603609 python3.9[108421]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:13:55 np0005603609 python3.9[108576]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:55.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:56 np0005603609 python3.9[108728]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:56.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:56 np0005603609 python3.9[108880]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:13:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:57.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:13:57 np0005603609 python3.9[109032]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 02:13:58 np0005603609 python3.9[109184]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 02:13:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:58.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:13:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:13:58 np0005603609 systemd[1]: session-39.scope: Deactivated successfully.
Jan 31 02:13:58 np0005603609 systemd[1]: session-39.scope: Consumed 26.479s CPU time.
Jan 31 02:13:58 np0005603609 systemd-logind[823]: Session 39 logged out. Waiting for processes to exit.
Jan 31 02:13:58 np0005603609 systemd-logind[823]: Removed session 39.
Jan 31 02:13:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:13:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:13:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:59.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:14:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:00.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:14:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:01.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:02.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:03.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:04 np0005603609 systemd-logind[823]: New session 40 of user zuul.
Jan 31 02:14:04 np0005603609 systemd[1]: Started Session 40 of User zuul.
Jan 31 02:14:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:04.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:04 np0005603609 python3.9[109364]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 02:14:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:14:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:05.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:14:05 np0005603609 python3.9[109516]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:14:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:14:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:06.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:14:06 np0005603609 python3.9[109670]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 31 02:14:07 np0005603609 python3.9[109822]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.as4aiqfw follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:14:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:07.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:07 np0005603609 python3.9[109947]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.as4aiqfw mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843646.57441-108-160337926158532/.source.as4aiqfw _original_basename=.mh_ljuda follow=False checksum=a4502e4e8f59847dd2b7c5f9ecd52d55f7558ce1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:14:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:08.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:08 np0005603609 python3.9[110099]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:14:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:09.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:09 np0005603609 python3.9[110251]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwK9tbwI1sVhVFn3RGaEAgpi2689y9VdIyBp+cw+RWFupGnK46xr4HB/N67Aw+A+3FJtEl1Zq1cnt3Gy8PYb6XnLd4xH/NFtUI3ukhekrtKvSmysEjpRGIamjt1BkH4Lxh79PNkk13AVMQN92Wo271/fHEvcV7HaC0Q5VypZMd+77ZvI9NuEG1nofpvI8+32YECZBLpoC5KQK7EibqD9MUR2OmapGZhV+5B5jdb0ZvNb966Q0kwAGV8E+xgHSVnh5eCWC8oxgWkycmQd2co9E79fiIHEioABE9aDUGKw0+nsZ7HrvjG/ENeg5C6fjdJE4MsPq3FNHAiTCQPZ7QZgv/CSudt7WYyLTztGL9ksWqaTUeDocKVKPlJlzGrn/TXgMoix8+qbFzxVixIROb2nqElyEy6mo0Xxt2b4aisil9ZQhWVMQY0hGX5vtVv0E6+svzjSTfkyZolbjyRsolJF4pH7+klLEmlWGDlgSoCDZeK/XEi7xq3yaCymuWtX2fAX8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICzJ5+1VSPloOqHhejNen2lHjfV4Hvj7nbRbNJjS6dtd#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBECc0+1u2G3haTNDUnwK7F3+bqZqLNjR6ayEsOJcH6U6RkqhSd2eAlivxlw9dfPuir2TFrYzGTtSXuJ8iauDAtQ=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlvTGYGifalEmozttYlZ79wRHZPo6p3FfxUn+H8fCt//gLYJvHB9ygqCWO8F06xZhwaSJlU3R5k49AFtcq6rCaf4D9FuDYpYU5B1qGxpqY2S/6r/PmC9TmJJe6DJfuIf95os5YrDLR82BbT8dLFvu76PfZiMt0+kvm9gj1Q6XCUTgIsIvY9pyPySu0V4JDeT8EBgROR7WA5Fev80wO2/RlFXH9xVIupO8rswjwWPuIXoua1w44d35HWWHBdMAFXeZZMopWHWwY+fIlyz4B8y/TWDow7KZxG9GHKZ04e73/RA972Gub2LC0SlBFsBqaSnub8ooOcA3jZ3R2bjHAVkZvLgCK9UFSgwvvfyOWxtkJgj5KalAy9vZeGQ02ndAPNkQ6B1GnnRHaR5yGPG78q9Nd8RDmzhTr1iwnYLHhup04nAUnUDw5ubZFyF9bW1KQWvDv+4cfFeT8mhARMCxu7Imzne5FDq9OZAA9VLfnA26YFT0MpGjGl332cx20iz3Z4IU=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBM/OyT9HQGjLM76vSXpTFer+lkr//u0v4BsUk+Rcai#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPttGgqMF5HnqNXeajmhgAAhQFj1yReXfFmUGT6cv24PcfDX+VeASpBgDGWJKvbu1EgrSPUu2R8sDzajVI5+ETk=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDir5Ux7IUuKTsqwrpZRFpieFX7Hi9Bsaw7N3jCiMd+vuHlEKHLX54HbyTIVnox1XbNjeYynLRRz7VKBfder8IEerGmST/uWuX5FOdve7vDdY++9J6qYkj1Gf6v6BGp8BT97bbPdvaQdLP6YS2jFEfOz4s0oJkgr8dsHjPU70e1P0b7vKxqo3z/E/XCe2BUGEv5j/z9GTl2oQ9/KoTvahfr6qfonnQK9E0gsJKDB9S1UPNFkJUxvVPfKfEao207dmT8EmQL2ZdwDwecA2Mg0SneGaNmEFWDW4CWQjdbHuikc3vsZ1do7kzq2+tz+WLEXqdb4Ig4S0OfV/MAcaC/C1DRfZHxZN3vSayrm99nFc8oPaLnRtT8Jz1dVonMOpwLm3xMm6nAeGNTzM0ImTrJTusVmKNRQI3x6VPiEcWdKNvN5sVcrN9uyINDMuzpXIxc1LmpmR/338EfP4HYhfsTqdM0worzzewvh2XhAVxQAiNYRRUbLvR4/EE5SjXTjSA4ID0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ8oYZpZvdB1n917+wvTxetgtueloCox+7yBQBW8LHZX#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCEH62xmPSqzu7EFth8e8ITel7fLvoU9FKlxQN/eSXzUuR/7sZGPhcgLzjrJmEcn4Za0K2VNu6+z559d/AEJY2U=#012 create=True mode=0644 path=/tmp/ansible.as4aiqfw state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:14:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:10.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:10 np0005603609 python3.9[110403]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.as4aiqfw' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:14:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:11.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:11 np0005603609 python3.9[110557]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.as4aiqfw state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:14:11 np0005603609 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 02:14:11 np0005603609 systemd[1]: session-40.scope: Deactivated successfully.
Jan 31 02:14:11 np0005603609 systemd[1]: session-40.scope: Consumed 4.471s CPU time.
Jan 31 02:14:11 np0005603609 systemd-logind[823]: Session 40 logged out. Waiting for processes to exit.
Jan 31 02:14:11 np0005603609 systemd-logind[823]: Removed session 40.
Jan 31 02:14:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:12.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:13.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:14.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:15.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:16.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:14:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:17.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:14:17 np0005603609 systemd-logind[823]: New session 41 of user zuul.
Jan 31 02:14:18 np0005603609 systemd[1]: Started Session 41 of User zuul.
Jan 31 02:14:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:18.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:19 np0005603609 python3.9[110737]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:14:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:19.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:20.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:21 np0005603609 python3.9[110893]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 02:14:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:14:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:21.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:14:22 np0005603609 python3.9[111047]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:14:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:14:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:22.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:14:23 np0005603609 python3.9[111200]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:14:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:23.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:14:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:24.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:14:24 np0005603609 python3.9[111353]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:14:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:25.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:25 np0005603609 python3.9[111505]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:14:26 np0005603609 systemd[1]: session-41.scope: Deactivated successfully.
Jan 31 02:14:26 np0005603609 systemd[1]: session-41.scope: Consumed 3.627s CPU time.
Jan 31 02:14:26 np0005603609 systemd-logind[823]: Session 41 logged out. Waiting for processes to exit.
Jan 31 02:14:26 np0005603609 systemd-logind[823]: Removed session 41.
Jan 31 02:14:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:26.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:27.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:28.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:29.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:14:29.949597) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843669949680, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 840, "num_deletes": 255, "total_data_size": 1757877, "memory_usage": 1784968, "flush_reason": "Manual Compaction"}
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843669955628, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 749363, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9802, "largest_seqno": 10637, "table_properties": {"data_size": 746005, "index_size": 1202, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8495, "raw_average_key_size": 19, "raw_value_size": 738934, "raw_average_value_size": 1726, "num_data_blocks": 53, "num_entries": 428, "num_filter_entries": 428, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843603, "oldest_key_time": 1769843603, "file_creation_time": 1769843669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 6078 microseconds, and 2553 cpu microseconds.
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:14:29.955672) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 749363 bytes OK
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:14:29.955695) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:14:29.959401) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:14:29.959417) EVENT_LOG_v1 {"time_micros": 1769843669959411, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:14:29.959443) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1753518, prev total WAL file size 1753518, number of live WAL files 2.
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:14:29.960056) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323536' seq:0, type:0; will stop at (end)
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(731KB)], [18(9451KB)]
Jan 31 02:14:29 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843669960164, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 10427959, "oldest_snapshot_seqno": -1}
Jan 31 02:14:30 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3776 keys, 7723507 bytes, temperature: kUnknown
Jan 31 02:14:30 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843670020244, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 7723507, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7695376, "index_size": 17589, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 91769, "raw_average_key_size": 24, "raw_value_size": 7624266, "raw_average_value_size": 2019, "num_data_blocks": 769, "num_entries": 3776, "num_filter_entries": 3776, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769843669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:14:30 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:14:30 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:14:30.020601) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7723507 bytes
Jan 31 02:14:30 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:14:30.022349) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.2 rd, 128.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.2 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(24.2) write-amplify(10.3) OK, records in: 4274, records dropped: 498 output_compression: NoCompression
Jan 31 02:14:30 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:14:30.022381) EVENT_LOG_v1 {"time_micros": 1769843670022366, "job": 8, "event": "compaction_finished", "compaction_time_micros": 60206, "compaction_time_cpu_micros": 17622, "output_level": 6, "num_output_files": 1, "total_output_size": 7723507, "num_input_records": 4274, "num_output_records": 3776, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:14:30 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:14:30 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843670022734, "job": 8, "event": "table_file_deletion", "file_number": 20}
Jan 31 02:14:30 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:14:30 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843670024113, "job": 8, "event": "table_file_deletion", "file_number": 18}
Jan 31 02:14:30 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:14:29.959883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:14:30 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:14:30.024154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:14:30 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:14:30.024160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:14:30 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:14:30.024163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:14:30 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:14:30.024165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:14:30 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:14:30.024167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:14:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:30.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:31.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:31 np0005603609 systemd-logind[823]: New session 42 of user zuul.
Jan 31 02:14:31 np0005603609 systemd[1]: Started Session 42 of User zuul.
Jan 31 02:14:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:32.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:32 np0005603609 python3.9[111683]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:14:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:33.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:33 np0005603609 python3.9[111839]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:14:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:34 np0005603609 python3.9[112054]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 02:14:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:34.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:14:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:14:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:14:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:35.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:36.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:36 np0005603609 python3.9[112205]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:14:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:37.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:38 np0005603609 python3.9[112356]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 02:14:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:38.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:38 np0005603609 python3.9[112506]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:14:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:14:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:39.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:14:39 np0005603609 python3.9[112656]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:14:40 np0005603609 systemd[1]: session-42.scope: Deactivated successfully.
Jan 31 02:14:40 np0005603609 systemd[1]: session-42.scope: Consumed 5.624s CPU time.
Jan 31 02:14:40 np0005603609 systemd-logind[823]: Session 42 logged out. Waiting for processes to exit.
Jan 31 02:14:40 np0005603609 systemd-logind[823]: Removed session 42.
Jan 31 02:14:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:40.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:14:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:14:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:41.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:42.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:43.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:44.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:45 np0005603609 systemd-logind[823]: New session 43 of user zuul.
Jan 31 02:14:45 np0005603609 systemd[1]: Started Session 43 of User zuul.
Jan 31 02:14:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:45.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:46.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:46 np0005603609 python3.9[112885]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:14:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:47.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:48 np0005603609 python3.9[113041]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:14:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:48.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:48 np0005603609 python3.9[113193]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:14:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:49.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:49 np0005603609 python3.9[113345]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:14:50 np0005603609 python3.9[113468]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843689.1038244-159-136326272276281/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=55a9155f4c5ff6bc65efc4572c73f659a75e54aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:14:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:50.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:50 np0005603609 python3.9[113620]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:14:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:51.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:51 np0005603609 python3.9[113743]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843690.467019-159-166484633395277/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=4d19f6ebd16a505bd4f1bae6f0d06a9a74ad0f67 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:14:52 np0005603609 python3.9[113895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:14:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:52.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:52 np0005603609 python3.9[114018]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843691.610391-159-25689333486268/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=ff753eca60ddffe79b075b123e1650038318eb99 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:14:53 np0005603609 python3.9[114170]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:14:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:53.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:53 np0005603609 python3.9[114322]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:14:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:54.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:54 np0005603609 python3.9[114474]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:14:55 np0005603609 python3.9[114597]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843694.140106-341-250875962292847/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=9212157ca0e6aba1108f3dc62392aa712141273e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:14:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:55.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:55 np0005603609 python3.9[114749]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:14:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:56.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:56 np0005603609 python3.9[114872]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843695.2715826-341-276997578944518/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=005e44589b03f310b2e01f05c09d39b290e9f9f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:14:57 np0005603609 python3.9[115024]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:14:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:57.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:57 np0005603609 python3.9[115147]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843696.6365619-341-138927239913685/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=0aec9c6fcffa133e6c2e4c6a453c283a6364cb8c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:14:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:14:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:58.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:14:58 np0005603609 python3.9[115299]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:14:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:14:59 np0005603609 python3.9[115451]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:14:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:14:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:14:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:59.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:14:59 np0005603609 python3.9[115603]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:00.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:00 np0005603609 python3.9[115726]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843699.3558702-531-272469069934490/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=e999a445f0c0b97e7bb81be7d0f74a99fc5c7335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:01 np0005603609 python3.9[115878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:01.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:01 np0005603609 python3.9[116001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843700.6946402-531-265453363075561/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=005e44589b03f310b2e01f05c09d39b290e9f9f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:02.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:15:02 np0005603609 python3.9[116153]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:02 np0005603609 python3.9[116276]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843701.9904187-531-186619403594930/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=9a7e84036bdb3c4b27c82d01d3c1f23f841465d7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:03.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:04.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:04 np0005603609 python3.9[116428]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:15:05 np0005603609 python3.9[116580]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:05.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:05 np0005603609 python3.9[116703]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843704.804034-739-272248450054403/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:06.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:06 np0005603609 python3.9[116855]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:15:07 np0005603609 python3.9[117007]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:07.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:07 np0005603609 python3.9[117130]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843706.711382-816-19930177888757/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:08.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:08 np0005603609 python3.9[117282]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:15:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:09 np0005603609 python3.9[117434]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:09.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:09 np0005603609 python3.9[117557]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843708.7372391-887-237114896128532/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:10.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:10 np0005603609 python3.9[117709]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:15:11 np0005603609 python3.9[117861]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:11.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:11 np0005603609 python3.9[117984]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843710.596453-958-179300325222007/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:12.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:12 np0005603609 python3.9[118136]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:15:13 np0005603609 python3.9[118288]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:13.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:15:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:13 np0005603609 python3.9[118411]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843712.7374897-1034-139098478547232/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:14.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:14 np0005603609 python3.9[118563]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:15:15 np0005603609 python3.9[118715]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:15.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:15 np0005603609 python3.9[118838]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843714.6185155-1093-185687590418939/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:16 np0005603609 systemd-logind[823]: Session 43 logged out. Waiting for processes to exit.
Jan 31 02:15:16 np0005603609 systemd[1]: session-43.scope: Deactivated successfully.
Jan 31 02:15:16 np0005603609 systemd[1]: session-43.scope: Consumed 20.493s CPU time.
Jan 31 02:15:16 np0005603609 systemd-logind[823]: Removed session 43.
Jan 31 02:15:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:16.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:17.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:18.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:19.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:15:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:20.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:15:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:21.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:15:21 np0005603609 systemd-logind[823]: New session 44 of user zuul.
Jan 31 02:15:21 np0005603609 systemd[1]: Started Session 44 of User zuul.
Jan 31 02:15:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:22.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:22 np0005603609 python3.9[119019]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:23 np0005603609 python3.9[119171]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:23.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:24 np0005603609 python3.9[119294]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843722.812384-63-219535625760405/.source.conf _original_basename=ceph.conf follow=False checksum=d315a9cac0e1e65728b0668f9e154f01a66e4c1a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:24.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:24 np0005603609 python3.9[119446]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:25 np0005603609 python3.9[119569]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843724.2344368-63-172736702019919/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=d07c30b1acab71467a05fb02d206fcd55de2512c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:25.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:25 np0005603609 systemd[1]: session-44.scope: Deactivated successfully.
Jan 31 02:15:25 np0005603609 systemd[1]: session-44.scope: Consumed 2.541s CPU time.
Jan 31 02:15:25 np0005603609 systemd-logind[823]: Session 44 logged out. Waiting for processes to exit.
Jan 31 02:15:25 np0005603609 systemd-logind[823]: Removed session 44.
Jan 31 02:15:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:26.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:27.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:15:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:28.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:29.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:30.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:30 np0005603609 systemd-logind[823]: New session 45 of user zuul.
Jan 31 02:15:30 np0005603609 systemd[1]: Started Session 45 of User zuul.
Jan 31 02:15:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:31.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:31 np0005603609 python3.9[119749]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:15:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:32.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:32 np0005603609 python3.9[119905]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:15:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:33.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:33 np0005603609 python3.9[120057]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:15:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:34 np0005603609 python3.9[120207]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:15:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:34.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:35 np0005603609 python3.9[120359]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 02:15:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:35.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:36.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:37 np0005603609 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 31 02:15:37 np0005603609 python3.9[120515]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:15:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:37.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:38 np0005603609 python3.9[120599]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:15:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:38.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:39.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:40.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:40 np0005603609 python3.9[120883]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:15:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:15:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:15:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:41.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:15:41 np0005603609 python3[121038]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 31 02:15:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:15:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:15:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:15:42 np0005603609 python3.9[121190]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:42.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:43 np0005603609 python3.9[121342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:43 np0005603609 python3.9[121420]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:43.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:44 np0005603609 python3.9[121572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:44.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:44 np0005603609 python3.9[121650]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.258z4rvf recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:45 np0005603609 python3.9[121802]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:45.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:45 np0005603609 python3.9[121880]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:46.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:46 np0005603609 python3.9[122032]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:15:47 np0005603609 python3[122185]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 02:15:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:47.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:47 np0005603609 python3.9[122387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:47 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:15:47 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:15:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:48.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:48 np0005603609 python3.9[122512]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843747.4907806-432-174201057901757/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:49 np0005603609 python3.9[122664]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:49.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:49 np0005603609 python3.9[122789]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843748.8130634-477-136365828927866/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:50.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:50 np0005603609 python3.9[122941]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:51 np0005603609 python3.9[123066]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843750.0337749-522-209040304960066/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:51.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:15:51 np0005603609 python3.9[123218]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:52 np0005603609 python3.9[123343]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843751.2528474-567-167934209854126/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:52.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:15:53 np0005603609 python3.9[123495]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:53 np0005603609 python3.9[123620]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843752.4605975-612-231680721301677/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:53.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:15:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:54 np0005603609 python3.9[123772]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:54.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:54 np0005603609 python3.9[123924]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:15:55 np0005603609 python3.9[124079]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:55.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:15:56 np0005603609 python3.9[124231]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:15:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:56.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:56 np0005603609 python3.9[124384]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:15:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:15:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:57.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:15:57 np0005603609 python3.9[124538]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:15:58 np0005603609 python3.9[124693]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:15:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:58.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:15:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:15:59 np0005603609 python3.9[124843]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:15:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:15:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:15:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:59.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:16:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:00.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:00 np0005603609 python3.9[124996]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:16:00 np0005603609 ovs-vsctl[124997]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 31 02:16:01 np0005603609 python3.9[125149]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:16:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:01.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:02 np0005603609 python3.9[125306]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:16:02 np0005603609 ovs-vsctl[125307]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 31 02:16:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:02.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:02 np0005603609 python3.9[125457]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:16:03 np0005603609 python3.9[125611]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:16:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:03.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:03 np0005603609 python3.9[125763]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:16:04 np0005603609 python3.9[125841]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:16:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:04.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:04 np0005603609 python3.9[125993]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:16:05 np0005603609 python3.9[126071]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:16:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:05.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:06 np0005603609 python3.9[126223]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:16:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:06.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:06 np0005603609 python3.9[126375]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:16:07 np0005603609 python3.9[126453]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:16:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:07.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:07 np0005603609 python3.9[126605]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:16:08 np0005603609 python3.9[126683]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:16:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:08.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:08 np0005603609 python3.9[126835]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:16:08 np0005603609 systemd[1]: Reloading.
Jan 31 02:16:09 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:16:09 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:16:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:09.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:10 np0005603609 python3.9[127025]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:16:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:10.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:10 np0005603609 python3.9[127103]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:16:11 np0005603609 python3.9[127255]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:16:11 np0005603609 python3.9[127333]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:16:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:11.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:12 np0005603609 python3.9[127485]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:16:12 np0005603609 systemd[1]: Reloading.
Jan 31 02:16:12 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:16:12 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:16:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:12.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:12 np0005603609 systemd[1]: Starting Create netns directory...
Jan 31 02:16:12 np0005603609 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 02:16:12 np0005603609 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 02:16:12 np0005603609 systemd[1]: Finished Create netns directory.
Jan 31 02:16:13 np0005603609 python3.9[127680]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:16:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:13.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:14 np0005603609 python3.9[127832]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:16:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:14.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:14 np0005603609 python3.9[127955]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843773.575893-1365-201165068931188/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:16:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:15.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:15 np0005603609 python3.9[128107]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:16:16 np0005603609 python3.9[128259]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:16:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:16.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:17 np0005603609 python3.9[128411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:16:17 np0005603609 python3.9[128534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843776.6034796-1464-73366363702144/.source.json _original_basename=.mt9gijhv follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:16:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:17.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:18 np0005603609 python3.9[128684]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:16:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:18.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:19.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:20 np0005603609 python3.9[129107]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 31 02:16:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:20.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:21 np0005603609 python3.9[129259]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 02:16:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:21.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:22 np0005603609 python3[129411]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 02:16:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:22.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:23.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:24.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:25.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:26.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:27 np0005603609 podman[129425]: 2026-01-31 07:16:27.483478233 +0000 UTC m=+4.984908399 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 02:16:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:27.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:27 np0005603609 podman[129543]: 2026-01-31 07:16:27.626086943 +0000 UTC m=+0.069994809 container create 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 02:16:27 np0005603609 podman[129543]: 2026-01-31 07:16:27.573631518 +0000 UTC m=+0.017539404 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 02:16:27 np0005603609 python3[129411]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 02:16:28 np0005603609 python3.9[129733]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:16:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:28.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:29 np0005603609 python3.9[129887]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:16:29 np0005603609 python3.9[129963]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:16:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:29.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:16:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5776 writes, 24K keys, 5776 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5776 writes, 914 syncs, 6.32 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5776 writes, 24K keys, 5776 commit groups, 1.0 writes per commit group, ingest: 18.94 MB, 0.03 MB/s#012Interval WAL: 5776 writes, 914 syncs, 6.32 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f1fef0cf30#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f1fef0cf30#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Jan 31 02:16:30 np0005603609 python3.9[130114]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843789.6010315-1698-209819617552559/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:16:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:30.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:30 np0005603609 python3.9[130190]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:16:30 np0005603609 systemd[1]: Reloading.
Jan 31 02:16:30 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:16:30 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:16:31 np0005603609 python3.9[130302]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:16:31 np0005603609 systemd[1]: Reloading.
Jan 31 02:16:31 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:16:31 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:16:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:31.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:31 np0005603609 systemd[1]: Starting ovn_controller container...
Jan 31 02:16:31 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:16:31 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1faaac280500d2690d379881c4e3b6d493c0db15843f23b9ce3c1e07687075/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 31 02:16:32 np0005603609 systemd[1]: Started /usr/bin/podman healthcheck run 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e.
Jan 31 02:16:32 np0005603609 podman[130343]: 2026-01-31 07:16:32.100500713 +0000 UTC m=+0.318952991 container init 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: + sudo -E kolla_set_configs
Jan 31 02:16:32 np0005603609 podman[130343]: 2026-01-31 07:16:32.129180201 +0000 UTC m=+0.347632469 container start 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 02:16:32 np0005603609 edpm-start-podman-container[130343]: ovn_controller
Jan 31 02:16:32 np0005603609 systemd[1]: Created slice User Slice of UID 0.
Jan 31 02:16:32 np0005603609 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 31 02:16:32 np0005603609 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 31 02:16:32 np0005603609 systemd[1]: Starting User Manager for UID 0...
Jan 31 02:16:32 np0005603609 edpm-start-podman-container[130342]: Creating additional drop-in dependency for "ovn_controller" (3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e)
Jan 31 02:16:32 np0005603609 podman[130366]: 2026-01-31 07:16:32.199840594 +0000 UTC m=+0.062267417 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 02:16:32 np0005603609 systemd[1]: 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e-7b7ea2374fd1cd76.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 02:16:32 np0005603609 systemd[1]: 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e-7b7ea2374fd1cd76.service: Failed with result 'exit-code'.
Jan 31 02:16:32 np0005603609 systemd[1]: Reloading.
Jan 31 02:16:32 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:16:32 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:16:32 np0005603609 systemd[130388]: Queued start job for default target Main User Target.
Jan 31 02:16:32 np0005603609 systemd[130388]: Created slice User Application Slice.
Jan 31 02:16:32 np0005603609 systemd[130388]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 31 02:16:32 np0005603609 systemd[130388]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 02:16:32 np0005603609 systemd[130388]: Reached target Paths.
Jan 31 02:16:32 np0005603609 systemd[130388]: Reached target Timers.
Jan 31 02:16:32 np0005603609 systemd[130388]: Starting D-Bus User Message Bus Socket...
Jan 31 02:16:32 np0005603609 systemd[130388]: Starting Create User's Volatile Files and Directories...
Jan 31 02:16:32 np0005603609 systemd[130388]: Finished Create User's Volatile Files and Directories.
Jan 31 02:16:32 np0005603609 systemd[130388]: Listening on D-Bus User Message Bus Socket.
Jan 31 02:16:32 np0005603609 systemd[130388]: Reached target Sockets.
Jan 31 02:16:32 np0005603609 systemd[130388]: Reached target Basic System.
Jan 31 02:16:32 np0005603609 systemd[130388]: Reached target Main User Target.
Jan 31 02:16:32 np0005603609 systemd[130388]: Startup finished in 130ms.
Jan 31 02:16:32 np0005603609 systemd[1]: Started User Manager for UID 0.
Jan 31 02:16:32 np0005603609 systemd[1]: Started ovn_controller container.
Jan 31 02:16:32 np0005603609 systemd[1]: Started Session c1 of User root.
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: INFO:__main__:Validating config file
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: INFO:__main__:Writing out command to execute
Jan 31 02:16:32 np0005603609 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: ++ cat /run_command
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: + ARGS=
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: + sudo kolla_copy_cacerts
Jan 31 02:16:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:16:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:16:32 np0005603609 systemd[1]: Started Session c2 of User root.
Jan 31 02:16:32 np0005603609 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: + [[ ! -n '' ]]
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: + . kolla_extend_start
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: + umask 0022
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 31 02:16:32 np0005603609 NetworkManager[49064]: <info>  [1769843792.5671] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 31 02:16:32 np0005603609 NetworkManager[49064]: <info>  [1769843792.5681] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:16:32 np0005603609 NetworkManager[49064]: <warn>  [1769843792.5684] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 02:16:32 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:16:32 np0005603609 NetworkManager[49064]: <info>  [1769843792.5692] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 31 02:16:32 np0005603609 NetworkManager[49064]: <info>  [1769843792.5697] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 31 02:16:32 np0005603609 NetworkManager[49064]: <info>  [1769843792.5700] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 02:16:32 np0005603609 kernel: br-int: entered promiscuous mode
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 31 02:16:32 np0005603609 systemd-udevd[130490]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 02:16:32 np0005603609 NetworkManager[49064]: <info>  [1769843792.6490] manager: (ovn-c06836-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 31 02:16:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:16:32Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 02:16:32 np0005603609 NetworkManager[49064]: <info>  [1769843792.6498] manager: (ovn-f59cdb-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Jan 31 02:16:32 np0005603609 kernel: genev_sys_6081: entered promiscuous mode
Jan 31 02:16:32 np0005603609 NetworkManager[49064]: <info>  [1769843792.6667] device (genev_sys_6081): carrier: link connected
Jan 31 02:16:32 np0005603609 NetworkManager[49064]: <info>  [1769843792.6669] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Jan 31 02:16:32 np0005603609 NetworkManager[49064]: <info>  [1769843792.6680] manager: (ovn-5c3074-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 31 02:16:33 np0005603609 python3.9[130620]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 02:16:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:33.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:34 np0005603609 python3.9[130772]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:16:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:34.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:34 np0005603609 python3.9[130895]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843793.937283-1833-93898535259463/.source.yaml _original_basename=.sj0ocwmq follow=False checksum=f6b75a149047666d3f825893ea1fa078d9873798 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:16:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:35.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:35 np0005603609 python3.9[131047]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:16:35 np0005603609 ovs-vsctl[131048]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 31 02:16:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:36.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:36 np0005603609 python3.9[131201]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:16:36 np0005603609 ovs-vsctl[131203]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 31 02:16:37 np0005603609 python3.9[131356]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:16:37 np0005603609 ovs-vsctl[131357]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 31 02:16:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:37.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:37 np0005603609 systemd[1]: session-45.scope: Deactivated successfully.
Jan 31 02:16:37 np0005603609 systemd[1]: session-45.scope: Consumed 51.693s CPU time.
Jan 31 02:16:37 np0005603609 systemd-logind[823]: Session 45 logged out. Waiting for processes to exit.
Jan 31 02:16:37 np0005603609 systemd-logind[823]: Removed session 45.
Jan 31 02:16:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:38.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:39.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:40.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:41.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:42.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:42 np0005603609 systemd[1]: Stopping User Manager for UID 0...
Jan 31 02:16:42 np0005603609 systemd[130388]: Activating special unit Exit the Session...
Jan 31 02:16:42 np0005603609 systemd[130388]: Stopped target Main User Target.
Jan 31 02:16:42 np0005603609 systemd[130388]: Stopped target Basic System.
Jan 31 02:16:42 np0005603609 systemd[130388]: Stopped target Paths.
Jan 31 02:16:42 np0005603609 systemd[130388]: Stopped target Sockets.
Jan 31 02:16:42 np0005603609 systemd[130388]: Stopped target Timers.
Jan 31 02:16:42 np0005603609 systemd[130388]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 02:16:42 np0005603609 systemd[130388]: Closed D-Bus User Message Bus Socket.
Jan 31 02:16:42 np0005603609 systemd[130388]: Stopped Create User's Volatile Files and Directories.
Jan 31 02:16:42 np0005603609 systemd[130388]: Removed slice User Application Slice.
Jan 31 02:16:42 np0005603609 systemd[130388]: Reached target Shutdown.
Jan 31 02:16:42 np0005603609 systemd[130388]: Finished Exit the Session.
Jan 31 02:16:42 np0005603609 systemd[130388]: Reached target Exit the Session.
Jan 31 02:16:42 np0005603609 systemd[1]: user@0.service: Deactivated successfully.
Jan 31 02:16:42 np0005603609 systemd[1]: Stopped User Manager for UID 0.
Jan 31 02:16:42 np0005603609 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 31 02:16:42 np0005603609 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 31 02:16:42 np0005603609 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 31 02:16:42 np0005603609 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 31 02:16:42 np0005603609 systemd[1]: Removed slice User Slice of UID 0.
Jan 31 02:16:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:43.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:43 np0005603609 systemd-logind[823]: New session 47 of user zuul.
Jan 31 02:16:43 np0005603609 systemd[1]: Started Session 47 of User zuul.
Jan 31 02:16:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:44.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:44 np0005603609 python3.9[131536]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:16:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:45.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:46.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:47.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:48.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:49.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:50.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:51.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:52.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:53.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:54.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:16:55 np0005603609 python3.9[131814]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:16:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:55.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:55 np0005603609 python3.9[131966]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:16:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:56.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:16:56 np0005603609 python3.9[132118]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:16:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:16:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:57.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:16:57 np0005603609 python3.9[132270]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:16:58 np0005603609 python3.9[132422]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:16:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:58.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:16:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:16:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:16:59 np0005603609 python3.9[132572]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:16:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:16:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:16:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:59.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:00 np0005603609 python3.9[132855]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 02:17:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:17:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:17:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:17:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:17:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 31 02:17:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:17:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:00.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:01 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:17:01 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:17:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:17:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:01.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:17:01 np0005603609 python3.9[133005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:02 np0005603609 python3.9[133126]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843820.9414089-218-209985736352755/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:17:02 np0005603609 ovn_controller[130359]: 2026-01-31T07:17:02Z|00025|memory|INFO|16256 kB peak resident set size after 29.8 seconds
Jan 31 02:17:02 np0005603609 ovn_controller[130359]: 2026-01-31T07:17:02Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 31 02:17:02 np0005603609 podman[133127]: 2026-01-31 07:17:02.368339696 +0000 UTC m=+0.081360869 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:17:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:02.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:17:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.3 total, 600.0 interval#012Cumulative writes: 2086 writes, 11K keys, 2086 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 2085 writes, 2085 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2086 writes, 11K keys, 2086 commit groups, 1.0 writes per commit group, ingest: 22.93 MB, 0.04 MB/s#012Interval WAL: 2085 writes, 2085 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     82.5      0.14              0.02         4    0.035       0      0       0.0       0.0#012  L6      1/0    7.37 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1    135.3    115.6      0.21              0.05         3    0.069     12K   1275       0.0       0.0#012 Sum      1/0    7.37 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     81.1    102.4      0.35              0.07         7    0.050     12K   1275       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     89.4    112.7      0.32              0.07         6    0.053     12K   1275       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    135.3    115.6      0.21              0.05         3    0.069     12K   1275       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    107.2      0.11              0.02         3    0.036       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.011, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619ad8ed1f0#2 capacity: 308.00 MB usage: 990.08 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(46,856.83 KB,0.271671%) FilterBlock(7,41.42 KB,0.0131335%) IndexBlock(7,91.83 KB,0.0291156%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 02:17:03 np0005603609 python3.9[133303]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:03.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:03 np0005603609 python3.9[133424]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843822.4808567-263-68576825540927/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:17:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:17:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:04.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:17:04 np0005603609 python3.9[133576]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:17:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:17:05 np0005603609 python3.9[133660]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:17:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:05.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:06.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:07.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:08 np0005603609 python3.9[133813]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:17:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:17:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:08.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:17:08 np0005603609 python3.9[133966]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:09 np0005603609 python3.9[134087]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843828.5078733-374-230397466376983/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:17:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:17:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:09.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:17:10 np0005603609 python3.9[134237]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:17:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:10.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:10 np0005603609 python3.9[134358]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843829.62221-374-195332823516315/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:17:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:17:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:11.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:17:12 np0005603609 python3.9[134508]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:17:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:12.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:17:12 np0005603609 python3.9[134629]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843831.8554268-506-216026940570147/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:17:13 np0005603609 python3.9[134779]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:13.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:13 np0005603609 python3.9[134900]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843832.9252005-506-159037306871409/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:17:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:14.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:14 np0005603609 python3.9[135050]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:17:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:17:15 np0005603609 python3.9[135254]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:17:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:17:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:17:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:15.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:16 np0005603609 python3.9[135406]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:16 np0005603609 python3.9[135484]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:17:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:16.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:17 np0005603609 python3.9[135636]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:17 np0005603609 python3.9[135714]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:17:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:17.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:18 np0005603609 python3.9[135866]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:18.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:18 np0005603609 python3.9[136018]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:19 np0005603609 python3.9[136096]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:17:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:19.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:17:19 np0005603609 python3.9[136248]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:17:20 np0005603609 python3.9[136326]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:17:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:20.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:17:21 np0005603609 python3.9[136478]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:17:21 np0005603609 systemd[1]: Reloading.
Jan 31 02:17:21 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:17:21 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:17:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:17:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:21.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:17:22 np0005603609 python3.9[136666]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:22 np0005603609 python3.9[136744]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:22.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:23 np0005603609 python3.9[136896]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:23.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:23 np0005603609 python3.9[136974]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:24 np0005603609 python3.9[137126]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:17:24 np0005603609 systemd[1]: Reloading.
Jan 31 02:17:24 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:17:24 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:17:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:24.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:17:24.604339) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843844604387, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1765, "num_deletes": 252, "total_data_size": 4411979, "memory_usage": 4462688, "flush_reason": "Manual Compaction"}
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843844657710, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2882773, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10643, "largest_seqno": 12402, "table_properties": {"data_size": 2875389, "index_size": 4391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15006, "raw_average_key_size": 19, "raw_value_size": 2860522, "raw_average_value_size": 3768, "num_data_blocks": 197, "num_entries": 759, "num_filter_entries": 759, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843670, "oldest_key_time": 1769843670, "file_creation_time": 1769843844, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 53409 microseconds, and 4004 cpu microseconds.
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:17:24 np0005603609 systemd[1]: Starting Create netns directory...
Jan 31 02:17:24 np0005603609 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 02:17:24 np0005603609 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 02:17:24 np0005603609 systemd[1]: Finished Create netns directory.
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:17:24.657756) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2882773 bytes OK
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:17:24.657771) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:17:24.686686) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:17:24.686724) EVENT_LOG_v1 {"time_micros": 1769843844686716, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:17:24.686744) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4403920, prev total WAL file size 4403920, number of live WAL files 2.
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:17:24.687376) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2815KB)], [21(7542KB)]
Jan 31 02:17:24 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843844687399, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 10606280, "oldest_snapshot_seqno": -1}
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4013 keys, 8424122 bytes, temperature: kUnknown
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843845145297, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 8424122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8394607, "index_size": 18397, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97438, "raw_average_key_size": 24, "raw_value_size": 8319441, "raw_average_value_size": 2073, "num_data_blocks": 794, "num_entries": 4013, "num_filter_entries": 4013, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769843844, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:17:25.145662) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8424122 bytes
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:17:25.226391) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 23.2 rd, 18.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(6.6) write-amplify(2.9) OK, records in: 4535, records dropped: 522 output_compression: NoCompression
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:17:25.226428) EVENT_LOG_v1 {"time_micros": 1769843845226414, "job": 10, "event": "compaction_finished", "compaction_time_micros": 458024, "compaction_time_cpu_micros": 14440, "output_level": 6, "num_output_files": 1, "total_output_size": 8424122, "num_input_records": 4535, "num_output_records": 4013, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843845226862, "job": 10, "event": "table_file_deletion", "file_number": 23}
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843845227561, "job": 10, "event": "table_file_deletion", "file_number": 21}
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:17:24.687331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:17:25.227635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:17:25.227643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:17:25.227646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:17:25.227649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:17:25.227652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:17:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:17:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:25.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:26 np0005603609 python3.9[137321]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:17:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:26.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:26 np0005603609 python3.9[137473]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:27 np0005603609 python3.9[137596]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843846.3389769-959-27071078804405/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:17:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:27.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:28 np0005603609 python3.9[137748]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:28.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:28 np0005603609 python3.9[137900]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:17:29 np0005603609 python3.9[138052]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:29.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:30 np0005603609 python3.9[138175]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843849.1218233-1058-51359680893873/.source.json _original_basename=.q7i4e5hu follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:17:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:30.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:30 np0005603609 python3.9[138325]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:17:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:31.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:17:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:32.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:32 np0005603609 podman[138720]: 2026-01-31 07:17:32.638804031 +0000 UTC m=+0.089677116 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 02:17:32 np0005603609 python3.9[138764]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 31 02:17:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:33.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:33 np0005603609 python3.9[138924]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 02:17:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:17:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:34.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:17:35 np0005603609 python3[139076]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 02:17:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:17:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:17:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:35.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:17:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:36.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:17:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:37.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:17:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:38.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:39.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:17:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:40.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:41.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:42.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:43.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:17:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:44.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:17:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:45.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:17:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:46.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:17:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:47.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:48 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 02:17:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:48.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:17:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:49.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:17:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:50.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:17:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:51.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:17:52 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 02:17:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:17:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:52.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:17:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:17:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:53.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:17:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:54.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:17:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:55.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:17:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:56.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:17:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:17:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:57.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:17:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:17:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:58.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:17:59 np0005603609 podman[139089]: 2026-01-31 07:17:59.572349737 +0000 UTC m=+24.495456872 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:17:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:17:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:17:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:59.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:17:59 np0005603609 podman[139211]: 2026-01-31 07:17:59.652746718 +0000 UTC m=+0.017714121 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:17:59 np0005603609 podman[139211]: 2026-01-31 07:17:59.898872628 +0000 UTC m=+0.263840001 container create 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 02:17:59 np0005603609 python3[139076]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:18:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:18:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:18:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:00.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:18:01 np0005603609 python3.9[139401]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:18:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:01.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:02 np0005603609 python3.9[139555]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:02 np0005603609 python3.9[139631]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:18:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:02.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:03 np0005603609 podman[139754]: 2026-01-31 07:18:03.020822601 +0000 UTC m=+0.071116426 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Jan 31 02:18:03 np0005603609 python3.9[139795]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843882.6052032-1292-39867669985711/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:03 np0005603609 python3.9[139886]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:18:03 np0005603609 systemd[1]: Reloading.
Jan 31 02:18:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:03.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:03 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:18:03 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:18:04 np0005603609 python3.9[139996]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:18:04 np0005603609 systemd[1]: Reloading.
Jan 31 02:18:04 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:18:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:04.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:04 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:18:04 np0005603609 systemd[1]: Starting ovn_metadata_agent container...
Jan 31 02:18:05 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:18:05 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5ab075a287d334fd3a0ffeb875f32cc0e37bd4a8cbdbbedb8bc441aa1bccd9/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 31 02:18:05 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5ab075a287d334fd3a0ffeb875f32cc0e37bd4a8cbdbbedb8bc441aa1bccd9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:18:05 np0005603609 systemd[1]: Started /usr/bin/podman healthcheck run 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e.
Jan 31 02:18:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:18:05 np0005603609 podman[140037]: 2026-01-31 07:18:05.652124942 +0000 UTC m=+0.820015463 container init 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: + sudo -E kolla_set_configs
Jan 31 02:18:05 np0005603609 podman[140037]: 2026-01-31 07:18:05.675357136 +0000 UTC m=+0.843247627 container start 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 02:18:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:05.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: INFO:__main__:Validating config file
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: INFO:__main__:Copying service configuration files
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 31 02:18:05 np0005603609 edpm-start-podman-container[140037]: ovn_metadata_agent
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: INFO:__main__:Writing out command to execute
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: ++ cat /run_command
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: + CMD=neutron-ovn-metadata-agent
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: + ARGS=
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: + sudo kolla_copy_cacerts
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: Running command: 'neutron-ovn-metadata-agent'
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: + [[ ! -n '' ]]
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: + . kolla_extend_start
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: + umask 0022
Jan 31 02:18:05 np0005603609 ovn_metadata_agent[140053]: + exec neutron-ovn-metadata-agent
Jan 31 02:18:05 np0005603609 podman[140059]: 2026-01-31 07:18:05.794117447 +0000 UTC m=+0.110897601 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 31 02:18:05 np0005603609 edpm-start-podman-container[140036]: Creating additional drop-in dependency for "ovn_metadata_agent" (94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e)
Jan 31 02:18:05 np0005603609 systemd[1]: Reloading.
Jan 31 02:18:05 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:18:05 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:18:06 np0005603609 systemd[1]: Started ovn_metadata_agent container.
Jan 31 02:18:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:06.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.413 140058 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.414 140058 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.414 140058 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.414 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.414 140058 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.414 140058 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.414 140058 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.415 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.415 140058 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.415 140058 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.415 140058 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.415 140058 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.415 140058 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.415 140058 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.415 140058 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.415 140058 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.416 140058 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.416 140058 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.416 140058 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.416 140058 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.416 140058 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.416 140058 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.416 140058 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.416 140058 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.416 140058 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.416 140058 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.417 140058 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.417 140058 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.417 140058 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.417 140058 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.417 140058 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.417 140058 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.417 140058 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.417 140058 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.417 140058 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.417 140058 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.418 140058 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.418 140058 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.418 140058 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.418 140058 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.418 140058 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.418 140058 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.418 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.418 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.419 140058 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.419 140058 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.419 140058 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.419 140058 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.419 140058 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.419 140058 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.419 140058 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.419 140058 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.419 140058 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.419 140058 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.419 140058 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.420 140058 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.420 140058 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.420 140058 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.420 140058 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.420 140058 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.420 140058 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.420 140058 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.420 140058 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.421 140058 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.421 140058 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.421 140058 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.421 140058 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.421 140058 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.421 140058 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.421 140058 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.421 140058 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.421 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.421 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.422 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.422 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.422 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.422 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.422 140058 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.422 140058 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.422 140058 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.422 140058 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.422 140058 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.422 140058 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.423 140058 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.423 140058 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.423 140058 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.423 140058 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.423 140058 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.423 140058 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.423 140058 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.423 140058 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.423 140058 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.423 140058 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.424 140058 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.424 140058 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.424 140058 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.424 140058 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.424 140058 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.424 140058 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.424 140058 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.424 140058 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.424 140058 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.424 140058 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.424 140058 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.425 140058 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.425 140058 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.425 140058 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.425 140058 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.425 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.425 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.425 140058 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.425 140058 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.425 140058 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.426 140058 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.426 140058 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.426 140058 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.426 140058 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.426 140058 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.426 140058 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.426 140058 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.426 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.426 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.426 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.427 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.427 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.427 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.427 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.427 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.428 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.428 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.428 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.428 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.428 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.428 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.428 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.428 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.428 140058 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.428 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.429 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.429 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.429 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.429 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.429 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.429 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.429 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.429 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.429 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.429 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.430 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.430 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.430 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.430 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.430 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.430 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.430 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.430 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.430 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.431 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.431 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.431 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.431 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.431 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.431 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.431 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.431 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.431 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.431 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.432 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.432 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.432 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.432 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.432 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.432 140058 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.432 140058 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.432 140058 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.432 140058 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.433 140058 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.433 140058 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.433 140058 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.433 140058 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.433 140058 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.433 140058 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.433 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.433 140058 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.433 140058 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.433 140058 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.434 140058 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.434 140058 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.434 140058 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.434 140058 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.434 140058 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.434 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.434 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.434 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.434 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.435 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.435 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.435 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.435 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.435 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.435 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.435 140058 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.435 140058 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.435 140058 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.435 140058 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.436 140058 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.436 140058 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.436 140058 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.436 140058 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.436 140058 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.436 140058 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.436 140058 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.436 140058 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.436 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.436 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.437 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.437 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.437 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.437 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.437 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.437 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.437 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.437 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.437 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.437 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.438 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.438 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.438 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.438 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.438 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.438 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.438 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.438 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.438 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.438 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.439 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.439 140058 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.439 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.439 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.439 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.439 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.439 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.439 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.440 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.440 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.440 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.440 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.440 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.440 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.440 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.440 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.440 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.441 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.441 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.441 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.441 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.441 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.441 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.441 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.441 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.442 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.442 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.442 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.442 140058 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.442 140058 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.442 140058 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.442 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.442 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.442 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.442 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.443 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.443 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.443 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.443 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.443 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.443 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.443 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.443 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.443 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.443 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.444 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.444 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.444 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.444 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.444 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.444 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.444 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.444 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.444 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.445 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.445 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.445 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.445 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.445 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.445 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.445 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.445 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.445 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.445 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.446 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.446 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.446 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.446 140058 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.446 140058 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.454 140058 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.454 140058 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.454 140058 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.454 140058 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.455 140058 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.466 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name a8402939-fce1-46a9-9749-88c4c6334003 (UUID: a8402939-fce1-46a9-9749-88c4c6334003) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.491 140058 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.491 140058 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.491 140058 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.491 140058 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.493 140058 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.499 140058 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.504 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'a8402939-fce1-46a9-9749-88c4c6334003'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], external_ids={}, name=a8402939-fce1-46a9-9749-88c4c6334003, nb_cfg_timestamp=1769843800584, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.504 140058 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f2fcd7caf70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.505 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.505 140058 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.505 140058 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.505 140058 INFO oslo_service.service [-] Starting 1 workers#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.508 140058 DEBUG oslo_service.service [-] Started child 140167 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.510 140058 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpqd6d_5zy/privsep.sock']#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.512 140167 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-888031'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.552 140167 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.553 140167 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.553 140167 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.558 140167 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.566 140167 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 31 02:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:07.574 140167 INFO eventlet.wsgi.server [-] (140167) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Jan 31 02:18:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:18:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:07.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:18:08 np0005603609 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 31 02:18:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:08.140 140058 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 31 02:18:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:08.141 140058 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpqd6d_5zy/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 31 02:18:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:08.028 140172 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 31 02:18:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:08.033 140172 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 31 02:18:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:08.039 140172 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 31 02:18:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:08.039 140172 INFO oslo.privsep.daemon [-] privsep daemon running as pid 140172#033[00m
Jan 31 02:18:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:08.143 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[01e8579f-a61c-49b8-b8b4-e0a7a6f1bb59]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:18:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:08.630 140172 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:18:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:08.630 140172 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:18:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:08.630 140172 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:18:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:08.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.188 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0444d8-e139-49e4-8a2e-78fc0f67ca73]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.190 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, column=external_ids, values=({'neutron:ovn-metadata-id': '8cd95683-e428-552a-a9bd-3e4892f4823c'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.286 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.364 140058 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.364 140058 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.365 140058 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.365 140058 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.365 140058 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.365 140058 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.365 140058 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.365 140058 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.366 140058 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.366 140058 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.366 140058 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.366 140058 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.366 140058 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.366 140058 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.366 140058 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.367 140058 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.367 140058 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.367 140058 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.367 140058 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.367 140058 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.367 140058 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.367 140058 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.368 140058 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.368 140058 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.368 140058 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.368 140058 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.368 140058 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.368 140058 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.369 140058 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.369 140058 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.369 140058 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.369 140058 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.369 140058 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.369 140058 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.369 140058 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.370 140058 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.370 140058 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.370 140058 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.370 140058 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.370 140058 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.370 140058 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.370 140058 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.371 140058 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.371 140058 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.371 140058 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.371 140058 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.371 140058 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.371 140058 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.371 140058 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.372 140058 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.372 140058 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.372 140058 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.372 140058 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.372 140058 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.372 140058 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.372 140058 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.372 140058 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.373 140058 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.373 140058 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.373 140058 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.373 140058 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.373 140058 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.373 140058 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.373 140058 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.374 140058 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.374 140058 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.374 140058 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.374 140058 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.374 140058 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.374 140058 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.374 140058 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.375 140058 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.375 140058 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.375 140058 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.375 140058 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.375 140058 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.375 140058 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.375 140058 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.376 140058 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.376 140058 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.376 140058 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.376 140058 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.376 140058 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.376 140058 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.376 140058 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.376 140058 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.377 140058 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.377 140058 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.377 140058 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.377 140058 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.377 140058 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.377 140058 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.377 140058 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.377 140058 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.378 140058 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.378 140058 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.378 140058 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.378 140058 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.378 140058 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.378 140058 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.378 140058 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.379 140058 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.379 140058 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.379 140058 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.379 140058 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.379 140058 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.379 140058 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.379 140058 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.379 140058 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.380 140058 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.380 140058 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.380 140058 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.380 140058 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.380 140058 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.380 140058 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.380 140058 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.381 140058 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.381 140058 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.381 140058 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.381 140058 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.381 140058 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.381 140058 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.382 140058 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.382 140058 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.382 140058 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.382 140058 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.382 140058 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.382 140058 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.382 140058 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.382 140058 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.383 140058 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.383 140058 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.383 140058 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.383 140058 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.383 140058 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.383 140058 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.383 140058 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.384 140058 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.384 140058 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.384 140058 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.384 140058 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.384 140058 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.384 140058 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.384 140058 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.384 140058 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.385 140058 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.385 140058 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.385 140058 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.385 140058 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.385 140058 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.385 140058 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.385 140058 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.386 140058 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.386 140058 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.386 140058 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.386 140058 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.386 140058 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.386 140058 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.386 140058 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.386 140058 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.387 140058 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.387 140058 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.387 140058 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.387 140058 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.387 140058 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.387 140058 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.387 140058 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.387 140058 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.388 140058 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.388 140058 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.388 140058 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.388 140058 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.388 140058 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.388 140058 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.388 140058 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.389 140058 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.389 140058 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.389 140058 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.389 140058 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.389 140058 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.389 140058 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.389 140058 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.389 140058 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.390 140058 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.390 140058 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.390 140058 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.390 140058 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.390 140058 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.390 140058 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.390 140058 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.391 140058 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.391 140058 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.391 140058 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.391 140058 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.391 140058 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.391 140058 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.391 140058 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.391 140058 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.392 140058 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.392 140058 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.392 140058 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.392 140058 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.392 140058 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.392 140058 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.392 140058 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.393 140058 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.393 140058 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.393 140058 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.393 140058 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.393 140058 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.393 140058 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.393 140058 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.393 140058 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.394 140058 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.394 140058 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.394 140058 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.394 140058 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.394 140058 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.394 140058 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.394 140058 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.395 140058 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.395 140058 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.395 140058 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.395 140058 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.395 140058 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.395 140058 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.395 140058 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.395 140058 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.396 140058 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.396 140058 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.396 140058 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.396 140058 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.396 140058 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.396 140058 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.396 140058 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.397 140058 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.397 140058 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.397 140058 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.397 140058 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.397 140058 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.397 140058 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.398 140058 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.398 140058 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.398 140058 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.398 140058 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.398 140058 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.398 140058 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.398 140058 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.399 140058 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.399 140058 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.399 140058 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.399 140058 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.399 140058 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.399 140058 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.399 140058 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.399 140058 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.400 140058 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.400 140058 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.400 140058 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.400 140058 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.400 140058 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.400 140058 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.400 140058 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.401 140058 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.401 140058 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.401 140058 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.401 140058 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.401 140058 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.401 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.401 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.402 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.402 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.402 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.402 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.402 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.402 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.402 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.402 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.403 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.403 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.403 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.403 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.403 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.403 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.403 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.404 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.404 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.404 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.404 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.404 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.404 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.404 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.404 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.405 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.405 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.405 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.405 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.405 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.405 140058 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.405 140058 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.406 140058 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.406 140058 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.406 140058 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:18:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:18:09.406 140058 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 31 02:18:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:09.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:18:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:10.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:11 np0005603609 python3.9[140302]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 02:18:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:18:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:11.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:18:12 np0005603609 python3.9[140454]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:18:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:12.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:13 np0005603609 python3.9[140579]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843891.7884007-1427-21585958176404/.source.yaml _original_basename=.owa1os6p follow=False checksum=d444960821d3e2834cd73828669d050a1a289a05 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:18:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:13.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:18:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:14.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:18:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:18:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:15.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:18:16 np0005603609 systemd-logind[823]: Session 47 logged out. Waiting for processes to exit.
Jan 31 02:18:16 np0005603609 systemd[1]: session-47.scope: Deactivated successfully.
Jan 31 02:18:16 np0005603609 systemd[1]: session-47.scope: Consumed 47.569s CPU time.
Jan 31 02:18:16 np0005603609 systemd-logind[823]: Removed session 47.
Jan 31 02:18:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:16.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:18:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:17.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:18:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:18:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:18.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:18:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:18:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:18:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:18:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:19.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:18:20 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 31 02:18:20 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 31 02:18:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:18:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:20.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:21 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 31 02:18:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:21.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:18:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:18:22 np0005603609 systemd-logind[823]: New session 48 of user zuul.
Jan 31 02:18:22 np0005603609 systemd[1]: Started Session 48 of User zuul.
Jan 31 02:18:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:22.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:23 np0005603609 python3.9[140894]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:18:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:18:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:23.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:18:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:24.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:24 np0005603609 python3.9[141050]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:18:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:18:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:18:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:25.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:18:26 np0005603609 python3.9[141216]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:18:26 np0005603609 systemd[1]: Reloading.
Jan 31 02:18:26 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:18:26 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:18:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:26.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:27.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:27 np0005603609 python3.9[141401]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:18:27 np0005603609 network[141418]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:18:27 np0005603609 network[141419]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:18:27 np0005603609 network[141420]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:18:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:28.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:18:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:18:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:29.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:18:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:30.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:31 np0005603609 python3.9[141732]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:18:31 np0005603609 python3.9[141885]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:18:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:31.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:32 np0005603609 python3.9[142038]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:18:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:32.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:33 np0005603609 python3.9[142191]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:18:33 np0005603609 podman[142193]: 2026-01-31 07:18:33.171679274 +0000 UTC m=+0.078524602 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 02:18:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:18:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:33.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:18:33 np0005603609 python3.9[142370]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:18:34 np0005603609 python3.9[142523]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:18:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:34.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:35 np0005603609 python3.9[142676]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:18:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:18:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:35.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:36 np0005603609 podman[142702]: 2026-01-31 07:18:36.161527007 +0000 UTC m=+0.052457281 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:18:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:36.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:37 np0005603609 python3.9[142848]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:37.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:38 np0005603609 python3.9[143000]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:38.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:38 np0005603609 python3.9[143152]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:39 np0005603609 python3.9[143304]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:18:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:39.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:18:39 np0005603609 python3.9[143456]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:40 np0005603609 python3.9[143608]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:18:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:40.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:41 np0005603609 python3.9[143760]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:41.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:41 np0005603609 python3.9[143912]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:42 np0005603609 python3.9[144064]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:42.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:42 np0005603609 python3.9[144216]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:43 np0005603609 python3.9[144368]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:43.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:44 np0005603609 python3.9[144520]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:44.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:44 np0005603609 python3.9[144672]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:45 np0005603609 python3.9[144824]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:18:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:45.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:46 np0005603609 python3.9[144976]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:18:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:46.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:47 np0005603609 python3.9[145128]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 02:18:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:18:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:47.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:18:48 np0005603609 python3.9[145280]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:18:48 np0005603609 systemd[1]: Reloading.
Jan 31 02:18:48 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:18:48 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:18:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:48.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:48 np0005603609 python3.9[145468]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:18:49 np0005603609 python3.9[145621]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:18:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:49.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:50 np0005603609 python3.9[145774]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:18:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:18:50 np0005603609 python3.9[145927]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:18:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:50.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:51 np0005603609 python3.9[146080]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:18:51 np0005603609 python3.9[146233]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:18:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:51.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:52 np0005603609 python3.9[146386]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:18:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:52.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:53.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:53 np0005603609 python3.9[146539]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 31 02:18:54 np0005603609 python3.9[146692]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 02:18:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:54.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:18:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:55.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:56 np0005603609 python3.9[146850]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 02:18:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:56.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:18:57 np0005603609 python3.9[147010]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:18:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:18:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:57.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:18:58 np0005603609 python3.9[147095]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:18:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:18:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:58.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:18:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:18:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:18:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:59.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:19:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:19:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:00.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:19:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:01.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:02.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:03.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:04 np0005603609 podman[147106]: 2026-01-31 07:19:04.242083581 +0000 UTC m=+0.118187401 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:19:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:04.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:19:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:05.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:19:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:06.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:19:07 np0005603609 podman[147134]: 2026-01-31 07:19:07.200981767 +0000 UTC m=+0.076377931 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Jan 31 02:19:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:19:07.449 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:19:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:19:07.449 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:19:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:19:07.449 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:19:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:07.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:19:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:08.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:19:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:09.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:19:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:10.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:11.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:12.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:19:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:13.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:19:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000046s ======
Jan 31 02:19:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:14.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000046s
Jan 31 02:19:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:19:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:19:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:15.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:19:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:16.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:19:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:17.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:19:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:18.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:19.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:19:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:20.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:21.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:22.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:23.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:24.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:19:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:25.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:26.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:19:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:27.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:19:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:28.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:19:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:29.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:19:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:19:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:19:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:30.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:31.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:32 np0005603609 kernel: SELinux:  Converting 2778 SID table entries...
Jan 31 02:19:32 np0005603609 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:19:32 np0005603609 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:19:32 np0005603609 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:19:32 np0005603609 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:19:32 np0005603609 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:19:32 np0005603609 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:19:32 np0005603609 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 02:19:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:32.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:19:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:33.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:19:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:34.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:35 np0005603609 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 31 02:19:35 np0005603609 podman[147473]: 2026-01-31 07:19:35.208845382 +0000 UTC m=+0.087056356 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:19:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:19:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:35.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:36.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:19:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:37.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:19:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:19:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:19:38 np0005603609 podman[147550]: 2026-01-31 07:19:38.233800313 +0000 UTC m=+0.051030274 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:19:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:19:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:38.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:19:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:39.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:19:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:40.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:41 np0005603609 kernel: SELinux:  Converting 2778 SID table entries...
Jan 31 02:19:41 np0005603609 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:19:41 np0005603609 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:19:41 np0005603609 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:19:41 np0005603609 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:19:41 np0005603609 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:19:41 np0005603609 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:19:41 np0005603609 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 02:19:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:19:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:41.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:19:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:42.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:43.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:19:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:44.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:19:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:19:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:45.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:19:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:46.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:19:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:19:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:47.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:19:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:48.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:49.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:19:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:50.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:51.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:52.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:53.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:19:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:54.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:19:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:19:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:19:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:55.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:19:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:56.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:57.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:58.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:19:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:19:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:19:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:59.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:00 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 02:20:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:00.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:01.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:02.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:03.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:20:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:04.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:20:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:05.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:06 np0005603609 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 31 02:20:06 np0005603609 podman[159109]: 2026-01-31 07:20:06.294292103 +0000 UTC m=+0.161653894 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 31 02:20:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:06.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:20:07.450 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:20:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:20:07.453 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:20:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:20:07.453 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:20:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:07.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:08.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:09 np0005603609 podman[161422]: 2026-01-31 07:20:09.1567008 +0000 UTC m=+0.050217324 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 02:20:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:09.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:10.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:11.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:12.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:13.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:14.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:15.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:16.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:17.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:18.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:19.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:20.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:21.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:20:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:22.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:20:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:23.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:20:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:24.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:20:25 np0005603609 kernel: SELinux:  Converting 2779 SID table entries...
Jan 31 02:20:25 np0005603609 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:20:25 np0005603609 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:20:25 np0005603609 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:20:25 np0005603609 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:20:25 np0005603609 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:20:25 np0005603609 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:20:25 np0005603609 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 02:20:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:25.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:26 np0005603609 dbus-broker-launch[800]: Noticed file-system modification, trigger reload.
Jan 31 02:20:26 np0005603609 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 31 02:20:26 np0005603609 dbus-broker-launch[800]: Noticed file-system modification, trigger reload.
Jan 31 02:20:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:20:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:26.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:20:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:27.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:28.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:20:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:29.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:20:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:30.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:31.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:32.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:33 np0005603609 systemd[1]: Stopping OpenSSH server daemon...
Jan 31 02:20:33 np0005603609 systemd[1]: sshd.service: Deactivated successfully.
Jan 31 02:20:33 np0005603609 systemd[1]: Stopped OpenSSH server daemon.
Jan 31 02:20:33 np0005603609 systemd[1]: sshd.service: Consumed 2.622s CPU time, read 32.0K from disk, written 80.0K to disk.
Jan 31 02:20:33 np0005603609 systemd[1]: Stopped target sshd-keygen.target.
Jan 31 02:20:33 np0005603609 systemd[1]: Stopping sshd-keygen.target...
Jan 31 02:20:33 np0005603609 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 02:20:33 np0005603609 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 02:20:33 np0005603609 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 02:20:33 np0005603609 systemd[1]: Reached target sshd-keygen.target.
Jan 31 02:20:33 np0005603609 systemd[1]: Starting OpenSSH server daemon...
Jan 31 02:20:33 np0005603609 systemd[1]: Started OpenSSH server daemon.
Jan 31 02:20:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:33.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:34.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:35 np0005603609 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:20:35 np0005603609 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:20:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:35 np0005603609 systemd[1]: Reloading.
Jan 31 02:20:35 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:20:35 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:20:35 np0005603609 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:20:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:35.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:36.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:37 np0005603609 podman[167776]: 2026-01-31 07:20:37.19106934 +0000 UTC m=+0.077559887 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Jan 31 02:20:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:20:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:38.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:20:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:38.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:38 np0005603609 podman[169734]: 2026-01-31 07:20:38.917179702 +0000 UTC m=+0.067629375 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:20:39 np0005603609 podman[169734]: 2026-01-31 07:20:39.018541352 +0000 UTC m=+0.168991015 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:20:39 np0005603609 podman[170285]: 2026-01-31 07:20:39.356243683 +0000 UTC m=+0.101866772 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 02:20:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:20:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:20:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:40.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:20:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:20:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:20:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:40.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:42 np0005603609 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:20:42 np0005603609 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:20:42 np0005603609 systemd[1]: man-db-cache-update.service: Consumed 7.762s CPU time.
Jan 31 02:20:42 np0005603609 systemd[1]: run-ra9547515cc2548529a66022a0328c145.service: Deactivated successfully.
Jan 31 02:20:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:42.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:42.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:44.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:44.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:46.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:20:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:46.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:20:47 np0005603609 python3.9[174650]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:20:47 np0005603609 systemd[1]: Reloading.
Jan 31 02:20:47 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:20:47 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:20:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:20:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:20:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:48.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:48 np0005603609 python3.9[174839]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:20:48 np0005603609 systemd[1]: Reloading.
Jan 31 02:20:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:48.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:48 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:20:48 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:20:49 np0005603609 python3.9[175029]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:20:49 np0005603609 systemd[1]: Reloading.
Jan 31 02:20:50 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:20:50 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:20:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:50.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:50.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:50 np0005603609 python3.9[175219]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:20:51 np0005603609 systemd[1]: Reloading.
Jan 31 02:20:51 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:20:51 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:20:51 np0005603609 python3.9[175409]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:20:52 np0005603609 systemd[1]: Reloading.
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:20:52.095071) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052095132, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1911, "num_deletes": 251, "total_data_size": 4892616, "memory_usage": 4964864, "flush_reason": "Manual Compaction"}
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Jan 31 02:20:52 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:20:52 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052150078, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 3212469, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12407, "largest_seqno": 14313, "table_properties": {"data_size": 3204452, "index_size": 4896, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 14766, "raw_average_key_size": 18, "raw_value_size": 3188732, "raw_average_value_size": 3995, "num_data_blocks": 220, "num_entries": 798, "num_filter_entries": 798, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843845, "oldest_key_time": 1769843845, "file_creation_time": 1769844052, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 55058 microseconds, and 5650 cpu microseconds.
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:20:52.150139) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 3212469 bytes OK
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:20:52.150159) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:20:52.152137) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:20:52.152149) EVENT_LOG_v1 {"time_micros": 1769844052152145, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:20:52.152166) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 4884028, prev total WAL file size 4884028, number of live WAL files 2.
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:20:52.152760) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(3137KB)], [24(8226KB)]
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052152788, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 11636591, "oldest_snapshot_seqno": -1}
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4294 keys, 11136232 bytes, temperature: kUnknown
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052282445, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 11136232, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11102296, "index_size": 22114, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 104861, "raw_average_key_size": 24, "raw_value_size": 11019572, "raw_average_value_size": 2566, "num_data_blocks": 941, "num_entries": 4294, "num_filter_entries": 4294, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769844052, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:20:52.282630) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11136232 bytes
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:20:52.284005) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.7 rd, 85.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(7.1) write-amplify(3.5) OK, records in: 4811, records dropped: 517 output_compression: NoCompression
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:20:52.284023) EVENT_LOG_v1 {"time_micros": 1769844052284014, "job": 12, "event": "compaction_finished", "compaction_time_micros": 129715, "compaction_time_cpu_micros": 17468, "output_level": 6, "num_output_files": 1, "total_output_size": 11136232, "num_input_records": 4811, "num_output_records": 4294, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052284403, "job": 12, "event": "table_file_deletion", "file_number": 26}
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052285129, "job": 12, "event": "table_file_deletion", "file_number": 24}
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:20:52.152703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:20:52.285177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:20:52.285182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:20:52.285183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:20:52.285184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:20:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:20:52.285186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:20:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:52.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:52.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:52 np0005603609 python3.9[175598]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:20:52 np0005603609 systemd[1]: Reloading.
Jan 31 02:20:53 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:20:53 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:20:53 np0005603609 python3.9[175788]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:20:53 np0005603609 systemd[1]: Reloading.
Jan 31 02:20:54 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:20:54 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:20:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:54.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:54 np0005603609 python3.9[175977]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:20:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:54.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:55 np0005603609 python3.9[176132]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:20:55 np0005603609 systemd[1]: Reloading.
Jan 31 02:20:55 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:20:55 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:20:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:56.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:56.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:57 np0005603609 python3.9[176322]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:20:57 np0005603609 systemd[1]: Reloading.
Jan 31 02:20:58 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:20:58 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:20:58 np0005603609 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 31 02:20:58 np0005603609 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 31 02:20:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:20:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:58.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:20:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:20:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:20:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:58.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:20:59 np0005603609 python3.9[176515]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:21:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:00.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:00 np0005603609 python3.9[176670]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:21:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:21:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:00.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:21:01 np0005603609 python3.9[176825]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:21:01 np0005603609 python3.9[176980]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:21:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:02.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:02 np0005603609 python3.9[177135]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:21:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:21:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:02.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:21:03 np0005603609 python3.9[177290]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:21:03 np0005603609 python3.9[177445]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:21:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:04.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:04 np0005603609 python3.9[177600]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:21:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:21:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:04.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:21:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:05 np0005603609 python3.9[177755]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:21:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:21:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:06.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:21:06 np0005603609 python3.9[177910]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:21:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:21:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:06.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:21:07 np0005603609 python3.9[178065]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:21:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:21:07.451 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:21:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:21:07.453 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:21:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:21:07.453 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:21:07 np0005603609 podman[178067]: 2026-01-31 07:21:07.604558778 +0000 UTC m=+0.119016662 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:21:08 np0005603609 python3.9[178247]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:21:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:21:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:08.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:21:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:08.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:09 np0005603609 podman[178374]: 2026-01-31 07:21:09.822985621 +0000 UTC m=+0.066127260 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:21:10 np0005603609 python3.9[178421]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:21:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:10.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:10 np0005603609 python3.9[178577]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:21:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:21:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:10.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:21:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:12.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:12 np0005603609 python3.9[178732]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:21:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:12.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:13 np0005603609 python3.9[178884]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:21:13 np0005603609 python3.9[179036]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:21:14 np0005603609 python3.9[179188]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:21:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:14.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:14.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:14 np0005603609 python3.9[179340]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:21:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:15 np0005603609 python3.9[179492]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:21:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:16.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:16 np0005603609 python3.9[179642]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:21:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:16.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:17 np0005603609 python3.9[179794]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:17 np0005603609 python3.9[179919]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844076.6590555-1647-63171197656706/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:21:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:18.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:21:18 np0005603609 python3.9[180071]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:21:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:18.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:21:19 np0005603609 python3.9[180196]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844078.1280515-1647-177912949154655/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:19 np0005603609 python3.9[180348]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:20.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:20 np0005603609 python3.9[180473]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844079.3095076-1647-190211998001268/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:20.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:20 np0005603609 python3.9[180625]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:21 np0005603609 python3.9[180750]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844080.5180357-1647-139040141056046/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:22 np0005603609 python3.9[180902]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:22.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:22 np0005603609 python3.9[181027]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844081.6134877-1647-251665273972675/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:22.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:23 np0005603609 python3.9[181179]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:23 np0005603609 python3.9[181304]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844082.7871177-1647-157186427027833/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:21:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:24.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:21:24 np0005603609 python3.9[181456]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:24.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:24 np0005603609 python3.9[181579]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844083.9270067-1647-6598362603080/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:25 np0005603609 python3.9[181731]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:26.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:26 np0005603609 python3.9[181856]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844085.0957532-1647-191910429965977/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:21:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:26.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:21:27 np0005603609 python3.9[182008]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 31 02:21:27 np0005603609 python3.9[182161]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:28.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:28 np0005603609 python3.9[182313]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:28.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:29 np0005603609 python3.9[182465]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:29 np0005603609 python3.9[182617]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:30.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:30 np0005603609 python3.9[182769]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:30.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:31 np0005603609 python3.9[182921]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:31 np0005603609 python3.9[183073]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:32 np0005603609 python3.9[183225]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:32.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:32 np0005603609 python3.9[183377]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:32.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:33 np0005603609 python3.9[183529]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:34 np0005603609 python3.9[183681]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:34.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:34 np0005603609 python3.9[183833]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:21:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:34.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:21:35 np0005603609 python3.9[183985]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:35 np0005603609 python3.9[184137]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:21:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:36.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:21:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:36.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:38 np0005603609 podman[184162]: 2026-01-31 07:21:38.190288251 +0000 UTC m=+0.077409152 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 02:21:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:38.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:38 np0005603609 python3.9[184315]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:38.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:39 np0005603609 python3.9[184438]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844098.1801515-2310-159952303309261/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:39 np0005603609 podman[184562]: 2026-01-31 07:21:39.931539022 +0000 UTC m=+0.047727773 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 02:21:40 np0005603609 python3.9[184609]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:40.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:40 np0005603609 python3.9[184732]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844099.647153-2310-102129321304307/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:21:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:40.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:21:41 np0005603609 python3.9[184884]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:41 np0005603609 python3.9[185007]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844100.7552478-2310-41668358205387/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:21:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:42.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:21:42 np0005603609 python3.9[185159]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:42 np0005603609 python3.9[185282]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844101.8813338-2310-265228472357157/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:42.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:43 np0005603609 python3.9[185434]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:44 np0005603609 python3.9[185557]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844103.0795057-2310-74740317488697/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:44.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:44 np0005603609 python3.9[185709]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:44.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:45 np0005603609 python3.9[185832]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844104.2655282-2310-12179265639836/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:45 np0005603609 python3.9[185984]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:46.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:46 np0005603609 python3.9[186107]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844105.4552279-2310-7814072064275/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:46.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:47 np0005603609 python3.9[186259]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:47 np0005603609 python3.9[186382]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844106.6369567-2310-23664977302052/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:48 np0005603609 python3.9[186648]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:21:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:48.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:21:48 np0005603609 python3.9[186788]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844107.64075-2310-247278904846588/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:48.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:49 np0005603609 python3.9[186940]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:49 np0005603609 python3.9[187063]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844108.8337939-2310-279083574380127/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:21:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:50.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:21:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:50 np0005603609 python3.9[187215]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:21:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:21:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:21:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:21:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:21:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:50.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:51 np0005603609 python3.9[187338]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844110.1463325-2310-36720829210407/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:51 np0005603609 python3.9[187490]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:52 np0005603609 python3.9[187613]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844111.290846-2310-92121515825639/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:52.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:52 np0005603609 python3.9[187765]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:53.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:53 np0005603609 python3.9[187888]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844112.3647277-2310-15416826116398/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:53 np0005603609 python3.9[188040]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:21:54 np0005603609 python3.9[188163]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844113.358491-2310-114828250436468/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:21:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:54.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:55.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:56.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:56 np0005603609 python3.9[188313]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:21:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:21:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:57.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:21:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:21:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:21:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:21:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:58.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:21:59 np0005603609 python3.9[188518]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 31 02:21:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:21:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:21:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:59.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:00.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:00 np0005603609 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 31 02:22:01 np0005603609 python3.9[188674]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:01.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:01 np0005603609 python3.9[188826]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:02 np0005603609 python3.9[188978]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:22:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:02.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:02 np0005603609 python3.9[189130]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:03.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:03 np0005603609 python3.9[189282]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:04 np0005603609 python3.9[189434]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:22:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:04.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:04 np0005603609 python3.9[189587]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:05.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:05 np0005603609 python3.9[189739]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:06 np0005603609 python3.9[189891]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:22:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:06.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:06 np0005603609 python3.9[190043]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:07.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:22:07.453 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:22:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:22:07.454 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:22:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:22:07.454 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:22:07 np0005603609 python3.9[190195]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:22:07 np0005603609 systemd[1]: Reloading.
Jan 31 02:22:07 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:22:07 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:22:08 np0005603609 systemd[1]: Starting libvirt logging daemon socket...
Jan 31 02:22:08 np0005603609 systemd[1]: Listening on libvirt logging daemon socket.
Jan 31 02:22:08 np0005603609 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 31 02:22:08 np0005603609 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 31 02:22:08 np0005603609 systemd[1]: Starting libvirt logging daemon...
Jan 31 02:22:08 np0005603609 systemd[1]: Started libvirt logging daemon.
Jan 31 02:22:08 np0005603609 podman[190236]: 2026-01-31 07:22:08.335621501 +0000 UTC m=+0.114424891 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 02:22:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:22:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:08.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:22:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:22:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:09.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:09 np0005603609 python3.9[190415]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:22:09 np0005603609 systemd[1]: Reloading.
Jan 31 02:22:09 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:22:09 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:22:09 np0005603609 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 31 02:22:09 np0005603609 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 31 02:22:09 np0005603609 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 31 02:22:09 np0005603609 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 31 02:22:09 np0005603609 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 31 02:22:09 np0005603609 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 31 02:22:09 np0005603609 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 02:22:09 np0005603609 systemd[1]: Started libvirt nodedev daemon.
Jan 31 02:22:10 np0005603609 python3.9[190631]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:22:10 np0005603609 systemd[1]: Reloading.
Jan 31 02:22:10 np0005603609 podman[190633]: 2026-01-31 07:22:10.198645573 +0000 UTC m=+0.080425306 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 02:22:10 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:22:10 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:22:10 np0005603609 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 31 02:22:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:22:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:10.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:22:10 np0005603609 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 31 02:22:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:10 np0005603609 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 31 02:22:10 np0005603609 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 31 02:22:10 np0005603609 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 31 02:22:10 np0005603609 systemd[1]: Starting libvirt proxy daemon...
Jan 31 02:22:10 np0005603609 systemd[1]: Started libvirt proxy daemon.
Jan 31 02:22:10 np0005603609 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 31 02:22:10 np0005603609 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 31 02:22:10 np0005603609 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 31 02:22:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:11.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:11 np0005603609 python3.9[190870]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:22:11 np0005603609 systemd[1]: Reloading.
Jan 31 02:22:11 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:22:11 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:22:11 np0005603609 systemd[1]: Listening on libvirt locking daemon socket.
Jan 31 02:22:11 np0005603609 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 31 02:22:11 np0005603609 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 31 02:22:11 np0005603609 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 31 02:22:11 np0005603609 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 31 02:22:11 np0005603609 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 31 02:22:11 np0005603609 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 31 02:22:11 np0005603609 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 31 02:22:11 np0005603609 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 31 02:22:11 np0005603609 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 31 02:22:11 np0005603609 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 02:22:11 np0005603609 systemd[1]: Started libvirt QEMU daemon.
Jan 31 02:22:11 np0005603609 setroubleshoot[190688]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f50305eb-fd09-4ed8-8e9e-fb833dae2f6b
Jan 31 02:22:11 np0005603609 setroubleshoot[190688]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

                                                     *****  Plugin dac_override (91.4 confidence) suggests   **********************

                                                     If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                     Then turn on full auditing to get path information about the offending file and generate the error again.
                                                     Do

                                                     Turn on full auditing
                                                     # auditctl -w /etc/shadow -p w
                                                     Try to recreate AVC. Then execute
                                                     # ausearch -m avc -ts recent
                                                     If you see PATH record check ownership/permissions on file, and fix it,
                                                     otherwise report as a bugzilla.

                                                     *****  Plugin catchall (9.59 confidence) suggests   **************************

                                                     If you believe that virtlogd should have the dac_read_search capability by default.
                                                     Then you should report this as a bug.
                                                     You can generate a local policy module to allow this access.
                                                     Do
                                                     allow this access for now by executing:
                                                     # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                     # semodule -X 300 -i my-virtlogd.pp
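The setroubleshoot entry above embeds its own remediation recipe. As a sketch of that suggested workflow (the module name `my-virtlogd` is the log's own example; this requires root, a running auditd, and should only be applied after confirming the denial is expected behaviour rather than a misconfiguration):

```shell
# Pull the raw AVC records for virtlogd and compile them into a local
# policy module named my-virtlogd (produces my-virtlogd.pp).
ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# Install the generated module at priority 300, as the plugin suggests.
semodule -X 300 -i my-virtlogd.pp
```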
Jan 31 02:22:12 np0005603609 python3.9[191088]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:22:12 np0005603609 systemd[1]: Reloading.
Jan 31 02:22:12 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:22:12 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:22:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:12.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:12 np0005603609 systemd[1]: Starting libvirt secret daemon socket...
Jan 31 02:22:12 np0005603609 systemd[1]: Listening on libvirt secret daemon socket.
Jan 31 02:22:12 np0005603609 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 31 02:22:12 np0005603609 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 31 02:22:12 np0005603609 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 31 02:22:12 np0005603609 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 31 02:22:12 np0005603609 systemd[1]: Starting libvirt secret daemon...
Jan 31 02:22:12 np0005603609 systemd[1]: Started libvirt secret daemon.
Jan 31 02:22:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:13.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:13 np0005603609 python3.9[191300]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:14.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:14 np0005603609 python3.9[191452]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 02:22:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:22:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:15.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:15 np0005603609 python3.9[191604]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:16.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:16 np0005603609 python3.9[191758]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 02:22:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:22:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:17.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:22:17 np0005603609 python3.9[191908]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:22:18 np0005603609 python3.9[192029]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844137.209397-3384-72766294830720/.source.xml follow=False _original_basename=secret.xml.j2 checksum=e49dd15d2c7191e2dea7492d81017d486826e706 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:22:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:18.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:18 np0005603609 python3.9[192181]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine f70fcd2a-dcb4-5f89-a4ba-79a09959083b#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:19.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:22:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:20.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:20 np0005603609 python3.9[192343]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:21.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:21 np0005603609 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 31 02:22:21 np0005603609 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 31 02:22:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:22.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:22 np0005603609 python3.9[192806]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:23.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:23 np0005603609 python3.9[192958]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:22:24 np0005603609 python3.9[193081]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844143.1102395-3549-78372433095541/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:24.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:25.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:25 np0005603609 python3.9[193233]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:25 np0005603609 python3.9[193387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:22:26 np0005603609 python3.9[193465]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:22:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:26.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:27.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:27 np0005603609 python3.9[193617]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:22:27 np0005603609 auditd[702]: Audit daemon rotating log files
Jan 31 02:22:27 np0005603609 python3.9[193695]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.zmzqk45z recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:28 np0005603609 python3.9[193847]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:22:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:28.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:28 np0005603609 python3.9[193925]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:22:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:29.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:29 np0005603609 python3.9[194077]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:22:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:30.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:30 np0005603609 python3[194230]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 02:22:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:31.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:31 np0005603609 python3.9[194382]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:22:31 np0005603609 python3.9[194460]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:32 np0005603609 python3.9[194612]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:22:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:22:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:32.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:33.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:33 np0005603609 python3.9[194737]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844151.9786255-3816-235592393322716/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:34 np0005603609 python3.9[194889]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:22:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:22:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:34.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:22:34 np0005603609 python3.9[194967]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:35.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:35 np0005603609 python3.9[195119]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:22:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:35 np0005603609 python3.9[195197]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:36.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:36 np0005603609 python3.9[195349]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:22:36 np0005603609 ceph-mgr[82033]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 02:22:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:37.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:37 np0005603609 python3.9[195474]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844156.0259626-3933-272137918351720/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:37 np0005603609 python3.9[195626]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:38.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:38 np0005603609 podman[195750]: 2026-01-31 07:22:38.587599685 +0000 UTC m=+0.089385062 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:22:38 np0005603609 python3.9[195793]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:39.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:39 np0005603609 python3.9[195961]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:40.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:40 np0005603609 python3.9[196113]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:41.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:41 np0005603609 podman[196237]: 2026-01-31 07:22:41.187648209 +0000 UTC m=+0.070150550 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:22:41 np0005603609 python3.9[196283]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:22:42 np0005603609 python3.9[196439]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:42.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:43 np0005603609 python3.9[196594]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:43.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:43 np0005603609 python3.9[196746]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:22:44 np0005603609 python3.9[196869]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844163.2747514-4149-120644419942585/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:44.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:45 np0005603609 python3.9[197021]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:22:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:45.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:45 np0005603609 python3.9[197144]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844164.5830064-4194-31271198649845/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:46 np0005603609 python3.9[197296]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:22:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:22:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:46.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:22:47 np0005603609 python3.9[197419]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844165.9566457-4239-168922225801224/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:22:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:47.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:47 np0005603609 python3.9[197571]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:22:47 np0005603609 systemd[1]: Reloading.
Jan 31 02:22:47 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:22:47 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:22:48 np0005603609 systemd[1]: Reached target edpm_libvirt.target.
Jan 31 02:22:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:22:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:48.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:49 np0005603609 python3.9[197762]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 02:22:49 np0005603609 systemd[1]: Reloading.
Jan 31 02:22:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:49.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:49 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:22:49 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:22:49 np0005603609 systemd[1]: Reloading.
Jan 31 02:22:49 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:22:49 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:22:50 np0005603609 systemd[1]: session-48.scope: Deactivated successfully.
Jan 31 02:22:50 np0005603609 systemd[1]: session-48.scope: Consumed 2min 58.133s CPU time.
Jan 31 02:22:50 np0005603609 systemd-logind[823]: Session 48 logged out. Waiting for processes to exit.
Jan 31 02:22:50 np0005603609 systemd-logind[823]: Removed session 48.
Jan 31 02:22:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:50.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:51.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:52.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:22:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:53.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:54.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:22:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:55.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:56.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:57.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:57 np0005603609 systemd-logind[823]: New session 49 of user zuul.
Jan 31 02:22:57 np0005603609 systemd[1]: Started Session 49 of User zuul.
Jan 31 02:22:58 np0005603609 python3.9[198142]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:22:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:22:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:22:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:22:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:22:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:58.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:22:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:22:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:22:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:59.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:22:59 np0005603609 python3.9[198297]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:22:59 np0005603609 network[198314]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:22:59 np0005603609 network[198315]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:22:59 np0005603609 network[198316]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:23:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:00.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:01.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:02.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:03.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:04 np0005603609 python3.9[198588]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:23:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:04.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:04 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:23:04 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:23:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:05.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:05 np0005603609 python3.9[198722]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:23:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:06.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:07.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:23:07.454 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:23:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:23:07.455 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:23:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:23:07.455 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:23:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:08.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:09.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:09 np0005603609 podman[198724]: 2026-01-31 07:23:09.236732404 +0000 UTC m=+0.119323785 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:23:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:10.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:11.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:11 np0005603609 python3.9[198901]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:23:12 np0005603609 podman[199001]: 2026-01-31 07:23:12.221299521 +0000 UTC m=+0.101471827 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:23:12 np0005603609 python3.9[199072]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:12.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:13.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:13 np0005603609 python3.9[199227]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:23:13 np0005603609 python3.9[199379]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:14.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:14 np0005603609 python3.9[199532]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:23:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:15.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:15 np0005603609 python3.9[199655]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844194.2740834-246-36492346980198/.source.iscsi _original_basename=.p1uurdvz follow=False checksum=bb11207e284d2675ef9e24f1a04e5fb8b809c0e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:16 np0005603609 python3.9[199807]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 02:23:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:16.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 02:23:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:17.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:17 np0005603609 python3.9[199959]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:17 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:23:17 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:23:18 np0005603609 python3.9[200112]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:23:18 np0005603609 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 31 02:23:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:18.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:19.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:19 np0005603609 python3.9[200268]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:23:19 np0005603609 systemd[1]: Reloading.
Jan 31 02:23:19 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:23:19 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:23:19 np0005603609 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 02:23:19 np0005603609 systemd[1]: Starting Open-iSCSI...
Jan 31 02:23:19 np0005603609 kernel: Loading iSCSI transport class v2.0-870.
Jan 31 02:23:19 np0005603609 systemd[1]: Started Open-iSCSI.
Jan 31 02:23:19 np0005603609 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 31 02:23:19 np0005603609 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 31 02:23:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:20.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:20 np0005603609 python3.9[200469]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:23:20 np0005603609 network[200486]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:23:20 np0005603609 network[200487]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:23:20 np0005603609 network[200488]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:23:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:21.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:23:22.146630) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202146736, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1823, "num_deletes": 503, "total_data_size": 3901454, "memory_usage": 3959784, "flush_reason": "Manual Compaction"}
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202161048, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1495129, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14318, "largest_seqno": 16136, "table_properties": {"data_size": 1489630, "index_size": 2254, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16560, "raw_average_key_size": 19, "raw_value_size": 1475728, "raw_average_value_size": 1709, "num_data_blocks": 104, "num_entries": 863, "num_filter_entries": 863, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844052, "oldest_key_time": 1769844052, "file_creation_time": 1769844202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 14493 microseconds, and 6849 cpu microseconds.
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:23:22.161135) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1495129 bytes OK
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:23:22.161165) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:23:22.162550) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:23:22.162568) EVENT_LOG_v1 {"time_micros": 1769844202162561, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:23:22.162595) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 3892234, prev total WAL file size 3892234, number of live WAL files 2.
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:23:22.163492) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323535' seq:72057594037927935, type:22 .. '6D67727374617400353038' seq:0, type:0; will stop at (end)
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1460KB)], [27(10MB)]
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202163577, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 12631361, "oldest_snapshot_seqno": -1}
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4206 keys, 8058943 bytes, temperature: kUnknown
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202229270, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 8058943, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8029482, "index_size": 17831, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 104155, "raw_average_key_size": 24, "raw_value_size": 7952041, "raw_average_value_size": 1890, "num_data_blocks": 750, "num_entries": 4206, "num_filter_entries": 4206, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769844202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:23:22.229796) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8058943 bytes
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:23:22.231845) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.7 rd, 122.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.6 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(13.8) write-amplify(5.4) OK, records in: 5157, records dropped: 951 output_compression: NoCompression
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:23:22.231888) EVENT_LOG_v1 {"time_micros": 1769844202231867, "job": 14, "event": "compaction_finished", "compaction_time_micros": 65894, "compaction_time_cpu_micros": 25526, "output_level": 6, "num_output_files": 1, "total_output_size": 8058943, "num_input_records": 5157, "num_output_records": 4206, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202232774, "job": 14, "event": "table_file_deletion", "file_number": 29}
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202236325, "job": 14, "event": "table_file_deletion", "file_number": 27}
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:23:22.163366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:23:22.236542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:23:22.236553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:23:22.236556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:23:22.236560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:23:22.236563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:23:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.003000072s ======
Jan 31 02:23:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:22.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000072s
Jan 31 02:23:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:23.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:24.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:25.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:25 np0005603609 python3.9[200760]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:23:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:26.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:27.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:27 np0005603609 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:23:27 np0005603609 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:23:27 np0005603609 systemd[1]: Reloading.
Jan 31 02:23:27 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:23:27 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:23:27 np0005603609 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:23:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:28.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:28 np0005603609 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:23:28 np0005603609 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:23:28 np0005603609 systemd[1]: run-rc3e3d1aeb7b2451883c803b23ee39475.service: Deactivated successfully.
Jan 31 02:23:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:29.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:29 np0005603609 python3.9[201078]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 02:23:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:30.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:30 np0005603609 python3.9[201230]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 31 02:23:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:31.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:31 np0005603609 python3.9[201386]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:23:31 np0005603609 python3.9[201509]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844210.9090207-510-139906376664490/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:32.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:32 np0005603609 python3.9[201661]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:33.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:34 np0005603609 python3.9[201813]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:23:34 np0005603609 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 02:23:34 np0005603609 systemd[1]: Stopped Load Kernel Modules.
Jan 31 02:23:34 np0005603609 systemd[1]: Stopping Load Kernel Modules...
Jan 31 02:23:34 np0005603609 systemd[1]: Starting Load Kernel Modules...
Jan 31 02:23:34 np0005603609 systemd[1]: Finished Load Kernel Modules.
Jan 31 02:23:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:34.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:34 np0005603609 python3.9[201969]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.004000095s ======
Jan 31 02:23:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:35.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000095s
Jan 31 02:23:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:35 np0005603609 python3.9[202122]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:23:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:36.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:36 np0005603609 python3.9[202274]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:23:37 np0005603609 python3.9[202397]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844216.0344403-663-43946639238956/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:37.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:37 np0005603609 python3.9[202549]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:38 np0005603609 python3.9[202702]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:38.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:39.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:39 np0005603609 podman[202826]: 2026-01-31 07:23:39.554452214 +0000 UTC m=+0.144407099 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:39 np0005603609 python3.9[202868]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:40 np0005603609 python3.9[203034]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:40.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:41 np0005603609 python3.9[203186]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:41.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:41 np0005603609 python3.9[203338]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:42 np0005603609 python3.9[203490]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:42.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:42 np0005603609 podman[203614]: 2026-01-31 07:23:42.812782111 +0000 UTC m=+0.055049060 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 02:23:43 np0005603609 python3.9[203662]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:43.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:43 np0005603609 python3.9[203814]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:23:44 np0005603609 python3.9[203968]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:44.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:45.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:45 np0005603609 python3.9[204121]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:23:45 np0005603609 systemd[1]: Listening on multipathd control socket.
Jan 31 02:23:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:46 np0005603609 python3.9[204277]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:23:46 np0005603609 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 31 02:23:46 np0005603609 udevadm[204282]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 31 02:23:46 np0005603609 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 31 02:23:46 np0005603609 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 02:23:46 np0005603609 multipathd[204285]: --------start up--------
Jan 31 02:23:46 np0005603609 multipathd[204285]: read /etc/multipath.conf
Jan 31 02:23:46 np0005603609 multipathd[204285]: path checkers start up
Jan 31 02:23:46 np0005603609 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 02:23:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:46.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:47.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:47 np0005603609 python3.9[204444]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 02:23:48 np0005603609 python3.9[204596]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 31 02:23:48 np0005603609 kernel: Key type psk registered
Jan 31 02:23:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:48.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:49.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:49 np0005603609 python3.9[204759]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:23:49 np0005603609 python3.9[204882]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844228.6988904-1053-4649838003482/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:50.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:50 np0005603609 python3.9[205034]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:51.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:51 np0005603609 python3.9[205186]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:23:51 np0005603609 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 02:23:51 np0005603609 systemd[1]: Stopped Load Kernel Modules.
Jan 31 02:23:51 np0005603609 systemd[1]: Stopping Load Kernel Modules...
Jan 31 02:23:51 np0005603609 systemd[1]: Starting Load Kernel Modules...
Jan 31 02:23:51 np0005603609 systemd[1]: Finished Load Kernel Modules.
Jan 31 02:23:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:52.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:52 np0005603609 python3.9[205342]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:23:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:53.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:54 np0005603609 systemd[1]: Reloading.
Jan 31 02:23:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:54.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:54 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:23:54 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:23:54 np0005603609 systemd[1]: Reloading.
Jan 31 02:23:54 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:23:54 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:23:55 np0005603609 systemd-logind[823]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 02:23:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:55.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:55 np0005603609 systemd-logind[823]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 02:23:55 np0005603609 lvm[205458]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 02:23:55 np0005603609 lvm[205458]: VG ceph_vg0 finished
Jan 31 02:23:55 np0005603609 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:23:55 np0005603609 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:23:55 np0005603609 systemd[1]: Reloading.
Jan 31 02:23:55 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:23:55 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:23:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:55 np0005603609 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:23:56 np0005603609 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:23:56 np0005603609 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:23:56 np0005603609 systemd[1]: run-r467ced4abbc94d9b83eeeead3c45b461.service: Deactivated successfully.
Jan 31 02:23:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:56.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:57.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:58.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:59 np0005603609 python3.9[206810]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:23:59 np0005603609 systemd[1]: Stopping Open-iSCSI...
Jan 31 02:23:59 np0005603609 iscsid[200309]: iscsid shutting down.
Jan 31 02:23:59 np0005603609 systemd[1]: iscsid.service: Deactivated successfully.
Jan 31 02:23:59 np0005603609 systemd[1]: Stopped Open-iSCSI.
Jan 31 02:23:59 np0005603609 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 02:23:59 np0005603609 systemd[1]: Starting Open-iSCSI...
Jan 31 02:23:59 np0005603609 systemd[1]: Started Open-iSCSI.
Jan 31 02:23:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:23:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:59.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:59 np0005603609 python3.9[206967]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:23:59 np0005603609 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 31 02:23:59 np0005603609 multipathd[204285]: exit (signal)
Jan 31 02:23:59 np0005603609 multipathd[204285]: --------shut down-------
Jan 31 02:24:00 np0005603609 systemd[1]: multipathd.service: Deactivated successfully.
Jan 31 02:24:00 np0005603609 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 31 02:24:00 np0005603609 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 02:24:00 np0005603609 multipathd[206973]: --------start up--------
Jan 31 02:24:00 np0005603609 multipathd[206973]: read /etc/multipath.conf
Jan 31 02:24:00 np0005603609 multipathd[206973]: path checkers start up
Jan 31 02:24:00 np0005603609 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 02:24:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:00.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:00 np0005603609 python3.9[207130]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:24:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:01.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:01 np0005603609 python3.9[207286]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:24:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:02.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:24:03 np0005603609 python3.9[207438]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:24:03 np0005603609 systemd[1]: Reloading.
Jan 31 02:24:03 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:24:03 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:24:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:03.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:03 np0005603609 python3.9[207623]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:24:04 np0005603609 network[207640]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:24:04 np0005603609 network[207641]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:24:04 np0005603609 network[207642]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:24:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:24:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:04.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:24:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:05.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:06.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:07.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:24:07.456 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:24:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:24:07.457 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:24:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:24:07.457 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:24:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:24:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:24:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:24:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:24:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:24:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:08.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:08 np0005603609 python3.9[208047]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:24:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:09.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:09 np0005603609 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 31 02:24:09 np0005603609 python3.9[208201]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:24:09 np0005603609 podman[208203]: 2026-01-31 07:24:09.935919767 +0000 UTC m=+0.102650136 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:24:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:10 np0005603609 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 02:24:10 np0005603609 python3.9[208380]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:24:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:10.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:11.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:11 np0005603609 python3.9[208534]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:24:11 np0005603609 python3.9[208687]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:24:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:12.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:12 np0005603609 python3.9[208840]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:24:13 np0005603609 podman[208941]: 2026-01-31 07:24:13.19860416 +0000 UTC m=+0.074307812 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 02:24:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:13.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:13 np0005603609 python3.9[209013]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:24:14 np0005603609 python3.9[209195]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:24:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:14.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:24:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:24:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:15.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:15 np0005603609 python3.9[209369]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:16 np0005603609 python3.9[209521]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:16.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:16 np0005603609 python3.9[209673]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:17.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:17 np0005603609 python3.9[209825]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:18 np0005603609 python3.9[209977]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:18.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:18 np0005603609 python3.9[210129]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:24:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:19.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:24:19 np0005603609 python3.9[210281]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:20 np0005603609 python3.9[210433]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:20.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:20 np0005603609 python3.9[210585]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:21.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:21 np0005603609 python3.9[210737]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:22 np0005603609 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 31 02:24:22 np0005603609 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 31 02:24:22 np0005603609 python3.9[210889]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:22.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:22 np0005603609 python3.9[211043]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:23.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:23 np0005603609 python3.9[211195]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:23 np0005603609 python3.9[211347]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:24 np0005603609 python3.9[211499]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:24.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:24 np0005603609 python3.9[211651]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:25.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:25 np0005603609 python3.9[211803]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:24:26 np0005603609 python3.9[211955]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 02:24:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:26.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:27.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:27 np0005603609 python3.9[212107]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:24:27 np0005603609 systemd[1]: Reloading.
Jan 31 02:24:27 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:24:27 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:24:28 np0005603609 python3.9[212294]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:24:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:24:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:28.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:24:29 np0005603609 python3.9[212447]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:24:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:29.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:29 np0005603609 python3.9[212600]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:24:30 np0005603609 python3.9[212753]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:24:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:30.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:30 np0005603609 python3.9[212906]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:24:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:31.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:31 np0005603609 python3.9[213059]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:24:32 np0005603609 python3.9[213212]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:24:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:32.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:32 np0005603609 python3.9[213365]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:24:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:33.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:34.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:35.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:35 np0005603609 python3.9[213518]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:24:36 np0005603609 python3.9[213670]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:24:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:36.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:37 np0005603609 python3.9[213822]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:24:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:37.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:37 np0005603609 python3.9[213974]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:24:38 np0005603609 python3.9[214126]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:24:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:38.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:38 np0005603609 python3.9[214278]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:24:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:39.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:39 np0005603609 python3.9[214430]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:24:40 np0005603609 python3.9[214582]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:24:40 np0005603609 podman[214583]: 2026-01-31 07:24:40.1995543 +0000 UTC m=+0.088625633 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:24:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:40.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:40 np0005603609 python3.9[214760]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:24:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:41.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:41 np0005603609 python3.9[214912]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:24:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:24:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:42.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:24:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:24:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:43.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:24:44 np0005603609 podman[214937]: 2026-01-31 07:24:44.157769842 +0000 UTC m=+0.047917211 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:24:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:44.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:45.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:46.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:47.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:47 np0005603609 python3.9[215085]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 31 02:24:48 np0005603609 python3.9[215238]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 02:24:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:48.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:49.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:49 np0005603609 python3.9[215396]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 02:24:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:50.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:51 np0005603609 systemd-logind[823]: New session 50 of user zuul.
Jan 31 02:24:51 np0005603609 systemd[1]: Started Session 50 of User zuul.
Jan 31 02:24:51 np0005603609 systemd[1]: session-50.scope: Deactivated successfully.
Jan 31 02:24:51 np0005603609 systemd-logind[823]: Session 50 logged out. Waiting for processes to exit.
Jan 31 02:24:51 np0005603609 systemd-logind[823]: Removed session 50.
Jan 31 02:24:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:51.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:51 np0005603609 python3.9[215582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:24:52 np0005603609 python3.9[215703]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844291.4247322-2660-105650805476151/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:24:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:52.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:53 np0005603609 python3.9[215853]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:24:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:53.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:53 np0005603609 python3.9[215929]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:24:54 np0005603609 python3.9[216079]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:24:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:54.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:54 np0005603609 python3.9[216200]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844293.6923003-2660-272899539226255/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:24:55 np0005603609 python3.9[216350]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:24:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:55.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:55 np0005603609 python3.9[216471]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844294.8015516-2660-52866362422997/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:24:56 np0005603609 python3.9[216621]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:24:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:24:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:56.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:24:56 np0005603609 python3.9[216742]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844295.9096904-2660-133513982850347/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:24:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:57.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:57 np0005603609 python3.9[216892]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:24:57 np0005603609 python3.9[217013]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844296.9625428-2660-110887471652756/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:24:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:58.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:58 np0005603609 python3.9[217165]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:24:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:24:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:59.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:59 np0005603609 python3.9[217317]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:25:00 np0005603609 python3.9[217469]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:25:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:00.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:00 np0005603609 python3.9[217621]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:25:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:01.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:01 np0005603609 python3.9[217744]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769844300.4061875-2981-256708179739935/.source _original_basename=.gccybuan follow=False checksum=809e51e67a6ec5fbfe90584cc78d2213ea8537b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 31 02:25:02 np0005603609 python3.9[217896]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:25:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:02.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:03 np0005603609 python3.9[218048]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:25:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:03.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:03 np0005603609 python3.9[218169]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844302.579451-3059-167791237141822/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:25:04 np0005603609 python3.9[218319]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:25:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:04.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:04 np0005603609 python3.9[218440]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844303.760804-3104-34001849204148/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:25:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:05.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:05 np0005603609 python3.9[218592]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 31 02:25:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:25:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:06.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:25:06 np0005603609 python3.9[218744]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 02:25:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:07.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:25:07.457 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:25:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:25:07.457 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:25:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:25:07.458 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:25:08 np0005603609 python3[218896]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 02:25:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:08.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:09.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:10.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:11.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:12.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:13 np0005603609 podman[218948]: 2026-01-31 07:25:13.160863277 +0000 UTC m=+2.833469773 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 02:25:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:13.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:14.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:15.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:16.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:17.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:18.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:19.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:20 np0005603609 podman[219016]: 2026-01-31 07:25:20.6459064 +0000 UTC m=+6.439866557 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 02:25:20 np0005603609 podman[218909]: 2026-01-31 07:25:20.673120484 +0000 UTC m=+12.354599054 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 02:25:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:20.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:20 np0005603609 podman[219151]: 2026-01-31 07:25:20.865358374 +0000 UTC m=+0.095326607 container create a922afc41bf64cbe0a8e8ce5173e3945f35ae5d6017d5b8eca452bb424aaf0d5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=nova_compute_init, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:25:20 np0005603609 podman[219151]: 2026-01-31 07:25:20.796037632 +0000 UTC m=+0.026005865 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 02:25:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:20 np0005603609 python3[218896]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 31 02:25:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:21.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:22.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:23.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:25:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:25:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:25:23 np0005603609 python3.9[219352]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:25:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:24.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:24 np0005603609 python3.9[219506]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 31 02:25:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:25.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:25 np0005603609 python3.9[219658]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 02:25:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:26.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:26 np0005603609 python3[219810]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 02:25:26 np0005603609 podman[219847]: 2026-01-31 07:25:26.989263253 +0000 UTC m=+0.050779060 container create 167e1dc250e13fbef1bcc8355537c0e4a72fe96db56d714d58131fa42438edb1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 31 02:25:26 np0005603609 podman[219847]: 2026-01-31 07:25:26.961356702 +0000 UTC m=+0.022872609 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 02:25:26 np0005603609 python3[219810]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 31 02:25:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:27.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:27 np0005603609 python3.9[220037]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:25:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:28.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:28 np0005603609 python3.9[220191]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:25:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:29.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:29 np0005603609 python3.9[220342]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769844328.813217-3392-205244621246301/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:25:30 np0005603609 python3.9[220468]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:25:30 np0005603609 systemd[1]: Reloading.
Jan 31 02:25:30 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:25:30 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:25:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:25:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:25:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:30.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:31 np0005603609 python3.9[220579]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:25:31 np0005603609 systemd[1]: Reloading.
Jan 31 02:25:31 np0005603609 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:25:31 np0005603609 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:25:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:31.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:31 np0005603609 systemd[1]: Starting nova_compute container...
Jan 31 02:25:31 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:25:31 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f30645ec2fb91feb8f93ba63a3539a4ab5b70a4c5533f2858f64b326990fa2/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:31 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f30645ec2fb91feb8f93ba63a3539a4ab5b70a4c5533f2858f64b326990fa2/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:31 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f30645ec2fb91feb8f93ba63a3539a4ab5b70a4c5533f2858f64b326990fa2/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:31 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f30645ec2fb91feb8f93ba63a3539a4ab5b70a4c5533f2858f64b326990fa2/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:31 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f30645ec2fb91feb8f93ba63a3539a4ab5b70a4c5533f2858f64b326990fa2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:31 np0005603609 podman[220619]: 2026-01-31 07:25:31.498332059 +0000 UTC m=+0.116100143 container init 167e1dc250e13fbef1bcc8355537c0e4a72fe96db56d714d58131fa42438edb1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:25:31 np0005603609 podman[220619]: 2026-01-31 07:25:31.506579071 +0000 UTC m=+0.124347155 container start 167e1dc250e13fbef1bcc8355537c0e4a72fe96db56d714d58131fa42438edb1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Jan 31 02:25:31 np0005603609 podman[220619]: nova_compute
Jan 31 02:25:31 np0005603609 nova_compute[220635]: + sudo -E kolla_set_configs
Jan 31 02:25:31 np0005603609 systemd[1]: Started nova_compute container.
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Validating config file
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Copying service configuration files
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Deleting /etc/ceph
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Creating directory /etc/ceph
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Writing out command to execute
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:25:31 np0005603609 nova_compute[220635]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 02:25:31 np0005603609 nova_compute[220635]: ++ cat /run_command
Jan 31 02:25:31 np0005603609 nova_compute[220635]: + CMD=nova-compute
Jan 31 02:25:31 np0005603609 nova_compute[220635]: + ARGS=
Jan 31 02:25:31 np0005603609 nova_compute[220635]: + sudo kolla_copy_cacerts
Jan 31 02:25:31 np0005603609 nova_compute[220635]: + [[ ! -n '' ]]
Jan 31 02:25:31 np0005603609 nova_compute[220635]: + . kolla_extend_start
Jan 31 02:25:31 np0005603609 nova_compute[220635]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 02:25:31 np0005603609 nova_compute[220635]: Running command: 'nova-compute'
Jan 31 02:25:31 np0005603609 nova_compute[220635]: + umask 0022
Jan 31 02:25:31 np0005603609 nova_compute[220635]: + exec nova-compute
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.516782) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332516817, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1405, "num_deletes": 256, "total_data_size": 3354302, "memory_usage": 3404016, "flush_reason": "Manual Compaction"}
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332541601, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 2206040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16141, "largest_seqno": 17541, "table_properties": {"data_size": 2200047, "index_size": 3320, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 11881, "raw_average_key_size": 18, "raw_value_size": 2188076, "raw_average_value_size": 3445, "num_data_blocks": 150, "num_entries": 635, "num_filter_entries": 635, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844202, "oldest_key_time": 1769844202, "file_creation_time": 1769844332, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 24867 microseconds, and 4574 cpu microseconds.
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.541649) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 2206040 bytes OK
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.541670) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.542916) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.542934) EVENT_LOG_v1 {"time_micros": 1769844332542929, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.542979) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 3347715, prev total WAL file size 3355939, number of live WAL files 2.
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.543696) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(2154KB)], [30(7870KB)]
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332543750, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 10264983, "oldest_snapshot_seqno": -1}
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4314 keys, 9882816 bytes, temperature: kUnknown
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332625421, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 9882816, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9851144, "index_size": 19786, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 107489, "raw_average_key_size": 24, "raw_value_size": 9770223, "raw_average_value_size": 2264, "num_data_blocks": 825, "num_entries": 4314, "num_filter_entries": 4314, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769844332, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.625603) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 9882816 bytes
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.646084) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.6 rd, 120.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.7 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(9.1) write-amplify(4.5) OK, records in: 4841, records dropped: 527 output_compression: NoCompression
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.646119) EVENT_LOG_v1 {"time_micros": 1769844332646106, "job": 16, "event": "compaction_finished", "compaction_time_micros": 81727, "compaction_time_cpu_micros": 27491, "output_level": 6, "num_output_files": 1, "total_output_size": 9882816, "num_input_records": 4841, "num_output_records": 4314, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332646391, "job": 16, "event": "table_file_deletion", "file_number": 32}
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332647103, "job": 16, "event": "table_file_deletion", "file_number": 30}
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.543528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.647169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.647174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.647175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.647177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.647178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.647481) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332647521, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 257, "num_deletes": 251, "total_data_size": 21496, "memory_usage": 27776, "flush_reason": "Manual Compaction"}
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332654287, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 13304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17543, "largest_seqno": 17798, "table_properties": {"data_size": 11551, "index_size": 50, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 4640, "raw_average_key_size": 18, "raw_value_size": 8177, "raw_average_value_size": 31, "num_data_blocks": 2, "num_entries": 256, "num_filter_entries": 256, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844332, "oldest_key_time": 1769844332, "file_creation_time": 1769844332, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 6866 microseconds, and 908 cpu microseconds.
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.654345) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 13304 bytes OK
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.654370) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.656121) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.656144) EVENT_LOG_v1 {"time_micros": 1769844332656135, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.656166) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 19470, prev total WAL file size 19470, number of live WAL files 2.
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.656622) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(12KB)], [33(9651KB)]
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332656654, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 9896120, "oldest_snapshot_seqno": -1}
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 4064 keys, 7831909 bytes, temperature: kUnknown
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332695128, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 7831909, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7803689, "index_size": 16974, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 102989, "raw_average_key_size": 25, "raw_value_size": 7728845, "raw_average_value_size": 1901, "num_data_blocks": 698, "num_entries": 4064, "num_filter_entries": 4064, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769844332, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.695396) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7831909 bytes
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.696835) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 256.6 rd, 203.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 9.4 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(1332.5) write-amplify(588.7) OK, records in: 4570, records dropped: 506 output_compression: NoCompression
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.696864) EVENT_LOG_v1 {"time_micros": 1769844332696851, "job": 18, "event": "compaction_finished", "compaction_time_micros": 38566, "compaction_time_cpu_micros": 13044, "output_level": 6, "num_output_files": 1, "total_output_size": 7831909, "num_input_records": 4570, "num_output_records": 4064, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332697008, "job": 18, "event": "table_file_deletion", "file_number": 35}
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332698222, "job": 18, "event": "table_file_deletion", "file_number": 33}
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.656536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.698292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.698298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.698300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.698301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:25:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:25:32.698303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:25:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:32.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:32 np0005603609 python3.9[220797]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:25:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:33.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:33 np0005603609 python3.9[220947]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:25:34 np0005603609 nova_compute[220635]: 2026-01-31 07:25:34.250 220639 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:25:34 np0005603609 nova_compute[220635]: 2026-01-31 07:25:34.251 220639 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:25:34 np0005603609 nova_compute[220635]: 2026-01-31 07:25:34.251 220639 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:25:34 np0005603609 nova_compute[220635]: 2026-01-31 07:25:34.251 220639 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 31 02:25:34 np0005603609 python3.9[221099]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:25:34 np0005603609 nova_compute[220635]: 2026-01-31 07:25:34.513 220639 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:25:34 np0005603609 nova_compute[220635]: 2026-01-31 07:25:34.534 220639 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:25:34 np0005603609 nova_compute[220635]: 2026-01-31 07:25:34.535 220639 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 02:25:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:34.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.180 220639 INFO nova.virt.driver [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 31 02:25:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:35.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.358 220639 INFO nova.compute.provider_config [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.409 220639 DEBUG oslo_concurrency.lockutils [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.409 220639 DEBUG oslo_concurrency.lockutils [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.410 220639 DEBUG oslo_concurrency.lockutils [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.410 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.410 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.410 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.410 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.411 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.411 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.411 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.411 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.411 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.412 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.412 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.412 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.412 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.412 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.413 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.413 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.413 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.413 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.413 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.414 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.414 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.414 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.414 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.414 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.415 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.415 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.415 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.416 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.416 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.416 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.416 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.417 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.417 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.417 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.417 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.417 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.418 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.418 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.418 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.418 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.419 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.419 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.419 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.419 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.419 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.420 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.420 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.420 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.420 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.420 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.421 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.421 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.421 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.421 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.421 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.422 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.422 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.422 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.422 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.422 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.423 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.423 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.423 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.423 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.423 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.423 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.424 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.424 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.424 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.424 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.424 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.425 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.425 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.425 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.425 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.425 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.426 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.426 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.426 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.426 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.426 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.427 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.427 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.427 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.427 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.427 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.428 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.428 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.428 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.428 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.428 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.429 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.429 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.429 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.429 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.429 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.430 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.430 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.430 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.430 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.430 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.431 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.431 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.431 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.431 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.431 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.432 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.432 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.432 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.432 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.432 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.433 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.433 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.433 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.433 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.433 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.433 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.434 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.434 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.434 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.434 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.434 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.435 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.435 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.435 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.435 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.435 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.436 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.436 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.436 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.436 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.436 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.436 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.437 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.437 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.437 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.437 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.437 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.438 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.438 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.438 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.438 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.438 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.439 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.439 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.439 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.439 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.439 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.440 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.440 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.440 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.440 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.440 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.441 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.441 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.441 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.441 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.441 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.442 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.442 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.442 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.442 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.442 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.442 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.443 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.443 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.443 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.443 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.443 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.444 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.444 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.444 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.444 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.444 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.445 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.445 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.445 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.445 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.445 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.446 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.446 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.446 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.446 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.446 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.447 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.447 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.447 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.447 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.447 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.448 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.448 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.448 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.448 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.448 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.448 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.449 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.449 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.449 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.449 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.449 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.450 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.450 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.450 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.450 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.450 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.451 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.451 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.451 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.451 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.451 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.452 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.452 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.452 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.452 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.452 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.452 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.453 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.453 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.453 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.453 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.453 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.454 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.454 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.454 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.454 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.454 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.455 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.455 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.455 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.455 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.455 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.456 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.456 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.456 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.456 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.456 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.457 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.457 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.457 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.457 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.457 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.458 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.458 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.458 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.458 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.458 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.458 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.459 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.459 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.459 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.459 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.459 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.460 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.460 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.460 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.460 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.460 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.461 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.461 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.461 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.461 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.461 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.462 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.462 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.462 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.462 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.462 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.463 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.463 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.463 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.463 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.463 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.463 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.464 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.464 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.464 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.464 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.464 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.465 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.465 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.465 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.465 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.465 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.466 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.466 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.466 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.466 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.466 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.467 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.467 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.467 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.467 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.467 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.468 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.468 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.468 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.468 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.468 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.469 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.469 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.469 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.469 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.469 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.469 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.470 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.470 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.470 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.470 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.470 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.471 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.471 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.471 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.471 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.471 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.472 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.472 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.472 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.472 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.472 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.473 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.473 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.473 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.473 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.473 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.473 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.474 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.474 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.474 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.474 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.474 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.475 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.475 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.475 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.475 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.475 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.476 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.476 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.476 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.476 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.476 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.477 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.477 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.477 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.477 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.477 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.478 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.478 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.478 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.478 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.478 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.478 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.479 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.479 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.479 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.479 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.480 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.480 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.480 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.480 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.480 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.481 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.481 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.481 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.481 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.481 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.482 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.482 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.482 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.482 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.482 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.483 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.483 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.483 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.483 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.483 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.483 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.484 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.484 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.484 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.484 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.484 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.485 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.485 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.485 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.485 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.485 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.486 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.486 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.486 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.486 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.486 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.487 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.487 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.487 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.487 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.487 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.488 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.488 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.488 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.488 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.488 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.488 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.489 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.489 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.489 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.489 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.489 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.490 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.490 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.490 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.490 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.490 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.491 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.491 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.491 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.491 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.491 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.492 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.492 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.492 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.492 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.492 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.493 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.493 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.493 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.493 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.493 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.494 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.494 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.494 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.494 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.494 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.494 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.495 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.495 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.495 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.495 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.495 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.496 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.496 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.496 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.496 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.496 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.497 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.497 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.497 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.497 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.497 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.497 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.498 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.498 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.498 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.498 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.498 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.499 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.499 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.499 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.499 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.500 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.500 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.500 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.500 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.500 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.500 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.501 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.501 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.501 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.501 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.501 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.502 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.502 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.502 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.502 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.502 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.503 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.503 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.503 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.503 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.503 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.504 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.504 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.504 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.504 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.504 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.505 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.505 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.505 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.505 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.505 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.506 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.506 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.506 220639 WARNING oslo_config.cfg [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 02:25:35 np0005603609 nova_compute[220635]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 02:25:35 np0005603609 nova_compute[220635]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 31 02:25:35 np0005603609 nova_compute[220635]: and ``live_migration_inbound_addr`` respectively.
Jan 31 02:25:35 np0005603609 nova_compute[220635]: ).  Its value may be silently ignored in the future.#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.506 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.506 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.507 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.507 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.507 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.507 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.507 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.508 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.508 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.508 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.508 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.508 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.509 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.509 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.509 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.509 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.509 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.510 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.510 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.rbd_secret_uuid        = f70fcd2a-dcb4-5f89-a4ba-79a09959083b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.510 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.510 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.510 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.511 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.511 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.511 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.511 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.511 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.512 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.512 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.512 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.512 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.512 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.513 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.513 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.513 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.513 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.513 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.514 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.514 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.514 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.514 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.514 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.515 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.515 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.515 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.515 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.515 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.516 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.516 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.516 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.516 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.516 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.517 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.517 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.517 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.517 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.517 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.517 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.518 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.518 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.518 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.519 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.519 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.519 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.519 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.519 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.519 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.519 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.520 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.520 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.520 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.520 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.520 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.520 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.520 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.521 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.521 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.521 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.521 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.521 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.521 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.521 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.522 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.522 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.522 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.522 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.522 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.522 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.522 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.523 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.523 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.523 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.523 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.523 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.523 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.523 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.523 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.524 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.524 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.524 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.524 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.524 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.524 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.524 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.525 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.525 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.525 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.525 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.525 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.525 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.525 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.526 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.526 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.526 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.526 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.526 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.526 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.526 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.527 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.527 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.527 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.527 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.527 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.527 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.527 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.528 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.528 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.528 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.528 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.528 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.528 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.528 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.528 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.529 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.529 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.529 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.529 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.529 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.529 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.530 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.530 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.530 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.530 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.530 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.530 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.530 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.531 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.531 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.531 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.531 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.532 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.532 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.532 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.532 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.532 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.532 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.532 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.533 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.533 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.533 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.533 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.533 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.533 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.533 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.534 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.534 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.534 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.534 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.534 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.534 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.534 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.534 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.535 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.535 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.535 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.535 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.535 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.535 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.535 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.536 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.536 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.536 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.536 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.536 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 python3.9[221253]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.536 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.537 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.537 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.537 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.537 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.537 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.537 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.537 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.538 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.538 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.538 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.538 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.538 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.538 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.539 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.539 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.539 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.539 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.539 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.539 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.539 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.539 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.540 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.540 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.540 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.540 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.540 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.540 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.540 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.541 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.541 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.541 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.541 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.541 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.541 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.541 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.541 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.542 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.542 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.542 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.542 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.542 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.542 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.542 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.543 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.543 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.543 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.543 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.543 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.543 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.543 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.544 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.544 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.544 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.545 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.545 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.545 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.545 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.545 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.545 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.545 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.546 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.546 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.546 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.546 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.546 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.546 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.547 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.547 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.547 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.547 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.547 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.547 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.548 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.548 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.548 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.548 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.548 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.548 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.548 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.549 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.549 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.549 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.549 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.549 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.549 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.549 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.550 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.550 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.550 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.550 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.550 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.550 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.550 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.551 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.551 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.551 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.551 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.551 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.551 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.552 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.552 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.552 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.552 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.552 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.552 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.552 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.553 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.553 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.553 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.553 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.553 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.553 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.553 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.554 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.554 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.554 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.554 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.554 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.554 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.554 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.555 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.555 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.555 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.555 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.555 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.555 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.555 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.556 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.556 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.556 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.556 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.556 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.556 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.556 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.557 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.557 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.557 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.557 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.557 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.557 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.557 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.558 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.558 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.558 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.558 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.558 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.558 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.558 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.559 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.559 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.559 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.559 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.559 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.559 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.559 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.560 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.560 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.560 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.560 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.560 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.560 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.560 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.561 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.561 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.561 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.561 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.561 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.561 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.561 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.562 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.562 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.562 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.562 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.562 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.562 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.562 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.562 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.563 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.563 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.563 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.563 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.563 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.563 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.563 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.564 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.564 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.564 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.564 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.564 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.564 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.564 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.565 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.565 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.565 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.565 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.565 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.565 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.565 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.566 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.566 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.566 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.566 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.566 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.566 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.566 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.566 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.567 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.567 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.567 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.567 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.567 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.567 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.568 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.568 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.568 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.568 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.568 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.569 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.569 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.569 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.569 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.569 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.570 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.570 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.570 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.570 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.570 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.571 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.571 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.571 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.571 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.571 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.571 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.572 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.572 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.572 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.572 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.572 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.572 220639 DEBUG oslo_service.service [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.574 220639 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Jan 31 02:25:35 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.589 220639 DEBUG nova.virt.libvirt.host [None req-f64d55ab-4961-4a7f-96cb-126560476f2a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.591 220639 DEBUG nova.virt.libvirt.host [None req-f64d55ab-4961-4a7f-96cb-126560476f2a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.591 220639 DEBUG nova.virt.libvirt.host [None req-f64d55ab-4961-4a7f-96cb-126560476f2a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.592 220639 DEBUG nova.virt.libvirt.host [None req-f64d55ab-4961-4a7f-96cb-126560476f2a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 31 02:25:35 np0005603609 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 02:25:35 np0005603609 systemd[1]: Started libvirt QEMU daemon.
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.673 220639 DEBUG nova.virt.libvirt.host [None req-f64d55ab-4961-4a7f-96cb-126560476f2a - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4109933e50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.676 220639 DEBUG nova.virt.libvirt.host [None req-f64d55ab-4961-4a7f-96cb-126560476f2a - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4109933e50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.677 220639 INFO nova.virt.libvirt.driver [None req-f64d55ab-4961-4a7f-96cb-126560476f2a - - - - - -] Connection event '1' reason 'None'
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.751 220639 WARNING nova.virt.libvirt.driver [None req-f64d55ab-4961-4a7f-96cb-126560476f2a - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Jan 31 02:25:35 np0005603609 nova_compute[220635]: 2026-01-31 07:25:35.752 220639 DEBUG nova.virt.libvirt.volume.mount [None req-f64d55ab-4961-4a7f-96cb-126560476f2a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 31 02:25:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:36 np0005603609 python3.9[221478]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:25:36 np0005603609 systemd[1]: Stopping nova_compute container...
Jan 31 02:25:36 np0005603609 nova_compute[220635]: 2026-01-31 07:25:36.478 220639 DEBUG oslo_concurrency.lockutils [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:25:36 np0005603609 nova_compute[220635]: 2026-01-31 07:25:36.479 220639 DEBUG oslo_concurrency.lockutils [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:25:36 np0005603609 nova_compute[220635]: 2026-01-31 07:25:36.479 220639 DEBUG oslo_concurrency.lockutils [None req-1a7a75ad-8618-483d-880a-8ef49e75f2ae - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:25:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:36.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:36 np0005603609 virtqemud[221292]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 31 02:25:36 np0005603609 virtqemud[221292]: hostname: compute-1
Jan 31 02:25:36 np0005603609 virtqemud[221292]: End of file while reading data: Input/output error
Jan 31 02:25:36 np0005603609 systemd[1]: libpod-167e1dc250e13fbef1bcc8355537c0e4a72fe96db56d714d58131fa42438edb1.scope: Deactivated successfully.
Jan 31 02:25:36 np0005603609 systemd[1]: libpod-167e1dc250e13fbef1bcc8355537c0e4a72fe96db56d714d58131fa42438edb1.scope: Consumed 2.989s CPU time.
Jan 31 02:25:36 np0005603609 conmon[220635]: conmon 167e1dc250e13fbef1bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-167e1dc250e13fbef1bcc8355537c0e4a72fe96db56d714d58131fa42438edb1.scope/container/memory.events
Jan 31 02:25:36 np0005603609 podman[221490]: 2026-01-31 07:25:36.99234505 +0000 UTC m=+0.548318797 container stop 167e1dc250e13fbef1bcc8355537c0e4a72fe96db56d714d58131fa42438edb1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:25:37 np0005603609 podman[221490]: 2026-01-31 07:25:37.022169457 +0000 UTC m=+0.578143264 container died 167e1dc250e13fbef1bcc8355537c0e4a72fe96db56d714d58131fa42438edb1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Jan 31 02:25:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:37.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:37 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-167e1dc250e13fbef1bcc8355537c0e4a72fe96db56d714d58131fa42438edb1-userdata-shm.mount: Deactivated successfully.
Jan 31 02:25:37 np0005603609 systemd[1]: var-lib-containers-storage-overlay-16f30645ec2fb91feb8f93ba63a3539a4ab5b70a4c5533f2858f64b326990fa2-merged.mount: Deactivated successfully.
Jan 31 02:25:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:38.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:39 np0005603609 podman[221490]: 2026-01-31 07:25:39.374528552 +0000 UTC m=+2.930502309 container cleanup 167e1dc250e13fbef1bcc8355537c0e4a72fe96db56d714d58131fa42438edb1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 02:25:39 np0005603609 podman[221490]: nova_compute
Jan 31 02:25:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:39.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:39 np0005603609 podman[221521]: nova_compute
Jan 31 02:25:39 np0005603609 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 31 02:25:39 np0005603609 systemd[1]: Stopped nova_compute container.
Jan 31 02:25:39 np0005603609 systemd[1]: Starting nova_compute container...
Jan 31 02:25:39 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:25:39 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f30645ec2fb91feb8f93ba63a3539a4ab5b70a4c5533f2858f64b326990fa2/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:39 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f30645ec2fb91feb8f93ba63a3539a4ab5b70a4c5533f2858f64b326990fa2/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:39 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f30645ec2fb91feb8f93ba63a3539a4ab5b70a4c5533f2858f64b326990fa2/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:39 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f30645ec2fb91feb8f93ba63a3539a4ab5b70a4c5533f2858f64b326990fa2/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:39 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f30645ec2fb91feb8f93ba63a3539a4ab5b70a4c5533f2858f64b326990fa2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:39 np0005603609 podman[221534]: 2026-01-31 07:25:39.609808492 +0000 UTC m=+0.122025428 container init 167e1dc250e13fbef1bcc8355537c0e4a72fe96db56d714d58131fa42438edb1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 02:25:39 np0005603609 podman[221534]: 2026-01-31 07:25:39.619816286 +0000 UTC m=+0.132033212 container start 167e1dc250e13fbef1bcc8355537c0e4a72fe96db56d714d58131fa42438edb1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 02:25:39 np0005603609 podman[221534]: nova_compute
Jan 31 02:25:39 np0005603609 nova_compute[221550]: + sudo -E kolla_set_configs
Jan 31 02:25:39 np0005603609 systemd[1]: Started nova_compute container.
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Validating config file
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Copying service configuration files
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Deleting /etc/ceph
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Creating directory /etc/ceph
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Writing out command to execute
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:25:39 np0005603609 nova_compute[221550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 02:25:39 np0005603609 nova_compute[221550]: ++ cat /run_command
Jan 31 02:25:39 np0005603609 nova_compute[221550]: + CMD=nova-compute
Jan 31 02:25:39 np0005603609 nova_compute[221550]: + ARGS=
Jan 31 02:25:39 np0005603609 nova_compute[221550]: + sudo kolla_copy_cacerts
Jan 31 02:25:39 np0005603609 nova_compute[221550]: + [[ ! -n '' ]]
Jan 31 02:25:39 np0005603609 nova_compute[221550]: + . kolla_extend_start
Jan 31 02:25:39 np0005603609 nova_compute[221550]: Running command: 'nova-compute'
Jan 31 02:25:39 np0005603609 nova_compute[221550]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 02:25:39 np0005603609 nova_compute[221550]: + umask 0022
Jan 31 02:25:39 np0005603609 nova_compute[221550]: + exec nova-compute
Jan 31 02:25:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:40.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:41.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:41 np0005603609 nova_compute[221550]: 2026-01-31 07:25:41.514 221554 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:25:41 np0005603609 nova_compute[221550]: 2026-01-31 07:25:41.514 221554 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:25:41 np0005603609 nova_compute[221550]: 2026-01-31 07:25:41.514 221554 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:25:41 np0005603609 nova_compute[221550]: 2026-01-31 07:25:41.514 221554 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 31 02:25:41 np0005603609 nova_compute[221550]: 2026-01-31 07:25:41.643 221554 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:25:41 np0005603609 nova_compute[221550]: 2026-01-31 07:25:41.650 221554 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:25:41 np0005603609 nova_compute[221550]: 2026-01-31 07:25:41.650 221554 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.213 221554 INFO nova.virt.driver [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.294 221554 INFO nova.compute.provider_config [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 31 02:25:42 np0005603609 python3.9[221718]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.342 221554 DEBUG oslo_concurrency.lockutils [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.343 221554 DEBUG oslo_concurrency.lockutils [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.343 221554 DEBUG oslo_concurrency.lockutils [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.343 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.343 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.343 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.343 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.344 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.344 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.344 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.344 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.344 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.344 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.344 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.345 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.345 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.345 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.345 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.345 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.345 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.345 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.346 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.346 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.346 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.346 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.346 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.346 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.346 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.347 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.347 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.347 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.347 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.347 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.347 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.347 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.348 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.348 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.348 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.348 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.348 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.348 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.348 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.349 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.349 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.349 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.349 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.349 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.349 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.349 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.350 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.350 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.350 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.350 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.350 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.350 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.350 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.351 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.351 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.351 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.351 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.351 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.351 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.351 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.351 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.352 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.352 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.352 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.352 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.352 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.352 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.352 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.353 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.353 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.353 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.353 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.353 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.353 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.353 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.354 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.354 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.354 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.354 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.354 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.354 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.354 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.355 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.355 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.355 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.355 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.355 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.355 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.355 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.356 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.356 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.356 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.356 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.356 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.356 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.356 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.356 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.357 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.357 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.357 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.357 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.357 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.357 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.357 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.358 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.358 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.358 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.358 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.358 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.358 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.358 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.359 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.359 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.359 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.359 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.359 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.359 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.359 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.359 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.360 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.360 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.360 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.360 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.360 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.360 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.360 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.361 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.361 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.361 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.361 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.361 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.361 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.361 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.362 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.362 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.362 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.362 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.362 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.362 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.362 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.362 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.363 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.363 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.363 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.363 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.363 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.363 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.363 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.364 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.364 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.364 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.364 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.364 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.364 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.364 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.365 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.365 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.365 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.365 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.365 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.365 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.365 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.366 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.366 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.366 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.366 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.366 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.366 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.366 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.367 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.367 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.367 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.367 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.367 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.367 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.368 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.368 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.368 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.368 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.368 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.368 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.368 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.369 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.369 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.369 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.369 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.369 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.369 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.370 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.370 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.370 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.370 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.370 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.370 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.370 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.371 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.371 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.371 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.371 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.371 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.371 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.371 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.372 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.372 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.372 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.372 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.372 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.372 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.372 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.373 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.373 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.373 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.373 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.373 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.373 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.373 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.374 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.374 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.374 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.374 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.374 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.374 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.374 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.375 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.375 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.375 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.375 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.375 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.375 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.375 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.375 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.376 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.376 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.376 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.376 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.376 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.376 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.376 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.377 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.377 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.377 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.377 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.377 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.377 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.377 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.378 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.378 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.378 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.378 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.378 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.378 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.378 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.379 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.379 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.379 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.379 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.379 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.379 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.379 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.380 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.380 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.380 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.380 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.380 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.380 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.380 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.381 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.381 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.381 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.381 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.381 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.381 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.381 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.382 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.382 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.382 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.382 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.382 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.382 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.382 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.383 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.383 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.383 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.383 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.383 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.383 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.383 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.384 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.384 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.384 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.384 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.384 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.384 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.385 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.385 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.385 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.385 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.385 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.385 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.385 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.386 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.386 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.386 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.386 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.386 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.386 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.386 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.387 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.387 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.387 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.387 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.387 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.387 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.387 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.388 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.388 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.388 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.388 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.388 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.388 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.388 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.389 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.389 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.389 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.389 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.389 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.389 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.389 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.390 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.390 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.390 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.390 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.390 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.390 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.391 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.391 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.391 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.391 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.391 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.392 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.392 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.392 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.392 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.392 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.392 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.393 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.393 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.393 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.393 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.393 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.393 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.393 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.394 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.394 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.394 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.394 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.394 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.394 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.395 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.395 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.395 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.395 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.395 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.395 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.395 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.396 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.396 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.396 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.396 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.396 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.396 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.396 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.397 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.397 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.397 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.397 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.397 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.397 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.397 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.398 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.398 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.398 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.398 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.398 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.398 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.398 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.399 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.399 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.399 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.399 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.399 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.399 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.399 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.400 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.400 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.400 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.400 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.400 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.400 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.400 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.401 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.401 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.401 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.401 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.401 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.401 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.402 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.402 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.402 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.402 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.402 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.402 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.403 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.403 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.403 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.403 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.403 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.403 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.403 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.404 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.404 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.404 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.404 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.404 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.404 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.404 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.404 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.405 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.405 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.405 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.405 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.405 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.405 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.406 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.406 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.406 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.406 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.406 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.406 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.406 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.407 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.407 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.407 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.407 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.407 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.407 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.407 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.408 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.408 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.408 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.408 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.408 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.408 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.408 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.409 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.409 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.409 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.409 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.409 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.409 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.410 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.410 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.410 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.410 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.410 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.410 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.410 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.411 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.411 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.411 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.411 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.411 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.411 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.411 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.411 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.412 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.412 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.412 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.412 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.412 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.412 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.412 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.413 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.413 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.413 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.413 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.413 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.414 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.414 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.414 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.414 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.414 221554 WARNING oslo_config.cfg [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 02:25:42 np0005603609 nova_compute[221550]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 02:25:42 np0005603609 nova_compute[221550]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 31 02:25:42 np0005603609 nova_compute[221550]: and ``live_migration_inbound_addr`` respectively.
Jan 31 02:25:42 np0005603609 nova_compute[221550]: ).  Its value may be silently ignored in the future.
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.415 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.415 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.415 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.415 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.415 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.416 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.416 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.416 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.416 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.416 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.416 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.416 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.417 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.417 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.417 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.417 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.418 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.418 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.418 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.rbd_secret_uuid        = f70fcd2a-dcb4-5f89-a4ba-79a09959083b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.418 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.418 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.418 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.419 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.419 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.419 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.419 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.419 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.419 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.419 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.420 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.420 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.420 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.420 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.420 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.421 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.421 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.421 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.421 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.421 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.421 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.422 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.422 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.422 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.422 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.422 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.423 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.423 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.423 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.423 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.423 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.424 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.424 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.424 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.424 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.424 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.425 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.425 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.425 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.425 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.425 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.426 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.426 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.426 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.426 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.426 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.427 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.427 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.427 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.427 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.427 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.428 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.428 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.428 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.428 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.428 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.429 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.429 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.429 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.429 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.429 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.430 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.430 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.430 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.430 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.430 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.431 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.431 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.431 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.431 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.431 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.432 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.432 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.432 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.432 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.433 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.433 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.433 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.433 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.433 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.434 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.434 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.434 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.434 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.434 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.435 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.435 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.435 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.435 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.435 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.436 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.436 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.436 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.436 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.436 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.437 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.437 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.437 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.437 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.437 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.438 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.438 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.438 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.438 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.438 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.439 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.439 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.439 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.439 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.439 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.439 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.440 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.440 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.440 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.440 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.440 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.441 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.441 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.441 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.441 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.442 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.442 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.442 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.442 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.442 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.442 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.443 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.443 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.443 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.443 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.443 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.444 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.444 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.444 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.444 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.444 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.445 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.445 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.445 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.445 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.445 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.446 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.446 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.446 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.446 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.446 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.447 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.447 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.447 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.447 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.447 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.448 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.448 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.448 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.448 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.448 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.449 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.449 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.449 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.449 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.449 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.449 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.450 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.450 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.450 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.450 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.450 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.450 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.451 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.451 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.451 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.451 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.451 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.452 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.452 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.452 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.452 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.452 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.453 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.453 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.453 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.453 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.454 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.454 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.454 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.454 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.454 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.454 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.455 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.455 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.455 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.455 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.455 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.456 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.456 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.456 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.456 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.456 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.456 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.456 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.456 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.457 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.457 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.457 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.457 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.457 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.457 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.457 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.458 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.458 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.458 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.458 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.458 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.458 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.458 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.459 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.459 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.459 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.459 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.459 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.459 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.459 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.460 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.460 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.460 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.460 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.460 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.460 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.460 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.461 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.461 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.461 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.461 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.461 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.461 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.462 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.462 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.463 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.463 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.463 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.463 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.463 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.463 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.463 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.464 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.464 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.464 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.464 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.464 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.464 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.464 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.465 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.465 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.465 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.465 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.465 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.465 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.466 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.466 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.466 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.466 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.466 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.466 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.466 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.467 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.467 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.467 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.467 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.467 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.467 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.467 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.468 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.468 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.468 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.468 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.468 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.468 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.468 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.468 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.469 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.469 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.469 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.469 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.469 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.470 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.470 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.470 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.470 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.470 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.470 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.470 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.470 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.471 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.471 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.471 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.471 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.471 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.471 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.471 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.472 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.472 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.472 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.472 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.472 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.473 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.473 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.473 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.473 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.473 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.473 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.473 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.474 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.474 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.474 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.474 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.474 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.474 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.475 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.475 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.475 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.475 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.475 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.475 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.475 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.476 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.476 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.476 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.476 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.476 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.476 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.476 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.476 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.477 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.477 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.477 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.477 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.477 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.477 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.478 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.478 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.478 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.478 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.478 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.478 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.479 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.479 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.479 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.479 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.479 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.479 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.479 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.480 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.480 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.480 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.480 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.480 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.480 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.480 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.481 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.481 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.481 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.481 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.481 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.481 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.481 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.482 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.482 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.482 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.482 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.482 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.482 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.483 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.483 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.483 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.483 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.483 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.483 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.484 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.484 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.484 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.484 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.484 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.485 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.485 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.485 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.485 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.485 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.486 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.486 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.486 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.486 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.486 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.487 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.487 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.487 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.487 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.488 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.488 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.488 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.488 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.488 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.489 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.489 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.489 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.489 221554 DEBUG oslo_service.service [None req-46426299-e9be-408b-818f-10e78d283191 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.490 221554 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Jan 31 02:25:42 np0005603609 systemd[1]: Started libpod-conmon-a922afc41bf64cbe0a8e8ce5173e3945f35ae5d6017d5b8eca452bb424aaf0d5.scope.
Jan 31 02:25:42 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:25:42 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d42e50357253e0488e769307e698d2e76406735e03da3ac0ab15acd7b866a58/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:42 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d42e50357253e0488e769307e698d2e76406735e03da3ac0ab15acd7b866a58/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:42 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d42e50357253e0488e769307e698d2e76406735e03da3ac0ab15acd7b866a58/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:42 np0005603609 podman[221741]: 2026-01-31 07:25:42.536566489 +0000 UTC m=+0.104990602 container init a922afc41bf64cbe0a8e8ce5173e3945f35ae5d6017d5b8eca452bb424aaf0d5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 31 02:25:42 np0005603609 podman[221741]: 2026-01-31 07:25:42.541229353 +0000 UTC m=+0.109653446 container start a922afc41bf64cbe0a8e8ce5173e3945f35ae5d6017d5b8eca452bb424aaf0d5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 31 02:25:42 np0005603609 python3.9[221718]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.573 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.574 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.575 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.575 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 31 02:25:42 np0005603609 nova_compute_init[221762]: INFO:nova_statedir:Applying nova statedir ownership
Jan 31 02:25:42 np0005603609 nova_compute_init[221762]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 31 02:25:42 np0005603609 nova_compute_init[221762]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 31 02:25:42 np0005603609 nova_compute_init[221762]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 31 02:25:42 np0005603609 nova_compute_init[221762]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 31 02:25:42 np0005603609 nova_compute_init[221762]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 31 02:25:42 np0005603609 nova_compute_init[221762]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 31 02:25:42 np0005603609 nova_compute_init[221762]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 31 02:25:42 np0005603609 nova_compute_init[221762]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 31 02:25:42 np0005603609 nova_compute_init[221762]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 31 02:25:42 np0005603609 nova_compute_init[221762]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 31 02:25:42 np0005603609 nova_compute_init[221762]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:25:42 np0005603609 nova_compute_init[221762]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 31 02:25:42 np0005603609 nova_compute_init[221762]: INFO:nova_statedir:Nova statedir ownership complete
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.584 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f521806bdf0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.587 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f521806bdf0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.588 221554 INFO nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 31 02:25:42 np0005603609 systemd[1]: libpod-a922afc41bf64cbe0a8e8ce5173e3945f35ae5d6017d5b8eca452bb424aaf0d5.scope: Deactivated successfully.
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.594 221554 INFO nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Libvirt host capabilities <capabilities>
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <host>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <uuid>231927d4-1ded-4b84-843c-456d697af567</uuid>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <cpu>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <arch>x86_64</arch>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model>EPYC-Rome-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <vendor>AMD</vendor>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <microcode version='16777317'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <signature family='23' model='49' stepping='0'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='x2apic'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='tsc-deadline'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='osxsave'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='hypervisor'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='tsc_adjust'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='spec-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='stibp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='arch-capabilities'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='ssbd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='cmp_legacy'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='topoext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='virt-ssbd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='lbrv'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='tsc-scale'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='vmcb-clean'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='pause-filter'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='pfthreshold'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='svme-addr-chk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='rdctl-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='skip-l1dfl-vmentry'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='mds-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature name='pschange-mc-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <pages unit='KiB' size='4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <pages unit='KiB' size='2048'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <pages unit='KiB' size='1048576'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </cpu>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <power_management>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <suspend_mem/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </power_management>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <iommu support='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <migration_features>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <live/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <uri_transports>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <uri_transport>tcp</uri_transport>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <uri_transport>rdma</uri_transport>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </uri_transports>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </migration_features>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <topology>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <cells num='1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <cell id='0'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:          <memory unit='KiB'>7864292</memory>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:          <pages unit='KiB' size='4'>1966073</pages>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:          <pages unit='KiB' size='2048'>0</pages>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:          <distances>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:            <sibling id='0' value='10'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:          </distances>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:          <cpus num='8'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:          </cpus>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        </cell>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </cells>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </topology>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <cache>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </cache>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <secmodel>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model>selinux</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <doi>0</doi>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </secmodel>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <secmodel>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model>dac</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <doi>0</doi>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </secmodel>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </host>
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <guest>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <os_type>hvm</os_type>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <arch name='i686'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <wordsize>32</wordsize>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <domain type='qemu'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <domain type='kvm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </arch>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <features>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <pae/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <nonpae/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <acpi default='on' toggle='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <apic default='on' toggle='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <cpuselection/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <deviceboot/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <disksnapshot default='on' toggle='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <externalSnapshot/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </features>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </guest>
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <guest>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <os_type>hvm</os_type>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <arch name='x86_64'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <wordsize>64</wordsize>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <domain type='qemu'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <domain type='kvm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </arch>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <features>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <acpi default='on' toggle='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <apic default='on' toggle='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <cpuselection/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <deviceboot/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <disksnapshot default='on' toggle='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <externalSnapshot/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </features>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </guest>
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 
Jan 31 02:25:42 np0005603609 nova_compute[221550]: </capabilities>
Jan 31 02:25:42 np0005603609 nova_compute[221550]: #033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.610 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 31 02:25:42 np0005603609 podman[221797]: 2026-01-31 07:25:42.62967707 +0000 UTC m=+0.025178525 container died a922afc41bf64cbe0a8e8ce5173e3945f35ae5d6017d5b8eca452bb424aaf0d5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.643 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 31 02:25:42 np0005603609 nova_compute[221550]: <domainCapabilities>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <domain>kvm</domain>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <arch>i686</arch>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <vcpu max='4096'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <iothreads supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <os supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <enum name='firmware'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <loader supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>rom</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pflash</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='readonly'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>yes</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>no</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='secure'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>no</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </loader>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <cpu>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <mode name='host-passthrough' supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='hostPassthroughMigratable'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>on</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>off</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </mode>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <mode name='maximum' supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='maximumMigratable'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>on</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>off</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </mode>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <mode name='host-model' supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <vendor>AMD</vendor>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='x2apic'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='hypervisor'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='stibp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='ssbd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='overflow-recov'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='succor'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='lbrv'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='tsc-scale'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='flushbyasid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='pause-filter'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='pfthreshold'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='disable' name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </mode>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <mode name='custom' supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-noTSX'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='ClearwaterForest'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ddpd-u'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='intel-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ipred-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='lam'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rrsba-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sha512'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sm3'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sm4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='ClearwaterForest-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ddpd-u'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='intel-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ipred-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='lam'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rrsba-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sha512'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sm3'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sm4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cooperlake'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cooperlake-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cooperlake-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Denverton'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mpx'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Denverton-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mpx'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Denverton-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Denverton-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Dhyana-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Genoa'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='perfmon-v2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Milan'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Milan-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Milan-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Milan-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Rome'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Rome-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Rome-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Rome-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Turin'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibpb-brtype'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='perfmon-v2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbpb'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Turin-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibpb-brtype'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='perfmon-v2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbpb'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-v5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='GraniteRapids'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='GraniteRapids-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='GraniteRapids-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-128'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-256'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-512'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a922afc41bf64cbe0a8e8ce5173e3945f35ae5d6017d5b8eca452bb424aaf0d5-userdata-shm.mount: Deactivated successfully.
Jan 31 02:25:42 np0005603609 systemd[1]: var-lib-containers-storage-overlay-5d42e50357253e0488e769307e698d2e76406735e03da3ac0ab15acd7b866a58-merged.mount: Deactivated successfully.
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='GraniteRapids-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-128'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-256'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-512'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-noTSX'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v6'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v7'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='IvyBridge'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 podman[221797]: 2026-01-31 07:25:42.667051782 +0000 UTC m=+0.062553227 container cleanup a922afc41bf64cbe0a8e8ce5173e3945f35ae5d6017d5b8eca452bb424aaf0d5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='IvyBridge-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='IvyBridge-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='IvyBridge-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='KnightsMill'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-4fmaps'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-4vnniw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512er'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512pf'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='KnightsMill-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-4fmaps'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-4vnniw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512er'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512pf'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Opteron_G4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fma4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xop'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Opteron_G4-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fma4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xop'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Opteron_G5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fma4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tbm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xop'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Opteron_G5-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fma4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tbm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xop'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 systemd[1]: libpod-conmon-a922afc41bf64cbe0a8e8ce5173e3945f35ae5d6017d5b8eca452bb424aaf0d5.scope: Deactivated successfully.
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SierraForest'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SierraForest-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SierraForest-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='intel-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ipred-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='lam'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rrsba-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SierraForest-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='intel-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ipred-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='lam'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rrsba-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='core-capability'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mpx'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='split-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='core-capability'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mpx'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='split-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='core-capability'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='split-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='core-capability'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='split-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='athlon'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnow'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnowext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='athlon-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnow'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnowext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='core2duo'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='core2duo-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='coreduo'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='coreduo-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='n270'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='n270-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='phenom'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnow'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnowext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='phenom-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnow'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnowext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </mode>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <memoryBacking supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <enum name='sourceType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>file</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>anonymous</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>memfd</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </memoryBacking>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <disk supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='diskDevice'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>disk</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>cdrom</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>floppy</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>lun</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='bus'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>fdc</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>scsi</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>usb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>sata</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio-transitional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio-non-transitional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <graphics supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vnc</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>egl-headless</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>dbus</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <video supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='modelType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vga</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>cirrus</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>none</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>bochs</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>ramfb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <hostdev supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='mode'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>subsystem</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='startupPolicy'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>default</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>mandatory</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>requisite</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>optional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='subsysType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>usb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pci</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>scsi</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='capsType'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='pciBackend'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </hostdev>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <rng supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio-transitional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio-non-transitional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendModel'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>random</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>egd</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>builtin</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <filesystem supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='driverType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>path</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>handle</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtiofs</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </filesystem>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <tpm supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>tpm-tis</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>tpm-crb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendModel'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>emulator</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>external</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendVersion'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>2.0</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </tpm>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <redirdev supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='bus'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>usb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </redirdev>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <channel supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pty</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>unix</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </channel>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <crypto supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>qemu</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendModel'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>builtin</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </crypto>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <interface supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>default</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>passt</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <panic supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>isa</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>hyperv</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </panic>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <console supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>null</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vc</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pty</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>dev</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>file</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pipe</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>stdio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>udp</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>tcp</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>unix</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>qemu-vdagent</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>dbus</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </console>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <gic supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <vmcoreinfo supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <genid supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <backingStoreInput supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <backup supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <async-teardown supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <s390-pv supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <ps2 supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <tdx supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <sev supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <sgx supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <hyperv supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='features'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>relaxed</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vapic</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>spinlocks</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vpindex</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>runtime</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>synic</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>stimer</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>reset</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vendor_id</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>frequencies</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>reenlightenment</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>tlbflush</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>ipi</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>avic</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>emsr_bitmap</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>xmm_input</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <defaults>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <spinlocks>4095</spinlocks>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <stimer_direct>on</stimer_direct>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </defaults>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </hyperv>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <launchSecurity supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:25:42 np0005603609 nova_compute[221550]: </domainCapabilities>
Jan 31 02:25:42 np0005603609 nova_compute[221550]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.653 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 31 02:25:42 np0005603609 nova_compute[221550]: <domainCapabilities>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <domain>kvm</domain>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <arch>i686</arch>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <vcpu max='240'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <iothreads supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <os supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <enum name='firmware'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <loader supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>rom</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pflash</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='readonly'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>yes</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>no</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='secure'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>no</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </loader>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <cpu>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <mode name='host-passthrough' supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='hostPassthroughMigratable'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>on</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>off</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </mode>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <mode name='maximum' supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='maximumMigratable'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>on</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>off</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </mode>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <mode name='host-model' supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <vendor>AMD</vendor>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='x2apic'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='hypervisor'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='stibp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='ssbd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='overflow-recov'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='succor'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='lbrv'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='tsc-scale'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='flushbyasid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='pause-filter'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='pfthreshold'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='disable' name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </mode>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <mode name='custom' supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-noTSX'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='ClearwaterForest'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ddpd-u'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='intel-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ipred-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='lam'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rrsba-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sha512'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sm3'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sm4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='ClearwaterForest-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ddpd-u'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='intel-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ipred-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='lam'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rrsba-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sha512'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sm3'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sm4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cooperlake'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cooperlake-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cooperlake-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Denverton'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mpx'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Denverton-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mpx'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Denverton-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Denverton-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Dhyana-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Genoa'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='perfmon-v2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Milan'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 02:25:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Milan-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:42.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Milan-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Milan-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Rome'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Rome-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Rome-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Rome-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Turin'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibpb-brtype'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='perfmon-v2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbpb'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Turin-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibpb-brtype'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='perfmon-v2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbpb'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-v5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='GraniteRapids'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='GraniteRapids-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='GraniteRapids-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-128'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-256'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-512'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='GraniteRapids-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-128'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-256'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-512'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-noTSX'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v6'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v7'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='IvyBridge'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='IvyBridge-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='IvyBridge-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='IvyBridge-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='KnightsMill'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-4fmaps'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-4vnniw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512er'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512pf'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='KnightsMill-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-4fmaps'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-4vnniw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512er'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512pf'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Opteron_G4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fma4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xop'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Opteron_G4-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fma4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xop'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Opteron_G5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fma4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tbm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xop'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Opteron_G5-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fma4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tbm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xop'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SierraForest'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SierraForest-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SierraForest-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='intel-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ipred-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='lam'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rrsba-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SierraForest-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='intel-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ipred-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='lam'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rrsba-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='core-capability'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mpx'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='split-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='core-capability'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mpx'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='split-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='core-capability'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='split-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='core-capability'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='split-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='athlon'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnow'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnowext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='athlon-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnow'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnowext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='core2duo'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='core2duo-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='coreduo'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='coreduo-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='n270'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='n270-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='phenom'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnow'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnowext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='phenom-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnow'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnowext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </mode>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <memoryBacking supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <enum name='sourceType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>file</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>anonymous</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>memfd</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </memoryBacking>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <disk supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='diskDevice'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>disk</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>cdrom</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>floppy</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>lun</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='bus'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>ide</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>fdc</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>scsi</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>usb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>sata</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio-transitional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio-non-transitional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <graphics supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vnc</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>egl-headless</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>dbus</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <video supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='modelType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vga</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>cirrus</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>none</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>bochs</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>ramfb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <hostdev supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='mode'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>subsystem</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='startupPolicy'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>default</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>mandatory</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>requisite</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>optional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='subsysType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>usb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pci</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>scsi</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='capsType'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='pciBackend'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </hostdev>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <rng supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio-transitional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio-non-transitional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendModel'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>random</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>egd</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>builtin</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <filesystem supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='driverType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>path</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>handle</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtiofs</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </filesystem>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <tpm supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>tpm-tis</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>tpm-crb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendModel'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>emulator</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>external</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendVersion'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>2.0</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </tpm>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <redirdev supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='bus'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>usb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </redirdev>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <channel supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pty</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>unix</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </channel>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <crypto supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>qemu</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendModel'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>builtin</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </crypto>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <interface supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>default</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>passt</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <panic supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>isa</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>hyperv</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </panic>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <console supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>null</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vc</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pty</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>dev</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>file</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pipe</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>stdio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>udp</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>tcp</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>unix</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>qemu-vdagent</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>dbus</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </console>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <gic supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <vmcoreinfo supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <genid supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <backingStoreInput supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <backup supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <async-teardown supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <s390-pv supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <ps2 supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <tdx supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <sev supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <sgx supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <hyperv supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='features'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>relaxed</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vapic</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>spinlocks</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vpindex</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>runtime</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>synic</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>stimer</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>reset</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vendor_id</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>frequencies</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>reenlightenment</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>tlbflush</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>ipi</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>avic</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>emsr_bitmap</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>xmm_input</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <defaults>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <spinlocks>4095</spinlocks>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <stimer_direct>on</stimer_direct>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </defaults>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </hyperv>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <launchSecurity supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:25:42 np0005603609 nova_compute[221550]: </domainCapabilities>
Jan 31 02:25:42 np0005603609 nova_compute[221550]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.702 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.704 221554 WARNING nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.704 221554 DEBUG nova.virt.libvirt.volume.mount [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.708 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 31 02:25:42 np0005603609 nova_compute[221550]: <domainCapabilities>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <domain>kvm</domain>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <arch>x86_64</arch>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <vcpu max='4096'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <iothreads supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <os supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <enum name='firmware'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>efi</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <loader supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>rom</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pflash</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='readonly'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>yes</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>no</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='secure'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>yes</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>no</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </loader>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <cpu>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <mode name='host-passthrough' supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='hostPassthroughMigratable'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>on</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>off</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </mode>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <mode name='maximum' supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='maximumMigratable'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>on</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>off</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </mode>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <mode name='host-model' supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <vendor>AMD</vendor>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='x2apic'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='hypervisor'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='stibp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='ssbd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='overflow-recov'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='succor'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='lbrv'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='tsc-scale'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='flushbyasid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='pause-filter'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='pfthreshold'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='disable' name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </mode>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <mode name='custom' supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-noTSX'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='ClearwaterForest'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ddpd-u'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='intel-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ipred-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='lam'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rrsba-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sha512'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sm3'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sm4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='ClearwaterForest-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ddpd-u'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='intel-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ipred-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='lam'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rrsba-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sha512'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sm3'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sm4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cooperlake'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cooperlake-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cooperlake-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Denverton'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mpx'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Denverton-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mpx'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Denverton-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Denverton-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Dhyana-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Genoa'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='perfmon-v2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Milan'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Milan-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Milan-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Milan-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Rome'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Rome-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Rome-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Rome-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Turin'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibpb-brtype'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='perfmon-v2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbpb'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Turin-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibpb-brtype'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='perfmon-v2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbpb'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-v5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='GraniteRapids'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='GraniteRapids-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='GraniteRapids-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-128'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-256'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-512'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='GraniteRapids-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-128'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-256'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-512'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-noTSX'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v6'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v7'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='IvyBridge'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='IvyBridge-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='IvyBridge-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='IvyBridge-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='KnightsMill'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-4fmaps'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-4vnniw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512er'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512pf'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='KnightsMill-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-4fmaps'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-4vnniw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512er'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512pf'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Opteron_G4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fma4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xop'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Opteron_G4-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fma4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xop'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Opteron_G5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fma4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tbm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xop'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Opteron_G5-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fma4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tbm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xop'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SierraForest'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SierraForest-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SierraForest-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='intel-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ipred-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='lam'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rrsba-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SierraForest-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='intel-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ipred-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='lam'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rrsba-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='core-capability'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mpx'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='split-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='core-capability'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mpx'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='split-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='core-capability'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='split-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='core-capability'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='split-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='athlon'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnow'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnowext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='athlon-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnow'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnowext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='core2duo'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='core2duo-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='coreduo'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='coreduo-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='n270'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='n270-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='phenom'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnow'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnowext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='phenom-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnow'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnowext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </mode>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <memoryBacking supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <enum name='sourceType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>file</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>anonymous</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>memfd</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </memoryBacking>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <disk supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='diskDevice'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>disk</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>cdrom</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>floppy</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>lun</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='bus'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>fdc</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>scsi</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>usb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>sata</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio-transitional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio-non-transitional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <graphics supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vnc</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>egl-headless</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>dbus</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <video supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='modelType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vga</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>cirrus</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>none</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>bochs</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>ramfb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <hostdev supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='mode'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>subsystem</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='startupPolicy'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>default</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>mandatory</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>requisite</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>optional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='subsysType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>usb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pci</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>scsi</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='capsType'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='pciBackend'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </hostdev>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <rng supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio-transitional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio-non-transitional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendModel'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>random</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>egd</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>builtin</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <filesystem supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='driverType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>path</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>handle</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtiofs</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </filesystem>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <tpm supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>tpm-tis</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>tpm-crb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendModel'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>emulator</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>external</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendVersion'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>2.0</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </tpm>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <redirdev supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='bus'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>usb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </redirdev>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <channel supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pty</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>unix</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </channel>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <crypto supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>qemu</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendModel'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>builtin</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </crypto>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <interface supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>default</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>passt</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <panic supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>isa</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>hyperv</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </panic>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <console supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>null</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vc</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pty</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>dev</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>file</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pipe</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>stdio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>udp</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>tcp</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>unix</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>qemu-vdagent</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>dbus</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </console>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <gic supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <vmcoreinfo supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <genid supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <backingStoreInput supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <backup supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <async-teardown supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <s390-pv supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <ps2 supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <tdx supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <sev supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <sgx supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <hyperv supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='features'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>relaxed</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vapic</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>spinlocks</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vpindex</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>runtime</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>synic</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>stimer</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>reset</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vendor_id</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>frequencies</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>reenlightenment</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>tlbflush</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>ipi</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>avic</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>emsr_bitmap</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>xmm_input</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <defaults>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <spinlocks>4095</spinlocks>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <stimer_direct>on</stimer_direct>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </defaults>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </hyperv>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <launchSecurity supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:25:42 np0005603609 nova_compute[221550]: </domainCapabilities>
Jan 31 02:25:42 np0005603609 nova_compute[221550]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.772 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 31 02:25:42 np0005603609 nova_compute[221550]: <domainCapabilities>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <domain>kvm</domain>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <arch>x86_64</arch>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <vcpu max='240'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <iothreads supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <os supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <enum name='firmware'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <loader supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>rom</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pflash</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='readonly'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>yes</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>no</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='secure'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>no</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </loader>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <cpu>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <mode name='host-passthrough' supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='hostPassthroughMigratable'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>on</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>off</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </mode>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <mode name='maximum' supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='maximumMigratable'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>on</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>off</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </mode>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <mode name='host-model' supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <vendor>AMD</vendor>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='x2apic'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='hypervisor'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='stibp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='ssbd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='overflow-recov'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='succor'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='lbrv'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='tsc-scale'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='flushbyasid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='pause-filter'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='pfthreshold'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <feature policy='disable' name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </mode>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <mode name='custom' supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-noTSX'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Broadwell-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='ClearwaterForest'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ddpd-u'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='intel-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ipred-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='lam'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rrsba-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sha512'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sm3'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sm4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='ClearwaterForest-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ddpd-u'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='intel-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ipred-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='lam'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rrsba-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sha512'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sm3'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sm4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cooperlake'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cooperlake-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Cooperlake-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Denverton'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mpx'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Denverton-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mpx'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Denverton-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Denverton-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Dhyana-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Genoa'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='perfmon-v2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Milan'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Milan-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Milan-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Milan-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Rome'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Rome-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Rome-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Rome-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Turin'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibpb-brtype'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='perfmon-v2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbpb'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-Turin-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amd-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='auto-ibrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibpb-brtype'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='no-nested-data-bp'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='null-sel-clr-base'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='perfmon-v2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbpb'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='stibp-always-on'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='EPYC-v5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='GraniteRapids'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='GraniteRapids-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='GraniteRapids-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-128'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-256'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-512'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='GraniteRapids-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-128'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-256'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx10-512'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='prefetchiti'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-noTSX'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Haswell-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v6'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Icelake-Server-v7'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='IvyBridge'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='IvyBridge-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='IvyBridge-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='IvyBridge-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='KnightsMill'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-4fmaps'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-4vnniw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512er'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512pf'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='KnightsMill-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-4fmaps'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-4vnniw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512er'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512pf'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Opteron_G4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fma4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xop'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Opteron_G4-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fma4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xop'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Opteron_G5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fma4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tbm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xop'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Opteron_G5-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fma4'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tbm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xop'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SapphireRapids-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='amx-tile'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-bf16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-fp16'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bitalg'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vbmi2'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrc'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fzrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='la57'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='taa-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='tsx-ldtrk'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SierraForest'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SierraForest-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SierraForest-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='intel-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ipred-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='lam'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rrsba-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='SierraForest-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ifma'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-ne-convert'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx-vnni-int8'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bhi-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='bus-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cmpccxadd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fbsdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='fsrs'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ibrs-all'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='intel-psfd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ipred-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='lam'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mcdt-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pbrsb-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='psdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rrsba-ctrl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='serialize'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vaes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='vpclmulqdq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Client-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='hle'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='rtm'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Skylake-Server-v5'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512bw'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512cd'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512dq'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512f'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='avx512vl'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='invpcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pcid'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='pku'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='core-capability'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mpx'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='split-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='core-capability'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='mpx'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='split-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge-v2'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='core-capability'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='split-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge-v3'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='core-capability'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='split-lock-detect'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='Snowridge-v4'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='cldemote'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='erms'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='gfni'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdir64b'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='movdiri'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='xsaves'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='athlon'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnow'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnowext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='athlon-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnow'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnowext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='core2duo'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='core2duo-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='coreduo'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='coreduo-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='n270'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='n270-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='ss'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='phenom'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnow'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnowext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <blockers model='phenom-v1'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnow'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <feature name='3dnowext'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </blockers>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </mode>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <memoryBacking supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <enum name='sourceType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>file</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>anonymous</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <value>memfd</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </memoryBacking>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <disk supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='diskDevice'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>disk</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>cdrom</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>floppy</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>lun</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='bus'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>ide</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>fdc</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>scsi</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>usb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>sata</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio-transitional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio-non-transitional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <graphics supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vnc</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>egl-headless</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>dbus</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <video supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='modelType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vga</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>cirrus</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>none</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>bochs</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>ramfb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <hostdev supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='mode'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>subsystem</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='startupPolicy'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>default</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>mandatory</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>requisite</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>optional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='subsysType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>usb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pci</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>scsi</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='capsType'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='pciBackend'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </hostdev>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <rng supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio-transitional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtio-non-transitional</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendModel'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>random</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>egd</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>builtin</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <filesystem supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='driverType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>path</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>handle</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>virtiofs</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </filesystem>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <tpm supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>tpm-tis</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>tpm-crb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendModel'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>emulator</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>external</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendVersion'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>2.0</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </tpm>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <redirdev supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='bus'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>usb</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </redirdev>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <channel supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pty</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>unix</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </channel>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <crypto supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>qemu</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendModel'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>builtin</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </crypto>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <interface supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='backendType'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>default</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>passt</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <panic supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='model'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>isa</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>hyperv</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </panic>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <console supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='type'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>null</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vc</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pty</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>dev</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>file</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>pipe</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>stdio</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>udp</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>tcp</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>unix</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>qemu-vdagent</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>dbus</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </console>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <gic supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <vmcoreinfo supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <genid supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <backingStoreInput supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <backup supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <async-teardown supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <s390-pv supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <ps2 supported='yes'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <tdx supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <sev supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <sgx supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <hyperv supported='yes'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <enum name='features'>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>relaxed</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vapic</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>spinlocks</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vpindex</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>runtime</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>synic</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>stimer</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>reset</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>vendor_id</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>frequencies</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>reenlightenment</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>tlbflush</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>ipi</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>avic</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>emsr_bitmap</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <value>xmm_input</value>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </enum>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      <defaults>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <spinlocks>4095</spinlocks>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <stimer_direct>on</stimer_direct>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:      </defaults>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    </hyperv>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:    <launchSecurity supported='no'/>
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:25:42 np0005603609 nova_compute[221550]: </domainCapabilities>
Jan 31 02:25:42 np0005603609 nova_compute[221550]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.850 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.850 221554 INFO nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Secure Boot support detected#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.852 221554 INFO nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.852 221554 INFO nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.859 221554 DEBUG nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] cpu compare xml: <cpu match="exact">
Jan 31 02:25:42 np0005603609 nova_compute[221550]:  <model>Nehalem</model>
Jan 31 02:25:42 np0005603609 nova_compute[221550]: </cpu>
Jan 31 02:25:42 np0005603609 nova_compute[221550]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.860 221554 DEBUG nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 31 02:25:42 np0005603609 nova_compute[221550]: 2026-01-31 07:25:42.939 221554 INFO nova.virt.node [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Determined node identity 09a2f316-8f9d-47b2-922f-864a1d14c517 from /var/lib/nova/compute_id#033[00m
Jan 31 02:25:43 np0005603609 nova_compute[221550]: 2026-01-31 07:25:43.022 221554 WARNING nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Compute nodes ['09a2f316-8f9d-47b2-922f-864a1d14c517'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 31 02:25:43 np0005603609 nova_compute[221550]: 2026-01-31 07:25:43.169 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 31 02:25:43 np0005603609 nova_compute[221550]: 2026-01-31 07:25:43.325 221554 WARNING nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Jan 31 02:25:43 np0005603609 nova_compute[221550]: 2026-01-31 07:25:43.325 221554 DEBUG oslo_concurrency.lockutils [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:25:43 np0005603609 nova_compute[221550]: 2026-01-31 07:25:43.325 221554 DEBUG oslo_concurrency.lockutils [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:25:43 np0005603609 nova_compute[221550]: 2026-01-31 07:25:43.325 221554 DEBUG oslo_concurrency.lockutils [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:25:43 np0005603609 nova_compute[221550]: 2026-01-31 07:25:43.326 221554 DEBUG nova.compute.resource_tracker [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:25:43 np0005603609 nova_compute[221550]: 2026-01-31 07:25:43.326 221554 DEBUG oslo_concurrency.processutils [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:25:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 02:25:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:43.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 02:25:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:25:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2169502590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:25:43 np0005603609 nova_compute[221550]: 2026-01-31 07:25:43.768 221554 DEBUG oslo_concurrency.processutils [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:25:43 np0005603609 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 02:25:43 np0005603609 systemd[1]: Started libvirt nodedev daemon.
Jan 31 02:25:44 np0005603609 nova_compute[221550]: 2026-01-31 07:25:44.046 221554 WARNING nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:25:44 np0005603609 nova_compute[221550]: 2026-01-31 07:25:44.047 221554 DEBUG nova.compute.resource_tracker [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5249MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:25:44 np0005603609 nova_compute[221550]: 2026-01-31 07:25:44.048 221554 DEBUG oslo_concurrency.lockutils [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:25:44 np0005603609 nova_compute[221550]: 2026-01-31 07:25:44.048 221554 DEBUG oslo_concurrency.lockutils [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:25:44 np0005603609 systemd[1]: session-49.scope: Deactivated successfully.
Jan 31 02:25:44 np0005603609 systemd[1]: session-49.scope: Consumed 1min 50.369s CPU time.
Jan 31 02:25:44 np0005603609 systemd-logind[823]: Session 49 logged out. Waiting for processes to exit.
Jan 31 02:25:44 np0005603609 systemd-logind[823]: Removed session 49.
Jan 31 02:25:44 np0005603609 nova_compute[221550]: 2026-01-31 07:25:44.185 221554 WARNING nova.compute.resource_tracker [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] No compute node record for compute-1.ctlplane.example.com:09a2f316-8f9d-47b2-922f-864a1d14c517: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 09a2f316-8f9d-47b2-922f-864a1d14c517 could not be found.#033[00m
Jan 31 02:25:44 np0005603609 nova_compute[221550]: 2026-01-31 07:25:44.361 221554 INFO nova.compute.resource_tracker [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: 09a2f316-8f9d-47b2-922f-864a1d14c517#033[00m
Jan 31 02:25:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:25:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:44.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:25:45 np0005603609 nova_compute[221550]: 2026-01-31 07:25:45.312 221554 DEBUG nova.compute.resource_tracker [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:25:45 np0005603609 nova_compute[221550]: 2026-01-31 07:25:45.312 221554 DEBUG nova.compute.resource_tracker [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:25:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:45.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:46 np0005603609 nova_compute[221550]: 2026-01-31 07:25:46.374 221554 INFO nova.scheduler.client.report [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [req-ec480c5a-26fd-4dd2-be78-e0646049b012] Created resource provider record via placement API for resource provider with UUID 09a2f316-8f9d-47b2-922f-864a1d14c517 and name compute-1.ctlplane.example.com.#033[00m
Jan 31 02:25:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:46.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:47 np0005603609 nova_compute[221550]: 2026-01-31 07:25:47.042 221554 DEBUG oslo_concurrency.processutils [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:25:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:47.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:25:47 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2858455899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:25:47 np0005603609 nova_compute[221550]: 2026-01-31 07:25:47.547 221554 DEBUG oslo_concurrency.processutils [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:25:47 np0005603609 nova_compute[221550]: 2026-01-31 07:25:47.554 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 31 02:25:47 np0005603609 nova_compute[221550]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Jan 31 02:25:47 np0005603609 nova_compute[221550]: 2026-01-31 07:25:47.554 221554 INFO nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] kernel doesn't support AMD SEV#033[00m
Jan 31 02:25:47 np0005603609 nova_compute[221550]: 2026-01-31 07:25:47.556 221554 DEBUG nova.compute.provider_tree [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 02:25:47 np0005603609 nova_compute[221550]: 2026-01-31 07:25:47.557 221554 DEBUG nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:25:47 np0005603609 nova_compute[221550]: 2026-01-31 07:25:47.563 221554 DEBUG nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Libvirt baseline CPU <cpu>
Jan 31 02:25:47 np0005603609 nova_compute[221550]:  <arch>x86_64</arch>
Jan 31 02:25:47 np0005603609 nova_compute[221550]:  <model>Nehalem</model>
Jan 31 02:25:47 np0005603609 nova_compute[221550]:  <vendor>AMD</vendor>
Jan 31 02:25:47 np0005603609 nova_compute[221550]:  <topology sockets="8" cores="1" threads="1"/>
Jan 31 02:25:47 np0005603609 nova_compute[221550]: </cpu>
Jan 31 02:25:47 np0005603609 nova_compute[221550]: _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Jan 31 02:25:47 np0005603609 nova_compute[221550]: 2026-01-31 07:25:47.668 221554 DEBUG nova.scheduler.client.report [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Updated inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 31 02:25:47 np0005603609 nova_compute[221550]: 2026-01-31 07:25:47.669 221554 DEBUG nova.compute.provider_tree [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Updating resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 02:25:47 np0005603609 nova_compute[221550]: 2026-01-31 07:25:47.670 221554 DEBUG nova.compute.provider_tree [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 02:25:47 np0005603609 nova_compute[221550]: 2026-01-31 07:25:47.911 221554 DEBUG nova.compute.provider_tree [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Updating resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 02:25:48 np0005603609 nova_compute[221550]: 2026-01-31 07:25:48.024 221554 DEBUG nova.compute.resource_tracker [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 02:25:48 np0005603609 nova_compute[221550]: 2026-01-31 07:25:48.025 221554 DEBUG oslo_concurrency.lockutils [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.977s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:25:48 np0005603609 nova_compute[221550]: 2026-01-31 07:25:48.025 221554 DEBUG nova.service [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 31 02:25:48 np0005603609 nova_compute[221550]: 2026-01-31 07:25:48.599 221554 DEBUG nova.service [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 31 02:25:48 np0005603609 nova_compute[221550]: 2026-01-31 07:25:48.600 221554 DEBUG nova.servicegroup.drivers.db [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 31 02:25:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:48.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:49.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:50.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:51 np0005603609 podman[221916]: 2026-01-31 07:25:51.180248767 +0000 UTC m=+0.059206805 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 02:25:51 np0005603609 podman[221915]: 2026-01-31 07:25:51.215908897 +0000 UTC m=+0.094716502 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Jan 31 02:25:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:51.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:51 np0005603609 nova_compute[221550]: 2026-01-31 07:25:51.601 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:25:51 np0005603609 nova_compute[221550]: 2026-01-31 07:25:51.619 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:25:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:52.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:53.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:54.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:25:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:55.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:25:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:56.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:57.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:58.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:25:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:59.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:26:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:00.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:26:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:26:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:01.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:26:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:02.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:26:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:03.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:04.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:05.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:26:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:06.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:07.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:26:07.458 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:26:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:26:07.459 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:26:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:26:07.459 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:26:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:26:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:08.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:26:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:09.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:10.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:26:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 02:26:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:11.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 02:26:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:26:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:12.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:26:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:26:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:13.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:26:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:14.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:15.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:26:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:16.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:17.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:18.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:19.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:26:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/571993948' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:26:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:26:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/571993948' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:26:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:20.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:26:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:21.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:22 np0005603609 podman[221959]: 2026-01-31 07:26:22.199242886 +0000 UTC m=+0.077105642 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 02:26:22 np0005603609 podman[221958]: 2026-01-31 07:26:22.208909932 +0000 UTC m=+0.088558011 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 02:26:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:22.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:23.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:24.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:25.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:26:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:26:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:26.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:26:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:26:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:27.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:26:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:28.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:29.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:26:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 6248 writes, 25K keys, 6248 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6248 writes, 1135 syncs, 5.50 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 472 writes, 746 keys, 472 commit groups, 1.0 writes per commit group, ingest: 0.24 MB, 0.00 MB/s#012Interval WAL: 472 writes, 221 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f1fef0cf30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f1fef0cf30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Jan 31 02:26:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:26:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:30.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:26:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:26:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:26:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:26:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:26:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:26:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:26:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:31.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:26:31 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2853494607' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:26:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:26:31 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2853494607' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:26:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:32.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:33.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:26:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:34.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:26:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:35.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:26:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:36.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:26:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:26:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:26:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:37.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:26:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:26:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:38.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:26:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:26:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:39.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:26:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:40.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:26:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:41.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.662 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.662 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.676 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.676 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.676 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.676 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.677 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.677 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.677 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.677 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.677 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.708 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.708 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.709 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.709 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:26:41 np0005603609 nova_compute[221550]: 2026-01-31 07:26:41.710 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:26:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:26:42 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/54853499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:26:42 np0005603609 nova_compute[221550]: 2026-01-31 07:26:42.127 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:26:42 np0005603609 nova_compute[221550]: 2026-01-31 07:26:42.306 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:26:42 np0005603609 nova_compute[221550]: 2026-01-31 07:26:42.307 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5337MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:26:42 np0005603609 nova_compute[221550]: 2026-01-31 07:26:42.307 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:26:42 np0005603609 nova_compute[221550]: 2026-01-31 07:26:42.308 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:26:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:42.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:42 np0005603609 nova_compute[221550]: 2026-01-31 07:26:42.929 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:26:42 np0005603609 nova_compute[221550]: 2026-01-31 07:26:42.929 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:26:42 np0005603609 nova_compute[221550]: 2026-01-31 07:26:42.969 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:26:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:26:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2851660928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:26:43 np0005603609 nova_compute[221550]: 2026-01-31 07:26:43.417 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:26:43 np0005603609 nova_compute[221550]: 2026-01-31 07:26:43.423 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:26:43 np0005603609 nova_compute[221550]: 2026-01-31 07:26:43.445 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:26:43 np0005603609 nova_compute[221550]: 2026-01-31 07:26:43.447 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:26:43 np0005603609 nova_compute[221550]: 2026-01-31 07:26:43.448 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:26:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:43.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:44.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:26:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:45.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:26:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:26:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:26:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:46.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:26:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:26:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:47.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:26:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:48.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:49.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:26:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:50.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:26:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:26:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:51.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:26:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:52.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:26:53 np0005603609 podman[222233]: 2026-01-31 07:26:53.164189056 +0000 UTC m=+0.047759501 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:26:53 np0005603609 podman[222232]: 2026-01-31 07:26:53.202627355 +0000 UTC m=+0.082068112 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:26:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:26:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:53.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:26:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:26:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:54.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:26:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:55.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:26:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:26:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:56.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:26:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:57.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:58.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:26:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:59.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:00.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:27:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:01.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:27:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.3 total, 600.0 interval#012Cumulative writes: 3399 writes, 18K keys, 3399 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s#012Cumulative WAL: 3399 writes, 3399 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1313 writes, 6700 keys, 1313 commit groups, 1.0 writes per commit group, ingest: 14.43 MB, 0.02 MB/s#012Interval WAL: 1314 writes, 1314 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     70.9      0.29              0.05         9    0.033       0      0       0.0       0.0#012  L6      1/0    7.47 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2     82.1     68.5      0.98              0.15         8    0.123     36K   4298       0.0       0.0#012 Sum      1/0    7.47 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     63.2     69.1      1.28              0.19        17    0.075     36K   4298       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.6     56.5     56.6      0.93              0.12        10    0.093     23K   3023       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     82.1     68.5      0.98              0.15         8    0.123     36K   4298       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     79.6      0.26              0.05         8    0.033       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.020, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.09 GB write, 0.07 MB/s write, 0.08 GB read, 0.07 MB/s read, 1.3 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619ad8ed1f0#2 capacity: 308.00 MB usage: 4.52 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000131 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(246,4.21 MB,1.36728%) FilterBlock(17,106.67 KB,0.033822%) IndexBlock(17,211.67 KB,0.0671139%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 02:27:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:02.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:03.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:04.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:05.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:27:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:06.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:27:07.459 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:27:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:27:07.460 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:27:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:27:07.460 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:27:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:07.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:08.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:09.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:10.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:27:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:11.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:12.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:13.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:14.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:15.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:27:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:16.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:17.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:18.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:19.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:20.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:27:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:21.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:22.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:23.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:24 np0005603609 podman[222278]: 2026-01-31 07:27:24.155583813 +0000 UTC m=+0.038931222 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 02:27:24 np0005603609 podman[222277]: 2026-01-31 07:27:24.17276794 +0000 UTC m=+0.063562306 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:27:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:24.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:25.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:27:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:26.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:27.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:28.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:29.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:30.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:27:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:31.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:32.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:33.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:34.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:35.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:27:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:36.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:37.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:38.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:27:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:27:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:27:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:27:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:27:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:39.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:27:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:27:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:27:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:27:40.795 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:27:40.796 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:27:40.797 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:27:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:40.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:27:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:41.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:42.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:43 np0005603609 nova_compute[221550]: 2026-01-31 07:27:43.440 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:27:43 np0005603609 nova_compute[221550]: 2026-01-31 07:27:43.440 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:27:43 np0005603609 nova_compute[221550]: 2026-01-31 07:27:43.475 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:27:43 np0005603609 nova_compute[221550]: 2026-01-31 07:27:43.476 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:27:43 np0005603609 nova_compute[221550]: 2026-01-31 07:27:43.476 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:27:43 np0005603609 nova_compute[221550]: 2026-01-31 07:27:43.504 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:27:43 np0005603609 nova_compute[221550]: 2026-01-31 07:27:43.504 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:27:43 np0005603609 nova_compute[221550]: 2026-01-31 07:27:43.504 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:27:43 np0005603609 nova_compute[221550]: 2026-01-31 07:27:43.505 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:27:43 np0005603609 nova_compute[221550]: 2026-01-31 07:27:43.531 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:27:43 np0005603609 nova_compute[221550]: 2026-01-31 07:27:43.531 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:27:43 np0005603609 nova_compute[221550]: 2026-01-31 07:27:43.532 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:27:43 np0005603609 nova_compute[221550]: 2026-01-31 07:27:43.532 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:27:43 np0005603609 nova_compute[221550]: 2026-01-31 07:27:43.532 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:27:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:27:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4160809978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:27:43 np0005603609 nova_compute[221550]: 2026-01-31 07:27:43.979 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:27:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:43.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:44 np0005603609 nova_compute[221550]: 2026-01-31 07:27:44.125 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:27:44 np0005603609 nova_compute[221550]: 2026-01-31 07:27:44.127 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5344MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:27:44 np0005603609 nova_compute[221550]: 2026-01-31 07:27:44.127 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:27:44 np0005603609 nova_compute[221550]: 2026-01-31 07:27:44.128 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:27:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:44.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:27:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:45.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:27:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:27:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:46.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:47 np0005603609 nova_compute[221550]: 2026-01-31 07:27:47.182 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:27:47 np0005603609 nova_compute[221550]: 2026-01-31 07:27:47.182 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:27:47 np0005603609 nova_compute[221550]: 2026-01-31 07:27:47.206 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:27:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:27:47 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2323152489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:27:47 np0005603609 nova_compute[221550]: 2026-01-31 07:27:47.614 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:27:47 np0005603609 nova_compute[221550]: 2026-01-31 07:27:47.621 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:27:47 np0005603609 nova_compute[221550]: 2026-01-31 07:27:47.670 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:27:47 np0005603609 nova_compute[221550]: 2026-01-31 07:27:47.673 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:27:47 np0005603609 nova_compute[221550]: 2026-01-31 07:27:47.674 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:27:47 np0005603609 nova_compute[221550]: 2026-01-31 07:27:47.829 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:27:47 np0005603609 nova_compute[221550]: 2026-01-31 07:27:47.830 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:27:47 np0005603609 nova_compute[221550]: 2026-01-31 07:27:47.830 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:27:47 np0005603609 nova_compute[221550]: 2026-01-31 07:27:47.830 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:27:47 np0005603609 nova_compute[221550]: 2026-01-31 07:27:47.830 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:27:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:47.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:48.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:49.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:27:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:50.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:51.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:27:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:52.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:27:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:53.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:54.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:55 np0005603609 podman[222665]: 2026-01-31 07:27:55.170427454 +0000 UTC m=+0.053204241 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 31 02:27:55 np0005603609 podman[222664]: 2026-01-31 07:27:55.218780348 +0000 UTC m=+0.102039287 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 02:27:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:27:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:55.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:56.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 02:27:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:58.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 02:27:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:27:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:58.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:00.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:28:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:00.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:02.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:02.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:04.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:04.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:28:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:28:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:06.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:28:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:28:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:06.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:28:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:28:07.460 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:28:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:28:07.461 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:28:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:28:07.461 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:28:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:08.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:28:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:08.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:28:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:10.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:28:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000046s ======
Jan 31 02:28:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:10.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000046s
Jan 31 02:28:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:12.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:12.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:14.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:28:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:14.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:28:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:28:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:16.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:16.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:18.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:18.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:20.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:28:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:28:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:20.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:28:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:22.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:22.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:24.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:28:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:24.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:28:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:28:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:28:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:26.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:28:26 np0005603609 podman[222709]: 2026-01-31 07:28:26.235727012 +0000 UTC m=+0.116950520 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 02:28:26 np0005603609 podman[222708]: 2026-01-31 07:28:26.241734574 +0000 UTC m=+0.130570971 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, 
container_name=ovn_controller, org.label-schema.vendor=CentOS)
Jan 31 02:28:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:28:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:26.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:28:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:28.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:28.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:30.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:28:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:30.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:28:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:32.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:28:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:32.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:34.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:34.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:28:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:36.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:28:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:36.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:28:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:28:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:38.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:28:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:28:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:38.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:28:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:28:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:40.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:28:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:28:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:40.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:28:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:42.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:28:42 np0005603609 nova_compute[221550]: 2026-01-31 07:28:42.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:28:42 np0005603609 nova_compute[221550]: 2026-01-31 07:28:42.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:28:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:42.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:43 np0005603609 nova_compute[221550]: 2026-01-31 07:28:43.136 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:28:43 np0005603609 nova_compute[221550]: 2026-01-31 07:28:43.136 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:28:43 np0005603609 nova_compute[221550]: 2026-01-31 07:28:43.136 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:28:43 np0005603609 nova_compute[221550]: 2026-01-31 07:28:43.136 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 02:28:43 np0005603609 nova_compute[221550]: 2026-01-31 07:28:43.136 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:28:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:28:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2522133190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:28:43 np0005603609 nova_compute[221550]: 2026-01-31 07:28:43.548 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:28:43 np0005603609 nova_compute[221550]: 2026-01-31 07:28:43.658 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 02:28:43 np0005603609 nova_compute[221550]: 2026-01-31 07:28:43.659 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5354MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 02:28:43 np0005603609 nova_compute[221550]: 2026-01-31 07:28:43.660 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:28:43 np0005603609 nova_compute[221550]: 2026-01-31 07:28:43.660 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:28:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:44.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:44 np0005603609 nova_compute[221550]: 2026-01-31 07:28:44.133 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 02:28:44 np0005603609 nova_compute[221550]: 2026-01-31 07:28:44.134 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 02:28:44 np0005603609 nova_compute[221550]: 2026-01-31 07:28:44.148 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:28:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:28:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/840305027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:28:44 np0005603609 nova_compute[221550]: 2026-01-31 07:28:44.623 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:28:44 np0005603609 nova_compute[221550]: 2026-01-31 07:28:44.630 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:28:44 np0005603609 nova_compute[221550]: 2026-01-31 07:28:44.728 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:28:44 np0005603609 nova_compute[221550]: 2026-01-31 07:28:44.731 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 02:28:44 np0005603609 nova_compute[221550]: 2026-01-31 07:28:44.732 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:28:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:28:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3089092945' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:28:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:28:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3089092945' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:28:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:44.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:45 np0005603609 nova_compute[221550]: 2026-01-31 07:28:45.727 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:28:45 np0005603609 nova_compute[221550]: 2026-01-31 07:28:45.727 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:28:45 np0005603609 nova_compute[221550]: 2026-01-31 07:28:45.728 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 02:28:45 np0005603609 nova_compute[221550]: 2026-01-31 07:28:45.728 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 02:28:45 np0005603609 nova_compute[221550]: 2026-01-31 07:28:45.796 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 02:28:45 np0005603609 nova_compute[221550]: 2026-01-31 07:28:45.796 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:28:45 np0005603609 nova_compute[221550]: 2026-01-31 07:28:45.797 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:28:45 np0005603609 nova_compute[221550]: 2026-01-31 07:28:45.797 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:28:45 np0005603609 nova_compute[221550]: 2026-01-31 07:28:45.797 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:28:45 np0005603609 nova_compute[221550]: 2026-01-31 07:28:45.798 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:28:45 np0005603609 nova_compute[221550]: 2026-01-31 07:28:45.798 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 02:28:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:28:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:46.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:28:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:46.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:28:47 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:28:47 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:28:47 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:28:47 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:28:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:48.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:48.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:50.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:28:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:50.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:52.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:52.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:54.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:54 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:28:54 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:28:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:54.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:28:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:56.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:56.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:57 np0005603609 podman[222976]: 2026-01-31 07:28:57.164624244 +0000 UTC m=+0.050782203 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:28:57 np0005603609 podman[222975]: 2026-01-31 07:28:57.197791819 +0000 UTC m=+0.081892089 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 02:28:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:58.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:28:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:58.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:29:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:00.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:29:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:29:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:00.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:02.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:02.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:29:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:04.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:29:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:29:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:04.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:29:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:29:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:06.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:29:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:07.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:29:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:29:07.461 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:29:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:29:07.462 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:29:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:29:07.462 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:29:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:08.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:09.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:10.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:29:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:11.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:12.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:13.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:14.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:15.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:29:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:29:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:16.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:29:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:29:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:17.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:29:18.048031) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558048110, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2331, "num_deletes": 251, "total_data_size": 5953355, "memory_usage": 6018384, "flush_reason": "Manual Compaction"}
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558093235, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 3902909, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17803, "largest_seqno": 20129, "table_properties": {"data_size": 3893310, "index_size": 6093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19107, "raw_average_key_size": 20, "raw_value_size": 3874271, "raw_average_value_size": 4073, "num_data_blocks": 272, "num_entries": 951, "num_filter_entries": 951, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844333, "oldest_key_time": 1769844333, "file_creation_time": 1769844558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 45242 microseconds, and 6031 cpu microseconds.
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:29:18.093290) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 3902909 bytes OK
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:29:18.093312) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:29:18.098505) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:29:18.098523) EVENT_LOG_v1 {"time_micros": 1769844558098517, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:29:18.098546) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 5942943, prev total WAL file size 5942943, number of live WAL files 2.
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:29:18.099387) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(3811KB)], [36(7648KB)]
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558099474, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 11734818, "oldest_snapshot_seqno": -1}
Jan 31 02:29:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:18.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4496 keys, 9657243 bytes, temperature: kUnknown
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558234192, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 9657243, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9624761, "index_size": 20148, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 112418, "raw_average_key_size": 25, "raw_value_size": 9540877, "raw_average_value_size": 2122, "num_data_blocks": 836, "num_entries": 4496, "num_filter_entries": 4496, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769844558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:29:18.234510) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9657243 bytes
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:29:18.239306) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 87.0 rd, 71.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.5 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 5015, records dropped: 519 output_compression: NoCompression
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:29:18.239343) EVENT_LOG_v1 {"time_micros": 1769844558239328, "job": 20, "event": "compaction_finished", "compaction_time_micros": 134806, "compaction_time_cpu_micros": 29564, "output_level": 6, "num_output_files": 1, "total_output_size": 9657243, "num_input_records": 5015, "num_output_records": 4496, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558240180, "job": 20, "event": "table_file_deletion", "file_number": 38}
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558241446, "job": 20, "event": "table_file_deletion", "file_number": 36}
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:29:18.099239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:29:18.241511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:29:18.241519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:29:18.241520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:29:18.241521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:29:18 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:29:18.241523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:29:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:19.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:29:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:20.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:29:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:29:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:21.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:29:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:22.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:29:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:23.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - - [31/Jan/2026:07:29:23.131 +0000] "GET /swift/info HTTP/1.1" 200 509 - "python-urllib3/1.26.5" - latency=0.000000000s
Jan 31 02:29:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:24.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:25.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:29:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:26.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:27.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Jan 31 02:29:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:28.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:28 np0005603609 podman[223018]: 2026-01-31 07:29:28.198836379 +0000 UTC m=+0.082276485 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:29:28 np0005603609 podman[223019]: 2026-01-31 07:29:28.207218524 +0000 UTC m=+0.084890949 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 02:29:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:29.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Jan 31 02:29:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:29:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:30.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:29:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:29:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:31.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:29:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:32.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:29:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:33.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Jan 31 02:29:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:34.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:35.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:36.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:37.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:38.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Jan 31 02:29:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:39.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Jan 31 02:29:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:40.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:41.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:42.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:42 np0005603609 nova_compute[221550]: 2026-01-31 07:29:42.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:29:42 np0005603609 nova_compute[221550]: 2026-01-31 07:29:42.717 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:29:42 np0005603609 nova_compute[221550]: 2026-01-31 07:29:42.717 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:29:42 np0005603609 nova_compute[221550]: 2026-01-31 07:29:42.718 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:29:42 np0005603609 nova_compute[221550]: 2026-01-31 07:29:42.718 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:29:42 np0005603609 nova_compute[221550]: 2026-01-31 07:29:42.718 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:29:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:43.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:29:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3500024337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:29:43 np0005603609 nova_compute[221550]: 2026-01-31 07:29:43.171 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:29:43 np0005603609 nova_compute[221550]: 2026-01-31 07:29:43.291 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:29:43 np0005603609 nova_compute[221550]: 2026-01-31 07:29:43.292 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5373MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:29:43 np0005603609 nova_compute[221550]: 2026-01-31 07:29:43.292 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:29:43 np0005603609 nova_compute[221550]: 2026-01-31 07:29:43.292 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:29:43 np0005603609 nova_compute[221550]: 2026-01-31 07:29:43.477 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:29:43 np0005603609 nova_compute[221550]: 2026-01-31 07:29:43.478 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:29:43 np0005603609 nova_compute[221550]: 2026-01-31 07:29:43.525 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:29:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:29:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/451814909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:29:43 np0005603609 nova_compute[221550]: 2026-01-31 07:29:43.927 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:29:43 np0005603609 nova_compute[221550]: 2026-01-31 07:29:43.932 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:29:43 np0005603609 nova_compute[221550]: 2026-01-31 07:29:43.973 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:29:43 np0005603609 nova_compute[221550]: 2026-01-31 07:29:43.977 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:29:43 np0005603609 nova_compute[221550]: 2026-01-31 07:29:43.978 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:29:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:29:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:44.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:29:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:45.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:45 np0005603609 nova_compute[221550]: 2026-01-31 07:29:45.973 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:29:45 np0005603609 nova_compute[221550]: 2026-01-31 07:29:45.974 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:29:45 np0005603609 nova_compute[221550]: 2026-01-31 07:29:45.997 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:29:45 np0005603609 nova_compute[221550]: 2026-01-31 07:29:45.998 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:29:45 np0005603609 nova_compute[221550]: 2026-01-31 07:29:45.998 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:29:46 np0005603609 nova_compute[221550]: 2026-01-31 07:29:46.011 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:29:46 np0005603609 nova_compute[221550]: 2026-01-31 07:29:46.011 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:29:46 np0005603609 nova_compute[221550]: 2026-01-31 07:29:46.011 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:29:46 np0005603609 nova_compute[221550]: 2026-01-31 07:29:46.012 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:29:46 np0005603609 nova_compute[221550]: 2026-01-31 07:29:46.012 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:29:46 np0005603609 nova_compute[221550]: 2026-01-31 07:29:46.012 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:29:46 np0005603609 nova_compute[221550]: 2026-01-31 07:29:46.013 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:29:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:46.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:47.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:47 np0005603609 nova_compute[221550]: 2026-01-31 07:29:47.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:29:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:29:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:48.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:29:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:49.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:50.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:51.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:52.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:53.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:54.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:55.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:29:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:29:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:29:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:29:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:56.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:29:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:57.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:29:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:58.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:29:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:29:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:29:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:59.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:29:59 np0005603609 podman[223240]: 2026-01-31 07:29:59.190630691 +0000 UTC m=+0.075308965 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:29:59 np0005603609 podman[223241]: 2026-01-31 07:29:59.190778634 +0000 UTC m=+0.067423851 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 02:30:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:00.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:00 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 02:30:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:01.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:02.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:03.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:04.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:30:04.745 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:30:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:30:04.745 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:30:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:30:04.746 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:30:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:30:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:05.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:30:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:06.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:06 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:30:06 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:30:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:07.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:30:07.462 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:30:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:30:07.462 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:30:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:30:07.463 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:30:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:08.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:09.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:10.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:11.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:30:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:12.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:30:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:30:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:13.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:30:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:14.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:15.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:16.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:30:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:17.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:30:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:18.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:19.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:20.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:30:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:21.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:30:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:22.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:23.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:24.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:25.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:26.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:30:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:27.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:30:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:28.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:29.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:30 np0005603609 podman[223338]: 2026-01-31 07:30:30.154634579 +0000 UTC m=+0.040404470 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 02:30:30 np0005603609 podman[223337]: 2026-01-31 07:30:30.238668827 +0000 UTC m=+0.121286191 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 02:30:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:30.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:30:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:31.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:30:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:32.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:30:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:33.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:30:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:34.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:35.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:36.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:37.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Jan 31 02:30:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:38.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Jan 31 02:30:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:39.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:40.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:41.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:41 np0005603609 nova_compute[221550]: 2026-01-31 07:30:41.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:30:41 np0005603609 nova_compute[221550]: 2026-01-31 07:30:41.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 02:30:41 np0005603609 nova_compute[221550]: 2026-01-31 07:30:41.682 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 02:30:41 np0005603609 nova_compute[221550]: 2026-01-31 07:30:41.684 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:30:41 np0005603609 nova_compute[221550]: 2026-01-31 07:30:41.684 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 02:30:41 np0005603609 nova_compute[221550]: 2026-01-31 07:30:41.703 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:30:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:42.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:43.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:44.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:44 np0005603609 nova_compute[221550]: 2026-01-31 07:30:44.721 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:30:44 np0005603609 nova_compute[221550]: 2026-01-31 07:30:44.723 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 02:30:44 np0005603609 nova_compute[221550]: 2026-01-31 07:30:44.723 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 02:30:44 np0005603609 nova_compute[221550]: 2026-01-31 07:30:44.754 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 02:30:44 np0005603609 nova_compute[221550]: 2026-01-31 07:30:44.755 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:30:44 np0005603609 nova_compute[221550]: 2026-01-31 07:30:44.755 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:30:44 np0005603609 nova_compute[221550]: 2026-01-31 07:30:44.756 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:30:44 np0005603609 nova_compute[221550]: 2026-01-31 07:30:44.800 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:30:44 np0005603609 nova_compute[221550]: 2026-01-31 07:30:44.800 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:30:44 np0005603609 nova_compute[221550]: 2026-01-31 07:30:44.801 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:30:44 np0005603609 nova_compute[221550]: 2026-01-31 07:30:44.801 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 02:30:44 np0005603609 nova_compute[221550]: 2026-01-31 07:30:44.801 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:30:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:30:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/909324454' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:30:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:30:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/909324454' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:30:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:45.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:30:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1515124371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:30:45 np0005603609 nova_compute[221550]: 2026-01-31 07:30:45.287 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:30:45 np0005603609 nova_compute[221550]: 2026-01-31 07:30:45.448 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 02:30:45 np0005603609 nova_compute[221550]: 2026-01-31 07:30:45.449 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5332MB free_disk=20.968311309814453GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 02:30:45 np0005603609 nova_compute[221550]: 2026-01-31 07:30:45.449 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:30:45 np0005603609 nova_compute[221550]: 2026-01-31 07:30:45.449 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:30:45 np0005603609 nova_compute[221550]: 2026-01-31 07:30:45.701 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 02:30:45 np0005603609 nova_compute[221550]: 2026-01-31 07:30:45.702 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 02:30:45 np0005603609 nova_compute[221550]: 2026-01-31 07:30:45.742 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:30:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:30:46 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3350655955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:30:46 np0005603609 nova_compute[221550]: 2026-01-31 07:30:46.202 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:30:46 np0005603609 nova_compute[221550]: 2026-01-31 07:30:46.207 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:30:46 np0005603609 nova_compute[221550]: 2026-01-31 07:30:46.281 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:30:46 np0005603609 nova_compute[221550]: 2026-01-31 07:30:46.283 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 02:30:46 np0005603609 nova_compute[221550]: 2026-01-31 07:30:46.283 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:30:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:46.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:30:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:47.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:30:47 np0005603609 nova_compute[221550]: 2026-01-31 07:30:47.186 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:30:47 np0005603609 nova_compute[221550]: 2026-01-31 07:30:47.187 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:30:47 np0005603609 nova_compute[221550]: 2026-01-31 07:30:47.187 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:30:47 np0005603609 nova_compute[221550]: 2026-01-31 07:30:47.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:30:47 np0005603609 nova_compute[221550]: 2026-01-31 07:30:47.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 02:30:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Jan 31 02:30:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:30:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:48.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:30:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Jan 31 02:30:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:49.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:49 np0005603609 nova_compute[221550]: 2026-01-31 07:30:49.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:30:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:50.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:51.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:52.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:52 np0005603609 nova_compute[221550]: 2026-01-31 07:30:52.952 221554 DEBUG oslo_concurrency.lockutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Acquiring lock "57276aab-f603-4d25-880a-342aeedcbdc2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:30:52 np0005603609 nova_compute[221550]: 2026-01-31 07:30:52.953 221554 DEBUG oslo_concurrency.lockutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "57276aab-f603-4d25-880a-342aeedcbdc2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:30:52 np0005603609 nova_compute[221550]: 2026-01-31 07:30:52.977 221554 DEBUG nova.compute.manager [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 02:30:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:30:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:53.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:30:53 np0005603609 nova_compute[221550]: 2026-01-31 07:30:53.179 221554 DEBUG oslo_concurrency.lockutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:30:53 np0005603609 nova_compute[221550]: 2026-01-31 07:30:53.180 221554 DEBUG oslo_concurrency.lockutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:30:53 np0005603609 nova_compute[221550]: 2026-01-31 07:30:53.186 221554 DEBUG nova.virt.hardware [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 02:30:53 np0005603609 nova_compute[221550]: 2026-01-31 07:30:53.186 221554 INFO nova.compute.claims [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Claim successful on node compute-1.ctlplane.example.com
Jan 31 02:30:53 np0005603609 nova_compute[221550]: 2026-01-31 07:30:53.436 221554 DEBUG nova.scheduler.client.report [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 02:30:53 np0005603609 nova_compute[221550]: 2026-01-31 07:30:53.587 221554 DEBUG nova.scheduler.client.report [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 02:30:53 np0005603609 nova_compute[221550]: 2026-01-31 07:30:53.588 221554 DEBUG nova.compute.provider_tree [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 02:30:53 np0005603609 nova_compute[221550]: 2026-01-31 07:30:53.606 221554 DEBUG nova.scheduler.client.report [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 02:30:53 np0005603609 nova_compute[221550]: 2026-01-31 07:30:53.637 221554 DEBUG nova.scheduler.client.report [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 02:30:53 np0005603609 nova_compute[221550]: 2026-01-31 07:30:53.755 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:30:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:30:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2419106158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.172 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.177 221554 DEBUG nova.compute.provider_tree [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.214 221554 DEBUG nova.scheduler.client.report [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.251 221554 DEBUG oslo_concurrency.lockutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.252 221554 DEBUG nova.compute.manager [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 02:30:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:30:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:54.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.355 221554 DEBUG nova.compute.manager [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.356 221554 DEBUG nova.network.neutron [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.431 221554 INFO nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.475 221554 DEBUG nova.compute.manager [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.606 221554 DEBUG nova.compute.manager [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.608 221554 DEBUG nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.608 221554 INFO nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Creating image(s)
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.652 221554 DEBUG nova.storage.rbd_utils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] rbd image 57276aab-f603-4d25-880a-342aeedcbdc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.694 221554 DEBUG nova.storage.rbd_utils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] rbd image 57276aab-f603-4d25-880a-342aeedcbdc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.738 221554 DEBUG nova.storage.rbd_utils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] rbd image 57276aab-f603-4d25-880a-342aeedcbdc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.742 221554 DEBUG oslo_concurrency.lockutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:30:54 np0005603609 nova_compute[221550]: 2026-01-31 07:30:54.743 221554 DEBUG oslo_concurrency.lockutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:30:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:30:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:55.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:30:55 np0005603609 nova_compute[221550]: 2026-01-31 07:30:55.212 221554 DEBUG nova.virt.libvirt.imagebackend [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/7c23949f-bba8-4466-bb79-caf568852d38/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/7c23949f-bba8-4466-bb79-caf568852d38/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 02:30:55 np0005603609 nova_compute[221550]: 2026-01-31 07:30:55.290 221554 DEBUG nova.network.neutron [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 31 02:30:55 np0005603609 nova_compute[221550]: 2026-01-31 07:30:55.291 221554 DEBUG nova.compute.manager [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 02:30:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:56.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:57.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:57 np0005603609 nova_compute[221550]: 2026-01-31 07:30:57.595 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:30:57 np0005603609 nova_compute[221550]: 2026-01-31 07:30:57.652 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.part --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:30:57 np0005603609 nova_compute[221550]: 2026-01-31 07:30:57.654 221554 DEBUG nova.virt.images [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] 7c23949f-bba8-4466-bb79-caf568852d38 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 31 02:30:57 np0005603609 nova_compute[221550]: 2026-01-31 07:30:57.657 221554 DEBUG nova.privsep.utils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 31 02:30:57 np0005603609 nova_compute[221550]: 2026-01-31 07:30:57.658 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.part /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:30:58 np0005603609 nova_compute[221550]: 2026-01-31 07:30:58.014 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.part /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.converted" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:30:58 np0005603609 nova_compute[221550]: 2026-01-31 07:30:58.017 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:30:58 np0005603609 nova_compute[221550]: 2026-01-31 07:30:58.077 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.converted --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:30:58 np0005603609 nova_compute[221550]: 2026-01-31 07:30:58.078 221554 DEBUG oslo_concurrency.lockutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:30:58 np0005603609 nova_compute[221550]: 2026-01-31 07:30:58.113 221554 DEBUG nova.storage.rbd_utils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] rbd image 57276aab-f603-4d25-880a-342aeedcbdc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:30:58 np0005603609 nova_compute[221550]: 2026-01-31 07:30:58.118 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 57276aab-f603-4d25-880a-342aeedcbdc2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:30:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:58.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.083 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 57276aab-f603-4d25-880a-342aeedcbdc2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.965s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.145 221554 DEBUG nova.storage.rbd_utils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] resizing rbd image 57276aab-f603-4d25-880a-342aeedcbdc2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:30:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:30:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:59.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.426 221554 DEBUG nova.objects.instance [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lazy-loading 'migration_context' on Instance uuid 57276aab-f603-4d25-880a-342aeedcbdc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.474 221554 DEBUG nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.475 221554 DEBUG nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Ensure instance console log exists: /var/lib/nova/instances/57276aab-f603-4d25-880a-342aeedcbdc2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.475 221554 DEBUG oslo_concurrency.lockutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.475 221554 DEBUG oslo_concurrency.lockutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.476 221554 DEBUG oslo_concurrency.lockutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.478 221554 DEBUG nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.485 221554 WARNING nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.494 221554 DEBUG nova.virt.libvirt.host [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.495 221554 DEBUG nova.virt.libvirt.host [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.500 221554 DEBUG nova.virt.libvirt.host [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.500 221554 DEBUG nova.virt.libvirt.host [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.502 221554 DEBUG nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.503 221554 DEBUG nova.virt.hardware [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.504 221554 DEBUG nova.virt.hardware [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.504 221554 DEBUG nova.virt.hardware [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.505 221554 DEBUG nova.virt.hardware [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.505 221554 DEBUG nova.virt.hardware [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.506 221554 DEBUG nova.virt.hardware [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.506 221554 DEBUG nova.virt.hardware [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.506 221554 DEBUG nova.virt.hardware [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.507 221554 DEBUG nova.virt.hardware [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.507 221554 DEBUG nova.virt.hardware [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.508 221554 DEBUG nova.virt.hardware [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.513 221554 DEBUG nova.privsep.utils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.514 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:30:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:30:59 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1000464503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:30:59 np0005603609 nova_compute[221550]: 2026-01-31 07:30:59.971 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:31:00 np0005603609 nova_compute[221550]: 2026-01-31 07:31:00.002 221554 DEBUG nova.storage.rbd_utils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] rbd image 57276aab-f603-4d25-880a-342aeedcbdc2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:31:00 np0005603609 nova_compute[221550]: 2026-01-31 07:31:00.006 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:31:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:31:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:00.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:31:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:31:00 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/86164115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:31:00 np0005603609 nova_compute[221550]: 2026-01-31 07:31:00.429 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:31:00 np0005603609 nova_compute[221550]: 2026-01-31 07:31:00.431 221554 DEBUG nova.objects.instance [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lazy-loading 'pci_devices' on Instance uuid 57276aab-f603-4d25-880a-342aeedcbdc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:31:00 np0005603609 nova_compute[221550]: 2026-01-31 07:31:00.446 221554 DEBUG nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  <uuid>57276aab-f603-4d25-880a-342aeedcbdc2</uuid>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  <name>instance-00000002</name>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-1630889839</nova:name>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:30:59</nova:creationTime>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:31:00 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:        <nova:user uuid="28efb7ff08054b4b93c9cc461b8e8862">tempest-DeleteServersAdminTestJSON-1558339450-project-member</nova:user>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:        <nova:project uuid="7931d3e634664686a06dd20aa52399ae">tempest-DeleteServersAdminTestJSON-1558339450</nova:project>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <entry name="serial">57276aab-f603-4d25-880a-342aeedcbdc2</entry>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <entry name="uuid">57276aab-f603-4d25-880a-342aeedcbdc2</entry>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/57276aab-f603-4d25-880a-342aeedcbdc2_disk">
Jan 31 02:31:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:31:00 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/57276aab-f603-4d25-880a-342aeedcbdc2_disk.config">
Jan 31 02:31:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:31:00 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/57276aab-f603-4d25-880a-342aeedcbdc2/console.log" append="off"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:31:00 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:31:00 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:31:00 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:31:00 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:31:00 np0005603609 nova_compute[221550]: 2026-01-31 07:31:00.492 221554 DEBUG nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:31:00 np0005603609 nova_compute[221550]: 2026-01-31 07:31:00.493 221554 DEBUG nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:31:00 np0005603609 nova_compute[221550]: 2026-01-31 07:31:00.493 221554 INFO nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Using config drive#033[00m
Jan 31 02:31:00 np0005603609 nova_compute[221550]: 2026-01-31 07:31:00.543 221554 DEBUG nova.storage.rbd_utils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] rbd image 57276aab-f603-4d25-880a-342aeedcbdc2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:31:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:31:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:01.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:31:01 np0005603609 podman[223709]: 2026-01-31 07:31:01.179735934 +0000 UTC m=+0.062268795 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 02:31:01 np0005603609 nova_compute[221550]: 2026-01-31 07:31:01.187 221554 INFO nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Creating config drive at /var/lib/nova/instances/57276aab-f603-4d25-880a-342aeedcbdc2/disk.config#033[00m
Jan 31 02:31:01 np0005603609 nova_compute[221550]: 2026-01-31 07:31:01.190 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/57276aab-f603-4d25-880a-342aeedcbdc2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpc_6z9njs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:31:01 np0005603609 podman[223708]: 2026-01-31 07:31:01.232836264 +0000 UTC m=+0.114933995 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 31 02:31:01 np0005603609 nova_compute[221550]: 2026-01-31 07:31:01.323 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/57276aab-f603-4d25-880a-342aeedcbdc2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpc_6z9njs" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
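The two log records above show Nova building the config drive by shelling out to mkisofs. As a hedged sketch (the helper name `mkisofs_argv` and its defaults are illustrative, not Nova's actual code), the argv logged above can be assembled like this:

```python
# Sketch of the mkisofs command line seen in the log.
# mkisofs_argv is a hypothetical helper, not Nova's real implementation.

def mkisofs_argv(output_path: str, source_dir: str,
                 publisher: str = "OpenStack Compute",
                 label: str = "config-2") -> list[str]:
    """Return an argv matching the config-drive invocation in the log:
    a Joliet + Rock Ridge ISO labelled config-2, built from a staging dir."""
    return [
        "/usr/bin/mkisofs",
        "-o", output_path,          # resulting disk.config image
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", publisher,
        "-quiet", "-J", "-r",       # Joliet and Rock Ridge extensions
        "-V", label,                # volume label the guest looks for
        source_dir,                 # staged metadata directory (tmpdir in the log)
    ]

argv = mkisofs_argv(
    "/var/lib/nova/instances/57276aab-f603-4d25-880a-342aeedcbdc2/disk.config",
    "/tmp/tmpc_6z9njs",
)
print(" ".join(argv))
```

The `-V config-2` label is what identifies the ISO as a config drive to the guest; the rest of the flags match the subprocess line logged by oslo_concurrency.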
Jan 31 02:31:01 np0005603609 nova_compute[221550]: 2026-01-31 07:31:01.352 221554 DEBUG nova.storage.rbd_utils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] rbd image 57276aab-f603-4d25-880a-342aeedcbdc2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:31:01 np0005603609 nova_compute[221550]: 2026-01-31 07:31:01.356 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/57276aab-f603-4d25-880a-342aeedcbdc2/disk.config 57276aab-f603-4d25-880a-342aeedcbdc2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:31:02 np0005603609 nova_compute[221550]: 2026-01-31 07:31:02.080 221554 DEBUG oslo_concurrency.processutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/57276aab-f603-4d25-880a-342aeedcbdc2/disk.config 57276aab-f603-4d25-880a-342aeedcbdc2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:31:02 np0005603609 nova_compute[221550]: 2026-01-31 07:31:02.082 221554 INFO nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Deleting local config drive /var/lib/nova/instances/57276aab-f603-4d25-880a-342aeedcbdc2/disk.config because it was imported into RBD.#033[00m
Jan 31 02:31:02 np0005603609 systemd[1]: Starting libvirt secret daemon...
Jan 31 02:31:02 np0005603609 systemd[1]: Started libvirt secret daemon.
Jan 31 02:31:02 np0005603609 systemd-machined[190912]: New machine qemu-1-instance-00000002.
Jan 31 02:31:02 np0005603609 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Jan 31 02:31:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:02.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:03.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
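The recurring radosgw beast access lines follow a fixed shape (pointer, client, user, timestamp, request, status, bytes, latency). A hedged regex sketch for extracting the interesting fields — the pattern is inferred from the lines in this log only, not from the radosgw source:

```python
import re

# Field layout inferred from the beast access lines in this log;
# not an official radosgw log-format specification.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line: str) -> dict:
    """Return the named fields of a beast access line, or {} on no match."""
    m = BEAST_RE.search(line)
    return m.groupdict() if m else {}

rec = parse_beast(
    'beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous '
    '[31/Jan/2026:07:31:03.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
    'latency=0.000000000s'
)
print(rec["client"], rec["status"], rec["latency"])
```

The HEAD / probes from 192.168.122.100 and .102 every second or so look like load-balancer health checks against the RGW frontend.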
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.315 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844663.3136294, 57276aab-f603-4d25-880a-342aeedcbdc2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.317 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.321 221554 DEBUG nova.compute.manager [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.322 221554 DEBUG nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.326 221554 INFO nova.virt.libvirt.driver [-] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Instance spawned successfully.#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.327 221554 DEBUG nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.359 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.369 221554 DEBUG nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.369 221554 DEBUG nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.370 221554 DEBUG nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.370 221554 DEBUG nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.371 221554 DEBUG nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.372 221554 DEBUG nova.virt.libvirt.driver [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.376 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
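In the sync message above, "current DB power_state: 0, VM power_state: 1" uses Nova's numeric power-state constants. A small lookup table for reading these lines — the values match `nova.compute.power_state` to the best of my knowledge, so treat this as a convenience copy, not the canonical source:

```python
# Convenience copy of Nova's power-state constants (nova.compute.power_state);
# verify against your Nova release before relying on the exact values.
POWER_STATE = {
    0: "NOSTATE",    # DB default before the guest is first observed
    1: "RUNNING",    # hypervisor reports the domain as running
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}

def describe_sync(db_state: int, vm_state: int) -> str:
    """Render a sync_power_state pair the way a human would read it."""
    return (f"DB says {POWER_STATE.get(db_state, '?')}, "
            f"hypervisor says {POWER_STATE.get(vm_state, '?')}")

print(describe_sync(0, 1))  # → DB says NOSTATE, hypervisor says RUNNING
```

So the message above simply records that the database still holds the pre-boot default (NOSTATE) while libvirt already sees the guest running; with task_state still `spawning`, the sync is skipped, as the next INFO line shows.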
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.425 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.425 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844663.3139107, 57276aab-f603-4d25-880a-342aeedcbdc2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.425 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] VM Started (Lifecycle Event)#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.466 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.469 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.483 221554 INFO nova.compute.manager [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Took 8.88 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.484 221554 DEBUG nova.compute.manager [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.523 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.576 221554 INFO nova.compute.manager [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Took 10.43 seconds to build instance.#033[00m
Jan 31 02:31:03 np0005603609 nova_compute[221550]: 2026-01-31 07:31:03.597 221554 DEBUG oslo_concurrency.lockutils [None req-c158bff3-dc62-48af-944b-9c531a50d741 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "57276aab-f603-4d25-880a-342aeedcbdc2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
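The durations Nova reports ("Took 8.88 seconds to spawn", "held 10.644s") can be recomputed from the oslo.log timestamps themselves. A hedged sketch, with the timestamp format inferred from these lines; the example pair is the mkisofs start/return above, which the log itself times at 0.133s:

```python
from datetime import datetime

# Timestamp shape of the oslo.log lines in this journal, e.g.
# "2026-01-31 07:31:01.190" (inferred from the log, not a formal spec).
FMT = "%Y-%m-%d %H:%M:%S.%f"

def elapsed(start: str, end: str) -> float:
    """Seconds between two oslo.log timestamps."""
    return (datetime.strptime(end, FMT)
            - datetime.strptime(start, FMT)).total_seconds()

# mkisofs launch vs. "returned: 0" from the records earlier in this log:
print(elapsed("2026-01-31 07:31:01.190", "2026-01-31 07:31:01.323"))  # → 0.133
```

This cross-checks the "in 0.133s" figure processutils prints, and the same arithmetic recovers the 10.43 s build time from the lock-acquire and lock-release records.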
Jan 31 02:31:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:04.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:05.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:31:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:06.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:31:06 np0005603609 podman[224039]: 2026-01-31 07:31:06.723919837 +0000 UTC m=+0.098129513 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 31 02:31:06 np0005603609 podman[224039]: 2026-01-31 07:31:06.817241941 +0000 UTC m=+0.191451617 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 02:31:06 np0005603609 nova_compute[221550]: 2026-01-31 07:31:06.991 221554 DEBUG oslo_concurrency.lockutils [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Acquiring lock "57276aab-f603-4d25-880a-342aeedcbdc2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:31:06 np0005603609 nova_compute[221550]: 2026-01-31 07:31:06.993 221554 DEBUG oslo_concurrency.lockutils [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "57276aab-f603-4d25-880a-342aeedcbdc2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:31:06 np0005603609 nova_compute[221550]: 2026-01-31 07:31:06.993 221554 DEBUG oslo_concurrency.lockutils [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Acquiring lock "57276aab-f603-4d25-880a-342aeedcbdc2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:31:06 np0005603609 nova_compute[221550]: 2026-01-31 07:31:06.993 221554 DEBUG oslo_concurrency.lockutils [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "57276aab-f603-4d25-880a-342aeedcbdc2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:31:06 np0005603609 nova_compute[221550]: 2026-01-31 07:31:06.993 221554 DEBUG oslo_concurrency.lockutils [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "57276aab-f603-4d25-880a-342aeedcbdc2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:31:06 np0005603609 nova_compute[221550]: 2026-01-31 07:31:06.995 221554 INFO nova.compute.manager [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Terminating instance#033[00m
Jan 31 02:31:06 np0005603609 nova_compute[221550]: 2026-01-31 07:31:06.996 221554 DEBUG oslo_concurrency.lockutils [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Acquiring lock "refresh_cache-57276aab-f603-4d25-880a-342aeedcbdc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:31:06 np0005603609 nova_compute[221550]: 2026-01-31 07:31:06.997 221554 DEBUG oslo_concurrency.lockutils [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Acquired lock "refresh_cache-57276aab-f603-4d25-880a-342aeedcbdc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:31:06 np0005603609 nova_compute[221550]: 2026-01-31 07:31:06.997 221554 DEBUG nova.network.neutron [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:31:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:07.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:07 np0005603609 nova_compute[221550]: 2026-01-31 07:31:07.223 221554 DEBUG nova.network.neutron [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:31:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:07.463 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:31:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:07.463 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:31:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:07.463 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:31:07 np0005603609 nova_compute[221550]: 2026-01-31 07:31:07.542 221554 DEBUG nova.network.neutron [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:31:07 np0005603609 nova_compute[221550]: 2026-01-31 07:31:07.556 221554 DEBUG oslo_concurrency.lockutils [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Releasing lock "refresh_cache-57276aab-f603-4d25-880a-342aeedcbdc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:31:07 np0005603609 nova_compute[221550]: 2026-01-31 07:31:07.557 221554 DEBUG nova.compute.manager [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:31:07 np0005603609 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 31 02:31:07 np0005603609 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 4.797s CPU time.
Jan 31 02:31:07 np0005603609 systemd-machined[190912]: Machine qemu-1-instance-00000002 terminated.
Jan 31 02:31:07 np0005603609 nova_compute[221550]: 2026-01-31 07:31:07.779 221554 INFO nova.virt.libvirt.driver [-] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Instance destroyed successfully.#033[00m
Jan 31 02:31:07 np0005603609 nova_compute[221550]: 2026-01-31 07:31:07.779 221554 DEBUG nova.objects.instance [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lazy-loading 'resources' on Instance uuid 57276aab-f603-4d25-880a-342aeedcbdc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:31:08 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:31:08 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:31:08 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:31:08 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:31:08 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:31:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:08.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:08 np0005603609 nova_compute[221550]: 2026-01-31 07:31:08.458 221554 INFO nova.virt.libvirt.driver [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Deleting instance files /var/lib/nova/instances/57276aab-f603-4d25-880a-342aeedcbdc2_del#033[00m
Jan 31 02:31:08 np0005603609 nova_compute[221550]: 2026-01-31 07:31:08.459 221554 INFO nova.virt.libvirt.driver [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Deletion of /var/lib/nova/instances/57276aab-f603-4d25-880a-342aeedcbdc2_del complete#033[00m
Jan 31 02:31:08 np0005603609 nova_compute[221550]: 2026-01-31 07:31:08.555 221554 DEBUG nova.virt.libvirt.host [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 31 02:31:08 np0005603609 nova_compute[221550]: 2026-01-31 07:31:08.555 221554 INFO nova.virt.libvirt.host [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] UEFI support detected
Jan 31 02:31:08 np0005603609 nova_compute[221550]: 2026-01-31 07:31:08.558 221554 INFO nova.compute.manager [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Took 1.00 seconds to destroy the instance on the hypervisor.
Jan 31 02:31:08 np0005603609 nova_compute[221550]: 2026-01-31 07:31:08.558 221554 DEBUG oslo.service.loopingcall [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 02:31:08 np0005603609 nova_compute[221550]: 2026-01-31 07:31:08.558 221554 DEBUG nova.compute.manager [-] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 02:31:08 np0005603609 nova_compute[221550]: 2026-01-31 07:31:08.559 221554 DEBUG nova.network.neutron [-] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 02:31:08 np0005603609 nova_compute[221550]: 2026-01-31 07:31:08.757 221554 DEBUG nova.network.neutron [-] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 02:31:08 np0005603609 nova_compute[221550]: 2026-01-31 07:31:08.772 221554 DEBUG nova.network.neutron [-] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:31:08 np0005603609 nova_compute[221550]: 2026-01-31 07:31:08.788 221554 INFO nova.compute.manager [-] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Took 0.23 seconds to deallocate network for instance.
Jan 31 02:31:08 np0005603609 nova_compute[221550]: 2026-01-31 07:31:08.892 221554 DEBUG oslo_concurrency.lockutils [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:31:08 np0005603609 nova_compute[221550]: 2026-01-31 07:31:08.893 221554 DEBUG oslo_concurrency.lockutils [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:31:08 np0005603609 nova_compute[221550]: 2026-01-31 07:31:08.953 221554 DEBUG oslo_concurrency.processutils [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:31:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:31:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:09.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:31:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:31:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2466196534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:31:09 np0005603609 nova_compute[221550]: 2026-01-31 07:31:09.362 221554 DEBUG oslo_concurrency.processutils [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:31:09 np0005603609 nova_compute[221550]: 2026-01-31 07:31:09.366 221554 DEBUG nova.compute.provider_tree [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 02:31:09 np0005603609 nova_compute[221550]: 2026-01-31 07:31:09.414 221554 DEBUG nova.scheduler.client.report [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Updated inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with generation 7 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 31 02:31:09 np0005603609 nova_compute[221550]: 2026-01-31 07:31:09.414 221554 DEBUG nova.compute.provider_tree [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Updating resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 generation from 7 to 8 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 02:31:09 np0005603609 nova_compute[221550]: 2026-01-31 07:31:09.415 221554 DEBUG nova.compute.provider_tree [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 02:31:09 np0005603609 nova_compute[221550]: 2026-01-31 07:31:09.441 221554 DEBUG oslo_concurrency.lockutils [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:31:09 np0005603609 nova_compute[221550]: 2026-01-31 07:31:09.465 221554 INFO nova.scheduler.client.report [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Deleted allocations for instance 57276aab-f603-4d25-880a-342aeedcbdc2
Jan 31 02:31:09 np0005603609 nova_compute[221550]: 2026-01-31 07:31:09.526 221554 DEBUG oslo_concurrency.lockutils [None req-6af78baf-68b8-419e-b14a-894c22501015 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "57276aab-f603-4d25-880a-342aeedcbdc2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:31:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:10.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:10.732 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 02:31:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:10.732 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 02:31:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:11.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:31:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:12.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:31:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:13.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:14.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:15.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:31:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:31:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:31:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:16.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:31:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:16.734 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:31:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:17.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:18.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:19.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:20.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:21.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:22.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:22 np0005603609 nova_compute[221550]: 2026-01-31 07:31:22.780 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844667.7779412, 57276aab-f603-4d25-880a-342aeedcbdc2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:31:22 np0005603609 nova_compute[221550]: 2026-01-31 07:31:22.780 221554 INFO nova.compute.manager [-] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] VM Stopped (Lifecycle Event)
Jan 31 02:31:22 np0005603609 nova_compute[221550]: 2026-01-31 07:31:22.875 221554 DEBUG nova.compute.manager [None req-086ad535-2c8d-4769-bdb0-1791259356a9 - - - - - -] [instance: 57276aab-f603-4d25-880a-342aeedcbdc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:31:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:23.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:24.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:25.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:31:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:26.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:31:26 np0005603609 nova_compute[221550]: 2026-01-31 07:31:26.784 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "23c338db-50ed-434c-ac85-8190b9b5f194" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:31:26 np0005603609 nova_compute[221550]: 2026-01-31 07:31:26.784 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:31:26 np0005603609 nova_compute[221550]: 2026-01-31 07:31:26.818 221554 DEBUG nova.compute.manager [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 02:31:26 np0005603609 nova_compute[221550]: 2026-01-31 07:31:26.940 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:31:26 np0005603609 nova_compute[221550]: 2026-01-31 07:31:26.941 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:31:26 np0005603609 nova_compute[221550]: 2026-01-31 07:31:26.950 221554 DEBUG nova.virt.hardware [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 02:31:26 np0005603609 nova_compute[221550]: 2026-01-31 07:31:26.950 221554 INFO nova.compute.claims [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Claim successful on node compute-1.ctlplane.example.com
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.081 221554 DEBUG oslo_concurrency.processutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:31:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:27.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:31:27 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/817307480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.496 221554 DEBUG oslo_concurrency.processutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.501 221554 DEBUG nova.compute.provider_tree [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.533 221554 DEBUG nova.scheduler.client.report [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.571 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.573 221554 DEBUG nova.compute.manager [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.646 221554 DEBUG nova.compute.manager [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.647 221554 DEBUG nova.network.neutron [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.673 221554 INFO nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.702 221554 DEBUG nova.compute.manager [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.856 221554 DEBUG nova.compute.manager [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.857 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.857 221554 INFO nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Creating image(s)
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.881 221554 DEBUG nova.storage.rbd_utils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] rbd image 23c338db-50ed-434c-ac85-8190b9b5f194_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.910 221554 DEBUG nova.storage.rbd_utils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] rbd image 23c338db-50ed-434c-ac85-8190b9b5f194_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.937 221554 DEBUG nova.storage.rbd_utils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] rbd image 23c338db-50ed-434c-ac85-8190b9b5f194_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.940 221554 DEBUG oslo_concurrency.processutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.994 221554 DEBUG oslo_concurrency.processutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.995 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.996 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:31:27 np0005603609 nova_compute[221550]: 2026-01-31 07:31:27.996 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:31:28 np0005603609 nova_compute[221550]: 2026-01-31 07:31:28.022 221554 DEBUG nova.storage.rbd_utils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] rbd image 23c338db-50ed-434c-ac85-8190b9b5f194_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:31:28 np0005603609 nova_compute[221550]: 2026-01-31 07:31:28.026 221554 DEBUG oslo_concurrency.processutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 23c338db-50ed-434c-ac85-8190b9b5f194_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:31:28 np0005603609 nova_compute[221550]: 2026-01-31 07:31:28.153 221554 WARNING oslo_policy.policy [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 31 02:31:28 np0005603609 nova_compute[221550]: 2026-01-31 07:31:28.155 221554 WARNING oslo_policy.policy [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 31 02:31:28 np0005603609 nova_compute[221550]: 2026-01-31 07:31:28.158 221554 DEBUG nova.policy [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0a59df5da6284e4e8764816e1f8dfaa3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dfb4d4079ac944b288d5e285ce1de95a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 02:31:28 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 31 02:31:28 np0005603609 nova_compute[221550]: 2026-01-31 07:31:28.285 221554 DEBUG oslo_concurrency.processutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 23c338db-50ed-434c-ac85-8190b9b5f194_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:31:28 np0005603609 nova_compute[221550]: 2026-01-31 07:31:28.353 221554 DEBUG nova.storage.rbd_utils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] resizing rbd image 23c338db-50ed-434c-ac85-8190b9b5f194_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:31:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:28.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:28 np0005603609 nova_compute[221550]: 2026-01-31 07:31:28.472 221554 DEBUG nova.objects.instance [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lazy-loading 'migration_context' on Instance uuid 23c338db-50ed-434c-ac85-8190b9b5f194 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:31:28 np0005603609 nova_compute[221550]: 2026-01-31 07:31:28.504 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:31:28 np0005603609 nova_compute[221550]: 2026-01-31 07:31:28.505 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Ensure instance console log exists: /var/lib/nova/instances/23c338db-50ed-434c-ac85-8190b9b5f194/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:31:28 np0005603609 nova_compute[221550]: 2026-01-31 07:31:28.505 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:31:28 np0005603609 nova_compute[221550]: 2026-01-31 07:31:28.505 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:31:28 np0005603609 nova_compute[221550]: 2026-01-31 07:31:28.505 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:31:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:29.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:29 np0005603609 nova_compute[221550]: 2026-01-31 07:31:29.959 221554 DEBUG nova.network.neutron [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Successfully updated port: 2da07bdf-313d-4a90-a81e-e531c63b3d54 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:31:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:31:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:30.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:31:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:31.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:31 np0005603609 nova_compute[221550]: 2026-01-31 07:31:31.484 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:31:31 np0005603609 nova_compute[221550]: 2026-01-31 07:31:31.484 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquired lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:31:31 np0005603609 nova_compute[221550]: 2026-01-31 07:31:31.485 221554 DEBUG nova.network.neutron [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:31:31 np0005603609 nova_compute[221550]: 2026-01-31 07:31:31.535 221554 DEBUG nova.compute.manager [req-fce701b4-6c9e-46d6-80b2-eaad78235d29 req-a8baf899-9eec-44cd-8ed9-1a6d992fde3a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received event network-changed-2da07bdf-313d-4a90-a81e-e531c63b3d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:31:31 np0005603609 nova_compute[221550]: 2026-01-31 07:31:31.535 221554 DEBUG nova.compute.manager [req-fce701b4-6c9e-46d6-80b2-eaad78235d29 req-a8baf899-9eec-44cd-8ed9-1a6d992fde3a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Refreshing instance network info cache due to event network-changed-2da07bdf-313d-4a90-a81e-e531c63b3d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:31:31 np0005603609 nova_compute[221550]: 2026-01-31 07:31:31.535 221554 DEBUG oslo_concurrency.lockutils [req-fce701b4-6c9e-46d6-80b2-eaad78235d29 req-a8baf899-9eec-44cd-8ed9-1a6d992fde3a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:31:31 np0005603609 nova_compute[221550]: 2026-01-31 07:31:31.842 221554 DEBUG nova.network.neutron [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:31:32 np0005603609 podman[224571]: 2026-01-31 07:31:32.198738905 +0000 UTC m=+0.081574078 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_controller, io.buildah.version=1.41.3, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 02:31:32 np0005603609 podman[224572]: 2026-01-31 07:31:32.199065743 +0000 UTC m=+0.078858342 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:31:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:32.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:32 np0005603609 nova_compute[221550]: 2026-01-31 07:31:32.958 221554 DEBUG nova.network.neutron [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Updating instance_info_cache with network_info: [{"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.007 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Releasing lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.008 221554 DEBUG nova.compute.manager [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Instance network_info: |[{"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.008 221554 DEBUG oslo_concurrency.lockutils [req-fce701b4-6c9e-46d6-80b2-eaad78235d29 req-a8baf899-9eec-44cd-8ed9-1a6d992fde3a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.008 221554 DEBUG nova.network.neutron [req-fce701b4-6c9e-46d6-80b2-eaad78235d29 req-a8baf899-9eec-44cd-8ed9-1a6d992fde3a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Refreshing network info cache for port 2da07bdf-313d-4a90-a81e-e531c63b3d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.010 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Start _get_guest_xml network_info=[{"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.013 221554 WARNING nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.018 221554 DEBUG nova.virt.libvirt.host [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.018 221554 DEBUG nova.virt.libvirt.host [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.022 221554 DEBUG nova.virt.libvirt.host [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.022 221554 DEBUG nova.virt.libvirt.host [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.023 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.024 221554 DEBUG nova.virt.hardware [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.024 221554 DEBUG nova.virt.hardware [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.024 221554 DEBUG nova.virt.hardware [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.024 221554 DEBUG nova.virt.hardware [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.025 221554 DEBUG nova.virt.hardware [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.025 221554 DEBUG nova.virt.hardware [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.025 221554 DEBUG nova.virt.hardware [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.025 221554 DEBUG nova.virt.hardware [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.026 221554 DEBUG nova.virt.hardware [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.026 221554 DEBUG nova.virt.hardware [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.026 221554 DEBUG nova.virt.hardware [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.029 221554 DEBUG oslo_concurrency.processutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:31:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:33.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:31:33 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/443679632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.441 221554 DEBUG oslo_concurrency.processutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.470 221554 DEBUG nova.storage.rbd_utils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] rbd image 23c338db-50ed-434c-ac85-8190b9b5f194_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.475 221554 DEBUG oslo_concurrency.processutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:31:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:31:33 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1935318245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.919 221554 DEBUG oslo_concurrency.processutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.920 221554 DEBUG nova.virt.libvirt.vif [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:31:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-933343300',display_name='tempest-LiveMigrationTest-server-933343300',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-933343300',id=5,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb4d4079ac944b288d5e285ce1de95a',ramdisk_id='',reservation_id='r-kv2e0wfn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-48073594',owner_user_name='tempest-LiveMigrationTest-48073594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:31:27Z,user_data=None,user_id='0a59df5da6284e4e8764816e1f8dfaa3',uuid=23c338db-50ed-434c-ac85-8190b9b5f194,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.921 221554 DEBUG nova.network.os_vif_util [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Converting VIF {"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.922 221554 DEBUG nova.network.os_vif_util [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:9e:1b,bridge_name='br-int',has_traffic_filtering=True,id=2da07bdf-313d-4a90-a81e-e531c63b3d54,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2da07bdf-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.924 221554 DEBUG nova.objects.instance [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lazy-loading 'pci_devices' on Instance uuid 23c338db-50ed-434c-ac85-8190b9b5f194 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.958 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  <uuid>23c338db-50ed-434c-ac85-8190b9b5f194</uuid>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  <name>instance-00000005</name>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <nova:name>tempest-LiveMigrationTest-server-933343300</nova:name>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:31:33</nova:creationTime>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        <nova:user uuid="0a59df5da6284e4e8764816e1f8dfaa3">tempest-LiveMigrationTest-48073594-project-member</nova:user>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        <nova:project uuid="dfb4d4079ac944b288d5e285ce1de95a">tempest-LiveMigrationTest-48073594</nova:project>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        <nova:port uuid="2da07bdf-313d-4a90-a81e-e531c63b3d54">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <entry name="serial">23c338db-50ed-434c-ac85-8190b9b5f194</entry>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <entry name="uuid">23c338db-50ed-434c-ac85-8190b9b5f194</entry>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/23c338db-50ed-434c-ac85-8190b9b5f194_disk">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/23c338db-50ed-434c-ac85-8190b9b5f194_disk.config">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:cc:9e:1b"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <target dev="tap2da07bdf-31"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/23c338db-50ed-434c-ac85-8190b9b5f194/console.log" append="off"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:31:33 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:31:33 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:31:33 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:31:33 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.960 221554 DEBUG nova.compute.manager [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Preparing to wait for external event network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.960 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.960 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.961 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.961 221554 DEBUG nova.virt.libvirt.vif [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:31:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-933343300',display_name='tempest-LiveMigrationTest-server-933343300',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-933343300',id=5,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb4d4079ac944b288d5e285ce1de95a',ramdisk_id='',reservation_id='r-kv2e0wfn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-48073594',owner_user_name='tempest-LiveMigrationTest-48073594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:31:27Z,user_data=None,user_id='0a59df5da6284e4e8764816e1f8dfaa3',uuid=23c338db-50ed-434c-ac85-8190b9b5f194,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.962 221554 DEBUG nova.network.os_vif_util [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Converting VIF {"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.962 221554 DEBUG nova.network.os_vif_util [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:9e:1b,bridge_name='br-int',has_traffic_filtering=True,id=2da07bdf-313d-4a90-a81e-e531c63b3d54,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2da07bdf-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:31:33 np0005603609 nova_compute[221550]: 2026-01-31 07:31:33.963 221554 DEBUG os_vif [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:9e:1b,bridge_name='br-int',has_traffic_filtering=True,id=2da07bdf-313d-4a90-a81e-e531c63b3d54,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2da07bdf-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.035 221554 DEBUG ovsdbapp.backend.ovs_idl [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.035 221554 DEBUG ovsdbapp.backend.ovs_idl [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.035 221554 DEBUG ovsdbapp.backend.ovs_idl [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.036 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.037 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.037 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.037 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.039 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.042 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.053 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.054 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.054 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.055 221554 INFO oslo.privsep.daemon [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpdki4lrpj/privsep.sock']#033[00m
Jan 31 02:31:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:34.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.652 221554 INFO oslo.privsep.daemon [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Spawned new privsep daemon via rootwrap
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.533 224680 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.537 224680 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.538 224680 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.538 224680 INFO oslo.privsep.daemon [-] privsep daemon running as pid 224680
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.940 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.941 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2da07bdf-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.941 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2da07bdf-31, col_values=(('external_ids', {'iface-id': '2da07bdf-313d-4a90-a81e-e531c63b3d54', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:9e:1b', 'vm-uuid': '23c338db-50ed-434c-ac85-8190b9b5f194'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.943 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:31:34 np0005603609 NetworkManager[49064]: <info>  [1769844694.9443] manager: (tap2da07bdf-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.947 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.948 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:31:34 np0005603609 nova_compute[221550]: 2026-01-31 07:31:34.949 221554 INFO os_vif [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:9e:1b,bridge_name='br-int',has_traffic_filtering=True,id=2da07bdf-313d-4a90-a81e-e531c63b3d54,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2da07bdf-31')
Jan 31 02:31:35 np0005603609 nova_compute[221550]: 2026-01-31 07:31:35.028 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:31:35 np0005603609 nova_compute[221550]: 2026-01-31 07:31:35.028 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:31:35 np0005603609 nova_compute[221550]: 2026-01-31 07:31:35.029 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] No VIF found with MAC fa:16:3e:cc:9e:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 02:31:35 np0005603609 nova_compute[221550]: 2026-01-31 07:31:35.029 221554 INFO nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Using config drive
Jan 31 02:31:35 np0005603609 nova_compute[221550]: 2026-01-31 07:31:35.055 221554 DEBUG nova.storage.rbd_utils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] rbd image 23c338db-50ed-434c-ac85-8190b9b5f194_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:31:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:35.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:35 np0005603609 nova_compute[221550]: 2026-01-31 07:31:35.430 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:31:36 np0005603609 nova_compute[221550]: 2026-01-31 07:31:36.038 221554 DEBUG nova.network.neutron [req-fce701b4-6c9e-46d6-80b2-eaad78235d29 req-a8baf899-9eec-44cd-8ed9-1a6d992fde3a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Updated VIF entry in instance network info cache for port 2da07bdf-313d-4a90-a81e-e531c63b3d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 02:31:36 np0005603609 nova_compute[221550]: 2026-01-31 07:31:36.039 221554 DEBUG nova.network.neutron [req-fce701b4-6c9e-46d6-80b2-eaad78235d29 req-a8baf899-9eec-44cd-8ed9-1a6d992fde3a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Updating instance_info_cache with network_info: [{"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:31:36 np0005603609 nova_compute[221550]: 2026-01-31 07:31:36.090 221554 DEBUG oslo_concurrency.lockutils [req-fce701b4-6c9e-46d6-80b2-eaad78235d29 req-a8baf899-9eec-44cd-8ed9-1a6d992fde3a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:31:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:36.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:36 np0005603609 nova_compute[221550]: 2026-01-31 07:31:36.873 221554 INFO nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Creating config drive at /var/lib/nova/instances/23c338db-50ed-434c-ac85-8190b9b5f194/disk.config
Jan 31 02:31:36 np0005603609 nova_compute[221550]: 2026-01-31 07:31:36.876 221554 DEBUG oslo_concurrency.processutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23c338db-50ed-434c-ac85-8190b9b5f194/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsugg7687 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:31:36 np0005603609 nova_compute[221550]: 2026-01-31 07:31:36.992 221554 DEBUG oslo_concurrency.processutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23c338db-50ed-434c-ac85-8190b9b5f194/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsugg7687" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:31:37 np0005603609 nova_compute[221550]: 2026-01-31 07:31:37.018 221554 DEBUG nova.storage.rbd_utils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] rbd image 23c338db-50ed-434c-ac85-8190b9b5f194_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:31:37 np0005603609 nova_compute[221550]: 2026-01-31 07:31:37.021 221554 DEBUG oslo_concurrency.processutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23c338db-50ed-434c-ac85-8190b9b5f194/disk.config 23c338db-50ed-434c-ac85-8190b9b5f194_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:31:37 np0005603609 nova_compute[221550]: 2026-01-31 07:31:37.166 221554 DEBUG oslo_concurrency.processutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23c338db-50ed-434c-ac85-8190b9b5f194/disk.config 23c338db-50ed-434c-ac85-8190b9b5f194_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:31:37 np0005603609 nova_compute[221550]: 2026-01-31 07:31:37.167 221554 INFO nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Deleting local config drive /var/lib/nova/instances/23c338db-50ed-434c-ac85-8190b9b5f194/disk.config because it was imported into RBD.
Jan 31 02:31:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:37.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:37 np0005603609 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 31 02:31:37 np0005603609 kernel: tap2da07bdf-31: entered promiscuous mode
Jan 31 02:31:37 np0005603609 NetworkManager[49064]: <info>  [1769844697.2233] manager: (tap2da07bdf-31): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Jan 31 02:31:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:31:37Z|00027|binding|INFO|Claiming lport 2da07bdf-313d-4a90-a81e-e531c63b3d54 for this chassis.
Jan 31 02:31:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:31:37Z|00028|binding|INFO|2da07bdf-313d-4a90-a81e-e531c63b3d54: Claiming fa:16:3e:cc:9e:1b 10.100.0.5
Jan 31 02:31:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:31:37Z|00029|binding|INFO|Claiming lport 6e847cdf-cab0-4432-ba18-1faa5270e0d7 for this chassis.
Jan 31 02:31:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:31:37Z|00030|binding|INFO|6e847cdf-cab0-4432-ba18-1faa5270e0d7: Claiming fa:16:3e:dc:25:75 19.80.0.160
Jan 31 02:31:37 np0005603609 nova_compute[221550]: 2026-01-31 07:31:37.225 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:31:37 np0005603609 nova_compute[221550]: 2026-01-31 07:31:37.229 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:31:37 np0005603609 systemd-udevd[224763]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:31:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:37.269 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:9e:1b 10.100.0.5'], port_security=['fa:16:3e:cc:9e:1b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-211383396', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '23c338db-50ed-434c-ac85-8190b9b5f194', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-211383396', 'neutron:project_id': 'dfb4d4079ac944b288d5e285ce1de95a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'abfb90a3-3499-4078-8409-95077c250314', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8850cb79-5a97-415d-8eee-4d7273f04968, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=2da07bdf-313d-4a90-a81e-e531c63b3d54) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 02:31:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:37.271 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:25:75 19.80.0.160'], port_security=['fa:16:3e:dc:25:75 19.80.0.160'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['2da07bdf-313d-4a90-a81e-e531c63b3d54'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-70189147', 'neutron:cidrs': '19.80.0.160/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6abbefe0-4d30-4477-876e-e1412d7347f2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-70189147', 'neutron:project_id': 'dfb4d4079ac944b288d5e285ce1de95a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'abfb90a3-3499-4078-8409-95077c250314', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=00556c7f-96d8-4b10-939b-86f9a6371447, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6e847cdf-cab0-4432-ba18-1faa5270e0d7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 02:31:37 np0005603609 nova_compute[221550]: 2026-01-31 07:31:37.271 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:31:37 np0005603609 NetworkManager[49064]: <info>  [1769844697.2726] device (tap2da07bdf-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:31:37 np0005603609 NetworkManager[49064]: <info>  [1769844697.2733] device (tap2da07bdf-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:31:37 np0005603609 systemd-machined[190912]: New machine qemu-2-instance-00000005.
Jan 31 02:31:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:37.273 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 2da07bdf-313d-4a90-a81e-e531c63b3d54 in datapath 272cbcfe-dc1b-4319-84a2-27d245d969a3 bound to our chassis
Jan 31 02:31:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:37.276 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 272cbcfe-dc1b-4319-84a2-27d245d969a3
Jan 31 02:31:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:37.277 140058 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp01roqkbl/privsep.sock']
Jan 31 02:31:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:31:37Z|00031|binding|INFO|Setting lport 2da07bdf-313d-4a90-a81e-e531c63b3d54 ovn-installed in OVS
Jan 31 02:31:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:31:37Z|00032|binding|INFO|Setting lport 2da07bdf-313d-4a90-a81e-e531c63b3d54 up in Southbound
Jan 31 02:31:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:31:37Z|00033|binding|INFO|Setting lport 6e847cdf-cab0-4432-ba18-1faa5270e0d7 up in Southbound
Jan 31 02:31:37 np0005603609 nova_compute[221550]: 2026-01-31 07:31:37.281 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:31:37 np0005603609 systemd[1]: Started Virtual Machine qemu-2-instance-00000005.
Jan 31 02:31:37 np0005603609 nova_compute[221550]: 2026-01-31 07:31:37.704 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844697.7036405, 23c338db-50ed-434c-ac85-8190b9b5f194 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:31:37 np0005603609 nova_compute[221550]: 2026-01-31 07:31:37.704 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] VM Started (Lifecycle Event)
Jan 31 02:31:38 np0005603609 nova_compute[221550]: 2026-01-31 07:31:38.171 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:31:38 np0005603609 nova_compute[221550]: 2026-01-31 07:31:38.175 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844697.70377, 23c338db-50ed-434c-ac85-8190b9b5f194 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:31:38 np0005603609 nova_compute[221550]: 2026-01-31 07:31:38.175 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] VM Paused (Lifecycle Event)
Jan 31 02:31:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:38.205 140058 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 02:31:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:38.205 140058 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp01roqkbl/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 02:31:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:38.066 224823 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 02:31:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:38.073 224823 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 02:31:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:38.078 224823 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 31 02:31:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:38.079 224823 INFO oslo.privsep.daemon [-] privsep daemon running as pid 224823
Jan 31 02:31:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:38.207 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[61824fe4-5643-477f-acea-60d7e996bec3]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:31:38 np0005603609 nova_compute[221550]: 2026-01-31 07:31:38.209 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:31:38 np0005603609 nova_compute[221550]: 2026-01-31 07:31:38.212 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:31:38 np0005603609 nova_compute[221550]: 2026-01-31 07:31:38.251 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:31:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:31:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:38.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:31:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:38.980 224823 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:31:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:38.980 224823 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:31:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:38.980 224823 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.188 221554 DEBUG nova.compute.manager [req-edcb6270-0cec-41c5-b779-dc12b88e3e57 req-ea26bc99-c593-4d72-b6ae-c54a1ba3fc9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received event network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.189 221554 DEBUG oslo_concurrency.lockutils [req-edcb6270-0cec-41c5-b779-dc12b88e3e57 req-ea26bc99-c593-4d72-b6ae-c54a1ba3fc9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.189 221554 DEBUG oslo_concurrency.lockutils [req-edcb6270-0cec-41c5-b779-dc12b88e3e57 req-ea26bc99-c593-4d72-b6ae-c54a1ba3fc9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.189 221554 DEBUG oslo_concurrency.lockutils [req-edcb6270-0cec-41c5-b779-dc12b88e3e57 req-ea26bc99-c593-4d72-b6ae-c54a1ba3fc9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.190 221554 DEBUG nova.compute.manager [req-edcb6270-0cec-41c5-b779-dc12b88e3e57 req-ea26bc99-c593-4d72-b6ae-c54a1ba3fc9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Processing event network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.190 221554 DEBUG nova.compute.manager [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.195 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844699.1935563, 23c338db-50ed-434c-ac85-8190b9b5f194 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.196 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] VM Resumed (Lifecycle Event)
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.199 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 02:31:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:39.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.205 221554 INFO nova.virt.libvirt.driver [-] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Instance spawned successfully.
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.206 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.229 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.238 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.244 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.245 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.246 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.247 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.247 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.248 221554 DEBUG nova.virt.libvirt.driver [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.278 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.333 221554 INFO nova.compute.manager [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Took 11.48 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.334 221554 DEBUG nova.compute.manager [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.414 221554 INFO nova.compute.manager [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Took 12.51 seconds to build instance.#033[00m
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.439 221554 DEBUG oslo_concurrency.lockutils [None req-fbfc4787-c87c-4cd2-b5f2-ca39d641ee49 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:31:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:39.643 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[430d52c5-4b1f-433a-9d2b-8eb1b9196780]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:31:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:39.644 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap272cbcfe-d1 in ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 02:31:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:39.647 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap272cbcfe-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 02:31:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:39.647 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c018012f-00d6-493c-a941-1c729d065604]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:31:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:39.651 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ce3e3282-a3fb-423d-bfa6-f91c7aa8c5d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:31:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:39.664 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1b9bb8-a514-41ab-bcd7-4c6734629fc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:31:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:39.686 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[70ce9ddf-fb34-4262-86e3-6a58aa5dd2c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:31:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:39.689 140058 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpi2jq3p9r/privsep.sock']
Jan 31 02:31:39 np0005603609 nova_compute[221550]: 2026-01-31 07:31:39.982 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:31:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:40.349 140058 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 02:31:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:40.350 140058 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpi2jq3p9r/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 02:31:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:40.233 224837 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 02:31:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:40.238 224837 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 02:31:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:40.242 224837 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 31 02:31:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:40.242 224837 INFO oslo.privsep.daemon [-] privsep daemon running as pid 224837
Jan 31 02:31:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:40.352 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[63760997-8bbe-42c0-8dce-3b2b26233cef]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:31:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:40.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:40 np0005603609 nova_compute[221550]: 2026-01-31 07:31:40.434 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:31:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:40.808 224837 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:31:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:40.808 224837 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:31:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:40.808 224837 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:31:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:41.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:41 np0005603609 nova_compute[221550]: 2026-01-31 07:31:41.323 221554 DEBUG nova.compute.manager [req-8769055b-cc2c-4cf1-955f-99dc2a749117 req-5c9fbd2d-e438-4aa1-9c6c-c330648a852b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received event network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:31:41 np0005603609 nova_compute[221550]: 2026-01-31 07:31:41.323 221554 DEBUG oslo_concurrency.lockutils [req-8769055b-cc2c-4cf1-955f-99dc2a749117 req-5c9fbd2d-e438-4aa1-9c6c-c330648a852b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:31:41 np0005603609 nova_compute[221550]: 2026-01-31 07:31:41.324 221554 DEBUG oslo_concurrency.lockutils [req-8769055b-cc2c-4cf1-955f-99dc2a749117 req-5c9fbd2d-e438-4aa1-9c6c-c330648a852b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:31:41 np0005603609 nova_compute[221550]: 2026-01-31 07:31:41.324 221554 DEBUG oslo_concurrency.lockutils [req-8769055b-cc2c-4cf1-955f-99dc2a749117 req-5c9fbd2d-e438-4aa1-9c6c-c330648a852b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:31:41 np0005603609 nova_compute[221550]: 2026-01-31 07:31:41.324 221554 DEBUG nova.compute.manager [req-8769055b-cc2c-4cf1-955f-99dc2a749117 req-5c9fbd2d-e438-4aa1-9c6c-c330648a852b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] No waiting events found dispatching network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:31:41 np0005603609 nova_compute[221550]: 2026-01-31 07:31:41.325 221554 WARNING nova.compute.manager [req-8769055b-cc2c-4cf1-955f-99dc2a749117 req-5c9fbd2d-e438-4aa1-9c6c-c330648a852b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received unexpected event network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 for instance with vm_state active and task_state None.
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.404 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[fec82a3a-ad10-436c-a2e3-2acb857c01c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.422 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[39aa3eb6-c938-4d53-880c-3821c39784b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:31:41 np0005603609 NetworkManager[49064]: <info>  [1769844701.4238] manager: (tap272cbcfe-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Jan 31 02:31:41 np0005603609 systemd-udevd[224849]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.448 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5e9c59-15cb-4caa-913d-77d492eb638d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.455 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[72d0c8af-ab5f-4252-8883-b8463ce8d99a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:31:41 np0005603609 NetworkManager[49064]: <info>  [1769844701.4750] device (tap272cbcfe-d0): carrier: link connected
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.478 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[24297300-08e3-4fc9-b4f0-8567866c100f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.497 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[37951a72-c582-4c4c-9064-00c07510b468]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap272cbcfe-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:ea:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494117, 'reachable_time': 44690, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 224867, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.515 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[423cb898-afc0-4967-ab6a-5bb6c42676b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee9:eac2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494117, 'tstamp': 494117}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 224868, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.528 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[39af6dd2-1a82-4dd4-86c6-a1a79f35974c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap272cbcfe-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:ea:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494117, 'reachable_time': 44690, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 224869, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.550 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d7151601-a7a4-4914-b481-305c2f6a2f20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.588 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[753e3ef7-7fba-467d-b5ab-60ac394a5d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.589 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap272cbcfe-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.590 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.590 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap272cbcfe-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:31:41 np0005603609 nova_compute[221550]: 2026-01-31 07:31:41.592 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:31:41 np0005603609 NetworkManager[49064]: <info>  [1769844701.5927] manager: (tap272cbcfe-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 31 02:31:41 np0005603609 kernel: tap272cbcfe-d0: entered promiscuous mode
Jan 31 02:31:41 np0005603609 nova_compute[221550]: 2026-01-31 07:31:41.596 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.599 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap272cbcfe-d0, col_values=(('external_ids', {'iface-id': '8bd64eda-9666-44b9-9b11-431cc2aca18a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:31:41 np0005603609 ovn_controller[130359]: 2026-01-31T07:31:41Z|00034|binding|INFO|Releasing lport 8bd64eda-9666-44b9-9b11-431cc2aca18a from this chassis (sb_readonly=0)
Jan 31 02:31:41 np0005603609 nova_compute[221550]: 2026-01-31 07:31:41.600 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.604 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/272cbcfe-dc1b-4319-84a2-27d245d969a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/272cbcfe-dc1b-4319-84a2-27d245d969a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 02:31:41 np0005603609 nova_compute[221550]: 2026-01-31 07:31:41.606 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.605 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ae66e91c-d23c-44d9-b1d4-021ba8c58dc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.608 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-272cbcfe-dc1b-4319-84a2-27d245d969a3
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/272cbcfe-dc1b-4319-84a2-27d245d969a3.pid.haproxy
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 272cbcfe-dc1b-4319-84a2-27d245d969a3
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:41.609 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'env', 'PROCESS_TAG=haproxy-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/272cbcfe-dc1b-4319-84a2-27d245d969a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:31:41 np0005603609 podman[224902]: 2026-01-31 07:31:41.980763293 +0000 UTC m=+0.082556492 container create d34faccac436cd4f5fb15ded43fc29e2a34c8a93642d0a1545b7161728e4a5f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 02:31:42 np0005603609 podman[224902]: 2026-01-31 07:31:41.919714498 +0000 UTC m=+0.021507717 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:31:42 np0005603609 systemd[1]: Started libpod-conmon-d34faccac436cd4f5fb15ded43fc29e2a34c8a93642d0a1545b7161728e4a5f0.scope.
Jan 31 02:31:42 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:31:42 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f3fabc382912f5e1b8a8e6568cd4770792d8de09e3967190d544c96e6a32a75/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:42 np0005603609 podman[224902]: 2026-01-31 07:31:42.065693082 +0000 UTC m=+0.167486281 container init d34faccac436cd4f5fb15ded43fc29e2a34c8a93642d0a1545b7161728e4a5f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 02:31:42 np0005603609 podman[224902]: 2026-01-31 07:31:42.0704908 +0000 UTC m=+0.172283999 container start d34faccac436cd4f5fb15ded43fc29e2a34c8a93642d0a1545b7161728e4a5f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:31:42 np0005603609 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[224918]: [NOTICE]   (224922) : New worker (224924) forked
Jan 31 02:31:42 np0005603609 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[224918]: [NOTICE]   (224922) : Loading success.
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.133 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 6e847cdf-cab0-4432-ba18-1faa5270e0d7 in datapath 6abbefe0-4d30-4477-876e-e1412d7347f2 unbound from our chassis#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.136 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6abbefe0-4d30-4477-876e-e1412d7347f2#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.142 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb01cc5-808b-4560-97bf-b3f666e05dae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.143 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6abbefe0-41 in ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.146 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6abbefe0-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.146 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2436f721-d47a-4c3a-86e0-6da63abb9a4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.147 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e0d442-4e76-4979-806e-e886b76c5f49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.159 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[3dcb69c9-cabf-4268-9e0b-e6b7d27f0e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.169 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[174368e9-0518-4b4e-81bc-ec05a66fc8e7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.186 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[227a4bc4-47c4-4d9d-be22-93ec1fa4bf5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.191 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[25f56765-4de2-4ba5-a695-8a276c80ed3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:42 np0005603609 NetworkManager[49064]: <info>  [1769844702.1921] manager: (tap6abbefe0-40): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.210 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[adea9fd4-54a5-414a-a194-eb26c4df11be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.213 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8bd8e0-d5e7-48bf-a0a3-3fee24a2c3fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:42 np0005603609 NetworkManager[49064]: <info>  [1769844702.2275] device (tap6abbefe0-40): carrier: link connected
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.229 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[fa3d7432-2a82-4004-bbd6-7cf76e28f38e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.242 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2b45f3cd-d222-40c6-a8e0-df0728a30829]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6abbefe0-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:8f:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494192, 'reachable_time': 35895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 224944, 'error': None, 'target': 'ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.252 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a1674a7d-b94c-44d1-b8f8-a0ed3f58b822]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febc:8ff6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494192, 'tstamp': 494192}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 224945, 'error': None, 'target': 'ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.265 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[75997aad-b199-4bec-af0c-110aaf04769a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6abbefe0-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:8f:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494192, 'reachable_time': 35895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 224946, 'error': None, 'target': 'ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.293 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1090e0dd-df22-4eea-9a17-2ecf466a98b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.337 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a090db1f-8269-4ff1-8d51-c1a3e2f8430c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.339 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6abbefe0-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.340 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.341 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6abbefe0-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:31:42 np0005603609 nova_compute[221550]: 2026-01-31 07:31:42.343 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:31:42 np0005603609 NetworkManager[49064]: <info>  [1769844702.3435] manager: (tap6abbefe0-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Jan 31 02:31:42 np0005603609 kernel: tap6abbefe0-40: entered promiscuous mode
Jan 31 02:31:42 np0005603609 nova_compute[221550]: 2026-01-31 07:31:42.346 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.349 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6abbefe0-40, col_values=(('external_ids', {'iface-id': '37e0bde4-4a51-4973-b284-d740caeb19be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:31:42 np0005603609 nova_compute[221550]: 2026-01-31 07:31:42.350 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:31:42 np0005603609 ovn_controller[130359]: 2026-01-31T07:31:42Z|00035|binding|INFO|Releasing lport 37e0bde4-4a51-4973-b284-d740caeb19be from this chassis (sb_readonly=0)
Jan 31 02:31:42 np0005603609 nova_compute[221550]: 2026-01-31 07:31:42.351 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.353 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6abbefe0-4d30-4477-876e-e1412d7347f2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6abbefe0-4d30-4477-876e-e1412d7347f2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.353 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a101e908-9d17-4f88-bbfb-7fe1c8e0b557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:31:42 np0005603609 nova_compute[221550]: 2026-01-31 07:31:42.354 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.355 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-6abbefe0-4d30-4477-876e-e1412d7347f2
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/6abbefe0-4d30-4477-876e-e1412d7347f2.pid.haproxy
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 6abbefe0-4d30-4477-876e-e1412d7347f2
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:31:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:31:42.355 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2', 'env', 'PROCESS_TAG=haproxy-6abbefe0-4d30-4477-876e-e1412d7347f2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6abbefe0-4d30-4477-876e-e1412d7347f2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:31:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:42.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:31:42 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3329455590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:31:42 np0005603609 podman[224975]: 2026-01-31 07:31:42.681310781 +0000 UTC m=+0.056913694 container create f2f4802fbec456124d82859cb6710a5b3c24f4c3ec41995ea2da3b15444d657c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:31:42 np0005603609 systemd[1]: Started libpod-conmon-f2f4802fbec456124d82859cb6710a5b3c24f4c3ec41995ea2da3b15444d657c.scope.
Jan 31 02:31:42 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:31:42 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9001f94792e0b9674dee60c83153129dcf15ebdf30e9c13bc5bde05e4714ecd3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:42 np0005603609 podman[224975]: 2026-01-31 07:31:42.646441127 +0000 UTC m=+0.022044070 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:31:42 np0005603609 podman[224975]: 2026-01-31 07:31:42.75109728 +0000 UTC m=+0.126700213 container init f2f4802fbec456124d82859cb6710a5b3c24f4c3ec41995ea2da3b15444d657c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:31:42 np0005603609 podman[224975]: 2026-01-31 07:31:42.754635146 +0000 UTC m=+0.130238059 container start f2f4802fbec456124d82859cb6710a5b3c24f4c3ec41995ea2da3b15444d657c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 02:31:42 np0005603609 neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2[224990]: [NOTICE]   (224994) : New worker (224996) forked
Jan 31 02:31:42 np0005603609 neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2[224990]: [NOTICE]   (224994) : Loading success.
Jan 31 02:31:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:43.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:31:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:44.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:31:44 np0005603609 nova_compute[221550]: 2026-01-31 07:31:44.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:31:44 np0005603609 nova_compute[221550]: 2026-01-31 07:31:44.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:31:44 np0005603609 nova_compute[221550]: 2026-01-31 07:31:44.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:31:45 np0005603609 nova_compute[221550]: 2026-01-31 07:31:45.049 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:31:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:45.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:45 np0005603609 nova_compute[221550]: 2026-01-31 07:31:45.434 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:31:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:46.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:46 np0005603609 nova_compute[221550]: 2026-01-31 07:31:46.559 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:31:46 np0005603609 nova_compute[221550]: 2026-01-31 07:31:46.560 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:31:46 np0005603609 nova_compute[221550]: 2026-01-31 07:31:46.560 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:31:46 np0005603609 nova_compute[221550]: 2026-01-31 07:31:46.560 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 23c338db-50ed-434c-ac85-8190b9b5f194 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:31:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:47.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:48.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:31:48.414444) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708414520, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1801, "num_deletes": 251, "total_data_size": 4113615, "memory_usage": 4178264, "flush_reason": "Manual Compaction"}
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708424143, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 1666517, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20134, "largest_seqno": 21930, "table_properties": {"data_size": 1660650, "index_size": 2942, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14975, "raw_average_key_size": 20, "raw_value_size": 1647758, "raw_average_value_size": 2282, "num_data_blocks": 131, "num_entries": 722, "num_filter_entries": 722, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844559, "oldest_key_time": 1769844559, "file_creation_time": 1769844708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 9733 microseconds, and 3274 cpu microseconds.
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:31:48.424186) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 1666517 bytes OK
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:31:48.424207) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:31:48.425548) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:31:48.425566) EVENT_LOG_v1 {"time_micros": 1769844708425560, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:31:48.425589) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4105305, prev total WAL file size 4105305, number of live WAL files 2.
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:31:48.426530) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353037' seq:72057594037927935, type:22 .. '6D67727374617400373539' seq:0, type:0; will stop at (end)
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(1627KB)], [39(9430KB)]
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708426620, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 11323760, "oldest_snapshot_seqno": -1}
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 4765 keys, 8493111 bytes, temperature: kUnknown
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708520029, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 8493111, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8461183, "index_size": 18890, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 118503, "raw_average_key_size": 24, "raw_value_size": 8374843, "raw_average_value_size": 1757, "num_data_blocks": 781, "num_entries": 4765, "num_filter_entries": 4765, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769844708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:31:48.520319) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8493111 bytes
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:31:48.522283) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.1 rd, 90.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.2 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(11.9) write-amplify(5.1) OK, records in: 5218, records dropped: 453 output_compression: NoCompression
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:31:48.522300) EVENT_LOG_v1 {"time_micros": 1769844708522291, "job": 22, "event": "compaction_finished", "compaction_time_micros": 93482, "compaction_time_cpu_micros": 25731, "output_level": 6, "num_output_files": 1, "total_output_size": 8493111, "num_input_records": 5218, "num_output_records": 4765, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708522512, "job": 22, "event": "table_file_deletion", "file_number": 41}
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708523168, "job": 22, "event": "table_file_deletion", "file_number": 39}
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:31:48.426396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:31:48.523240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:31:48.523246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:31:48.523248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:31:48.523249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:31:48 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:31:48.523251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:31:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:49.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:50 np0005603609 nova_compute[221550]: 2026-01-31 07:31:50.053 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:31:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:50.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:50 np0005603609 nova_compute[221550]: 2026-01-31 07:31:50.437 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:31:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:31:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:51.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:31:51 np0005603609 ovn_controller[130359]: 2026-01-31T07:31:51Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cc:9e:1b 10.100.0.5
Jan 31 02:31:51 np0005603609 ovn_controller[130359]: 2026-01-31T07:31:51Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:9e:1b 10.100.0.5
Jan 31 02:31:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:52.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:53.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:54.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:55 np0005603609 nova_compute[221550]: 2026-01-31 07:31:55.057 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:31:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:55.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:55 np0005603609 nova_compute[221550]: 2026-01-31 07:31:55.439 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:31:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:56.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:57.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:31:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:58.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:31:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:31:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:59.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:00 np0005603609 nova_compute[221550]: 2026-01-31 07:32:00.102 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:00.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:00 np0005603609 nova_compute[221550]: 2026-01-31 07:32:00.440 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:01.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:02.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:03 np0005603609 podman[225008]: 2026-01-31 07:32:03.169274783 +0000 UTC m=+0.049690138 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 02:32:03 np0005603609 podman[225007]: 2026-01-31 07:32:03.195767202 +0000 UTC m=+0.076151486 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:32:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:32:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:03.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:32:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:04.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:05 np0005603609 nova_compute[221550]: 2026-01-31 07:32:05.106 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:32:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:05.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:32:05 np0005603609 nova_compute[221550]: 2026-01-31 07:32:05.441 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:06.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:06 np0005603609 nova_compute[221550]: 2026-01-31 07:32:06.592 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Updating instance_info_cache with network_info: [{"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:32:06 np0005603609 nova_compute[221550]: 2026-01-31 07:32:06.681 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:32:06 np0005603609 nova_compute[221550]: 2026-01-31 07:32:06.682 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:32:06 np0005603609 nova_compute[221550]: 2026-01-31 07:32:06.683 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:06 np0005603609 nova_compute[221550]: 2026-01-31 07:32:06.683 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:06 np0005603609 nova_compute[221550]: 2026-01-31 07:32:06.684 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:06 np0005603609 nova_compute[221550]: 2026-01-31 07:32:06.684 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:06 np0005603609 nova_compute[221550]: 2026-01-31 07:32:06.685 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:06 np0005603609 nova_compute[221550]: 2026-01-31 07:32:06.685 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:06 np0005603609 nova_compute[221550]: 2026-01-31 07:32:06.686 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:32:06 np0005603609 nova_compute[221550]: 2026-01-31 07:32:06.686 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:06 np0005603609 nova_compute[221550]: 2026-01-31 07:32:06.779 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:32:06 np0005603609 nova_compute[221550]: 2026-01-31 07:32:06.779 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:32:06 np0005603609 nova_compute[221550]: 2026-01-31 07:32:06.780 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:32:06 np0005603609 nova_compute[221550]: 2026-01-31 07:32:06.780 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:32:06 np0005603609 nova_compute[221550]: 2026-01-31 07:32:06.780 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:32:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:32:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3877595757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:32:07 np0005603609 nova_compute[221550]: 2026-01-31 07:32:07.196 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:32:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:07.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:07.464 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:32:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:07.465 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:32:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:07.466 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:32:08 np0005603609 nova_compute[221550]: 2026-01-31 07:32:08.142 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:32:08 np0005603609 nova_compute[221550]: 2026-01-31 07:32:08.142 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:32:08 np0005603609 nova_compute[221550]: 2026-01-31 07:32:08.284 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:32:08 np0005603609 nova_compute[221550]: 2026-01-31 07:32:08.285 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4823MB free_disk=20.897212982177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:32:08 np0005603609 nova_compute[221550]: 2026-01-31 07:32:08.285 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:32:08 np0005603609 nova_compute[221550]: 2026-01-31 07:32:08.286 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:32:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:08.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:08 np0005603609 nova_compute[221550]: 2026-01-31 07:32:08.837 221554 INFO nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance fea0878b-0db1-45df-9b74-0c1f00190bc4 has allocations against this compute host but is not found in the database.#033[00m
Jan 31 02:32:08 np0005603609 nova_compute[221550]: 2026-01-31 07:32:08.838 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:32:08 np0005603609 nova_compute[221550]: 2026-01-31 07:32:08.838 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:32:08 np0005603609 nova_compute[221550]: 2026-01-31 07:32:08.934 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:32:09 np0005603609 nova_compute[221550]: 2026-01-31 07:32:09.140 221554 DEBUG nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Check if temp file /var/lib/nova/instances/tmpb6taclt6 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Jan 31 02:32:09 np0005603609 nova_compute[221550]: 2026-01-31 07:32:09.141 221554 DEBUG nova.compute.manager [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpb6taclt6',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='23c338db-50ed-434c-ac85-8190b9b5f194',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Jan 31 02:32:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:32:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:09.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:32:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:32:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1074906142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:32:09 np0005603609 nova_compute[221550]: 2026-01-31 07:32:09.409 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:32:09 np0005603609 nova_compute[221550]: 2026-01-31 07:32:09.414 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:32:09 np0005603609 nova_compute[221550]: 2026-01-31 07:32:09.433 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:32:09 np0005603609 nova_compute[221550]: 2026-01-31 07:32:09.467 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:32:09 np0005603609 nova_compute[221550]: 2026-01-31 07:32:09.468 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:32:10 np0005603609 nova_compute[221550]: 2026-01-31 07:32:10.109 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:32:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:10.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:32:10 np0005603609 nova_compute[221550]: 2026-01-31 07:32:10.443 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:10 np0005603609 nova_compute[221550]: 2026-01-31 07:32:10.462 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:10 np0005603609 nova_compute[221550]: 2026-01-31 07:32:10.463 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:11.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:11.683 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:32:11 np0005603609 nova_compute[221550]: 2026-01-31 07:32:11.683 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:11.685 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:32:11 np0005603609 nova_compute[221550]: 2026-01-31 07:32:11.908 221554 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:32:11 np0005603609 nova_compute[221550]: 2026-01-31 07:32:11.909 221554 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:32:11 np0005603609 nova_compute[221550]: 2026-01-31 07:32:11.932 221554 INFO nova.compute.rpcapi [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m
Jan 31 02:32:11 np0005603609 nova_compute[221550]: 2026-01-31 07:32:11.933 221554 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:32:12 np0005603609 ovn_controller[130359]: 2026-01-31T07:32:12Z|00036|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 02:32:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:12.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:13.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:14.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:15 np0005603609 nova_compute[221550]: 2026-01-31 07:32:15.113 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.005000119s ======
Jan 31 02:32:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:15.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000119s
Jan 31 02:32:15 np0005603609 nova_compute[221550]: 2026-01-31 07:32:15.444 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:15.688 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:32:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:32:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3231850267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:32:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:32:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3231850267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:32:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:16.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:32:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:17.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:32:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:18.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:32:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:32:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:32:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:32:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:32:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:19.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:19 np0005603609 nova_compute[221550]: 2026-01-31 07:32:19.555 221554 DEBUG nova.compute.manager [req-6d088890-3747-4f45-959b-16377454a2b3 req-55afe204-53fe-40fd-b4d1-95c58429db9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received event network-vif-unplugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:32:19 np0005603609 nova_compute[221550]: 2026-01-31 07:32:19.556 221554 DEBUG oslo_concurrency.lockutils [req-6d088890-3747-4f45-959b-16377454a2b3 req-55afe204-53fe-40fd-b4d1-95c58429db9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:32:19 np0005603609 nova_compute[221550]: 2026-01-31 07:32:19.556 221554 DEBUG oslo_concurrency.lockutils [req-6d088890-3747-4f45-959b-16377454a2b3 req-55afe204-53fe-40fd-b4d1-95c58429db9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:32:19 np0005603609 nova_compute[221550]: 2026-01-31 07:32:19.557 221554 DEBUG oslo_concurrency.lockutils [req-6d088890-3747-4f45-959b-16377454a2b3 req-55afe204-53fe-40fd-b4d1-95c58429db9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:32:19 np0005603609 nova_compute[221550]: 2026-01-31 07:32:19.557 221554 DEBUG nova.compute.manager [req-6d088890-3747-4f45-959b-16377454a2b3 req-55afe204-53fe-40fd-b4d1-95c58429db9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] No waiting events found dispatching network-vif-unplugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:32:19 np0005603609 nova_compute[221550]: 2026-01-31 07:32:19.557 221554 DEBUG nova.compute.manager [req-6d088890-3747-4f45-959b-16377454a2b3 req-55afe204-53fe-40fd-b4d1-95c58429db9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received event network-vif-unplugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:32:20 np0005603609 nova_compute[221550]: 2026-01-31 07:32:20.117 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:32:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:20.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:32:20 np0005603609 nova_compute[221550]: 2026-01-31 07:32:20.494 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:20 np0005603609 nova_compute[221550]: 2026-01-31 07:32:20.639 221554 DEBUG oslo_concurrency.processutils [None req-9d4d8dff-1356-4ba3-be8b-fe9629860fd1 cfea0969f6454663a879fad503877b49 af8a553c73ca46be8956317b59c88340 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:32:20 np0005603609 nova_compute[221550]: 2026-01-31 07:32:20.655 221554 DEBUG oslo_concurrency.processutils [None req-9d4d8dff-1356-4ba3-be8b-fe9629860fd1 cfea0969f6454663a879fad503877b49 af8a553c73ca46be8956317b59c88340 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:32:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:21.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.111 221554 INFO nova.compute.manager [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Took 10.20 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.113 221554 DEBUG nova.compute.manager [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.166 221554 DEBUG nova.compute.manager [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpb6taclt6',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='23c338db-50ed-434c-ac85-8190b9b5f194',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(fea0878b-0db1-45df-9b74-0c1f00190bc4),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.169 221554 DEBUG nova.objects.instance [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lazy-loading 'migration_context' on Instance uuid 23c338db-50ed-434c-ac85-8190b9b5f194 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.170 221554 DEBUG nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.171 221554 DEBUG nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.171 221554 DEBUG nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.208 221554 DEBUG nova.virt.libvirt.vif [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:31:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-933343300',display_name='tempest-LiveMigrationTest-server-933343300',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-933343300',id=5,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:31:39Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dfb4d4079ac944b288d5e285ce1de95a',ramdisk_id='',reservation_id='r-kv2e0wfn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-48073594',owner_user_name='tempest-LiveMigrationTest-48073594-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:31:39Z,user_data=None,user_id='0a59df5da6284e4e8764816e1f8dfaa3',uuid=23c338db-50ed-434c-ac85-8190b9b5f194,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.208 221554 DEBUG nova.network.os_vif_util [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converting VIF {"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.209 221554 DEBUG nova.network.os_vif_util [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:9e:1b,bridge_name='br-int',has_traffic_filtering=True,id=2da07bdf-313d-4a90-a81e-e531c63b3d54,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2da07bdf-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.209 221554 DEBUG nova.virt.libvirt.migration [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Updating guest XML with vif config: <interface type="ethernet">
Jan 31 02:32:22 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:cc:9e:1b"/>
Jan 31 02:32:22 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 02:32:22 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:32:22 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 02:32:22 np0005603609 nova_compute[221550]:  <target dev="tap2da07bdf-31"/>
Jan 31 02:32:22 np0005603609 nova_compute[221550]: </interface>
Jan 31 02:32:22 np0005603609 nova_compute[221550]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.210 221554 DEBUG nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.397 221554 DEBUG nova.compute.manager [req-57da188c-1a4f-4d44-948a-326a1551c418 req-6bec5565-5cc2-4e15-99e7-9c3c0b0aaced 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received event network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.398 221554 DEBUG oslo_concurrency.lockutils [req-57da188c-1a4f-4d44-948a-326a1551c418 req-6bec5565-5cc2-4e15-99e7-9c3c0b0aaced 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.398 221554 DEBUG oslo_concurrency.lockutils [req-57da188c-1a4f-4d44-948a-326a1551c418 req-6bec5565-5cc2-4e15-99e7-9c3c0b0aaced 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.399 221554 DEBUG oslo_concurrency.lockutils [req-57da188c-1a4f-4d44-948a-326a1551c418 req-6bec5565-5cc2-4e15-99e7-9c3c0b0aaced 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.399 221554 DEBUG nova.compute.manager [req-57da188c-1a4f-4d44-948a-326a1551c418 req-6bec5565-5cc2-4e15-99e7-9c3c0b0aaced 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] No waiting events found dispatching network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.400 221554 WARNING nova.compute.manager [req-57da188c-1a4f-4d44-948a-326a1551c418 req-6bec5565-5cc2-4e15-99e7-9c3c0b0aaced 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received unexpected event network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 for instance with vm_state active and task_state migrating.#033[00m
Jan 31 02:32:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:22.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.674 221554 DEBUG nova.virt.libvirt.migration [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.675 221554 INFO nova.virt.libvirt.migration [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Jan 31 02:32:22 np0005603609 nova_compute[221550]: 2026-01-31 07:32:22.812 221554 INFO nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Jan 31 02:32:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:23.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:23 np0005603609 nova_compute[221550]: 2026-01-31 07:32:23.315 221554 DEBUG nova.virt.libvirt.migration [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 31 02:32:23 np0005603609 nova_compute[221550]: 2026-01-31 07:32:23.316 221554 DEBUG nova.virt.libvirt.migration [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 31 02:32:23 np0005603609 nova_compute[221550]: 2026-01-31 07:32:23.821 221554 DEBUG nova.virt.libvirt.migration [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 31 02:32:23 np0005603609 nova_compute[221550]: 2026-01-31 07:32:23.822 221554 DEBUG nova.virt.libvirt.migration [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.104 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844744.104529, 23c338db-50ed-434c-ac85-8190b9b5f194 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.105 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.152 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.155 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.206 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Jan 31 02:32:24 np0005603609 kernel: tap2da07bdf-31 (unregistering): left promiscuous mode
Jan 31 02:32:24 np0005603609 NetworkManager[49064]: <info>  [1769844744.2331] device (tap2da07bdf-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:32:24 np0005603609 ovn_controller[130359]: 2026-01-31T07:32:24Z|00037|binding|INFO|Releasing lport 2da07bdf-313d-4a90-a81e-e531c63b3d54 from this chassis (sb_readonly=0)
Jan 31 02:32:24 np0005603609 ovn_controller[130359]: 2026-01-31T07:32:24Z|00038|binding|INFO|Setting lport 2da07bdf-313d-4a90-a81e-e531c63b3d54 down in Southbound
Jan 31 02:32:24 np0005603609 ovn_controller[130359]: 2026-01-31T07:32:24Z|00039|binding|INFO|Releasing lport 6e847cdf-cab0-4432-ba18-1faa5270e0d7 from this chassis (sb_readonly=0)
Jan 31 02:32:24 np0005603609 ovn_controller[130359]: 2026-01-31T07:32:24Z|00040|binding|INFO|Setting lport 6e847cdf-cab0-4432-ba18-1faa5270e0d7 down in Southbound
Jan 31 02:32:24 np0005603609 ovn_controller[130359]: 2026-01-31T07:32:24Z|00041|binding|INFO|Removing iface tap2da07bdf-31 ovn-installed in OVS
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.241 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:24 np0005603609 ovn_controller[130359]: 2026-01-31T07:32:24Z|00042|binding|INFO|Releasing lport 8bd64eda-9666-44b9-9b11-431cc2aca18a from this chassis (sb_readonly=0)
Jan 31 02:32:24 np0005603609 ovn_controller[130359]: 2026-01-31T07:32:24Z|00043|binding|INFO|Releasing lport 37e0bde4-4a51-4973-b284-d740caeb19be from this chassis (sb_readonly=0)
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.251 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:9e:1b 10.100.0.5'], port_security=['fa:16:3e:cc:9e:1b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '5c307474-e9ec-4d19-9f52-463eb0ff26d1'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-211383396', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '23c338db-50ed-434c-ac85-8190b9b5f194', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-211383396', 'neutron:project_id': 'dfb4d4079ac944b288d5e285ce1de95a', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'abfb90a3-3499-4078-8409-95077c250314', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8850cb79-5a97-415d-8eee-4d7273f04968, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=2da07bdf-313d-4a90-a81e-e531c63b3d54) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.255 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:25:75 19.80.0.160'], port_security=['fa:16:3e:dc:25:75 19.80.0.160'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['2da07bdf-313d-4a90-a81e-e531c63b3d54'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-70189147', 'neutron:cidrs': '19.80.0.160/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6abbefe0-4d30-4477-876e-e1412d7347f2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-70189147', 'neutron:project_id': 'dfb4d4079ac944b288d5e285ce1de95a', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'abfb90a3-3499-4078-8409-95077c250314', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=00556c7f-96d8-4b10-939b-86f9a6371447, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6e847cdf-cab0-4432-ba18-1faa5270e0d7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.258 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 2da07bdf-313d-4a90-a81e-e531c63b3d54 in datapath 272cbcfe-dc1b-4319-84a2-27d245d969a3 unbound from our chassis#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.262 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 272cbcfe-dc1b-4319-84a2-27d245d969a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.263 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.265 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7baa4c63-5707-47da-b2dd-4285901b19c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.266 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3 namespace which is not needed anymore#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.302 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:24 np0005603609 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 31 02:32:24 np0005603609 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Consumed 14.493s CPU time.
Jan 31 02:32:24 np0005603609 systemd-machined[190912]: Machine qemu-2-instance-00000005 terminated.
Jan 31 02:32:24 np0005603609 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[224918]: [NOTICE]   (224922) : haproxy version is 2.8.14-c23fe91
Jan 31 02:32:24 np0005603609 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[224918]: [NOTICE]   (224922) : path to executable is /usr/sbin/haproxy
Jan 31 02:32:24 np0005603609 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[224918]: [WARNING]  (224922) : Exiting Master process...
Jan 31 02:32:24 np0005603609 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[224918]: [ALERT]    (224922) : Current worker (224924) exited with code 143 (Terminated)
Jan 31 02:32:24 np0005603609 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[224918]: [WARNING]  (224922) : All workers exited. Exiting... (0)
Jan 31 02:32:24 np0005603609 virtqemud[221292]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/23c338db-50ed-434c-ac85-8190b9b5f194_disk: No such file or directory
Jan 31 02:32:24 np0005603609 virtqemud[221292]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/23c338db-50ed-434c-ac85-8190b9b5f194_disk: No such file or directory
Jan 31 02:32:24 np0005603609 systemd[1]: libpod-d34faccac436cd4f5fb15ded43fc29e2a34c8a93642d0a1545b7161728e4a5f0.scope: Deactivated successfully.
Jan 31 02:32:24 np0005603609 podman[225258]: 2026-01-31 07:32:24.398644144 +0000 UTC m=+0.055048908 container died d34faccac436cd4f5fb15ded43fc29e2a34c8a93642d0a1545b7161728e4a5f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.413 221554 DEBUG nova.virt.libvirt.guest [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.414 221554 INFO nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Migration operation has completed#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.414 221554 INFO nova.compute.manager [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] _post_live_migration() is started..#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.427 221554 DEBUG nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.427 221554 DEBUG nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.428 221554 DEBUG nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Jan 31 02:32:24 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d34faccac436cd4f5fb15ded43fc29e2a34c8a93642d0a1545b7161728e4a5f0-userdata-shm.mount: Deactivated successfully.
Jan 31 02:32:24 np0005603609 systemd[1]: var-lib-containers-storage-overlay-6f3fabc382912f5e1b8a8e6568cd4770792d8de09e3967190d544c96e6a32a75-merged.mount: Deactivated successfully.
Jan 31 02:32:24 np0005603609 podman[225258]: 2026-01-31 07:32:24.442059286 +0000 UTC m=+0.098464050 container cleanup d34faccac436cd4f5fb15ded43fc29e2a34c8a93642d0a1545b7161728e4a5f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 02:32:24 np0005603609 systemd[1]: libpod-conmon-d34faccac436cd4f5fb15ded43fc29e2a34c8a93642d0a1545b7161728e4a5f0.scope: Deactivated successfully.
Jan 31 02:32:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:24.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:24 np0005603609 podman[225300]: 2026-01-31 07:32:24.50591455 +0000 UTC m=+0.049082942 container remove d34faccac436cd4f5fb15ded43fc29e2a34c8a93642d0a1545b7161728e4a5f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.509 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d22406c6-a952-471f-a7fc-747493fd0194]: (4, ('Sat Jan 31 07:32:24 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3 (d34faccac436cd4f5fb15ded43fc29e2a34c8a93642d0a1545b7161728e4a5f0)\nd34faccac436cd4f5fb15ded43fc29e2a34c8a93642d0a1545b7161728e4a5f0\nSat Jan 31 07:32:24 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3 (d34faccac436cd4f5fb15ded43fc29e2a34c8a93642d0a1545b7161728e4a5f0)\nd34faccac436cd4f5fb15ded43fc29e2a34c8a93642d0a1545b7161728e4a5f0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.511 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[81f38941-a9aa-450d-8381-81d4c328372a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.512 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap272cbcfe-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.513 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:24 np0005603609 kernel: tap272cbcfe-d0: left promiscuous mode
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.519 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.521 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.524 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8c6743b4-2dfd-451a-91f0-f9dbbf5284b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.537 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0728ee37-553c-4974-90b9-25f280e5d8db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.538 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2d577daa-44a1-416a-8fd4-b9ffcf4041ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.551 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8c59c088-8889-4a1f-b804-29cd06f4fc07]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494110, 'reachable_time': 44298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 225317, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.557 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.557 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[49e3a9e6-e1a0-4135-9b91-636ae12e49e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.558 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 6e847cdf-cab0-4432-ba18-1faa5270e0d7 in datapath 6abbefe0-4d30-4477-876e-e1412d7347f2 unbound from our chassis#033[00m
Jan 31 02:32:24 np0005603609 systemd[1]: run-netns-ovnmeta\x2d272cbcfe\x2ddc1b\x2d4319\x2d84a2\x2d27d245d969a3.mount: Deactivated successfully.
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.560 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6abbefe0-4d30-4477-876e-e1412d7347f2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.560 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[84513dda-a874-4343-acc9-1a65aad78ef7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.560 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2 namespace which is not needed anymore#033[00m
Jan 31 02:32:24 np0005603609 neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2[224990]: [NOTICE]   (224994) : haproxy version is 2.8.14-c23fe91
Jan 31 02:32:24 np0005603609 neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2[224990]: [NOTICE]   (224994) : path to executable is /usr/sbin/haproxy
Jan 31 02:32:24 np0005603609 neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2[224990]: [WARNING]  (224994) : Exiting Master process...
Jan 31 02:32:24 np0005603609 neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2[224990]: [WARNING]  (224994) : Exiting Master process...
Jan 31 02:32:24 np0005603609 neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2[224990]: [ALERT]    (224994) : Current worker (224996) exited with code 143 (Terminated)
Jan 31 02:32:24 np0005603609 neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2[224990]: [WARNING]  (224994) : All workers exited. Exiting... (0)
Jan 31 02:32:24 np0005603609 systemd[1]: libpod-f2f4802fbec456124d82859cb6710a5b3c24f4c3ec41995ea2da3b15444d657c.scope: Deactivated successfully.
Jan 31 02:32:24 np0005603609 podman[225337]: 2026-01-31 07:32:24.655592094 +0000 UTC m=+0.038899624 container died f2f4802fbec456124d82859cb6710a5b3c24f4c3ec41995ea2da3b15444d657c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.671 221554 DEBUG nova.compute.manager [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received event network-changed-2da07bdf-313d-4a90-a81e-e531c63b3d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.671 221554 DEBUG nova.compute.manager [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Refreshing instance network info cache due to event network-changed-2da07bdf-313d-4a90-a81e-e531c63b3d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.671 221554 DEBUG oslo_concurrency.lockutils [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.672 221554 DEBUG oslo_concurrency.lockutils [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.672 221554 DEBUG nova.network.neutron [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Refreshing network info cache for port 2da07bdf-313d-4a90-a81e-e531c63b3d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:32:24 np0005603609 systemd[1]: var-lib-containers-storage-overlay-9001f94792e0b9674dee60c83153129dcf15ebdf30e9c13bc5bde05e4714ecd3-merged.mount: Deactivated successfully.
Jan 31 02:32:24 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f2f4802fbec456124d82859cb6710a5b3c24f4c3ec41995ea2da3b15444d657c-userdata-shm.mount: Deactivated successfully.
Jan 31 02:32:24 np0005603609 podman[225337]: 2026-01-31 07:32:24.708865548 +0000 UTC m=+0.092173118 container cleanup f2f4802fbec456124d82859cb6710a5b3c24f4c3ec41995ea2da3b15444d657c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:32:24 np0005603609 systemd[1]: libpod-conmon-f2f4802fbec456124d82859cb6710a5b3c24f4c3ec41995ea2da3b15444d657c.scope: Deactivated successfully.
Jan 31 02:32:24 np0005603609 podman[225367]: 2026-01-31 07:32:24.766486008 +0000 UTC m=+0.041699302 container remove f2f4802fbec456124d82859cb6710a5b3c24f4c3ec41995ea2da3b15444d657c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.770 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ed73b47c-78ab-4ab6-a701-fa7ad7ae52bd]: (4, ('Sat Jan 31 07:32:24 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2 (f2f4802fbec456124d82859cb6710a5b3c24f4c3ec41995ea2da3b15444d657c)\nf2f4802fbec456124d82859cb6710a5b3c24f4c3ec41995ea2da3b15444d657c\nSat Jan 31 07:32:24 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2 (f2f4802fbec456124d82859cb6710a5b3c24f4c3ec41995ea2da3b15444d657c)\nf2f4802fbec456124d82859cb6710a5b3c24f4c3ec41995ea2da3b15444d657c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.771 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6126fea3-c1fa-49aa-be43-3c0ea514b4a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.772 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6abbefe0-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.773 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:24 np0005603609 kernel: tap6abbefe0-40: left promiscuous mode
Jan 31 02:32:24 np0005603609 nova_compute[221550]: 2026-01-31 07:32:24.781 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.782 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8a869aae-4ebc-40e4-8fef-250488c5ecd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.796 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9e8a19-5130-4d21-aaa9-f105d08ab61a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.797 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac472ab-75f5-4f95-91bb-1b5d11a9f1d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.808 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6195ad-ff43-419d-bc49-7bbfd4db05f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494188, 'reachable_time': 16476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 225386, 'error': None, 'target': 'ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:24 np0005603609 systemd[1]: run-netns-ovnmeta\x2d6abbefe0\x2d4d30\x2d4477\x2d876e\x2de1412d7347f2.mount: Deactivated successfully.
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.811 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:32:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:32:24.811 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[6622b9b2-8cbf-48ab-93e4-190f16a9c29d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:25 np0005603609 nova_compute[221550]: 2026-01-31 07:32:25.118 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:25.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:25 np0005603609 nova_compute[221550]: 2026-01-31 07:32:25.497 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:25 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:32:25 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:32:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:26.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:26 np0005603609 nova_compute[221550]: 2026-01-31 07:32:26.904 221554 DEBUG nova.compute.manager [req-df2dc7b7-019d-4a40-9be1-64599aa1cb16 req-76a48c42-7373-4f0f-965a-b29e7903171b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received event network-vif-unplugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:32:26 np0005603609 nova_compute[221550]: 2026-01-31 07:32:26.904 221554 DEBUG oslo_concurrency.lockutils [req-df2dc7b7-019d-4a40-9be1-64599aa1cb16 req-76a48c42-7373-4f0f-965a-b29e7903171b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:32:26 np0005603609 nova_compute[221550]: 2026-01-31 07:32:26.905 221554 DEBUG oslo_concurrency.lockutils [req-df2dc7b7-019d-4a40-9be1-64599aa1cb16 req-76a48c42-7373-4f0f-965a-b29e7903171b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:32:26 np0005603609 nova_compute[221550]: 2026-01-31 07:32:26.905 221554 DEBUG oslo_concurrency.lockutils [req-df2dc7b7-019d-4a40-9be1-64599aa1cb16 req-76a48c42-7373-4f0f-965a-b29e7903171b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:32:26 np0005603609 nova_compute[221550]: 2026-01-31 07:32:26.905 221554 DEBUG nova.compute.manager [req-df2dc7b7-019d-4a40-9be1-64599aa1cb16 req-76a48c42-7373-4f0f-965a-b29e7903171b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] No waiting events found dispatching network-vif-unplugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:32:26 np0005603609 nova_compute[221550]: 2026-01-31 07:32:26.906 221554 DEBUG nova.compute.manager [req-df2dc7b7-019d-4a40-9be1-64599aa1cb16 req-76a48c42-7373-4f0f-965a-b29e7903171b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received event network-vif-unplugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:32:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:27.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.886 221554 DEBUG nova.network.neutron [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Activated binding for port 2da07bdf-313d-4a90-a81e-e531c63b3d54 and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.887 221554 DEBUG nova.compute.manager [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.889 221554 DEBUG nova.virt.libvirt.vif [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:31:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-933343300',display_name='tempest-LiveMigrationTest-server-933343300',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-933343300',id=5,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:31:39Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dfb4d4079ac944b288d5e285ce1de95a',ramdisk_id='',reservation_id='r-kv2e0wfn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-48073594',owner_user_name='tempest-LiveMigrationTest-48073594-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:31:57Z,user_data=None,user_id='0a59df5da6284e4e8764816e1f8dfaa3',uuid=23c338db-50ed-434c-ac85-8190b9b5f194,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.889 221554 DEBUG nova.network.os_vif_util [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converting VIF {"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.890 221554 DEBUG nova.network.os_vif_util [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:9e:1b,bridge_name='br-int',has_traffic_filtering=True,id=2da07bdf-313d-4a90-a81e-e531c63b3d54,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2da07bdf-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.891 221554 DEBUG os_vif [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:9e:1b,bridge_name='br-int',has_traffic_filtering=True,id=2da07bdf-313d-4a90-a81e-e531c63b3d54,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2da07bdf-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.896 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.896 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2da07bdf-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.899 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.900 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.903 221554 INFO os_vif [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:9e:1b,bridge_name='br-int',has_traffic_filtering=True,id=2da07bdf-313d-4a90-a81e-e531c63b3d54,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2da07bdf-31')#033[00m
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.904 221554 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.905 221554 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.906 221554 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.906 221554 DEBUG nova.compute.manager [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.908 221554 INFO nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Deleting instance files /var/lib/nova/instances/23c338db-50ed-434c-ac85-8190b9b5f194_del#033[00m
Jan 31 02:32:27 np0005603609 nova_compute[221550]: 2026-01-31 07:32:27.909 221554 INFO nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Deletion of /var/lib/nova/instances/23c338db-50ed-434c-ac85-8190b9b5f194_del complete#033[00m
Jan 31 02:32:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:32:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:28.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:32:28 np0005603609 nova_compute[221550]: 2026-01-31 07:32:28.982 221554 DEBUG nova.network.neutron [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Updated VIF entry in instance network info cache for port 2da07bdf-313d-4a90-a81e-e531c63b3d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:32:28 np0005603609 nova_compute[221550]: 2026-01-31 07:32:28.983 221554 DEBUG nova.network.neutron [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Updating instance_info_cache with network_info: [{"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:32:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:29.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:32:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:30.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:32:30 np0005603609 nova_compute[221550]: 2026-01-31 07:32:30.501 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:30 np0005603609 nova_compute[221550]: 2026-01-31 07:32:30.911 221554 DEBUG nova.compute.manager [req-5813832f-da97-42f5-8148-166d6c607567 req-febb3404-226d-4162-8a3b-10f6470c7a97 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received event network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:32:30 np0005603609 nova_compute[221550]: 2026-01-31 07:32:30.911 221554 DEBUG oslo_concurrency.lockutils [req-5813832f-da97-42f5-8148-166d6c607567 req-febb3404-226d-4162-8a3b-10f6470c7a97 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:32:30 np0005603609 nova_compute[221550]: 2026-01-31 07:32:30.912 221554 DEBUG oslo_concurrency.lockutils [req-5813832f-da97-42f5-8148-166d6c607567 req-febb3404-226d-4162-8a3b-10f6470c7a97 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:32:30 np0005603609 nova_compute[221550]: 2026-01-31 07:32:30.913 221554 DEBUG oslo_concurrency.lockutils [req-5813832f-da97-42f5-8148-166d6c607567 req-febb3404-226d-4162-8a3b-10f6470c7a97 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:32:30 np0005603609 nova_compute[221550]: 2026-01-31 07:32:30.913 221554 DEBUG nova.compute.manager [req-5813832f-da97-42f5-8148-166d6c607567 req-febb3404-226d-4162-8a3b-10f6470c7a97 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] No waiting events found dispatching network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:32:30 np0005603609 nova_compute[221550]: 2026-01-31 07:32:30.914 221554 WARNING nova.compute.manager [req-5813832f-da97-42f5-8148-166d6c607567 req-febb3404-226d-4162-8a3b-10f6470c7a97 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received unexpected event network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 for instance with vm_state active and task_state migrating.#033[00m
Jan 31 02:32:30 np0005603609 nova_compute[221550]: 2026-01-31 07:32:30.916 221554 DEBUG oslo_concurrency.lockutils [req-dbf178df-0be9-4769-b3d3-335bdf1e5529 req-3ca65708-9125-489d-8496-216dcfe5dee5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:32:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:31.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:32.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:32 np0005603609 nova_compute[221550]: 2026-01-31 07:32:32.932 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:33 np0005603609 nova_compute[221550]: 2026-01-31 07:32:33.267 221554 DEBUG nova.compute.manager [req-eba39845-8d36-49f1-aaa1-bc4006fd0722 req-efe06a6d-0ad3-4197-a72e-a24b28ac768d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received event network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:32:33 np0005603609 nova_compute[221550]: 2026-01-31 07:32:33.267 221554 DEBUG oslo_concurrency.lockutils [req-eba39845-8d36-49f1-aaa1-bc4006fd0722 req-efe06a6d-0ad3-4197-a72e-a24b28ac768d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:32:33 np0005603609 nova_compute[221550]: 2026-01-31 07:32:33.267 221554 DEBUG oslo_concurrency.lockutils [req-eba39845-8d36-49f1-aaa1-bc4006fd0722 req-efe06a6d-0ad3-4197-a72e-a24b28ac768d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:32:33 np0005603609 nova_compute[221550]: 2026-01-31 07:32:33.267 221554 DEBUG oslo_concurrency.lockutils [req-eba39845-8d36-49f1-aaa1-bc4006fd0722 req-efe06a6d-0ad3-4197-a72e-a24b28ac768d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:32:33 np0005603609 nova_compute[221550]: 2026-01-31 07:32:33.267 221554 DEBUG nova.compute.manager [req-eba39845-8d36-49f1-aaa1-bc4006fd0722 req-efe06a6d-0ad3-4197-a72e-a24b28ac768d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] No waiting events found dispatching network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:32:33 np0005603609 nova_compute[221550]: 2026-01-31 07:32:33.268 221554 WARNING nova.compute.manager [req-eba39845-8d36-49f1-aaa1-bc4006fd0722 req-efe06a6d-0ad3-4197-a72e-a24b28ac768d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received unexpected event network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 for instance with vm_state active and task_state migrating.#033[00m
Jan 31 02:32:33 np0005603609 nova_compute[221550]: 2026-01-31 07:32:33.268 221554 DEBUG nova.compute.manager [req-eba39845-8d36-49f1-aaa1-bc4006fd0722 req-efe06a6d-0ad3-4197-a72e-a24b28ac768d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received event network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:32:33 np0005603609 nova_compute[221550]: 2026-01-31 07:32:33.268 221554 DEBUG oslo_concurrency.lockutils [req-eba39845-8d36-49f1-aaa1-bc4006fd0722 req-efe06a6d-0ad3-4197-a72e-a24b28ac768d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:32:33 np0005603609 nova_compute[221550]: 2026-01-31 07:32:33.268 221554 DEBUG oslo_concurrency.lockutils [req-eba39845-8d36-49f1-aaa1-bc4006fd0722 req-efe06a6d-0ad3-4197-a72e-a24b28ac768d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:32:33 np0005603609 nova_compute[221550]: 2026-01-31 07:32:33.268 221554 DEBUG oslo_concurrency.lockutils [req-eba39845-8d36-49f1-aaa1-bc4006fd0722 req-efe06a6d-0ad3-4197-a72e-a24b28ac768d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:32:33 np0005603609 nova_compute[221550]: 2026-01-31 07:32:33.268 221554 DEBUG nova.compute.manager [req-eba39845-8d36-49f1-aaa1-bc4006fd0722 req-efe06a6d-0ad3-4197-a72e-a24b28ac768d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] No waiting events found dispatching network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:32:33 np0005603609 nova_compute[221550]: 2026-01-31 07:32:33.268 221554 WARNING nova.compute.manager [req-eba39845-8d36-49f1-aaa1-bc4006fd0722 req-efe06a6d-0ad3-4197-a72e-a24b28ac768d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received unexpected event network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 for instance with vm_state active and task_state migrating.#033[00m
Jan 31 02:32:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:33.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:34 np0005603609 podman[225438]: 2026-01-31 07:32:34.168324989 +0000 UTC m=+0.048625951 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:32:34 np0005603609 podman[225437]: 2026-01-31 07:32:34.194736126 +0000 UTC m=+0.076976605 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 02:32:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:32:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:34.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:32:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:35.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:35 np0005603609 nova_compute[221550]: 2026-01-31 07:32:35.504 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:36.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:37.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:37 np0005603609 nova_compute[221550]: 2026-01-31 07:32:37.820 221554 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:32:37 np0005603609 nova_compute[221550]: 2026-01-31 07:32:37.821 221554 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:32:37 np0005603609 nova_compute[221550]: 2026-01-31 07:32:37.821 221554 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:32:37 np0005603609 nova_compute[221550]: 2026-01-31 07:32:37.893 221554 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:32:37 np0005603609 nova_compute[221550]: 2026-01-31 07:32:37.893 221554 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:32:37 np0005603609 nova_compute[221550]: 2026-01-31 07:32:37.893 221554 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:32:37 np0005603609 nova_compute[221550]: 2026-01-31 07:32:37.894 221554 DEBUG nova.compute.resource_tracker [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:32:37 np0005603609 nova_compute[221550]: 2026-01-31 07:32:37.894 221554 DEBUG oslo_concurrency.processutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:32:37 np0005603609 nova_compute[221550]: 2026-01-31 07:32:37.935 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:32:38 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2490782525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:32:38 np0005603609 nova_compute[221550]: 2026-01-31 07:32:38.301 221554 DEBUG oslo_concurrency.processutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:32:38 np0005603609 nova_compute[221550]: 2026-01-31 07:32:38.432 221554 WARNING nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:32:38 np0005603609 nova_compute[221550]: 2026-01-31 07:32:38.433 221554 DEBUG nova.compute.resource_tracker [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5000MB free_disk=20.921836853027344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:32:38 np0005603609 nova_compute[221550]: 2026-01-31 07:32:38.433 221554 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:32:38 np0005603609 nova_compute[221550]: 2026-01-31 07:32:38.434 221554 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:32:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:38.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:38 np0005603609 nova_compute[221550]: 2026-01-31 07:32:38.733 221554 DEBUG nova.compute.resource_tracker [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Migration for instance 23c338db-50ed-434c-ac85-8190b9b5f194 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 31 02:32:38 np0005603609 nova_compute[221550]: 2026-01-31 07:32:38.792 221554 DEBUG nova.compute.resource_tracker [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Jan 31 02:32:38 np0005603609 nova_compute[221550]: 2026-01-31 07:32:38.868 221554 DEBUG nova.compute.resource_tracker [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Migration fea0878b-0db1-45df-9b74-0c1f00190bc4 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 02:32:38 np0005603609 nova_compute[221550]: 2026-01-31 07:32:38.868 221554 DEBUG nova.compute.resource_tracker [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:32:38 np0005603609 nova_compute[221550]: 2026-01-31 07:32:38.869 221554 DEBUG nova.compute.resource_tracker [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:32:38 np0005603609 nova_compute[221550]: 2026-01-31 07:32:38.936 221554 DEBUG oslo_concurrency.processutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:32:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:39.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:32:39 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/529714511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:32:39 np0005603609 nova_compute[221550]: 2026-01-31 07:32:39.349 221554 DEBUG oslo_concurrency.processutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:32:39 np0005603609 nova_compute[221550]: 2026-01-31 07:32:39.355 221554 DEBUG nova.compute.provider_tree [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:32:39 np0005603609 nova_compute[221550]: 2026-01-31 07:32:39.410 221554 DEBUG nova.scheduler.client.report [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:32:39 np0005603609 nova_compute[221550]: 2026-01-31 07:32:39.413 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844744.4126368, 23c338db-50ed-434c-ac85-8190b9b5f194 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:32:39 np0005603609 nova_compute[221550]: 2026-01-31 07:32:39.414 221554 INFO nova.compute.manager [-] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:32:39 np0005603609 nova_compute[221550]: 2026-01-31 07:32:39.475 221554 DEBUG nova.compute.resource_tracker [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:32:39 np0005603609 nova_compute[221550]: 2026-01-31 07:32:39.475 221554 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:32:39 np0005603609 nova_compute[221550]: 2026-01-31 07:32:39.481 221554 DEBUG nova.compute.manager [None req-332e964d-f625-4762-8518-77782af40bf1 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:32:39 np0005603609 nova_compute[221550]: 2026-01-31 07:32:39.486 221554 INFO nova.compute.manager [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Migrating instance to compute-0.ctlplane.example.com finished successfully.#033[00m
Jan 31 02:32:39 np0005603609 nova_compute[221550]: 2026-01-31 07:32:39.748 221554 INFO nova.scheduler.client.report [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Deleted allocation for migration fea0878b-0db1-45df-9b74-0c1f00190bc4#033[00m
Jan 31 02:32:39 np0005603609 nova_compute[221550]: 2026-01-31 07:32:39.749 221554 DEBUG nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Jan 31 02:32:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:40.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:40 np0005603609 nova_compute[221550]: 2026-01-31 07:32:40.508 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:41.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:42.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:42 np0005603609 nova_compute[221550]: 2026-01-31 07:32:42.939 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:43.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:44.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:44 np0005603609 nova_compute[221550]: 2026-01-31 07:32:44.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:45.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:45 np0005603609 nova_compute[221550]: 2026-01-31 07:32:45.522 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:45 np0005603609 nova_compute[221550]: 2026-01-31 07:32:45.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:45 np0005603609 nova_compute[221550]: 2026-01-31 07:32:45.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:32:45 np0005603609 nova_compute[221550]: 2026-01-31 07:32:45.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:32:45 np0005603609 nova_compute[221550]: 2026-01-31 07:32:45.705 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:32:45 np0005603609 nova_compute[221550]: 2026-01-31 07:32:45.705 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:45.773395) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765773502, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 837, "num_deletes": 251, "total_data_size": 1605022, "memory_usage": 1633472, "flush_reason": "Manual Compaction"}
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765786648, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1049459, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21935, "largest_seqno": 22767, "table_properties": {"data_size": 1045501, "index_size": 1674, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9031, "raw_average_key_size": 19, "raw_value_size": 1037552, "raw_average_value_size": 2245, "num_data_blocks": 75, "num_entries": 462, "num_filter_entries": 462, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844708, "oldest_key_time": 1769844708, "file_creation_time": 1769844765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 13309 microseconds, and 5365 cpu microseconds.
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:45.786725) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1049459 bytes OK
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:45.786763) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:45.788352) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:45.788381) EVENT_LOG_v1 {"time_micros": 1769844765788371, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:45.788416) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1600628, prev total WAL file size 1600628, number of live WAL files 2.
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:45.789283) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1024KB)], [42(8294KB)]
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765789362, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 9542570, "oldest_snapshot_seqno": -1}
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 4712 keys, 7504925 bytes, temperature: kUnknown
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765856793, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 7504925, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7474285, "index_size": 17730, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 118072, "raw_average_key_size": 25, "raw_value_size": 7389842, "raw_average_value_size": 1568, "num_data_blocks": 726, "num_entries": 4712, "num_filter_entries": 4712, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769844765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:45.857267) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7504925 bytes
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:45.869793) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.3 rd, 111.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.1 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(16.2) write-amplify(7.2) OK, records in: 5227, records dropped: 515 output_compression: NoCompression
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:45.869845) EVENT_LOG_v1 {"time_micros": 1769844765869825, "job": 24, "event": "compaction_finished", "compaction_time_micros": 67552, "compaction_time_cpu_micros": 14390, "output_level": 6, "num_output_files": 1, "total_output_size": 7504925, "num_input_records": 5227, "num_output_records": 4712, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765870330, "job": 24, "event": "table_file_deletion", "file_number": 44}
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765871940, "job": 24, "event": "table_file_deletion", "file_number": 42}
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:45.789132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:45.872110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:45.872118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:45.872119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:45.872121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:45 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:45.872123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:46.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:46 np0005603609 nova_compute[221550]: 2026-01-31 07:32:46.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:46 np0005603609 nova_compute[221550]: 2026-01-31 07:32:46.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:46 np0005603609 nova_compute[221550]: 2026-01-31 07:32:46.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:47.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:47 np0005603609 nova_compute[221550]: 2026-01-31 07:32:47.944 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:32:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:48.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:32:48 np0005603609 nova_compute[221550]: 2026-01-31 07:32:48.643 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:32:48 np0005603609 nova_compute[221550]: 2026-01-31 07:32:48.643 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:32:48 np0005603609 nova_compute[221550]: 2026-01-31 07:32:48.644 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:32:48 np0005603609 nova_compute[221550]: 2026-01-31 07:32:48.644 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:32:48 np0005603609 nova_compute[221550]: 2026-01-31 07:32:48.645 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:32:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:32:49 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1117879748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:32:49 np0005603609 nova_compute[221550]: 2026-01-31 07:32:49.097 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:32:49 np0005603609 nova_compute[221550]: 2026-01-31 07:32:49.230 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:32:49 np0005603609 nova_compute[221550]: 2026-01-31 07:32:49.231 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5007MB free_disk=20.897197723388672GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:32:49 np0005603609 nova_compute[221550]: 2026-01-31 07:32:49.232 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:32:49 np0005603609 nova_compute[221550]: 2026-01-31 07:32:49.232 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:32:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:49.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:49 np0005603609 nova_compute[221550]: 2026-01-31 07:32:49.342 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:32:49 np0005603609 nova_compute[221550]: 2026-01-31 07:32:49.343 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:32:49 np0005603609 nova_compute[221550]: 2026-01-31 07:32:49.387 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:32:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:32:49 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/467322244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:32:49 np0005603609 nova_compute[221550]: 2026-01-31 07:32:49.817 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:32:49 np0005603609 nova_compute[221550]: 2026-01-31 07:32:49.822 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:32:49 np0005603609 nova_compute[221550]: 2026-01-31 07:32:49.852 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:32:49 np0005603609 nova_compute[221550]: 2026-01-31 07:32:49.853 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:32:49 np0005603609 nova_compute[221550]: 2026-01-31 07:32:49.854 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:32:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:50.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:50 np0005603609 nova_compute[221550]: 2026-01-31 07:32:50.521 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:51.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:52.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:52 np0005603609 nova_compute[221550]: 2026-01-31 07:32:52.855 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:52 np0005603609 nova_compute[221550]: 2026-01-31 07:32:52.855 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:52 np0005603609 nova_compute[221550]: 2026-01-31 07:32:52.856 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:32:52 np0005603609 nova_compute[221550]: 2026-01-31 07:32:52.856 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:32:52 np0005603609 nova_compute[221550]: 2026-01-31 07:32:52.948 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:32:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:53.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:53.452741) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773452794, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 332, "num_deletes": 256, "total_data_size": 221867, "memory_usage": 229848, "flush_reason": "Manual Compaction"}
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773458351, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 146562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22772, "largest_seqno": 23099, "table_properties": {"data_size": 144471, "index_size": 255, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4893, "raw_average_key_size": 16, "raw_value_size": 140335, "raw_average_value_size": 483, "num_data_blocks": 12, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844766, "oldest_key_time": 1769844766, "file_creation_time": 1769844773, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 5637 microseconds, and 933 cpu microseconds.
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:53.458382) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 146562 bytes OK
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:53.458395) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:53.462079) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:53.462105) EVENT_LOG_v1 {"time_micros": 1769844773462096, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:53.462127) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 219515, prev total WAL file size 219515, number of live WAL files 2.
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:53.462482) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(143KB)], [45(7329KB)]
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773462544, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 7651487, "oldest_snapshot_seqno": -1}
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 4482 keys, 7531167 bytes, temperature: kUnknown
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773639806, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 7531167, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7501381, "index_size": 17452, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 114506, "raw_average_key_size": 25, "raw_value_size": 7420257, "raw_average_value_size": 1655, "num_data_blocks": 710, "num_entries": 4482, "num_filter_entries": 4482, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769844773, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:53.640158) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7531167 bytes
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:53.676197) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 43.1 rd, 42.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 7.2 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(103.6) write-amplify(51.4) OK, records in: 5002, records dropped: 520 output_compression: NoCompression
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:53.676229) EVENT_LOG_v1 {"time_micros": 1769844773676216, "job": 26, "event": "compaction_finished", "compaction_time_micros": 177356, "compaction_time_cpu_micros": 11901, "output_level": 6, "num_output_files": 1, "total_output_size": 7531167, "num_input_records": 5002, "num_output_records": 4482, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773676375, "job": 26, "event": "table_file_deletion", "file_number": 47}
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773676931, "job": 26, "event": "table_file_deletion", "file_number": 45}
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:53.462391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:53.677026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:53.677033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:53.677035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:53.677038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:32:53.677040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:54.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:55.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:55 np0005603609 nova_compute[221550]: 2026-01-31 07:32:55.523 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:56.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:57.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:57 np0005603609 nova_compute[221550]: 2026-01-31 07:32:57.952 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:32:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:58.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:32:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:59.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:00 np0005603609 nova_compute[221550]: 2026-01-31 07:33:00.526 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:00.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:33:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:01.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:33:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:33:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:02.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:33:02 np0005603609 nova_compute[221550]: 2026-01-31 07:33:02.956 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:03.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:03 np0005603609 nova_compute[221550]: 2026-01-31 07:33:03.974 221554 DEBUG oslo_concurrency.lockutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquiring lock "8e37bfba-68d6-438e-baa4-12552ba6308e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:03 np0005603609 nova_compute[221550]: 2026-01-31 07:33:03.975 221554 DEBUG oslo_concurrency.lockutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "8e37bfba-68d6-438e-baa4-12552ba6308e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:33:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:04.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:33:04 np0005603609 nova_compute[221550]: 2026-01-31 07:33:04.556 221554 DEBUG nova.compute.manager [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:33:04 np0005603609 nova_compute[221550]: 2026-01-31 07:33:04.714 221554 DEBUG oslo_concurrency.lockutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:04 np0005603609 nova_compute[221550]: 2026-01-31 07:33:04.714 221554 DEBUG oslo_concurrency.lockutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:04 np0005603609 nova_compute[221550]: 2026-01-31 07:33:04.721 221554 DEBUG nova.virt.hardware [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:33:04 np0005603609 nova_compute[221550]: 2026-01-31 07:33:04.722 221554 INFO nova.compute.claims [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:33:04 np0005603609 nova_compute[221550]: 2026-01-31 07:33:04.933 221554 DEBUG oslo_concurrency.processutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:05 np0005603609 podman[225593]: 2026-01-31 07:33:05.162162533 +0000 UTC m=+0.043411863 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 02:33:05 np0005603609 podman[225592]: 2026-01-31 07:33:05.185694169 +0000 UTC m=+0.072019784 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 02:33:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:05.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:33:05 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1858440281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.379 221554 DEBUG oslo_concurrency.processutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.384 221554 DEBUG nova.compute.provider_tree [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.410 221554 DEBUG nova.scheduler.client.report [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.444 221554 DEBUG oslo_concurrency.lockutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.446 221554 DEBUG nova.compute.manager [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.503 221554 DEBUG nova.compute.manager [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.504 221554 DEBUG nova.network.neutron [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.525 221554 INFO nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.528 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.545 221554 DEBUG nova.compute.manager [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.664 221554 DEBUG nova.compute.manager [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.665 221554 DEBUG nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.666 221554 INFO nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Creating image(s)#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.691 221554 DEBUG nova.storage.rbd_utils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] rbd image 8e37bfba-68d6-438e-baa4-12552ba6308e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.719 221554 DEBUG nova.storage.rbd_utils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] rbd image 8e37bfba-68d6-438e-baa4-12552ba6308e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.744 221554 DEBUG nova.storage.rbd_utils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] rbd image 8e37bfba-68d6-438e-baa4-12552ba6308e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.748 221554 DEBUG oslo_concurrency.processutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.813 221554 DEBUG oslo_concurrency.processutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.814 221554 DEBUG oslo_concurrency.lockutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.815 221554 DEBUG oslo_concurrency.lockutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.815 221554 DEBUG oslo_concurrency.lockutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.844 221554 DEBUG nova.storage.rbd_utils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] rbd image 8e37bfba-68d6-438e-baa4-12552ba6308e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:05 np0005603609 nova_compute[221550]: 2026-01-31 07:33:05.848 221554 DEBUG oslo_concurrency.processutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 8e37bfba-68d6-438e-baa4-12552ba6308e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.255 221554 DEBUG oslo_concurrency.processutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 8e37bfba-68d6-438e-baa4-12552ba6308e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.340 221554 DEBUG nova.storage.rbd_utils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] resizing rbd image 8e37bfba-68d6-438e-baa4-12552ba6308e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:33:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:33:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:06.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.612 221554 DEBUG nova.network.neutron [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.614 221554 DEBUG nova.compute.manager [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.736 221554 DEBUG nova.objects.instance [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e37bfba-68d6-438e-baa4-12552ba6308e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.764 221554 DEBUG nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.764 221554 DEBUG nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Ensure instance console log exists: /var/lib/nova/instances/8e37bfba-68d6-438e-baa4-12552ba6308e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.765 221554 DEBUG oslo_concurrency.lockutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.765 221554 DEBUG oslo_concurrency.lockutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.765 221554 DEBUG oslo_concurrency.lockutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.767 221554 DEBUG nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.770 221554 WARNING nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.776 221554 DEBUG nova.virt.libvirt.host [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.777 221554 DEBUG nova.virt.libvirt.host [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.780 221554 DEBUG nova.virt.libvirt.host [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.781 221554 DEBUG nova.virt.libvirt.host [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.784 221554 DEBUG nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.784 221554 DEBUG nova.virt.hardware [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.784 221554 DEBUG nova.virt.hardware [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.785 221554 DEBUG nova.virt.hardware [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.785 221554 DEBUG nova.virt.hardware [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.785 221554 DEBUG nova.virt.hardware [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.786 221554 DEBUG nova.virt.hardware [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.786 221554 DEBUG nova.virt.hardware [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.786 221554 DEBUG nova.virt.hardware [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.786 221554 DEBUG nova.virt.hardware [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.787 221554 DEBUG nova.virt.hardware [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.787 221554 DEBUG nova.virt.hardware [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:33:06 np0005603609 nova_compute[221550]: 2026-01-31 07:33:06.790 221554 DEBUG oslo_concurrency.processutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:33:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/39252063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:33:07 np0005603609 nova_compute[221550]: 2026-01-31 07:33:07.225 221554 DEBUG oslo_concurrency.processutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:07 np0005603609 nova_compute[221550]: 2026-01-31 07:33:07.273 221554 DEBUG nova.storage.rbd_utils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] rbd image 8e37bfba-68d6-438e-baa4-12552ba6308e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:07 np0005603609 nova_compute[221550]: 2026-01-31 07:33:07.279 221554 DEBUG oslo_concurrency.processutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:07.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:07.465 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:07.466 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:07.466 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:33:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/802696960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:33:07 np0005603609 nova_compute[221550]: 2026-01-31 07:33:07.690 221554 DEBUG oslo_concurrency.processutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:07 np0005603609 nova_compute[221550]: 2026-01-31 07:33:07.691 221554 DEBUG nova.objects.instance [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e37bfba-68d6-438e-baa4-12552ba6308e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:33:07 np0005603609 nova_compute[221550]: 2026-01-31 07:33:07.714 221554 DEBUG nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  <uuid>8e37bfba-68d6-438e-baa4-12552ba6308e</uuid>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  <name>instance-00000007</name>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <nova:name>tempest-LiveMigrationNegativeTest-server-631754079</nova:name>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:33:06</nova:creationTime>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:33:07 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:        <nova:user uuid="44a89b2f3d1949e59e137d897a95ee6c">tempest-LiveMigrationNegativeTest-342716053-project-member</nova:user>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:        <nova:project uuid="a9699f00d4814c89960a4f9e938fc7e2">tempest-LiveMigrationNegativeTest-342716053</nova:project>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <entry name="serial">8e37bfba-68d6-438e-baa4-12552ba6308e</entry>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <entry name="uuid">8e37bfba-68d6-438e-baa4-12552ba6308e</entry>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/8e37bfba-68d6-438e-baa4-12552ba6308e_disk">
Jan 31 02:33:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:33:07 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/8e37bfba-68d6-438e-baa4-12552ba6308e_disk.config">
Jan 31 02:33:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:33:07 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/8e37bfba-68d6-438e-baa4-12552ba6308e/console.log" append="off"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:33:07 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:33:07 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:33:07 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:33:07 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:33:07 np0005603609 nova_compute[221550]: 2026-01-31 07:33:07.793 221554 DEBUG nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:33:07 np0005603609 nova_compute[221550]: 2026-01-31 07:33:07.794 221554 DEBUG nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:33:07 np0005603609 nova_compute[221550]: 2026-01-31 07:33:07.794 221554 INFO nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Using config drive#033[00m
Jan 31 02:33:07 np0005603609 nova_compute[221550]: 2026-01-31 07:33:07.825 221554 DEBUG nova.storage.rbd_utils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] rbd image 8e37bfba-68d6-438e-baa4-12552ba6308e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:07 np0005603609 nova_compute[221550]: 2026-01-31 07:33:07.960 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.087 221554 INFO nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Creating config drive at /var/lib/nova/instances/8e37bfba-68d6-438e-baa4-12552ba6308e/disk.config#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.094 221554 DEBUG oslo_concurrency.processutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e37bfba-68d6-438e-baa4-12552ba6308e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpuby5oaob execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.217 221554 DEBUG oslo_concurrency.processutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e37bfba-68d6-438e-baa4-12552ba6308e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpuby5oaob" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.247 221554 DEBUG nova.storage.rbd_utils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] rbd image 8e37bfba-68d6-438e-baa4-12552ba6308e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.251 221554 DEBUG oslo_concurrency.processutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e37bfba-68d6-438e-baa4-12552ba6308e/disk.config 8e37bfba-68d6-438e-baa4-12552ba6308e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.434 221554 DEBUG oslo_concurrency.processutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e37bfba-68d6-438e-baa4-12552ba6308e/disk.config 8e37bfba-68d6-438e-baa4-12552ba6308e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.436 221554 INFO nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Deleting local config drive /var/lib/nova/instances/8e37bfba-68d6-438e-baa4-12552ba6308e/disk.config because it was imported into RBD.#033[00m
Jan 31 02:33:08 np0005603609 systemd-machined[190912]: New machine qemu-3-instance-00000007.
Jan 31 02:33:08 np0005603609 systemd[1]: Started Virtual Machine qemu-3-instance-00000007.
Jan 31 02:33:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:08.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.897 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844788.8974555, 8e37bfba-68d6-438e-baa4-12552ba6308e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.898 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.900 221554 DEBUG nova.compute.manager [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.901 221554 DEBUG nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.903 221554 INFO nova.virt.libvirt.driver [-] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Instance spawned successfully.#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.903 221554 DEBUG nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.933 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.937 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.941 221554 DEBUG nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.941 221554 DEBUG nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.941 221554 DEBUG nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.942 221554 DEBUG nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.942 221554 DEBUG nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.943 221554 DEBUG nova.virt.libvirt.driver [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.977 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.977 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844788.8982716, 8e37bfba-68d6-438e-baa4-12552ba6308e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:33:08 np0005603609 nova_compute[221550]: 2026-01-31 07:33:08.978 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] VM Started (Lifecycle Event)#033[00m
Jan 31 02:33:09 np0005603609 nova_compute[221550]: 2026-01-31 07:33:09.006 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:33:09 np0005603609 nova_compute[221550]: 2026-01-31 07:33:09.008 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:33:09 np0005603609 nova_compute[221550]: 2026-01-31 07:33:09.040 221554 INFO nova.compute.manager [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Took 3.38 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:33:09 np0005603609 nova_compute[221550]: 2026-01-31 07:33:09.040 221554 DEBUG nova.compute.manager [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:33:09 np0005603609 nova_compute[221550]: 2026-01-31 07:33:09.041 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:33:09 np0005603609 nova_compute[221550]: 2026-01-31 07:33:09.124 221554 INFO nova.compute.manager [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Took 4.45 seconds to build instance.#033[00m
Jan 31 02:33:09 np0005603609 nova_compute[221550]: 2026-01-31 07:33:09.149 221554 DEBUG oslo_concurrency.lockutils [None req-e83d41ea-c217-4074-99f9-b92d12d2bd1c 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "8e37bfba-68d6-438e-baa4-12552ba6308e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:09.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:10 np0005603609 nova_compute[221550]: 2026-01-31 07:33:10.530 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:33:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:10.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:33:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:11.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:12.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:12 np0005603609 nova_compute[221550]: 2026-01-31 07:33:12.963 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:33:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:13.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:33:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:14.037 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:33:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:14.038 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:33:14 np0005603609 nova_compute[221550]: 2026-01-31 07:33:14.037 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:14.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:14 np0005603609 nova_compute[221550]: 2026-01-31 07:33:14.611 221554 DEBUG oslo_concurrency.lockutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquiring lock "e58ac907-ed7c-4271-a225-0335ab3c15ae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:14 np0005603609 nova_compute[221550]: 2026-01-31 07:33:14.612 221554 DEBUG oslo_concurrency.lockutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "e58ac907-ed7c-4271-a225-0335ab3c15ae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:14 np0005603609 nova_compute[221550]: 2026-01-31 07:33:14.632 221554 DEBUG nova.compute.manager [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:33:14 np0005603609 nova_compute[221550]: 2026-01-31 07:33:14.724 221554 DEBUG oslo_concurrency.lockutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:14 np0005603609 nova_compute[221550]: 2026-01-31 07:33:14.725 221554 DEBUG oslo_concurrency.lockutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:14 np0005603609 nova_compute[221550]: 2026-01-31 07:33:14.732 221554 DEBUG nova.virt.hardware [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:33:14 np0005603609 nova_compute[221550]: 2026-01-31 07:33:14.732 221554 INFO nova.compute.claims [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:33:14 np0005603609 nova_compute[221550]: 2026-01-31 07:33:14.919 221554 DEBUG oslo_concurrency.processutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:33:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3348589074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.304 221554 DEBUG oslo_concurrency.processutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.311 221554 DEBUG nova.compute.provider_tree [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:33:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:15.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.338 221554 DEBUG nova.scheduler.client.report [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.376 221554 DEBUG oslo_concurrency.lockutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.376 221554 DEBUG nova.compute.manager [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.450 221554 DEBUG nova.compute.manager [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.451 221554 DEBUG nova.network.neutron [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.531 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.543 221554 INFO nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.574 221554 DEBUG nova.compute.manager [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.687 221554 DEBUG nova.compute.manager [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.689 221554 DEBUG nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.690 221554 INFO nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Creating image(s)#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.737 221554 DEBUG nova.storage.rbd_utils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] rbd image e58ac907-ed7c-4271-a225-0335ab3c15ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.764 221554 DEBUG nova.storage.rbd_utils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] rbd image e58ac907-ed7c-4271-a225-0335ab3c15ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.789 221554 DEBUG nova.storage.rbd_utils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] rbd image e58ac907-ed7c-4271-a225-0335ab3c15ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.792 221554 DEBUG oslo_concurrency.processutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.871 221554 DEBUG oslo_concurrency.processutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.872 221554 DEBUG oslo_concurrency.lockutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.872 221554 DEBUG oslo_concurrency.lockutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.873 221554 DEBUG oslo_concurrency.lockutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.897 221554 DEBUG nova.storage.rbd_utils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] rbd image e58ac907-ed7c-4271-a225-0335ab3c15ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.899 221554 DEBUG oslo_concurrency.processutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e58ac907-ed7c-4271-a225-0335ab3c15ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.973 221554 DEBUG nova.network.neutron [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 31 02:33:15 np0005603609 nova_compute[221550]: 2026-01-31 07:33:15.974 221554 DEBUG nova.compute.manager [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.192 221554 DEBUG oslo_concurrency.processutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e58ac907-ed7c-4271-a225-0335ab3c15ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.287 221554 DEBUG nova.storage.rbd_utils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] resizing rbd image e58ac907-ed7c-4271-a225-0335ab3c15ae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.419 221554 DEBUG nova.objects.instance [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lazy-loading 'migration_context' on Instance uuid e58ac907-ed7c-4271-a225-0335ab3c15ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.439 221554 DEBUG nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.441 221554 DEBUG nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Ensure instance console log exists: /var/lib/nova/instances/e58ac907-ed7c-4271-a225-0335ab3c15ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.443 221554 DEBUG oslo_concurrency.lockutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.444 221554 DEBUG oslo_concurrency.lockutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.446 221554 DEBUG oslo_concurrency.lockutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.452 221554 DEBUG nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.460 221554 WARNING nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.467 221554 DEBUG nova.virt.libvirt.host [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.468 221554 DEBUG nova.virt.libvirt.host [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.474 221554 DEBUG nova.virt.libvirt.host [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.475 221554 DEBUG nova.virt.libvirt.host [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.478 221554 DEBUG nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.479 221554 DEBUG nova.virt.hardware [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.479 221554 DEBUG nova.virt.hardware [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.480 221554 DEBUG nova.virt.hardware [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.481 221554 DEBUG nova.virt.hardware [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.482 221554 DEBUG nova.virt.hardware [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.483 221554 DEBUG nova.virt.hardware [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.483 221554 DEBUG nova.virt.hardware [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.484 221554 DEBUG nova.virt.hardware [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.484 221554 DEBUG nova.virt.hardware [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.484 221554 DEBUG nova.virt.hardware [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.486 221554 DEBUG nova.virt.hardware [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.492 221554 DEBUG oslo_concurrency.processutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:16.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:33:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2707124294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.913 221554 DEBUG oslo_concurrency.processutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.941 221554 DEBUG nova.storage.rbd_utils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] rbd image e58ac907-ed7c-4271-a225-0335ab3c15ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:33:16 np0005603609 nova_compute[221550]: 2026-01-31 07:33:16.945 221554 DEBUG oslo_concurrency.processutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:33:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:17.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:33:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2950240011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:33:17 np0005603609 nova_compute[221550]: 2026-01-31 07:33:17.442 221554 DEBUG oslo_concurrency.processutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:33:17 np0005603609 nova_compute[221550]: 2026-01-31 07:33:17.444 221554 DEBUG nova.objects.instance [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lazy-loading 'pci_devices' on Instance uuid e58ac907-ed7c-4271-a225-0335ab3c15ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:33:17 np0005603609 nova_compute[221550]: 2026-01-31 07:33:17.464 221554 DEBUG nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  <uuid>e58ac907-ed7c-4271-a225-0335ab3c15ae</uuid>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  <name>instance-0000000a</name>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <nova:name>tempest-LiveMigrationNegativeTest-server-1870658004</nova:name>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:33:16</nova:creationTime>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:33:17 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:        <nova:user uuid="44a89b2f3d1949e59e137d897a95ee6c">tempest-LiveMigrationNegativeTest-342716053-project-member</nova:user>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:        <nova:project uuid="a9699f00d4814c89960a4f9e938fc7e2">tempest-LiveMigrationNegativeTest-342716053</nova:project>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <entry name="serial">e58ac907-ed7c-4271-a225-0335ab3c15ae</entry>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <entry name="uuid">e58ac907-ed7c-4271-a225-0335ab3c15ae</entry>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/e58ac907-ed7c-4271-a225-0335ab3c15ae_disk">
Jan 31 02:33:17 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:33:17 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/e58ac907-ed7c-4271-a225-0335ab3c15ae_disk.config">
Jan 31 02:33:17 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:33:17 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/e58ac907-ed7c-4271-a225-0335ab3c15ae/console.log" append="off"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:33:17 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:33:17 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:33:17 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:33:17 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 02:33:17 np0005603609 nova_compute[221550]: 2026-01-31 07:33:17.548 221554 DEBUG nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:33:17 np0005603609 nova_compute[221550]: 2026-01-31 07:33:17.549 221554 DEBUG nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:33:17 np0005603609 nova_compute[221550]: 2026-01-31 07:33:17.550 221554 INFO nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Using config drive
Jan 31 02:33:17 np0005603609 nova_compute[221550]: 2026-01-31 07:33:17.585 221554 DEBUG nova.storage.rbd_utils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] rbd image e58ac907-ed7c-4271-a225-0335ab3c15ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:33:17 np0005603609 nova_compute[221550]: 2026-01-31 07:33:17.830 221554 INFO nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Creating config drive at /var/lib/nova/instances/e58ac907-ed7c-4271-a225-0335ab3c15ae/disk.config
Jan 31 02:33:17 np0005603609 nova_compute[221550]: 2026-01-31 07:33:17.834 221554 DEBUG oslo_concurrency.processutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e58ac907-ed7c-4271-a225-0335ab3c15ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmphkjmlpyl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:33:17 np0005603609 nova_compute[221550]: 2026-01-31 07:33:17.960 221554 DEBUG oslo_concurrency.processutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e58ac907-ed7c-4271-a225-0335ab3c15ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmphkjmlpyl" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:33:17 np0005603609 nova_compute[221550]: 2026-01-31 07:33:17.992 221554 DEBUG nova.storage.rbd_utils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] rbd image e58ac907-ed7c-4271-a225-0335ab3c15ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:17.999 221554 DEBUG oslo_concurrency.processutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e58ac907-ed7c-4271-a225-0335ab3c15ae/disk.config e58ac907-ed7c-4271-a225-0335ab3c15ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.010 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:33:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:18.040 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.201 221554 DEBUG oslo_concurrency.processutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e58ac907-ed7c-4271-a225-0335ab3c15ae/disk.config e58ac907-ed7c-4271-a225-0335ab3c15ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.202 221554 INFO nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Deleting local config drive /var/lib/nova/instances/e58ac907-ed7c-4271-a225-0335ab3c15ae/disk.config because it was imported into RBD.
Jan 31 02:33:18 np0005603609 systemd-machined[190912]: New machine qemu-4-instance-0000000a.
Jan 31 02:33:18 np0005603609 systemd[1]: Started Virtual Machine qemu-4-instance-0000000a.
Jan 31 02:33:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:18.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.837 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844798.837318, e58ac907-ed7c-4271-a225-0335ab3c15ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.839 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] VM Resumed (Lifecycle Event)
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.843 221554 DEBUG nova.compute.manager [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.843 221554 DEBUG nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.846 221554 INFO nova.virt.libvirt.driver [-] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Instance spawned successfully.
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.846 221554 DEBUG nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.864 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.868 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.871 221554 DEBUG nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.871 221554 DEBUG nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.872 221554 DEBUG nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.872 221554 DEBUG nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.872 221554 DEBUG nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.873 221554 DEBUG nova.virt.libvirt.driver [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.897 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.897 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844798.8427682, e58ac907-ed7c-4271-a225-0335ab3c15ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.897 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] VM Started (Lifecycle Event)
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.920 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.922 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.939 221554 INFO nova.compute.manager [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Took 3.25 seconds to spawn the instance on the hypervisor.
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.939 221554 DEBUG nova.compute.manager [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:33:18 np0005603609 nova_compute[221550]: 2026-01-31 07:33:18.941 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:33:19 np0005603609 nova_compute[221550]: 2026-01-31 07:33:19.006 221554 INFO nova.compute.manager [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Took 4.31 seconds to build instance.
Jan 31 02:33:19 np0005603609 nova_compute[221550]: 2026-01-31 07:33:19.031 221554 DEBUG oslo_concurrency.lockutils [None req-91328479-b07c-471a-8ca6-58b08a5c05d8 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "e58ac907-ed7c-4271-a225-0335ab3c15ae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:33:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:19.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:19 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:19Z|00044|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 02:33:20 np0005603609 nova_compute[221550]: 2026-01-31 07:33:20.533 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:33:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:20.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:20 np0005603609 nova_compute[221550]: 2026-01-31 07:33:20.642 221554 DEBUG nova.objects.instance [None req-441fc63e-e870-4c6d-8b47-36adce90d633 dd92da532fbd49548e7a93f628dbba23 2cf74fea573741a89a174219563a54c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e58ac907-ed7c-4271-a225-0335ab3c15ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:33:20 np0005603609 nova_compute[221550]: 2026-01-31 07:33:20.665 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844800.6645584, e58ac907-ed7c-4271-a225-0335ab3c15ae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:33:20 np0005603609 nova_compute[221550]: 2026-01-31 07:33:20.666 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] VM Paused (Lifecycle Event)
Jan 31 02:33:20 np0005603609 nova_compute[221550]: 2026-01-31 07:33:20.689 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:33:20 np0005603609 nova_compute[221550]: 2026-01-31 07:33:20.694 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:33:20 np0005603609 nova_compute[221550]: 2026-01-31 07:33:20.728 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 31 02:33:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:21 np0005603609 nova_compute[221550]: 2026-01-31 07:33:21.216 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "434e368f-4c40-49d6-8935-ed469ee03717" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:21 np0005603609 nova_compute[221550]: 2026-01-31 07:33:21.216 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "434e368f-4c40-49d6-8935-ed469ee03717" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:21 np0005603609 nova_compute[221550]: 2026-01-31 07:33:21.248 221554 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:33:21 np0005603609 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Jan 31 02:33:21 np0005603609 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Consumed 2.578s CPU time.
Jan 31 02:33:21 np0005603609 systemd-machined[190912]: Machine qemu-4-instance-0000000a terminated.
Jan 31 02:33:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:21.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:21 np0005603609 nova_compute[221550]: 2026-01-31 07:33:21.416 221554 DEBUG nova.compute.manager [None req-441fc63e-e870-4c6d-8b47-36adce90d633 dd92da532fbd49548e7a93f628dbba23 2cf74fea573741a89a174219563a54c3 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:33:21 np0005603609 nova_compute[221550]: 2026-01-31 07:33:21.438 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:21 np0005603609 nova_compute[221550]: 2026-01-31 07:33:21.438 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:21 np0005603609 nova_compute[221550]: 2026-01-31 07:33:21.447 221554 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:33:21 np0005603609 nova_compute[221550]: 2026-01-31 07:33:21.447 221554 INFO nova.compute.claims [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:33:21 np0005603609 nova_compute[221550]: 2026-01-31 07:33:21.669 221554 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:33:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3803425268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.116 221554 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.121 221554 DEBUG nova.compute.provider_tree [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.139 221554 DEBUG nova.scheduler.client.report [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.177 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.178 221554 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.253 221554 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.254 221554 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.289 221554 INFO nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.333 221554 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.484 221554 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.485 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.486 221554 INFO nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Creating image(s)#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.515 221554 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 434e368f-4c40-49d6-8935-ed469ee03717_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.547 221554 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 434e368f-4c40-49d6-8935-ed469ee03717_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:22.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.586 221554 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 434e368f-4c40-49d6-8935-ed469ee03717_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.591 221554 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.614 221554 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Automatically allocating a network for project ca84ce9280d74b4588f89bf679f563fa. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.669 221554 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.670 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.671 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.671 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.711 221554 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 434e368f-4c40-49d6-8935-ed469ee03717_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.716 221554 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 434e368f-4c40-49d6-8935-ed469ee03717_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.825 221554 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Creating tmpfile /var/lib/nova/instances/tmp603267h0 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Jan 31 02:33:22 np0005603609 nova_compute[221550]: 2026-01-31 07:33:22.991 221554 DEBUG nova.compute.manager [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp603267h0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Jan 31 02:33:23 np0005603609 nova_compute[221550]: 2026-01-31 07:33:23.015 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:23.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:23 np0005603609 nova_compute[221550]: 2026-01-31 07:33:23.422 221554 DEBUG oslo_concurrency.lockutils [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquiring lock "e58ac907-ed7c-4271-a225-0335ab3c15ae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:23 np0005603609 nova_compute[221550]: 2026-01-31 07:33:23.422 221554 DEBUG oslo_concurrency.lockutils [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "e58ac907-ed7c-4271-a225-0335ab3c15ae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:23 np0005603609 nova_compute[221550]: 2026-01-31 07:33:23.423 221554 DEBUG oslo_concurrency.lockutils [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquiring lock "e58ac907-ed7c-4271-a225-0335ab3c15ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:23 np0005603609 nova_compute[221550]: 2026-01-31 07:33:23.423 221554 DEBUG oslo_concurrency.lockutils [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "e58ac907-ed7c-4271-a225-0335ab3c15ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:23 np0005603609 nova_compute[221550]: 2026-01-31 07:33:23.423 221554 DEBUG oslo_concurrency.lockutils [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "e58ac907-ed7c-4271-a225-0335ab3c15ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:23 np0005603609 nova_compute[221550]: 2026-01-31 07:33:23.425 221554 INFO nova.compute.manager [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Terminating instance#033[00m
Jan 31 02:33:23 np0005603609 nova_compute[221550]: 2026-01-31 07:33:23.426 221554 DEBUG oslo_concurrency.lockutils [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquiring lock "refresh_cache-e58ac907-ed7c-4271-a225-0335ab3c15ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:33:23 np0005603609 nova_compute[221550]: 2026-01-31 07:33:23.426 221554 DEBUG oslo_concurrency.lockutils [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquired lock "refresh_cache-e58ac907-ed7c-4271-a225-0335ab3c15ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:33:23 np0005603609 nova_compute[221550]: 2026-01-31 07:33:23.426 221554 DEBUG nova.network.neutron [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:33:23 np0005603609 nova_compute[221550]: 2026-01-31 07:33:23.676 221554 DEBUG nova.network.neutron [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:33:23 np0005603609 nova_compute[221550]: 2026-01-31 07:33:23.827 221554 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 434e368f-4c40-49d6-8935-ed469ee03717_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:23 np0005603609 nova_compute[221550]: 2026-01-31 07:33:23.909 221554 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] resizing rbd image 434e368f-4c40-49d6-8935-ed469ee03717_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:33:24 np0005603609 nova_compute[221550]: 2026-01-31 07:33:24.110 221554 DEBUG nova.network.neutron [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:33:24 np0005603609 nova_compute[221550]: 2026-01-31 07:33:24.120 221554 DEBUG nova.objects.instance [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lazy-loading 'migration_context' on Instance uuid 434e368f-4c40-49d6-8935-ed469ee03717 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:33:24 np0005603609 nova_compute[221550]: 2026-01-31 07:33:24.146 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:33:24 np0005603609 nova_compute[221550]: 2026-01-31 07:33:24.147 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Ensure instance console log exists: /var/lib/nova/instances/434e368f-4c40-49d6-8935-ed469ee03717/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:33:24 np0005603609 nova_compute[221550]: 2026-01-31 07:33:24.147 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:24 np0005603609 nova_compute[221550]: 2026-01-31 07:33:24.148 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:24 np0005603609 nova_compute[221550]: 2026-01-31 07:33:24.149 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:24 np0005603609 nova_compute[221550]: 2026-01-31 07:33:24.150 221554 DEBUG oslo_concurrency.lockutils [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Releasing lock "refresh_cache-e58ac907-ed7c-4271-a225-0335ab3c15ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:33:24 np0005603609 nova_compute[221550]: 2026-01-31 07:33:24.150 221554 DEBUG nova.compute.manager [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:33:24 np0005603609 nova_compute[221550]: 2026-01-31 07:33:24.162 221554 INFO nova.virt.libvirt.driver [-] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Instance destroyed successfully.#033[00m
Jan 31 02:33:24 np0005603609 nova_compute[221550]: 2026-01-31 07:33:24.163 221554 DEBUG nova.objects.instance [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lazy-loading 'resources' on Instance uuid e58ac907-ed7c-4271-a225-0335ab3c15ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:33:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:24.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:24 np0005603609 nova_compute[221550]: 2026-01-31 07:33:24.967 221554 INFO nova.virt.libvirt.driver [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Deleting instance files /var/lib/nova/instances/e58ac907-ed7c-4271-a225-0335ab3c15ae_del#033[00m
Jan 31 02:33:24 np0005603609 nova_compute[221550]: 2026-01-31 07:33:24.968 221554 INFO nova.virt.libvirt.driver [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Deletion of /var/lib/nova/instances/e58ac907-ed7c-4271-a225-0335ab3c15ae_del complete#033[00m
Jan 31 02:33:24 np0005603609 nova_compute[221550]: 2026-01-31 07:33:24.979 221554 DEBUG nova.compute.manager [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp603267h0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4444d8df-265a-48a7-a945-08eb55a365e1',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Jan 31 02:33:25 np0005603609 nova_compute[221550]: 2026-01-31 07:33:25.027 221554 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:33:25 np0005603609 nova_compute[221550]: 2026-01-31 07:33:25.028 221554 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquired lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:33:25 np0005603609 nova_compute[221550]: 2026-01-31 07:33:25.028 221554 DEBUG nova.network.neutron [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:33:25 np0005603609 nova_compute[221550]: 2026-01-31 07:33:25.065 221554 INFO nova.compute.manager [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Took 0.91 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:33:25 np0005603609 nova_compute[221550]: 2026-01-31 07:33:25.066 221554 DEBUG oslo.service.loopingcall [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:33:25 np0005603609 nova_compute[221550]: 2026-01-31 07:33:25.066 221554 DEBUG nova.compute.manager [-] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:33:25 np0005603609 nova_compute[221550]: 2026-01-31 07:33:25.067 221554 DEBUG nova.network.neutron [-] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:33:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:25.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:25 np0005603609 nova_compute[221550]: 2026-01-31 07:33:25.402 221554 DEBUG nova.network.neutron [-] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:33:25 np0005603609 nova_compute[221550]: 2026-01-31 07:33:25.461 221554 DEBUG nova.network.neutron [-] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:33:25 np0005603609 nova_compute[221550]: 2026-01-31 07:33:25.516 221554 INFO nova.compute.manager [-] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Took 0.45 seconds to deallocate network for instance.#033[00m
Jan 31 02:33:25 np0005603609 nova_compute[221550]: 2026-01-31 07:33:25.535 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:25 np0005603609 nova_compute[221550]: 2026-01-31 07:33:25.572 221554 DEBUG oslo_concurrency.lockutils [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:25 np0005603609 nova_compute[221550]: 2026-01-31 07:33:25.573 221554 DEBUG oslo_concurrency.lockutils [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:25 np0005603609 nova_compute[221550]: 2026-01-31 07:33:25.669 221554 DEBUG oslo_concurrency.processutils [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:26 np0005603609 nova_compute[221550]: 2026-01-31 07:33:26.099 221554 DEBUG oslo_concurrency.processutils [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:26 np0005603609 nova_compute[221550]: 2026-01-31 07:33:26.104 221554 DEBUG nova.compute.provider_tree [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:33:26 np0005603609 nova_compute[221550]: 2026-01-31 07:33:26.127 221554 DEBUG nova.scheduler.client.report [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:33:26 np0005603609 nova_compute[221550]: 2026-01-31 07:33:26.149 221554 DEBUG oslo_concurrency.lockutils [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:26 np0005603609 nova_compute[221550]: 2026-01-31 07:33:26.200 221554 INFO nova.scheduler.client.report [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Deleted allocations for instance e58ac907-ed7c-4271-a225-0335ab3c15ae#033[00m
Jan 31 02:33:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:26 np0005603609 nova_compute[221550]: 2026-01-31 07:33:26.284 221554 DEBUG oslo_concurrency.lockutils [None req-a87807df-65c2-4168-b3d0-be68dc9e16f1 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "e58ac907-ed7c-4271-a225-0335ab3c15ae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:33:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:26.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:33:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:33:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:33:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:33:26 np0005603609 nova_compute[221550]: 2026-01-31 07:33:26.952 221554 DEBUG oslo_concurrency.lockutils [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquiring lock "8e37bfba-68d6-438e-baa4-12552ba6308e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:26 np0005603609 nova_compute[221550]: 2026-01-31 07:33:26.953 221554 DEBUG oslo_concurrency.lockutils [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "8e37bfba-68d6-438e-baa4-12552ba6308e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:26 np0005603609 nova_compute[221550]: 2026-01-31 07:33:26.953 221554 DEBUG oslo_concurrency.lockutils [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquiring lock "8e37bfba-68d6-438e-baa4-12552ba6308e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:26 np0005603609 nova_compute[221550]: 2026-01-31 07:33:26.953 221554 DEBUG oslo_concurrency.lockutils [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "8e37bfba-68d6-438e-baa4-12552ba6308e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:26 np0005603609 nova_compute[221550]: 2026-01-31 07:33:26.953 221554 DEBUG oslo_concurrency.lockutils [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "8e37bfba-68d6-438e-baa4-12552ba6308e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:26 np0005603609 nova_compute[221550]: 2026-01-31 07:33:26.954 221554 INFO nova.compute.manager [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Terminating instance#033[00m
Jan 31 02:33:26 np0005603609 nova_compute[221550]: 2026-01-31 07:33:26.955 221554 DEBUG oslo_concurrency.lockutils [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquiring lock "refresh_cache-8e37bfba-68d6-438e-baa4-12552ba6308e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:33:26 np0005603609 nova_compute[221550]: 2026-01-31 07:33:26.955 221554 DEBUG oslo_concurrency.lockutils [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquired lock "refresh_cache-8e37bfba-68d6-438e-baa4-12552ba6308e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:33:26 np0005603609 nova_compute[221550]: 2026-01-31 07:33:26.955 221554 DEBUG nova.network.neutron [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:33:27 np0005603609 nova_compute[221550]: 2026-01-31 07:33:27.151 221554 DEBUG nova.network.neutron [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:33:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:27.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:27 np0005603609 nova_compute[221550]: 2026-01-31 07:33:27.435 221554 DEBUG nova.network.neutron [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:33:27 np0005603609 nova_compute[221550]: 2026-01-31 07:33:27.452 221554 DEBUG oslo_concurrency.lockutils [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Releasing lock "refresh_cache-8e37bfba-68d6-438e-baa4-12552ba6308e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:33:27 np0005603609 nova_compute[221550]: 2026-01-31 07:33:27.453 221554 DEBUG nova.compute.manager [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:33:27 np0005603609 nova_compute[221550]: 2026-01-31 07:33:27.471 221554 DEBUG nova.network.neutron [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updating instance_info_cache with network_info: [{"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:33:27 np0005603609 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 31 02:33:27 np0005603609 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Consumed 12.946s CPU time.
Jan 31 02:33:27 np0005603609 systemd-machined[190912]: Machine qemu-3-instance-00000007 terminated.
Jan 31 02:33:27 np0005603609 nova_compute[221550]: 2026-01-31 07:33:27.605 221554 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Releasing lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:33:27 np0005603609 nova_compute[221550]: 2026-01-31 07:33:27.606 221554 DEBUG os_brick.utils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 02:33:27 np0005603609 nova_compute[221550]: 2026-01-31 07:33:27.607 221554 INFO oslo.privsep.daemon [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpfp3n89gn/privsep.sock']#033[00m
Jan 31 02:33:27 np0005603609 nova_compute[221550]: 2026-01-31 07:33:27.667 221554 INFO nova.virt.libvirt.driver [-] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Instance destroyed successfully.#033[00m
Jan 31 02:33:27 np0005603609 nova_compute[221550]: 2026-01-31 07:33:27.668 221554 DEBUG nova.objects.instance [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lazy-loading 'resources' on Instance uuid 8e37bfba-68d6-438e-baa4-12552ba6308e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.019 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.237 221554 INFO oslo.privsep.daemon [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.106 226739 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.109 226739 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.110 226739 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.110 226739 INFO oslo.privsep.daemon [-] privsep daemon running as pid 226739#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.240 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[e80b5469-b648-444d-8fe1-ef249742bca4]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.272 221554 INFO nova.virt.libvirt.driver [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Deleting instance files /var/lib/nova/instances/8e37bfba-68d6-438e-baa4-12552ba6308e_del#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.274 221554 INFO nova.virt.libvirt.driver [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Deletion of /var/lib/nova/instances/8e37bfba-68d6-438e-baa4-12552ba6308e_del complete#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.340 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.350 221554 INFO nova.compute.manager [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.351 221554 DEBUG oslo.service.loopingcall [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.351 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.351 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0ec00b-3928-462f-83f3-2b67d02daecf]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.352 221554 DEBUG nova.compute.manager [-] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.353 221554 DEBUG nova.network.neutron [-] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.357 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.361 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.362 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[3143fac9-6ace-4dfe-a721-0d2015d2948a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.364 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.373 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.373 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[155839ec-77bf-489e-9650-3fb4385c22cd]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.376 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[e3da58e9-392d-47cd-968a-5842f72fa58f]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.376 221554 DEBUG oslo_concurrency.processutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.399 221554 DEBUG oslo_concurrency.processutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.403 221554 DEBUG os_brick.initiator.connectors.lightos [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.404 221554 DEBUG os_brick.initiator.connectors.lightos [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.405 221554 DEBUG os_brick.initiator.connectors.lightos [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.405 221554 DEBUG os_brick.utils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] <== get_connector_properties: return (797ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 02:33:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:28.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.613 221554 DEBUG nova.network.neutron [-] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.633 221554 DEBUG nova.network.neutron [-] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.650 221554 INFO nova.compute.manager [-] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Took 0.30 seconds to deallocate network for instance.#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.711 221554 DEBUG oslo_concurrency.lockutils [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.711 221554 DEBUG oslo_concurrency.lockutils [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:28 np0005603609 nova_compute[221550]: 2026-01-31 07:33:28.871 221554 DEBUG oslo_concurrency.processutils [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:33:29 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3738968175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:33:29 np0005603609 nova_compute[221550]: 2026-01-31 07:33:29.279 221554 DEBUG oslo_concurrency.processutils [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:29 np0005603609 nova_compute[221550]: 2026-01-31 07:33:29.287 221554 DEBUG nova.compute.provider_tree [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:33:29 np0005603609 nova_compute[221550]: 2026-01-31 07:33:29.312 221554 DEBUG nova.scheduler.client.report [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:33:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:29.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:29 np0005603609 nova_compute[221550]: 2026-01-31 07:33:29.355 221554 DEBUG oslo_concurrency.lockutils [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:29 np0005603609 nova_compute[221550]: 2026-01-31 07:33:29.406 221554 INFO nova.scheduler.client.report [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Deleted allocations for instance 8e37bfba-68d6-438e-baa4-12552ba6308e#033[00m
Jan 31 02:33:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:33:29 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2346403018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:33:29 np0005603609 nova_compute[221550]: 2026-01-31 07:33:29.521 221554 DEBUG oslo_concurrency.lockutils [None req-f04acb77-09e5-4d9c-bc08-eeb820f9b71d 44a89b2f3d1949e59e137d897a95ee6c a9699f00d4814c89960a4f9e938fc7e2 - - default default] Lock "8e37bfba-68d6-438e-baa4-12552ba6308e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.052 221554 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp603267h0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4444d8df-265a-48a7-a945-08eb55a365e1',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={802e207e-cc7b-4779-89dc-d399ba68dc38='574cf48c-3357-4054-9e8a-58b071261019'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.053 221554 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Creating instance directory: /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.053 221554 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Ensure instance console log exists: /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.054 221554 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.055 221554 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.055 221554 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.056 221554 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.063 221554 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.065 221554 DEBUG nova.virt.libvirt.vif [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:33:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-396476796',display_name='tempest-LiveMigrationTest-server-396476796',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-396476796',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:33:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dfb4d4079ac944b288d5e285ce1de95a',ramdisk_id='',reservation_id='r-m03yz7pt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-48073594',owner_user_name='tempest-LiveMigrationTest-48073594-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:33:20Z,user_data=None,user_id='0a59df5da6284e4e8764816e1f8dfaa3',uuid=4444d8df-265a-48a7-a945-08eb55a365e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.066 221554 DEBUG nova.network.os_vif_util [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converting VIF {"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.067 221554 DEBUG nova.network.os_vif_util [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.067 221554 DEBUG os_vif [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.068 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.069 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.070 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.074 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.074 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc9a4557-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.075 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc9a4557-33, col_values=(('external_ids', {'iface-id': 'cc9a4557-33da-44f5-87b4-ca945cbc819c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:9d:19', 'vm-uuid': '4444d8df-265a-48a7-a945-08eb55a365e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:33:30 np0005603609 NetworkManager[49064]: <info>  [1769844810.0790] manager: (tapcc9a4557-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.081 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.087 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.088 221554 INFO os_vif [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33')#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.092 221554 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.093 221554 DEBUG nova.compute.manager [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp603267h0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4444d8df-265a-48a7-a945-08eb55a365e1',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={802e207e-cc7b-4779-89dc-d399ba68dc38='574cf48c-3357-4054-9e8a-58b071261019'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Jan 31 02:33:30 np0005603609 nova_compute[221550]: 2026-01-31 07:33:30.538 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:30.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:31.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:31 np0005603609 nova_compute[221550]: 2026-01-31 07:33:31.524 221554 DEBUG nova.network.neutron [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Port cc9a4557-33da-44f5-87b4-ca945cbc819c updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Jan 31 02:33:31 np0005603609 nova_compute[221550]: 2026-01-31 07:33:31.853 221554 DEBUG nova.compute.manager [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp603267h0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4444d8df-265a-48a7-a945-08eb55a365e1',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={802e207e-cc7b-4779-89dc-d399ba68dc38='574cf48c-3357-4054-9e8a-58b071261019'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Jan 31 02:33:31 np0005603609 systemd[1]: Starting libvirt proxy daemon...
Jan 31 02:33:32 np0005603609 systemd[1]: Started libvirt proxy daemon.
Jan 31 02:33:32 np0005603609 kernel: tapcc9a4557-33: entered promiscuous mode
Jan 31 02:33:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:32Z|00045|binding|INFO|Claiming lport cc9a4557-33da-44f5-87b4-ca945cbc819c for this additional chassis.
Jan 31 02:33:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:32Z|00046|binding|INFO|cc9a4557-33da-44f5-87b4-ca945cbc819c: Claiming fa:16:3e:3c:9d:19 10.100.0.11
Jan 31 02:33:32 np0005603609 nova_compute[221550]: 2026-01-31 07:33:32.189 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:32 np0005603609 NetworkManager[49064]: <info>  [1769844812.1910] manager: (tapcc9a4557-33): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Jan 31 02:33:32 np0005603609 nova_compute[221550]: 2026-01-31 07:33:32.192 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:32 np0005603609 nova_compute[221550]: 2026-01-31 07:33:32.211 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:32 np0005603609 systemd-machined[190912]: New machine qemu-5-instance-00000009.
Jan 31 02:33:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:32Z|00047|binding|INFO|Setting lport cc9a4557-33da-44f5-87b4-ca945cbc819c ovn-installed in OVS
Jan 31 02:33:32 np0005603609 nova_compute[221550]: 2026-01-31 07:33:32.223 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:32 np0005603609 systemd[1]: Started Virtual Machine qemu-5-instance-00000009.
Jan 31 02:33:32 np0005603609 systemd-udevd[226805]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:33:32 np0005603609 NetworkManager[49064]: <info>  [1769844812.2549] device (tapcc9a4557-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:33:32 np0005603609 NetworkManager[49064]: <info>  [1769844812.2557] device (tapcc9a4557-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:33:32 np0005603609 nova_compute[221550]: 2026-01-31 07:33:32.266 221554 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Automatically allocated network: {'id': '3741a1e4-1d7f-4ca0-b02c-790a05701782', 'name': 'auto_allocated_network', 'tenant_id': 'ca84ce9280d74b4588f89bf679f563fa', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['023e1bb7-94fd-4d3e-985f-de3a59c69198', 'c3df2e1c-5e37-495f-93dc-b811f92707e7'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2026-01-31T07:33:22Z', 'updated_at': '2026-01-31T07:33:31Z', 'revision_number': 4, 'project_id': 'ca84ce9280d74b4588f89bf679f563fa'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478#033[00m
Jan 31 02:33:32 np0005603609 nova_compute[221550]: 2026-01-31 07:33:32.269 221554 DEBUG nova.policy [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38001f2ce5654228b098939fd9619d3e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca84ce9280d74b4588f89bf679f563fa', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:33:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:33:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:32.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:33:33 np0005603609 nova_compute[221550]: 2026-01-31 07:33:33.007 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844813.0069873, 4444d8df-265a-48a7-a945-08eb55a365e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:33:33 np0005603609 nova_compute[221550]: 2026-01-31 07:33:33.008 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] VM Started (Lifecycle Event)#033[00m
Jan 31 02:33:33 np0005603609 nova_compute[221550]: 2026-01-31 07:33:33.032 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:33:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:33.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:33 np0005603609 nova_compute[221550]: 2026-01-31 07:33:33.543 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844813.5430815, 4444d8df-265a-48a7-a945-08eb55a365e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:33:33 np0005603609 nova_compute[221550]: 2026-01-31 07:33:33.543 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:33:33 np0005603609 nova_compute[221550]: 2026-01-31 07:33:33.561 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:33:33 np0005603609 nova_compute[221550]: 2026-01-31 07:33:33.565 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:33:33 np0005603609 nova_compute[221550]: 2026-01-31 07:33:33.582 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Jan 31 02:33:33 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:33:33 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:33:34 np0005603609 nova_compute[221550]: 2026-01-31 07:33:34.000 221554 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Successfully created port: b4547e99-5de6-41da-be86-73186521c22b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:33:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:34.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:35 np0005603609 nova_compute[221550]: 2026-01-31 07:33:35.078 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:35 np0005603609 nova_compute[221550]: 2026-01-31 07:33:35.197 221554 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Successfully updated port: b4547e99-5de6-41da-be86-73186521c22b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:33:35 np0005603609 nova_compute[221550]: 2026-01-31 07:33:35.217 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "refresh_cache-434e368f-4c40-49d6-8935-ed469ee03717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:33:35 np0005603609 nova_compute[221550]: 2026-01-31 07:33:35.218 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquired lock "refresh_cache-434e368f-4c40-49d6-8935-ed469ee03717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:33:35 np0005603609 nova_compute[221550]: 2026-01-31 07:33:35.218 221554 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:33:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:33:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:35.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:33:35 np0005603609 nova_compute[221550]: 2026-01-31 07:33:35.380 221554 DEBUG nova.compute.manager [req-14c996f2-2bac-404c-b770-b0565ba51129 req-14b806cb-a166-4080-ab57-0ff6f1bd861b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Received event network-changed-b4547e99-5de6-41da-be86-73186521c22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:33:35 np0005603609 nova_compute[221550]: 2026-01-31 07:33:35.381 221554 DEBUG nova.compute.manager [req-14c996f2-2bac-404c-b770-b0565ba51129 req-14b806cb-a166-4080-ab57-0ff6f1bd861b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Refreshing instance network info cache due to event network-changed-b4547e99-5de6-41da-be86-73186521c22b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:33:35 np0005603609 nova_compute[221550]: 2026-01-31 07:33:35.381 221554 DEBUG oslo_concurrency.lockutils [req-14c996f2-2bac-404c-b770-b0565ba51129 req-14b806cb-a166-4080-ab57-0ff6f1bd861b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-434e368f-4c40-49d6-8935-ed469ee03717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:33:35 np0005603609 nova_compute[221550]: 2026-01-31 07:33:35.529 221554 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:33:35 np0005603609 nova_compute[221550]: 2026-01-31 07:33:35.540 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:35 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:35Z|00048|binding|INFO|Claiming lport cc9a4557-33da-44f5-87b4-ca945cbc819c for this chassis.
Jan 31 02:33:35 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:35Z|00049|binding|INFO|cc9a4557-33da-44f5-87b4-ca945cbc819c: Claiming fa:16:3e:3c:9d:19 10.100.0.11
Jan 31 02:33:35 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:35Z|00050|binding|INFO|Setting lport cc9a4557-33da-44f5-87b4-ca945cbc819c up in Southbound
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.707 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:9d:19 10.100.0.11'], port_security=['fa:16:3e:3c:9d:19 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4444d8df-265a-48a7-a945-08eb55a365e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb4d4079ac944b288d5e285ce1de95a', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'abfb90a3-3499-4078-8409-95077c250314', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8850cb79-5a97-415d-8eee-4d7273f04968, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=cc9a4557-33da-44f5-87b4-ca945cbc819c) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.709 140058 INFO neutron.agent.ovn.metadata.agent [-] Port cc9a4557-33da-44f5-87b4-ca945cbc819c in datapath 272cbcfe-dc1b-4319-84a2-27d245d969a3 bound to our chassis#033[00m
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.710 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 272cbcfe-dc1b-4319-84a2-27d245d969a3#033[00m
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.720 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[895a39b2-906d-42a9-8086-46df8a59df0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.721 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap272cbcfe-d1 in ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.723 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap272cbcfe-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.723 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f2aef6a9-26bc-4fa5-adef-be101cc7e63b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.725 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[408117cc-f25a-4633-9dca-d10f14b7284b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.740 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[91cc1db3-48e5-4c2d-bea9-23d7ebb991f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.764 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[088ff7ab-603d-4bf8-9c23-5881dc175c36]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.786 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[5e882c4d-1576-4630-b759-d4d16e50d77d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:35 np0005603609 NetworkManager[49064]: <info>  [1769844815.8164] manager: (tap272cbcfe-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.816 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[79741851-438f-4984-8514-beaf91a6318d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:35 np0005603609 systemd-udevd[226939]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.851 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c267468c-48ef-485e-b741-45dc91a0e923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.855 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[fc20ef04-aa46-430b-8af7-afcf075e4e04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:35 np0005603609 podman[226910]: 2026-01-31 07:33:35.858018943 +0000 UTC m=+0.088476671 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:33:35 np0005603609 NetworkManager[49064]: <info>  [1769844815.8783] device (tap272cbcfe-d0): carrier: link connected
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.881 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[1ccca946-7650-4e16-b312-630984ffcad8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:35 np0005603609 podman[226908]: 2026-01-31 07:33:35.886318635 +0000 UTC m=+0.121907448 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.900 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd64837-1454-40b1-9dfa-d71647cf0201]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap272cbcfe-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:ea:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505557, 'reachable_time': 33488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 226972, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.915 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[19d3a014-506a-4016-bb61-68e449f4acc6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee9:eac2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505557, 'tstamp': 505557}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 226973, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.930 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[263acebb-54b0-44c1-88fc-7a329312cb3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap272cbcfe-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:ea:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505557, 'reachable_time': 33488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 226974, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:35 np0005603609 nova_compute[221550]: 2026-01-31 07:33:35.933 221554 INFO nova.compute.manager [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Post operation of migration started#033[00m
Jan 31 02:33:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:35.954 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0ebcb8af-625a-44f4-8445-4e610cb7ff1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:36.004 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f49c4d32-5e88-44a9-9850-c0be8a09f572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:36.006 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap272cbcfe-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:36.006 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:36.007 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap272cbcfe-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:33:36 np0005603609 nova_compute[221550]: 2026-01-31 07:33:36.009 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:36 np0005603609 kernel: tap272cbcfe-d0: entered promiscuous mode
Jan 31 02:33:36 np0005603609 nova_compute[221550]: 2026-01-31 07:33:36.011 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:36 np0005603609 NetworkManager[49064]: <info>  [1769844816.0117] manager: (tap272cbcfe-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:36.012 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap272cbcfe-d0, col_values=(('external_ids', {'iface-id': '8bd64eda-9666-44b9-9b11-431cc2aca18a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:33:36 np0005603609 nova_compute[221550]: 2026-01-31 07:33:36.013 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:36 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:36Z|00051|binding|INFO|Releasing lport 8bd64eda-9666-44b9-9b11-431cc2aca18a from this chassis (sb_readonly=0)
Jan 31 02:33:36 np0005603609 nova_compute[221550]: 2026-01-31 07:33:36.024 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:36.026 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/272cbcfe-dc1b-4319-84a2-27d245d969a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/272cbcfe-dc1b-4319-84a2-27d245d969a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:36.027 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fe5478-33f1-402b-93d9-3fadd3662385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:36.028 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-272cbcfe-dc1b-4319-84a2-27d245d969a3
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/272cbcfe-dc1b-4319-84a2-27d245d969a3.pid.haproxy
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 272cbcfe-dc1b-4319-84a2-27d245d969a3
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:33:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:36.028 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'env', 'PROCESS_TAG=haproxy-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/272cbcfe-dc1b-4319-84a2-27d245d969a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:33:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:36 np0005603609 podman[227007]: 2026-01-31 07:33:36.349623509 +0000 UTC m=+0.049361027 container create ddbf3fe93ebd9a01726d0234b07cbadcae8c6dd27a964705cf12681634ffbc40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 02:33:36 np0005603609 systemd[1]: Started libpod-conmon-ddbf3fe93ebd9a01726d0234b07cbadcae8c6dd27a964705cf12681634ffbc40.scope.
Jan 31 02:33:36 np0005603609 podman[227007]: 2026-01-31 07:33:36.322533097 +0000 UTC m=+0.022270595 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:33:36 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:33:36 np0005603609 nova_compute[221550]: 2026-01-31 07:33:36.418 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844801.4164355, e58ac907-ed7c-4271-a225-0335ab3c15ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:33:36 np0005603609 nova_compute[221550]: 2026-01-31 07:33:36.419 221554 INFO nova.compute.manager [-] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:33:36 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa8fc31de6af8cdf47a7f8c3b8abfd3088f31c6bb14c84d8a373a6457237329/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:33:36 np0005603609 podman[227007]: 2026-01-31 07:33:36.436647114 +0000 UTC m=+0.136384592 container init ddbf3fe93ebd9a01726d0234b07cbadcae8c6dd27a964705cf12681634ffbc40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 02:33:36 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:36Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3c:9d:19 10.100.0.11
Jan 31 02:33:36 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:36Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3c:9d:19 10.100.0.11
Jan 31 02:33:36 np0005603609 podman[227007]: 2026-01-31 07:33:36.444832054 +0000 UTC m=+0.144569532 container start ddbf3fe93ebd9a01726d0234b07cbadcae8c6dd27a964705cf12681634ffbc40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 02:33:36 np0005603609 nova_compute[221550]: 2026-01-31 07:33:36.446 221554 DEBUG nova.compute.manager [None req-ce24b99a-bb3a-4d1a-a332-75f0da6ddd2c - - - - - -] [instance: e58ac907-ed7c-4271-a225-0335ab3c15ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:33:36 np0005603609 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[227022]: [NOTICE]   (227026) : New worker (227028) forked
Jan 31 02:33:36 np0005603609 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[227022]: [NOTICE]   (227026) : Loading success.
Jan 31 02:33:36 np0005603609 nova_compute[221550]: 2026-01-31 07:33:36.539 221554 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:33:36 np0005603609 nova_compute[221550]: 2026-01-31 07:33:36.540 221554 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquired lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:33:36 np0005603609 nova_compute[221550]: 2026-01-31 07:33:36.541 221554 DEBUG nova.network.neutron [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:33:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:33:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:36.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.185 221554 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Updating instance_info_cache with network_info: [{"id": "b4547e99-5de6-41da-be86-73186521c22b", "address": "fa:16:3e:2d:17:d2", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::20e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547e99-5d", "ovs_interfaceid": "b4547e99-5de6-41da-be86-73186521c22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.215 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Releasing lock "refresh_cache-434e368f-4c40-49d6-8935-ed469ee03717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.216 221554 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Instance network_info: |[{"id": "b4547e99-5de6-41da-be86-73186521c22b", "address": "fa:16:3e:2d:17:d2", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::20e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547e99-5d", "ovs_interfaceid": "b4547e99-5de6-41da-be86-73186521c22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.217 221554 DEBUG oslo_concurrency.lockutils [req-14c996f2-2bac-404c-b770-b0565ba51129 req-14b806cb-a166-4080-ab57-0ff6f1bd861b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-434e368f-4c40-49d6-8935-ed469ee03717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.217 221554 DEBUG nova.network.neutron [req-14c996f2-2bac-404c-b770-b0565ba51129 req-14b806cb-a166-4080-ab57-0ff6f1bd861b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Refreshing network info cache for port b4547e99-5de6-41da-be86-73186521c22b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.223 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Start _get_guest_xml network_info=[{"id": "b4547e99-5de6-41da-be86-73186521c22b", "address": "fa:16:3e:2d:17:d2", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::20e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547e99-5d", "ovs_interfaceid": "b4547e99-5de6-41da-be86-73186521c22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.229 221554 WARNING nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.235 221554 DEBUG nova.virt.libvirt.host [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.236 221554 DEBUG nova.virt.libvirt.host [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.245 221554 DEBUG nova.virt.libvirt.host [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.246 221554 DEBUG nova.virt.libvirt.host [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.248 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.248 221554 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.249 221554 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.249 221554 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.250 221554 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.251 221554 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.251 221554 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.251 221554 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.252 221554 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.252 221554 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.253 221554 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.253 221554 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.258 221554 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:37.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:33:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/882411958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.779 221554 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.814 221554 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 434e368f-4c40-49d6-8935-ed469ee03717_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:37 np0005603609 nova_compute[221550]: 2026-01-31 07:33:37.820 221554 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:33:38 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3081057397' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:33:38 np0005603609 nova_compute[221550]: 2026-01-31 07:33:38.270 221554 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:38 np0005603609 nova_compute[221550]: 2026-01-31 07:33:38.273 221554 DEBUG nova.virt.libvirt.vif [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:33:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2133368505-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2133368505-3',id=13,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca84ce9280d74b4588f89bf679f563fa',ramdisk_id='',reservation_id='r-xsv2fd0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1036306085',owner_user_name='tempest-AutoAllocateNetworkTest-1036306085-project-member'},
tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:33:22Z,user_data=None,user_id='38001f2ce5654228b098939fd9619d3e',uuid=434e368f-4c40-49d6-8935-ed469ee03717,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4547e99-5de6-41da-be86-73186521c22b", "address": "fa:16:3e:2d:17:d2", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::20e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547e99-5d", "ovs_interfaceid": "b4547e99-5de6-41da-be86-73186521c22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:33:38 np0005603609 nova_compute[221550]: 2026-01-31 07:33:38.273 221554 DEBUG nova.network.os_vif_util [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converting VIF {"id": "b4547e99-5de6-41da-be86-73186521c22b", "address": "fa:16:3e:2d:17:d2", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::20e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547e99-5d", "ovs_interfaceid": "b4547e99-5de6-41da-be86-73186521c22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:33:38 np0005603609 nova_compute[221550]: 2026-01-31 07:33:38.275 221554 DEBUG nova.network.os_vif_util [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:17:d2,bridge_name='br-int',has_traffic_filtering=True,id=b4547e99-5de6-41da-be86-73186521c22b,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4547e99-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:33:38 np0005603609 nova_compute[221550]: 2026-01-31 07:33:38.277 221554 DEBUG nova.objects.instance [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 434e368f-4c40-49d6-8935-ed469ee03717 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:33:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:33:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:38.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.076 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  <uuid>434e368f-4c40-49d6-8935-ed469ee03717</uuid>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  <name>instance-0000000d</name>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <nova:name>tempest-tempest.common.compute-instance-2133368505-3</nova:name>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:33:37</nova:creationTime>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        <nova:user uuid="38001f2ce5654228b098939fd9619d3e">tempest-AutoAllocateNetworkTest-1036306085-project-member</nova:user>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        <nova:project uuid="ca84ce9280d74b4588f89bf679f563fa">tempest-AutoAllocateNetworkTest-1036306085</nova:project>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        <nova:port uuid="b4547e99-5de6-41da-be86-73186521c22b">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.1.0.47" ipVersion="4"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="fdfe:381f:8400::20e" ipVersion="6"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <entry name="serial">434e368f-4c40-49d6-8935-ed469ee03717</entry>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <entry name="uuid">434e368f-4c40-49d6-8935-ed469ee03717</entry>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/434e368f-4c40-49d6-8935-ed469ee03717_disk">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/434e368f-4c40-49d6-8935-ed469ee03717_disk.config">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:2d:17:d2"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <target dev="tapb4547e99-5d"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/434e368f-4c40-49d6-8935-ed469ee03717/console.log" append="off"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:33:39 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:33:39 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:33:39 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:33:39 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.078 221554 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Preparing to wait for external event network-vif-plugged-b4547e99-5de6-41da-be86-73186521c22b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.078 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "434e368f-4c40-49d6-8935-ed469ee03717-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.079 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "434e368f-4c40-49d6-8935-ed469ee03717-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.079 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "434e368f-4c40-49d6-8935-ed469ee03717-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.080 221554 DEBUG nova.virt.libvirt.vif [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:33:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2133368505-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2133368505-3',id=13,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca84ce9280d74b4588f89bf679f563fa',ramdisk_id='',reservation_id='r-xsv2fd0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1036306085',owner_user_name='tempest-AutoAllocateNetworkTest-1036306085-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:33:22Z,user_data=None,user_id='38001f2ce5654228b098939fd9619d3e',uuid=434e368f-4c40-49d6-8935-ed469ee03717,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4547e99-5de6-41da-be86-73186521c22b", "address": "fa:16:3e:2d:17:d2", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::20e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547e99-5d", "ovs_interfaceid": "b4547e99-5de6-41da-be86-73186521c22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.081 221554 DEBUG nova.network.os_vif_util [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converting VIF {"id": "b4547e99-5de6-41da-be86-73186521c22b", "address": "fa:16:3e:2d:17:d2", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::20e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547e99-5d", "ovs_interfaceid": "b4547e99-5de6-41da-be86-73186521c22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.082 221554 DEBUG nova.network.os_vif_util [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:17:d2,bridge_name='br-int',has_traffic_filtering=True,id=b4547e99-5de6-41da-be86-73186521c22b,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4547e99-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.083 221554 DEBUG os_vif [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:17:d2,bridge_name='br-int',has_traffic_filtering=True,id=b4547e99-5de6-41da-be86-73186521c22b,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4547e99-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.084 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.084 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.085 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.088 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.089 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4547e99-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.089 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb4547e99-5d, col_values=(('external_ids', {'iface-id': 'b4547e99-5de6-41da-be86-73186521c22b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:17:d2', 'vm-uuid': '434e368f-4c40-49d6-8935-ed469ee03717'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.095 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:39 np0005603609 NetworkManager[49064]: <info>  [1769844819.0969] manager: (tapb4547e99-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.099 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.104 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.106 221554 INFO os_vif [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:17:d2,bridge_name='br-int',has_traffic_filtering=True,id=b4547e99-5de6-41da-be86-73186521c22b,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4547e99-5d')#033[00m
Jan 31 02:33:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:39.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.394 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.396 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.396 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] No VIF found with MAC fa:16:3e:2d:17:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.397 221554 INFO nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Using config drive#033[00m
Jan 31 02:33:39 np0005603609 nova_compute[221550]: 2026-01-31 07:33:39.426 221554 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 434e368f-4c40-49d6-8935-ed469ee03717_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.291 221554 DEBUG nova.network.neutron [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updating instance_info_cache with network_info: [{"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.316 221554 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Releasing lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.332 221554 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.333 221554 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.333 221554 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.338 221554 INFO nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Jan 31 02:33:40 np0005603609 virtqemud[221292]: Domain id=5 name='instance-00000009' uuid=4444d8df-265a-48a7-a945-08eb55a365e1 is tainted: custom-monitor
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.452 221554 INFO nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Creating config drive at /var/lib/nova/instances/434e368f-4c40-49d6-8935-ed469ee03717/disk.config#033[00m
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.455 221554 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/434e368f-4c40-49d6-8935-ed469ee03717/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp55efxiwj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.543 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.574 221554 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/434e368f-4c40-49d6-8935-ed469ee03717/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp55efxiwj" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:40.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.615 221554 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 434e368f-4c40-49d6-8935-ed469ee03717_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.621 221554 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/434e368f-4c40-49d6-8935-ed469ee03717/disk.config 434e368f-4c40-49d6-8935-ed469ee03717_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.786 221554 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/434e368f-4c40-49d6-8935-ed469ee03717/disk.config 434e368f-4c40-49d6-8935-ed469ee03717_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.787 221554 INFO nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Deleting local config drive /var/lib/nova/instances/434e368f-4c40-49d6-8935-ed469ee03717/disk.config because it was imported into RBD.#033[00m
Jan 31 02:33:40 np0005603609 kernel: tapb4547e99-5d: entered promiscuous mode
Jan 31 02:33:40 np0005603609 NetworkManager[49064]: <info>  [1769844820.8277] manager: (tapb4547e99-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.829 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:40 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:40Z|00052|binding|INFO|Claiming lport b4547e99-5de6-41da-be86-73186521c22b for this chassis.
Jan 31 02:33:40 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:40Z|00053|binding|INFO|b4547e99-5de6-41da-be86-73186521c22b: Claiming fa:16:3e:2d:17:d2 10.1.0.47 fdfe:381f:8400::20e
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.838 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.848 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:17:d2 10.1.0.47 fdfe:381f:8400::20e'], port_security=['fa:16:3e:2d:17:d2 10.1.0.47 fdfe:381f:8400::20e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.47/26 fdfe:381f:8400::20e/64', 'neutron:device_id': '434e368f-4c40-49d6-8935-ed469ee03717', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca84ce9280d74b4588f89bf679f563fa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4c568fd1-1fe0-466e-9918-da0b2ee34267', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5e73fc0-b2f3-4e49-a543-f424bee97362, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=b4547e99-5de6-41da-be86-73186521c22b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.850 140058 INFO neutron.agent.ovn.metadata.agent [-] Port b4547e99-5de6-41da-be86-73186521c22b in datapath 3741a1e4-1d7f-4ca0-b02c-790a05701782 bound to our chassis#033[00m
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.852 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3741a1e4-1d7f-4ca0-b02c-790a05701782#033[00m
Jan 31 02:33:40 np0005603609 systemd-udevd[227172]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:33:40 np0005603609 systemd-machined[190912]: New machine qemu-6-instance-0000000d.
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.859 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2a6e8e4f-f874-48a2-8e17-6ee61977f22a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.860 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3741a1e4-11 in ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.862 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3741a1e4-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.862 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6d90078f-29de-4529-a2ed-77ac92ba9544]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.863 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e851af52-31a4-494a-94f3-5a0f118e4d1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:40 np0005603609 NetworkManager[49064]: <info>  [1769844820.8689] device (tapb4547e99-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:33:40 np0005603609 NetworkManager[49064]: <info>  [1769844820.8698] device (tapb4547e99-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.874 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5259f3-c4f2-48fe-92b0-e891cb641ae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:40 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:40Z|00054|binding|INFO|Setting lport b4547e99-5de6-41da-be86-73186521c22b ovn-installed in OVS
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.883 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6331b9-4e15-446a-9f79-d6dda344d1c4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:40 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:40Z|00055|binding|INFO|Setting lport b4547e99-5de6-41da-be86-73186521c22b up in Southbound
Jan 31 02:33:40 np0005603609 systemd[1]: Started Virtual Machine qemu-6-instance-0000000d.
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.885 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.902 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[cb585eb1-74ae-465a-a11a-af17c8ce0572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:40 np0005603609 NetworkManager[49064]: <info>  [1769844820.9081] manager: (tap3741a1e4-10): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.908 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[332f164b-b8b8-4268-9c80-e373a72e3793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.936 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[725f9bfb-b842-4dd0-9597-4da83d78e3aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.938 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c30fe69c-0e11-4247-97f1-21ce9a798b6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:40 np0005603609 NetworkManager[49064]: <info>  [1769844820.9590] device (tap3741a1e4-10): carrier: link connected
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.963 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b160b4f8-3acf-4c3f-b389-b4b21fe0658c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.979 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9087eaec-b546-4550-b0a4-e7fcec85c79a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3741a1e4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:7f:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506065, 'reachable_time': 24971, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 227206, 'error': None, 'target': 'ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:40.990 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2fabf8f7-1eb7-4b75-b0df-acb69800572e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef5:7f1a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506065, 'tstamp': 506065}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 227207, 'error': None, 'target': 'ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.989 221554 DEBUG nova.network.neutron [req-14c996f2-2bac-404c-b770-b0565ba51129 req-14b806cb-a166-4080-ab57-0ff6f1bd861b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Updated VIF entry in instance network info cache for port b4547e99-5de6-41da-be86-73186521c22b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:33:40 np0005603609 nova_compute[221550]: 2026-01-31 07:33:40.990 221554 DEBUG nova.network.neutron [req-14c996f2-2bac-404c-b770-b0565ba51129 req-14b806cb-a166-4080-ab57-0ff6f1bd861b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Updating instance_info_cache with network_info: [{"id": "b4547e99-5de6-41da-be86-73186521c22b", "address": "fa:16:3e:2d:17:d2", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::20e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547e99-5d", "ovs_interfaceid": "b4547e99-5de6-41da-be86-73186521c22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:41.003 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c4b1f9-e0bf-4a03-be13-960991de976e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3741a1e4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:7f:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506065, 'reachable_time': 24971, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 227208, 'error': None, 'target': 'ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:41 np0005603609 nova_compute[221550]: 2026-01-31 07:33:41.017 221554 DEBUG oslo_concurrency.lockutils [req-14c996f2-2bac-404c-b770-b0565ba51129 req-14b806cb-a166-4080-ab57-0ff6f1bd861b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-434e368f-4c40-49d6-8935-ed469ee03717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:41.026 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[52b6e847-4960-45fe-b34e-db3e3296f719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:41.070 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[934c8fe0-63c4-4bd5-960a-feb8221b63e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:41.071 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3741a1e4-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:41.071 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:41.072 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3741a1e4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:33:41 np0005603609 kernel: tap3741a1e4-10: entered promiscuous mode
Jan 31 02:33:41 np0005603609 NetworkManager[49064]: <info>  [1769844821.0740] manager: (tap3741a1e4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Jan 31 02:33:41 np0005603609 nova_compute[221550]: 2026-01-31 07:33:41.073 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:41 np0005603609 nova_compute[221550]: 2026-01-31 07:33:41.075 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:41.079 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3741a1e4-10, col_values=(('external_ids', {'iface-id': 'b790fc33-bc7c-415a-abab-507204f5d28b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:33:41 np0005603609 nova_compute[221550]: 2026-01-31 07:33:41.080 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:41 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:41Z|00056|binding|INFO|Releasing lport b790fc33-bc7c-415a-abab-507204f5d28b from this chassis (sb_readonly=0)
Jan 31 02:33:41 np0005603609 nova_compute[221550]: 2026-01-31 07:33:41.085 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:41.086 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3741a1e4-1d7f-4ca0-b02c-790a05701782.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3741a1e4-1d7f-4ca0-b02c-790a05701782.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:41.087 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8d71ddfa-2c97-4b7c-a908-630d37fb4c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:41.087 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-3741a1e4-1d7f-4ca0-b02c-790a05701782
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/3741a1e4-1d7f-4ca0-b02c-790a05701782.pid.haproxy
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 3741a1e4-1d7f-4ca0-b02c-790a05701782
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:33:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:41.087 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'env', 'PROCESS_TAG=haproxy-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3741a1e4-1d7f-4ca0-b02c-790a05701782.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:33:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:41 np0005603609 nova_compute[221550]: 2026-01-31 07:33:41.348 221554 INFO nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Jan 31 02:33:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:33:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:41.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:33:41 np0005603609 podman[227240]: 2026-01-31 07:33:41.422941232 +0000 UTC m=+0.051517020 container create 1b9dab6afc107ba79ffbf1583df1a6b2a1a814e4f4ee260e51d9f0516ff369b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 02:33:41 np0005603609 systemd[1]: Started libpod-conmon-1b9dab6afc107ba79ffbf1583df1a6b2a1a814e4f4ee260e51d9f0516ff369b5.scope.
Jan 31 02:33:41 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:33:41 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7c5d0f48bdffdfdd7e5a7146bf10c4a2c0ea61409d2e55dac8484391c97454/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:33:41 np0005603609 podman[227240]: 2026-01-31 07:33:41.395320427 +0000 UTC m=+0.023896275 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:33:41 np0005603609 podman[227240]: 2026-01-31 07:33:41.493338151 +0000 UTC m=+0.121914029 container init 1b9dab6afc107ba79ffbf1583df1a6b2a1a814e4f4ee260e51d9f0516ff369b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 02:33:41 np0005603609 podman[227240]: 2026-01-31 07:33:41.497206455 +0000 UTC m=+0.125782243 container start 1b9dab6afc107ba79ffbf1583df1a6b2a1a814e4f4ee260e51d9f0516ff369b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:33:41 np0005603609 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[227288]: [NOTICE]   (227300) : New worker (227302) forked
Jan 31 02:33:41 np0005603609 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[227288]: [NOTICE]   (227300) : Loading success.
Jan 31 02:33:41 np0005603609 nova_compute[221550]: 2026-01-31 07:33:41.576 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844821.5765193, 434e368f-4c40-49d6-8935-ed469ee03717 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:33:41 np0005603609 nova_compute[221550]: 2026-01-31 07:33:41.577 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] VM Started (Lifecycle Event)#033[00m
Jan 31 02:33:41 np0005603609 nova_compute[221550]: 2026-01-31 07:33:41.597 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:33:41 np0005603609 nova_compute[221550]: 2026-01-31 07:33:41.601 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844821.5783079, 434e368f-4c40-49d6-8935-ed469ee03717 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:33:41 np0005603609 nova_compute[221550]: 2026-01-31 07:33:41.601 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:33:41 np0005603609 nova_compute[221550]: 2026-01-31 07:33:41.619 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:33:41 np0005603609 nova_compute[221550]: 2026-01-31 07:33:41.622 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:33:41 np0005603609 nova_compute[221550]: 2026-01-31 07:33:41.642 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.273 221554 DEBUG oslo_concurrency.lockutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.274 221554 DEBUG oslo_concurrency.lockutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.288 221554 DEBUG nova.compute.manager [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.354 221554 INFO nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.360 221554 DEBUG nova.compute.manager [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.369 221554 DEBUG oslo_concurrency.lockutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.369 221554 DEBUG oslo_concurrency.lockutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.376 221554 DEBUG nova.virt.hardware [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.376 221554 INFO nova.compute.claims [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.379 221554 DEBUG nova.objects.instance [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.534 221554 DEBUG oslo_concurrency.processutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:42.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.666 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844807.6655903, 8e37bfba-68d6-438e-baa4-12552ba6308e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.667 221554 INFO nova.compute.manager [-] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.702 221554 DEBUG nova.compute.manager [None req-bf487da7-a7f5-4915-a1e6-8da3109da560 - - - - - -] [instance: 8e37bfba-68d6-438e-baa4-12552ba6308e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.732 221554 DEBUG nova.compute.manager [req-b6f09e63-371c-4c43-82f0-b96f319c2415 req-6725ba76-42d8-47a0-b03e-0a953c8887fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.733 221554 DEBUG oslo_concurrency.lockutils [req-b6f09e63-371c-4c43-82f0-b96f319c2415 req-6725ba76-42d8-47a0-b03e-0a953c8887fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.733 221554 DEBUG oslo_concurrency.lockutils [req-b6f09e63-371c-4c43-82f0-b96f319c2415 req-6725ba76-42d8-47a0-b03e-0a953c8887fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.734 221554 DEBUG oslo_concurrency.lockutils [req-b6f09e63-371c-4c43-82f0-b96f319c2415 req-6725ba76-42d8-47a0-b03e-0a953c8887fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.735 221554 DEBUG nova.compute.manager [req-b6f09e63-371c-4c43-82f0-b96f319c2415 req-6725ba76-42d8-47a0-b03e-0a953c8887fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.735 221554 WARNING nova.compute.manager [req-b6f09e63-371c-4c43-82f0-b96f319c2415 req-6725ba76-42d8-47a0-b03e-0a953c8887fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received unexpected event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with vm_state active and task_state None.#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.736 221554 DEBUG nova.compute.manager [req-b6f09e63-371c-4c43-82f0-b96f319c2415 req-6725ba76-42d8-47a0-b03e-0a953c8887fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.736 221554 DEBUG oslo_concurrency.lockutils [req-b6f09e63-371c-4c43-82f0-b96f319c2415 req-6725ba76-42d8-47a0-b03e-0a953c8887fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.737 221554 DEBUG oslo_concurrency.lockutils [req-b6f09e63-371c-4c43-82f0-b96f319c2415 req-6725ba76-42d8-47a0-b03e-0a953c8887fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.738 221554 DEBUG oslo_concurrency.lockutils [req-b6f09e63-371c-4c43-82f0-b96f319c2415 req-6725ba76-42d8-47a0-b03e-0a953c8887fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.738 221554 DEBUG nova.compute.manager [req-b6f09e63-371c-4c43-82f0-b96f319c2415 req-6725ba76-42d8-47a0-b03e-0a953c8887fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.739 221554 WARNING nova.compute.manager [req-b6f09e63-371c-4c43-82f0-b96f319c2415 req-6725ba76-42d8-47a0-b03e-0a953c8887fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received unexpected event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with vm_state active and task_state None.#033[00m
Jan 31 02:33:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:33:42 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/67064187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.939 221554 DEBUG oslo_concurrency.processutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.944 221554 DEBUG nova.compute.provider_tree [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.958 221554 DEBUG nova.scheduler.client.report [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.978 221554 DEBUG oslo_concurrency.lockutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:42 np0005603609 nova_compute[221550]: 2026-01-31 07:33:42.979 221554 DEBUG nova.compute.manager [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.021 221554 DEBUG nova.compute.manager [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.022 221554 DEBUG nova.network.neutron [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.042 221554 INFO nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.061 221554 DEBUG nova.compute.manager [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.171 221554 DEBUG nova.compute.manager [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.173 221554 DEBUG nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.173 221554 INFO nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Creating image(s)#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.211 221554 DEBUG nova.storage.rbd_utils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.255 221554 DEBUG nova.storage.rbd_utils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.296 221554 DEBUG nova.storage.rbd_utils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.300 221554 DEBUG oslo_concurrency.processutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:43.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.378 221554 DEBUG oslo_concurrency.processutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.379 221554 DEBUG oslo_concurrency.lockutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.380 221554 DEBUG oslo_concurrency.lockutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.381 221554 DEBUG oslo_concurrency.lockutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.411 221554 DEBUG nova.storage.rbd_utils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.416 221554 DEBUG oslo_concurrency.processutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.619 221554 DEBUG nova.network.neutron [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.620 221554 DEBUG nova.compute.manager [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.681 221554 DEBUG oslo_concurrency.processutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.786 221554 DEBUG nova.storage.rbd_utils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] resizing rbd image d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:33:43 np0005603609 nova_compute[221550]: 2026-01-31 07:33:43.972 221554 DEBUG nova.objects.instance [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'migration_context' on Instance uuid d7327aed-ddc6-4772-8d2e-6b8be365dd2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.003 221554 DEBUG nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.003 221554 DEBUG nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Ensure instance console log exists: /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.004 221554 DEBUG oslo_concurrency.lockutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.004 221554 DEBUG oslo_concurrency.lockutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.005 221554 DEBUG oslo_concurrency.lockutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.008 221554 DEBUG nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.014 221554 WARNING nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.021 221554 DEBUG nova.virt.libvirt.host [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.022 221554 DEBUG nova.virt.libvirt.host [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.026 221554 DEBUG nova.virt.libvirt.host [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.026 221554 DEBUG nova.virt.libvirt.host [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.028 221554 DEBUG nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.029 221554 DEBUG nova.virt.hardware [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.029 221554 DEBUG nova.virt.hardware [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.030 221554 DEBUG nova.virt.hardware [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.030 221554 DEBUG nova.virt.hardware [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.031 221554 DEBUG nova.virt.hardware [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.031 221554 DEBUG nova.virt.hardware [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.032 221554 DEBUG nova.virt.hardware [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.032 221554 DEBUG nova.virt.hardware [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.033 221554 DEBUG nova.virt.hardware [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.033 221554 DEBUG nova.virt.hardware [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.034 221554 DEBUG nova.virt.hardware [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.040 221554 DEBUG oslo_concurrency.processutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.095 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:33:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2106868559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.534 221554 DEBUG oslo_concurrency.processutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.563 221554 DEBUG nova.storage.rbd_utils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:44 np0005603609 nova_compute[221550]: 2026-01-31 07:33:44.567 221554 DEBUG oslo_concurrency.processutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:44.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:33:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1044616484' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:33:45 np0005603609 nova_compute[221550]: 2026-01-31 07:33:45.003 221554 DEBUG oslo_concurrency.processutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:45 np0005603609 nova_compute[221550]: 2026-01-31 07:33:45.005 221554 DEBUG nova.objects.instance [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'pci_devices' on Instance uuid d7327aed-ddc6-4772-8d2e-6b8be365dd2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:33:45 np0005603609 nova_compute[221550]: 2026-01-31 07:33:45.029 221554 DEBUG nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  <uuid>d7327aed-ddc6-4772-8d2e-6b8be365dd2b</uuid>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  <name>instance-0000000e</name>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <nova:name>tempest-MigrationsAdminTest-server-1723888188</nova:name>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:33:44</nova:creationTime>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:33:45 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:        <nova:user uuid="71f887fd92fb486a959e5ca100cb1e10">tempest-MigrationsAdminTest-137263588-project-member</nova:user>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:        <nova:project uuid="7c1ddd67115f4f7bab056dbb2f270ccc">tempest-MigrationsAdminTest-137263588</nova:project>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <entry name="serial">d7327aed-ddc6-4772-8d2e-6b8be365dd2b</entry>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <entry name="uuid">d7327aed-ddc6-4772-8d2e-6b8be365dd2b</entry>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk">
Jan 31 02:33:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:33:45 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk.config">
Jan 31 02:33:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:33:45 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b/console.log" append="off"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:33:45 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:33:45 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:33:45 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:33:45 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:33:45 np0005603609 nova_compute[221550]: 2026-01-31 07:33:45.088 221554 DEBUG nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:33:45 np0005603609 nova_compute[221550]: 2026-01-31 07:33:45.089 221554 DEBUG nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:33:45 np0005603609 nova_compute[221550]: 2026-01-31 07:33:45.090 221554 INFO nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Using config drive#033[00m
Jan 31 02:33:45 np0005603609 nova_compute[221550]: 2026-01-31 07:33:45.126 221554 DEBUG nova.storage.rbd_utils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:45 np0005603609 nova_compute[221550]: 2026-01-31 07:33:45.312 221554 INFO nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Creating config drive at /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b/disk.config#033[00m
Jan 31 02:33:45 np0005603609 nova_compute[221550]: 2026-01-31 07:33:45.317 221554 DEBUG oslo_concurrency.processutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmphyzou2sl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:33:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:45.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:33:45 np0005603609 nova_compute[221550]: 2026-01-31 07:33:45.439 221554 DEBUG oslo_concurrency.processutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmphyzou2sl" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:45 np0005603609 nova_compute[221550]: 2026-01-31 07:33:45.473 221554 DEBUG nova.storage.rbd_utils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:33:45 np0005603609 nova_compute[221550]: 2026-01-31 07:33:45.477 221554 DEBUG oslo_concurrency.processutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b/disk.config d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:33:45 np0005603609 nova_compute[221550]: 2026-01-31 07:33:45.545 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:45 np0005603609 nova_compute[221550]: 2026-01-31 07:33:45.646 221554 DEBUG oslo_concurrency.processutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b/disk.config d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:33:45 np0005603609 nova_compute[221550]: 2026-01-31 07:33:45.647 221554 INFO nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Deleting local config drive /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b/disk.config because it was imported into RBD.#033[00m
Jan 31 02:33:45 np0005603609 nova_compute[221550]: 2026-01-31 07:33:45.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:33:45 np0005603609 systemd-machined[190912]: New machine qemu-7-instance-0000000e.
Jan 31 02:33:45 np0005603609 systemd[1]: Started Virtual Machine qemu-7-instance-0000000e.
Jan 31 02:33:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.239 221554 DEBUG nova.compute.manager [req-6342a0fe-23e3-46ad-a2da-5b911e8d043a req-bf24273f-bc20-4cc2-8add-d49d2178da84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Received event network-vif-plugged-b4547e99-5de6-41da-be86-73186521c22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.240 221554 DEBUG oslo_concurrency.lockutils [req-6342a0fe-23e3-46ad-a2da-5b911e8d043a req-bf24273f-bc20-4cc2-8add-d49d2178da84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "434e368f-4c40-49d6-8935-ed469ee03717-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.240 221554 DEBUG oslo_concurrency.lockutils [req-6342a0fe-23e3-46ad-a2da-5b911e8d043a req-bf24273f-bc20-4cc2-8add-d49d2178da84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "434e368f-4c40-49d6-8935-ed469ee03717-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.240 221554 DEBUG oslo_concurrency.lockutils [req-6342a0fe-23e3-46ad-a2da-5b911e8d043a req-bf24273f-bc20-4cc2-8add-d49d2178da84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "434e368f-4c40-49d6-8935-ed469ee03717-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.240 221554 DEBUG nova.compute.manager [req-6342a0fe-23e3-46ad-a2da-5b911e8d043a req-bf24273f-bc20-4cc2-8add-d49d2178da84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Processing event network-vif-plugged-b4547e99-5de6-41da-be86-73186521c22b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.241 221554 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.245 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844826.244549, 434e368f-4c40-49d6-8935-ed469ee03717 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.245 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] VM Resumed (Lifecycle Event)
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.248 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.252 221554 INFO nova.virt.libvirt.driver [-] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Instance spawned successfully.
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.252 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.259 221554 DEBUG nova.compute.manager [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.260 221554 DEBUG nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.262 221554 INFO nova.virt.libvirt.driver [-] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance spawned successfully.
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.262 221554 DEBUG nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.272 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.277 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.282 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.282 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.283 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.283 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.284 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.284 221554 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.295 221554 DEBUG nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.296 221554 DEBUG nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.297 221554 DEBUG nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.297 221554 DEBUG nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.297 221554 DEBUG nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.298 221554 DEBUG nova.virt.libvirt.driver [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.332 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.333 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844826.2574859, d7327aed-ddc6-4772-8d2e-6b8be365dd2b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.333 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] VM Resumed (Lifecycle Event)
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.403 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.406 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.434 221554 INFO nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Took 23.95 seconds to spawn the instance on the hypervisor.
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.435 221554 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.442 221554 INFO nova.compute.manager [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Took 3.27 seconds to spawn the instance on the hypervisor.
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.443 221554 DEBUG nova.compute.manager [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.445 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.445 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844826.2582777, d7327aed-ddc6-4772-8d2e-6b8be365dd2b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.446 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] VM Started (Lifecycle Event)
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.500 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.504 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.552 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.576 221554 INFO nova.compute.manager [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Took 4.24 seconds to build instance.
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.582 221554 INFO nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Took 25.25 seconds to build instance.
Jan 31 02:33:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:46.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.607 221554 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Check if temp file /var/lib/nova/instances/tmpxr7wz9jw exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.608 221554 DEBUG nova.compute.manager [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxr7wz9jw',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4444d8df-265a-48a7-a945-08eb55a365e1',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.612 221554 DEBUG oslo_concurrency.lockutils [None req-15613c2e-666c-4d56-9758-ebfed080b631 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.618 221554 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "434e368f-4c40-49d6-8935-ed469ee03717" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.976 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-434e368f-4c40-49d6-8935-ed469ee03717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.977 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-434e368f-4c40-49d6-8935-ed469ee03717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.977 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 02:33:46 np0005603609 nova_compute[221550]: 2026-01-31 07:33:46.977 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 434e368f-4c40-49d6-8935-ed469ee03717 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:33:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:47.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:48.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:48 np0005603609 nova_compute[221550]: 2026-01-31 07:33:48.636 221554 DEBUG nova.compute.manager [req-87fcfc93-fcb1-4a4b-989d-38ea5caf2372 req-ba3a244f-8db9-4f82-a71b-cbe92df59bfe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Received event network-vif-plugged-b4547e99-5de6-41da-be86-73186521c22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:33:48 np0005603609 nova_compute[221550]: 2026-01-31 07:33:48.637 221554 DEBUG oslo_concurrency.lockutils [req-87fcfc93-fcb1-4a4b-989d-38ea5caf2372 req-ba3a244f-8db9-4f82-a71b-cbe92df59bfe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "434e368f-4c40-49d6-8935-ed469ee03717-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:33:48 np0005603609 nova_compute[221550]: 2026-01-31 07:33:48.637 221554 DEBUG oslo_concurrency.lockutils [req-87fcfc93-fcb1-4a4b-989d-38ea5caf2372 req-ba3a244f-8db9-4f82-a71b-cbe92df59bfe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "434e368f-4c40-49d6-8935-ed469ee03717-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:33:48 np0005603609 nova_compute[221550]: 2026-01-31 07:33:48.638 221554 DEBUG oslo_concurrency.lockutils [req-87fcfc93-fcb1-4a4b-989d-38ea5caf2372 req-ba3a244f-8db9-4f82-a71b-cbe92df59bfe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "434e368f-4c40-49d6-8935-ed469ee03717-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:33:48 np0005603609 nova_compute[221550]: 2026-01-31 07:33:48.638 221554 DEBUG nova.compute.manager [req-87fcfc93-fcb1-4a4b-989d-38ea5caf2372 req-ba3a244f-8db9-4f82-a71b-cbe92df59bfe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] No waiting events found dispatching network-vif-plugged-b4547e99-5de6-41da-be86-73186521c22b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:33:48 np0005603609 nova_compute[221550]: 2026-01-31 07:33:48.639 221554 WARNING nova.compute.manager [req-87fcfc93-fcb1-4a4b-989d-38ea5caf2372 req-ba3a244f-8db9-4f82-a71b-cbe92df59bfe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Received unexpected event network-vif-plugged-b4547e99-5de6-41da-be86-73186521c22b for instance with vm_state active and task_state None.
Jan 31 02:33:49 np0005603609 nova_compute[221550]: 2026-01-31 07:33:49.098 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:33:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:49.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:49 np0005603609 nova_compute[221550]: 2026-01-31 07:33:49.965 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Updating instance_info_cache with network_info: [{"id": "b4547e99-5de6-41da-be86-73186521c22b", "address": "fa:16:3e:2d:17:d2", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::20e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547e99-5d", "ovs_interfaceid": "b4547e99-5de6-41da-be86-73186521c22b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.008 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-434e368f-4c40-49d6-8935-ed469ee03717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.009 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.009 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.010 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.010 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.010 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.011 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.038 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.038 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.039 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.039 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.039 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:33:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:33:50 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3297960115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.506 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.547 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.601 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.601 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.604 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.604 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 02:33:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:50.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.608 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.608 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.726 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.727 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4499MB free_disk=20.814002990722656GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.727 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.728 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.887 221554 INFO nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updating resource usage from migration 4db0dc53-9175-4575-b2ef-88d2c46606ff
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.888 221554 INFO nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Updating resource usage from migration e3aa997c-5864-4157-ad53-1755be938003
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.946 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 434e368f-4c40-49d6-8935-ed469ee03717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.946 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Migration 4db0dc53-9175-4575-b2ef-88d2c46606ff is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.947 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Migration e3aa997c-5864-4157-ad53-1755be938003 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.947 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 02:33:50 np0005603609 nova_compute[221550]: 2026-01-31 07:33:50.948 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 02:33:51 np0005603609 nova_compute[221550]: 2026-01-31 07:33:51.059 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:33:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:51.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:33:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2439060104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:33:51 np0005603609 nova_compute[221550]: 2026-01-31 07:33:51.559 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:33:51 np0005603609 nova_compute[221550]: 2026-01-31 07:33:51.566 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:33:51 np0005603609 nova_compute[221550]: 2026-01-31 07:33:51.589 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:33:51 np0005603609 nova_compute[221550]: 2026-01-31 07:33:51.614 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 02:33:51 np0005603609 nova_compute[221550]: 2026-01-31 07:33:51.615 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:33:51 np0005603609 nova_compute[221550]: 2026-01-31 07:33:51.798 221554 DEBUG oslo_concurrency.lockutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Acquiring lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:33:51 np0005603609 nova_compute[221550]: 2026-01-31 07:33:51.799 221554 DEBUG oslo_concurrency.lockutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Acquired lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:33:51 np0005603609 nova_compute[221550]: 2026-01-31 07:33:51.800 221554 DEBUG nova.network.neutron [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 02:33:52 np0005603609 nova_compute[221550]: 2026-01-31 07:33:52.042 221554 DEBUG nova.network.neutron [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 02:33:52 np0005603609 nova_compute[221550]: 2026-01-31 07:33:52.267 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:33:52 np0005603609 nova_compute[221550]: 2026-01-31 07:33:52.268 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:33:52 np0005603609 nova_compute[221550]: 2026-01-31 07:33:52.293 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:33:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:52.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:52 np0005603609 nova_compute[221550]: 2026-01-31 07:33:52.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.066 221554 DEBUG nova.network.neutron [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.093 221554 DEBUG oslo_concurrency.lockutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Releasing lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.210 221554 DEBUG nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.211 221554 DEBUG nova.virt.libvirt.volume.remotefs [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Creating file /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b/9690971a1f904f5db929b0d0040d5925.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.211 221554 DEBUG oslo_concurrency.processutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b/9690971a1f904f5db929b0d0040d5925.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:33:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:53.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.467 221554 DEBUG nova.compute.manager [req-90492911-d251-4ed0-bb0c-3cd2769f2843 req-aff3dee5-b3ea-41a1-92b9-bc4a9dc15472 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.468 221554 DEBUG oslo_concurrency.lockutils [req-90492911-d251-4ed0-bb0c-3cd2769f2843 req-aff3dee5-b3ea-41a1-92b9-bc4a9dc15472 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.469 221554 DEBUG oslo_concurrency.lockutils [req-90492911-d251-4ed0-bb0c-3cd2769f2843 req-aff3dee5-b3ea-41a1-92b9-bc4a9dc15472 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.469 221554 DEBUG oslo_concurrency.lockutils [req-90492911-d251-4ed0-bb0c-3cd2769f2843 req-aff3dee5-b3ea-41a1-92b9-bc4a9dc15472 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.470 221554 DEBUG nova.compute.manager [req-90492911-d251-4ed0-bb0c-3cd2769f2843 req-aff3dee5-b3ea-41a1-92b9-bc4a9dc15472 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.470 221554 DEBUG nova.compute.manager [req-90492911-d251-4ed0-bb0c-3cd2769f2843 req-aff3dee5-b3ea-41a1-92b9-bc4a9dc15472 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.625 221554 DEBUG oslo_concurrency.processutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b/9690971a1f904f5db929b0d0040d5925.tmp" returned: 1 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.627 221554 DEBUG oslo_concurrency.processutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b/9690971a1f904f5db929b0d0040d5925.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.627 221554 DEBUG nova.virt.libvirt.volume.remotefs [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Creating directory /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.627 221554 DEBUG oslo_concurrency.processutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.842 221554 DEBUG oslo_concurrency.processutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:33:53 np0005603609 nova_compute[221550]: 2026-01-31 07:33:53.847 221554 DEBUG nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.100 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.312 221554 INFO nova.compute.manager [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Took 6.64 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.313 221554 DEBUG nova.compute.manager [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.331 221554 DEBUG nova.compute.manager [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxr7wz9jw',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4444d8df-265a-48a7-a945-08eb55a365e1',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(4db0dc53-9175-4575-b2ef-88d2c46606ff),old_vol_attachment_ids={802e207e-cc7b-4779-89dc-d399ba68dc38='632e6ca3-f050-475e-bcd6-268090116cf1'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.334 221554 DEBUG nova.objects.instance [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lazy-loading 'migration_context' on Instance uuid 4444d8df-265a-48a7-a945-08eb55a365e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.335 221554 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.336 221554 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.336 221554 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.370 221554 DEBUG nova.virt.libvirt.migration [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Find same serial number: pos=1, serial=802e207e-cc7b-4779-89dc-d399ba68dc38 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.374 221554 DEBUG nova.virt.libvirt.vif [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:33:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-396476796',display_name='tempest-LiveMigrationTest-server-396476796',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-396476796',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:33:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dfb4d4079ac944b288d5e285ce1de95a',ramdisk_id='',reservation_id='r-m03yz7pt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-48073594',owner_user_name='tempest-LiveMigrationTest-48073594-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:33:42Z,user_data=None,user_id='0a59df5da6284e4e8764816e1f8dfaa3',uuid=4444d8df-265a-48a7-a945-08eb55a365e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.375 221554 DEBUG nova.network.os_vif_util [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converting VIF {"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.376 221554 DEBUG nova.network.os_vif_util [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.377 221554 DEBUG nova.virt.libvirt.migration [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updating guest XML with vif config: <interface type="ethernet">
Jan 31 02:33:54 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:3c:9d:19"/>
Jan 31 02:33:54 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 02:33:54 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:33:54 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 02:33:54 np0005603609 nova_compute[221550]:  <target dev="tapcc9a4557-33"/>
Jan 31 02:33:54 np0005603609 nova_compute[221550]: </interface>
Jan 31 02:33:54 np0005603609 nova_compute[221550]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.380 221554 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Jan 31 02:33:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:54.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.838 221554 DEBUG nova.virt.libvirt.migration [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.838 221554 INFO nova.virt.libvirt.migration [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Jan 31 02:33:54 np0005603609 nova_compute[221550]: 2026-01-31 07:33:54.916 221554 INFO nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Jan 31 02:33:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:55.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.418 221554 DEBUG nova.virt.libvirt.migration [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.418 221554 DEBUG nova.virt.libvirt.migration [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.561 221554 DEBUG nova.compute.manager [req-55a3b6c2-959e-4100-a779-c1d6bce538a4 req-330fad01-b232-4ce7-aa23-90b6f296e486 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.561 221554 DEBUG oslo_concurrency.lockutils [req-55a3b6c2-959e-4100-a779-c1d6bce538a4 req-330fad01-b232-4ce7-aa23-90b6f296e486 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.561 221554 DEBUG oslo_concurrency.lockutils [req-55a3b6c2-959e-4100-a779-c1d6bce538a4 req-330fad01-b232-4ce7-aa23-90b6f296e486 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.562 221554 DEBUG oslo_concurrency.lockutils [req-55a3b6c2-959e-4100-a779-c1d6bce538a4 req-330fad01-b232-4ce7-aa23-90b6f296e486 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.562 221554 DEBUG nova.compute.manager [req-55a3b6c2-959e-4100-a779-c1d6bce538a4 req-330fad01-b232-4ce7-aa23-90b6f296e486 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.562 221554 WARNING nova.compute.manager [req-55a3b6c2-959e-4100-a779-c1d6bce538a4 req-330fad01-b232-4ce7-aa23-90b6f296e486 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received unexpected event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with vm_state active and task_state migrating.#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.562 221554 DEBUG nova.compute.manager [req-55a3b6c2-959e-4100-a779-c1d6bce538a4 req-330fad01-b232-4ce7-aa23-90b6f296e486 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-changed-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.562 221554 DEBUG nova.compute.manager [req-55a3b6c2-959e-4100-a779-c1d6bce538a4 req-330fad01-b232-4ce7-aa23-90b6f296e486 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Refreshing instance network info cache due to event network-changed-cc9a4557-33da-44f5-87b4-ca945cbc819c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.563 221554 DEBUG oslo_concurrency.lockutils [req-55a3b6c2-959e-4100-a779-c1d6bce538a4 req-330fad01-b232-4ce7-aa23-90b6f296e486 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.563 221554 DEBUG oslo_concurrency.lockutils [req-55a3b6c2-959e-4100-a779-c1d6bce538a4 req-330fad01-b232-4ce7-aa23-90b6f296e486 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.563 221554 DEBUG nova.network.neutron [req-55a3b6c2-959e-4100-a779-c1d6bce538a4 req-330fad01-b232-4ce7-aa23-90b6f296e486 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Refreshing network info cache for port cc9a4557-33da-44f5-87b4-ca945cbc819c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.583 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.729 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844835.729352, 4444d8df-265a-48a7-a945-08eb55a365e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.730 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.755 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.761 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:33:55 np0005603609 nova_compute[221550]: 2026-01-31 07:33:55.785 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Jan 31 02:33:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:56 np0005603609 kernel: tapcc9a4557-33 (unregistering): left promiscuous mode
Jan 31 02:33:56 np0005603609 NetworkManager[49064]: <info>  [1769844836.4734] device (tapcc9a4557-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:33:56 np0005603609 nova_compute[221550]: 2026-01-31 07:33:56.479 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:56 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:56Z|00057|binding|INFO|Releasing lport cc9a4557-33da-44f5-87b4-ca945cbc819c from this chassis (sb_readonly=0)
Jan 31 02:33:56 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:56Z|00058|binding|INFO|Setting lport cc9a4557-33da-44f5-87b4-ca945cbc819c down in Southbound
Jan 31 02:33:56 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:56Z|00059|binding|INFO|Removing iface tapcc9a4557-33 ovn-installed in OVS
Jan 31 02:33:56 np0005603609 nova_compute[221550]: 2026-01-31 07:33:56.482 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:56 np0005603609 nova_compute[221550]: 2026-01-31 07:33:56.486 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:56.497 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:9d:19 10.100.0.11'], port_security=['fa:16:3e:3c:9d:19 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '5c307474-e9ec-4d19-9f52-463eb0ff26d1'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4444d8df-265a-48a7-a945-08eb55a365e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb4d4079ac944b288d5e285ce1de95a', 'neutron:revision_number': '16', 'neutron:security_group_ids': 'abfb90a3-3499-4078-8409-95077c250314', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8850cb79-5a97-415d-8eee-4d7273f04968, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=cc9a4557-33da-44f5-87b4-ca945cbc819c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:33:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:56.498 140058 INFO neutron.agent.ovn.metadata.agent [-] Port cc9a4557-33da-44f5-87b4-ca945cbc819c in datapath 272cbcfe-dc1b-4319-84a2-27d245d969a3 unbound from our chassis#033[00m
Jan 31 02:33:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:56.499 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 272cbcfe-dc1b-4319-84a2-27d245d969a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:33:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:56.501 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[683fc54b-ff9c-47bf-81d0-cda0c6b1d085]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:56.503 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3 namespace which is not needed anymore#033[00m
Jan 31 02:33:56 np0005603609 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 31 02:33:56 np0005603609 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Consumed 3.441s CPU time.
Jan 31 02:33:56 np0005603609 systemd-machined[190912]: Machine qemu-5-instance-00000009 terminated.
Jan 31 02:33:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:56.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:56 np0005603609 virtqemud[221292]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-802e207e-cc7b-4779-89dc-d399ba68dc38: No such file or directory
Jan 31 02:33:56 np0005603609 virtqemud[221292]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-802e207e-cc7b-4779-89dc-d399ba68dc38: No such file or directory
Jan 31 02:33:56 np0005603609 NetworkManager[49064]: <info>  [1769844836.6990] manager: (tapcc9a4557-33): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Jan 31 02:33:56 np0005603609 nova_compute[221550]: 2026-01-31 07:33:56.721 221554 DEBUG nova.virt.libvirt.guest [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Jan 31 02:33:56 np0005603609 nova_compute[221550]: 2026-01-31 07:33:56.722 221554 INFO nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Migration operation has completed#033[00m
Jan 31 02:33:56 np0005603609 nova_compute[221550]: 2026-01-31 07:33:56.723 221554 INFO nova.compute.manager [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] _post_live_migration() is started..#033[00m
Jan 31 02:33:56 np0005603609 nova_compute[221550]: 2026-01-31 07:33:56.724 221554 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Jan 31 02:33:56 np0005603609 nova_compute[221550]: 2026-01-31 07:33:56.724 221554 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Jan 31 02:33:56 np0005603609 nova_compute[221550]: 2026-01-31 07:33:56.724 221554 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Jan 31 02:33:56 np0005603609 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[227022]: [NOTICE]   (227026) : haproxy version is 2.8.14-c23fe91
Jan 31 02:33:56 np0005603609 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[227022]: [NOTICE]   (227026) : path to executable is /usr/sbin/haproxy
Jan 31 02:33:56 np0005603609 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[227022]: [WARNING]  (227026) : Exiting Master process...
Jan 31 02:33:56 np0005603609 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[227022]: [WARNING]  (227026) : Exiting Master process...
Jan 31 02:33:56 np0005603609 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[227022]: [ALERT]    (227026) : Current worker (227028) exited with code 143 (Terminated)
Jan 31 02:33:56 np0005603609 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[227022]: [WARNING]  (227026) : All workers exited. Exiting... (0)
Jan 31 02:33:56 np0005603609 systemd[1]: libpod-ddbf3fe93ebd9a01726d0234b07cbadcae8c6dd27a964705cf12681634ffbc40.scope: Deactivated successfully.
Jan 31 02:33:56 np0005603609 podman[227754]: 2026-01-31 07:33:56.752153908 +0000 UTC m=+0.164256682 container died ddbf3fe93ebd9a01726d0234b07cbadcae8c6dd27a964705cf12681634ffbc40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:33:56 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ddbf3fe93ebd9a01726d0234b07cbadcae8c6dd27a964705cf12681634ffbc40-userdata-shm.mount: Deactivated successfully.
Jan 31 02:33:56 np0005603609 systemd[1]: var-lib-containers-storage-overlay-8fa8fc31de6af8cdf47a7f8c3b8abfd3088f31c6bb14c84d8a373a6457237329-merged.mount: Deactivated successfully.
Jan 31 02:33:56 np0005603609 podman[227754]: 2026-01-31 07:33:56.853945034 +0000 UTC m=+0.266047788 container cleanup ddbf3fe93ebd9a01726d0234b07cbadcae8c6dd27a964705cf12681634ffbc40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 02:33:56 np0005603609 systemd[1]: libpod-conmon-ddbf3fe93ebd9a01726d0234b07cbadcae8c6dd27a964705cf12681634ffbc40.scope: Deactivated successfully.
Jan 31 02:33:56 np0005603609 podman[227797]: 2026-01-31 07:33:56.92870639 +0000 UTC m=+0.050987286 container remove ddbf3fe93ebd9a01726d0234b07cbadcae8c6dd27a964705cf12681634ffbc40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:33:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:56.932 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[00719f07-0140-4e59-9538-ae293571a132]: (4, ('Sat Jan 31 07:33:56 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3 (ddbf3fe93ebd9a01726d0234b07cbadcae8c6dd27a964705cf12681634ffbc40)\nddbf3fe93ebd9a01726d0234b07cbadcae8c6dd27a964705cf12681634ffbc40\nSat Jan 31 07:33:56 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3 (ddbf3fe93ebd9a01726d0234b07cbadcae8c6dd27a964705cf12681634ffbc40)\nddbf3fe93ebd9a01726d0234b07cbadcae8c6dd27a964705cf12681634ffbc40\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:56.934 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d901c6e4-0421-41d2-82cd-4e66e4029d5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:56.934 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap272cbcfe-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:33:56 np0005603609 nova_compute[221550]: 2026-01-31 07:33:56.943 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:56 np0005603609 kernel: tap272cbcfe-d0: left promiscuous mode
Jan 31 02:33:56 np0005603609 nova_compute[221550]: 2026-01-31 07:33:56.954 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:56 np0005603609 nova_compute[221550]: 2026-01-31 07:33:56.956 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:56.959 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0a24b409-c096-40a4-adbe-2944939b6bb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:56.973 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f559456f-8037-445d-a162-ed11c371d2de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:56.974 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b2241aba-9ae2-4a5e-857c-6eb5f63c680f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:56.986 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f4544d-0a87-47be-b244-a3e18ef5bb99]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505548, 'reachable_time': 35301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 227816, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:56 np0005603609 systemd[1]: run-netns-ovnmeta\x2d272cbcfe\x2ddc1b\x2d4319\x2d84a2\x2d27d245d969a3.mount: Deactivated successfully.
Jan 31 02:33:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:56.993 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:33:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:33:56.993 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[ca648bb4-49d4-46ec-9ac2-15fe9e976017]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:33:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:57.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:57 np0005603609 nova_compute[221550]: 2026-01-31 07:33:57.606 221554 DEBUG nova.network.neutron [req-55a3b6c2-959e-4100-a779-c1d6bce538a4 req-330fad01-b232-4ce7-aa23-90b6f296e486 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updated VIF entry in instance network info cache for port cc9a4557-33da-44f5-87b4-ca945cbc819c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:33:57 np0005603609 nova_compute[221550]: 2026-01-31 07:33:57.607 221554 DEBUG nova.network.neutron [req-55a3b6c2-959e-4100-a779-c1d6bce538a4 req-330fad01-b232-4ce7-aa23-90b6f296e486 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updating instance_info_cache with network_info: [{"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:33:57 np0005603609 nova_compute[221550]: 2026-01-31 07:33:57.639 221554 DEBUG oslo_concurrency.lockutils [req-55a3b6c2-959e-4100-a779-c1d6bce538a4 req-330fad01-b232-4ce7-aa23-90b6f296e486 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.133 221554 DEBUG nova.compute.manager [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.134 221554 DEBUG oslo_concurrency.lockutils [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.134 221554 DEBUG oslo_concurrency.lockutils [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.134 221554 DEBUG oslo_concurrency.lockutils [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.134 221554 DEBUG nova.compute.manager [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.135 221554 DEBUG nova.compute.manager [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.135 221554 DEBUG nova.compute.manager [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.135 221554 DEBUG oslo_concurrency.lockutils [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.136 221554 DEBUG oslo_concurrency.lockutils [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.136 221554 DEBUG oslo_concurrency.lockutils [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.136 221554 DEBUG nova.compute.manager [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.137 221554 WARNING nova.compute.manager [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received unexpected event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with vm_state active and task_state migrating.#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.137 221554 DEBUG nova.compute.manager [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.137 221554 DEBUG oslo_concurrency.lockutils [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.137 221554 DEBUG oslo_concurrency.lockutils [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.138 221554 DEBUG oslo_concurrency.lockutils [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.138 221554 DEBUG nova.compute.manager [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.138 221554 WARNING nova.compute.manager [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received unexpected event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with vm_state active and task_state migrating.#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.269 221554 DEBUG nova.compute.manager [req-bc638b27-f5bb-4c41-809b-6ca24abf7ee2 req-c196e52e-0ee9-486c-b444-38025273385b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.270 221554 DEBUG oslo_concurrency.lockutils [req-bc638b27-f5bb-4c41-809b-6ca24abf7ee2 req-c196e52e-0ee9-486c-b444-38025273385b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.270 221554 DEBUG oslo_concurrency.lockutils [req-bc638b27-f5bb-4c41-809b-6ca24abf7ee2 req-c196e52e-0ee9-486c-b444-38025273385b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.271 221554 DEBUG oslo_concurrency.lockutils [req-bc638b27-f5bb-4c41-809b-6ca24abf7ee2 req-c196e52e-0ee9-486c-b444-38025273385b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.271 221554 DEBUG nova.compute.manager [req-bc638b27-f5bb-4c41-809b-6ca24abf7ee2 req-c196e52e-0ee9-486c-b444-38025273385b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.271 221554 DEBUG nova.compute.manager [req-bc638b27-f5bb-4c41-809b-6ca24abf7ee2 req-c196e52e-0ee9-486c-b444-38025273385b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.279 221554 DEBUG nova.network.neutron [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Activated binding for port cc9a4557-33da-44f5-87b4-ca945cbc819c and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.280 221554 DEBUG nova.compute.manager [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.281 221554 DEBUG nova.virt.libvirt.vif [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:33:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-396476796',display_name='tempest-LiveMigrationTest-server-396476796',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-396476796',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:33:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dfb4d4079ac944b288d5e285ce1de95a',ramdisk_id='',reservation_id='r-m03yz7pt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest
-48073594',owner_user_name='tempest-LiveMigrationTest-48073594-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:33:45Z,user_data=None,user_id='0a59df5da6284e4e8764816e1f8dfaa3',uuid=4444d8df-265a-48a7-a945-08eb55a365e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.281 221554 DEBUG nova.network.os_vif_util [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converting VIF {"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.282 221554 DEBUG nova.network.os_vif_util [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.282 221554 DEBUG os_vif [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.283 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.284 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc9a4557-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.286 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.288 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.288 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.291 221554 INFO os_vif [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33')#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.291 221554 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.291 221554 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.292 221554 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.292 221554 DEBUG nova.compute.manager [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.292 221554 INFO nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Deleting instance files /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1_del#033[00m
Jan 31 02:33:58 np0005603609 nova_compute[221550]: 2026-01-31 07:33:58.293 221554 INFO nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Deletion of /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1_del complete#033[00m
Jan 31 02:33:58 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:58Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:17:d2 10.1.0.47
Jan 31 02:33:58 np0005603609 ovn_controller[130359]: 2026-01-31T07:33:58Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:17:d2 10.1.0.47
Jan 31 02:33:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:33:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:58.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:33:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:33:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:59.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:00 np0005603609 nova_compute[221550]: 2026-01-31 07:34:00.246 221554 DEBUG nova.compute.manager [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:34:00 np0005603609 nova_compute[221550]: 2026-01-31 07:34:00.247 221554 DEBUG oslo_concurrency.lockutils [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:00 np0005603609 nova_compute[221550]: 2026-01-31 07:34:00.247 221554 DEBUG oslo_concurrency.lockutils [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:00 np0005603609 nova_compute[221550]: 2026-01-31 07:34:00.247 221554 DEBUG oslo_concurrency.lockutils [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:00 np0005603609 nova_compute[221550]: 2026-01-31 07:34:00.247 221554 DEBUG nova.compute.manager [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:34:00 np0005603609 nova_compute[221550]: 2026-01-31 07:34:00.247 221554 WARNING nova.compute.manager [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received unexpected event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with vm_state active and task_state migrating.#033[00m
Jan 31 02:34:00 np0005603609 nova_compute[221550]: 2026-01-31 07:34:00.248 221554 DEBUG nova.compute.manager [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:34:00 np0005603609 nova_compute[221550]: 2026-01-31 07:34:00.248 221554 DEBUG oslo_concurrency.lockutils [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:00 np0005603609 nova_compute[221550]: 2026-01-31 07:34:00.248 221554 DEBUG oslo_concurrency.lockutils [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:00 np0005603609 nova_compute[221550]: 2026-01-31 07:34:00.248 221554 DEBUG oslo_concurrency.lockutils [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:00 np0005603609 nova_compute[221550]: 2026-01-31 07:34:00.249 221554 DEBUG nova.compute.manager [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:34:00 np0005603609 nova_compute[221550]: 2026-01-31 07:34:00.249 221554 WARNING nova.compute.manager [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received unexpected event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with vm_state active and task_state migrating.#033[00m
Jan 31 02:34:00 np0005603609 nova_compute[221550]: 2026-01-31 07:34:00.585 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:00.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:01.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:02.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:03 np0005603609 nova_compute[221550]: 2026-01-31 07:34:03.287 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:03.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:03 np0005603609 nova_compute[221550]: 2026-01-31 07:34:03.547 221554 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:03 np0005603609 nova_compute[221550]: 2026-01-31 07:34:03.547 221554 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:03 np0005603609 nova_compute[221550]: 2026-01-31 07:34:03.548 221554 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:03 np0005603609 nova_compute[221550]: 2026-01-31 07:34:03.642 221554 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:03 np0005603609 nova_compute[221550]: 2026-01-31 07:34:03.643 221554 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:03 np0005603609 nova_compute[221550]: 2026-01-31 07:34:03.643 221554 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:03 np0005603609 nova_compute[221550]: 2026-01-31 07:34:03.644 221554 DEBUG nova.compute.resource_tracker [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:34:03 np0005603609 nova_compute[221550]: 2026-01-31 07:34:03.644 221554 DEBUG oslo_concurrency.processutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:34:03 np0005603609 nova_compute[221550]: 2026-01-31 07:34:03.890 221554 DEBUG nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 02:34:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:34:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2122459944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.057 221554 DEBUG oslo_concurrency.processutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.160 221554 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.160 221554 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.163 221554 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.163 221554 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.300 221554 WARNING nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.300 221554 DEBUG nova.compute.resource_tracker [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4548MB free_disk=20.77044677734375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.301 221554 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.301 221554 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.483 221554 DEBUG nova.compute.resource_tracker [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Migration for instance 4444d8df-265a-48a7-a945-08eb55a365e1 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.596 221554 DEBUG nova.compute.resource_tracker [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.596 221554 INFO nova.compute.resource_tracker [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Updating resource usage from migration e3aa997c-5864-4157-ad53-1755be938003#033[00m
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.628 221554 DEBUG nova.compute.resource_tracker [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Instance 434e368f-4c40-49d6-8935-ed469ee03717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.628 221554 DEBUG nova.compute.resource_tracker [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Migration 4db0dc53-9175-4575-b2ef-88d2c46606ff is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.629 221554 DEBUG nova.compute.resource_tracker [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Migration e3aa997c-5864-4157-ad53-1755be938003 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.629 221554 DEBUG nova.compute.resource_tracker [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.629 221554 DEBUG nova.compute.resource_tracker [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:34:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:04.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:04 np0005603609 nova_compute[221550]: 2026-01-31 07:34:04.696 221554 DEBUG oslo_concurrency.processutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:34:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:34:05 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4057250205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:34:05 np0005603609 nova_compute[221550]: 2026-01-31 07:34:05.130 221554 DEBUG oslo_concurrency.processutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:34:05 np0005603609 nova_compute[221550]: 2026-01-31 07:34:05.135 221554 DEBUG nova.compute.provider_tree [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:34:05 np0005603609 nova_compute[221550]: 2026-01-31 07:34:05.265 221554 DEBUG nova.scheduler.client.report [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:34:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:05.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:05 np0005603609 nova_compute[221550]: 2026-01-31 07:34:05.543 221554 DEBUG nova.compute.resource_tracker [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:34:05 np0005603609 nova_compute[221550]: 2026-01-31 07:34:05.544 221554 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:05 np0005603609 nova_compute[221550]: 2026-01-31 07:34:05.550 221554 INFO nova.compute.manager [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Migrating instance to compute-0.ctlplane.example.com finished successfully.#033[00m
Jan 31 02:34:05 np0005603609 nova_compute[221550]: 2026-01-31 07:34:05.604 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:06 np0005603609 podman[227863]: 2026-01-31 07:34:06.167746251 +0000 UTC m=+0.051215152 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 02:34:06 np0005603609 podman[227862]: 2026-01-31 07:34:06.21687377 +0000 UTC m=+0.099763507 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 31 02:34:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:06 np0005603609 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Jan 31 02:34:06 np0005603609 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000e.scope: Consumed 13.237s CPU time.
Jan 31 02:34:06 np0005603609 systemd-machined[190912]: Machine qemu-7-instance-0000000e terminated.
Jan 31 02:34:06 np0005603609 nova_compute[221550]: 2026-01-31 07:34:06.411 221554 INFO nova.scheduler.client.report [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Deleted allocation for migration 4db0dc53-9175-4575-b2ef-88d2c46606ff#033[00m
Jan 31 02:34:06 np0005603609 nova_compute[221550]: 2026-01-31 07:34:06.412 221554 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Jan 31 02:34:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:06.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:06 np0005603609 nova_compute[221550]: 2026-01-31 07:34:06.905 221554 INFO nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance shutdown successfully after 13 seconds.#033[00m
Jan 31 02:34:06 np0005603609 nova_compute[221550]: 2026-01-31 07:34:06.911 221554 INFO nova.virt.libvirt.driver [-] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance destroyed successfully.#033[00m
Jan 31 02:34:06 np0005603609 nova_compute[221550]: 2026-01-31 07:34:06.914 221554 DEBUG nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:34:06 np0005603609 nova_compute[221550]: 2026-01-31 07:34:06.915 221554 DEBUG nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:34:07 np0005603609 nova_compute[221550]: 2026-01-31 07:34:07.012 221554 DEBUG oslo_concurrency.lockutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Acquiring lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:07 np0005603609 nova_compute[221550]: 2026-01-31 07:34:07.013 221554 DEBUG oslo_concurrency.lockutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:07 np0005603609 nova_compute[221550]: 2026-01-31 07:34:07.013 221554 DEBUG oslo_concurrency.lockutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:07.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:07.466 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:07.466 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:07.467 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:08 np0005603609 nova_compute[221550]: 2026-01-31 07:34:08.289 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:08.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Jan 31 02:34:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:09.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:10 np0005603609 nova_compute[221550]: 2026-01-31 07:34:10.606 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:34:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:10.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:34:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:11 np0005603609 nova_compute[221550]: 2026-01-31 07:34:11.357 221554 DEBUG oslo_concurrency.lockutils [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "434e368f-4c40-49d6-8935-ed469ee03717" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:11 np0005603609 nova_compute[221550]: 2026-01-31 07:34:11.358 221554 DEBUG oslo_concurrency.lockutils [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "434e368f-4c40-49d6-8935-ed469ee03717" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:11 np0005603609 nova_compute[221550]: 2026-01-31 07:34:11.358 221554 DEBUG oslo_concurrency.lockutils [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "434e368f-4c40-49d6-8935-ed469ee03717-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:11 np0005603609 nova_compute[221550]: 2026-01-31 07:34:11.359 221554 DEBUG oslo_concurrency.lockutils [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "434e368f-4c40-49d6-8935-ed469ee03717-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:11 np0005603609 nova_compute[221550]: 2026-01-31 07:34:11.359 221554 DEBUG oslo_concurrency.lockutils [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "434e368f-4c40-49d6-8935-ed469ee03717-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:11 np0005603609 nova_compute[221550]: 2026-01-31 07:34:11.360 221554 INFO nova.compute.manager [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Terminating instance#033[00m
Jan 31 02:34:11 np0005603609 nova_compute[221550]: 2026-01-31 07:34:11.361 221554 DEBUG nova.compute.manager [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:34:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:11.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:11 np0005603609 nova_compute[221550]: 2026-01-31 07:34:11.722 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844836.720852, 4444d8df-265a-48a7-a945-08eb55a365e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:34:11 np0005603609 nova_compute[221550]: 2026-01-31 07:34:11.722 221554 INFO nova.compute.manager [-] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:34:11 np0005603609 nova_compute[221550]: 2026-01-31 07:34:11.741 221554 DEBUG nova.compute.manager [None req-41feab5e-752b-4131-95f7-d280702a55d9 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:34:11 np0005603609 kernel: tapb4547e99-5d (unregistering): left promiscuous mode
Jan 31 02:34:11 np0005603609 NetworkManager[49064]: <info>  [1769844851.7942] device (tapb4547e99-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:34:11 np0005603609 nova_compute[221550]: 2026-01-31 07:34:11.797 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:11 np0005603609 ovn_controller[130359]: 2026-01-31T07:34:11Z|00060|binding|INFO|Releasing lport b4547e99-5de6-41da-be86-73186521c22b from this chassis (sb_readonly=0)
Jan 31 02:34:11 np0005603609 ovn_controller[130359]: 2026-01-31T07:34:11Z|00061|binding|INFO|Setting lport b4547e99-5de6-41da-be86-73186521c22b down in Southbound
Jan 31 02:34:11 np0005603609 ovn_controller[130359]: 2026-01-31T07:34:11Z|00062|binding|INFO|Removing iface tapb4547e99-5d ovn-installed in OVS
Jan 31 02:34:11 np0005603609 nova_compute[221550]: 2026-01-31 07:34:11.799 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:11.804 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:17:d2 10.1.0.47 fdfe:381f:8400::20e'], port_security=['fa:16:3e:2d:17:d2 10.1.0.47 fdfe:381f:8400::20e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.47/26 fdfe:381f:8400::20e/64', 'neutron:device_id': '434e368f-4c40-49d6-8935-ed469ee03717', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca84ce9280d74b4588f89bf679f563fa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4c568fd1-1fe0-466e-9918-da0b2ee34267', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5e73fc0-b2f3-4e49-a543-f424bee97362, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=b4547e99-5de6-41da-be86-73186521c22b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:34:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:11.805 140058 INFO neutron.agent.ovn.metadata.agent [-] Port b4547e99-5de6-41da-be86-73186521c22b in datapath 3741a1e4-1d7f-4ca0-b02c-790a05701782 unbound from our chassis#033[00m
Jan 31 02:34:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:11.806 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3741a1e4-1d7f-4ca0-b02c-790a05701782, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:34:11 np0005603609 nova_compute[221550]: 2026-01-31 07:34:11.806 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:11.807 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[34c0f2ff-b967-4ec7-8d38-3aa3d7cfb93c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:34:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:11.808 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782 namespace which is not needed anymore#033[00m
Jan 31 02:34:11 np0005603609 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Jan 31 02:34:11 np0005603609 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Consumed 13.562s CPU time.
Jan 31 02:34:11 np0005603609 systemd-machined[190912]: Machine qemu-6-instance-0000000d terminated.
Jan 31 02:34:11 np0005603609 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[227288]: [NOTICE]   (227300) : haproxy version is 2.8.14-c23fe91
Jan 31 02:34:11 np0005603609 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[227288]: [NOTICE]   (227300) : path to executable is /usr/sbin/haproxy
Jan 31 02:34:11 np0005603609 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[227288]: [WARNING]  (227300) : Exiting Master process...
Jan 31 02:34:11 np0005603609 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[227288]: [WARNING]  (227300) : Exiting Master process...
Jan 31 02:34:11 np0005603609 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[227288]: [ALERT]    (227300) : Current worker (227302) exited with code 143 (Terminated)
Jan 31 02:34:11 np0005603609 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[227288]: [WARNING]  (227300) : All workers exited. Exiting... (0)
Jan 31 02:34:11 np0005603609 systemd[1]: libpod-1b9dab6afc107ba79ffbf1583df1a6b2a1a814e4f4ee260e51d9f0516ff369b5.scope: Deactivated successfully.
Jan 31 02:34:11 np0005603609 podman[227933]: 2026-01-31 07:34:11.937115962 +0000 UTC m=+0.048840634 container died 1b9dab6afc107ba79ffbf1583df1a6b2a1a814e4f4ee260e51d9f0516ff369b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 02:34:11 np0005603609 nova_compute[221550]: 2026-01-31 07:34:11.997 221554 INFO nova.virt.libvirt.driver [-] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Instance destroyed successfully.#033[00m
Jan 31 02:34:11 np0005603609 nova_compute[221550]: 2026-01-31 07:34:11.998 221554 DEBUG nova.objects.instance [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lazy-loading 'resources' on Instance uuid 434e368f-4c40-49d6-8935-ed469ee03717 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.012 221554 DEBUG nova.virt.libvirt.vif [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:33:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2133368505-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2133368505-3',id=13,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2026-01-31T07:33:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca84ce9280d74b4588f89bf679f563fa',ramdisk_id='',reservation_id='r-xsv2fd0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-1036306085',owner_user_name='tempest-AutoAllocateNetworkTest-1036306085-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:33:46Z,user_data=None,user_id='38001f2ce5654228b098939fd9619d3e',uuid=434e368f-4c40-49d6-8935-ed469ee03717,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b4547e99-5de6-41da-be86-73186521c22b", "address": "fa:16:3e:2d:17:d2", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::20e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547e99-5d", "ovs_interfaceid": "b4547e99-5de6-41da-be86-73186521c22b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.012 221554 DEBUG nova.network.os_vif_util [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converting VIF {"id": "b4547e99-5de6-41da-be86-73186521c22b", "address": "fa:16:3e:2d:17:d2", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::20e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4547e99-5d", "ovs_interfaceid": "b4547e99-5de6-41da-be86-73186521c22b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.013 221554 DEBUG nova.network.os_vif_util [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:17:d2,bridge_name='br-int',has_traffic_filtering=True,id=b4547e99-5de6-41da-be86-73186521c22b,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4547e99-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.014 221554 DEBUG os_vif [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:17:d2,bridge_name='br-int',has_traffic_filtering=True,id=b4547e99-5de6-41da-be86-73186521c22b,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4547e99-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.015 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.016 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4547e99-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.017 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.021 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.023 221554 INFO os_vif [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:17:d2,bridge_name='br-int',has_traffic_filtering=True,id=b4547e99-5de6-41da-be86-73186521c22b,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4547e99-5d')#033[00m
Jan 31 02:34:12 np0005603609 systemd[1]: var-lib-containers-storage-overlay-8c7c5d0f48bdffdfdd7e5a7146bf10c4a2c0ea61409d2e55dac8484391c97454-merged.mount: Deactivated successfully.
Jan 31 02:34:12 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b9dab6afc107ba79ffbf1583df1a6b2a1a814e4f4ee260e51d9f0516ff369b5-userdata-shm.mount: Deactivated successfully.
Jan 31 02:34:12 np0005603609 podman[227933]: 2026-01-31 07:34:12.08940017 +0000 UTC m=+0.201124852 container cleanup 1b9dab6afc107ba79ffbf1583df1a6b2a1a814e4f4ee260e51d9f0516ff369b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 02:34:12 np0005603609 systemd[1]: libpod-conmon-1b9dab6afc107ba79ffbf1583df1a6b2a1a814e4f4ee260e51d9f0516ff369b5.scope: Deactivated successfully.
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.156 221554 DEBUG nova.compute.manager [req-db4e5ec0-5d12-4e75-ab73-f3fb40be361f req-28f2f93c-8b33-48a0-817b-52860b337742 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Received event network-vif-unplugged-b4547e99-5de6-41da-be86-73186521c22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.156 221554 DEBUG oslo_concurrency.lockutils [req-db4e5ec0-5d12-4e75-ab73-f3fb40be361f req-28f2f93c-8b33-48a0-817b-52860b337742 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "434e368f-4c40-49d6-8935-ed469ee03717-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.157 221554 DEBUG oslo_concurrency.lockutils [req-db4e5ec0-5d12-4e75-ab73-f3fb40be361f req-28f2f93c-8b33-48a0-817b-52860b337742 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "434e368f-4c40-49d6-8935-ed469ee03717-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.157 221554 DEBUG oslo_concurrency.lockutils [req-db4e5ec0-5d12-4e75-ab73-f3fb40be361f req-28f2f93c-8b33-48a0-817b-52860b337742 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "434e368f-4c40-49d6-8935-ed469ee03717-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.157 221554 DEBUG nova.compute.manager [req-db4e5ec0-5d12-4e75-ab73-f3fb40be361f req-28f2f93c-8b33-48a0-817b-52860b337742 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] No waiting events found dispatching network-vif-unplugged-b4547e99-5de6-41da-be86-73186521c22b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.158 221554 DEBUG nova.compute.manager [req-db4e5ec0-5d12-4e75-ab73-f3fb40be361f req-28f2f93c-8b33-48a0-817b-52860b337742 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Received event network-vif-unplugged-b4547e99-5de6-41da-be86-73186521c22b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.205 221554 DEBUG oslo_concurrency.lockutils [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.205 221554 DEBUG oslo_concurrency.lockutils [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.206 221554 DEBUG nova.compute.manager [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Going to confirm migration 4 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Jan 31 02:34:12 np0005603609 podman[227991]: 2026-01-31 07:34:12.239883005 +0000 UTC m=+0.127270009 container remove 1b9dab6afc107ba79ffbf1583df1a6b2a1a814e4f4ee260e51d9f0516ff369b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:34:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:12.245 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2798782e-604b-4ac5-9295-c26048217e22]: (4, ('Sat Jan 31 07:34:11 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782 (1b9dab6afc107ba79ffbf1583df1a6b2a1a814e4f4ee260e51d9f0516ff369b5)\n1b9dab6afc107ba79ffbf1583df1a6b2a1a814e4f4ee260e51d9f0516ff369b5\nSat Jan 31 07:34:12 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782 (1b9dab6afc107ba79ffbf1583df1a6b2a1a814e4f4ee260e51d9f0516ff369b5)\n1b9dab6afc107ba79ffbf1583df1a6b2a1a814e4f4ee260e51d9f0516ff369b5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:34:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:12.246 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8c3acf-dc1f-47ff-9cff-17ff08a83cc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:34:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:12.247 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3741a1e4-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.249 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:12 np0005603609 kernel: tap3741a1e4-10: left promiscuous mode
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.255 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.257 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:12.259 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[eec46e0f-7763-4721-bb5e-20461a165ffb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:34:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:12.273 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c343494d-e9ef-4216-9cb3-f64e0f62d512]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:34:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:12.275 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5448dc57-a4d8-4573-9f15-81f779a3c2d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:34:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:12.289 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c85114-b933-4795-8b3d-81f58c49e904]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506060, 'reachable_time': 31730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 228006, 'error': None, 'target': 'ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:34:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:12.292 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:34:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:12.292 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f14651-7d77-45af-b0ce-818d47f31774]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:34:12 np0005603609 systemd[1]: run-netns-ovnmeta\x2d3741a1e4\x2d1d7f\x2d4ca0\x2db02c\x2d790a05701782.mount: Deactivated successfully.
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.538 221554 DEBUG oslo_concurrency.lockutils [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.539 221554 DEBUG oslo_concurrency.lockutils [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquired lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.539 221554 DEBUG nova.network.neutron [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.539 221554 DEBUG nova.objects.instance [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'info_cache' on Instance uuid d7327aed-ddc6-4772-8d2e-6b8be365dd2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:34:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:12.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:12 np0005603609 nova_compute[221550]: 2026-01-31 07:34:12.894 221554 DEBUG nova.network.neutron [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:34:13 np0005603609 nova_compute[221550]: 2026-01-31 07:34:13.165 221554 DEBUG nova.network.neutron [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:34:13 np0005603609 nova_compute[221550]: 2026-01-31 07:34:13.180 221554 DEBUG oslo_concurrency.lockutils [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Releasing lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:34:13 np0005603609 nova_compute[221550]: 2026-01-31 07:34:13.180 221554 DEBUG nova.objects.instance [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'migration_context' on Instance uuid d7327aed-ddc6-4772-8d2e-6b8be365dd2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:34:13 np0005603609 nova_compute[221550]: 2026-01-31 07:34:13.266 221554 DEBUG nova.storage.rbd_utils [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] removing snapshot(nova-resize) on rbd image(d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 02:34:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:13.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Jan 31 02:34:14 np0005603609 nova_compute[221550]: 2026-01-31 07:34:14.252 221554 DEBUG nova.compute.manager [req-7911923e-2843-4954-b038-ae908cd30864 req-adc45906-eed7-43fc-8e0d-19f970d3f91b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Received event network-vif-plugged-b4547e99-5de6-41da-be86-73186521c22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:34:14 np0005603609 nova_compute[221550]: 2026-01-31 07:34:14.253 221554 DEBUG oslo_concurrency.lockutils [req-7911923e-2843-4954-b038-ae908cd30864 req-adc45906-eed7-43fc-8e0d-19f970d3f91b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "434e368f-4c40-49d6-8935-ed469ee03717-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:14 np0005603609 nova_compute[221550]: 2026-01-31 07:34:14.253 221554 DEBUG oslo_concurrency.lockutils [req-7911923e-2843-4954-b038-ae908cd30864 req-adc45906-eed7-43fc-8e0d-19f970d3f91b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "434e368f-4c40-49d6-8935-ed469ee03717-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:14 np0005603609 nova_compute[221550]: 2026-01-31 07:34:14.254 221554 DEBUG oslo_concurrency.lockutils [req-7911923e-2843-4954-b038-ae908cd30864 req-adc45906-eed7-43fc-8e0d-19f970d3f91b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "434e368f-4c40-49d6-8935-ed469ee03717-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:14 np0005603609 nova_compute[221550]: 2026-01-31 07:34:14.254 221554 DEBUG nova.compute.manager [req-7911923e-2843-4954-b038-ae908cd30864 req-adc45906-eed7-43fc-8e0d-19f970d3f91b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] No waiting events found dispatching network-vif-plugged-b4547e99-5de6-41da-be86-73186521c22b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:34:14 np0005603609 nova_compute[221550]: 2026-01-31 07:34:14.254 221554 WARNING nova.compute.manager [req-7911923e-2843-4954-b038-ae908cd30864 req-adc45906-eed7-43fc-8e0d-19f970d3f91b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Received unexpected event network-vif-plugged-b4547e99-5de6-41da-be86-73186521c22b for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:34:14 np0005603609 nova_compute[221550]: 2026-01-31 07:34:14.409 221554 DEBUG oslo_concurrency.lockutils [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:14 np0005603609 nova_compute[221550]: 2026-01-31 07:34:14.410 221554 DEBUG oslo_concurrency.lockutils [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:14 np0005603609 nova_compute[221550]: 2026-01-31 07:34:14.506 221554 DEBUG oslo_concurrency.processutils [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:34:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:34:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:14.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:34:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:34:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2027725969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:34:14 np0005603609 nova_compute[221550]: 2026-01-31 07:34:14.926 221554 DEBUG oslo_concurrency.processutils [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:34:14 np0005603609 nova_compute[221550]: 2026-01-31 07:34:14.932 221554 DEBUG nova.compute.provider_tree [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:34:14 np0005603609 nova_compute[221550]: 2026-01-31 07:34:14.950 221554 DEBUG nova.scheduler.client.report [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:34:14 np0005603609 nova_compute[221550]: 2026-01-31 07:34:14.959 221554 INFO nova.virt.libvirt.driver [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Deleting instance files /var/lib/nova/instances/434e368f-4c40-49d6-8935-ed469ee03717_del#033[00m
Jan 31 02:34:14 np0005603609 nova_compute[221550]: 2026-01-31 07:34:14.960 221554 INFO nova.virt.libvirt.driver [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Deletion of /var/lib/nova/instances/434e368f-4c40-49d6-8935-ed469ee03717_del complete#033[00m
Jan 31 02:34:15 np0005603609 nova_compute[221550]: 2026-01-31 07:34:15.041 221554 DEBUG oslo_concurrency.lockutils [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:15 np0005603609 nova_compute[221550]: 2026-01-31 07:34:15.053 221554 INFO nova.compute.manager [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Took 3.69 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:34:15 np0005603609 nova_compute[221550]: 2026-01-31 07:34:15.053 221554 DEBUG oslo.service.loopingcall [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:34:15 np0005603609 nova_compute[221550]: 2026-01-31 07:34:15.054 221554 DEBUG nova.compute.manager [-] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:34:15 np0005603609 nova_compute[221550]: 2026-01-31 07:34:15.054 221554 DEBUG nova.network.neutron [-] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:34:15 np0005603609 nova_compute[221550]: 2026-01-31 07:34:15.154 221554 INFO nova.scheduler.client.report [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Deleted allocation for migration e3aa997c-5864-4157-ad53-1755be938003#033[00m
Jan 31 02:34:15 np0005603609 nova_compute[221550]: 2026-01-31 07:34:15.214 221554 DEBUG oslo_concurrency.lockutils [None req-568b2d60-3f39-4923-9a8b-2d107a857370 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 3.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:15.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:15 np0005603609 nova_compute[221550]: 2026-01-31 07:34:15.650 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:15 np0005603609 nova_compute[221550]: 2026-01-31 07:34:15.741 221554 DEBUG nova.network.neutron [-] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:34:15 np0005603609 nova_compute[221550]: 2026-01-31 07:34:15.763 221554 INFO nova.compute.manager [-] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Took 0.71 seconds to deallocate network for instance.#033[00m
Jan 31 02:34:15 np0005603609 nova_compute[221550]: 2026-01-31 07:34:15.809 221554 DEBUG oslo_concurrency.lockutils [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:15 np0005603609 nova_compute[221550]: 2026-01-31 07:34:15.809 221554 DEBUG oslo_concurrency.lockutils [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:15 np0005603609 nova_compute[221550]: 2026-01-31 07:34:15.876 221554 DEBUG nova.compute.manager [req-914cc747-ccf6-40c1-8c2d-f9f47795533f req-5251b6e1-9aa9-45ed-927a-8a5400a6c33d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Received event network-vif-deleted-b4547e99-5de6-41da-be86-73186521c22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:34:15 np0005603609 nova_compute[221550]: 2026-01-31 07:34:15.881 221554 DEBUG oslo_concurrency.processutils [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:34:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:34:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3529144254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:34:16 np0005603609 nova_compute[221550]: 2026-01-31 07:34:16.370 221554 DEBUG oslo_concurrency.processutils [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:34:16 np0005603609 nova_compute[221550]: 2026-01-31 07:34:16.374 221554 DEBUG nova.compute.provider_tree [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:34:16 np0005603609 nova_compute[221550]: 2026-01-31 07:34:16.399 221554 DEBUG nova.scheduler.client.report [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:34:16 np0005603609 nova_compute[221550]: 2026-01-31 07:34:16.423 221554 DEBUG oslo_concurrency.lockutils [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:16 np0005603609 nova_compute[221550]: 2026-01-31 07:34:16.453 221554 INFO nova.scheduler.client.report [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Deleted allocations for instance 434e368f-4c40-49d6-8935-ed469ee03717#033[00m
Jan 31 02:34:16 np0005603609 nova_compute[221550]: 2026-01-31 07:34:16.533 221554 DEBUG oslo_concurrency.lockutils [None req-c7e7be46-a3d6-4bbc-90b1-3dee88792e78 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "434e368f-4c40-49d6-8935-ed469ee03717" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:16.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:16.735 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:34:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:16.736 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:34:16 np0005603609 nova_compute[221550]: 2026-01-31 07:34:16.781 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:17 np0005603609 nova_compute[221550]: 2026-01-31 07:34:17.018 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:17.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:34:17.738 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:34:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:18.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:19.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:20 np0005603609 nova_compute[221550]: 2026-01-31 07:34:20.652 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:34:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:20.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:34:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:21 np0005603609 nova_compute[221550]: 2026-01-31 07:34:21.450 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844846.4489148, d7327aed-ddc6-4772-8d2e-6b8be365dd2b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:34:21 np0005603609 nova_compute[221550]: 2026-01-31 07:34:21.450 221554 INFO nova.compute.manager [-] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:34:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:21.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:21 np0005603609 nova_compute[221550]: 2026-01-31 07:34:21.479 221554 DEBUG nova.compute.manager [None req-338c8152-6e91-48f4-b16e-13f0ad8c855c - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:34:21 np0005603609 nova_compute[221550]: 2026-01-31 07:34:21.856 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:22 np0005603609 nova_compute[221550]: 2026-01-31 07:34:22.019 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:22.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:23.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e143 e143: 3 total, 3 up, 3 in
Jan 31 02:34:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:34:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:24.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:34:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:25.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:25 np0005603609 nova_compute[221550]: 2026-01-31 07:34:25.655 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:34:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:26.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:34:26 np0005603609 nova_compute[221550]: 2026-01-31 07:34:26.995 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844851.9929142, 434e368f-4c40-49d6-8935-ed469ee03717 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:34:26 np0005603609 nova_compute[221550]: 2026-01-31 07:34:26.995 221554 INFO nova.compute.manager [-] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:34:27 np0005603609 nova_compute[221550]: 2026-01-31 07:34:27.023 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:27 np0005603609 nova_compute[221550]: 2026-01-31 07:34:27.039 221554 DEBUG nova.compute.manager [None req-1bf5eed1-a9e8-4afd-a9d3-b523683d932e - - - - - -] [instance: 434e368f-4c40-49d6-8935-ed469ee03717] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:34:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:27.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:34:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:28.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:34:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:29.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:30 np0005603609 nova_compute[221550]: 2026-01-31 07:34:30.657 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:30.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:34:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:31.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:34:32 np0005603609 nova_compute[221550]: 2026-01-31 07:34:32.096 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:34:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:32.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:34:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:33.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:34.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:34:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:35.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:34:35 np0005603609 nova_compute[221550]: 2026-01-31 07:34:35.685 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:35 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:34:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:34:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:36.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:34:36 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:34:36 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:34:36 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:34:36 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:34:37 np0005603609 nova_compute[221550]: 2026-01-31 07:34:37.099 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:37 np0005603609 podman[228223]: 2026-01-31 07:34:37.183011247 +0000 UTC m=+0.056951332 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 31 02:34:37 np0005603609 podman[228222]: 2026-01-31 07:34:37.234808612 +0000 UTC m=+0.112645352 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Jan 31 02:34:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:34:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:37.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:34:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:38.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:34:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:39.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:34:40 np0005603609 nova_compute[221550]: 2026-01-31 07:34:40.687 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:34:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:40.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:34:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:34:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:41.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:34:42 np0005603609 nova_compute[221550]: 2026-01-31 07:34:42.103 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:42.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:43.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:44.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:34:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2215771126' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:34:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:34:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2215771126' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:34:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:45.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:45 np0005603609 nova_compute[221550]: 2026-01-31 07:34:45.690 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:46 np0005603609 nova_compute[221550]: 2026-01-31 07:34:46.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:34:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:46.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:47 np0005603609 nova_compute[221550]: 2026-01-31 07:34:47.106 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:34:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:47.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:34:47 np0005603609 nova_compute[221550]: 2026-01-31 07:34:47.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:34:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e144 e144: 3 total, 3 up, 3 in
Jan 31 02:34:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:34:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:34:48 np0005603609 nova_compute[221550]: 2026-01-31 07:34:48.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:34:48 np0005603609 nova_compute[221550]: 2026-01-31 07:34:48.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:34:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:34:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:48.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:34:48 np0005603609 nova_compute[221550]: 2026-01-31 07:34:48.713 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Skipping network cache update for instance because it has been migrated to another host. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9902#033[00m
Jan 31 02:34:48 np0005603609 nova_compute[221550]: 2026-01-31 07:34:48.741 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:34:48 np0005603609 nova_compute[221550]: 2026-01-31 07:34:48.741 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:34:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:49.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:49 np0005603609 nova_compute[221550]: 2026-01-31 07:34:49.737 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:34:50 np0005603609 nova_compute[221550]: 2026-01-31 07:34:50.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:34:50 np0005603609 nova_compute[221550]: 2026-01-31 07:34:50.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:34:50 np0005603609 nova_compute[221550]: 2026-01-31 07:34:50.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:34:50 np0005603609 nova_compute[221550]: 2026-01-31 07:34:50.690 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:50 np0005603609 nova_compute[221550]: 2026-01-31 07:34:50.690 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:50 np0005603609 nova_compute[221550]: 2026-01-31 07:34:50.690 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:50 np0005603609 nova_compute[221550]: 2026-01-31 07:34:50.691 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:34:50 np0005603609 nova_compute[221550]: 2026-01-31 07:34:50.691 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:34:50 np0005603609 nova_compute[221550]: 2026-01-31 07:34:50.702 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:34:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:50.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:34:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:34:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1232797084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:34:51 np0005603609 nova_compute[221550]: 2026-01-31 07:34:51.202 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:34:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:51 np0005603609 nova_compute[221550]: 2026-01-31 07:34:51.344 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:34:51 np0005603609 nova_compute[221550]: 2026-01-31 07:34:51.346 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4945MB free_disk=20.897266387939453GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:34:51 np0005603609 nova_compute[221550]: 2026-01-31 07:34:51.346 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:51 np0005603609 nova_compute[221550]: 2026-01-31 07:34:51.347 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:51.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:51 np0005603609 nova_compute[221550]: 2026-01-31 07:34:51.505 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:34:51 np0005603609 nova_compute[221550]: 2026-01-31 07:34:51.505 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:34:51 np0005603609 nova_compute[221550]: 2026-01-31 07:34:51.522 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:34:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:34:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2232137789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:34:51 np0005603609 nova_compute[221550]: 2026-01-31 07:34:51.941 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:34:51 np0005603609 nova_compute[221550]: 2026-01-31 07:34:51.947 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:34:52 np0005603609 nova_compute[221550]: 2026-01-31 07:34:52.109 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:52 np0005603609 nova_compute[221550]: 2026-01-31 07:34:52.132 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:34:52 np0005603609 nova_compute[221550]: 2026-01-31 07:34:52.237 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:34:52 np0005603609 nova_compute[221550]: 2026-01-31 07:34:52.238 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:34:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:52.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:34:53 np0005603609 nova_compute[221550]: 2026-01-31 07:34:53.238 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:34:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:53.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:53 np0005603609 nova_compute[221550]: 2026-01-31 07:34:53.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:34:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:34:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1992311293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:34:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:54.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:34:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:55.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:34:55 np0005603609 nova_compute[221550]: 2026-01-31 07:34:55.694 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e145 e145: 3 total, 3 up, 3 in
Jan 31 02:34:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:56.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:56 np0005603609 nova_compute[221550]: 2026-01-31 07:34:56.983 221554 DEBUG oslo_concurrency.lockutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Acquiring lock "49a7f5cb-f98a-40cd-953f-1129db04fa9a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:56 np0005603609 nova_compute[221550]: 2026-01-31 07:34:56.984 221554 DEBUG oslo_concurrency.lockutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lock "49a7f5cb-f98a-40cd-953f-1129db04fa9a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:57 np0005603609 nova_compute[221550]: 2026-01-31 07:34:57.006 221554 DEBUG nova.compute.manager [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:34:57 np0005603609 nova_compute[221550]: 2026-01-31 07:34:57.086 221554 DEBUG oslo_concurrency.lockutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:57 np0005603609 nova_compute[221550]: 2026-01-31 07:34:57.086 221554 DEBUG oslo_concurrency.lockutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:57 np0005603609 nova_compute[221550]: 2026-01-31 07:34:57.108 221554 DEBUG nova.virt.hardware [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:34:57 np0005603609 nova_compute[221550]: 2026-01-31 07:34:57.109 221554 INFO nova.compute.claims [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:34:57 np0005603609 nova_compute[221550]: 2026-01-31 07:34:57.112 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:34:57 np0005603609 nova_compute[221550]: 2026-01-31 07:34:57.354 221554 DEBUG oslo_concurrency.processutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:34:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:57.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:34:57 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/586764486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:34:57 np0005603609 nova_compute[221550]: 2026-01-31 07:34:57.814 221554 DEBUG oslo_concurrency.processutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:34:57 np0005603609 nova_compute[221550]: 2026-01-31 07:34:57.820 221554 DEBUG nova.compute.provider_tree [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.018 221554 DEBUG nova.scheduler.client.report [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.050 221554 DEBUG oslo_concurrency.lockutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.051 221554 DEBUG nova.compute.manager [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.110 221554 DEBUG nova.compute.manager [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.131 221554 INFO nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.154 221554 DEBUG nova.compute.manager [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:34:58 np0005603609 ovn_controller[130359]: 2026-01-31T07:34:58Z|00063|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.274 221554 DEBUG nova.compute.manager [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.276 221554 DEBUG nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.276 221554 INFO nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Creating image(s)#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.307 221554 DEBUG nova.storage.rbd_utils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] rbd image 49a7f5cb-f98a-40cd-953f-1129db04fa9a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.339 221554 DEBUG nova.storage.rbd_utils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] rbd image 49a7f5cb-f98a-40cd-953f-1129db04fa9a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.365 221554 DEBUG nova.storage.rbd_utils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] rbd image 49a7f5cb-f98a-40cd-953f-1129db04fa9a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.369 221554 DEBUG oslo_concurrency.processutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.430 221554 DEBUG oslo_concurrency.processutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.430 221554 DEBUG oslo_concurrency.lockutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.431 221554 DEBUG oslo_concurrency.lockutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.431 221554 DEBUG oslo_concurrency.lockutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.454 221554 DEBUG nova.storage.rbd_utils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] rbd image 49a7f5cb-f98a-40cd-953f-1129db04fa9a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:34:58 np0005603609 nova_compute[221550]: 2026-01-31 07:34:58.503 221554 DEBUG oslo_concurrency.processutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 49a7f5cb-f98a-40cd-953f-1129db04fa9a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:34:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:58.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.045 221554 DEBUG oslo_concurrency.processutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 49a7f5cb-f98a-40cd-953f-1129db04fa9a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.139 221554 DEBUG nova.storage.rbd_utils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] resizing rbd image 49a7f5cb-f98a-40cd-953f-1129db04fa9a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:34:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:34:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:59.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.566 221554 DEBUG nova.objects.instance [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lazy-loading 'migration_context' on Instance uuid 49a7f5cb-f98a-40cd-953f-1129db04fa9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.582 221554 DEBUG nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.583 221554 DEBUG nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Ensure instance console log exists: /var/lib/nova/instances/49a7f5cb-f98a-40cd-953f-1129db04fa9a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.584 221554 DEBUG oslo_concurrency.lockutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.584 221554 DEBUG oslo_concurrency.lockutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.585 221554 DEBUG oslo_concurrency.lockutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.587 221554 DEBUG nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.593 221554 WARNING nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.598 221554 DEBUG nova.virt.libvirt.host [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.599 221554 DEBUG nova.virt.libvirt.host [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.604 221554 DEBUG nova.virt.libvirt.host [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.605 221554 DEBUG nova.virt.libvirt.host [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.607 221554 DEBUG nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.607 221554 DEBUG nova.virt.hardware [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.608 221554 DEBUG nova.virt.hardware [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.608 221554 DEBUG nova.virt.hardware [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.609 221554 DEBUG nova.virt.hardware [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.609 221554 DEBUG nova.virt.hardware [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.610 221554 DEBUG nova.virt.hardware [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.610 221554 DEBUG nova.virt.hardware [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.610 221554 DEBUG nova.virt.hardware [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.611 221554 DEBUG nova.virt.hardware [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.611 221554 DEBUG nova.virt.hardware [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.612 221554 DEBUG nova.virt.hardware [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:34:59 np0005603609 nova_compute[221550]: 2026-01-31 07:34:59.616 221554 DEBUG oslo_concurrency.processutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:35:00 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4086487062' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:35:00 np0005603609 nova_compute[221550]: 2026-01-31 07:35:00.043 221554 DEBUG oslo_concurrency.processutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:00 np0005603609 nova_compute[221550]: 2026-01-31 07:35:00.066 221554 DEBUG nova.storage.rbd_utils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] rbd image 49a7f5cb-f98a-40cd-953f-1129db04fa9a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:35:00 np0005603609 nova_compute[221550]: 2026-01-31 07:35:00.070 221554 DEBUG oslo_concurrency.processutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:35:00 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2055595108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:35:00 np0005603609 nova_compute[221550]: 2026-01-31 07:35:00.507 221554 DEBUG oslo_concurrency.processutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:00 np0005603609 nova_compute[221550]: 2026-01-31 07:35:00.508 221554 DEBUG nova.objects.instance [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49a7f5cb-f98a-40cd-953f-1129db04fa9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:35:00 np0005603609 nova_compute[221550]: 2026-01-31 07:35:00.553 221554 DEBUG nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  <uuid>49a7f5cb-f98a-40cd-953f-1129db04fa9a</uuid>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  <name>instance-00000010</name>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerDiagnosticsV248Test-server-195629218</nova:name>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:34:59</nova:creationTime>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:35:00 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:        <nova:user uuid="f206df257dcf4fb1aa17f4cc2a4c0ef0">tempest-ServerDiagnosticsV248Test-371051177-project-member</nova:user>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:        <nova:project uuid="ef637202bdd64f0ea4eb2b3d5ba702a7">tempest-ServerDiagnosticsV248Test-371051177</nova:project>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <entry name="serial">49a7f5cb-f98a-40cd-953f-1129db04fa9a</entry>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <entry name="uuid">49a7f5cb-f98a-40cd-953f-1129db04fa9a</entry>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/49a7f5cb-f98a-40cd-953f-1129db04fa9a_disk">
Jan 31 02:35:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:35:00 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/49a7f5cb-f98a-40cd-953f-1129db04fa9a_disk.config">
Jan 31 02:35:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:35:00 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/49a7f5cb-f98a-40cd-953f-1129db04fa9a/console.log" append="off"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:35:00 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:35:00 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:35:00 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:35:00 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:35:00 np0005603609 nova_compute[221550]: 2026-01-31 07:35:00.696 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:00.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:00 np0005603609 nova_compute[221550]: 2026-01-31 07:35:00.774 221554 DEBUG nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:35:00 np0005603609 nova_compute[221550]: 2026-01-31 07:35:00.775 221554 DEBUG nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:35:00 np0005603609 nova_compute[221550]: 2026-01-31 07:35:00.775 221554 INFO nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Using config drive#033[00m
Jan 31 02:35:00 np0005603609 nova_compute[221550]: 2026-01-31 07:35:00.799 221554 DEBUG nova.storage.rbd_utils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] rbd image 49a7f5cb-f98a-40cd-953f-1129db04fa9a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:35:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:01 np0005603609 nova_compute[221550]: 2026-01-31 07:35:01.272 221554 INFO nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Creating config drive at /var/lib/nova/instances/49a7f5cb-f98a-40cd-953f-1129db04fa9a/disk.config#033[00m
Jan 31 02:35:01 np0005603609 nova_compute[221550]: 2026-01-31 07:35:01.275 221554 DEBUG oslo_concurrency.processutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49a7f5cb-f98a-40cd-953f-1129db04fa9a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmplahgmcs1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:01 np0005603609 nova_compute[221550]: 2026-01-31 07:35:01.392 221554 DEBUG oslo_concurrency.processutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49a7f5cb-f98a-40cd-953f-1129db04fa9a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmplahgmcs1" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:01 np0005603609 nova_compute[221550]: 2026-01-31 07:35:01.420 221554 DEBUG nova.storage.rbd_utils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] rbd image 49a7f5cb-f98a-40cd-953f-1129db04fa9a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:35:01 np0005603609 nova_compute[221550]: 2026-01-31 07:35:01.424 221554 DEBUG oslo_concurrency.processutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49a7f5cb-f98a-40cd-953f-1129db04fa9a/disk.config 49a7f5cb-f98a-40cd-953f-1129db04fa9a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:01.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:01 np0005603609 nova_compute[221550]: 2026-01-31 07:35:01.773 221554 DEBUG oslo_concurrency.processutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49a7f5cb-f98a-40cd-953f-1129db04fa9a/disk.config 49a7f5cb-f98a-40cd-953f-1129db04fa9a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:01 np0005603609 nova_compute[221550]: 2026-01-31 07:35:01.775 221554 INFO nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Deleting local config drive /var/lib/nova/instances/49a7f5cb-f98a-40cd-953f-1129db04fa9a/disk.config because it was imported into RBD.#033[00m
Jan 31 02:35:01 np0005603609 systemd-machined[190912]: New machine qemu-8-instance-00000010.
Jan 31 02:35:01 np0005603609 systemd[1]: Started Virtual Machine qemu-8-instance-00000010.
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.115 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.527 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844902.5273852, 49a7f5cb-f98a-40cd-953f-1129db04fa9a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.528 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.530 221554 DEBUG nova.compute.manager [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.530 221554 DEBUG nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.534 221554 INFO nova.virt.libvirt.driver [-] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Instance spawned successfully.#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.534 221554 DEBUG nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.609 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.616 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.620 221554 DEBUG nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.620 221554 DEBUG nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.621 221554 DEBUG nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.622 221554 DEBUG nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.622 221554 DEBUG nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.622 221554 DEBUG nova.virt.libvirt.driver [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:35:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:35:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:02.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.797 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.798 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844902.5300534, 49a7f5cb-f98a-40cd-953f-1129db04fa9a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.798 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] VM Started (Lifecycle Event)#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.879 221554 INFO nova.compute.manager [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Took 4.60 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.879 221554 DEBUG nova.compute.manager [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.909 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:35:02 np0005603609 nova_compute[221550]: 2026-01-31 07:35:02.913 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:35:03 np0005603609 nova_compute[221550]: 2026-01-31 07:35:03.003 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:35:03 np0005603609 nova_compute[221550]: 2026-01-31 07:35:03.055 221554 INFO nova.compute.manager [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Took 5.99 seconds to build instance.#033[00m
Jan 31 02:35:03 np0005603609 nova_compute[221550]: 2026-01-31 07:35:03.114 221554 DEBUG oslo_concurrency.lockutils [None req-ebc05264-fa88-49e4-834d-6a6c56f5b3c6 f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lock "49a7f5cb-f98a-40cd-953f-1129db04fa9a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:03.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e146 e146: 3 total, 3 up, 3 in
Jan 31 02:35:04 np0005603609 nova_compute[221550]: 2026-01-31 07:35:04.436 221554 DEBUG nova.compute.manager [None req-2f64cf7f-9152-4c87-9101-a87a488243b2 8e43d63d0de94c329e636d3d5cceb3e7 810e5a1997cb4613b59fb65f4db09603 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:35:04 np0005603609 nova_compute[221550]: 2026-01-31 07:35:04.440 221554 INFO nova.compute.manager [None req-2f64cf7f-9152-4c87-9101-a87a488243b2 8e43d63d0de94c329e636d3d5cceb3e7 810e5a1997cb4613b59fb65f4db09603 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Retrieving diagnostics#033[00m
Jan 31 02:35:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:04.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:05.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:05 np0005603609 nova_compute[221550]: 2026-01-31 07:35:05.698 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:06.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:07 np0005603609 nova_compute[221550]: 2026-01-31 07:35:07.118 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:07.467 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:07.467 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:07.467 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:07.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:08 np0005603609 podman[228725]: 2026-01-31 07:35:08.168627208 +0000 UTC m=+0.049191482 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 31 02:35:08 np0005603609 podman[228724]: 2026-01-31 07:35:08.193728262 +0000 UTC m=+0.074407228 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 31 02:35:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:08.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:09.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:10 np0005603609 nova_compute[221550]: 2026-01-31 07:35:10.700 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:10.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:35:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:11.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:35:12 np0005603609 nova_compute[221550]: 2026-01-31 07:35:12.121 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:12.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:13.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:14.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:15 np0005603609 nova_compute[221550]: 2026-01-31 07:35:15.166 221554 DEBUG nova.compute.manager [None req-67ae9eaa-9ef5-498e-a1c3-5a877145c751 8e43d63d0de94c329e636d3d5cceb3e7 810e5a1997cb4613b59fb65f4db09603 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:35:15 np0005603609 nova_compute[221550]: 2026-01-31 07:35:15.170 221554 INFO nova.compute.manager [None req-67ae9eaa-9ef5-498e-a1c3-5a877145c751 8e43d63d0de94c329e636d3d5cceb3e7 810e5a1997cb4613b59fb65f4db09603 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Retrieving diagnostics#033[00m
Jan 31 02:35:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:15.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:15 np0005603609 nova_compute[221550]: 2026-01-31 07:35:15.701 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:15 np0005603609 nova_compute[221550]: 2026-01-31 07:35:15.916 221554 DEBUG oslo_concurrency.lockutils [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Acquiring lock "49a7f5cb-f98a-40cd-953f-1129db04fa9a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:15 np0005603609 nova_compute[221550]: 2026-01-31 07:35:15.916 221554 DEBUG oslo_concurrency.lockutils [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lock "49a7f5cb-f98a-40cd-953f-1129db04fa9a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:15 np0005603609 nova_compute[221550]: 2026-01-31 07:35:15.917 221554 DEBUG oslo_concurrency.lockutils [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Acquiring lock "49a7f5cb-f98a-40cd-953f-1129db04fa9a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:15 np0005603609 nova_compute[221550]: 2026-01-31 07:35:15.917 221554 DEBUG oslo_concurrency.lockutils [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lock "49a7f5cb-f98a-40cd-953f-1129db04fa9a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:15 np0005603609 nova_compute[221550]: 2026-01-31 07:35:15.917 221554 DEBUG oslo_concurrency.lockutils [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lock "49a7f5cb-f98a-40cd-953f-1129db04fa9a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:15 np0005603609 nova_compute[221550]: 2026-01-31 07:35:15.918 221554 INFO nova.compute.manager [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Terminating instance#033[00m
Jan 31 02:35:15 np0005603609 nova_compute[221550]: 2026-01-31 07:35:15.919 221554 DEBUG oslo_concurrency.lockutils [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Acquiring lock "refresh_cache-49a7f5cb-f98a-40cd-953f-1129db04fa9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:35:15 np0005603609 nova_compute[221550]: 2026-01-31 07:35:15.919 221554 DEBUG oslo_concurrency.lockutils [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Acquired lock "refresh_cache-49a7f5cb-f98a-40cd-953f-1129db04fa9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:35:15 np0005603609 nova_compute[221550]: 2026-01-31 07:35:15.920 221554 DEBUG nova.network.neutron [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:35:16 np0005603609 nova_compute[221550]: 2026-01-31 07:35:16.125 221554 DEBUG nova.network.neutron [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:35:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:16 np0005603609 nova_compute[221550]: 2026-01-31 07:35:16.421 221554 DEBUG nova.network.neutron [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:35:16 np0005603609 nova_compute[221550]: 2026-01-31 07:35:16.588 221554 DEBUG oslo_concurrency.lockutils [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Releasing lock "refresh_cache-49a7f5cb-f98a-40cd-953f-1129db04fa9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:35:16 np0005603609 nova_compute[221550]: 2026-01-31 07:35:16.589 221554 DEBUG nova.compute.manager [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:35:16 np0005603609 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000010.scope: Deactivated successfully.
Jan 31 02:35:16 np0005603609 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000010.scope: Consumed 12.386s CPU time.
Jan 31 02:35:16 np0005603609 systemd-machined[190912]: Machine qemu-8-instance-00000010 terminated.
Jan 31 02:35:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:35:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:16.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:35:16 np0005603609 nova_compute[221550]: 2026-01-31 07:35:16.810 221554 INFO nova.virt.libvirt.driver [-] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Instance destroyed successfully.#033[00m
Jan 31 02:35:16 np0005603609 nova_compute[221550]: 2026-01-31 07:35:16.811 221554 DEBUG nova.objects.instance [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lazy-loading 'resources' on Instance uuid 49a7f5cb-f98a-40cd-953f-1129db04fa9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:35:17 np0005603609 nova_compute[221550]: 2026-01-31 07:35:17.168 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:17.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:18 np0005603609 nova_compute[221550]: 2026-01-31 07:35:18.329 221554 DEBUG nova.compute.manager [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 31 02:35:18 np0005603609 nova_compute[221550]: 2026-01-31 07:35:18.581 221554 DEBUG oslo_concurrency.lockutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:18 np0005603609 nova_compute[221550]: 2026-01-31 07:35:18.582 221554 DEBUG oslo_concurrency.lockutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:18 np0005603609 nova_compute[221550]: 2026-01-31 07:35:18.738 221554 DEBUG nova.objects.instance [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'pci_requests' on Instance uuid 871711de-f993-4592-83a2-a36c4039786d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:35:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:18.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:18 np0005603609 nova_compute[221550]: 2026-01-31 07:35:18.843 221554 DEBUG nova.virt.hardware [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:35:18 np0005603609 nova_compute[221550]: 2026-01-31 07:35:18.843 221554 INFO nova.compute.claims [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:35:18 np0005603609 nova_compute[221550]: 2026-01-31 07:35:18.844 221554 DEBUG nova.objects.instance [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'resources' on Instance uuid 871711de-f993-4592-83a2-a36c4039786d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:35:19 np0005603609 nova_compute[221550]: 2026-01-31 07:35:19.173 221554 DEBUG nova.objects.instance [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'pci_devices' on Instance uuid 871711de-f993-4592-83a2-a36c4039786d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:35:19 np0005603609 nova_compute[221550]: 2026-01-31 07:35:19.271 221554 INFO nova.compute.resource_tracker [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Updating resource usage from migration 7ac5adc1-a6e9-43a5-8509-2086500fde0f#033[00m
Jan 31 02:35:19 np0005603609 nova_compute[221550]: 2026-01-31 07:35:19.271 221554 DEBUG nova.compute.resource_tracker [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Starting to track incoming migration 7ac5adc1-a6e9-43a5-8509-2086500fde0f with flavor e3bd1dad-95f3-4ed9-94b4-27245cd798b5 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 31 02:35:19 np0005603609 nova_compute[221550]: 2026-01-31 07:35:19.350 221554 DEBUG oslo_concurrency.processutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:19.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:35:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1939639995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:35:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:19.822 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:35:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:19.823 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:35:19 np0005603609 nova_compute[221550]: 2026-01-31 07:35:19.827 221554 DEBUG oslo_concurrency.processutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:19 np0005603609 nova_compute[221550]: 2026-01-31 07:35:19.836 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:19 np0005603609 nova_compute[221550]: 2026-01-31 07:35:19.841 221554 DEBUG nova.compute.provider_tree [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:35:19 np0005603609 nova_compute[221550]: 2026-01-31 07:35:19.871 221554 DEBUG nova.scheduler.client.report [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:35:19 np0005603609 nova_compute[221550]: 2026-01-31 07:35:19.921 221554 DEBUG oslo_concurrency.lockutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:19 np0005603609 nova_compute[221550]: 2026-01-31 07:35:19.921 221554 INFO nova.compute.manager [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Migrating#033[00m
Jan 31 02:35:20 np0005603609 nova_compute[221550]: 2026-01-31 07:35:20.676 221554 INFO nova.virt.libvirt.driver [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Deleting instance files /var/lib/nova/instances/49a7f5cb-f98a-40cd-953f-1129db04fa9a_del#033[00m
Jan 31 02:35:20 np0005603609 nova_compute[221550]: 2026-01-31 07:35:20.677 221554 INFO nova.virt.libvirt.driver [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Deletion of /var/lib/nova/instances/49a7f5cb-f98a-40cd-953f-1129db04fa9a_del complete#033[00m
Jan 31 02:35:20 np0005603609 nova_compute[221550]: 2026-01-31 07:35:20.702 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:20 np0005603609 nova_compute[221550]: 2026-01-31 07:35:20.737 221554 INFO nova.compute.manager [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Took 4.15 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:35:20 np0005603609 nova_compute[221550]: 2026-01-31 07:35:20.738 221554 DEBUG oslo.service.loopingcall [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:35:20 np0005603609 nova_compute[221550]: 2026-01-31 07:35:20.738 221554 DEBUG nova.compute.manager [-] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:35:20 np0005603609 nova_compute[221550]: 2026-01-31 07:35:20.739 221554 DEBUG nova.network.neutron [-] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:35:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:35:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:20.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:35:20 np0005603609 systemd-logind[823]: New session 51 of user nova.
Jan 31 02:35:20 np0005603609 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 02:35:20 np0005603609 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 02:35:20 np0005603609 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 02:35:20 np0005603609 systemd[1]: Starting User Manager for UID 42436...
Jan 31 02:35:21 np0005603609 systemd[228817]: Queued start job for default target Main User Target.
Jan 31 02:35:21 np0005603609 systemd[228817]: Created slice User Application Slice.
Jan 31 02:35:21 np0005603609 systemd[228817]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 02:35:21 np0005603609 systemd[228817]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 02:35:21 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:35:21 np0005603609 systemd[228817]: Reached target Paths.
Jan 31 02:35:21 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:35:21 np0005603609 systemd[228817]: Reached target Timers.
Jan 31 02:35:21 np0005603609 systemd[228817]: Starting D-Bus User Message Bus Socket...
Jan 31 02:35:21 np0005603609 systemd[228817]: Starting Create User's Volatile Files and Directories...
Jan 31 02:35:21 np0005603609 systemd[228817]: Finished Create User's Volatile Files and Directories.
Jan 31 02:35:21 np0005603609 systemd[228817]: Listening on D-Bus User Message Bus Socket.
Jan 31 02:35:21 np0005603609 systemd[228817]: Reached target Sockets.
Jan 31 02:35:21 np0005603609 systemd[228817]: Reached target Basic System.
Jan 31 02:35:21 np0005603609 systemd[228817]: Reached target Main User Target.
Jan 31 02:35:21 np0005603609 systemd[228817]: Startup finished in 198ms.
Jan 31 02:35:21 np0005603609 systemd[1]: Started User Manager for UID 42436.
Jan 31 02:35:21 np0005603609 systemd[1]: Started Session 51 of User nova.
Jan 31 02:35:21 np0005603609 systemd[1]: session-51.scope: Deactivated successfully.
Jan 31 02:35:21 np0005603609 systemd-logind[823]: Session 51 logged out. Waiting for processes to exit.
Jan 31 02:35:21 np0005603609 systemd-logind[823]: Removed session 51.
Jan 31 02:35:21 np0005603609 nova_compute[221550]: 2026-01-31 07:35:21.220 221554 DEBUG nova.network.neutron [-] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:35:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:21 np0005603609 systemd-logind[823]: New session 53 of user nova.
Jan 31 02:35:21 np0005603609 systemd[1]: Started Session 53 of User nova.
Jan 31 02:35:21 np0005603609 nova_compute[221550]: 2026-01-31 07:35:21.337 221554 DEBUG nova.network.neutron [-] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:35:21 np0005603609 systemd[1]: session-53.scope: Deactivated successfully.
Jan 31 02:35:21 np0005603609 systemd-logind[823]: Session 53 logged out. Waiting for processes to exit.
Jan 31 02:35:21 np0005603609 systemd-logind[823]: Removed session 53.
Jan 31 02:35:21 np0005603609 nova_compute[221550]: 2026-01-31 07:35:21.383 221554 INFO nova.compute.manager [-] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Took 0.64 seconds to deallocate network for instance.#033[00m
Jan 31 02:35:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:21.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:21 np0005603609 nova_compute[221550]: 2026-01-31 07:35:21.687 221554 DEBUG oslo_concurrency.lockutils [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:21 np0005603609 nova_compute[221550]: 2026-01-31 07:35:21.688 221554 DEBUG oslo_concurrency.lockutils [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:21 np0005603609 nova_compute[221550]: 2026-01-31 07:35:21.761 221554 DEBUG oslo_concurrency.processutils [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:21.824 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:35:22 np0005603609 nova_compute[221550]: 2026-01-31 07:35:22.225 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:35:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3973763842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:35:22 np0005603609 nova_compute[221550]: 2026-01-31 07:35:22.275 221554 DEBUG oslo_concurrency.processutils [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:22 np0005603609 nova_compute[221550]: 2026-01-31 07:35:22.279 221554 DEBUG nova.compute.provider_tree [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:35:22 np0005603609 nova_compute[221550]: 2026-01-31 07:35:22.442 221554 DEBUG nova.scheduler.client.report [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:35:22 np0005603609 nova_compute[221550]: 2026-01-31 07:35:22.733 221554 DEBUG oslo_concurrency.lockutils [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:22.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:22 np0005603609 nova_compute[221550]: 2026-01-31 07:35:22.954 221554 INFO nova.scheduler.client.report [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Deleted allocations for instance 49a7f5cb-f98a-40cd-953f-1129db04fa9a#033[00m
Jan 31 02:35:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:23.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:23 np0005603609 nova_compute[221550]: 2026-01-31 07:35:23.629 221554 DEBUG oslo_concurrency.lockutils [None req-2ef02d66-7d29-4a8c-a1d5-6befbed0920a f206df257dcf4fb1aa17f4cc2a4c0ef0 ef637202bdd64f0ea4eb2b3d5ba702a7 - - default default] Lock "49a7f5cb-f98a-40cd-953f-1129db04fa9a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:24.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:25.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:25 np0005603609 nova_compute[221550]: 2026-01-31 07:35:25.705 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:26.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:27 np0005603609 nova_compute[221550]: 2026-01-31 07:35:27.239 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:27.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:28.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:28 np0005603609 nova_compute[221550]: 2026-01-31 07:35:28.883 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:28 np0005603609 nova_compute[221550]: 2026-01-31 07:35:28.883 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:28 np0005603609 nova_compute[221550]: 2026-01-31 07:35:28.899 221554 DEBUG nova.compute.manager [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:35:28 np0005603609 nova_compute[221550]: 2026-01-31 07:35:28.989 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:28 np0005603609 nova_compute[221550]: 2026-01-31 07:35:28.990 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:28 np0005603609 nova_compute[221550]: 2026-01-31 07:35:28.997 221554 DEBUG nova.virt.hardware [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:35:28 np0005603609 nova_compute[221550]: 2026-01-31 07:35:28.998 221554 INFO nova.compute.claims [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.125 221554 DEBUG oslo_concurrency.processutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:29.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:35:29 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3479424062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.563 221554 DEBUG oslo_concurrency.processutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.569 221554 DEBUG nova.compute.provider_tree [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.583 221554 DEBUG nova.scheduler.client.report [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.607 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.608 221554 DEBUG nova.compute.manager [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.691 221554 DEBUG nova.compute.manager [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.692 221554 DEBUG nova.network.neutron [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.711 221554 INFO nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.728 221554 DEBUG nova.compute.manager [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.825 221554 DEBUG nova.compute.manager [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.826 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.827 221554 INFO nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Creating image(s)#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.854 221554 DEBUG nova.storage.rbd_utils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] rbd image b9a76fba-cff2-455a-9aa6-7b839819e78b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.883 221554 DEBUG nova.storage.rbd_utils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] rbd image b9a76fba-cff2-455a-9aa6-7b839819e78b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.915 221554 DEBUG nova.storage.rbd_utils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] rbd image b9a76fba-cff2-455a-9aa6-7b839819e78b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.919 221554 DEBUG oslo_concurrency.processutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.989 221554 DEBUG oslo_concurrency.processutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.990 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.991 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:29 np0005603609 nova_compute[221550]: 2026-01-31 07:35:29.991 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:30 np0005603609 nova_compute[221550]: 2026-01-31 07:35:30.016 221554 DEBUG nova.storage.rbd_utils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] rbd image b9a76fba-cff2-455a-9aa6-7b839819e78b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:35:30 np0005603609 nova_compute[221550]: 2026-01-31 07:35:30.020 221554 DEBUG oslo_concurrency.processutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 b9a76fba-cff2-455a-9aa6-7b839819e78b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:30 np0005603609 nova_compute[221550]: 2026-01-31 07:35:30.307 221554 DEBUG oslo_concurrency.processutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 b9a76fba-cff2-455a-9aa6-7b839819e78b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:30 np0005603609 nova_compute[221550]: 2026-01-31 07:35:30.377 221554 DEBUG nova.storage.rbd_utils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] resizing rbd image b9a76fba-cff2-455a-9aa6-7b839819e78b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:35:30 np0005603609 nova_compute[221550]: 2026-01-31 07:35:30.485 221554 DEBUG nova.objects.instance [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lazy-loading 'migration_context' on Instance uuid b9a76fba-cff2-455a-9aa6-7b839819e78b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:35:30 np0005603609 nova_compute[221550]: 2026-01-31 07:35:30.511 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:35:30 np0005603609 nova_compute[221550]: 2026-01-31 07:35:30.511 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Ensure instance console log exists: /var/lib/nova/instances/b9a76fba-cff2-455a-9aa6-7b839819e78b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:35:30 np0005603609 nova_compute[221550]: 2026-01-31 07:35:30.512 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:30 np0005603609 nova_compute[221550]: 2026-01-31 07:35:30.512 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:30 np0005603609 nova_compute[221550]: 2026-01-31 07:35:30.512 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:30 np0005603609 nova_compute[221550]: 2026-01-31 07:35:30.559 221554 DEBUG nova.policy [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea44c45fe7df4f36b5c722fbfc214f2e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '29d136be5e384689a95acd607131dfd0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:35:30 np0005603609 nova_compute[221550]: 2026-01-31 07:35:30.706 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:30.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:31.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:31 np0005603609 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 02:35:31 np0005603609 systemd[228817]: Activating special unit Exit the Session...
Jan 31 02:35:31 np0005603609 systemd[228817]: Stopped target Main User Target.
Jan 31 02:35:31 np0005603609 systemd[228817]: Stopped target Basic System.
Jan 31 02:35:31 np0005603609 systemd[228817]: Stopped target Paths.
Jan 31 02:35:31 np0005603609 systemd[228817]: Stopped target Sockets.
Jan 31 02:35:31 np0005603609 systemd[228817]: Stopped target Timers.
Jan 31 02:35:31 np0005603609 systemd[228817]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 02:35:31 np0005603609 systemd[228817]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 02:35:31 np0005603609 systemd[228817]: Closed D-Bus User Message Bus Socket.
Jan 31 02:35:31 np0005603609 systemd[228817]: Stopped Create User's Volatile Files and Directories.
Jan 31 02:35:31 np0005603609 systemd[228817]: Removed slice User Application Slice.
Jan 31 02:35:31 np0005603609 systemd[228817]: Reached target Shutdown.
Jan 31 02:35:31 np0005603609 systemd[228817]: Finished Exit the Session.
Jan 31 02:35:31 np0005603609 systemd[228817]: Reached target Exit the Session.
Jan 31 02:35:31 np0005603609 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 02:35:31 np0005603609 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 02:35:31 np0005603609 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 02:35:31 np0005603609 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 02:35:31 np0005603609 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 02:35:31 np0005603609 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 02:35:31 np0005603609 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 02:35:31 np0005603609 nova_compute[221550]: 2026-01-31 07:35:31.808 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844916.8064196, 49a7f5cb-f98a-40cd-953f-1129db04fa9a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:35:31 np0005603609 nova_compute[221550]: 2026-01-31 07:35:31.808 221554 INFO nova.compute.manager [-] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:35:31 np0005603609 nova_compute[221550]: 2026-01-31 07:35:31.847 221554 DEBUG nova.compute.manager [None req-a634eede-ba1d-415f-9a41-2e42f1cc9a59 - - - - - -] [instance: 49a7f5cb-f98a-40cd-953f-1129db04fa9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:35:32 np0005603609 nova_compute[221550]: 2026-01-31 07:35:32.243 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:32 np0005603609 nova_compute[221550]: 2026-01-31 07:35:32.562 221554 DEBUG nova.network.neutron [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Successfully updated port: 596408d2-9689-472f-b2fb-a85a75df2923 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:35:32 np0005603609 nova_compute[221550]: 2026-01-31 07:35:32.581 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:35:32 np0005603609 nova_compute[221550]: 2026-01-31 07:35:32.581 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquired lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:35:32 np0005603609 nova_compute[221550]: 2026-01-31 07:35:32.582 221554 DEBUG nova.network.neutron [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:35:32 np0005603609 nova_compute[221550]: 2026-01-31 07:35:32.744 221554 DEBUG nova.compute.manager [req-86da61ee-a062-42de-8d4c-0d59c56c2184 req-5a019298-42a4-45b9-bb24-b31320367660 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-changed-596408d2-9689-472f-b2fb-a85a75df2923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:35:32 np0005603609 nova_compute[221550]: 2026-01-31 07:35:32.744 221554 DEBUG nova.compute.manager [req-86da61ee-a062-42de-8d4c-0d59c56c2184 req-5a019298-42a4-45b9-bb24-b31320367660 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Refreshing instance network info cache due to event network-changed-596408d2-9689-472f-b2fb-a85a75df2923. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:35:32 np0005603609 nova_compute[221550]: 2026-01-31 07:35:32.744 221554 DEBUG oslo_concurrency.lockutils [req-86da61ee-a062-42de-8d4c-0d59c56c2184 req-5a019298-42a4-45b9-bb24-b31320367660 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:35:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:32.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:32 np0005603609 nova_compute[221550]: 2026-01-31 07:35:32.872 221554 DEBUG nova.network.neutron [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:35:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:33.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.028 221554 DEBUG nova.network.neutron [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Updating instance_info_cache with network_info: [{"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:35:34 np0005603609 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.340 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Releasing lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.340 221554 DEBUG nova.compute.manager [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Instance network_info: |[{"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.341 221554 DEBUG oslo_concurrency.lockutils [req-86da61ee-a062-42de-8d4c-0d59c56c2184 req-5a019298-42a4-45b9-bb24-b31320367660 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.341 221554 DEBUG nova.network.neutron [req-86da61ee-a062-42de-8d4c-0d59c56c2184 req-5a019298-42a4-45b9-bb24-b31320367660 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Refreshing network info cache for port 596408d2-9689-472f-b2fb-a85a75df2923 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.346 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Start _get_guest_xml network_info=[{"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.351 221554 WARNING nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.357 221554 DEBUG nova.virt.libvirt.host [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.357 221554 DEBUG nova.virt.libvirt.host [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.360 221554 DEBUG nova.virt.libvirt.host [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.360 221554 DEBUG nova.virt.libvirt.host [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.361 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.361 221554 DEBUG nova.virt.hardware [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.362 221554 DEBUG nova.virt.hardware [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.362 221554 DEBUG nova.virt.hardware [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.362 221554 DEBUG nova.virt.hardware [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.362 221554 DEBUG nova.virt.hardware [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.363 221554 DEBUG nova.virt.hardware [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.363 221554 DEBUG nova.virt.hardware [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.363 221554 DEBUG nova.virt.hardware [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.363 221554 DEBUG nova.virt.hardware [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.364 221554 DEBUG nova.virt.hardware [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.364 221554 DEBUG nova.virt.hardware [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.366 221554 DEBUG oslo_concurrency.processutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:35:34 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3916158877' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:35:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:34.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.788 221554 DEBUG oslo_concurrency.processutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.815 221554 DEBUG nova.storage.rbd_utils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] rbd image b9a76fba-cff2-455a-9aa6-7b839819e78b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:35:34 np0005603609 nova_compute[221550]: 2026-01-31 07:35:34.820 221554 DEBUG oslo_concurrency.processutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:35:35 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/749401607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.363 221554 DEBUG oslo_concurrency.processutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.365 221554 DEBUG nova.virt.libvirt.vif [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:35:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-879430066',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-879430066',id=19,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='29d136be5e384689a95acd607131dfd0',ramdisk_id='',reservation_id='r-9uooh3ao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1421195096',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:35:29Z,user_data=None,user_id='ea44c45fe7df4f36b5c722fbfc214f2e',uuid=b9a76fba-cff2-455a-9aa6-7b839819e78b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.365 221554 DEBUG nova.network.os_vif_util [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Converting VIF {"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.366 221554 DEBUG nova.network.os_vif_util [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:f1:d6,bridge_name='br-int',has_traffic_filtering=True,id=596408d2-9689-472f-b2fb-a85a75df2923,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap596408d2-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.367 221554 DEBUG nova.objects.instance [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lazy-loading 'pci_devices' on Instance uuid b9a76fba-cff2-455a-9aa6-7b839819e78b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.403 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  <uuid>b9a76fba-cff2-455a-9aa6-7b839819e78b</uuid>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  <name>instance-00000013</name>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-879430066</nova:name>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:35:34</nova:creationTime>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        <nova:user uuid="ea44c45fe7df4f36b5c722fbfc214f2e">tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member</nova:user>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        <nova:project uuid="29d136be5e384689a95acd607131dfd0">tempest-LiveAutoBlockMigrationV225Test-1421195096</nova:project>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        <nova:port uuid="596408d2-9689-472f-b2fb-a85a75df2923">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <entry name="serial">b9a76fba-cff2-455a-9aa6-7b839819e78b</entry>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <entry name="uuid">b9a76fba-cff2-455a-9aa6-7b839819e78b</entry>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/b9a76fba-cff2-455a-9aa6-7b839819e78b_disk">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/b9a76fba-cff2-455a-9aa6-7b839819e78b_disk.config">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:45:f1:d6"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <target dev="tap596408d2-96"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/b9a76fba-cff2-455a-9aa6-7b839819e78b/console.log" append="off"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:35:35 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:35:35 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:35:35 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:35:35 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.405 221554 DEBUG nova.compute.manager [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Preparing to wait for external event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.405 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.405 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.406 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.406 221554 DEBUG nova.virt.libvirt.vif [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:35:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-879430066',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-879430066',id=19,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='29d136be5e384689a95acd607131dfd0',ramdisk_id='',reservation_id='r-9uooh3ao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1421195096',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:35:29Z,user_data=None,user_id='ea44c45fe7df4f36b5c722fbfc214f2e',uuid=b9a76fba-cff2-455a-9aa6-7b839819e78b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.407 221554 DEBUG nova.network.os_vif_util [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Converting VIF {"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.407 221554 DEBUG nova.network.os_vif_util [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:f1:d6,bridge_name='br-int',has_traffic_filtering=True,id=596408d2-9689-472f-b2fb-a85a75df2923,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap596408d2-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.408 221554 DEBUG os_vif [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:f1:d6,bridge_name='br-int',has_traffic_filtering=True,id=596408d2-9689-472f-b2fb-a85a75df2923,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap596408d2-96') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.408 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.409 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.409 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.414 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.414 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap596408d2-96, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.415 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap596408d2-96, col_values=(('external_ids', {'iface-id': '596408d2-9689-472f-b2fb-a85a75df2923', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:f1:d6', 'vm-uuid': 'b9a76fba-cff2-455a-9aa6-7b839819e78b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.417 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:35 np0005603609 NetworkManager[49064]: <info>  [1769844935.4185] manager: (tap596408d2-96): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.422 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.432 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.434 221554 INFO os_vif [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:f1:d6,bridge_name='br-int',has_traffic_filtering=True,id=596408d2-9689-472f-b2fb-a85a75df2923,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap596408d2-96')#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.530 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.530 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.530 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] No VIF found with MAC fa:16:3e:45:f1:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.531 221554 INFO nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Using config drive#033[00m
Jan 31 02:35:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:35.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.560 221554 DEBUG nova.storage.rbd_utils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] rbd image b9a76fba-cff2-455a-9aa6-7b839819e78b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:35:35 np0005603609 nova_compute[221550]: 2026-01-31 07:35:35.709 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:36 np0005603609 nova_compute[221550]: 2026-01-31 07:35:36.194 221554 DEBUG oslo_concurrency.lockutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:35:36 np0005603609 nova_compute[221550]: 2026-01-31 07:35:36.195 221554 DEBUG oslo_concurrency.lockutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquired lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:35:36 np0005603609 nova_compute[221550]: 2026-01-31 07:35:36.195 221554 DEBUG nova.network.neutron [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:35:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:36 np0005603609 nova_compute[221550]: 2026-01-31 07:35:36.391 221554 INFO nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Creating config drive at /var/lib/nova/instances/b9a76fba-cff2-455a-9aa6-7b839819e78b/disk.config#033[00m
Jan 31 02:35:36 np0005603609 nova_compute[221550]: 2026-01-31 07:35:36.394 221554 DEBUG oslo_concurrency.processutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b9a76fba-cff2-455a-9aa6-7b839819e78b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpnxteiezq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:36 np0005603609 nova_compute[221550]: 2026-01-31 07:35:36.420 221554 DEBUG nova.network.neutron [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:35:36 np0005603609 nova_compute[221550]: 2026-01-31 07:35:36.510 221554 DEBUG oslo_concurrency.processutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b9a76fba-cff2-455a-9aa6-7b839819e78b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpnxteiezq" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:36 np0005603609 nova_compute[221550]: 2026-01-31 07:35:36.542 221554 DEBUG nova.storage.rbd_utils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] rbd image b9a76fba-cff2-455a-9aa6-7b839819e78b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:35:36 np0005603609 nova_compute[221550]: 2026-01-31 07:35:36.545 221554 DEBUG oslo_concurrency.processutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b9a76fba-cff2-455a-9aa6-7b839819e78b/disk.config b9a76fba-cff2-455a-9aa6-7b839819e78b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:36 np0005603609 nova_compute[221550]: 2026-01-31 07:35:36.705 221554 DEBUG nova.network.neutron [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:35:36 np0005603609 nova_compute[221550]: 2026-01-31 07:35:36.722 221554 DEBUG oslo_concurrency.lockutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Releasing lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:35:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:36.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:36 np0005603609 nova_compute[221550]: 2026-01-31 07:35:36.906 221554 DEBUG nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 31 02:35:36 np0005603609 nova_compute[221550]: 2026-01-31 07:35:36.909 221554 DEBUG nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 02:35:36 np0005603609 nova_compute[221550]: 2026-01-31 07:35:36.909 221554 INFO nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Creating image(s)#033[00m
Jan 31 02:35:36 np0005603609 nova_compute[221550]: 2026-01-31 07:35:36.964 221554 DEBUG nova.storage.rbd_utils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] creating snapshot(nova-resize) on rbd image(871711de-f993-4592-83a2-a36c4039786d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.207 221554 DEBUG oslo_concurrency.processutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b9a76fba-cff2-455a-9aa6-7b839819e78b/disk.config b9a76fba-cff2-455a-9aa6-7b839819e78b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.208 221554 INFO nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Deleting local config drive /var/lib/nova/instances/b9a76fba-cff2-455a-9aa6-7b839819e78b/disk.config because it was imported into RBD.#033[00m
Jan 31 02:35:37 np0005603609 kernel: tap596408d2-96: entered promiscuous mode
Jan 31 02:35:37 np0005603609 NetworkManager[49064]: <info>  [1769844937.2644] manager: (tap596408d2-96): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Jan 31 02:35:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:37Z|00064|binding|INFO|Claiming lport 596408d2-9689-472f-b2fb-a85a75df2923 for this chassis.
Jan 31 02:35:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:37Z|00065|binding|INFO|596408d2-9689-472f-b2fb-a85a75df2923: Claiming fa:16:3e:45:f1:d6 10.100.0.14
Jan 31 02:35:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:37Z|00066|binding|INFO|Claiming lport b2e77a7e-2125-46bd-8c49-dd619c7caf36 for this chassis.
Jan 31 02:35:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:37Z|00067|binding|INFO|b2e77a7e-2125-46bd-8c49-dd619c7caf36: Claiming fa:16:3e:cc:5d:23 19.80.0.215
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.264 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.272 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.284 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:5d:23 19.80.0.215'], port_security=['fa:16:3e:cc:5d:23 19.80.0.215'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['596408d2-9689-472f-b2fb-a85a75df2923'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1541952823', 'neutron:cidrs': '19.80.0.215/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-994f8d42-6738-4c92-b80e-8dbb63919128', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1541952823', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=67507551-f2d4-4c0b-ab9d-4c732fbaf469, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b2e77a7e-2125-46bd-8c49-dd619c7caf36) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.287 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:f1:d6 10.100.0.14'], port_security=['fa:16:3e:45:f1:d6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-334582475', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b9a76fba-cff2-455a-9aa6-7b839819e78b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-334582475', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f42f93-f94d-4170-883d-d45cddf5fdad, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=596408d2-9689-472f-b2fb-a85a75df2923) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.288 140058 INFO neutron.agent.ovn.metadata.agent [-] Port b2e77a7e-2125-46bd-8c49-dd619c7caf36 in datapath 994f8d42-6738-4c92-b80e-8dbb63919128 bound to our chassis#033[00m
Jan 31 02:35:37 np0005603609 systemd-udevd[229224]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.290 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 994f8d42-6738-4c92-b80e-8dbb63919128#033[00m
Jan 31 02:35:37 np0005603609 systemd-machined[190912]: New machine qemu-9-instance-00000013.
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.301 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0e2c2a-035c-4660-be09-b5cfeb3789c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.302 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap994f8d42-61 in ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.303 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap994f8d42-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.303 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3e441867-9e9e-40b7-a3a3-ddf6a796d4f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.304 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7e602a70-be93-4b24-b844-29e7600ae7f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:37 np0005603609 NetworkManager[49064]: <info>  [1769844937.3111] device (tap596408d2-96): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:35:37 np0005603609 NetworkManager[49064]: <info>  [1769844937.3117] device (tap596408d2-96): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.311 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.314 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[6a052c70-d4fc-403c-b116-f926d6651403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:37Z|00068|binding|INFO|Setting lport 596408d2-9689-472f-b2fb-a85a75df2923 ovn-installed in OVS
Jan 31 02:35:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:37Z|00069|binding|INFO|Setting lport 596408d2-9689-472f-b2fb-a85a75df2923 up in Southbound
Jan 31 02:35:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:37Z|00070|binding|INFO|Setting lport b2e77a7e-2125-46bd-8c49-dd619c7caf36 up in Southbound
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.319 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:37 np0005603609 systemd[1]: Started Virtual Machine qemu-9-instance-00000013.
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.337 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4153c25c-ce24-4279-9594-3d8da8448432]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.361 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[8139adee-f8e4-4532-bd2c-4a9da6ee05ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.365 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2e19fe-c904-4090-b4e6-c1df84dfcb7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:37 np0005603609 systemd-udevd[229228]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:35:37 np0005603609 NetworkManager[49064]: <info>  [1769844937.3668] manager: (tap994f8d42-60): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.392 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[60fa1e51-48ad-452d-99ae-1a825531acda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.396 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d716fd-241a-4888-b258-f8a2f6a763a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:37 np0005603609 NetworkManager[49064]: <info>  [1769844937.4187] device (tap994f8d42-60): carrier: link connected
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.423 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f2248a1d-295b-430e-849f-6e9caf9099ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.437 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f3650171-64c7-4ccb-93dd-c09829d3982f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap994f8d42-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:ce:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517711, 'reachable_time': 22256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 229257, 'error': None, 'target': 'ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.452 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0be829f4-1f54-40b8-9219-869f51bf5eaf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:ceb2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517711, 'tstamp': 517711}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 229258, 'error': None, 'target': 'ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.466 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d67bd5c7-49d0-49d2-a7d6-20f732fe7434]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap994f8d42-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:ce:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517711, 'reachable_time': 22256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 229259, 'error': None, 'target': 'ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.492 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8d944c99-0970-4092-8267-1668d453ba0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e147 e147: 3 total, 3 up, 3 in
Jan 31 02:35:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:37.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.543 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2dff9acb-f935-4648-98df-913af689f805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.544 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap994f8d42-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.544 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.545 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap994f8d42-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.547 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:37 np0005603609 kernel: tap994f8d42-60: entered promiscuous mode
Jan 31 02:35:37 np0005603609 NetworkManager[49064]: <info>  [1769844937.5477] manager: (tap994f8d42-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.549 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap994f8d42-60, col_values=(('external_ids', {'iface-id': 'a50841ea-6849-4032-a8d6-6ba9e6fd3a95'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:35:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:37Z|00071|binding|INFO|Releasing lport a50841ea-6849-4032-a8d6-6ba9e6fd3a95 from this chassis (sb_readonly=0)
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.551 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.553 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/994f8d42-6738-4c92-b80e-8dbb63919128.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/994f8d42-6738-4c92-b80e-8dbb63919128.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.554 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2b421515-3d50-4eab-b91c-57c325acba53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.555 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-994f8d42-6738-4c92-b80e-8dbb63919128
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/994f8d42-6738-4c92-b80e-8dbb63919128.pid.haproxy
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 994f8d42-6738-4c92-b80e-8dbb63919128
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 02:35:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:37.556 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128', 'env', 'PROCESS_TAG=haproxy-994f8d42-6738-4c92-b80e-8dbb63919128', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/994f8d42-6738-4c92-b80e-8dbb63919128.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.557 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.578 221554 DEBUG nova.objects.instance [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'trusted_certs' on Instance uuid 871711de-f993-4592-83a2-a36c4039786d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.635 221554 DEBUG nova.network.neutron [req-86da61ee-a062-42de-8d4c-0d59c56c2184 req-5a019298-42a4-45b9-bb24-b31320367660 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Updated VIF entry in instance network info cache for port 596408d2-9689-472f-b2fb-a85a75df2923. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.637 221554 DEBUG nova.network.neutron [req-86da61ee-a062-42de-8d4c-0d59c56c2184 req-5a019298-42a4-45b9-bb24-b31320367660 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Updating instance_info_cache with network_info: [{"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.688 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844937.684459, b9a76fba-cff2-455a-9aa6-7b839819e78b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.688 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] VM Started (Lifecycle Event)
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.691 221554 DEBUG oslo_concurrency.lockutils [req-86da61ee-a062-42de-8d4c-0d59c56c2184 req-5a019298-42a4-45b9-bb24-b31320367660 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.720 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.724 221554 DEBUG nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.724 221554 DEBUG nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Ensure instance console log exists: /var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.725 221554 DEBUG oslo_concurrency.lockutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.725 221554 DEBUG oslo_concurrency.lockutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.726 221554 DEBUG oslo_concurrency.lockutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.728 221554 DEBUG nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.731 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844937.6846898, b9a76fba-cff2-455a-9aa6-7b839819e78b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.732 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] VM Paused (Lifecycle Event)
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.737 221554 WARNING nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.753 221554 DEBUG nova.virt.libvirt.host [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.754 221554 DEBUG nova.virt.libvirt.host [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.755 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.759 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.761 221554 DEBUG nova.virt.libvirt.host [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.761 221554 DEBUG nova.virt.libvirt.host [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.762 221554 DEBUG nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.763 221554 DEBUG nova.virt.hardware [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e3bd1dad-95f3-4ed9-94b4-27245cd798b5',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.763 221554 DEBUG nova.virt.hardware [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.763 221554 DEBUG nova.virt.hardware [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.764 221554 DEBUG nova.virt.hardware [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.764 221554 DEBUG nova.virt.hardware [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.764 221554 DEBUG nova.virt.hardware [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.764 221554 DEBUG nova.virt.hardware [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.765 221554 DEBUG nova.virt.hardware [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.765 221554 DEBUG nova.virt.hardware [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.765 221554 DEBUG nova.virt.hardware [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.765 221554 DEBUG nova.virt.hardware [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.766 221554 DEBUG nova.objects.instance [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'vcpu_model' on Instance uuid 871711de-f993-4592-83a2-a36c4039786d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.795 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:35:37 np0005603609 nova_compute[221550]: 2026-01-31 07:35:37.813 221554 DEBUG oslo_concurrency.processutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:35:37 np0005603609 podman[229370]: 2026-01-31 07:35:37.855288861 +0000 UTC m=+0.025105735 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:35:37 np0005603609 podman[229370]: 2026-01-31 07:35:37.963888552 +0000 UTC m=+0.133705426 container create e7e3401beb5b9d1e83425a96127696d9ae3f35135b2dfd8c1b9c70ca126a2520 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 02:35:38 np0005603609 systemd[1]: Started libpod-conmon-e7e3401beb5b9d1e83425a96127696d9ae3f35135b2dfd8c1b9c70ca126a2520.scope.
Jan 31 02:35:38 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:35:38 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ffda892e4c242ce325daba11f7785a58742fa201232b364435846c70b5da044/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:35:38 np0005603609 podman[229370]: 2026-01-31 07:35:38.070497335 +0000 UTC m=+0.240314219 container init e7e3401beb5b9d1e83425a96127696d9ae3f35135b2dfd8c1b9c70ca126a2520 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:35:38 np0005603609 podman[229370]: 2026-01-31 07:35:38.079359212 +0000 UTC m=+0.249176066 container start e7e3401beb5b9d1e83425a96127696d9ae3f35135b2dfd8c1b9c70ca126a2520 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.085 221554 DEBUG nova.compute.manager [req-e3e754a3-4cf4-4089-a8f3-9b5e2dbcd54e req-804e2877-a4e7-4fbc-837c-44b9fc586f8d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.085 221554 DEBUG oslo_concurrency.lockutils [req-e3e754a3-4cf4-4089-a8f3-9b5e2dbcd54e req-804e2877-a4e7-4fbc-837c-44b9fc586f8d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.086 221554 DEBUG oslo_concurrency.lockutils [req-e3e754a3-4cf4-4089-a8f3-9b5e2dbcd54e req-804e2877-a4e7-4fbc-837c-44b9fc586f8d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.086 221554 DEBUG oslo_concurrency.lockutils [req-e3e754a3-4cf4-4089-a8f3-9b5e2dbcd54e req-804e2877-a4e7-4fbc-837c-44b9fc586f8d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.086 221554 DEBUG nova.compute.manager [req-e3e754a3-4cf4-4089-a8f3-9b5e2dbcd54e req-804e2877-a4e7-4fbc-837c-44b9fc586f8d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Processing event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.087 221554 DEBUG nova.compute.manager [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.092 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844938.0917296, b9a76fba-cff2-455a-9aa6-7b839819e78b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.092 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] VM Resumed (Lifecycle Event)
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.094 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.100 221554 INFO nova.virt.libvirt.driver [-] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Instance spawned successfully.
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.101 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 02:35:38 np0005603609 neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128[229405]: [NOTICE]   (229409) : New worker (229411) forked
Jan 31 02:35:38 np0005603609 neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128[229405]: [NOTICE]   (229409) : Loading success.
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.146 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 596408d2-9689-472f-b2fb-a85a75df2923 in datapath e4d1862b-2abc-4d60-bc48-19a5318038f4 unbound from our chassis
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.149 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4d1862b-2abc-4d60-bc48-19a5318038f4
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.151 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.156 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.157 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.158 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.159 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[890fa5f7-2c7e-4778-b3d8-58557f586b27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.160 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4d1862b-21 in ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.159 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.161 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.162 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4d1862b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.163 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9415d313-be61-4228-94a7-99ca186f7640]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.162 221554 DEBUG nova.virt.libvirt.driver [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.165 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf1199f-33b6-4897-868c-e867e5ccf0f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.177 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[7930901b-a763-4cdd-a181-6fadcda98a7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.178 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.205 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[989c1712-d866-4c83-b831-9a60e0a4cf10]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.225 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a6bb82e4-da09-41a4-b58f-051e65cf0114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.230 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:35:38 np0005603609 systemd-udevd[229241]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:35:38 np0005603609 NetworkManager[49064]: <info>  [1769844938.2351] manager: (tape4d1862b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/42)
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.235 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e995f5e7-f40c-40ff-9b88-41dae9885c5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:38 np0005603609 podman[229422]: 2026-01-31 07:35:38.254726195 +0000 UTC m=+0.048247740 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.256 221554 INFO nova.compute.manager [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Took 8.43 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.257 221554 DEBUG nova.compute.manager [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.264 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1b216e-fbde-4c06-ac45-f3a934097cd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.267 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[99d146ca-aea8-40aa-8862-225c597add60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:38 np0005603609 NetworkManager[49064]: <info>  [1769844938.2907] device (tape4d1862b-20): carrier: link connected
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.291 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d4ebd8-e817-4e84-a5b5-5a3712ae04be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.302 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9740439f-e50e-4f7d-bd9d-c8f1cede617d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d1862b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:38:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517798, 'reachable_time': 43138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 229464, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.314 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7342cdd4-c0ae-4c82-907b-1c81d193ee33]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:3856'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517798, 'tstamp': 517798}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 229469, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.323 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[706aee68-feaf-4853-9fbe-011bb119ad94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d1862b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:38:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517798, 'reachable_time': 43138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 229472, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:35:38 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1976072540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.344 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b1cbff9c-1e5b-4516-aa98-839ddb83ca92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:38 np0005603609 podman[229444]: 2026-01-31 07:35:38.353167398 +0000 UTC m=+0.092880208 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
org.label-schema.license=GPLv2, container_name=ovn_controller)
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.361 221554 DEBUG oslo_concurrency.processutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.386 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[95efcae4-5726-4c09-bad6-98c829c861d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.388 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d1862b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.388 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.388 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4d1862b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:35:38 np0005603609 kernel: tape4d1862b-20: entered promiscuous mode
Jan 31 02:35:38 np0005603609 NetworkManager[49064]: <info>  [1769844938.3909] manager: (tape4d1862b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.392 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4d1862b-20, col_values=(('external_ids', {'iface-id': '632f26c5-40a9-4337-84da-ea4b4bbdf89c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:35:38 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:38Z|00072|binding|INFO|Releasing lport 632f26c5-40a9-4337-84da-ea4b4bbdf89c from this chassis (sb_readonly=0)
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.395 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4d1862b-2abc-4d60-bc48-19a5318038f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4d1862b-2abc-4d60-bc48-19a5318038f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.394 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.399 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[18e298b3-76d9-4d71-aa86-e1720fb698a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.399 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-e4d1862b-2abc-4d60-bc48-19a5318038f4
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/e4d1862b-2abc-4d60-bc48-19a5318038f4.pid.haproxy
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID e4d1862b-2abc-4d60-bc48-19a5318038f4
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:35:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:38.400 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'env', 'PROCESS_TAG=haproxy-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4d1862b-2abc-4d60-bc48-19a5318038f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.407 221554 DEBUG oslo_concurrency.processutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.420 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.442 221554 INFO nova.compute.manager [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Took 9.49 seconds to build instance.#033[00m
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.484 221554 DEBUG oslo_concurrency.lockutils [None req-d386c006-b869-4448-8d92-2a8bf748dd90 ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:38 np0005603609 podman[229547]: 2026-01-31 07:35:38.766695708 +0000 UTC m=+0.060540231 container create 7bc057a84eed6f6cf36b4e8bf79ed703e5b45883fc64953d5de5015a3cfd115a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 02:35:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:38.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:38 np0005603609 systemd[1]: Started libpod-conmon-7bc057a84eed6f6cf36b4e8bf79ed703e5b45883fc64953d5de5015a3cfd115a.scope.
Jan 31 02:35:38 np0005603609 podman[229547]: 2026-01-31 07:35:38.735738991 +0000 UTC m=+0.029583614 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:35:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:35:38 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3834992716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.848 221554 DEBUG oslo_concurrency.processutils [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:38 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.852 221554 DEBUG nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  <uuid>871711de-f993-4592-83a2-a36c4039786d</uuid>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  <name>instance-00000011</name>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  <memory>196608</memory>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <nova:name>tempest-MigrationsAdminTest-server-1825538190</nova:name>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:35:37</nova:creationTime>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.micro">
Jan 31 02:35:38 np0005603609 nova_compute[221550]:        <nova:memory>192</nova:memory>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:        <nova:user uuid="71f887fd92fb486a959e5ca100cb1e10">tempest-MigrationsAdminTest-137263588-project-member</nova:user>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:        <nova:project uuid="7c1ddd67115f4f7bab056dbb2f270ccc">tempest-MigrationsAdminTest-137263588</nova:project>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <entry name="serial">871711de-f993-4592-83a2-a36c4039786d</entry>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <entry name="uuid">871711de-f993-4592-83a2-a36c4039786d</entry>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/871711de-f993-4592-83a2-a36c4039786d_disk">
Jan 31 02:35:38 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:35:38 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/871711de-f993-4592-83a2-a36c4039786d_disk.config">
Jan 31 02:35:38 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:35:38 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/871711de-f993-4592-83a2-a36c4039786d/console.log" append="off"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:35:38 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:35:38 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:35:38 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:35:38 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:35:38 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b596342e4882252efc3805d1f9c4f854e73a52c0fe7d1374e2c4812215c717/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:35:38 np0005603609 podman[229547]: 2026-01-31 07:35:38.883554211 +0000 UTC m=+0.177398804 container init 7bc057a84eed6f6cf36b4e8bf79ed703e5b45883fc64953d5de5015a3cfd115a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 02:35:38 np0005603609 podman[229547]: 2026-01-31 07:35:38.888227305 +0000 UTC m=+0.182071868 container start 7bc057a84eed6f6cf36b4e8bf79ed703e5b45883fc64953d5de5015a3cfd115a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:35:38 np0005603609 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[229562]: [NOTICE]   (229569) : New worker (229571) forked
Jan 31 02:35:38 np0005603609 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[229562]: [NOTICE]   (229569) : Loading success.
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.952 221554 DEBUG nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.953 221554 DEBUG nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:35:38 np0005603609 nova_compute[221550]: 2026-01-31 07:35:38.953 221554 INFO nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Using config drive#033[00m
Jan 31 02:35:39 np0005603609 systemd-machined[190912]: New machine qemu-10-instance-00000011.
Jan 31 02:35:39 np0005603609 systemd[1]: Started Virtual Machine qemu-10-instance-00000011.
Jan 31 02:35:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:39.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.177 221554 DEBUG nova.compute.manager [req-d778a401-6106-40d0-982b-326158fff2ec req-784a8c0e-73f7-4fe4-a277-b81d4c309a64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.179 221554 DEBUG oslo_concurrency.lockutils [req-d778a401-6106-40d0-982b-326158fff2ec req-784a8c0e-73f7-4fe4-a277-b81d4c309a64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.179 221554 DEBUG oslo_concurrency.lockutils [req-d778a401-6106-40d0-982b-326158fff2ec req-784a8c0e-73f7-4fe4-a277-b81d4c309a64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.180 221554 DEBUG oslo_concurrency.lockutils [req-d778a401-6106-40d0-982b-326158fff2ec req-784a8c0e-73f7-4fe4-a277-b81d4c309a64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.180 221554 DEBUG nova.compute.manager [req-d778a401-6106-40d0-982b-326158fff2ec req-784a8c0e-73f7-4fe4-a277-b81d4c309a64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] No waiting events found dispatching network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.181 221554 WARNING nova.compute.manager [req-d778a401-6106-40d0-982b-326158fff2ec req-784a8c0e-73f7-4fe4-a277-b81d4c309a64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received unexpected event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.423 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.512 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844940.5124047, 871711de-f993-4592-83a2-a36c4039786d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.513 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.521 221554 DEBUG nova.compute.manager [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.525 221554 INFO nova.virt.libvirt.driver [-] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance running successfully.#033[00m
Jan 31 02:35:40 np0005603609 virtqemud[221292]: argument unsupported: QEMU guest agent is not configured
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.528 221554 DEBUG nova.virt.libvirt.guest [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.528 221554 DEBUG nova.virt.libvirt.driver [None req-7bae7ee9-4fe0-4ff2-be59-11cfbe936c34 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.550 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.553 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.589 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.589 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844940.5211806, 871711de-f993-4592-83a2-a36c4039786d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.590 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] VM Started (Lifecycle Event)#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.626 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.629 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:35:40 np0005603609 nova_compute[221550]: 2026-01-31 07:35:40.710 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:40.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:41.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:41 np0005603609 nova_compute[221550]: 2026-01-31 07:35:41.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:35:41 np0005603609 nova_compute[221550]: 2026-01-31 07:35:41.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 02:35:41 np0005603609 nova_compute[221550]: 2026-01-31 07:35:41.676 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 02:35:41 np0005603609 nova_compute[221550]: 2026-01-31 07:35:41.676 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:35:41 np0005603609 nova_compute[221550]: 2026-01-31 07:35:41.677 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 02:35:41 np0005603609 nova_compute[221550]: 2026-01-31 07:35:41.757 221554 DEBUG oslo_concurrency.lockutils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:35:41 np0005603609 nova_compute[221550]: 2026-01-31 07:35:41.758 221554 DEBUG oslo_concurrency.lockutils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquired lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:35:41 np0005603609 nova_compute[221550]: 2026-01-31 07:35:41.758 221554 DEBUG nova.network.neutron [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:35:41 np0005603609 nova_compute[221550]: 2026-01-31 07:35:41.963 221554 DEBUG nova.network.neutron [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:35:42 np0005603609 nova_compute[221550]: 2026-01-31 07:35:42.093 221554 DEBUG nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Check if temp file /var/lib/nova/instances/tmp1qs86hm9 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Jan 31 02:35:42 np0005603609 nova_compute[221550]: 2026-01-31 07:35:42.093 221554 DEBUG nova.compute.manager [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp1qs86hm9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b9a76fba-cff2-455a-9aa6-7b839819e78b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Jan 31 02:35:42 np0005603609 nova_compute[221550]: 2026-01-31 07:35:42.291 221554 DEBUG nova.network.neutron [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 871711de-f993-4592-83a2-a36c4039786d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:35:42 np0005603609 nova_compute[221550]: 2026-01-31 07:35:42.305 221554 DEBUG oslo_concurrency.lockutils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Releasing lock "refresh_cache-871711de-f993-4592-83a2-a36c4039786d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:35:42 np0005603609 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000011.scope: Deactivated successfully.
Jan 31 02:35:42 np0005603609 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000011.scope: Consumed 3.241s CPU time.
Jan 31 02:35:42 np0005603609 systemd-machined[190912]: Machine qemu-10-instance-00000011 terminated.
Jan 31 02:35:42 np0005603609 nova_compute[221550]: 2026-01-31 07:35:42.538 221554 INFO nova.virt.libvirt.driver [-] [instance: 871711de-f993-4592-83a2-a36c4039786d] Instance destroyed successfully.#033[00m
Jan 31 02:35:42 np0005603609 nova_compute[221550]: 2026-01-31 07:35:42.539 221554 DEBUG nova.objects.instance [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'resources' on Instance uuid 871711de-f993-4592-83a2-a36c4039786d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:35:42 np0005603609 nova_compute[221550]: 2026-01-31 07:35:42.562 221554 DEBUG oslo_concurrency.lockutils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:42 np0005603609 nova_compute[221550]: 2026-01-31 07:35:42.564 221554 DEBUG oslo_concurrency.lockutils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:42 np0005603609 nova_compute[221550]: 2026-01-31 07:35:42.590 221554 DEBUG nova.objects.instance [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'migration_context' on Instance uuid 871711de-f993-4592-83a2-a36c4039786d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:35:42 np0005603609 nova_compute[221550]: 2026-01-31 07:35:42.660 221554 DEBUG oslo_concurrency.processutils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:42.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:35:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2520892971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:35:43 np0005603609 nova_compute[221550]: 2026-01-31 07:35:43.081 221554 DEBUG oslo_concurrency.processutils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:43 np0005603609 nova_compute[221550]: 2026-01-31 07:35:43.085 221554 DEBUG nova.compute.provider_tree [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:35:43 np0005603609 nova_compute[221550]: 2026-01-31 07:35:43.119 221554 DEBUG nova.scheduler.client.report [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:35:43 np0005603609 nova_compute[221550]: 2026-01-31 07:35:43.197 221554 DEBUG oslo_concurrency.lockutils [None req-b70714b3-3b4f-404f-aa6c-1934d03655a5 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:43.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:44.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e148 e148: 3 total, 3 up, 3 in
Jan 31 02:35:45 np0005603609 nova_compute[221550]: 2026-01-31 07:35:45.425 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:45.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:45 np0005603609 nova_compute[221550]: 2026-01-31 07:35:45.712 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:46 np0005603609 nova_compute[221550]: 2026-01-31 07:35:46.692 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:35:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:46.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:46 np0005603609 nova_compute[221550]: 2026-01-31 07:35:46.872 221554 DEBUG nova.compute.manager [req-9be15450-8b1c-4571-aee3-777fbe97f475 req-3fefdb73-c4db-405a-bfda-bfd556c86b4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-vif-unplugged-596408d2-9689-472f-b2fb-a85a75df2923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:35:46 np0005603609 nova_compute[221550]: 2026-01-31 07:35:46.872 221554 DEBUG oslo_concurrency.lockutils [req-9be15450-8b1c-4571-aee3-777fbe97f475 req-3fefdb73-c4db-405a-bfda-bfd556c86b4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:46 np0005603609 nova_compute[221550]: 2026-01-31 07:35:46.873 221554 DEBUG oslo_concurrency.lockutils [req-9be15450-8b1c-4571-aee3-777fbe97f475 req-3fefdb73-c4db-405a-bfda-bfd556c86b4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:46 np0005603609 nova_compute[221550]: 2026-01-31 07:35:46.873 221554 DEBUG oslo_concurrency.lockutils [req-9be15450-8b1c-4571-aee3-777fbe97f475 req-3fefdb73-c4db-405a-bfda-bfd556c86b4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:46 np0005603609 nova_compute[221550]: 2026-01-31 07:35:46.874 221554 DEBUG nova.compute.manager [req-9be15450-8b1c-4571-aee3-777fbe97f475 req-3fefdb73-c4db-405a-bfda-bfd556c86b4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] No waiting events found dispatching network-vif-unplugged-596408d2-9689-472f-b2fb-a85a75df2923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:35:46 np0005603609 nova_compute[221550]: 2026-01-31 07:35:46.874 221554 DEBUG nova.compute.manager [req-9be15450-8b1c-4571-aee3-777fbe97f475 req-3fefdb73-c4db-405a-bfda-bfd556c86b4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-vif-unplugged-596408d2-9689-472f-b2fb-a85a75df2923 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:35:47 np0005603609 nova_compute[221550]: 2026-01-31 07:35:47.529 221554 INFO nova.compute.manager [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Took 4.42 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.#033[00m
Jan 31 02:35:47 np0005603609 nova_compute[221550]: 2026-01-31 07:35:47.530 221554 DEBUG nova.compute.manager [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:35:47 np0005603609 nova_compute[221550]: 2026-01-31 07:35:47.547 221554 DEBUG nova.compute.manager [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp1qs86hm9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b9a76fba-cff2-455a-9aa6-7b839819e78b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(72833866-0b19-4638-bbd6-0ea0fd606531),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Jan 31 02:35:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:47.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:47 np0005603609 nova_compute[221550]: 2026-01-31 07:35:47.551 221554 DEBUG nova.objects.instance [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lazy-loading 'migration_context' on Instance uuid b9a76fba-cff2-455a-9aa6-7b839819e78b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:35:47 np0005603609 nova_compute[221550]: 2026-01-31 07:35:47.552 221554 DEBUG nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Jan 31 02:35:47 np0005603609 nova_compute[221550]: 2026-01-31 07:35:47.554 221554 DEBUG nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Jan 31 02:35:47 np0005603609 nova_compute[221550]: 2026-01-31 07:35:47.555 221554 DEBUG nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Jan 31 02:35:47 np0005603609 nova_compute[221550]: 2026-01-31 07:35:47.569 221554 DEBUG nova.virt.libvirt.vif [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:35:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-879430066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-879430066',id=19,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:35:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='29d136be5e384689a95acd607131dfd0',ramdisk_id='',reservation_id='r-9uooh3ao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1421195096',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:35:38Z,user_data=None,user_id='ea44c45fe7df4f36b5c722fbfc214f2e',uuid=b9a76fba-cff2-455a-9aa6-7b839819e78b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:35:47 np0005603609 nova_compute[221550]: 2026-01-31 07:35:47.570 221554 DEBUG nova.network.os_vif_util [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converting VIF {"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:35:47 np0005603609 nova_compute[221550]: 2026-01-31 07:35:47.570 221554 DEBUG nova.network.os_vif_util [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:f1:d6,bridge_name='br-int',has_traffic_filtering=True,id=596408d2-9689-472f-b2fb-a85a75df2923,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap596408d2-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:35:47 np0005603609 nova_compute[221550]: 2026-01-31 07:35:47.571 221554 DEBUG nova.virt.libvirt.migration [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Updating guest XML with vif config: <interface type="ethernet">
Jan 31 02:35:47 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:45:f1:d6"/>
Jan 31 02:35:47 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 02:35:47 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:35:47 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 02:35:47 np0005603609 nova_compute[221550]:  <target dev="tap596408d2-96"/>
Jan 31 02:35:47 np0005603609 nova_compute[221550]: </interface>
Jan 31 02:35:47 np0005603609 nova_compute[221550]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Jan 31 02:35:47 np0005603609 nova_compute[221550]: 2026-01-31 07:35:47.572 221554 DEBUG nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Jan 31 02:35:47 np0005603609 nova_compute[221550]: 2026-01-31 07:35:47.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.057 221554 DEBUG nova.virt.libvirt.migration [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.058 221554 INFO nova.virt.libvirt.migration [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.122 221554 INFO nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.557 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844948.556274, b9a76fba-cff2-455a-9aa6-7b839819e78b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.557 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.574 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.577 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.594 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.625 221554 DEBUG nova.virt.libvirt.migration [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.626 221554 DEBUG nova.virt.libvirt.migration [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:35:48 np0005603609 kernel: tap596408d2-96 (unregistering): left promiscuous mode
Jan 31 02:35:48 np0005603609 NetworkManager[49064]: <info>  [1769844948.6818] device (tap596408d2-96): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.682 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:48Z|00073|binding|INFO|Releasing lport 596408d2-9689-472f-b2fb-a85a75df2923 from this chassis (sb_readonly=0)
Jan 31 02:35:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:48Z|00074|binding|INFO|Setting lport 596408d2-9689-472f-b2fb-a85a75df2923 down in Southbound
Jan 31 02:35:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:48Z|00075|binding|INFO|Releasing lport b2e77a7e-2125-46bd-8c49-dd619c7caf36 from this chassis (sb_readonly=0)
Jan 31 02:35:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:48Z|00076|binding|INFO|Setting lport b2e77a7e-2125-46bd-8c49-dd619c7caf36 down in Southbound
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.690 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:48Z|00077|binding|INFO|Removing iface tap596408d2-96 ovn-installed in OVS
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.693 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:48Z|00078|binding|INFO|Releasing lport 632f26c5-40a9-4337-84da-ea4b4bbdf89c from this chassis (sb_readonly=0)
Jan 31 02:35:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:48Z|00079|binding|INFO|Releasing lport a50841ea-6849-4032-a8d6-6ba9e6fd3a95 from this chassis (sb_readonly=0)
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.697 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:5d:23 19.80.0.215'], port_security=['fa:16:3e:cc:5d:23 19.80.0.215'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['596408d2-9689-472f-b2fb-a85a75df2923'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1541952823', 'neutron:cidrs': '19.80.0.215/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-994f8d42-6738-4c92-b80e-8dbb63919128', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1541952823', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=67507551-f2d4-4c0b-ab9d-4c732fbaf469, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b2e77a7e-2125-46bd-8c49-dd619c7caf36) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.699 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:f1:d6 10.100.0.14'], port_security=['fa:16:3e:45:f1:d6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '5c307474-e9ec-4d19-9f52-463eb0ff26d1'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-334582475', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b9a76fba-cff2-455a-9aa6-7b839819e78b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-334582475', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f42f93-f94d-4170-883d-d45cddf5fdad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=596408d2-9689-472f-b2fb-a85a75df2923) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.700 140058 INFO neutron.agent.ovn.metadata.agent [-] Port b2e77a7e-2125-46bd-8c49-dd619c7caf36 in datapath 994f8d42-6738-4c92-b80e-8dbb63919128 unbound from our chassis#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.701 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 994f8d42-6738-4c92-b80e-8dbb63919128, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.701 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.702 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[68c6b2e7-0d34-4251-a55c-68499fdfcd53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.702 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128 namespace which is not needed anymore#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.723 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:48 np0005603609 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000013.scope: Deactivated successfully.
Jan 31 02:35:48 np0005603609 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000013.scope: Consumed 10.865s CPU time.
Jan 31 02:35:48 np0005603609 systemd-machined[190912]: Machine qemu-9-instance-00000013 terminated.
Jan 31 02:35:48 np0005603609 neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128[229405]: [NOTICE]   (229409) : haproxy version is 2.8.14-c23fe91
Jan 31 02:35:48 np0005603609 neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128[229405]: [NOTICE]   (229409) : path to executable is /usr/sbin/haproxy
Jan 31 02:35:48 np0005603609 neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128[229405]: [WARNING]  (229409) : Exiting Master process...
Jan 31 02:35:48 np0005603609 neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128[229405]: [ALERT]    (229409) : Current worker (229411) exited with code 143 (Terminated)
Jan 31 02:35:48 np0005603609 neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128[229405]: [WARNING]  (229409) : All workers exited. Exiting... (0)
Jan 31 02:35:48 np0005603609 systemd[1]: libpod-e7e3401beb5b9d1e83425a96127696d9ae3f35135b2dfd8c1b9c70ca126a2520.scope: Deactivated successfully.
Jan 31 02:35:48 np0005603609 podman[229838]: 2026-01-31 07:35:48.800987989 +0000 UTC m=+0.036499652 container died e7e3401beb5b9d1e83425a96127696d9ae3f35135b2dfd8c1b9c70ca126a2520 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 02:35:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:35:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:48.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:35:48 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7e3401beb5b9d1e83425a96127696d9ae3f35135b2dfd8c1b9c70ca126a2520-userdata-shm.mount: Deactivated successfully.
Jan 31 02:35:48 np0005603609 systemd[1]: var-lib-containers-storage-overlay-1ffda892e4c242ce325daba11f7785a58742fa201232b364435846c70b5da044-merged.mount: Deactivated successfully.
Jan 31 02:35:48 np0005603609 podman[229838]: 2026-01-31 07:35:48.839950591 +0000 UTC m=+0.075462234 container cleanup e7e3401beb5b9d1e83425a96127696d9ae3f35135b2dfd8c1b9c70ca126a2520 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 02:35:48 np0005603609 systemd[1]: libpod-conmon-e7e3401beb5b9d1e83425a96127696d9ae3f35135b2dfd8c1b9c70ca126a2520.scope: Deactivated successfully.
Jan 31 02:35:48 np0005603609 virtqemud[221292]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/b9a76fba-cff2-455a-9aa6-7b839819e78b_disk: No such file or directory
Jan 31 02:35:48 np0005603609 virtqemud[221292]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/b9a76fba-cff2-455a-9aa6-7b839819e78b_disk: No such file or directory
Jan 31 02:35:48 np0005603609 kernel: tap596408d2-96: entered promiscuous mode
Jan 31 02:35:48 np0005603609 kernel: tap596408d2-96 (unregistering): left promiscuous mode
Jan 31 02:35:48 np0005603609 NetworkManager[49064]: <info>  [1769844948.8677] manager: (tap596408d2-96): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.871 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:48Z|00080|binding|INFO|Claiming lport 596408d2-9689-472f-b2fb-a85a75df2923 for this chassis.
Jan 31 02:35:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:48Z|00081|binding|INFO|596408d2-9689-472f-b2fb-a85a75df2923: Claiming fa:16:3e:45:f1:d6 10.100.0.14
Jan 31 02:35:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:48Z|00082|binding|INFO|Claiming lport b2e77a7e-2125-46bd-8c49-dd619c7caf36 for this chassis.
Jan 31 02:35:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:48Z|00083|binding|INFO|b2e77a7e-2125-46bd-8c49-dd619c7caf36: Claiming fa:16:3e:cc:5d:23 19.80.0.215
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.881 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:5d:23 19.80.0.215'], port_security=['fa:16:3e:cc:5d:23 19.80.0.215'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['596408d2-9689-472f-b2fb-a85a75df2923'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1541952823', 'neutron:cidrs': '19.80.0.215/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-994f8d42-6738-4c92-b80e-8dbb63919128', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1541952823', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=67507551-f2d4-4c0b-ab9d-4c732fbaf469, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b2e77a7e-2125-46bd-8c49-dd619c7caf36) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.883 221554 DEBUG nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.883 221554 DEBUG nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.883 221554 DEBUG nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.884 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:f1:d6 10.100.0.14'], port_security=['fa:16:3e:45:f1:d6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '5c307474-e9ec-4d19-9f52-463eb0ff26d1'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-334582475', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b9a76fba-cff2-455a-9aa6-7b839819e78b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-334582475', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f42f93-f94d-4170-883d-d45cddf5fdad, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=596408d2-9689-472f-b2fb-a85a75df2923) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:35:48 np0005603609 podman[229869]: 2026-01-31 07:35:48.90131821 +0000 UTC m=+0.047123052 container remove e7e3401beb5b9d1e83425a96127696d9ae3f35135b2dfd8c1b9c70ca126a2520 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 02:35:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:48Z|00084|binding|INFO|Releasing lport 596408d2-9689-472f-b2fb-a85a75df2923 from this chassis (sb_readonly=0)
Jan 31 02:35:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:35:48Z|00085|binding|INFO|Releasing lport b2e77a7e-2125-46bd-8c49-dd619c7caf36 from this chassis (sb_readonly=0)
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.903 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.905 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5b7f9b70-0d74-4c33-9d41-c484dec7bb48]: (4, ('Sat Jan 31 07:35:48 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128 (e7e3401beb5b9d1e83425a96127696d9ae3f35135b2dfd8c1b9c70ca126a2520)\ne7e3401beb5b9d1e83425a96127696d9ae3f35135b2dfd8c1b9c70ca126a2520\nSat Jan 31 07:35:48 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128 (e7e3401beb5b9d1e83425a96127696d9ae3f35135b2dfd8c1b9c70ca126a2520)\ne7e3401beb5b9d1e83425a96127696d9ae3f35135b2dfd8c1b9c70ca126a2520\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.906 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b3674202-f315-4b19-80d1-c031861bae15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.907 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap994f8d42-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.908 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.909 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.911 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:5d:23 19.80.0.215'], port_security=['fa:16:3e:cc:5d:23 19.80.0.215'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['596408d2-9689-472f-b2fb-a85a75df2923'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1541952823', 'neutron:cidrs': '19.80.0.215/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-994f8d42-6738-4c92-b80e-8dbb63919128', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1541952823', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=67507551-f2d4-4c0b-ab9d-4c732fbaf469, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b2e77a7e-2125-46bd-8c49-dd619c7caf36) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.913 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:f1:d6 10.100.0.14'], port_security=['fa:16:3e:45:f1:d6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '5c307474-e9ec-4d19-9f52-463eb0ff26d1'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-334582475', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b9a76fba-cff2-455a-9aa6-7b839819e78b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-334582475', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f42f93-f94d-4170-883d-d45cddf5fdad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=596408d2-9689-472f-b2fb-a85a75df2923) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:35:48 np0005603609 kernel: tap994f8d42-60: left promiscuous mode
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.921 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.923 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[45dca92c-dba2-4f0d-b2ec-2f6242bbcc3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.939 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c80303-456e-413d-8e63-5a5c5f0f7b09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.940 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[595bb36d-83b3-4bb9-ba7a-bfdc81f21d63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.951 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ea34b431-f1f0-4209-832d-072a587bc2ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517705, 'reachable_time': 35155, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 229897, 'error': None, 'target': 'ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.953 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.954 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[a70d8550-fa83-4e05-91fa-5894c3b06800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.954 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 596408d2-9689-472f-b2fb-a85a75df2923 in datapath e4d1862b-2abc-4d60-bc48-19a5318038f4 unbound from our chassis#033[00m
Jan 31 02:35:48 np0005603609 systemd[1]: run-netns-ovnmeta\x2d994f8d42\x2d6738\x2d4c92\x2db80e\x2d8dbb63919128.mount: Deactivated successfully.
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.956 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4d1862b-2abc-4d60-bc48-19a5318038f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.956 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2ddf3f92-ff9d-44c0-8876-a46f0f476827]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:48.956 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 namespace which is not needed anymore#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.970 221554 DEBUG nova.compute.manager [req-e01bb64d-ff73-4843-93ca-7d2ab22491a8 req-ce52673e-025f-4870-abf4-14316e07706a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.970 221554 DEBUG oslo_concurrency.lockutils [req-e01bb64d-ff73-4843-93ca-7d2ab22491a8 req-ce52673e-025f-4870-abf4-14316e07706a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.970 221554 DEBUG oslo_concurrency.lockutils [req-e01bb64d-ff73-4843-93ca-7d2ab22491a8 req-ce52673e-025f-4870-abf4-14316e07706a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.970 221554 DEBUG oslo_concurrency.lockutils [req-e01bb64d-ff73-4843-93ca-7d2ab22491a8 req-ce52673e-025f-4870-abf4-14316e07706a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.971 221554 DEBUG nova.compute.manager [req-e01bb64d-ff73-4843-93ca-7d2ab22491a8 req-ce52673e-025f-4870-abf4-14316e07706a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] No waiting events found dispatching network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.971 221554 WARNING nova.compute.manager [req-e01bb64d-ff73-4843-93ca-7d2ab22491a8 req-ce52673e-025f-4870-abf4-14316e07706a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received unexpected event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 for instance with vm_state active and task_state migrating.#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.971 221554 DEBUG nova.compute.manager [req-e01bb64d-ff73-4843-93ca-7d2ab22491a8 req-ce52673e-025f-4870-abf4-14316e07706a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-changed-596408d2-9689-472f-b2fb-a85a75df2923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.971 221554 DEBUG nova.compute.manager [req-e01bb64d-ff73-4843-93ca-7d2ab22491a8 req-ce52673e-025f-4870-abf4-14316e07706a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Refreshing instance network info cache due to event network-changed-596408d2-9689-472f-b2fb-a85a75df2923. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.971 221554 DEBUG oslo_concurrency.lockutils [req-e01bb64d-ff73-4843-93ca-7d2ab22491a8 req-ce52673e-025f-4870-abf4-14316e07706a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.971 221554 DEBUG oslo_concurrency.lockutils [req-e01bb64d-ff73-4843-93ca-7d2ab22491a8 req-ce52673e-025f-4870-abf4-14316e07706a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:35:48 np0005603609 nova_compute[221550]: 2026-01-31 07:35:48.972 221554 DEBUG nova.network.neutron [req-e01bb64d-ff73-4843-93ca-7d2ab22491a8 req-ce52673e-025f-4870-abf4-14316e07706a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Refreshing network info cache for port 596408d2-9689-472f-b2fb-a85a75df2923 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:35:49 np0005603609 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[229562]: [NOTICE]   (229569) : haproxy version is 2.8.14-c23fe91
Jan 31 02:35:49 np0005603609 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[229562]: [NOTICE]   (229569) : path to executable is /usr/sbin/haproxy
Jan 31 02:35:49 np0005603609 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[229562]: [WARNING]  (229569) : Exiting Master process...
Jan 31 02:35:49 np0005603609 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[229562]: [ALERT]    (229569) : Current worker (229571) exited with code 143 (Terminated)
Jan 31 02:35:49 np0005603609 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[229562]: [WARNING]  (229569) : All workers exited. Exiting... (0)
Jan 31 02:35:49 np0005603609 systemd[1]: libpod-7bc057a84eed6f6cf36b4e8bf79ed703e5b45883fc64953d5de5015a3cfd115a.scope: Deactivated successfully.
Jan 31 02:35:49 np0005603609 podman[229915]: 2026-01-31 07:35:49.068167564 +0000 UTC m=+0.043650347 container died 7bc057a84eed6f6cf36b4e8bf79ed703e5b45883fc64953d5de5015a3cfd115a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 02:35:49 np0005603609 systemd[1]: var-lib-containers-storage-overlay-a5b596342e4882252efc3805d1f9c4f854e73a52c0fe7d1374e2c4812215c717-merged.mount: Deactivated successfully.
Jan 31 02:35:49 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7bc057a84eed6f6cf36b4e8bf79ed703e5b45883fc64953d5de5015a3cfd115a-userdata-shm.mount: Deactivated successfully.
Jan 31 02:35:49 np0005603609 podman[229915]: 2026-01-31 07:35:49.099398486 +0000 UTC m=+0.074881289 container cleanup 7bc057a84eed6f6cf36b4e8bf79ed703e5b45883fc64953d5de5015a3cfd115a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:35:49 np0005603609 systemd[1]: libpod-conmon-7bc057a84eed6f6cf36b4e8bf79ed703e5b45883fc64953d5de5015a3cfd115a.scope: Deactivated successfully.
Jan 31 02:35:49 np0005603609 nova_compute[221550]: 2026-01-31 07:35:49.127 221554 DEBUG nova.virt.libvirt.guest [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid 'b9a76fba-cff2-455a-9aa6-7b839819e78b' (instance-00000013) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Jan 31 02:35:49 np0005603609 nova_compute[221550]: 2026-01-31 07:35:49.128 221554 INFO nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Migration operation has completed#033[00m
Jan 31 02:35:49 np0005603609 nova_compute[221550]: 2026-01-31 07:35:49.128 221554 INFO nova.compute.manager [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] _post_live_migration() is started..#033[00m
Jan 31 02:35:49 np0005603609 podman[229946]: 2026-01-31 07:35:49.162096788 +0000 UTC m=+0.048361102 container remove 7bc057a84eed6f6cf36b4e8bf79ed703e5b45883fc64953d5de5015a3cfd115a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.166 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[39c07879-7781-492b-8a43-0cd40f8aaf3c]: (4, ('Sat Jan 31 07:35:49 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 (7bc057a84eed6f6cf36b4e8bf79ed703e5b45883fc64953d5de5015a3cfd115a)\n7bc057a84eed6f6cf36b4e8bf79ed703e5b45883fc64953d5de5015a3cfd115a\nSat Jan 31 07:35:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 (7bc057a84eed6f6cf36b4e8bf79ed703e5b45883fc64953d5de5015a3cfd115a)\n7bc057a84eed6f6cf36b4e8bf79ed703e5b45883fc64953d5de5015a3cfd115a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.168 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[321766ea-2741-4c7f-8f50-586f2bd2d223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.169 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d1862b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:35:49 np0005603609 nova_compute[221550]: 2026-01-31 07:35:49.171 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:49 np0005603609 kernel: tape4d1862b-20: left promiscuous mode
Jan 31 02:35:49 np0005603609 nova_compute[221550]: 2026-01-31 07:35:49.183 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.186 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec3cabe-c0cc-45ea-a65d-046960999c7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.200 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7661d77f-a10d-4b8e-89e2-1dea6c591540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.202 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f1cef449-81f0-4962-9f23-2cf8326cab2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.213 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[92169d25-45bb-4cbf-a389-7a4842699e23]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517792, 'reachable_time': 19954, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 229966, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:49 np0005603609 systemd[1]: run-netns-ovnmeta\x2de4d1862b\x2d2abc\x2d4d60\x2dbc48\x2d19a5318038f4.mount: Deactivated successfully.
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.217 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.217 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[5a915a62-af04-4106-acfe-9a85104718a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.218 140058 INFO neutron.agent.ovn.metadata.agent [-] Port b2e77a7e-2125-46bd-8c49-dd619c7caf36 in datapath 994f8d42-6738-4c92-b80e-8dbb63919128 unbound from our chassis#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.219 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 994f8d42-6738-4c92-b80e-8dbb63919128, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.219 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c6dc0f13-662e-4788-bf83-71e3ec977eb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.220 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 596408d2-9689-472f-b2fb-a85a75df2923 in datapath e4d1862b-2abc-4d60-bc48-19a5318038f4 unbound from our chassis#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.221 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4d1862b-2abc-4d60-bc48-19a5318038f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.221 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[61a3a7f9-16cb-4569-9a49-6497e4659119]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.222 140058 INFO neutron.agent.ovn.metadata.agent [-] Port b2e77a7e-2125-46bd-8c49-dd619c7caf36 in datapath 994f8d42-6738-4c92-b80e-8dbb63919128 unbound from our chassis#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.223 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 994f8d42-6738-4c92-b80e-8dbb63919128, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.223 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7034f341-4cc9-4cf6-bc0e-b9950a0d04f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.224 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 596408d2-9689-472f-b2fb-a85a75df2923 in datapath e4d1862b-2abc-4d60-bc48-19a5318038f4 unbound from our chassis#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.225 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4d1862b-2abc-4d60-bc48-19a5318038f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:35:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:35:49.225 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[dae18731-c5fe-47fd-a6aa-14b798f74bec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:35:49 np0005603609 nova_compute[221550]: 2026-01-31 07:35:49.241 221554 DEBUG nova.compute.manager [req-7facdb23-bc27-4962-85bc-36431d29d78e req-43af1cbf-66a3-4f2d-ac7e-ae82f88f2ea5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-vif-unplugged-596408d2-9689-472f-b2fb-a85a75df2923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:35:49 np0005603609 nova_compute[221550]: 2026-01-31 07:35:49.242 221554 DEBUG oslo_concurrency.lockutils [req-7facdb23-bc27-4962-85bc-36431d29d78e req-43af1cbf-66a3-4f2d-ac7e-ae82f88f2ea5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:49 np0005603609 nova_compute[221550]: 2026-01-31 07:35:49.242 221554 DEBUG oslo_concurrency.lockutils [req-7facdb23-bc27-4962-85bc-36431d29d78e req-43af1cbf-66a3-4f2d-ac7e-ae82f88f2ea5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:49 np0005603609 nova_compute[221550]: 2026-01-31 07:35:49.242 221554 DEBUG oslo_concurrency.lockutils [req-7facdb23-bc27-4962-85bc-36431d29d78e req-43af1cbf-66a3-4f2d-ac7e-ae82f88f2ea5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:49 np0005603609 nova_compute[221550]: 2026-01-31 07:35:49.242 221554 DEBUG nova.compute.manager [req-7facdb23-bc27-4962-85bc-36431d29d78e req-43af1cbf-66a3-4f2d-ac7e-ae82f88f2ea5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] No waiting events found dispatching network-vif-unplugged-596408d2-9689-472f-b2fb-a85a75df2923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:35:49 np0005603609 nova_compute[221550]: 2026-01-31 07:35:49.242 221554 DEBUG nova.compute.manager [req-7facdb23-bc27-4962-85bc-36431d29d78e req-43af1cbf-66a3-4f2d-ac7e-ae82f88f2ea5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-vif-unplugged-596408d2-9689-472f-b2fb-a85a75df2923 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:35:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:35:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:35:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:35:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:49.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:35:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.431 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.536 221554 DEBUG nova.network.neutron [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Activated binding for port 596408d2-9689-472f-b2fb-a85a75df2923 and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.537 221554 DEBUG nova.compute.manager [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.537 221554 DEBUG nova.virt.libvirt.vif [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:35:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-879430066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-879430066',id=19,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:35:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='29d136be5e384689a95acd607131dfd0',ramdisk_id='',reservation_id='r-9uooh3ao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1421195096',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:35:41Z,user_data=None,user_id='ea44c45fe7df4f36b5c722fbfc214f2e',uuid=b9a76fba-cff2-455a-9aa6-7b839819e78b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.538 221554 DEBUG nova.network.os_vif_util [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converting VIF {"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.538 221554 DEBUG nova.network.os_vif_util [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:f1:d6,bridge_name='br-int',has_traffic_filtering=True,id=596408d2-9689-472f-b2fb-a85a75df2923,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap596408d2-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.539 221554 DEBUG os_vif [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:f1:d6,bridge_name='br-int',has_traffic_filtering=True,id=596408d2-9689-472f-b2fb-a85a75df2923,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap596408d2-96') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.540 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.541 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap596408d2-96, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.542 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.544 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.546 221554 INFO os_vif [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:f1:d6,bridge_name='br-int',has_traffic_filtering=True,id=596408d2-9689-472f-b2fb-a85a75df2923,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap596408d2-96')#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.546 221554 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.547 221554 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.547 221554 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.547 221554 DEBUG nova.compute.manager [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.547 221554 INFO nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Deleting instance files /var/lib/nova/instances/b9a76fba-cff2-455a-9aa6-7b839819e78b_del#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.548 221554 INFO nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Deletion of /var/lib/nova/instances/b9a76fba-cff2-455a-9aa6-7b839819e78b_del complete#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.680 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.752 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:35:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:50.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.981 221554 DEBUG nova.network.neutron [req-e01bb64d-ff73-4843-93ca-7d2ab22491a8 req-ce52673e-025f-4870-abf4-14316e07706a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Updated VIF entry in instance network info cache for port 596408d2-9689-472f-b2fb-a85a75df2923. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:35:50 np0005603609 nova_compute[221550]: 2026-01-31 07:35:50.982 221554 DEBUG nova.network.neutron [req-e01bb64d-ff73-4843-93ca-7d2ab22491a8 req-ce52673e-025f-4870-abf4-14316e07706a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Updating instance_info_cache with network_info: [{"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.012 221554 DEBUG oslo_concurrency.lockutils [req-e01bb64d-ff73-4843-93ca-7d2ab22491a8 req-ce52673e-025f-4870-abf4-14316e07706a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.012 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.012 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.012 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b9a76fba-cff2-455a-9aa6-7b839819e78b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:35:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.356 221554 DEBUG nova.compute.manager [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.357 221554 DEBUG oslo_concurrency.lockutils [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.357 221554 DEBUG oslo_concurrency.lockutils [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.357 221554 DEBUG oslo_concurrency.lockutils [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.357 221554 DEBUG nova.compute.manager [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] No waiting events found dispatching network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.358 221554 WARNING nova.compute.manager [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received unexpected event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 for instance with vm_state active and task_state migrating.#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.358 221554 DEBUG nova.compute.manager [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.358 221554 DEBUG oslo_concurrency.lockutils [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.358 221554 DEBUG oslo_concurrency.lockutils [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.358 221554 DEBUG oslo_concurrency.lockutils [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.359 221554 DEBUG nova.compute.manager [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] No waiting events found dispatching network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.359 221554 WARNING nova.compute.manager [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received unexpected event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 for instance with vm_state active and task_state migrating.#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.359 221554 DEBUG nova.compute.manager [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-vif-unplugged-596408d2-9689-472f-b2fb-a85a75df2923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.359 221554 DEBUG oslo_concurrency.lockutils [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.359 221554 DEBUG oslo_concurrency.lockutils [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.360 221554 DEBUG oslo_concurrency.lockutils [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.360 221554 DEBUG nova.compute.manager [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] No waiting events found dispatching network-vif-unplugged-596408d2-9689-472f-b2fb-a85a75df2923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.360 221554 DEBUG nova.compute.manager [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-vif-unplugged-596408d2-9689-472f-b2fb-a85a75df2923 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.360 221554 DEBUG nova.compute.manager [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.360 221554 DEBUG oslo_concurrency.lockutils [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.361 221554 DEBUG oslo_concurrency.lockutils [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.361 221554 DEBUG oslo_concurrency.lockutils [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.361 221554 DEBUG nova.compute.manager [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] No waiting events found dispatching network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:35:51 np0005603609 nova_compute[221550]: 2026-01-31 07:35:51.361 221554 WARNING nova.compute.manager [req-a4efc90d-47bf-4543-a6bb-55d9107a585b req-84937c34-8666-4557-83db-55c62cf839c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received unexpected event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 for instance with vm_state active and task_state migrating.#033[00m
Jan 31 02:35:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:35:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:51.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:35:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:52.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:35:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:53.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:35:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e149 e149: 3 total, 3 up, 3 in
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.636 221554 DEBUG nova.compute.manager [req-3306f0f5-5555-43c8-9c0b-c911158c97fa req-46f01320-708f-41ec-8e32-60d72e06b2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.637 221554 DEBUG oslo_concurrency.lockutils [req-3306f0f5-5555-43c8-9c0b-c911158c97fa req-46f01320-708f-41ec-8e32-60d72e06b2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.637 221554 DEBUG oslo_concurrency.lockutils [req-3306f0f5-5555-43c8-9c0b-c911158c97fa req-46f01320-708f-41ec-8e32-60d72e06b2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.637 221554 DEBUG oslo_concurrency.lockutils [req-3306f0f5-5555-43c8-9c0b-c911158c97fa req-46f01320-708f-41ec-8e32-60d72e06b2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.637 221554 DEBUG nova.compute.manager [req-3306f0f5-5555-43c8-9c0b-c911158c97fa req-46f01320-708f-41ec-8e32-60d72e06b2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] No waiting events found dispatching network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.637 221554 WARNING nova.compute.manager [req-3306f0f5-5555-43c8-9c0b-c911158c97fa req-46f01320-708f-41ec-8e32-60d72e06b2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received unexpected event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 for instance with vm_state active and task_state migrating.#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.774 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Updating instance_info_cache with network_info: [{"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.799 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.799 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.800 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.800 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.800 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.817 221554 WARNING nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.817 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Triggering sync for uuid b9a76fba-cff2-455a-9aa6-7b839819e78b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.817 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.818 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.818 221554 INFO nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.818 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.818 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.818 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.819 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.855 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.855 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.855 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.856 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:35:53 np0005603609 nova_compute[221550]: 2026-01-31 07:35:53.856 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:35:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/209638890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:35:54 np0005603609 nova_compute[221550]: 2026-01-31 07:35:54.279 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:54 np0005603609 nova_compute[221550]: 2026-01-31 07:35:54.407 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:35:54 np0005603609 nova_compute[221550]: 2026-01-31 07:35:54.408 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4865MB free_disk=20.823078155517578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:35:54 np0005603609 nova_compute[221550]: 2026-01-31 07:35:54.409 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:54 np0005603609 nova_compute[221550]: 2026-01-31 07:35:54.409 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:54 np0005603609 nova_compute[221550]: 2026-01-31 07:35:54.447 221554 INFO nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Updating resource usage from migration 72833866-0b19-4638-bbd6-0ea0fd606531#033[00m
Jan 31 02:35:54 np0005603609 nova_compute[221550]: 2026-01-31 07:35:54.520 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Migration 72833866-0b19-4638-bbd6-0ea0fd606531 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 02:35:54 np0005603609 nova_compute[221550]: 2026-01-31 07:35:54.521 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:35:54 np0005603609 nova_compute[221550]: 2026-01-31 07:35:54.521 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:35:54 np0005603609 nova_compute[221550]: 2026-01-31 07:35:54.615 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 02:35:54 np0005603609 nova_compute[221550]: 2026-01-31 07:35:54.684 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 02:35:54 np0005603609 nova_compute[221550]: 2026-01-31 07:35:54.684 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 02:35:54 np0005603609 nova_compute[221550]: 2026-01-31 07:35:54.700 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 02:35:54 np0005603609 nova_compute[221550]: 2026-01-31 07:35:54.728 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 02:35:54 np0005603609 nova_compute[221550]: 2026-01-31 07:35:54.763 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:35:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:54.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:35:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:35:55 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/694674579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:35:55 np0005603609 nova_compute[221550]: 2026-01-31 07:35:55.167 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:55 np0005603609 nova_compute[221550]: 2026-01-31 07:35:55.174 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:35:55 np0005603609 nova_compute[221550]: 2026-01-31 07:35:55.206 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:35:55 np0005603609 nova_compute[221550]: 2026-01-31 07:35:55.246 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:35:55 np0005603609 nova_compute[221550]: 2026-01-31 07:35:55.247 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:55 np0005603609 nova_compute[221550]: 2026-01-31 07:35:55.247 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:35:55 np0005603609 nova_compute[221550]: 2026-01-31 07:35:55.544 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:35:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:55.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:35:55 np0005603609 nova_compute[221550]: 2026-01-31 07:35:55.754 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:35:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:35:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:35:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:56 np0005603609 nova_compute[221550]: 2026-01-31 07:35:56.274 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:35:56 np0005603609 nova_compute[221550]: 2026-01-31 07:35:56.274 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:35:56 np0005603609 nova_compute[221550]: 2026-01-31 07:35:56.441 221554 DEBUG oslo_concurrency.lockutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquiring lock "d3d4b995-0e3e-43ff-98d2-3a10297555b3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:56 np0005603609 nova_compute[221550]: 2026-01-31 07:35:56.442 221554 DEBUG oslo_concurrency.lockutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "d3d4b995-0e3e-43ff-98d2-3a10297555b3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:56 np0005603609 nova_compute[221550]: 2026-01-31 07:35:56.463 221554 DEBUG nova.compute.manager [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:35:56 np0005603609 nova_compute[221550]: 2026-01-31 07:35:56.541 221554 DEBUG oslo_concurrency.lockutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:56 np0005603609 nova_compute[221550]: 2026-01-31 07:35:56.542 221554 DEBUG oslo_concurrency.lockutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:56 np0005603609 nova_compute[221550]: 2026-01-31 07:35:56.551 221554 DEBUG nova.virt.hardware [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:35:56 np0005603609 nova_compute[221550]: 2026-01-31 07:35:56.552 221554 INFO nova.compute.claims [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:35:56 np0005603609 nova_compute[221550]: 2026-01-31 07:35:56.670 221554 DEBUG oslo_concurrency.processutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:56.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:35:57 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3645022635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.135 221554 DEBUG oslo_concurrency.processutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.141 221554 DEBUG nova.compute.provider_tree [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.167 221554 DEBUG nova.scheduler.client.report [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.223 221554 DEBUG oslo_concurrency.lockutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.225 221554 DEBUG nova.compute.manager [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.290 221554 DEBUG nova.compute.manager [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.291 221554 DEBUG nova.network.neutron [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.315 221554 INFO nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.335 221554 DEBUG nova.compute.manager [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.423 221554 DEBUG nova.compute.manager [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.425 221554 DEBUG nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.425 221554 INFO nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Creating image(s)#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.452 221554 DEBUG nova.storage.rbd_utils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] rbd image d3d4b995-0e3e-43ff-98d2-3a10297555b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.482 221554 DEBUG nova.storage.rbd_utils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] rbd image d3d4b995-0e3e-43ff-98d2-3a10297555b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.512 221554 DEBUG nova.storage.rbd_utils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] rbd image d3d4b995-0e3e-43ff-98d2-3a10297555b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.516 221554 DEBUG oslo_concurrency.processutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.537 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844942.5364144, 871711de-f993-4592-83a2-a36c4039786d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.537 221554 INFO nova.compute.manager [-] [instance: 871711de-f993-4592-83a2-a36c4039786d] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.560 221554 DEBUG oslo_concurrency.processutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.561 221554 DEBUG oslo_concurrency.lockutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.562 221554 DEBUG oslo_concurrency.lockutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.562 221554 DEBUG oslo_concurrency.lockutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:57.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.589 221554 DEBUG nova.storage.rbd_utils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] rbd image d3d4b995-0e3e-43ff-98d2-3a10297555b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.593 221554 DEBUG oslo_concurrency.processutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 d3d4b995-0e3e-43ff-98d2-3a10297555b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.603 221554 DEBUG nova.compute.manager [None req-da713303-2c31-49b2-9981-6a8ffaa48df6 - - - - - -] [instance: 871711de-f993-4592-83a2-a36c4039786d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.717 221554 DEBUG nova.network.neutron [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.718 221554 DEBUG nova.compute.manager [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.905 221554 DEBUG oslo_concurrency.processutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 d3d4b995-0e3e-43ff-98d2-3a10297555b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:35:57 np0005603609 nova_compute[221550]: 2026-01-31 07:35:57.998 221554 DEBUG nova.storage.rbd_utils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] resizing rbd image d3d4b995-0e3e-43ff-98d2-3a10297555b3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.044 221554 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.045 221554 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.045 221554 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.067 221554 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.068 221554 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.068 221554 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.068 221554 DEBUG nova.compute.resource_tracker [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.069 221554 DEBUG oslo_concurrency.processutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.142 221554 DEBUG nova.objects.instance [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lazy-loading 'migration_context' on Instance uuid d3d4b995-0e3e-43ff-98d2-3a10297555b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.164 221554 DEBUG nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.164 221554 DEBUG nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Ensure instance console log exists: /var/lib/nova/instances/d3d4b995-0e3e-43ff-98d2-3a10297555b3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.165 221554 DEBUG oslo_concurrency.lockutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.165 221554 DEBUG oslo_concurrency.lockutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.166 221554 DEBUG oslo_concurrency.lockutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.169 221554 DEBUG nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.174 221554 WARNING nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.179 221554 DEBUG nova.virt.libvirt.host [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.180 221554 DEBUG nova.virt.libvirt.host [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.185 221554 DEBUG nova.virt.libvirt.host [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.186 221554 DEBUG nova.virt.libvirt.host [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.187 221554 DEBUG nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.188 221554 DEBUG nova.virt.hardware [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.189 221554 DEBUG nova.virt.hardware [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.189 221554 DEBUG nova.virt.hardware [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.189 221554 DEBUG nova.virt.hardware [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.190 221554 DEBUG nova.virt.hardware [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.190 221554 DEBUG nova.virt.hardware [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.191 221554 DEBUG nova.virt.hardware [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.191 221554 DEBUG nova.virt.hardware [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.191 221554 DEBUG nova.virt.hardware [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.192 221554 DEBUG nova.virt.hardware [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.192 221554 DEBUG nova.virt.hardware [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.197 221554 DEBUG oslo_concurrency.processutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:35:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:35:58 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1301746315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.515 221554 DEBUG oslo_concurrency.processutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:35:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:35:58 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3732196518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.688 221554 DEBUG oslo_concurrency.processutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.723 221554 DEBUG nova.storage.rbd_utils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] rbd image d3d4b995-0e3e-43ff-98d2-3a10297555b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.729 221554 DEBUG oslo_concurrency.processutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.774 221554 WARNING nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.775 221554 DEBUG nova.compute.resource_tracker [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4841MB free_disk=20.806346893310547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.776 221554 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.776 221554 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:35:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:35:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:58.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.868 221554 DEBUG nova.compute.resource_tracker [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Migration for instance b9a76fba-cff2-455a-9aa6-7b839819e78b refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.899 221554 DEBUG nova.compute.resource_tracker [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.918 221554 DEBUG nova.compute.resource_tracker [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Migration 72833866-0b19-4638-bbd6-0ea0fd606531 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.918 221554 DEBUG nova.compute.resource_tracker [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Instance d3d4b995-0e3e-43ff-98d2-3a10297555b3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.919 221554 DEBUG nova.compute.resource_tracker [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.919 221554 DEBUG nova.compute.resource_tracker [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 02:35:58 np0005603609 nova_compute[221550]: 2026-01-31 07:35:58.980 221554 DEBUG oslo_concurrency.processutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1681356475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.150 221554 DEBUG oslo_concurrency.processutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.152 221554 DEBUG nova.objects.instance [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid d3d4b995-0e3e-43ff-98d2-3a10297555b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.201 221554 DEBUG nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  <uuid>d3d4b995-0e3e-43ff-98d2-3a10297555b3</uuid>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  <name>instance-00000017</name>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-621964122</nova:name>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:35:58</nova:creationTime>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:35:59 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:        <nova:user uuid="415bed69690f43f1930afb0d3fb21fb9">tempest-ServersAdminNegativeTestJSON-1416688807-project-member</nova:user>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:        <nova:project uuid="22109e5bc4ad462b83d70290539ac0c3">tempest-ServersAdminNegativeTestJSON-1416688807</nova:project>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <entry name="serial">d3d4b995-0e3e-43ff-98d2-3a10297555b3</entry>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <entry name="uuid">d3d4b995-0e3e-43ff-98d2-3a10297555b3</entry>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/d3d4b995-0e3e-43ff-98d2-3a10297555b3_disk">
Jan 31 02:35:59 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:35:59 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/d3d4b995-0e3e-43ff-98d2-3a10297555b3_disk.config">
Jan 31 02:35:59 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:35:59 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/d3d4b995-0e3e-43ff-98d2-3a10297555b3/console.log" append="off"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:35:59 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:35:59 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:35:59 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:35:59 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.297 221554 DEBUG nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.298 221554 DEBUG nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.299 221554 INFO nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Using config drive
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.341 221554 DEBUG nova.storage.rbd_utils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] rbd image d3d4b995-0e3e-43ff-98d2-3a10297555b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1061276305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.460 221554 DEBUG oslo_concurrency.processutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.465 221554 DEBUG nova.compute.provider_tree [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.485 221554 DEBUG nova.scheduler.client.report [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.519 221554 DEBUG nova.compute.resource_tracker [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.520 221554 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.524 221554 INFO nova.compute.manager [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Migrating instance to compute-0.ctlplane.example.com finished successfully.
Jan 31 02:35:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:35:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:59.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.663 221554 INFO nova.scheduler.client.report [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Deleted allocation for migration 72833866-0b19-4638-bbd6-0ea0fd606531
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.664 221554 DEBUG nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.692 221554 INFO nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Creating config drive at /var/lib/nova/instances/d3d4b995-0e3e-43ff-98d2-3a10297555b3/disk.config
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.696 221554 DEBUG oslo_concurrency.processutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d3d4b995-0e3e-43ff-98d2-3a10297555b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpu4xeodqp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.815 221554 DEBUG oslo_concurrency.processutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d3d4b995-0e3e-43ff-98d2-3a10297555b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpu4xeodqp" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.843 221554 DEBUG nova.storage.rbd_utils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] rbd image d3d4b995-0e3e-43ff-98d2-3a10297555b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:35:59 np0005603609 nova_compute[221550]: 2026-01-31 07:35:59.846 221554 DEBUG oslo_concurrency.processutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d3d4b995-0e3e-43ff-98d2-3a10297555b3/disk.config d3d4b995-0e3e-43ff-98d2-3a10297555b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:35:59.894719) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844959894763, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2361, "num_deletes": 254, "total_data_size": 5322160, "memory_usage": 5404336, "flush_reason": "Manual Compaction"}
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844959928687, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 3479806, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23104, "largest_seqno": 25460, "table_properties": {"data_size": 3470339, "index_size": 5833, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21135, "raw_average_key_size": 20, "raw_value_size": 3450801, "raw_average_value_size": 3393, "num_data_blocks": 257, "num_entries": 1017, "num_filter_entries": 1017, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844774, "oldest_key_time": 1769844774, "file_creation_time": 1769844959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 34010 microseconds, and 5989 cpu microseconds.
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:35:59.928733) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 3479806 bytes OK
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:35:59.928750) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:35:59.930850) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:35:59.930864) EVENT_LOG_v1 {"time_micros": 1769844959930859, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:35:59.930886) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 5311403, prev total WAL file size 5311403, number of live WAL files 2.
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:35:59.931657) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(3398KB)], [48(7354KB)]
Jan 31 02:35:59 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844959931681, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 11010973, "oldest_snapshot_seqno": -1}
Jan 31 02:36:00 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 4974 keys, 8996334 bytes, temperature: kUnknown
Jan 31 02:36:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844960027800, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 8996334, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8962267, "index_size": 20517, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 125775, "raw_average_key_size": 25, "raw_value_size": 8871645, "raw_average_value_size": 1783, "num_data_blocks": 837, "num_entries": 4974, "num_filter_entries": 4974, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769844959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:36:00 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:36:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:36:00.028104) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 8996334 bytes
Jan 31 02:36:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:36:00.030721) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.4 rd, 93.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.2 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 5499, records dropped: 525 output_compression: NoCompression
Jan 31 02:36:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:36:00.030742) EVENT_LOG_v1 {"time_micros": 1769844960030732, "job": 28, "event": "compaction_finished", "compaction_time_micros": 96234, "compaction_time_cpu_micros": 13686, "output_level": 6, "num_output_files": 1, "total_output_size": 8996334, "num_input_records": 5499, "num_output_records": 4974, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:36:00 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:36:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844960031232, "job": 28, "event": "table_file_deletion", "file_number": 50}
Jan 31 02:36:00 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:36:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844960032080, "job": 28, "event": "table_file_deletion", "file_number": 48}
Jan 31 02:36:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:35:59.931575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:36:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:36:00.032122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:36:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:36:00.032127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:36:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:36:00.032129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:36:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:36:00.032131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:36:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:36:00.032133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:36:00 np0005603609 nova_compute[221550]: 2026-01-31 07:36:00.072 221554 DEBUG oslo_concurrency.processutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d3d4b995-0e3e-43ff-98d2-3a10297555b3/disk.config d3d4b995-0e3e-43ff-98d2-3a10297555b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:36:00 np0005603609 nova_compute[221550]: 2026-01-31 07:36:00.073 221554 INFO nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Deleting local config drive /var/lib/nova/instances/d3d4b995-0e3e-43ff-98d2-3a10297555b3/disk.config because it was imported into RBD.
Jan 31 02:36:00 np0005603609 systemd-machined[190912]: New machine qemu-11-instance-00000017.
Jan 31 02:36:00 np0005603609 systemd[1]: Started Virtual Machine qemu-11-instance-00000017.
Jan 31 02:36:00 np0005603609 nova_compute[221550]: 2026-01-31 07:36:00.546 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:36:00 np0005603609 nova_compute[221550]: 2026-01-31 07:36:00.757 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:36:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:00.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:00 np0005603609 nova_compute[221550]: 2026-01-31 07:36:00.873 221554 DEBUG nova.compute.manager [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.003 221554 DEBUG oslo_concurrency.lockutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.004 221554 DEBUG oslo_concurrency.lockutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.036 221554 DEBUG nova.objects.instance [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.054 221554 DEBUG nova.virt.hardware [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.055 221554 INFO nova.compute.claims [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Claim successful on node compute-1.ctlplane.example.com
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.055 221554 DEBUG nova.objects.instance [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lazy-loading 'resources' on Instance uuid 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.069 221554 DEBUG nova.objects.instance [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.084 221554 DEBUG nova.objects.instance [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.129 221554 INFO nova.compute.resource_tracker [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Updating resource usage from migration ea0c264e-0692-46a6-bf07-d98d6a516761
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.129 221554 DEBUG nova.compute.resource_tracker [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Starting to track incoming migration ea0c264e-0692-46a6-bf07-d98d6a516761 with flavor fea01737-128b-41fa-a695-aaaa6e96e4b2 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.176 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844961.1757689, d3d4b995-0e3e-43ff-98d2-3a10297555b3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.176 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] VM Resumed (Lifecycle Event)
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.178 221554 DEBUG nova.compute.manager [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.178 221554 DEBUG nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.185 221554 INFO nova.virt.libvirt.driver [-] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Instance spawned successfully.
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.186 221554 DEBUG nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.197 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.199 221554 DEBUG oslo_concurrency.processutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.217 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.219 221554 DEBUG nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.220 221554 DEBUG nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.221 221554 DEBUG nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.221 221554 DEBUG nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.221 221554 DEBUG nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.222 221554 DEBUG nova.virt.libvirt.driver [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:36:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.264 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.265 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844961.178043, d3d4b995-0e3e-43ff-98d2-3a10297555b3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.265 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] VM Started (Lifecycle Event)
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.336 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.342 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.366 221554 INFO nova.compute.manager [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Took 3.94 seconds to spawn the instance on the hypervisor.
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.367 221554 DEBUG nova.compute.manager [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.441 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.539 221554 INFO nova.compute.manager [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Took 5.03 seconds to build instance.
Jan 31 02:36:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:36:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:01.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.568 221554 DEBUG oslo_concurrency.lockutils [None req-47697650-0322-4b60-ad95-56b548473c1d 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "d3d4b995-0e3e-43ff-98d2-3a10297555b3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:36:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:36:01 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/200743085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.620 221554 DEBUG oslo_concurrency.processutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.626 221554 DEBUG nova.compute.provider_tree [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.649 221554 DEBUG nova.scheduler.client.report [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.673 221554 DEBUG oslo_concurrency.lockutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:36:01 np0005603609 nova_compute[221550]: 2026-01-31 07:36:01.673 221554 INFO nova.compute.manager [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Migrating
Jan 31 02:36:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:02.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:03 np0005603609 systemd-logind[823]: New session 54 of user nova.
Jan 31 02:36:03 np0005603609 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 02:36:03 np0005603609 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 02:36:03 np0005603609 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 02:36:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:36:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:03.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:36:03 np0005603609 systemd[1]: Starting User Manager for UID 42436...
Jan 31 02:36:03 np0005603609 systemd[230499]: Queued start job for default target Main User Target.
Jan 31 02:36:03 np0005603609 systemd[230499]: Created slice User Application Slice.
Jan 31 02:36:03 np0005603609 systemd[230499]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 02:36:03 np0005603609 systemd[230499]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 02:36:03 np0005603609 systemd[230499]: Reached target Paths.
Jan 31 02:36:03 np0005603609 systemd[230499]: Reached target Timers.
Jan 31 02:36:03 np0005603609 systemd[230499]: Starting D-Bus User Message Bus Socket...
Jan 31 02:36:03 np0005603609 systemd[230499]: Starting Create User's Volatile Files and Directories...
Jan 31 02:36:03 np0005603609 systemd[230499]: Finished Create User's Volatile Files and Directories.
Jan 31 02:36:03 np0005603609 systemd[230499]: Listening on D-Bus User Message Bus Socket.
Jan 31 02:36:03 np0005603609 systemd[230499]: Reached target Sockets.
Jan 31 02:36:03 np0005603609 systemd[230499]: Reached target Basic System.
Jan 31 02:36:03 np0005603609 systemd[230499]: Reached target Main User Target.
Jan 31 02:36:03 np0005603609 systemd[230499]: Startup finished in 120ms.
Jan 31 02:36:03 np0005603609 systemd[1]: Started User Manager for UID 42436.
Jan 31 02:36:03 np0005603609 systemd[1]: Started Session 54 of User nova.
Jan 31 02:36:03 np0005603609 systemd-logind[823]: Session 54 logged out. Waiting for processes to exit.
Jan 31 02:36:03 np0005603609 systemd[1]: session-54.scope: Deactivated successfully.
Jan 31 02:36:03 np0005603609 systemd-logind[823]: Removed session 54.
Jan 31 02:36:03 np0005603609 nova_compute[221550]: 2026-01-31 07:36:03.881 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844948.879787, b9a76fba-cff2-455a-9aa6-7b839819e78b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:36:03 np0005603609 nova_compute[221550]: 2026-01-31 07:36:03.882 221554 INFO nova.compute.manager [-] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] VM Stopped (Lifecycle Event)
Jan 31 02:36:03 np0005603609 nova_compute[221550]: 2026-01-31 07:36:03.902 221554 DEBUG nova.compute.manager [None req-7959de98-9ed3-4493-b230-743c8d63af7e - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:36:03 np0005603609 systemd-logind[823]: New session 56 of user nova.
Jan 31 02:36:03 np0005603609 systemd[1]: Started Session 56 of User nova.
Jan 31 02:36:03 np0005603609 systemd[1]: session-56.scope: Deactivated successfully.
Jan 31 02:36:03 np0005603609 systemd-logind[823]: Session 56 logged out. Waiting for processes to exit.
Jan 31 02:36:04 np0005603609 systemd-logind[823]: Removed session 56.
Jan 31 02:36:04 np0005603609 nova_compute[221550]: 2026-01-31 07:36:04.309 221554 DEBUG oslo_concurrency.lockutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquiring lock "d033aef8-a3ed-4466-a952-762e3953a1ca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:36:04 np0005603609 nova_compute[221550]: 2026-01-31 07:36:04.309 221554 DEBUG oslo_concurrency.lockutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "d033aef8-a3ed-4466-a952-762e3953a1ca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:36:04 np0005603609 nova_compute[221550]: 2026-01-31 07:36:04.323 221554 DEBUG nova.compute.manager [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 02:36:04 np0005603609 nova_compute[221550]: 2026-01-31 07:36:04.378 221554 DEBUG oslo_concurrency.lockutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:36:04 np0005603609 nova_compute[221550]: 2026-01-31 07:36:04.378 221554 DEBUG oslo_concurrency.lockutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:36:04 np0005603609 nova_compute[221550]: 2026-01-31 07:36:04.386 221554 DEBUG nova.virt.hardware [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 02:36:04 np0005603609 nova_compute[221550]: 2026-01-31 07:36:04.386 221554 INFO nova.compute.claims [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Claim successful on node compute-1.ctlplane.example.com
Jan 31 02:36:04 np0005603609 nova_compute[221550]: 2026-01-31 07:36:04.528 221554 DEBUG oslo_concurrency.processutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:36:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:36:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:04.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:36:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:36:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1924817307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:36:04 np0005603609 nova_compute[221550]: 2026-01-31 07:36:04.981 221554 DEBUG oslo_concurrency.processutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:36:04 np0005603609 nova_compute[221550]: 2026-01-31 07:36:04.987 221554 DEBUG nova.compute.provider_tree [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.009 221554 DEBUG nova.scheduler.client.report [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.213 221554 DEBUG oslo_concurrency.lockutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.214 221554 DEBUG nova.compute.manager [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.290 221554 DEBUG nova.compute.manager [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.291 221554 DEBUG nova.network.neutron [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.340 221554 INFO nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.387 221554 DEBUG nova.compute.manager [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.549 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.571 221554 DEBUG nova.compute.manager [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 02:36:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:05.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.573 221554 DEBUG nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.573 221554 INFO nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Creating image(s)
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.603 221554 DEBUG nova.storage.rbd_utils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] rbd image d033aef8-a3ed-4466-a952-762e3953a1ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.631 221554 DEBUG nova.storage.rbd_utils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] rbd image d033aef8-a3ed-4466-a952-762e3953a1ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.658 221554 DEBUG nova.storage.rbd_utils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] rbd image d033aef8-a3ed-4466-a952-762e3953a1ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.662 221554 DEBUG oslo_concurrency.processutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.736 221554 DEBUG oslo_concurrency.processutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.736 221554 DEBUG oslo_concurrency.lockutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.737 221554 DEBUG oslo_concurrency.lockutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.737 221554 DEBUG oslo_concurrency.lockutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.811 221554 DEBUG nova.storage.rbd_utils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] rbd image d033aef8-a3ed-4466-a952-762e3953a1ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.818 221554 DEBUG oslo_concurrency.processutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 d033aef8-a3ed-4466-a952-762e3953a1ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:36:05 np0005603609 nova_compute[221550]: 2026-01-31 07:36:05.833 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.059 221554 DEBUG nova.network.neutron [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.059 221554 DEBUG nova.compute.manager [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.112 221554 DEBUG oslo_concurrency.processutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 d033aef8-a3ed-4466-a952-762e3953a1ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.218 221554 DEBUG nova.storage.rbd_utils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] resizing rbd image d033aef8-a3ed-4466-a952-762e3953a1ca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 02:36:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.468 221554 DEBUG nova.objects.instance [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lazy-loading 'migration_context' on Instance uuid d033aef8-a3ed-4466-a952-762e3953a1ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.490 221554 DEBUG nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.491 221554 DEBUG nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Ensure instance console log exists: /var/lib/nova/instances/d033aef8-a3ed-4466-a952-762e3953a1ca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.491 221554 DEBUG oslo_concurrency.lockutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.491 221554 DEBUG oslo_concurrency.lockutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.491 221554 DEBUG oslo_concurrency.lockutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.493 221554 DEBUG nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.496 221554 WARNING nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.500 221554 DEBUG nova.virt.libvirt.host [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.500 221554 DEBUG nova.virt.libvirt.host [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.504 221554 DEBUG nova.virt.libvirt.host [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.504 221554 DEBUG nova.virt.libvirt.host [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.505 221554 DEBUG nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.505 221554 DEBUG nova.virt.hardware [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.506 221554 DEBUG nova.virt.hardware [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.506 221554 DEBUG nova.virt.hardware [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.506 221554 DEBUG nova.virt.hardware [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.506 221554 DEBUG nova.virt.hardware [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.507 221554 DEBUG nova.virt.hardware [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.507 221554 DEBUG nova.virt.hardware [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.507 221554 DEBUG nova.virt.hardware [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.507 221554 DEBUG nova.virt.hardware [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.507 221554 DEBUG nova.virt.hardware [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.507 221554 DEBUG nova.virt.hardware [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.510 221554 DEBUG oslo_concurrency.processutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:06.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:36:06 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2434256068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.930 221554 DEBUG oslo_concurrency.processutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.953 221554 DEBUG nova.storage.rbd_utils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] rbd image d033aef8-a3ed-4466-a952-762e3953a1ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:36:06 np0005603609 nova_compute[221550]: 2026-01-31 07:36:06.958 221554 DEBUG oslo_concurrency.processutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:36:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2234691212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:36:07 np0005603609 nova_compute[221550]: 2026-01-31 07:36:07.360 221554 DEBUG oslo_concurrency.processutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:07 np0005603609 nova_compute[221550]: 2026-01-31 07:36:07.362 221554 DEBUG nova.objects.instance [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid d033aef8-a3ed-4466-a952-762e3953a1ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:36:07 np0005603609 nova_compute[221550]: 2026-01-31 07:36:07.378 221554 DEBUG nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  <uuid>d033aef8-a3ed-4466-a952-762e3953a1ca</uuid>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  <name>instance-00000019</name>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-1070291598</nova:name>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:36:06</nova:creationTime>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:36:07 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:        <nova:user uuid="415bed69690f43f1930afb0d3fb21fb9">tempest-ServersAdminNegativeTestJSON-1416688807-project-member</nova:user>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:        <nova:project uuid="22109e5bc4ad462b83d70290539ac0c3">tempest-ServersAdminNegativeTestJSON-1416688807</nova:project>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <entry name="serial">d033aef8-a3ed-4466-a952-762e3953a1ca</entry>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <entry name="uuid">d033aef8-a3ed-4466-a952-762e3953a1ca</entry>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/d033aef8-a3ed-4466-a952-762e3953a1ca_disk">
Jan 31 02:36:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:36:07 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/d033aef8-a3ed-4466-a952-762e3953a1ca_disk.config">
Jan 31 02:36:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:36:07 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/d033aef8-a3ed-4466-a952-762e3953a1ca/console.log" append="off"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:36:07 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:36:07 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:36:07 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:36:07 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:36:07 np0005603609 nova_compute[221550]: 2026-01-31 07:36:07.436 221554 DEBUG nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:36:07 np0005603609 nova_compute[221550]: 2026-01-31 07:36:07.437 221554 DEBUG nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:36:07 np0005603609 nova_compute[221550]: 2026-01-31 07:36:07.438 221554 INFO nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Using config drive#033[00m
Jan 31 02:36:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:07.468 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:07.468 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:07.468 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:07 np0005603609 nova_compute[221550]: 2026-01-31 07:36:07.474 221554 DEBUG nova.storage.rbd_utils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] rbd image d033aef8-a3ed-4466-a952-762e3953a1ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:36:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:07.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:07 np0005603609 nova_compute[221550]: 2026-01-31 07:36:07.643 221554 INFO nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Creating config drive at /var/lib/nova/instances/d033aef8-a3ed-4466-a952-762e3953a1ca/disk.config#033[00m
Jan 31 02:36:07 np0005603609 nova_compute[221550]: 2026-01-31 07:36:07.652 221554 DEBUG oslo_concurrency.processutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d033aef8-a3ed-4466-a952-762e3953a1ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbw1bsjim execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:07 np0005603609 nova_compute[221550]: 2026-01-31 07:36:07.777 221554 DEBUG oslo_concurrency.processutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d033aef8-a3ed-4466-a952-762e3953a1ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbw1bsjim" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:07 np0005603609 nova_compute[221550]: 2026-01-31 07:36:07.808 221554 DEBUG nova.storage.rbd_utils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] rbd image d033aef8-a3ed-4466-a952-762e3953a1ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:36:07 np0005603609 nova_compute[221550]: 2026-01-31 07:36:07.812 221554 DEBUG oslo_concurrency.processutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d033aef8-a3ed-4466-a952-762e3953a1ca/disk.config d033aef8-a3ed-4466-a952-762e3953a1ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:08 np0005603609 nova_compute[221550]: 2026-01-31 07:36:08.197 221554 DEBUG oslo_concurrency.processutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d033aef8-a3ed-4466-a952-762e3953a1ca/disk.config d033aef8-a3ed-4466-a952-762e3953a1ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:08 np0005603609 nova_compute[221550]: 2026-01-31 07:36:08.198 221554 INFO nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Deleting local config drive /var/lib/nova/instances/d033aef8-a3ed-4466-a952-762e3953a1ca/disk.config because it was imported into RBD.#033[00m
Jan 31 02:36:08 np0005603609 systemd-machined[190912]: New machine qemu-12-instance-00000019.
Jan 31 02:36:08 np0005603609 systemd[1]: Started Virtual Machine qemu-12-instance-00000019.
Jan 31 02:36:08 np0005603609 podman[230839]: 2026-01-31 07:36:08.39443059 +0000 UTC m=+0.079895572 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 02:36:08 np0005603609 podman[230864]: 2026-01-31 07:36:08.488675901 +0000 UTC m=+0.077134705 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 02:36:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:08.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:08 np0005603609 nova_compute[221550]: 2026-01-31 07:36:08.936 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844968.9360583, d033aef8-a3ed-4466-a952-762e3953a1ca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:36:08 np0005603609 nova_compute[221550]: 2026-01-31 07:36:08.937 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:36:08 np0005603609 nova_compute[221550]: 2026-01-31 07:36:08.940 221554 DEBUG nova.compute.manager [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:36:08 np0005603609 nova_compute[221550]: 2026-01-31 07:36:08.940 221554 DEBUG nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:36:08 np0005603609 nova_compute[221550]: 2026-01-31 07:36:08.943 221554 INFO nova.virt.libvirt.driver [-] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Instance spawned successfully.#033[00m
Jan 31 02:36:08 np0005603609 nova_compute[221550]: 2026-01-31 07:36:08.944 221554 DEBUG nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:36:08 np0005603609 nova_compute[221550]: 2026-01-31 07:36:08.962 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:36:08 np0005603609 nova_compute[221550]: 2026-01-31 07:36:08.968 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:36:08 np0005603609 nova_compute[221550]: 2026-01-31 07:36:08.970 221554 DEBUG nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:36:08 np0005603609 nova_compute[221550]: 2026-01-31 07:36:08.971 221554 DEBUG nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:36:08 np0005603609 nova_compute[221550]: 2026-01-31 07:36:08.971 221554 DEBUG nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:36:08 np0005603609 nova_compute[221550]: 2026-01-31 07:36:08.971 221554 DEBUG nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:36:08 np0005603609 nova_compute[221550]: 2026-01-31 07:36:08.972 221554 DEBUG nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:36:08 np0005603609 nova_compute[221550]: 2026-01-31 07:36:08.972 221554 DEBUG nova.virt.libvirt.driver [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:36:09 np0005603609 nova_compute[221550]: 2026-01-31 07:36:09.017 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:36:09 np0005603609 nova_compute[221550]: 2026-01-31 07:36:09.017 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844968.9373784, d033aef8-a3ed-4466-a952-762e3953a1ca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:36:09 np0005603609 nova_compute[221550]: 2026-01-31 07:36:09.017 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] VM Started (Lifecycle Event)#033[00m
Jan 31 02:36:09 np0005603609 nova_compute[221550]: 2026-01-31 07:36:09.049 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:36:09 np0005603609 nova_compute[221550]: 2026-01-31 07:36:09.052 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:36:09 np0005603609 nova_compute[221550]: 2026-01-31 07:36:09.062 221554 INFO nova.compute.manager [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Took 3.49 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:36:09 np0005603609 nova_compute[221550]: 2026-01-31 07:36:09.063 221554 DEBUG nova.compute.manager [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:36:09 np0005603609 nova_compute[221550]: 2026-01-31 07:36:09.090 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:36:09 np0005603609 nova_compute[221550]: 2026-01-31 07:36:09.121 221554 INFO nova.compute.manager [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Took 4.76 seconds to build instance.#033[00m
Jan 31 02:36:09 np0005603609 nova_compute[221550]: 2026-01-31 07:36:09.141 221554 DEBUG oslo_concurrency.lockutils [None req-ea4425e1-2e2b-435b-85e6-3687048ef349 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "d033aef8-a3ed-4466-a952-762e3953a1ca" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:36:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:09.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:36:10 np0005603609 nova_compute[221550]: 2026-01-31 07:36:10.014 221554 DEBUG nova.objects.instance [None req-effc66e4-b4c0-4abc-8252-a2b2a6e1d704 87ae7ed9760e42009f7df25d95ef7f8d 7c80e45c09404149ada4f08aefed4e9c - - default default] Lazy-loading 'pci_devices' on Instance uuid d033aef8-a3ed-4466-a952-762e3953a1ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:36:10 np0005603609 nova_compute[221550]: 2026-01-31 07:36:10.033 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844970.0331473, d033aef8-a3ed-4466-a952-762e3953a1ca => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:36:10 np0005603609 nova_compute[221550]: 2026-01-31 07:36:10.033 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:36:10 np0005603609 nova_compute[221550]: 2026-01-31 07:36:10.053 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:36:10 np0005603609 nova_compute[221550]: 2026-01-31 07:36:10.056 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:36:10 np0005603609 nova_compute[221550]: 2026-01-31 07:36:10.073 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 31 02:36:10 np0005603609 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000019.scope: Deactivated successfully.
Jan 31 02:36:10 np0005603609 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000019.scope: Consumed 1.771s CPU time.
Jan 31 02:36:10 np0005603609 systemd-machined[190912]: Machine qemu-12-instance-00000019 terminated.
Jan 31 02:36:10 np0005603609 nova_compute[221550]: 2026-01-31 07:36:10.340 221554 DEBUG nova.compute.manager [None req-effc66e4-b4c0-4abc-8252-a2b2a6e1d704 87ae7ed9760e42009f7df25d95ef7f8d 7c80e45c09404149ada4f08aefed4e9c - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:36:10 np0005603609 nova_compute[221550]: 2026-01-31 07:36:10.552 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:10 np0005603609 nova_compute[221550]: 2026-01-31 07:36:10.815 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:10.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:36:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:11.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:36:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:12.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:36:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:13.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:36:14 np0005603609 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 02:36:14 np0005603609 systemd[230499]: Activating special unit Exit the Session...
Jan 31 02:36:14 np0005603609 systemd[230499]: Stopped target Main User Target.
Jan 31 02:36:14 np0005603609 systemd[230499]: Stopped target Basic System.
Jan 31 02:36:14 np0005603609 systemd[230499]: Stopped target Paths.
Jan 31 02:36:14 np0005603609 systemd[230499]: Stopped target Sockets.
Jan 31 02:36:14 np0005603609 systemd[230499]: Stopped target Timers.
Jan 31 02:36:14 np0005603609 systemd[230499]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 02:36:14 np0005603609 systemd[230499]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 02:36:14 np0005603609 systemd[230499]: Closed D-Bus User Message Bus Socket.
Jan 31 02:36:14 np0005603609 systemd[230499]: Stopped Create User's Volatile Files and Directories.
Jan 31 02:36:14 np0005603609 systemd[230499]: Removed slice User Application Slice.
Jan 31 02:36:14 np0005603609 systemd[230499]: Reached target Shutdown.
Jan 31 02:36:14 np0005603609 systemd[230499]: Finished Exit the Session.
Jan 31 02:36:14 np0005603609 systemd[230499]: Reached target Exit the Session.
Jan 31 02:36:14 np0005603609 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 02:36:14 np0005603609 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 02:36:14 np0005603609 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 02:36:14 np0005603609 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 02:36:14 np0005603609 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 02:36:14 np0005603609 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 02:36:14 np0005603609 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 02:36:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:36:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:14.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.088 221554 DEBUG oslo_concurrency.lockutils [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquiring lock "d033aef8-a3ed-4466-a952-762e3953a1ca" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.089 221554 DEBUG oslo_concurrency.lockutils [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "d033aef8-a3ed-4466-a952-762e3953a1ca" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.089 221554 DEBUG oslo_concurrency.lockutils [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquiring lock "d033aef8-a3ed-4466-a952-762e3953a1ca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.090 221554 DEBUG oslo_concurrency.lockutils [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "d033aef8-a3ed-4466-a952-762e3953a1ca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.090 221554 DEBUG oslo_concurrency.lockutils [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "d033aef8-a3ed-4466-a952-762e3953a1ca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.091 221554 INFO nova.compute.manager [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Terminating instance#033[00m
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.092 221554 DEBUG oslo_concurrency.lockutils [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquiring lock "refresh_cache-d033aef8-a3ed-4466-a952-762e3953a1ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.092 221554 DEBUG oslo_concurrency.lockutils [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquired lock "refresh_cache-d033aef8-a3ed-4466-a952-762e3953a1ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.092 221554 DEBUG nova.network.neutron [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.358 221554 DEBUG nova.network.neutron [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.555 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:15.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.700 221554 DEBUG nova.network.neutron [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.744 221554 DEBUG oslo_concurrency.lockutils [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Releasing lock "refresh_cache-d033aef8-a3ed-4466-a952-762e3953a1ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.744 221554 DEBUG nova.compute.manager [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.752 221554 INFO nova.virt.libvirt.driver [-] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Instance destroyed successfully.#033[00m
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.752 221554 DEBUG nova.objects.instance [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lazy-loading 'resources' on Instance uuid d033aef8-a3ed-4466-a952-762e3953a1ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:36:15 np0005603609 nova_compute[221550]: 2026-01-31 07:36:15.840 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:16.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:17 np0005603609 nova_compute[221550]: 2026-01-31 07:36:17.142 221554 INFO nova.virt.libvirt.driver [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Deleting instance files /var/lib/nova/instances/d033aef8-a3ed-4466-a952-762e3953a1ca_del#033[00m
Jan 31 02:36:17 np0005603609 nova_compute[221550]: 2026-01-31 07:36:17.142 221554 INFO nova.virt.libvirt.driver [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Deletion of /var/lib/nova/instances/d033aef8-a3ed-4466-a952-762e3953a1ca_del complete#033[00m
Jan 31 02:36:17 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 31 02:36:17 np0005603609 nova_compute[221550]: 2026-01-31 07:36:17.214 221554 INFO nova.compute.manager [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Took 1.47 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:36:17 np0005603609 nova_compute[221550]: 2026-01-31 07:36:17.215 221554 DEBUG oslo.service.loopingcall [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:36:17 np0005603609 nova_compute[221550]: 2026-01-31 07:36:17.215 221554 DEBUG nova.compute.manager [-] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:36:17 np0005603609 nova_compute[221550]: 2026-01-31 07:36:17.215 221554 DEBUG nova.network.neutron [-] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:36:17 np0005603609 nova_compute[221550]: 2026-01-31 07:36:17.342 221554 DEBUG nova.network.neutron [-] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:36:17 np0005603609 nova_compute[221550]: 2026-01-31 07:36:17.362 221554 DEBUG nova.network.neutron [-] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:36:17 np0005603609 nova_compute[221550]: 2026-01-31 07:36:17.383 221554 INFO nova.compute.manager [-] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Took 0.17 seconds to deallocate network for instance.#033[00m
Jan 31 02:36:17 np0005603609 nova_compute[221550]: 2026-01-31 07:36:17.454 221554 DEBUG oslo_concurrency.lockutils [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:17 np0005603609 nova_compute[221550]: 2026-01-31 07:36:17.454 221554 DEBUG oslo_concurrency.lockutils [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:17.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:17 np0005603609 nova_compute[221550]: 2026-01-31 07:36:17.612 221554 DEBUG oslo_concurrency.processutils [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:17 np0005603609 nova_compute[221550]: 2026-01-31 07:36:17.641 221554 DEBUG oslo_concurrency.lockutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Acquiring lock "refresh_cache-6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:36:17 np0005603609 nova_compute[221550]: 2026-01-31 07:36:17.641 221554 DEBUG oslo_concurrency.lockutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Acquired lock "refresh_cache-6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:36:17 np0005603609 nova_compute[221550]: 2026-01-31 07:36:17.642 221554 DEBUG nova.network.neutron [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:36:17 np0005603609 nova_compute[221550]: 2026-01-31 07:36:17.780 221554 DEBUG nova.network.neutron [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:36:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:36:18 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1305993749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:36:18 np0005603609 nova_compute[221550]: 2026-01-31 07:36:18.070 221554 DEBUG oslo_concurrency.processutils [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:18 np0005603609 nova_compute[221550]: 2026-01-31 07:36:18.076 221554 DEBUG nova.compute.provider_tree [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:36:18 np0005603609 nova_compute[221550]: 2026-01-31 07:36:18.323 221554 DEBUG nova.scheduler.client.report [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:36:18 np0005603609 nova_compute[221550]: 2026-01-31 07:36:18.349 221554 DEBUG oslo_concurrency.lockutils [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:18 np0005603609 nova_compute[221550]: 2026-01-31 07:36:18.380 221554 INFO nova.scheduler.client.report [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Deleted allocations for instance d033aef8-a3ed-4466-a952-762e3953a1ca#033[00m
Jan 31 02:36:18 np0005603609 nova_compute[221550]: 2026-01-31 07:36:18.385 221554 DEBUG nova.network.neutron [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:36:18 np0005603609 nova_compute[221550]: 2026-01-31 07:36:18.408 221554 DEBUG oslo_concurrency.lockutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Releasing lock "refresh_cache-6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:36:18 np0005603609 nova_compute[221550]: 2026-01-31 07:36:18.539 221554 DEBUG oslo_concurrency.lockutils [None req-870dc8e3-7870-4c63-adef-1c2bcb4e79f8 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "d033aef8-a3ed-4466-a952-762e3953a1ca" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:18 np0005603609 nova_compute[221550]: 2026-01-31 07:36:18.540 221554 DEBUG nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 31 02:36:18 np0005603609 nova_compute[221550]: 2026-01-31 07:36:18.541 221554 DEBUG nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 02:36:18 np0005603609 nova_compute[221550]: 2026-01-31 07:36:18.542 221554 INFO nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Creating image(s)#033[00m
Jan 31 02:36:18 np0005603609 nova_compute[221550]: 2026-01-31 07:36:18.585 221554 DEBUG nova.storage.rbd_utils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] creating snapshot(nova-resize) on rbd image(6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 02:36:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:18.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e150 e150: 3 total, 3 up, 3 in
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.576 221554 DEBUG nova.objects.instance [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:36:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:19.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.593 221554 DEBUG oslo_concurrency.lockutils [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquiring lock "d3d4b995-0e3e-43ff-98d2-3a10297555b3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.593 221554 DEBUG oslo_concurrency.lockutils [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "d3d4b995-0e3e-43ff-98d2-3a10297555b3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.593 221554 DEBUG oslo_concurrency.lockutils [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquiring lock "d3d4b995-0e3e-43ff-98d2-3a10297555b3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.594 221554 DEBUG oslo_concurrency.lockutils [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "d3d4b995-0e3e-43ff-98d2-3a10297555b3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.594 221554 DEBUG oslo_concurrency.lockutils [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "d3d4b995-0e3e-43ff-98d2-3a10297555b3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.595 221554 INFO nova.compute.manager [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Terminating instance#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.596 221554 DEBUG oslo_concurrency.lockutils [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquiring lock "refresh_cache-d3d4b995-0e3e-43ff-98d2-3a10297555b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.596 221554 DEBUG oslo_concurrency.lockutils [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquired lock "refresh_cache-d3d4b995-0e3e-43ff-98d2-3a10297555b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.596 221554 DEBUG nova.network.neutron [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.717 221554 DEBUG nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.717 221554 DEBUG nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Ensure instance console log exists: /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.718 221554 DEBUG oslo_concurrency.lockutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.718 221554 DEBUG oslo_concurrency.lockutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.718 221554 DEBUG oslo_concurrency.lockutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.720 221554 DEBUG nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.724 221554 WARNING nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.728 221554 DEBUG nova.virt.libvirt.host [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.729 221554 DEBUG nova.virt.libvirt.host [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.734 221554 DEBUG nova.virt.libvirt.host [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.734 221554 DEBUG nova.virt.libvirt.host [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.735 221554 DEBUG nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.735 221554 DEBUG nova.virt.hardware [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.736 221554 DEBUG nova.virt.hardware [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.736 221554 DEBUG nova.virt.hardware [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.736 221554 DEBUG nova.virt.hardware [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.737 221554 DEBUG nova.virt.hardware [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.737 221554 DEBUG nova.virt.hardware [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.737 221554 DEBUG nova.virt.hardware [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.737 221554 DEBUG nova.virt.hardware [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.737 221554 DEBUG nova.virt.hardware [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.738 221554 DEBUG nova.virt.hardware [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.738 221554 DEBUG nova.virt.hardware [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.738 221554 DEBUG nova.objects.instance [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.791 221554 DEBUG oslo_concurrency.processutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:19 np0005603609 nova_compute[221550]: 2026-01-31 07:36:19.878 221554 DEBUG nova.network.neutron [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:36:20 np0005603609 nova_compute[221550]: 2026-01-31 07:36:20.059 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:20.060 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:36:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:20.060 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:36:20 np0005603609 nova_compute[221550]: 2026-01-31 07:36:20.177 221554 DEBUG nova.network.neutron [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:36:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:36:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1679574900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:36:20 np0005603609 nova_compute[221550]: 2026-01-31 07:36:20.196 221554 DEBUG oslo_concurrency.lockutils [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Releasing lock "refresh_cache-d3d4b995-0e3e-43ff-98d2-3a10297555b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:36:20 np0005603609 nova_compute[221550]: 2026-01-31 07:36:20.198 221554 DEBUG nova.compute.manager [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:36:20 np0005603609 nova_compute[221550]: 2026-01-31 07:36:20.205 221554 DEBUG oslo_concurrency.processutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:20 np0005603609 nova_compute[221550]: 2026-01-31 07:36:20.248 221554 DEBUG oslo_concurrency.processutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:20 np0005603609 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000017.scope: Deactivated successfully.
Jan 31 02:36:20 np0005603609 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000017.scope: Consumed 12.698s CPU time.
Jan 31 02:36:20 np0005603609 systemd-machined[190912]: Machine qemu-11-instance-00000017 terminated.
Jan 31 02:36:20 np0005603609 nova_compute[221550]: 2026-01-31 07:36:20.419 221554 INFO nova.virt.libvirt.driver [-] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Instance destroyed successfully.#033[00m
Jan 31 02:36:20 np0005603609 nova_compute[221550]: 2026-01-31 07:36:20.421 221554 DEBUG nova.objects.instance [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lazy-loading 'resources' on Instance uuid d3d4b995-0e3e-43ff-98d2-3a10297555b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:36:20 np0005603609 nova_compute[221550]: 2026-01-31 07:36:20.557 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:36:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4154236657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:36:20 np0005603609 nova_compute[221550]: 2026-01-31 07:36:20.648 221554 DEBUG oslo_concurrency.processutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:20 np0005603609 nova_compute[221550]: 2026-01-31 07:36:20.652 221554 DEBUG nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  <uuid>6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31</uuid>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  <name>instance-00000016</name>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <nova:name>tempest-MigrationsAdminTest-server-1458295136</nova:name>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:36:19</nova:creationTime>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:36:20 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:        <nova:user uuid="71f887fd92fb486a959e5ca100cb1e10">tempest-MigrationsAdminTest-137263588-project-member</nova:user>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:        <nova:project uuid="7c1ddd67115f4f7bab056dbb2f270ccc">tempest-MigrationsAdminTest-137263588</nova:project>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <entry name="serial">6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31</entry>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <entry name="uuid">6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31</entry>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk">
Jan 31 02:36:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:36:20 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk.config">
Jan 31 02:36:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:36:20 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31/console.log" append="off"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:36:20 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:36:20 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:36:20 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:36:20 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:36:20 np0005603609 nova_compute[221550]: 2026-01-31 07:36:20.711 221554 DEBUG nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:36:20 np0005603609 nova_compute[221550]: 2026-01-31 07:36:20.712 221554 DEBUG nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:36:20 np0005603609 nova_compute[221550]: 2026-01-31 07:36:20.713 221554 INFO nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Using config drive#033[00m
Jan 31 02:36:20 np0005603609 nova_compute[221550]: 2026-01-31 07:36:20.837 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:20 np0005603609 systemd-machined[190912]: New machine qemu-13-instance-00000016.
Jan 31 02:36:20 np0005603609 systemd[1]: Started Virtual Machine qemu-13-instance-00000016.
Jan 31 02:36:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:20.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.030 221554 INFO nova.virt.libvirt.driver [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Deleting instance files /var/lib/nova/instances/d3d4b995-0e3e-43ff-98d2-3a10297555b3_del#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.031 221554 INFO nova.virt.libvirt.driver [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Deletion of /var/lib/nova/instances/d3d4b995-0e3e-43ff-98d2-3a10297555b3_del complete#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.077 221554 INFO nova.compute.manager [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.078 221554 DEBUG oslo.service.loopingcall [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.079 221554 DEBUG nova.compute.manager [-] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.079 221554 DEBUG nova.network.neutron [-] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.215 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844981.2152824, 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.216 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.218 221554 DEBUG nova.compute.manager [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.220 221554 INFO nova.virt.libvirt.driver [-] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance running successfully.#033[00m
Jan 31 02:36:21 np0005603609 virtqemud[221292]: argument unsupported: QEMU guest agent is not configured
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.223 221554 DEBUG nova.virt.libvirt.guest [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.224 221554 DEBUG nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.238 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.250 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:36:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.283 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.284 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844981.2159848, 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.285 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] VM Started (Lifecycle Event)#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.307 221554 DEBUG nova.network.neutron [-] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.329 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.332 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.346 221554 DEBUG nova.network.neutron [-] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.378 221554 INFO nova.compute.manager [-] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Took 0.30 seconds to deallocate network for instance.#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.441 221554 DEBUG oslo_concurrency.lockutils [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.442 221554 DEBUG oslo_concurrency.lockutils [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:21 np0005603609 nova_compute[221550]: 2026-01-31 07:36:21.558 221554 DEBUG oslo_concurrency.processutils [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:21.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:36:21 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3505978811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:36:22 np0005603609 nova_compute[221550]: 2026-01-31 07:36:22.001 221554 DEBUG oslo_concurrency.processutils [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:22 np0005603609 nova_compute[221550]: 2026-01-31 07:36:22.009 221554 DEBUG nova.compute.provider_tree [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:36:22 np0005603609 nova_compute[221550]: 2026-01-31 07:36:22.027 221554 DEBUG nova.scheduler.client.report [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:36:22 np0005603609 nova_compute[221550]: 2026-01-31 07:36:22.053 221554 DEBUG oslo_concurrency.lockutils [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:22 np0005603609 nova_compute[221550]: 2026-01-31 07:36:22.085 221554 INFO nova.scheduler.client.report [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Deleted allocations for instance d3d4b995-0e3e-43ff-98d2-3a10297555b3#033[00m
Jan 31 02:36:22 np0005603609 nova_compute[221550]: 2026-01-31 07:36:22.167 221554 DEBUG oslo_concurrency.lockutils [None req-17344458-eb2e-43cb-bb88-1f95174e07e0 415bed69690f43f1930afb0d3fb21fb9 22109e5bc4ad462b83d70290539ac0c3 - - default default] Lock "d3d4b995-0e3e-43ff-98d2-3a10297555b3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:22 np0005603609 nova_compute[221550]: 2026-01-31 07:36:22.854 221554 DEBUG oslo_concurrency.lockutils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "refresh_cache-6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:36:22 np0005603609 nova_compute[221550]: 2026-01-31 07:36:22.855 221554 DEBUG oslo_concurrency.lockutils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquired lock "refresh_cache-6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:36:22 np0005603609 nova_compute[221550]: 2026-01-31 07:36:22.855 221554 DEBUG nova.network.neutron [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:36:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:22.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:23 np0005603609 nova_compute[221550]: 2026-01-31 07:36:23.200 221554 DEBUG nova.network.neutron [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:36:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:23.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:24 np0005603609 nova_compute[221550]: 2026-01-31 07:36:24.313 221554 DEBUG nova.network.neutron [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:36:24 np0005603609 nova_compute[221550]: 2026-01-31 07:36:24.342 221554 DEBUG oslo_concurrency.lockutils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Releasing lock "refresh_cache-6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:36:24 np0005603609 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000016.scope: Deactivated successfully.
Jan 31 02:36:24 np0005603609 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000016.scope: Consumed 3.565s CPU time.
Jan 31 02:36:24 np0005603609 systemd-machined[190912]: Machine qemu-13-instance-00000016 terminated.
Jan 31 02:36:24 np0005603609 nova_compute[221550]: 2026-01-31 07:36:24.578 221554 INFO nova.virt.libvirt.driver [-] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance destroyed successfully.#033[00m
Jan 31 02:36:24 np0005603609 nova_compute[221550]: 2026-01-31 07:36:24.578 221554 DEBUG nova.objects.instance [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'resources' on Instance uuid 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:36:24 np0005603609 nova_compute[221550]: 2026-01-31 07:36:24.596 221554 DEBUG oslo_concurrency.lockutils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:24 np0005603609 nova_compute[221550]: 2026-01-31 07:36:24.597 221554 DEBUG oslo_concurrency.lockutils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:24 np0005603609 nova_compute[221550]: 2026-01-31 07:36:24.624 221554 DEBUG nova.objects.instance [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'migration_context' on Instance uuid 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:36:24 np0005603609 nova_compute[221550]: 2026-01-31 07:36:24.683 221554 DEBUG oslo_concurrency.processutils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:24.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:25 np0005603609 nova_compute[221550]: 2026-01-31 07:36:25.098 221554 DEBUG oslo_concurrency.processutils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:25 np0005603609 nova_compute[221550]: 2026-01-31 07:36:25.103 221554 DEBUG nova.compute.provider_tree [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:36:25 np0005603609 nova_compute[221550]: 2026-01-31 07:36:25.119 221554 DEBUG nova.scheduler.client.report [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:36:25 np0005603609 nova_compute[221550]: 2026-01-31 07:36:25.172 221554 DEBUG oslo_concurrency.lockutils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:25 np0005603609 nova_compute[221550]: 2026-01-31 07:36:25.342 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844970.3396156, d033aef8-a3ed-4466-a952-762e3953a1ca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:36:25 np0005603609 nova_compute[221550]: 2026-01-31 07:36:25.343 221554 INFO nova.compute.manager [-] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:36:25 np0005603609 nova_compute[221550]: 2026-01-31 07:36:25.370 221554 DEBUG nova.compute.manager [None req-f1f002cf-5e46-480c-9de9-2eeecbb6a078 - - - - - -] [instance: d033aef8-a3ed-4466-a952-762e3953a1ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:36:25 np0005603609 nova_compute[221550]: 2026-01-31 07:36:25.560 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:25.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:25 np0005603609 nova_compute[221550]: 2026-01-31 07:36:25.878 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:36:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:26.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:36:27 np0005603609 nova_compute[221550]: 2026-01-31 07:36:27.015 221554 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Creating tmpfile /var/lib/nova/instances/tmp7yca7bub to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Jan 31 02:36:27 np0005603609 nova_compute[221550]: 2026-01-31 07:36:27.016 221554 DEBUG nova.compute.manager [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp7yca7bub',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Jan 31 02:36:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:36:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:27.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:36:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e151 e151: 3 total, 3 up, 3 in
Jan 31 02:36:27 np0005603609 nova_compute[221550]: 2026-01-31 07:36:27.916 221554 DEBUG nova.compute.manager [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp7yca7bub',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='71265e55-f168-471c-80bc-80b49177a637',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Jan 31 02:36:27 np0005603609 nova_compute[221550]: 2026-01-31 07:36:27.950 221554 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:36:27 np0005603609 nova_compute[221550]: 2026-01-31 07:36:27.951 221554 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquired lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:36:27 np0005603609 nova_compute[221550]: 2026-01-31 07:36:27.951 221554 DEBUG nova.network.neutron [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:36:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:36:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:28.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:36:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:29.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:36:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 11K writes, 46K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3082 syncs, 3.66 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5022 writes, 20K keys, 5022 commit groups, 1.0 writes per commit group, ingest: 22.45 MB, 0.04 MB/s#012Interval WAL: 5022 writes, 1947 syncs, 2.58 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 02:36:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:30.062 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.089 221554 DEBUG nova.network.neutron [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updating instance_info_cache with network_info: [{"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.116 221554 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Releasing lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.119 221554 DEBUG os_brick.utils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.120 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.131 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.131 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe25762-6501-43c9-ba95-b6c2e94b0108]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.133 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.138 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.139 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[38578a6a-ac74-4f32-bea7-2c4f48755717]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.141 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.147 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.148 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd4a867-7469-4310-b1fc-a58f90057b84]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.150 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[d8fb7add-764f-422e-bd44-31c51bef0137]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.151 221554 DEBUG oslo_concurrency.processutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.172 221554 DEBUG oslo_concurrency.processutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.174 221554 DEBUG os_brick.initiator.connectors.lightos [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.174 221554 DEBUG os_brick.initiator.connectors.lightos [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.174 221554 DEBUG os_brick.initiator.connectors.lightos [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.174 221554 DEBUG os_brick.utils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] <== get_connector_properties: return (55ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.564 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:36:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:30.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:36:30 np0005603609 nova_compute[221550]: 2026-01-31 07:36:30.929 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:31.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.743 221554 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp7yca7bub',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='71265e55-f168-471c-80bc-80b49177a637',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={21ac0cb5-f889-4135-9b17-5debc0b9246e='07354d43-185b-4fd2-8dc3-63e0a2cce98a'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.743 221554 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Creating instance directory: /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.743 221554 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Ensure instance console log exists: /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.744 221554 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.747 221554 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.748 221554 DEBUG nova.virt.libvirt.vif [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:36:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-596053430',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-596053430',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:36:20Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='29d136be5e384689a95acd607131dfd0',ramdisk_id='',reservation_id='r-w7qr5y3u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1421195096',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:36:20Z,user_data=None,user_id='ea44c45fe7df4f36b5c722fbfc214f2e',uuid=71265e55-f168-471c-80bc-80b49177a637,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.748 221554 DEBUG nova.network.os_vif_util [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converting VIF {"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.748 221554 DEBUG nova.network.os_vif_util [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.749 221554 DEBUG os_vif [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.749 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.749 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.750 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.752 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.753 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84bfd4cb-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.753 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84bfd4cb-81, col_values=(('external_ids', {'iface-id': '84bfd4cb-8188-4fde-bca2-fdb0a732119f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:c1:cc', 'vm-uuid': '71265e55-f168-471c-80bc-80b49177a637'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.754 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:31 np0005603609 NetworkManager[49064]: <info>  [1769844991.7558] manager: (tap84bfd4cb-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.756 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.763 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.763 221554 INFO os_vif [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81')#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.768 221554 DEBUG nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Jan 31 02:36:31 np0005603609 nova_compute[221550]: 2026-01-31 07:36:31.768 221554 DEBUG nova.compute.manager [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp7yca7bub',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='71265e55-f168-471c-80bc-80b49177a637',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={21ac0cb5-f889-4135-9b17-5debc0b9246e='07354d43-185b-4fd2-8dc3-63e0a2cce98a'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Jan 31 02:36:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:32.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:33.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:34 np0005603609 nova_compute[221550]: 2026-01-31 07:36:34.370 221554 DEBUG nova.network.neutron [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Port 84bfd4cb-8188-4fde-bca2-fdb0a732119f updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Jan 31 02:36:34 np0005603609 nova_compute[221550]: 2026-01-31 07:36:34.586 221554 DEBUG nova.compute.manager [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp7yca7bub',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='71265e55-f168-471c-80bc-80b49177a637',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={21ac0cb5-f889-4135-9b17-5debc0b9246e='07354d43-185b-4fd2-8dc3-63e0a2cce98a'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Jan 31 02:36:34 np0005603609 systemd[1]: Starting libvirt proxy daemon...
Jan 31 02:36:34 np0005603609 systemd[1]: Started libvirt proxy daemon.
Jan 31 02:36:34 np0005603609 kernel: tap84bfd4cb-81: entered promiscuous mode
Jan 31 02:36:34 np0005603609 NetworkManager[49064]: <info>  [1769844994.8589] manager: (tap84bfd4cb-81): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Jan 31 02:36:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:34.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:34 np0005603609 ovn_controller[130359]: 2026-01-31T07:36:34Z|00086|binding|INFO|Claiming lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f for this additional chassis.
Jan 31 02:36:34 np0005603609 ovn_controller[130359]: 2026-01-31T07:36:34Z|00087|binding|INFO|84bfd4cb-8188-4fde-bca2-fdb0a732119f: Claiming fa:16:3e:eb:c1:cc 10.100.0.8
Jan 31 02:36:34 np0005603609 nova_compute[221550]: 2026-01-31 07:36:34.893 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:34 np0005603609 ovn_controller[130359]: 2026-01-31T07:36:34Z|00088|binding|INFO|Setting lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f ovn-installed in OVS
Jan 31 02:36:34 np0005603609 nova_compute[221550]: 2026-01-31 07:36:34.901 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:34 np0005603609 systemd-udevd[231303]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:36:34 np0005603609 systemd-machined[190912]: New machine qemu-14-instance-0000001a.
Jan 31 02:36:34 np0005603609 NetworkManager[49064]: <info>  [1769844994.9259] device (tap84bfd4cb-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:36:34 np0005603609 NetworkManager[49064]: <info>  [1769844994.9266] device (tap84bfd4cb-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:36:34 np0005603609 systemd[1]: Started Virtual Machine qemu-14-instance-0000001a.
Jan 31 02:36:35 np0005603609 nova_compute[221550]: 2026-01-31 07:36:35.107 221554 DEBUG oslo_concurrency.lockutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "566c132c-f694-42ee-8c41-84769367d6f5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:35 np0005603609 nova_compute[221550]: 2026-01-31 07:36:35.110 221554 DEBUG oslo_concurrency.lockutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "566c132c-f694-42ee-8c41-84769367d6f5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:35 np0005603609 nova_compute[221550]: 2026-01-31 07:36:35.133 221554 DEBUG nova.compute.manager [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:36:35 np0005603609 nova_compute[221550]: 2026-01-31 07:36:35.225 221554 DEBUG oslo_concurrency.lockutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:35 np0005603609 nova_compute[221550]: 2026-01-31 07:36:35.225 221554 DEBUG oslo_concurrency.lockutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:35 np0005603609 nova_compute[221550]: 2026-01-31 07:36:35.234 221554 DEBUG nova.virt.hardware [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:36:35 np0005603609 nova_compute[221550]: 2026-01-31 07:36:35.235 221554 INFO nova.compute.claims [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:36:35 np0005603609 nova_compute[221550]: 2026-01-31 07:36:35.417 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844980.4154522, d3d4b995-0e3e-43ff-98d2-3a10297555b3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:36:35 np0005603609 nova_compute[221550]: 2026-01-31 07:36:35.417 221554 INFO nova.compute.manager [-] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:36:35 np0005603609 nova_compute[221550]: 2026-01-31 07:36:35.577 221554 DEBUG nova.compute.manager [None req-f8f4c086-2c18-424a-9106-57ac6fd67f98 - - - - - -] [instance: d3d4b995-0e3e-43ff-98d2-3a10297555b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:36:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:35.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:35 np0005603609 nova_compute[221550]: 2026-01-31 07:36:35.654 221554 DEBUG oslo_concurrency.processutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:35 np0005603609 nova_compute[221550]: 2026-01-31 07:36:35.978 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:36:36 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/267814869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.113 221554 DEBUG oslo_concurrency.processutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.117 221554 DEBUG nova.compute.provider_tree [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.148 221554 DEBUG nova.scheduler.client.report [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.244 221554 DEBUG oslo_concurrency.lockutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.245 221554 DEBUG nova.compute.manager [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:36:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.345 221554 DEBUG nova.compute.manager [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.345 221554 DEBUG nova.network.neutron [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.364 221554 INFO nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.379 221554 DEBUG nova.compute.manager [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.472 221554 DEBUG nova.compute.manager [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.473 221554 DEBUG nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.474 221554 INFO nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Creating image(s)#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.503 221554 DEBUG nova.storage.rbd_utils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image 566c132c-f694-42ee-8c41-84769367d6f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.536 221554 DEBUG nova.storage.rbd_utils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image 566c132c-f694-42ee-8c41-84769367d6f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.567 221554 DEBUG nova.storage.rbd_utils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image 566c132c-f694-42ee-8c41-84769367d6f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.571 221554 DEBUG oslo_concurrency.processutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.585 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844996.5184321, 71265e55-f168-471c-80bc-80b49177a637 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.586 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] VM Started (Lifecycle Event)#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.633 221554 DEBUG oslo_concurrency.processutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.633 221554 DEBUG oslo_concurrency.lockutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.634 221554 DEBUG oslo_concurrency.lockutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.634 221554 DEBUG oslo_concurrency.lockutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.683 221554 DEBUG nova.storage.rbd_utils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image 566c132c-f694-42ee-8c41-84769367d6f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.688 221554 DEBUG oslo_concurrency.processutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 566c132c-f694-42ee-8c41-84769367d6f5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.716 221554 DEBUG nova.network.neutron [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.717 221554 DEBUG nova.compute.manager [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.718 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:36:36 np0005603609 nova_compute[221550]: 2026-01-31 07:36:36.755 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:37.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:36:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:37.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:36:37 np0005603609 nova_compute[221550]: 2026-01-31 07:36:37.757 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769844997.7566876, 71265e55-f168-471c-80bc-80b49177a637 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:36:37 np0005603609 nova_compute[221550]: 2026-01-31 07:36:37.757 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:36:37 np0005603609 nova_compute[221550]: 2026-01-31 07:36:37.804 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:36:37 np0005603609 nova_compute[221550]: 2026-01-31 07:36:37.808 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:36:37 np0005603609 nova_compute[221550]: 2026-01-31 07:36:37.836 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Jan 31 02:36:37 np0005603609 nova_compute[221550]: 2026-01-31 07:36:37.843 221554 DEBUG oslo_concurrency.processutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 566c132c-f694-42ee-8c41-84769367d6f5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:37 np0005603609 nova_compute[221550]: 2026-01-31 07:36:37.931 221554 DEBUG nova.storage.rbd_utils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] resizing rbd image 566c132c-f694-42ee-8c41-84769367d6f5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.182 221554 DEBUG nova.objects.instance [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lazy-loading 'migration_context' on Instance uuid 566c132c-f694-42ee-8c41-84769367d6f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.206 221554 DEBUG nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.207 221554 DEBUG nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Ensure instance console log exists: /var/lib/nova/instances/566c132c-f694-42ee-8c41-84769367d6f5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.208 221554 DEBUG oslo_concurrency.lockutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.209 221554 DEBUG oslo_concurrency.lockutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.209 221554 DEBUG oslo_concurrency.lockutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.212 221554 DEBUG nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.219 221554 WARNING nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.227 221554 DEBUG nova.virt.libvirt.host [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.228 221554 DEBUG nova.virt.libvirt.host [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.232 221554 DEBUG nova.virt.libvirt.host [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.232 221554 DEBUG nova.virt.libvirt.host [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.234 221554 DEBUG nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.235 221554 DEBUG nova.virt.hardware [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.235 221554 DEBUG nova.virt.hardware [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.236 221554 DEBUG nova.virt.hardware [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.236 221554 DEBUG nova.virt.hardware [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.237 221554 DEBUG nova.virt.hardware [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.237 221554 DEBUG nova.virt.hardware [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.237 221554 DEBUG nova.virt.hardware [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.238 221554 DEBUG nova.virt.hardware [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.238 221554 DEBUG nova.virt.hardware [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.239 221554 DEBUG nova.virt.hardware [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.239 221554 DEBUG nova.virt.hardware [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.244 221554 DEBUG oslo_concurrency.processutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:36:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:36:38 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2898010128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:36:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e152 e152: 3 total, 3 up, 3 in
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.678 221554 DEBUG oslo_concurrency.processutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.753 221554 DEBUG nova.storage.rbd_utils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image 566c132c-f694-42ee-8c41-84769367d6f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:36:38 np0005603609 nova_compute[221550]: 2026-01-31 07:36:38.758 221554 DEBUG oslo_concurrency.processutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:36:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:36:39 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1723160795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.177 221554 DEBUG oslo_concurrency.processutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.179 221554 DEBUG nova.objects.instance [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 566c132c-f694-42ee-8c41-84769367d6f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:36:39 np0005603609 podman[231603]: 2026-01-31 07:36:39.182884077 +0000 UTC m=+0.067399378 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.200 221554 DEBUG nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  <uuid>566c132c-f694-42ee-8c41-84769367d6f5</uuid>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  <name>instance-0000001b</name>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServersOnMultiNodesTest-server-694437805</nova:name>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:36:38</nova:creationTime>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:36:39 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:        <nova:user uuid="741e8133b32342e083b6dd5f0e316abf">tempest-ServersOnMultiNodesTest-1893229170-project-member</nova:user>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:        <nova:project uuid="b2c9f3f1d94b49ae835ac14aae70bd73">tempest-ServersOnMultiNodesTest-1893229170</nova:project>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <entry name="serial">566c132c-f694-42ee-8c41-84769367d6f5</entry>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <entry name="uuid">566c132c-f694-42ee-8c41-84769367d6f5</entry>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/566c132c-f694-42ee-8c41-84769367d6f5_disk">
Jan 31 02:36:39 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:36:39 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/566c132c-f694-42ee-8c41-84769367d6f5_disk.config">
Jan 31 02:36:39 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:36:39 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/566c132c-f694-42ee-8c41-84769367d6f5/console.log" append="off"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:36:39 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:36:39 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:36:39 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:36:39 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 02:36:39 np0005603609 podman[231602]: 2026-01-31 07:36:39.206810561 +0000 UTC m=+0.095925894 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.254 221554 DEBUG nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.255 221554 DEBUG nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.256 221554 INFO nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Using config drive
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.291 221554 DEBUG nova.storage.rbd_utils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image 566c132c-f694-42ee-8c41-84769367d6f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:36:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:39.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.531 221554 INFO nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Creating config drive at /var/lib/nova/instances/566c132c-f694-42ee-8c41-84769367d6f5/disk.config
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.537 221554 DEBUG oslo_concurrency.processutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/566c132c-f694-42ee-8c41-84769367d6f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1yuo9wjk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.576 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844984.575166, 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.577 221554 INFO nova.compute.manager [-] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] VM Stopped (Lifecycle Event)
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.604 221554 DEBUG nova.compute.manager [None req-975977ba-062e-4d97-8729-688ef8c7b515 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:36:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:39.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.655 221554 DEBUG oslo_concurrency.processutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/566c132c-f694-42ee-8c41-84769367d6f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1yuo9wjk" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.689 221554 DEBUG nova.storage.rbd_utils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image 566c132c-f694-42ee-8c41-84769367d6f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.693 221554 DEBUG oslo_concurrency.processutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/566c132c-f694-42ee-8c41-84769367d6f5/disk.config 566c132c-f694-42ee-8c41-84769367d6f5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:36:39 np0005603609 ovn_controller[130359]: 2026-01-31T07:36:39Z|00089|binding|INFO|Claiming lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f for this chassis.
Jan 31 02:36:39 np0005603609 ovn_controller[130359]: 2026-01-31T07:36:39Z|00090|binding|INFO|84bfd4cb-8188-4fde-bca2-fdb0a732119f: Claiming fa:16:3e:eb:c1:cc 10.100.0.8
Jan 31 02:36:39 np0005603609 ovn_controller[130359]: 2026-01-31T07:36:39Z|00091|binding|INFO|Setting lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f up in Southbound
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.788 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:c1:cc 10.100.0.8'], port_security=['fa:16:3e:eb:c1:cc 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '71265e55-f168-471c-80bc-80b49177a637', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f42f93-f94d-4170-883d-d45cddf5fdad, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=84bfd4cb-8188-4fde-bca2-fdb0a732119f) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.789 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 84bfd4cb-8188-4fde-bca2-fdb0a732119f in datapath e4d1862b-2abc-4d60-bc48-19a5318038f4 bound to our chassis
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.790 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4d1862b-2abc-4d60-bc48-19a5318038f4
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.797 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[08158a60-1011-4457-92f8-3ce23283efec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.798 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4d1862b-21 in ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.800 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4d1862b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.801 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5fc479-6f7b-4043-9103-d8c157f0a859]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.801 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[15abbeb6-b322-487c-851e-097c23507502]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.811 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[8f85c66c-3384-423a-a688-752990571425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.820 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e81c2720-0e75-44d5-82b5-f5ba009d2493]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.838 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[191a6232-e461-4715-8aa3-0ef96ebf240f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.843 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa81610-e829-4bb6-be34-131071065f56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:36:39 np0005603609 NetworkManager[49064]: <info>  [1769844999.8446] manager: (tape4d1862b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.861 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[eef8d803-ef3f-49be-919d-01022a7a5221]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:36:39 np0005603609 systemd-udevd[231716]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.864 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[6de6417e-3239-4cd5-98c7-1ae1e4be4ed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.879 221554 DEBUG oslo_concurrency.processutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/566c132c-f694-42ee-8c41-84769367d6f5/disk.config 566c132c-f694-42ee-8c41-84769367d6f5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.880 221554 INFO nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Deleting local config drive /var/lib/nova/instances/566c132c-f694-42ee-8c41-84769367d6f5/disk.config because it was imported into RBD.
Jan 31 02:36:39 np0005603609 NetworkManager[49064]: <info>  [1769844999.8817] device (tape4d1862b-20): carrier: link connected
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.883 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a43129-b7d2-42e4-9245-440327a2e853]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.895 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0015d1e1-4074-40f0-9e61-ddbf191af36b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d1862b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:38:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523958, 'reachable_time': 29738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 231735, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.908 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8d943ced-56d6-4728-b36f-578f66a1b52b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:3856'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523958, 'tstamp': 523958}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 231740, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.919 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[867a7166-ac00-4753-8875-ebc7b79f5295]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d1862b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:38:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523958, 'reachable_time': 29738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 231744, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:36:39 np0005603609 systemd-machined[190912]: New machine qemu-15-instance-0000001b.
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.931 221554 INFO nova.compute.manager [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Post operation of migration started
Jan 31 02:36:39 np0005603609 systemd[1]: Started Virtual Machine qemu-15-instance-0000001b.
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.936 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b3586433-ff1f-4c71-a7c3-edb7353e1ef9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.969 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6404d6f8-3efd-4eca-96d3-5f4dd002891b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.970 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d1862b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.970 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.971 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4d1862b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.972 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:36:39 np0005603609 NetworkManager[49064]: <info>  [1769844999.9733] manager: (tape4d1862b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 31 02:36:39 np0005603609 kernel: tape4d1862b-20: entered promiscuous mode
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.974 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.975 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4d1862b-20, col_values=(('external_ids', {'iface-id': '632f26c5-40a9-4337-84da-ea4b4bbdf89c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.976 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:36:39 np0005603609 ovn_controller[130359]: 2026-01-31T07:36:39Z|00092|binding|INFO|Releasing lport 632f26c5-40a9-4337-84da-ea4b4bbdf89c from this chassis (sb_readonly=0)
Jan 31 02:36:39 np0005603609 nova_compute[221550]: 2026-01-31 07:36:39.987 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.987 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4d1862b-2abc-4d60-bc48-19a5318038f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4d1862b-2abc-4d60-bc48-19a5318038f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.988 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9d1b1c79-580c-4637-989a-0f7f6f93e9e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.989 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-e4d1862b-2abc-4d60-bc48-19a5318038f4
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/e4d1862b-2abc-4d60-bc48-19a5318038f4.pid.haproxy
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID e4d1862b-2abc-4d60-bc48-19a5318038f4
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 02:36:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:39.990 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'env', 'PROCESS_TAG=haproxy-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4d1862b-2abc-4d60-bc48-19a5318038f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 02:36:40 np0005603609 nova_compute[221550]: 2026-01-31 07:36:40.241 221554 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:36:40 np0005603609 nova_compute[221550]: 2026-01-31 07:36:40.241 221554 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquired lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:36:40 np0005603609 nova_compute[221550]: 2026-01-31 07:36:40.242 221554 DEBUG nova.network.neutron [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 02:36:40 np0005603609 podman[231786]: 2026-01-31 07:36:40.274467403 +0000 UTC m=+0.054165763 container create f1b9dbc1bbd7c5eff5ff0669d6b134c1b8a5f9dcbc9963c386576b04550e3007 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 02:36:40 np0005603609 systemd[1]: Started libpod-conmon-f1b9dbc1bbd7c5eff5ff0669d6b134c1b8a5f9dcbc9963c386576b04550e3007.scope.
Jan 31 02:36:40 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:36:40 np0005603609 podman[231786]: 2026-01-31 07:36:40.24278859 +0000 UTC m=+0.022486990 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:36:40 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55044e5a134e5173d7d3a772c21f3c459a883bfac78b29b5f66b373d065ccc3a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:40 np0005603609 podman[231786]: 2026-01-31 07:36:40.358615838 +0000 UTC m=+0.138314228 container init f1b9dbc1bbd7c5eff5ff0669d6b134c1b8a5f9dcbc9963c386576b04550e3007 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 02:36:40 np0005603609 podman[231786]: 2026-01-31 07:36:40.363440926 +0000 UTC m=+0.143139286 container start f1b9dbc1bbd7c5eff5ff0669d6b134c1b8a5f9dcbc9963c386576b04550e3007 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:36:40 np0005603609 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[231801]: [NOTICE]   (231805) : New worker (231807) forked
Jan 31 02:36:40 np0005603609 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[231801]: [NOTICE]   (231805) : Loading success.
Jan 31 02:36:40 np0005603609 nova_compute[221550]: 2026-01-31 07:36:40.864 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845000.8637629, 566c132c-f694-42ee-8c41-84769367d6f5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:36:40 np0005603609 nova_compute[221550]: 2026-01-31 07:36:40.865 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] VM Resumed (Lifecycle Event)
Jan 31 02:36:40 np0005603609 nova_compute[221550]: 2026-01-31 07:36:40.867 221554 DEBUG nova.compute.manager [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:36:40 np0005603609 nova_compute[221550]: 2026-01-31 07:36:40.867 221554 DEBUG nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 02:36:40 np0005603609 nova_compute[221550]: 2026-01-31 07:36:40.870 221554 INFO nova.virt.libvirt.driver [-] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Instance spawned successfully.
Jan 31 02:36:40 np0005603609 nova_compute[221550]: 2026-01-31 07:36:40.870 221554 DEBUG nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 02:36:40 np0005603609 nova_compute[221550]: 2026-01-31 07:36:40.979 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.076 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.081 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.086 221554 DEBUG nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.086 221554 DEBUG nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.086 221554 DEBUG nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.087 221554 DEBUG nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.087 221554 DEBUG nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.087 221554 DEBUG nova.virt.libvirt.driver [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.113 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.113 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845000.8638885, 566c132c-f694-42ee-8c41-84769367d6f5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.121 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] VM Started (Lifecycle Event)#033[00m
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.142 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.146 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.157 221554 INFO nova.compute.manager [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Took 4.68 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.157 221554 DEBUG nova.compute.manager [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.170 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.223 221554 INFO nova.compute.manager [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Took 6.03 seconds to build instance.#033[00m
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.242 221554 DEBUG oslo_concurrency.lockutils [None req-b213087f-9f56-420e-add6-36d6c2248695 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "566c132c-f694-42ee-8c41-84769367d6f5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:41.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:41.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:41 np0005603609 nova_compute[221550]: 2026-01-31 07:36:41.757 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:42 np0005603609 nova_compute[221550]: 2026-01-31 07:36:42.064 221554 DEBUG nova.network.neutron [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updating instance_info_cache with network_info: [{"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:36:42 np0005603609 nova_compute[221550]: 2026-01-31 07:36:42.086 221554 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Releasing lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:36:42 np0005603609 nova_compute[221550]: 2026-01-31 07:36:42.111 221554 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:42 np0005603609 nova_compute[221550]: 2026-01-31 07:36:42.112 221554 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:42 np0005603609 nova_compute[221550]: 2026-01-31 07:36:42.112 221554 DEBUG oslo_concurrency.lockutils [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:42 np0005603609 nova_compute[221550]: 2026-01-31 07:36:42.118 221554 INFO nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Jan 31 02:36:42 np0005603609 virtqemud[221292]: Domain id=14 name='instance-0000001a' uuid=71265e55-f168-471c-80bc-80b49177a637 is tainted: custom-monitor
Jan 31 02:36:43 np0005603609 nova_compute[221550]: 2026-01-31 07:36:43.129 221554 INFO nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Jan 31 02:36:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:43.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:43.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:44 np0005603609 nova_compute[221550]: 2026-01-31 07:36:44.137 221554 INFO nova.virt.libvirt.driver [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Jan 31 02:36:44 np0005603609 nova_compute[221550]: 2026-01-31 07:36:44.142 221554 DEBUG nova.compute.manager [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:36:44 np0005603609 nova_compute[221550]: 2026-01-31 07:36:44.167 221554 DEBUG nova.objects.instance [None req-5333470e-3584-45ad-9c28-82c5870f4ef6 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 02:36:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:36:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1356891215' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:36:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:36:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1356891215' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:36:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:45.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:45.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:45 np0005603609 nova_compute[221550]: 2026-01-31 07:36:45.983 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:46 np0005603609 nova_compute[221550]: 2026-01-31 07:36:46.761 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:47 np0005603609 nova_compute[221550]: 2026-01-31 07:36:47.045 221554 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Check if temp file /var/lib/nova/instances/tmp0wegpadh exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Jan 31 02:36:47 np0005603609 nova_compute[221550]: 2026-01-31 07:36:47.046 221554 DEBUG nova.compute.manager [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp0wegpadh',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='71265e55-f168-471c-80bc-80b49177a637',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Jan 31 02:36:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:47.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:36:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:47.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:36:47 np0005603609 nova_compute[221550]: 2026-01-31 07:36:47.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:36:48 np0005603609 nova_compute[221550]: 2026-01-31 07:36:48.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:36:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:49.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:49.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:49 np0005603609 nova_compute[221550]: 2026-01-31 07:36:49.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:36:50 np0005603609 nova_compute[221550]: 2026-01-31 07:36:50.984 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:36:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/339851413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:36:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:51.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:51.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:51 np0005603609 nova_compute[221550]: 2026-01-31 07:36:51.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:36:51 np0005603609 nova_compute[221550]: 2026-01-31 07:36:51.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:36:51 np0005603609 nova_compute[221550]: 2026-01-31 07:36:51.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:36:51 np0005603609 nova_compute[221550]: 2026-01-31 07:36:51.762 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:52 np0005603609 nova_compute[221550]: 2026-01-31 07:36:52.113 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-566c132c-f694-42ee-8c41-84769367d6f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:36:52 np0005603609 nova_compute[221550]: 2026-01-31 07:36:52.114 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-566c132c-f694-42ee-8c41-84769367d6f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:36:52 np0005603609 nova_compute[221550]: 2026-01-31 07:36:52.114 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:36:52 np0005603609 nova_compute[221550]: 2026-01-31 07:36:52.114 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 566c132c-f694-42ee-8c41-84769367d6f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:36:52 np0005603609 nova_compute[221550]: 2026-01-31 07:36:52.318 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:36:52 np0005603609 nova_compute[221550]: 2026-01-31 07:36:52.637 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:36:52 np0005603609 nova_compute[221550]: 2026-01-31 07:36:52.655 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-566c132c-f694-42ee-8c41-84769367d6f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:36:52 np0005603609 nova_compute[221550]: 2026-01-31 07:36:52.656 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:36:52 np0005603609 nova_compute[221550]: 2026-01-31 07:36:52.656 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:36:52 np0005603609 nova_compute[221550]: 2026-01-31 07:36:52.676 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:52 np0005603609 nova_compute[221550]: 2026-01-31 07:36:52.676 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:52 np0005603609 nova_compute[221550]: 2026-01-31 07:36:52.676 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:52 np0005603609 nova_compute[221550]: 2026-01-31 07:36:52.676 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:36:52 np0005603609 nova_compute[221550]: 2026-01-31 07:36:52.677 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:36:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3600492494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:36:53 np0005603609 nova_compute[221550]: 2026-01-31 07:36:53.094 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:53 np0005603609 nova_compute[221550]: 2026-01-31 07:36:53.260 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:36:53 np0005603609 nova_compute[221550]: 2026-01-31 07:36:53.261 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:36:53 np0005603609 nova_compute[221550]: 2026-01-31 07:36:53.266 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:36:53 np0005603609 nova_compute[221550]: 2026-01-31 07:36:53.266 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:36:53 np0005603609 nova_compute[221550]: 2026-01-31 07:36:53.395 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:36:53 np0005603609 nova_compute[221550]: 2026-01-31 07:36:53.396 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4497MB free_disk=20.813800811767578GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:36:53 np0005603609 nova_compute[221550]: 2026-01-31 07:36:53.396 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:53 np0005603609 nova_compute[221550]: 2026-01-31 07:36:53.396 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:53 np0005603609 nova_compute[221550]: 2026-01-31 07:36:53.488 221554 INFO nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updating resource usage from migration 4a1b0c4a-1bfe-448c-bdfa-5cee17568893#033[00m
Jan 31 02:36:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:36:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:53.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:36:53 np0005603609 nova_compute[221550]: 2026-01-31 07:36:53.537 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 566c132c-f694-42ee-8c41-84769367d6f5 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:36:53 np0005603609 nova_compute[221550]: 2026-01-31 07:36:53.538 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Migration 4a1b0c4a-1bfe-448c-bdfa-5cee17568893 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 02:36:53 np0005603609 nova_compute[221550]: 2026-01-31 07:36:53.538 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:36:53 np0005603609 nova_compute[221550]: 2026-01-31 07:36:53.538 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:36:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:36:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:53.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:36:53 np0005603609 nova_compute[221550]: 2026-01-31 07:36:53.690 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:36:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/483490000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:36:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:36:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/654419686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:36:54 np0005603609 nova_compute[221550]: 2026-01-31 07:36:54.115 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:36:54 np0005603609 nova_compute[221550]: 2026-01-31 07:36:54.119 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:36:54 np0005603609 nova_compute[221550]: 2026-01-31 07:36:54.138 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:36:54 np0005603609 nova_compute[221550]: 2026-01-31 07:36:54.187 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:36:54 np0005603609 nova_compute[221550]: 2026-01-31 07:36:54.187 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:55 np0005603609 nova_compute[221550]: 2026-01-31 07:36:55.190 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:36:55 np0005603609 nova_compute[221550]: 2026-01-31 07:36:55.191 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:36:55 np0005603609 nova_compute[221550]: 2026-01-31 07:36:55.191 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:36:55 np0005603609 nova_compute[221550]: 2026-01-31 07:36:55.192 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:36:55 np0005603609 nova_compute[221550]: 2026-01-31 07:36:55.192 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:36:55 np0005603609 nova_compute[221550]: 2026-01-31 07:36:55.463 221554 DEBUG nova.compute.manager [req-3789df96-7968-46bc-b8e9-b5531265ea7a req-6bf254a5-2a81-4387-81eb-20a46c8fcfa8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:36:55 np0005603609 nova_compute[221550]: 2026-01-31 07:36:55.464 221554 DEBUG oslo_concurrency.lockutils [req-3789df96-7968-46bc-b8e9-b5531265ea7a req-6bf254a5-2a81-4387-81eb-20a46c8fcfa8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:55 np0005603609 nova_compute[221550]: 2026-01-31 07:36:55.464 221554 DEBUG oslo_concurrency.lockutils [req-3789df96-7968-46bc-b8e9-b5531265ea7a req-6bf254a5-2a81-4387-81eb-20a46c8fcfa8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:55 np0005603609 nova_compute[221550]: 2026-01-31 07:36:55.464 221554 DEBUG oslo_concurrency.lockutils [req-3789df96-7968-46bc-b8e9-b5531265ea7a req-6bf254a5-2a81-4387-81eb-20a46c8fcfa8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:55 np0005603609 nova_compute[221550]: 2026-01-31 07:36:55.464 221554 DEBUG nova.compute.manager [req-3789df96-7968-46bc-b8e9-b5531265ea7a req-6bf254a5-2a81-4387-81eb-20a46c8fcfa8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:36:55 np0005603609 nova_compute[221550]: 2026-01-31 07:36:55.464 221554 DEBUG nova.compute.manager [req-3789df96-7968-46bc-b8e9-b5531265ea7a req-6bf254a5-2a81-4387-81eb-20a46c8fcfa8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:36:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:55.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:55.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:36:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.032 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.440 221554 INFO nova.compute.manager [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Took 6.71 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.#033[00m
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.440 221554 DEBUG nova.compute.manager [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.467 221554 DEBUG nova.compute.manager [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp0wegpadh',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='71265e55-f168-471c-80bc-80b49177a637',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(4a1b0c4a-1bfe-448c-bdfa-5cee17568893),old_vol_attachment_ids={21ac0cb5-f889-4135-9b17-5debc0b9246e='813cfeab-a1f5-44af-8928-fed3ed24d043'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.470 221554 DEBUG nova.objects.instance [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lazy-loading 'migration_context' on Instance uuid 71265e55-f168-471c-80bc-80b49177a637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.471 221554 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.472 221554 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.472 221554 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.493 221554 DEBUG nova.virt.libvirt.migration [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Find same serial number: pos=1, serial=21ac0cb5-f889-4135-9b17-5debc0b9246e _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242#033[00m
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.494 221554 DEBUG nova.virt.libvirt.vif [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:36:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-596053430',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-596053430',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:36:20Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='29d136be5e384689a95acd607131dfd0',ramdisk_id='',reservation_id='r-w7qr5y3u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test
-1421195096',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:36:44Z,user_data=None,user_id='ea44c45fe7df4f36b5c722fbfc214f2e',uuid=71265e55-f168-471c-80bc-80b49177a637,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.494 221554 DEBUG nova.network.os_vif_util [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converting VIF {"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.495 221554 DEBUG nova.network.os_vif_util [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.495 221554 DEBUG nova.virt.libvirt.migration [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updating guest XML with vif config: <interface type="ethernet">
Jan 31 02:36:56 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:eb:c1:cc"/>
Jan 31 02:36:56 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 02:36:56 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:36:56 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 02:36:56 np0005603609 nova_compute[221550]:  <target dev="tap84bfd4cb-81"/>
Jan 31 02:36:56 np0005603609 nova_compute[221550]: </interface>
Jan 31 02:36:56 np0005603609 nova_compute[221550]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.496 221554 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Jan 31 02:36:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:36:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:36:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:36:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:36:56 np0005603609 podman[232293]: 2026-01-31 07:36:56.72899539 +0000 UTC m=+0.048980837 container create f11bb701dc71f0e5c63da58120321df9159ae07ab3154dd6651e22c178f5d029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 02:36:56 np0005603609 systemd[1]: Started libpod-conmon-f11bb701dc71f0e5c63da58120321df9159ae07ab3154dd6651e22c178f5d029.scope.
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.764 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:56 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:36:56 np0005603609 podman[232293]: 2026-01-31 07:36:56.702587965 +0000 UTC m=+0.022573512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:36:56 np0005603609 podman[232293]: 2026-01-31 07:36:56.803048418 +0000 UTC m=+0.123033905 container init f11bb701dc71f0e5c63da58120321df9159ae07ab3154dd6651e22c178f5d029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:36:56 np0005603609 podman[232293]: 2026-01-31 07:36:56.809116526 +0000 UTC m=+0.129101973 container start f11bb701dc71f0e5c63da58120321df9159ae07ab3154dd6651e22c178f5d029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 02:36:56 np0005603609 epic_rhodes[232310]: 167 167
Jan 31 02:36:56 np0005603609 systemd[1]: libpod-f11bb701dc71f0e5c63da58120321df9159ae07ab3154dd6651e22c178f5d029.scope: Deactivated successfully.
Jan 31 02:36:56 np0005603609 conmon[232310]: conmon f11bb701dc71f0e5c63d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f11bb701dc71f0e5c63da58120321df9159ae07ab3154dd6651e22c178f5d029.scope/container/memory.events
Jan 31 02:36:56 np0005603609 podman[232293]: 2026-01-31 07:36:56.820033743 +0000 UTC m=+0.140019190 container attach f11bb701dc71f0e5c63da58120321df9159ae07ab3154dd6651e22c178f5d029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 02:36:56 np0005603609 podman[232293]: 2026-01-31 07:36:56.820368272 +0000 UTC m=+0.140353729 container died f11bb701dc71f0e5c63da58120321df9159ae07ab3154dd6651e22c178f5d029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:36:56 np0005603609 systemd[1]: var-lib-containers-storage-overlay-e702a0de9a1c325e9dbdef38115914a9c43c0de3e40cf5899022beb1e15e3879-merged.mount: Deactivated successfully.
Jan 31 02:36:56 np0005603609 podman[232293]: 2026-01-31 07:36:56.902830865 +0000 UTC m=+0.222816342 container remove f11bb701dc71f0e5c63da58120321df9159ae07ab3154dd6651e22c178f5d029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 02:36:56 np0005603609 systemd[1]: libpod-conmon-f11bb701dc71f0e5c63da58120321df9159ae07ab3154dd6651e22c178f5d029.scope: Deactivated successfully.
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.974 221554 DEBUG nova.virt.libvirt.migration [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 31 02:36:56 np0005603609 nova_compute[221550]: 2026-01-31 07:36:56.975 221554 INFO nova.virt.libvirt.migration [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Jan 31 02:36:57 np0005603609 podman[232335]: 2026-01-31 07:36:57.058672671 +0000 UTC m=+0.053737344 container create 5dd9f1849f1d20e373ddb67192bc98575ee4fbb288f19a0977df6e2daaeceafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:36:57 np0005603609 systemd[1]: Started libpod-conmon-5dd9f1849f1d20e373ddb67192bc98575ee4fbb288f19a0977df6e2daaeceafa.scope.
Jan 31 02:36:57 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:36:57 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cac66650e214beae94ba19ec0ebfb1adbdfe6e100f639652b871f0392a49709/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:57 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cac66650e214beae94ba19ec0ebfb1adbdfe6e100f639652b871f0392a49709/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:57 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cac66650e214beae94ba19ec0ebfb1adbdfe6e100f639652b871f0392a49709/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:57 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cac66650e214beae94ba19ec0ebfb1adbdfe6e100f639652b871f0392a49709/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:57 np0005603609 podman[232335]: 2026-01-31 07:36:57.030340918 +0000 UTC m=+0.025405681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:36:57 np0005603609 nova_compute[221550]: 2026-01-31 07:36:57.136 221554 INFO nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Jan 31 02:36:57 np0005603609 podman[232335]: 2026-01-31 07:36:57.151029287 +0000 UTC m=+0.146093980 container init 5dd9f1849f1d20e373ddb67192bc98575ee4fbb288f19a0977df6e2daaeceafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_austin, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:36:57 np0005603609 podman[232335]: 2026-01-31 07:36:57.156692685 +0000 UTC m=+0.151757358 container start 5dd9f1849f1d20e373ddb67192bc98575ee4fbb288f19a0977df6e2daaeceafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 02:36:57 np0005603609 podman[232335]: 2026-01-31 07:36:57.168204436 +0000 UTC m=+0.163269149 container attach 5dd9f1849f1d20e373ddb67192bc98575ee4fbb288f19a0977df6e2daaeceafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_austin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:36:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:57.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:57.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:57 np0005603609 nova_compute[221550]: 2026-01-31 07:36:57.658 221554 DEBUG nova.virt.libvirt.migration [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 31 02:36:57 np0005603609 nova_compute[221550]: 2026-01-31 07:36:57.659 221554 DEBUG nova.virt.libvirt.migration [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 31 02:36:57 np0005603609 nova_compute[221550]: 2026-01-31 07:36:57.803 221554 DEBUG nova.compute.manager [req-e4f80a9e-6127-49e7-afdd-d87f442bda07 req-f5e2c7f8-6cc1-4bef-b017-9bcd446a538e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:36:57 np0005603609 nova_compute[221550]: 2026-01-31 07:36:57.804 221554 DEBUG oslo_concurrency.lockutils [req-e4f80a9e-6127-49e7-afdd-d87f442bda07 req-f5e2c7f8-6cc1-4bef-b017-9bcd446a538e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:57 np0005603609 nova_compute[221550]: 2026-01-31 07:36:57.804 221554 DEBUG oslo_concurrency.lockutils [req-e4f80a9e-6127-49e7-afdd-d87f442bda07 req-f5e2c7f8-6cc1-4bef-b017-9bcd446a538e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:57 np0005603609 nova_compute[221550]: 2026-01-31 07:36:57.805 221554 DEBUG oslo_concurrency.lockutils [req-e4f80a9e-6127-49e7-afdd-d87f442bda07 req-f5e2c7f8-6cc1-4bef-b017-9bcd446a538e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:57 np0005603609 nova_compute[221550]: 2026-01-31 07:36:57.805 221554 DEBUG nova.compute.manager [req-e4f80a9e-6127-49e7-afdd-d87f442bda07 req-f5e2c7f8-6cc1-4bef-b017-9bcd446a538e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:36:57 np0005603609 nova_compute[221550]: 2026-01-31 07:36:57.806 221554 WARNING nova.compute.manager [req-e4f80a9e-6127-49e7-afdd-d87f442bda07 req-f5e2c7f8-6cc1-4bef-b017-9bcd446a538e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received unexpected event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with vm_state active and task_state migrating.#033[00m
Jan 31 02:36:57 np0005603609 nova_compute[221550]: 2026-01-31 07:36:57.806 221554 DEBUG nova.compute.manager [req-e4f80a9e-6127-49e7-afdd-d87f442bda07 req-f5e2c7f8-6cc1-4bef-b017-9bcd446a538e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-changed-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:36:57 np0005603609 nova_compute[221550]: 2026-01-31 07:36:57.807 221554 DEBUG nova.compute.manager [req-e4f80a9e-6127-49e7-afdd-d87f442bda07 req-f5e2c7f8-6cc1-4bef-b017-9bcd446a538e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Refreshing instance network info cache due to event network-changed-84bfd4cb-8188-4fde-bca2-fdb0a732119f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:36:57 np0005603609 nova_compute[221550]: 2026-01-31 07:36:57.807 221554 DEBUG oslo_concurrency.lockutils [req-e4f80a9e-6127-49e7-afdd-d87f442bda07 req-f5e2c7f8-6cc1-4bef-b017-9bcd446a538e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:36:57 np0005603609 nova_compute[221550]: 2026-01-31 07:36:57.808 221554 DEBUG oslo_concurrency.lockutils [req-e4f80a9e-6127-49e7-afdd-d87f442bda07 req-f5e2c7f8-6cc1-4bef-b017-9bcd446a538e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:36:57 np0005603609 nova_compute[221550]: 2026-01-31 07:36:57.808 221554 DEBUG nova.network.neutron [req-e4f80a9e-6127-49e7-afdd-d87f442bda07 req-f5e2c7f8-6cc1-4bef-b017-9bcd446a538e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Refreshing network info cache for port 84bfd4cb-8188-4fde-bca2-fdb0a732119f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:36:57 np0005603609 nova_compute[221550]: 2026-01-31 07:36:57.982 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845017.9819822, 71265e55-f168-471c-80bc-80b49177a637 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:36:57 np0005603609 nova_compute[221550]: 2026-01-31 07:36:57.982 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:36:58 np0005603609 friendly_austin[232352]: [
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:    {
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:        "available": false,
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:        "ceph_device": false,
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:        "lsm_data": {},
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:        "lvs": [],
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:        "path": "/dev/sr0",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:        "rejected_reasons": [
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "Insufficient space (<5GB)",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "Has a FileSystem"
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:        ],
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:        "sys_api": {
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "actuators": null,
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "device_nodes": "sr0",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "devname": "sr0",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "human_readable_size": "482.00 KB",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "id_bus": "ata",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "model": "QEMU DVD-ROM",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "nr_requests": "2",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "parent": "/dev/sr0",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "partitions": {},
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "path": "/dev/sr0",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "removable": "1",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "rev": "2.5+",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "ro": "0",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "rotational": "1",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "sas_address": "",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "sas_device_handle": "",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "scheduler_mode": "mq-deadline",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "sectors": 0,
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "sectorsize": "2048",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "size": 493568.0,
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "support_discard": "2048",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "type": "disk",
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:            "vendor": "QEMU"
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:        }
Jan 31 02:36:58 np0005603609 friendly_austin[232352]:    }
Jan 31 02:36:58 np0005603609 friendly_austin[232352]: ]
Jan 31 02:36:58 np0005603609 nova_compute[221550]: 2026-01-31 07:36:58.146 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:36:58 np0005603609 systemd[1]: libpod-5dd9f1849f1d20e373ddb67192bc98575ee4fbb288f19a0977df6e2daaeceafa.scope: Deactivated successfully.
Jan 31 02:36:58 np0005603609 podman[232335]: 2026-01-31 07:36:58.16465635 +0000 UTC m=+1.159721033 container died 5dd9f1849f1d20e373ddb67192bc98575ee4fbb288f19a0977df6e2daaeceafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:36:58 np0005603609 kernel: tap84bfd4cb-81 (unregistering): left promiscuous mode
Jan 31 02:36:58 np0005603609 NetworkManager[49064]: <info>  [1769845018.3679] device (tap84bfd4cb-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:36:58 np0005603609 ovn_controller[130359]: 2026-01-31T07:36:58Z|00093|binding|INFO|Releasing lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f from this chassis (sb_readonly=0)
Jan 31 02:36:58 np0005603609 ovn_controller[130359]: 2026-01-31T07:36:58Z|00094|binding|INFO|Setting lport 84bfd4cb-8188-4fde-bca2-fdb0a732119f down in Southbound
Jan 31 02:36:58 np0005603609 ovn_controller[130359]: 2026-01-31T07:36:58Z|00095|binding|INFO|Removing iface tap84bfd4cb-81 ovn-installed in OVS
Jan 31 02:36:58 np0005603609 nova_compute[221550]: 2026-01-31 07:36:58.377 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:58 np0005603609 nova_compute[221550]: 2026-01-31 07:36:58.388 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:58.391 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:c1:cc 10.100.0.8'], port_security=['fa:16:3e:eb:c1:cc 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'c06836a7-1d29-4815-800d-4e6d21a36ae0'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '71265e55-f168-471c-80bc-80b49177a637', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '18', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f42f93-f94d-4170-883d-d45cddf5fdad, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=84bfd4cb-8188-4fde-bca2-fdb0a732119f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:36:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:58.393 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 84bfd4cb-8188-4fde-bca2-fdb0a732119f in datapath e4d1862b-2abc-4d60-bc48-19a5318038f4 unbound from our chassis#033[00m
Jan 31 02:36:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:58.394 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4d1862b-2abc-4d60-bc48-19a5318038f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:36:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:58.395 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cab51325-f3ed-41e8-b190-c0bd9a35fef2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:36:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:58.396 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 namespace which is not needed anymore#033[00m
Jan 31 02:36:58 np0005603609 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Jan 31 02:36:58 np0005603609 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001a.scope: Consumed 2.916s CPU time.
Jan 31 02:36:58 np0005603609 systemd-machined[190912]: Machine qemu-14-instance-0000001a terminated.
Jan 31 02:36:58 np0005603609 systemd[1]: var-lib-containers-storage-overlay-8cac66650e214beae94ba19ec0ebfb1adbdfe6e100f639652b871f0392a49709-merged.mount: Deactivated successfully.
Jan 31 02:36:58 np0005603609 virtqemud[221292]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-21ac0cb5-f889-4135-9b17-5debc0b9246e: No such file or directory
Jan 31 02:36:58 np0005603609 virtqemud[221292]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-21ac0cb5-f889-4135-9b17-5debc0b9246e: No such file or directory
Jan 31 02:36:58 np0005603609 nova_compute[221550]: 2026-01-31 07:36:58.541 221554 DEBUG nova.virt.libvirt.guest [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Jan 31 02:36:58 np0005603609 nova_compute[221550]: 2026-01-31 07:36:58.542 221554 INFO nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Migration operation has completed#033[00m
Jan 31 02:36:58 np0005603609 nova_compute[221550]: 2026-01-31 07:36:58.542 221554 INFO nova.compute.manager [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] _post_live_migration() is started..#033[00m
Jan 31 02:36:58 np0005603609 nova_compute[221550]: 2026-01-31 07:36:58.543 221554 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Jan 31 02:36:58 np0005603609 nova_compute[221550]: 2026-01-31 07:36:58.544 221554 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Jan 31 02:36:58 np0005603609 nova_compute[221550]: 2026-01-31 07:36:58.544 221554 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Jan 31 02:36:58 np0005603609 nova_compute[221550]: 2026-01-31 07:36:58.666 221554 DEBUG nova.compute.manager [req-3ba88c20-5bc2-47a5-a18d-5506e74f4ec4 req-1f2e054d-53d1-4afd-9845-be3ebb06742a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:36:58 np0005603609 nova_compute[221550]: 2026-01-31 07:36:58.668 221554 DEBUG oslo_concurrency.lockutils [req-3ba88c20-5bc2-47a5-a18d-5506e74f4ec4 req-1f2e054d-53d1-4afd-9845-be3ebb06742a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:58 np0005603609 nova_compute[221550]: 2026-01-31 07:36:58.669 221554 DEBUG oslo_concurrency.lockutils [req-3ba88c20-5bc2-47a5-a18d-5506e74f4ec4 req-1f2e054d-53d1-4afd-9845-be3ebb06742a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:58 np0005603609 nova_compute[221550]: 2026-01-31 07:36:58.669 221554 DEBUG oslo_concurrency.lockutils [req-3ba88c20-5bc2-47a5-a18d-5506e74f4ec4 req-1f2e054d-53d1-4afd-9845-be3ebb06742a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:58 np0005603609 nova_compute[221550]: 2026-01-31 07:36:58.670 221554 DEBUG nova.compute.manager [req-3ba88c20-5bc2-47a5-a18d-5506e74f4ec4 req-1f2e054d-53d1-4afd-9845-be3ebb06742a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:36:58 np0005603609 nova_compute[221550]: 2026-01-31 07:36:58.671 221554 DEBUG nova.compute.manager [req-3ba88c20-5bc2-47a5-a18d-5506e74f4ec4 req-1f2e054d-53d1-4afd-9845-be3ebb06742a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:36:58 np0005603609 podman[232335]: 2026-01-31 07:36:58.895258211 +0000 UTC m=+1.890322914 container remove 5dd9f1849f1d20e373ddb67192bc98575ee4fbb288f19a0977df6e2daaeceafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_austin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:36:58 np0005603609 systemd[1]: libpod-conmon-5dd9f1849f1d20e373ddb67192bc98575ee4fbb288f19a0977df6e2daaeceafa.scope: Deactivated successfully.
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.163 221554 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.164 221554 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.187 221554 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.286 221554 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.287 221554 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.295 221554 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.296 221554 INFO nova.compute.claims [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:36:59 np0005603609 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[231801]: [NOTICE]   (231805) : haproxy version is 2.8.14-c23fe91
Jan 31 02:36:59 np0005603609 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[231801]: [NOTICE]   (231805) : path to executable is /usr/sbin/haproxy
Jan 31 02:36:59 np0005603609 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[231801]: [WARNING]  (231805) : Exiting Master process...
Jan 31 02:36:59 np0005603609 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[231801]: [ALERT]    (231805) : Current worker (231807) exited with code 143 (Terminated)
Jan 31 02:36:59 np0005603609 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[231801]: [WARNING]  (231805) : All workers exited. Exiting... (0)
Jan 31 02:36:59 np0005603609 systemd[1]: libpod-f1b9dbc1bbd7c5eff5ff0669d6b134c1b8a5f9dcbc9963c386576b04550e3007.scope: Deactivated successfully.
Jan 31 02:36:59 np0005603609 conmon[231801]: conmon f1b9dbc1bbd7c5eff5ff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f1b9dbc1bbd7c5eff5ff0669d6b134c1b8a5f9dcbc9963c386576b04550e3007.scope/container/memory.events
Jan 31 02:36:59 np0005603609 podman[233504]: 2026-01-31 07:36:59.317123703 +0000 UTC m=+0.291727485 container died f1b9dbc1bbd7c5eff5ff0669d6b134c1b8a5f9dcbc9963c386576b04550e3007 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.396 221554 DEBUG nova.network.neutron [req-e4f80a9e-6127-49e7-afdd-d87f442bda07 req-f5e2c7f8-6cc1-4bef-b017-9bcd446a538e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updated VIF entry in instance network info cache for port 84bfd4cb-8188-4fde-bca2-fdb0a732119f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.397 221554 DEBUG nova.network.neutron [req-e4f80a9e-6127-49e7-afdd-d87f442bda07 req-f5e2c7f8-6cc1-4bef-b017-9bcd446a538e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Updating instance_info_cache with network_info: [{"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.437 221554 DEBUG oslo_concurrency.lockutils [req-e4f80a9e-6127-49e7-afdd-d87f442bda07 req-f5e2c7f8-6cc1-4bef-b017-9bcd446a538e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-71265e55-f168-471c-80bc-80b49177a637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:36:59 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f1b9dbc1bbd7c5eff5ff0669d6b134c1b8a5f9dcbc9963c386576b04550e3007-userdata-shm.mount: Deactivated successfully.
Jan 31 02:36:59 np0005603609 systemd[1]: var-lib-containers-storage-overlay-55044e5a134e5173d7d3a772c21f3c459a883bfac78b29b5f66b373d065ccc3a-merged.mount: Deactivated successfully.
Jan 31 02:36:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:59.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:59 np0005603609 podman[233504]: 2026-01-31 07:36:59.545214704 +0000 UTC m=+0.519818456 container cleanup f1b9dbc1bbd7c5eff5ff0669d6b134c1b8a5f9dcbc9963c386576b04550e3007 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 02:36:59 np0005603609 systemd[1]: libpod-conmon-f1b9dbc1bbd7c5eff5ff0669d6b134c1b8a5f9dcbc9963c386576b04550e3007.scope: Deactivated successfully.
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.593 221554 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:36:59 np0005603609 podman[233535]: 2026-01-31 07:36:59.623565227 +0000 UTC m=+0.060060648 container remove f1b9dbc1bbd7c5eff5ff0669d6b134c1b8a5f9dcbc9963c386576b04550e3007 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:36:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:59.630 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7c228322-0068-4a90-a797-a50419354b5d]: (4, ('Sat Jan 31 07:36:59 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 (f1b9dbc1bbd7c5eff5ff0669d6b134c1b8a5f9dcbc9963c386576b04550e3007)\nf1b9dbc1bbd7c5eff5ff0669d6b134c1b8a5f9dcbc9963c386576b04550e3007\nSat Jan 31 07:36:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 (f1b9dbc1bbd7c5eff5ff0669d6b134c1b8a5f9dcbc9963c386576b04550e3007)\nf1b9dbc1bbd7c5eff5ff0669d6b134c1b8a5f9dcbc9963c386576b04550e3007\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:36:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:59.632 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b48f847c-76e1-4ac9-b90e-c41b4643010f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:36:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:59.633 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d1862b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:36:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:36:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:59.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.637 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:59 np0005603609 kernel: tape4d1862b-20: left promiscuous mode
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.648 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:59.651 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7d729d21-c964-49fe-860b-c43636b444bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:36:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:59.670 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ad7922-4f11-412b-8ca4-ef7a9939913b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:36:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:59.671 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[19cbda28-370e-49d0-aa31-39bcb56f5c39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:36:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:59.689 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[40a5c43d-a29e-46d0-b15f-fb1de7ad3487]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523953, 'reachable_time': 22841, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 233556, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:36:59 np0005603609 systemd[1]: run-netns-ovnmeta\x2de4d1862b\x2d2abc\x2d4d60\x2dbc48\x2d19a5318038f4.mount: Deactivated successfully.
Jan 31 02:36:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:59.692 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:36:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:36:59.692 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[be89c977-99de-46fd-8b7e-f5b94c18e36d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.791 221554 DEBUG nova.network.neutron [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Activated binding for port 84bfd4cb-8188-4fde-bca2-fdb0a732119f and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.792 221554 DEBUG nova.compute.manager [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.792 221554 DEBUG nova.virt.libvirt.vif [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:36:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-596053430',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-596053430',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:36:20Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='29d136be5e384689a95acd607131dfd0',ramdisk_id='',reservation_id='r-w7qr5y3u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test
-1421195096',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:36:46Z,user_data=None,user_id='ea44c45fe7df4f36b5c722fbfc214f2e',uuid=71265e55-f168-471c-80bc-80b49177a637,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.792 221554 DEBUG nova.network.os_vif_util [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converting VIF {"id": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "address": "fa:16:3e:eb:c1:cc", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84bfd4cb-81", "ovs_interfaceid": "84bfd4cb-8188-4fde-bca2-fdb0a732119f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.793 221554 DEBUG nova.network.os_vif_util [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.794 221554 DEBUG os_vif [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.795 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.795 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84bfd4cb-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.796 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.799 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.801 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.804 221554 INFO os_vif [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:c1:cc,bridge_name='br-int',has_traffic_filtering=True,id=84bfd4cb-8188-4fde-bca2-fdb0a732119f,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84bfd4cb-81')#033[00m
Jan 31 02:36:59 np0005603609 nova_compute[221550]: 2026-01-31 07:36:59.804 221554 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:36:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:36:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:36:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:36:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:36:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:36:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:37:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:37:00 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/351743600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.064 221554 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.071 221554 DEBUG nova.compute.manager [req-b31ce65e-39ee-4698-a5a5-dc65d03340df req-e42739b9-8abd-4e28-8576-475fd701c84d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.072 221554 DEBUG oslo_concurrency.lockutils [req-b31ce65e-39ee-4698-a5a5-dc65d03340df req-e42739b9-8abd-4e28-8576-475fd701c84d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.072 221554 DEBUG oslo_concurrency.lockutils [req-b31ce65e-39ee-4698-a5a5-dc65d03340df req-e42739b9-8abd-4e28-8576-475fd701c84d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.072 221554 DEBUG oslo_concurrency.lockutils [req-b31ce65e-39ee-4698-a5a5-dc65d03340df req-e42739b9-8abd-4e28-8576-475fd701c84d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.072 221554 DEBUG nova.compute.manager [req-b31ce65e-39ee-4698-a5a5-dc65d03340df req-e42739b9-8abd-4e28-8576-475fd701c84d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.072 221554 DEBUG nova.compute.manager [req-b31ce65e-39ee-4698-a5a5-dc65d03340df req-e42739b9-8abd-4e28-8576-475fd701c84d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-unplugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.073 221554 DEBUG nova.compute.provider_tree [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.086 221554 DEBUG nova.scheduler.client.report [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.113 221554 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.114 221554 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.115 221554 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.115 221554 DEBUG nova.compute.manager [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.115 221554 INFO nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Deleting instance files /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637_del
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.116 221554 INFO nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Deletion of /var/lib/nova/instances/71265e55-f168-471c-80bc-80b49177a637_del complete
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.136 221554 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "d9753a5d-0d10-4222-ad74-25f55ff7027f" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.136 221554 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "d9753a5d-0d10-4222-ad74-25f55ff7027f" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.181 221554 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] No node specified, defaulting to compute-1.ctlplane.example.com _get_nodename /usr/lib/python3.9/site-packages/nova/compute/manager.py:10505
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.232 221554 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "d9753a5d-0d10-4222-ad74-25f55ff7027f" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.233 221554 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.305 221554 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.306 221554 DEBUG nova.network.neutron [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.336 221554 INFO nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.360 221554 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.452 221554 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.453 221554 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.453 221554 INFO nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Creating image(s)
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.477 221554 DEBUG nova.storage.rbd_utils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.502 221554 DEBUG nova.storage.rbd_utils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.529 221554 DEBUG nova.storage.rbd_utils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.533 221554 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.580 221554 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.581 221554 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.582 221554 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.583 221554 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.614 221554 DEBUG nova.storage.rbd_utils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.618 221554 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.810 221554 DEBUG nova.compute.manager [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.812 221554 DEBUG oslo_concurrency.lockutils [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.812 221554 DEBUG oslo_concurrency.lockutils [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.813 221554 DEBUG oslo_concurrency.lockutils [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.813 221554 DEBUG nova.compute.manager [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.814 221554 WARNING nova.compute.manager [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received unexpected event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with vm_state active and task_state migrating.
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.814 221554 DEBUG nova.compute.manager [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.815 221554 DEBUG oslo_concurrency.lockutils [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.815 221554 DEBUG oslo_concurrency.lockutils [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.816 221554 DEBUG oslo_concurrency.lockutils [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.816 221554 DEBUG nova.compute.manager [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.817 221554 WARNING nova.compute.manager [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received unexpected event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with vm_state active and task_state migrating.
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.817 221554 DEBUG nova.compute.manager [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.818 221554 DEBUG oslo_concurrency.lockutils [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.819 221554 DEBUG oslo_concurrency.lockutils [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.819 221554 DEBUG oslo_concurrency.lockutils [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.820 221554 DEBUG nova.compute.manager [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.821 221554 WARNING nova.compute.manager [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received unexpected event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with vm_state active and task_state migrating.
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.821 221554 DEBUG nova.compute.manager [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.822 221554 DEBUG oslo_concurrency.lockutils [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.822 221554 DEBUG oslo_concurrency.lockutils [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.822 221554 DEBUG oslo_concurrency.lockutils [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.822 221554 DEBUG nova.compute.manager [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] No waiting events found dispatching network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.822 221554 WARNING nova.compute.manager [req-c190e666-ddbb-4331-a787-417be6076189 req-39a0a444-7b49-4c7a-bfaf-4674fa4a9470 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Received unexpected event network-vif-plugged-84bfd4cb-8188-4fde-bca2-fdb0a732119f for instance with vm_state active and task_state migrating.
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.835 221554 DEBUG nova.network.neutron [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.836 221554 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 02:37:00 np0005603609 nova_compute[221550]: 2026-01-31 07:37:00.919 221554 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.000 221554 DEBUG nova.storage.rbd_utils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] resizing rbd image ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.056 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.155 221554 DEBUG nova.objects.instance [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lazy-loading 'migration_context' on Instance uuid ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.184 221554 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.185 221554 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Ensure instance console log exists: /var/lib/nova/instances/ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.186 221554 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.186 221554 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.186 221554 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.187 221554 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.192 221554 WARNING nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.198 221554 DEBUG nova.virt.libvirt.host [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.199 221554 DEBUG nova.virt.libvirt.host [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.203 221554 DEBUG nova.virt.libvirt.host [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.204 221554 DEBUG nova.virt.libvirt.host [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.205 221554 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.205 221554 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.205 221554 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.205 221554 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.206 221554 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.206 221554 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.206 221554 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.206 221554 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.207 221554 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.207 221554 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.207 221554 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.207 221554 DEBUG nova.virt.hardware [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.210 221554 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:37:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:01.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:01.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:37:01 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3581421991' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.671 221554 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.709 221554 DEBUG nova.storage.rbd_utils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:37:01 np0005603609 nova_compute[221550]: 2026-01-31 07:37:01.716 221554 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:37:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:37:02 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2162880359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:37:02 np0005603609 nova_compute[221550]: 2026-01-31 07:37:02.172 221554 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:02 np0005603609 nova_compute[221550]: 2026-01-31 07:37:02.174 221554 DEBUG nova.objects.instance [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lazy-loading 'pci_devices' on Instance uuid ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:37:02 np0005603609 nova_compute[221550]: 2026-01-31 07:37:02.202 221554 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  <uuid>ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd</uuid>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  <name>instance-0000001f</name>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServersOnMultiNodesTest-server-459196044-2</nova:name>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:37:01</nova:creationTime>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:37:02 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:        <nova:user uuid="741e8133b32342e083b6dd5f0e316abf">tempest-ServersOnMultiNodesTest-1893229170-project-member</nova:user>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:        <nova:project uuid="b2c9f3f1d94b49ae835ac14aae70bd73">tempest-ServersOnMultiNodesTest-1893229170</nova:project>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <entry name="serial">ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd</entry>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <entry name="uuid">ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd</entry>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd_disk">
Jan 31 02:37:02 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:37:02 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd_disk.config">
Jan 31 02:37:02 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:37:02 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd/console.log" append="off"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:37:02 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:37:02 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:37:02 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:37:02 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:37:02 np0005603609 nova_compute[221550]: 2026-01-31 07:37:02.303 221554 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:37:02 np0005603609 nova_compute[221550]: 2026-01-31 07:37:02.304 221554 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:37:02 np0005603609 nova_compute[221550]: 2026-01-31 07:37:02.305 221554 INFO nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Using config drive#033[00m
Jan 31 02:37:02 np0005603609 nova_compute[221550]: 2026-01-31 07:37:02.334 221554 DEBUG nova.storage.rbd_utils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:37:02 np0005603609 nova_compute[221550]: 2026-01-31 07:37:02.621 221554 INFO nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Creating config drive at /var/lib/nova/instances/ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd/disk.config#033[00m
Jan 31 02:37:02 np0005603609 nova_compute[221550]: 2026-01-31 07:37:02.625 221554 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpz6cp67j2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:37:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:37:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.3 total, 600.0 interval#012Cumulative writes: 4941 writes, 26K keys, 4941 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s#012Cumulative WAL: 4941 writes, 4941 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1542 writes, 7644 keys, 1542 commit groups, 1.0 writes per commit group, ingest: 16.06 MB, 0.03 MB/s#012Interval WAL: 1542 writes, 1542 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     76.2      0.40              0.07        14    0.029       0      0       0.0       0.0#012  L6      1/0    8.58 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5     83.5     69.3      1.55              0.24        13    0.119     61K   6830       0.0       0.0#012 Sum      1/0    8.58 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5     66.3     70.7      1.95              0.31        27    0.072     61K   6830       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.1     72.2     73.8      0.68              0.12        10    0.068     25K   2532       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     83.5     69.3      1.55              0.24        13    0.119     61K   6830       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     82.8      0.37              0.07        13    0.028       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.3 total, 600.0 interval#012Flush(GB): cumulative 0.030, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.08 MB/s write, 0.13 GB read, 0.07 MB/s read, 2.0 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619ad8ed1f0#2 capacity: 304.00 MB usage: 12.19 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000103 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(704,11.69 MB,3.84423%) FilterBlock(27,177.36 KB,0.0569745%) IndexBlock(27,333.45 KB,0.107118%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 02:37:02 np0005603609 nova_compute[221550]: 2026-01-31 07:37:02.744 221554 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpz6cp67j2" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:02 np0005603609 nova_compute[221550]: 2026-01-31 07:37:02.775 221554 DEBUG nova.storage.rbd_utils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:37:02 np0005603609 nova_compute[221550]: 2026-01-31 07:37:02.779 221554 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd/disk.config ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:37:03 np0005603609 nova_compute[221550]: 2026-01-31 07:37:03.304 221554 DEBUG oslo_concurrency.processutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd/disk.config ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:03 np0005603609 nova_compute[221550]: 2026-01-31 07:37:03.306 221554 INFO nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Deleting local config drive /var/lib/nova/instances/ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd/disk.config because it was imported into RBD.#033[00m
Jan 31 02:37:03 np0005603609 systemd-machined[190912]: New machine qemu-16-instance-0000001f.
Jan 31 02:37:03 np0005603609 systemd[1]: Started Virtual Machine qemu-16-instance-0000001f.
Jan 31 02:37:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:37:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:03.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:37:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:37:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:03.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:37:03 np0005603609 nova_compute[221550]: 2026-01-31 07:37:03.982 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845023.9814088, ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:37:03 np0005603609 nova_compute[221550]: 2026-01-31 07:37:03.985 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:37:03 np0005603609 nova_compute[221550]: 2026-01-31 07:37:03.990 221554 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:37:03 np0005603609 nova_compute[221550]: 2026-01-31 07:37:03.991 221554 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:37:03 np0005603609 nova_compute[221550]: 2026-01-31 07:37:03.997 221554 INFO nova.virt.libvirt.driver [-] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Instance spawned successfully.#033[00m
Jan 31 02:37:03 np0005603609 nova_compute[221550]: 2026-01-31 07:37:03.998 221554 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.518 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.526 221554 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.527 221554 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.528 221554 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.530 221554 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.531 221554 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.533 221554 DEBUG nova.virt.libvirt.driver [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.542 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.607 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.608 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845023.9838626, ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.608 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] VM Started (Lifecycle Event)#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.698 221554 INFO nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Took 4.25 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.698 221554 DEBUG nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.722 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.727 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.761 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.797 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.837 221554 INFO nova.compute.manager [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Took 5.58 seconds to build instance.#033[00m
Jan 31 02:37:04 np0005603609 ovn_controller[130359]: 2026-01-31T07:37:04Z|00096|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 31 02:37:04 np0005603609 nova_compute[221550]: 2026-01-31 07:37:04.972 221554 DEBUG oslo_concurrency.lockutils [None req-326f5e13-cb1f-4798-acdd-9e0973dcf595 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:05.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:37:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:05.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.036 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.192 221554 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "71265e55-f168-471c-80bc-80b49177a637-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.193 221554 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.193 221554 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "71265e55-f168-471c-80bc-80b49177a637-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.214 221554 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.214 221554 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.214 221554 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.214 221554 DEBUG nova.compute.resource_tracker [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.215 221554 DEBUG oslo_concurrency.processutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:37:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:06 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:37:06 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:37:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:37:06 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3541215565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.638 221554 DEBUG oslo_concurrency.processutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.714 221554 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.714 221554 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.718 221554 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.718 221554 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.856 221554 WARNING nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.857 221554 DEBUG nova.compute.resource_tracker [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4518MB free_disk=20.80389404296875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": 
"0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.858 221554 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.858 221554 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.911 221554 DEBUG nova.compute.resource_tracker [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Migration for instance 71265e55-f168-471c-80bc-80b49177a637 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.931 221554 DEBUG nova.compute.resource_tracker [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.963 221554 DEBUG nova.compute.resource_tracker [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Instance 566c132c-f694-42ee-8c41-84769367d6f5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.964 221554 DEBUG nova.compute.resource_tracker [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Migration 4a1b0c4a-1bfe-448c-bdfa-5cee17568893 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.964 221554 DEBUG nova.compute.resource_tracker [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Instance ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.964 221554 DEBUG nova.compute.resource_tracker [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:37:06 np0005603609 nova_compute[221550]: 2026-01-31 07:37:06.965 221554 DEBUG nova.compute.resource_tracker [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:37:07 np0005603609 nova_compute[221550]: 2026-01-31 07:37:07.041 221554 DEBUG oslo_concurrency.processutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:37:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:07.470 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:37:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:07.470 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:37:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:07.470 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:37:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/741240861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:37:07 np0005603609 nova_compute[221550]: 2026-01-31 07:37:07.493 221554 DEBUG oslo_concurrency.processutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:07 np0005603609 nova_compute[221550]: 2026-01-31 07:37:07.500 221554 DEBUG nova.compute.provider_tree [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:37:07 np0005603609 nova_compute[221550]: 2026-01-31 07:37:07.522 221554 DEBUG nova.scheduler.client.report [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:37:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:37:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:07.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:37:07 np0005603609 nova_compute[221550]: 2026-01-31 07:37:07.557 221554 DEBUG nova.compute.resource_tracker [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:37:07 np0005603609 nova_compute[221550]: 2026-01-31 07:37:07.558 221554 DEBUG oslo_concurrency.lockutils [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:07 np0005603609 nova_compute[221550]: 2026-01-31 07:37:07.571 221554 INFO nova.compute.manager [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Migrating instance to compute-2.ctlplane.example.com finished successfully.#033[00m
Jan 31 02:37:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:07.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:07 np0005603609 nova_compute[221550]: 2026-01-31 07:37:07.683 221554 INFO nova.scheduler.client.report [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Deleted allocation for migration 4a1b0c4a-1bfe-448c-bdfa-5cee17568893
Jan 31 02:37:07 np0005603609 nova_compute[221550]: 2026-01-31 07:37:07.684 221554 DEBUG nova.virt.libvirt.driver [None req-dc1fa5c2-0468-4420-a8f7-89fc60245915 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: 71265e55-f168-471c-80bc-80b49177a637] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Jan 31 02:37:08 np0005603609 nova_compute[221550]: 2026-01-31 07:37:08.194 221554 DEBUG oslo_concurrency.lockutils [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:37:08 np0005603609 nova_compute[221550]: 2026-01-31 07:37:08.194 221554 DEBUG oslo_concurrency.lockutils [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:37:08 np0005603609 nova_compute[221550]: 2026-01-31 07:37:08.195 221554 DEBUG oslo_concurrency.lockutils [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:37:08 np0005603609 nova_compute[221550]: 2026-01-31 07:37:08.195 221554 DEBUG oslo_concurrency.lockutils [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:37:08 np0005603609 nova_compute[221550]: 2026-01-31 07:37:08.195 221554 DEBUG oslo_concurrency.lockutils [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:37:08 np0005603609 nova_compute[221550]: 2026-01-31 07:37:08.196 221554 INFO nova.compute.manager [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Terminating instance
Jan 31 02:37:08 np0005603609 nova_compute[221550]: 2026-01-31 07:37:08.197 221554 DEBUG oslo_concurrency.lockutils [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "refresh_cache-ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:37:08 np0005603609 nova_compute[221550]: 2026-01-31 07:37:08.197 221554 DEBUG oslo_concurrency.lockutils [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquired lock "refresh_cache-ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:37:08 np0005603609 nova_compute[221550]: 2026-01-31 07:37:08.198 221554 DEBUG nova.network.neutron [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 02:37:08 np0005603609 nova_compute[221550]: 2026-01-31 07:37:08.534 221554 DEBUG nova.network.neutron [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 02:37:09 np0005603609 nova_compute[221550]: 2026-01-31 07:37:09.013 221554 DEBUG nova.network.neutron [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:37:09 np0005603609 nova_compute[221550]: 2026-01-31 07:37:09.034 221554 DEBUG oslo_concurrency.lockutils [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Releasing lock "refresh_cache-ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:37:09 np0005603609 nova_compute[221550]: 2026-01-31 07:37:09.034 221554 DEBUG nova.compute.manager [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 02:37:09 np0005603609 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Jan 31 02:37:09 np0005603609 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001f.scope: Consumed 5.523s CPU time.
Jan 31 02:37:09 np0005603609 systemd-machined[190912]: Machine qemu-16-instance-0000001f terminated.
Jan 31 02:37:09 np0005603609 nova_compute[221550]: 2026-01-31 07:37:09.255 221554 INFO nova.virt.libvirt.driver [-] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Instance destroyed successfully.
Jan 31 02:37:09 np0005603609 nova_compute[221550]: 2026-01-31 07:37:09.255 221554 DEBUG nova.objects.instance [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lazy-loading 'resources' on Instance uuid ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:37:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:09.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:09.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:09 np0005603609 nova_compute[221550]: 2026-01-31 07:37:09.838 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:37:10 np0005603609 nova_compute[221550]: 2026-01-31 07:37:10.156 221554 INFO nova.virt.libvirt.driver [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Deleting instance files /var/lib/nova/instances/ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd_del
Jan 31 02:37:10 np0005603609 nova_compute[221550]: 2026-01-31 07:37:10.157 221554 INFO nova.virt.libvirt.driver [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Deletion of /var/lib/nova/instances/ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd_del complete
Jan 31 02:37:10 np0005603609 podman[234040]: 2026-01-31 07:37:10.206267111 +0000 UTC m=+0.082404444 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:37:10 np0005603609 podman[234039]: 2026-01-31 07:37:10.2062339 +0000 UTC m=+0.082043785 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:37:10 np0005603609 nova_compute[221550]: 2026-01-31 07:37:10.216 221554 INFO nova.compute.manager [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Took 1.18 seconds to destroy the instance on the hypervisor.
Jan 31 02:37:10 np0005603609 nova_compute[221550]: 2026-01-31 07:37:10.217 221554 DEBUG oslo.service.loopingcall [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 02:37:10 np0005603609 nova_compute[221550]: 2026-01-31 07:37:10.217 221554 DEBUG nova.compute.manager [-] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 02:37:10 np0005603609 nova_compute[221550]: 2026-01-31 07:37:10.217 221554 DEBUG nova.network.neutron [-] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 02:37:10 np0005603609 nova_compute[221550]: 2026-01-31 07:37:10.587 221554 DEBUG nova.network.neutron [-] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 02:37:10 np0005603609 nova_compute[221550]: 2026-01-31 07:37:10.607 221554 DEBUG nova.network.neutron [-] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:37:10 np0005603609 nova_compute[221550]: 2026-01-31 07:37:10.626 221554 INFO nova.compute.manager [-] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Took 0.41 seconds to deallocate network for instance.
Jan 31 02:37:10 np0005603609 nova_compute[221550]: 2026-01-31 07:37:10.679 221554 DEBUG oslo_concurrency.lockutils [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:37:10 np0005603609 nova_compute[221550]: 2026-01-31 07:37:10.680 221554 DEBUG oslo_concurrency.lockutils [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:37:10 np0005603609 nova_compute[221550]: 2026-01-31 07:37:10.777 221554 DEBUG oslo_concurrency.processutils [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:37:11 np0005603609 nova_compute[221550]: 2026-01-31 07:37:11.039 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:37:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:37:11 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1151136867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:37:11 np0005603609 nova_compute[221550]: 2026-01-31 07:37:11.197 221554 DEBUG oslo_concurrency.processutils [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:37:11 np0005603609 nova_compute[221550]: 2026-01-31 07:37:11.203 221554 DEBUG nova.compute.provider_tree [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:37:11 np0005603609 nova_compute[221550]: 2026-01-31 07:37:11.236 221554 DEBUG nova.scheduler.client.report [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:37:11 np0005603609 nova_compute[221550]: 2026-01-31 07:37:11.261 221554 DEBUG oslo_concurrency.lockutils [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:37:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:11 np0005603609 nova_compute[221550]: 2026-01-31 07:37:11.295 221554 INFO nova.scheduler.client.report [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Deleted allocations for instance ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd
Jan 31 02:37:11 np0005603609 nova_compute[221550]: 2026-01-31 07:37:11.371 221554 DEBUG oslo_concurrency.lockutils [None req-fec0cf32-235c-4287-88d4-28e28052fd1a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:37:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:37:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:11.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:37:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:37:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:11.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:37:13 np0005603609 nova_compute[221550]: 2026-01-31 07:37:13.544 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845018.540882, 71265e55-f168-471c-80bc-80b49177a637 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:37:13 np0005603609 nova_compute[221550]: 2026-01-31 07:37:13.545 221554 INFO nova.compute.manager [-] [instance: 71265e55-f168-471c-80bc-80b49177a637] VM Stopped (Lifecycle Event)
Jan 31 02:37:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:13.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:13 np0005603609 nova_compute[221550]: 2026-01-31 07:37:13.565 221554 DEBUG nova.compute.manager [None req-97a5dc71-461c-4df0-b9dc-df36179d5c6e - - - - - -] [instance: 71265e55-f168-471c-80bc-80b49177a637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:37:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:37:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:13.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:37:14 np0005603609 nova_compute[221550]: 2026-01-31 07:37:14.840 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:37:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:15.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:15.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:16 np0005603609 nova_compute[221550]: 2026-01-31 07:37:16.040 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:37:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:17 np0005603609 nova_compute[221550]: 2026-01-31 07:37:17.360 221554 DEBUG oslo_concurrency.lockutils [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "566c132c-f694-42ee-8c41-84769367d6f5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:37:17 np0005603609 nova_compute[221550]: 2026-01-31 07:37:17.360 221554 DEBUG oslo_concurrency.lockutils [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "566c132c-f694-42ee-8c41-84769367d6f5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:37:17 np0005603609 nova_compute[221550]: 2026-01-31 07:37:17.361 221554 DEBUG oslo_concurrency.lockutils [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "566c132c-f694-42ee-8c41-84769367d6f5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:37:17 np0005603609 nova_compute[221550]: 2026-01-31 07:37:17.361 221554 DEBUG oslo_concurrency.lockutils [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "566c132c-f694-42ee-8c41-84769367d6f5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:37:17 np0005603609 nova_compute[221550]: 2026-01-31 07:37:17.361 221554 DEBUG oslo_concurrency.lockutils [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "566c132c-f694-42ee-8c41-84769367d6f5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:37:17 np0005603609 nova_compute[221550]: 2026-01-31 07:37:17.362 221554 INFO nova.compute.manager [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Terminating instance
Jan 31 02:37:17 np0005603609 nova_compute[221550]: 2026-01-31 07:37:17.363 221554 DEBUG oslo_concurrency.lockutils [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "refresh_cache-566c132c-f694-42ee-8c41-84769367d6f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:37:17 np0005603609 nova_compute[221550]: 2026-01-31 07:37:17.363 221554 DEBUG oslo_concurrency.lockutils [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquired lock "refresh_cache-566c132c-f694-42ee-8c41-84769367d6f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:37:17 np0005603609 nova_compute[221550]: 2026-01-31 07:37:17.363 221554 DEBUG nova.network.neutron [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 02:37:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:17.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:17.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:17 np0005603609 nova_compute[221550]: 2026-01-31 07:37:17.954 221554 DEBUG nova.network.neutron [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 02:37:18 np0005603609 nova_compute[221550]: 2026-01-31 07:37:18.230 221554 DEBUG nova.network.neutron [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:37:18 np0005603609 nova_compute[221550]: 2026-01-31 07:37:18.251 221554 DEBUG oslo_concurrency.lockutils [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Releasing lock "refresh_cache-566c132c-f694-42ee-8c41-84769367d6f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:37:18 np0005603609 nova_compute[221550]: 2026-01-31 07:37:18.252 221554 DEBUG nova.compute.manager [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 02:37:18 np0005603609 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Jan 31 02:37:18 np0005603609 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001b.scope: Consumed 14.355s CPU time.
Jan 31 02:37:18 np0005603609 systemd-machined[190912]: Machine qemu-15-instance-0000001b terminated.
Jan 31 02:37:18 np0005603609 nova_compute[221550]: 2026-01-31 07:37:18.473 221554 INFO nova.virt.libvirt.driver [-] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Instance destroyed successfully.
Jan 31 02:37:18 np0005603609 nova_compute[221550]: 2026-01-31 07:37:18.474 221554 DEBUG nova.objects.instance [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lazy-loading 'resources' on Instance uuid 566c132c-f694-42ee-8c41-84769367d6f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:37:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:37:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:19.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:37:19 np0005603609 nova_compute[221550]: 2026-01-31 07:37:19.602 221554 INFO nova.virt.libvirt.driver [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Deleting instance files /var/lib/nova/instances/566c132c-f694-42ee-8c41-84769367d6f5_del
Jan 31 02:37:19 np0005603609 nova_compute[221550]: 2026-01-31 07:37:19.603 221554 INFO nova.virt.libvirt.driver [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Deletion of /var/lib/nova/instances/566c132c-f694-42ee-8c41-84769367d6f5_del complete
Jan 31 02:37:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:37:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:19.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:37:19 np0005603609 nova_compute[221550]: 2026-01-31 07:37:19.668 221554 INFO nova.compute.manager [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Took 1.42 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:37:19 np0005603609 nova_compute[221550]: 2026-01-31 07:37:19.669 221554 DEBUG oslo.service.loopingcall [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:37:19 np0005603609 nova_compute[221550]: 2026-01-31 07:37:19.669 221554 DEBUG nova.compute.manager [-] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:37:19 np0005603609 nova_compute[221550]: 2026-01-31 07:37:19.670 221554 DEBUG nova.network.neutron [-] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:37:19 np0005603609 nova_compute[221550]: 2026-01-31 07:37:19.842 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e153 e153: 3 total, 3 up, 3 in
Jan 31 02:37:20 np0005603609 nova_compute[221550]: 2026-01-31 07:37:20.026 221554 DEBUG nova.network.neutron [-] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:37:20 np0005603609 nova_compute[221550]: 2026-01-31 07:37:20.042 221554 DEBUG nova.network.neutron [-] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:37:20 np0005603609 nova_compute[221550]: 2026-01-31 07:37:20.058 221554 INFO nova.compute.manager [-] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Took 0.39 seconds to deallocate network for instance.#033[00m
Jan 31 02:37:20 np0005603609 nova_compute[221550]: 2026-01-31 07:37:20.102 221554 DEBUG oslo_concurrency.lockutils [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:37:20 np0005603609 nova_compute[221550]: 2026-01-31 07:37:20.102 221554 DEBUG oslo_concurrency.lockutils [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:37:20 np0005603609 nova_compute[221550]: 2026-01-31 07:37:20.161 221554 DEBUG oslo_concurrency.processutils [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:37:20 np0005603609 nova_compute[221550]: 2026-01-31 07:37:20.313 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:37:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3163764212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:37:20 np0005603609 nova_compute[221550]: 2026-01-31 07:37:20.611 221554 DEBUG oslo_concurrency.processutils [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:20 np0005603609 nova_compute[221550]: 2026-01-31 07:37:20.615 221554 DEBUG nova.compute.provider_tree [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:37:20 np0005603609 nova_compute[221550]: 2026-01-31 07:37:20.635 221554 DEBUG nova.scheduler.client.report [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:37:20 np0005603609 nova_compute[221550]: 2026-01-31 07:37:20.663 221554 DEBUG oslo_concurrency.lockutils [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:20 np0005603609 nova_compute[221550]: 2026-01-31 07:37:20.690 221554 INFO nova.scheduler.client.report [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Deleted allocations for instance 566c132c-f694-42ee-8c41-84769367d6f5#033[00m
Jan 31 02:37:20 np0005603609 nova_compute[221550]: 2026-01-31 07:37:20.789 221554 DEBUG oslo_concurrency.lockutils [None req-d6ed2375-4b70-4488-ba81-5382b897e7a5 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "566c132c-f694-42ee-8c41-84769367d6f5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:21 np0005603609 nova_compute[221550]: 2026-01-31 07:37:21.042 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:37:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:21.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:37:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:21.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:22 np0005603609 nova_compute[221550]: 2026-01-31 07:37:22.258 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:22.260 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:37:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:22.261 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:37:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:23.263 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:37:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e154 e154: 3 total, 3 up, 3 in
Jan 31 02:37:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:23.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:37:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:23.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:37:24 np0005603609 nova_compute[221550]: 2026-01-31 07:37:24.254 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845029.2533264, ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:37:24 np0005603609 nova_compute[221550]: 2026-01-31 07:37:24.255 221554 INFO nova.compute.manager [-] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:37:24 np0005603609 nova_compute[221550]: 2026-01-31 07:37:24.275 221554 DEBUG nova.compute.manager [None req-80e361f8-0c35-4e5d-9cc7-f8cebcea82e8 - - - - - -] [instance: ebaba60b-bdb8-433b-b1d0-65d8eb7c3dbd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:37:24 np0005603609 nova_compute[221550]: 2026-01-31 07:37:24.845 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:37:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:25.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:37:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:37:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:25.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:37:26 np0005603609 nova_compute[221550]: 2026-01-31 07:37:26.045 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:27.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:27.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:29.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:37:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:29.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:37:29 np0005603609 nova_compute[221550]: 2026-01-31 07:37:29.846 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:31 np0005603609 nova_compute[221550]: 2026-01-31 07:37:31.047 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:31.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:31.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:33 np0005603609 nova_compute[221550]: 2026-01-31 07:37:33.471 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845038.4691224, 566c132c-f694-42ee-8c41-84769367d6f5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:37:33 np0005603609 nova_compute[221550]: 2026-01-31 07:37:33.472 221554 INFO nova.compute.manager [-] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:37:33 np0005603609 nova_compute[221550]: 2026-01-31 07:37:33.506 221554 DEBUG nova.compute.manager [None req-79134620-5687-4f66-96c8-e33f277d9cf7 - - - - - -] [instance: 566c132c-f694-42ee-8c41-84769367d6f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:37:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:33.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e155 e155: 3 total, 3 up, 3 in
Jan 31 02:37:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:33.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:34 np0005603609 nova_compute[221550]: 2026-01-31 07:37:34.891 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:35.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:35.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:36 np0005603609 nova_compute[221550]: 2026-01-31 07:37:36.076 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:36 np0005603609 ceph-mgr[82033]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 02:37:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:37.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:37.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:39.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:39.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:39 np0005603609 nova_compute[221550]: 2026-01-31 07:37:39.893 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:41 np0005603609 nova_compute[221550]: 2026-01-31 07:37:41.078 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:41 np0005603609 podman[234151]: 2026-01-31 07:37:41.183032461 +0000 UTC m=+0.066643061 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 02:37:41 np0005603609 podman[234150]: 2026-01-31 07:37:41.202768194 +0000 UTC m=+0.093460148 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 02:37:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:37:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:41.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:37:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:41.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:43.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:43.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:44 np0005603609 nova_compute[221550]: 2026-01-31 07:37:44.933 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:45.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:45.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:46 np0005603609 nova_compute[221550]: 2026-01-31 07:37:46.114 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:47 np0005603609 nova_compute[221550]: 2026-01-31 07:37:47.517 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:37:47 np0005603609 nova_compute[221550]: 2026-01-31 07:37:47.517 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:37:47 np0005603609 nova_compute[221550]: 2026-01-31 07:37:47.545 221554 DEBUG nova.compute.manager [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:37:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:47.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:47 np0005603609 nova_compute[221550]: 2026-01-31 07:37:47.642 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:37:47 np0005603609 nova_compute[221550]: 2026-01-31 07:37:47.643 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:37:47 np0005603609 nova_compute[221550]: 2026-01-31 07:37:47.648 221554 DEBUG nova.virt.hardware [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:37:47 np0005603609 nova_compute[221550]: 2026-01-31 07:37:47.649 221554 INFO nova.compute.claims [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:37:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:47.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:47 np0005603609 nova_compute[221550]: 2026-01-31 07:37:47.888 221554 DEBUG oslo_concurrency.processutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:37:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:37:48 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2122428780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.306 221554 DEBUG oslo_concurrency.processutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.312 221554 DEBUG nova.compute.provider_tree [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.330 221554 DEBUG nova.scheduler.client.report [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.366 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.366 221554 DEBUG nova.compute.manager [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.414 221554 DEBUG nova.compute.manager [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.415 221554 DEBUG nova.network.neutron [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.436 221554 INFO nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.453 221554 DEBUG nova.compute.manager [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.609 221554 DEBUG nova.compute.manager [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.610 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.611 221554 INFO nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Creating image(s)#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.634 221554 DEBUG nova.storage.rbd_utils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.662 221554 DEBUG nova.storage.rbd_utils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.693 221554 DEBUG nova.storage.rbd_utils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.697 221554 DEBUG oslo_concurrency.processutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.712 221554 DEBUG nova.policy [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8a44db09acbd4aeb990147dc979f0bfd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0554655ad0a48c8bf0551298dd31919', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.715 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.742 221554 DEBUG oslo_concurrency.processutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.743 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.744 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.744 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.772 221554 DEBUG nova.storage.rbd_utils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:37:48 np0005603609 nova_compute[221550]: 2026-01-31 07:37:48.775 221554 DEBUG oslo_concurrency.processutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 eed178fc-d0db-47ac-a368-0d3058e94697_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:37:49 np0005603609 nova_compute[221550]: 2026-01-31 07:37:49.048 221554 DEBUG oslo_concurrency.processutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 eed178fc-d0db-47ac-a368-0d3058e94697_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:49 np0005603609 nova_compute[221550]: 2026-01-31 07:37:49.139 221554 DEBUG nova.storage.rbd_utils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] resizing rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:37:49 np0005603609 nova_compute[221550]: 2026-01-31 07:37:49.260 221554 DEBUG nova.objects.instance [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'migration_context' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:37:49 np0005603609 nova_compute[221550]: 2026-01-31 07:37:49.286 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:37:49 np0005603609 nova_compute[221550]: 2026-01-31 07:37:49.287 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Ensure instance console log exists: /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:37:49 np0005603609 nova_compute[221550]: 2026-01-31 07:37:49.288 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:37:49 np0005603609 nova_compute[221550]: 2026-01-31 07:37:49.289 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:37:49 np0005603609 nova_compute[221550]: 2026-01-31 07:37:49.289 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:49 np0005603609 nova_compute[221550]: 2026-01-31 07:37:49.578 221554 DEBUG nova.network.neutron [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Successfully created port: bf338813-3c1d-456b-a6fc-b4b2c8235740 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:37:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:37:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:49.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:37:49 np0005603609 nova_compute[221550]: 2026-01-31 07:37:49.695 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:37:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:49.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:49 np0005603609 nova_compute[221550]: 2026-01-31 07:37:49.934 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:50 np0005603609 nova_compute[221550]: 2026-01-31 07:37:50.546 221554 DEBUG nova.network.neutron [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Successfully updated port: bf338813-3c1d-456b-a6fc-b4b2c8235740 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:37:50 np0005603609 nova_compute[221550]: 2026-01-31 07:37:50.596 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "refresh_cache-eed178fc-d0db-47ac-a368-0d3058e94697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:37:50 np0005603609 nova_compute[221550]: 2026-01-31 07:37:50.596 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquired lock "refresh_cache-eed178fc-d0db-47ac-a368-0d3058e94697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:37:50 np0005603609 nova_compute[221550]: 2026-01-31 07:37:50.597 221554 DEBUG nova.network.neutron [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:37:50 np0005603609 nova_compute[221550]: 2026-01-31 07:37:50.948 221554 DEBUG nova.compute.manager [req-3756e3cf-3416-4ba1-82a7-19a3bddd7926 req-396d29fe-b1ef-41b5-9f5b-d390375dc18a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received event network-changed-bf338813-3c1d-456b-a6fc-b4b2c8235740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:37:50 np0005603609 nova_compute[221550]: 2026-01-31 07:37:50.948 221554 DEBUG nova.compute.manager [req-3756e3cf-3416-4ba1-82a7-19a3bddd7926 req-396d29fe-b1ef-41b5-9f5b-d390375dc18a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Refreshing instance network info cache due to event network-changed-bf338813-3c1d-456b-a6fc-b4b2c8235740. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:37:50 np0005603609 nova_compute[221550]: 2026-01-31 07:37:50.948 221554 DEBUG oslo_concurrency.lockutils [req-3756e3cf-3416-4ba1-82a7-19a3bddd7926 req-396d29fe-b1ef-41b5-9f5b-d390375dc18a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-eed178fc-d0db-47ac-a368-0d3058e94697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:37:51 np0005603609 nova_compute[221550]: 2026-01-31 07:37:51.052 221554 DEBUG nova.network.neutron [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:37:51 np0005603609 nova_compute[221550]: 2026-01-31 07:37:51.117 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:51.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:51 np0005603609 nova_compute[221550]: 2026-01-31 07:37:51.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:37:51 np0005603609 nova_compute[221550]: 2026-01-31 07:37:51.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:37:51 np0005603609 nova_compute[221550]: 2026-01-31 07:37:51.683 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:37:51 np0005603609 nova_compute[221550]: 2026-01-31 07:37:51.683 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:37:51 np0005603609 nova_compute[221550]: 2026-01-31 07:37:51.684 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:51 np0005603609 nova_compute[221550]: 2026-01-31 07:37:51.684 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:37:51 np0005603609 nova_compute[221550]: 2026-01-31 07:37:51.684 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:37:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:51.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:37:52 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2146859592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.140 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.185 221554 DEBUG nova.network.neutron [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Updating instance_info_cache with network_info: [{"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.209 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Releasing lock "refresh_cache-eed178fc-d0db-47ac-a368-0d3058e94697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.210 221554 DEBUG nova.compute.manager [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Instance network_info: |[{"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.210 221554 DEBUG oslo_concurrency.lockutils [req-3756e3cf-3416-4ba1-82a7-19a3bddd7926 req-396d29fe-b1ef-41b5-9f5b-d390375dc18a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-eed178fc-d0db-47ac-a368-0d3058e94697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.211 221554 DEBUG nova.network.neutron [req-3756e3cf-3416-4ba1-82a7-19a3bddd7926 req-396d29fe-b1ef-41b5-9f5b-d390375dc18a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Refreshing network info cache for port bf338813-3c1d-456b-a6fc-b4b2c8235740 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.215 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Start _get_guest_xml network_info=[{"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.219 221554 WARNING nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.225 221554 DEBUG nova.virt.libvirt.host [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.226 221554 DEBUG nova.virt.libvirt.host [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.234 221554 DEBUG nova.virt.libvirt.host [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.234 221554 DEBUG nova.virt.libvirt.host [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.236 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.236 221554 DEBUG nova.virt.hardware [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.237 221554 DEBUG nova.virt.hardware [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.237 221554 DEBUG nova.virt.hardware [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.238 221554 DEBUG nova.virt.hardware [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.238 221554 DEBUG nova.virt.hardware [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.238 221554 DEBUG nova.virt.hardware [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.238 221554 DEBUG nova.virt.hardware [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.239 221554 DEBUG nova.virt.hardware [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.239 221554 DEBUG nova.virt.hardware [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.239 221554 DEBUG nova.virt.hardware [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.239 221554 DEBUG nova.virt.hardware [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.243 221554 DEBUG oslo_concurrency.processutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.360 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.362 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4871MB free_disk=20.962631225585938GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.362 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.363 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.462 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance eed178fc-d0db-47ac-a368-0d3058e94697 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.463 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.464 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.548 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:37:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:37:52 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2987293693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.659 221554 DEBUG oslo_concurrency.processutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.689 221554 DEBUG nova.storage.rbd_utils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.693 221554 DEBUG oslo_concurrency.processutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:37:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:37:52 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1404436523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.970 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.974 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:37:52 np0005603609 nova_compute[221550]: 2026-01-31 07:37:52.989 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.014 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.014 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:37:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/343125461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.101 221554 DEBUG oslo_concurrency.processutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.102 221554 DEBUG nova.virt.libvirt.vif [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:37:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-540870110',display_name='tempest-ServersAdminTestJSON-server-540870110',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-540870110',id=33,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-naihjech',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTestJSON-11566
07975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:37:48Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=eed178fc-d0db-47ac-a368-0d3058e94697,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.102 221554 DEBUG nova.network.os_vif_util [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.103 221554 DEBUG nova.network.os_vif_util [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.104 221554 DEBUG nova.objects.instance [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'pci_devices' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.120 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  <uuid>eed178fc-d0db-47ac-a368-0d3058e94697</uuid>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  <name>instance-00000021</name>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServersAdminTestJSON-server-540870110</nova:name>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:37:52</nova:creationTime>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        <nova:user uuid="8a44db09acbd4aeb990147dc979f0bfd">tempest-ServersAdminTestJSON-1156607975-project-member</nova:user>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        <nova:project uuid="b0554655ad0a48c8bf0551298dd31919">tempest-ServersAdminTestJSON-1156607975</nova:project>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        <nova:port uuid="bf338813-3c1d-456b-a6fc-b4b2c8235740">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <entry name="serial">eed178fc-d0db-47ac-a368-0d3058e94697</entry>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <entry name="uuid">eed178fc-d0db-47ac-a368-0d3058e94697</entry>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/eed178fc-d0db-47ac-a368-0d3058e94697_disk">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/eed178fc-d0db-47ac-a368-0d3058e94697_disk.config">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:e7:af:41"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <target dev="tapbf338813-3c"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/console.log" append="off"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:37:53 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:37:53 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:37:53 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:37:53 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.121 221554 DEBUG nova.compute.manager [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Preparing to wait for external event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.121 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.121 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.122 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.122 221554 DEBUG nova.virt.libvirt.vif [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:37:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-540870110',display_name='tempest-ServersAdminTestJSON-server-540870110',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-540870110',id=33,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-naihjech',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTest
JSON-1156607975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:37:48Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=eed178fc-d0db-47ac-a368-0d3058e94697,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.122 221554 DEBUG nova.network.os_vif_util [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.123 221554 DEBUG nova.network.os_vif_util [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.123 221554 DEBUG os_vif [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.124 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.124 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.124 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.127 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.127 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf338813-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.128 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf338813-3c, col_values=(('external_ids', {'iface-id': 'bf338813-3c1d-456b-a6fc-b4b2c8235740', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:af:41', 'vm-uuid': 'eed178fc-d0db-47ac-a368-0d3058e94697'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.129 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:53 np0005603609 NetworkManager[49064]: <info>  [1769845073.1300] manager: (tapbf338813-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.131 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.135 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.136 221554 INFO os_vif [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c')#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.185 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.186 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.187 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No VIF found with MAC fa:16:3e:e7:af:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.188 221554 INFO nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Using config drive#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.228 221554 DEBUG nova.storage.rbd_utils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:37:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:53.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:53.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.974 221554 INFO nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Creating config drive at /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config#033[00m
Jan 31 02:37:53 np0005603609 nova_compute[221550]: 2026-01-31 07:37:53.981 221554 DEBUG oslo_concurrency.processutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpepqoc072 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.016 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.017 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.037 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.108 221554 DEBUG oslo_concurrency.processutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpepqoc072" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.147 221554 DEBUG nova.storage.rbd_utils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.152 221554 DEBUG oslo_concurrency.processutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config eed178fc-d0db-47ac-a368-0d3058e94697_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.254 221554 DEBUG nova.network.neutron [req-3756e3cf-3416-4ba1-82a7-19a3bddd7926 req-396d29fe-b1ef-41b5-9f5b-d390375dc18a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Updated VIF entry in instance network info cache for port bf338813-3c1d-456b-a6fc-b4b2c8235740. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.255 221554 DEBUG nova.network.neutron [req-3756e3cf-3416-4ba1-82a7-19a3bddd7926 req-396d29fe-b1ef-41b5-9f5b-d390375dc18a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Updating instance_info_cache with network_info: [{"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.280 221554 DEBUG oslo_concurrency.lockutils [req-3756e3cf-3416-4ba1-82a7-19a3bddd7926 req-396d29fe-b1ef-41b5-9f5b-d390375dc18a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-eed178fc-d0db-47ac-a368-0d3058e94697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.675 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.676 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.832 221554 DEBUG oslo_concurrency.processutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config eed178fc-d0db-47ac-a368-0d3058e94697_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.833 221554 INFO nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Deleting local config drive /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config because it was imported into RBD.#033[00m
Jan 31 02:37:54 np0005603609 kernel: tapbf338813-3c: entered promiscuous mode
Jan 31 02:37:54 np0005603609 NetworkManager[49064]: <info>  [1769845074.8883] manager: (tapbf338813-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Jan 31 02:37:54 np0005603609 ovn_controller[130359]: 2026-01-31T07:37:54Z|00097|binding|INFO|Claiming lport bf338813-3c1d-456b-a6fc-b4b2c8235740 for this chassis.
Jan 31 02:37:54 np0005603609 ovn_controller[130359]: 2026-01-31T07:37:54Z|00098|binding|INFO|bf338813-3c1d-456b-a6fc-b4b2c8235740: Claiming fa:16:3e:e7:af:41 10.100.0.7
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.889 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.897 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.903 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:54.913 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:af:41 10.100.0.7'], port_security=['fa:16:3e:e7:af:41 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eed178fc-d0db-47ac-a368-0d3058e94697', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0554655ad0a48c8bf0551298dd31919', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b25af7c2-ecfe-428a-9b4f-51874d47219e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3f23c6f-f389-4487-9d19-0cf4a6c28cbc, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=bf338813-3c1d-456b-a6fc-b4b2c8235740) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:37:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:54.915 140058 INFO neutron.agent.ovn.metadata.agent [-] Port bf338813-3c1d-456b-a6fc-b4b2c8235740 in datapath 8c92e27e-f16c-4df2-a299-60ef2ca44f53 bound to our chassis#033[00m
Jan 31 02:37:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:54.918 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c92e27e-f16c-4df2-a299-60ef2ca44f53#033[00m
Jan 31 02:37:54 np0005603609 systemd-udevd[234564]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:37:54 np0005603609 systemd-machined[190912]: New machine qemu-17-instance-00000021.
Jan 31 02:37:54 np0005603609 NetworkManager[49064]: <info>  [1769845074.9320] device (tapbf338813-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:37:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:54.930 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[02ec20d7-bdea-4cc9-a1ec-43978d786cf3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:37:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:54.932 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8c92e27e-f1 in ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:37:54 np0005603609 NetworkManager[49064]: <info>  [1769845074.9343] device (tapbf338813-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:37:54 np0005603609 systemd[1]: Started Virtual Machine qemu-17-instance-00000021.
Jan 31 02:37:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:54.938 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8c92e27e-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:37:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:54.939 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c1795b-a4ce-4660-b9f3-8824a301e9cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:37:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:54.941 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[827441cb-e98b-4dd8-9127-083f5ce65e8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:37:54 np0005603609 ovn_controller[130359]: 2026-01-31T07:37:54Z|00099|binding|INFO|Setting lport bf338813-3c1d-456b-a6fc-b4b2c8235740 ovn-installed in OVS
Jan 31 02:37:54 np0005603609 ovn_controller[130359]: 2026-01-31T07:37:54Z|00100|binding|INFO|Setting lport bf338813-3c1d-456b-a6fc-b4b2c8235740 up in Southbound
Jan 31 02:37:54 np0005603609 nova_compute[221550]: 2026-01-31 07:37:54.945 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:54.953 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[52987398-0b7d-4072-b93a-ec8c3ae5381a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:37:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:54.967 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a7244d97-5f1f-408a-9b72-3fd7e48e9171]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:37:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:54.993 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b4d70a82-0eb6-4a19-8e88-4d8cf93ca44f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:37:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:54.998 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3cbfe7c0-f962-4216-84dc-513704038647]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:37:55 np0005603609 NetworkManager[49064]: <info>  [1769845074.9995] manager: (tap8c92e27e-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Jan 31 02:37:55 np0005603609 systemd-udevd[234567]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:55.033 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[cf74e61b-3f70-424a-9d75-134f24720d96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:55.037 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[efa92d03-a1a0-4f5d-b858-adec92416b59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:37:55 np0005603609 NetworkManager[49064]: <info>  [1769845075.0548] device (tap8c92e27e-f0): carrier: link connected
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:55.059 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[08d6372f-431d-48c7-9954-3d4ec0a7b530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:55.070 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a4aafc-bd6f-48d4-9255-283767e08c8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c92e27e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:62:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531475, 'reachable_time': 25068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234597, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:55.083 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff4bbff-28de-4786-a440-88e7b26c4051]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:629b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531475, 'tstamp': 531475}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 234598, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:55.096 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[49f93ad7-b2d3-4ee5-ac93-39fb64cf0203]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c92e27e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:62:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531475, 'reachable_time': 25068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 234599, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:55.118 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b587e55a-1b48-4c00-bb6b-e0f73291487d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:55.159 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[96f1f1bb-6af6-42cc-b0e6-eded41cdb5a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:55.160 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c92e27e-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:55.160 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:55.161 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c92e27e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:37:55 np0005603609 NetworkManager[49064]: <info>  [1769845075.1632] manager: (tap8c92e27e-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Jan 31 02:37:55 np0005603609 nova_compute[221550]: 2026-01-31 07:37:55.162 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:55 np0005603609 kernel: tap8c92e27e-f0: entered promiscuous mode
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:55.166 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c92e27e-f0, col_values=(('external_ids', {'iface-id': 'b682c189-93d2-4c14-8b2a-bafbda6df8a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:37:55 np0005603609 nova_compute[221550]: 2026-01-31 07:37:55.167 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:55 np0005603609 ovn_controller[130359]: 2026-01-31T07:37:55Z|00101|binding|INFO|Releasing lport b682c189-93d2-4c14-8b2a-bafbda6df8a4 from this chassis (sb_readonly=0)
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:55.169 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8c92e27e-f16c-4df2-a299-60ef2ca44f53.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8c92e27e-f16c-4df2-a299-60ef2ca44f53.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:37:55 np0005603609 nova_compute[221550]: 2026-01-31 07:37:55.172 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:55.172 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc47bef-d32f-45a2-b3d6-f8e6a08d9e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:55.173 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-8c92e27e-f16c-4df2-a299-60ef2ca44f53
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/8c92e27e-f16c-4df2-a299-60ef2ca44f53.pid.haproxy
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 8c92e27e-f16c-4df2-a299-60ef2ca44f53
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:37:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:37:55.174 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'env', 'PROCESS_TAG=haproxy-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8c92e27e-f16c-4df2-a299-60ef2ca44f53.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:37:55 np0005603609 podman[234631]: 2026-01-31 07:37:55.487181427 +0000 UTC m=+0.047575515 container create 207baca9a485698b518111ca8dcf02c1f720951c1af64060cf97ee370359d083 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 02:37:55 np0005603609 systemd[1]: Started libpod-conmon-207baca9a485698b518111ca8dcf02c1f720951c1af64060cf97ee370359d083.scope.
Jan 31 02:37:55 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:37:55 np0005603609 nova_compute[221550]: 2026-01-31 07:37:55.536 221554 DEBUG nova.compute.manager [req-ad7ab928-488e-4ad5-ba2c-b878259e2c78 req-7b9bc887-a124-4ecf-847b-24ead947c9a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:37:55 np0005603609 nova_compute[221550]: 2026-01-31 07:37:55.536 221554 DEBUG oslo_concurrency.lockutils [req-ad7ab928-488e-4ad5-ba2c-b878259e2c78 req-7b9bc887-a124-4ecf-847b-24ead947c9a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:37:55 np0005603609 nova_compute[221550]: 2026-01-31 07:37:55.536 221554 DEBUG oslo_concurrency.lockutils [req-ad7ab928-488e-4ad5-ba2c-b878259e2c78 req-7b9bc887-a124-4ecf-847b-24ead947c9a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:37:55 np0005603609 nova_compute[221550]: 2026-01-31 07:37:55.537 221554 DEBUG oslo_concurrency.lockutils [req-ad7ab928-488e-4ad5-ba2c-b878259e2c78 req-7b9bc887-a124-4ecf-847b-24ead947c9a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:55 np0005603609 nova_compute[221550]: 2026-01-31 07:37:55.537 221554 DEBUG nova.compute.manager [req-ad7ab928-488e-4ad5-ba2c-b878259e2c78 req-7b9bc887-a124-4ecf-847b-24ead947c9a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Processing event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:37:55 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34e297e6133047fac1065760f47282aaf59b4f908e210e50bc6a797a1f3f30f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:37:55 np0005603609 podman[234631]: 2026-01-31 07:37:55.457193514 +0000 UTC m=+0.017587602 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:37:55 np0005603609 podman[234631]: 2026-01-31 07:37:55.553818278 +0000 UTC m=+0.114212356 container init 207baca9a485698b518111ca8dcf02c1f720951c1af64060cf97ee370359d083 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 02:37:55 np0005603609 podman[234631]: 2026-01-31 07:37:55.559315163 +0000 UTC m=+0.119709221 container start 207baca9a485698b518111ca8dcf02c1f720951c1af64060cf97ee370359d083 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 02:37:55 np0005603609 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[234647]: [NOTICE]   (234651) : New worker (234653) forked
Jan 31 02:37:55 np0005603609 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[234647]: [NOTICE]   (234651) : Loading success.
Jan 31 02:37:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:37:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:55.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:37:55 np0005603609 nova_compute[221550]: 2026-01-31 07:37:55.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:37:55 np0005603609 nova_compute[221550]: 2026-01-31 07:37:55.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:37:55 np0005603609 nova_compute[221550]: 2026-01-31 07:37:55.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:37:55 np0005603609 nova_compute[221550]: 2026-01-31 07:37:55.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:37:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:55.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.063 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845076.0627348, eed178fc-d0db-47ac-a368-0d3058e94697 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.063 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] VM Started (Lifecycle Event)#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.065 221554 DEBUG nova.compute.manager [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.068 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.070 221554 INFO nova.virt.libvirt.driver [-] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Instance spawned successfully.#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.070 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.090 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.097 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.101 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.102 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.102 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.103 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.104 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.104 221554 DEBUG nova.virt.libvirt.driver [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.119 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.135 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.135 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845076.062975, eed178fc-d0db-47ac-a368-0d3058e94697 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.135 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.164 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.169 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845076.067351, eed178fc-d0db-47ac-a368-0d3058e94697 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.169 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.191 221554 INFO nova.compute.manager [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Took 7.58 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.191 221554 DEBUG nova.compute.manager [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.193 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.206 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.239 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:37:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.276 221554 INFO nova.compute.manager [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Took 8.67 seconds to build instance.#033[00m
Jan 31 02:37:56 np0005603609 nova_compute[221550]: 2026-01-31 07:37:56.306 221554 DEBUG oslo_concurrency.lockutils [None req-db7a3826-3d40-4402-9279-157c12dfc9c3 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:57.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:57.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:57 np0005603609 nova_compute[221550]: 2026-01-31 07:37:57.879 221554 DEBUG nova.compute.manager [req-ce8de440-3d15-40d8-a18d-d8f05eda2447 req-ee956298-bec0-4984-9c52-94de61aa9d66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:37:57 np0005603609 nova_compute[221550]: 2026-01-31 07:37:57.879 221554 DEBUG oslo_concurrency.lockutils [req-ce8de440-3d15-40d8-a18d-d8f05eda2447 req-ee956298-bec0-4984-9c52-94de61aa9d66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:37:57 np0005603609 nova_compute[221550]: 2026-01-31 07:37:57.880 221554 DEBUG oslo_concurrency.lockutils [req-ce8de440-3d15-40d8-a18d-d8f05eda2447 req-ee956298-bec0-4984-9c52-94de61aa9d66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:37:57 np0005603609 nova_compute[221550]: 2026-01-31 07:37:57.880 221554 DEBUG oslo_concurrency.lockutils [req-ce8de440-3d15-40d8-a18d-d8f05eda2447 req-ee956298-bec0-4984-9c52-94de61aa9d66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:57 np0005603609 nova_compute[221550]: 2026-01-31 07:37:57.881 221554 DEBUG nova.compute.manager [req-ce8de440-3d15-40d8-a18d-d8f05eda2447 req-ee956298-bec0-4984-9c52-94de61aa9d66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] No waiting events found dispatching network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:37:57 np0005603609 nova_compute[221550]: 2026-01-31 07:37:57.881 221554 WARNING nova.compute.manager [req-ce8de440-3d15-40d8-a18d-d8f05eda2447 req-ee956298-bec0-4984-9c52-94de61aa9d66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received unexpected event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:37:58 np0005603609 nova_compute[221550]: 2026-01-31 07:37:58.130 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:37:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:59.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:37:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:59.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:01 np0005603609 nova_compute[221550]: 2026-01-31 07:38:01.121 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:01.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:38:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:01.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:38:03 np0005603609 nova_compute[221550]: 2026-01-31 07:38:03.180 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:03.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:03.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:04 np0005603609 nova_compute[221550]: 2026-01-31 07:38:04.648 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:04 np0005603609 nova_compute[221550]: 2026-01-31 07:38:04.649 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:04 np0005603609 nova_compute[221550]: 2026-01-31 07:38:04.674 221554 DEBUG nova.compute.manager [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:38:04 np0005603609 nova_compute[221550]: 2026-01-31 07:38:04.758 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:04 np0005603609 nova_compute[221550]: 2026-01-31 07:38:04.759 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:04 np0005603609 nova_compute[221550]: 2026-01-31 07:38:04.766 221554 DEBUG nova.virt.hardware [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:38:04 np0005603609 nova_compute[221550]: 2026-01-31 07:38:04.767 221554 INFO nova.compute.claims [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:38:04 np0005603609 nova_compute[221550]: 2026-01-31 07:38:04.933 221554 DEBUG oslo_concurrency.processutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:38:05 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4156463602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:38:05 np0005603609 nova_compute[221550]: 2026-01-31 07:38:05.327 221554 DEBUG oslo_concurrency.processutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:05 np0005603609 nova_compute[221550]: 2026-01-31 07:38:05.332 221554 DEBUG nova.compute.provider_tree [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:38:05 np0005603609 nova_compute[221550]: 2026-01-31 07:38:05.373 221554 DEBUG nova.scheduler.client.report [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:38:05 np0005603609 nova_compute[221550]: 2026-01-31 07:38:05.440 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:05 np0005603609 nova_compute[221550]: 2026-01-31 07:38:05.441 221554 DEBUG nova.compute.manager [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:38:05 np0005603609 nova_compute[221550]: 2026-01-31 07:38:05.615 221554 DEBUG nova.compute.manager [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:38:05 np0005603609 nova_compute[221550]: 2026-01-31 07:38:05.616 221554 DEBUG nova.network.neutron [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:38:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:05.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:38:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:05.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:38:05 np0005603609 nova_compute[221550]: 2026-01-31 07:38:05.722 221554 INFO nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:38:05 np0005603609 nova_compute[221550]: 2026-01-31 07:38:05.755 221554 DEBUG nova.compute.manager [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:38:05 np0005603609 nova_compute[221550]: 2026-01-31 07:38:05.946 221554 DEBUG nova.compute.manager [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:38:05 np0005603609 nova_compute[221550]: 2026-01-31 07:38:05.947 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:38:05 np0005603609 nova_compute[221550]: 2026-01-31 07:38:05.947 221554 INFO nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Creating image(s)#033[00m
Jan 31 02:38:05 np0005603609 nova_compute[221550]: 2026-01-31 07:38:05.970 221554 DEBUG nova.storage.rbd_utils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image 33cde392-20ea-4fd7-88d1-f66b9d14e19a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:38:05 np0005603609 nova_compute[221550]: 2026-01-31 07:38:05.995 221554 DEBUG nova.storage.rbd_utils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image 33cde392-20ea-4fd7-88d1-f66b9d14e19a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.018 221554 DEBUG nova.storage.rbd_utils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image 33cde392-20ea-4fd7-88d1-f66b9d14e19a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.021 221554 DEBUG oslo_concurrency.processutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.037 221554 DEBUG nova.policy [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8a44db09acbd4aeb990147dc979f0bfd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0554655ad0a48c8bf0551298dd31919', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.081 221554 DEBUG oslo_concurrency.processutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.082 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.082 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.082 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.103 221554 DEBUG nova.storage.rbd_utils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image 33cde392-20ea-4fd7-88d1-f66b9d14e19a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.106 221554 DEBUG oslo_concurrency.processutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 33cde392-20ea-4fd7-88d1-f66b9d14e19a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.123 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.319 221554 DEBUG oslo_concurrency.processutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 33cde392-20ea-4fd7-88d1-f66b9d14e19a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.395 221554 DEBUG nova.storage.rbd_utils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] resizing rbd image 33cde392-20ea-4fd7-88d1-f66b9d14e19a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.558 221554 DEBUG nova.objects.instance [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'migration_context' on Instance uuid 33cde392-20ea-4fd7-88d1-f66b9d14e19a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.591 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.592 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Ensure instance console log exists: /var/lib/nova/instances/33cde392-20ea-4fd7-88d1-f66b9d14e19a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.592 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.592 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:06 np0005603609 nova_compute[221550]: 2026-01-31 07:38:06.592 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:07 np0005603609 nova_compute[221550]: 2026-01-31 07:38:07.454 221554 DEBUG nova.network.neutron [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Successfully created port: e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:38:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:07.470 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:07.471 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:07.472 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:07.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:07.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:38:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:38:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:38:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:38:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:38:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:38:08 np0005603609 nova_compute[221550]: 2026-01-31 07:38:08.182 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:09 np0005603609 ovn_controller[130359]: 2026-01-31T07:38:09Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:af:41 10.100.0.7
Jan 31 02:38:09 np0005603609 ovn_controller[130359]: 2026-01-31T07:38:09Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:af:41 10.100.0.7
Jan 31 02:38:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:09.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:38:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:09.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:38:11 np0005603609 nova_compute[221550]: 2026-01-31 07:38:11.126 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:38:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:38:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:38:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:38:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:38:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:38:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:11.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:38:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:11.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:12 np0005603609 podman[235144]: 2026-01-31 07:38:12.204825101 +0000 UTC m=+0.078080701 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 02:38:12 np0005603609 podman[235143]: 2026-01-31 07:38:12.234689931 +0000 UTC m=+0.110699959 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:38:13 np0005603609 nova_compute[221550]: 2026-01-31 07:38:13.213 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:13 np0005603609 nova_compute[221550]: 2026-01-31 07:38:13.249 221554 DEBUG nova.network.neutron [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Successfully updated port: e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:38:13 np0005603609 nova_compute[221550]: 2026-01-31 07:38:13.329 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "refresh_cache-33cde392-20ea-4fd7-88d1-f66b9d14e19a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:38:13 np0005603609 nova_compute[221550]: 2026-01-31 07:38:13.330 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquired lock "refresh_cache-33cde392-20ea-4fd7-88d1-f66b9d14e19a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:38:13 np0005603609 nova_compute[221550]: 2026-01-31 07:38:13.330 221554 DEBUG nova.network.neutron [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:38:13 np0005603609 nova_compute[221550]: 2026-01-31 07:38:13.467 221554 DEBUG nova.compute.manager [req-2202f31a-7c13-484e-9822-f87a0376c6a2 req-9b554dc0-6d1a-497a-a79e-0c567e520839 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Received event network-changed-e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:38:13 np0005603609 nova_compute[221550]: 2026-01-31 07:38:13.468 221554 DEBUG nova.compute.manager [req-2202f31a-7c13-484e-9822-f87a0376c6a2 req-9b554dc0-6d1a-497a-a79e-0c567e520839 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Refreshing instance network info cache due to event network-changed-e917fc77-9783-4d44-9cc1-c8ddd25bb8e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:38:13 np0005603609 nova_compute[221550]: 2026-01-31 07:38:13.468 221554 DEBUG oslo_concurrency.lockutils [req-2202f31a-7c13-484e-9822-f87a0376c6a2 req-9b554dc0-6d1a-497a-a79e-0c567e520839 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-33cde392-20ea-4fd7-88d1-f66b9d14e19a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:38:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 02:38:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:13.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 02:38:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:38:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:13.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:38:13 np0005603609 nova_compute[221550]: 2026-01-31 07:38:13.729 221554 DEBUG nova.network.neutron [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.170 221554 DEBUG nova.network.neutron [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Updating instance_info_cache with network_info: [{"id": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "address": "fa:16:3e:d7:8d:be", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape917fc77-97", "ovs_interfaceid": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.426 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Releasing lock "refresh_cache-33cde392-20ea-4fd7-88d1-f66b9d14e19a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.427 221554 DEBUG nova.compute.manager [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Instance network_info: |[{"id": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "address": "fa:16:3e:d7:8d:be", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape917fc77-97", "ovs_interfaceid": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.428 221554 DEBUG oslo_concurrency.lockutils [req-2202f31a-7c13-484e-9822-f87a0376c6a2 req-9b554dc0-6d1a-497a-a79e-0c567e520839 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-33cde392-20ea-4fd7-88d1-f66b9d14e19a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.428 221554 DEBUG nova.network.neutron [req-2202f31a-7c13-484e-9822-f87a0376c6a2 req-9b554dc0-6d1a-497a-a79e-0c567e520839 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Refreshing network info cache for port e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.433 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Start _get_guest_xml network_info=[{"id": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "address": "fa:16:3e:d7:8d:be", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape917fc77-97", "ovs_interfaceid": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.441 221554 WARNING nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.450 221554 DEBUG nova.virt.libvirt.host [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.450 221554 DEBUG nova.virt.libvirt.host [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.454 221554 DEBUG nova.virt.libvirt.host [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.454 221554 DEBUG nova.virt.libvirt.host [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.455 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.455 221554 DEBUG nova.virt.hardware [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.456 221554 DEBUG nova.virt.hardware [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.456 221554 DEBUG nova.virt.hardware [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.457 221554 DEBUG nova.virt.hardware [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.457 221554 DEBUG nova.virt.hardware [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.457 221554 DEBUG nova.virt.hardware [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.457 221554 DEBUG nova.virt.hardware [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.458 221554 DEBUG nova.virt.hardware [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.458 221554 DEBUG nova.virt.hardware [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.458 221554 DEBUG nova.virt.hardware [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.458 221554 DEBUG nova.virt.hardware [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.461 221554 DEBUG oslo_concurrency.processutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:15.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:15.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:38:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3568645214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.925 221554 DEBUG oslo_concurrency.processutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.957 221554 DEBUG nova.storage.rbd_utils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image 33cde392-20ea-4fd7-88d1-f66b9d14e19a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:38:15 np0005603609 nova_compute[221550]: 2026-01-31 07:38:15.961 221554 DEBUG oslo_concurrency.processutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.129 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:38:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4036725515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.368 221554 DEBUG oslo_concurrency.processutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.370 221554 DEBUG nova.virt.libvirt.vif [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:38:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1844120546',display_name='tempest-ServersAdminTestJSON-server-1844120546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1844120546',id=36,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-9kljyvde',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTestJSON-1156607975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:38:05Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=33cde392-20ea-4fd7-88d1-f66b9d14e19a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "address": "fa:16:3e:d7:8d:be", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape917fc77-97", "ovs_interfaceid": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.371 221554 DEBUG nova.network.os_vif_util [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "address": "fa:16:3e:d7:8d:be", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape917fc77-97", "ovs_interfaceid": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.372 221554 DEBUG nova.network.os_vif_util [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=e917fc77-9783-4d44-9cc1-c8ddd25bb8e4,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape917fc77-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.374 221554 DEBUG nova.objects.instance [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'pci_devices' on Instance uuid 33cde392-20ea-4fd7-88d1-f66b9d14e19a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.394 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  <uuid>33cde392-20ea-4fd7-88d1-f66b9d14e19a</uuid>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  <name>instance-00000024</name>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServersAdminTestJSON-server-1844120546</nova:name>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:38:15</nova:creationTime>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        <nova:user uuid="8a44db09acbd4aeb990147dc979f0bfd">tempest-ServersAdminTestJSON-1156607975-project-member</nova:user>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        <nova:project uuid="b0554655ad0a48c8bf0551298dd31919">tempest-ServersAdminTestJSON-1156607975</nova:project>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        <nova:port uuid="e917fc77-9783-4d44-9cc1-c8ddd25bb8e4">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <entry name="serial">33cde392-20ea-4fd7-88d1-f66b9d14e19a</entry>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <entry name="uuid">33cde392-20ea-4fd7-88d1-f66b9d14e19a</entry>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/33cde392-20ea-4fd7-88d1-f66b9d14e19a_disk">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/33cde392-20ea-4fd7-88d1-f66b9d14e19a_disk.config">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:d7:8d:be"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <target dev="tape917fc77-97"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/33cde392-20ea-4fd7-88d1-f66b9d14e19a/console.log" append="off"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:38:16 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:38:16 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:38:16 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:38:16 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.395 221554 DEBUG nova.compute.manager [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Preparing to wait for external event network-vif-plugged-e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.395 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.395 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.395 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.396 221554 DEBUG nova.virt.libvirt.vif [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:38:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1844120546',display_name='tempest-ServersAdminTestJSON-server-1844120546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1844120546',id=36,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-9kljyvde',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTestJSON-1156607975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:38:05Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=33cde392-20ea-4fd7-88d1-f66b9d14e19a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "address": "fa:16:3e:d7:8d:be", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape917fc77-97", "ovs_interfaceid": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.396 221554 DEBUG nova.network.os_vif_util [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "address": "fa:16:3e:d7:8d:be", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape917fc77-97", "ovs_interfaceid": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.397 221554 DEBUG nova.network.os_vif_util [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=e917fc77-9783-4d44-9cc1-c8ddd25bb8e4,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape917fc77-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.397 221554 DEBUG os_vif [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=e917fc77-9783-4d44-9cc1-c8ddd25bb8e4,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape917fc77-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.400 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.401 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.401 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.404 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.405 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape917fc77-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.405 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape917fc77-97, col_values=(('external_ids', {'iface-id': 'e917fc77-9783-4d44-9cc1-c8ddd25bb8e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:8d:be', 'vm-uuid': '33cde392-20ea-4fd7-88d1-f66b9d14e19a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.407 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:16 np0005603609 NetworkManager[49064]: <info>  [1769845096.4081] manager: (tape917fc77-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.409 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.414 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.416 221554 INFO os_vif [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=e917fc77-9783-4d44-9cc1-c8ddd25bb8e4,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape917fc77-97')#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.550 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.550 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.551 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No VIF found with MAC fa:16:3e:d7:8d:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.551 221554 INFO nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Using config drive#033[00m
Jan 31 02:38:16 np0005603609 nova_compute[221550]: 2026-01-31 07:38:16.577 221554 DEBUG nova.storage.rbd_utils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image 33cde392-20ea-4fd7-88d1-f66b9d14e19a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:38:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:38:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:38:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e156 e156: 3 total, 3 up, 3 in
Jan 31 02:38:17 np0005603609 nova_compute[221550]: 2026-01-31 07:38:17.590 221554 INFO nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Creating config drive at /var/lib/nova/instances/33cde392-20ea-4fd7-88d1-f66b9d14e19a/disk.config#033[00m
Jan 31 02:38:17 np0005603609 nova_compute[221550]: 2026-01-31 07:38:17.594 221554 DEBUG oslo_concurrency.processutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33cde392-20ea-4fd7-88d1-f66b9d14e19a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpcua425mk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:17.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:17 np0005603609 nova_compute[221550]: 2026-01-31 07:38:17.723 221554 DEBUG oslo_concurrency.processutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33cde392-20ea-4fd7-88d1-f66b9d14e19a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpcua425mk" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:17.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:17 np0005603609 nova_compute[221550]: 2026-01-31 07:38:17.757 221554 DEBUG nova.storage.rbd_utils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image 33cde392-20ea-4fd7-88d1-f66b9d14e19a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:38:17 np0005603609 nova_compute[221550]: 2026-01-31 07:38:17.761 221554 DEBUG oslo_concurrency.processutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33cde392-20ea-4fd7-88d1-f66b9d14e19a/disk.config 33cde392-20ea-4fd7-88d1-f66b9d14e19a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:17 np0005603609 nova_compute[221550]: 2026-01-31 07:38:17.933 221554 DEBUG oslo_concurrency.processutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33cde392-20ea-4fd7-88d1-f66b9d14e19a/disk.config 33cde392-20ea-4fd7-88d1-f66b9d14e19a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:17 np0005603609 nova_compute[221550]: 2026-01-31 07:38:17.934 221554 INFO nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Deleting local config drive /var/lib/nova/instances/33cde392-20ea-4fd7-88d1-f66b9d14e19a/disk.config because it was imported into RBD.#033[00m
Jan 31 02:38:17 np0005603609 kernel: tape917fc77-97: entered promiscuous mode
Jan 31 02:38:17 np0005603609 NetworkManager[49064]: <info>  [1769845097.9970] manager: (tape917fc77-97): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Jan 31 02:38:17 np0005603609 nova_compute[221550]: 2026-01-31 07:38:17.996 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:17 np0005603609 ovn_controller[130359]: 2026-01-31T07:38:17Z|00102|binding|INFO|Claiming lport e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 for this chassis.
Jan 31 02:38:17 np0005603609 ovn_controller[130359]: 2026-01-31T07:38:17Z|00103|binding|INFO|e917fc77-9783-4d44-9cc1-c8ddd25bb8e4: Claiming fa:16:3e:d7:8d:be 10.100.0.4
Jan 31 02:38:18 np0005603609 ovn_controller[130359]: 2026-01-31T07:38:18Z|00104|binding|INFO|Setting lport e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 ovn-installed in OVS
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.009 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:18 np0005603609 ovn_controller[130359]: 2026-01-31T07:38:18Z|00105|binding|INFO|Setting lport e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 up in Southbound
Jan 31 02:38:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:18.011 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:8d:be 10.100.0.4'], port_security=['fa:16:3e:d7:8d:be 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '33cde392-20ea-4fd7-88d1-f66b9d14e19a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0554655ad0a48c8bf0551298dd31919', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b25af7c2-ecfe-428a-9b4f-51874d47219e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3f23c6f-f389-4487-9d19-0cf4a6c28cbc, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=e917fc77-9783-4d44-9cc1-c8ddd25bb8e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.011 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:18.012 140058 INFO neutron.agent.ovn.metadata.agent [-] Port e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 in datapath 8c92e27e-f16c-4df2-a299-60ef2ca44f53 bound to our chassis#033[00m
Jan 31 02:38:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:18.013 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c92e27e-f16c-4df2-a299-60ef2ca44f53#033[00m
Jan 31 02:38:18 np0005603609 systemd-udevd[235374]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:38:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:18.029 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d8de987b-b596-4ea1-a18d-e6a4dd056fc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:18 np0005603609 systemd-machined[190912]: New machine qemu-18-instance-00000024.
Jan 31 02:38:18 np0005603609 NetworkManager[49064]: <info>  [1769845098.0379] device (tape917fc77-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:38:18 np0005603609 NetworkManager[49064]: <info>  [1769845098.0387] device (tape917fc77-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:38:18 np0005603609 systemd[1]: Started Virtual Machine qemu-18-instance-00000024.
Jan 31 02:38:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:18.053 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[be87aa93-3239-4c73-953b-2dd3fb6cd348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:18.055 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc78064-4ef7-4816-8acf-02a650dfc161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:18.079 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d2b10436-ca8b-4261-a690-b72266018ae7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:18.090 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f276acd8-7274-47f2-98ba-53916e2db077]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c92e27e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:62:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531475, 'reachable_time': 25068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235386, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:18.106 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6f9a4c01-5650-4fe4-92b0-fa3313e5a902]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c92e27e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531483, 'tstamp': 531483}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235387, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c92e27e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531485, 'tstamp': 531485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235387, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:18.108 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c92e27e-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.110 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.111 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:18.112 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c92e27e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:18.113 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:38:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:18.113 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c92e27e-f0, col_values=(('external_ids', {'iface-id': 'b682c189-93d2-4c14-8b2a-bafbda6df8a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:18.113 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:38:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e157 e157: 3 total, 3 up, 3 in
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.733 221554 DEBUG nova.compute.manager [req-d37ba552-5ac9-4e1b-bcb7-5ba3fc0b14a5 req-95fb3460-8c1d-41bf-8d36-540277d203f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Received event network-vif-plugged-e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.734 221554 DEBUG oslo_concurrency.lockutils [req-d37ba552-5ac9-4e1b-bcb7-5ba3fc0b14a5 req-95fb3460-8c1d-41bf-8d36-540277d203f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.734 221554 DEBUG oslo_concurrency.lockutils [req-d37ba552-5ac9-4e1b-bcb7-5ba3fc0b14a5 req-95fb3460-8c1d-41bf-8d36-540277d203f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.736 221554 DEBUG oslo_concurrency.lockutils [req-d37ba552-5ac9-4e1b-bcb7-5ba3fc0b14a5 req-95fb3460-8c1d-41bf-8d36-540277d203f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.737 221554 DEBUG nova.compute.manager [req-d37ba552-5ac9-4e1b-bcb7-5ba3fc0b14a5 req-95fb3460-8c1d-41bf-8d36-540277d203f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Processing event network-vif-plugged-e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.738 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845098.7334146, 33cde392-20ea-4fd7-88d1-f66b9d14e19a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.738 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] VM Started (Lifecycle Event)#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.742 221554 DEBUG nova.compute.manager [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.748 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.752 221554 DEBUG nova.network.neutron [req-2202f31a-7c13-484e-9822-f87a0376c6a2 req-9b554dc0-6d1a-497a-a79e-0c567e520839 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Updated VIF entry in instance network info cache for port e917fc77-9783-4d44-9cc1-c8ddd25bb8e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.753 221554 DEBUG nova.network.neutron [req-2202f31a-7c13-484e-9822-f87a0376c6a2 req-9b554dc0-6d1a-497a-a79e-0c567e520839 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Updating instance_info_cache with network_info: [{"id": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "address": "fa:16:3e:d7:8d:be", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape917fc77-97", "ovs_interfaceid": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.761 221554 INFO nova.virt.libvirt.driver [-] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Instance spawned successfully.#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.762 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.788 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.795 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.798 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.799 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.799 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.800 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.800 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.801 221554 DEBUG nova.virt.libvirt.driver [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.820 221554 DEBUG oslo_concurrency.lockutils [req-2202f31a-7c13-484e-9822-f87a0376c6a2 req-9b554dc0-6d1a-497a-a79e-0c567e520839 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-33cde392-20ea-4fd7-88d1-f66b9d14e19a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.855 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.855 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845098.734204, 33cde392-20ea-4fd7-88d1-f66b9d14e19a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.855 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.876 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.880 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845098.7479281, 33cde392-20ea-4fd7-88d1-f66b9d14e19a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.881 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.889 221554 INFO nova.compute.manager [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Took 12.94 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.889 221554 DEBUG nova.compute.manager [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.903 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.907 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.931 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:38:18 np0005603609 nova_compute[221550]: 2026-01-31 07:38:18.985 221554 INFO nova.compute.manager [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Took 14.25 seconds to build instance.#033[00m
Jan 31 02:38:19 np0005603609 nova_compute[221550]: 2026-01-31 07:38:19.027 221554 DEBUG oslo_concurrency.lockutils [None req-68e75842-bd75-453b-8a11-19a59cb888bf 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e158 e158: 3 total, 3 up, 3 in
Jan 31 02:38:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:38:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:19.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:38:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:19.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:20 np0005603609 nova_compute[221550]: 2026-01-31 07:38:20.885 221554 DEBUG nova.compute.manager [req-c04d26ba-44bc-4493-a056-df9027610d6b req-ea7c9aa6-6fe7-42b3-a177-154cf9e5edcc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Received event network-vif-plugged-e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:38:20 np0005603609 nova_compute[221550]: 2026-01-31 07:38:20.886 221554 DEBUG oslo_concurrency.lockutils [req-c04d26ba-44bc-4493-a056-df9027610d6b req-ea7c9aa6-6fe7-42b3-a177-154cf9e5edcc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:20 np0005603609 nova_compute[221550]: 2026-01-31 07:38:20.887 221554 DEBUG oslo_concurrency.lockutils [req-c04d26ba-44bc-4493-a056-df9027610d6b req-ea7c9aa6-6fe7-42b3-a177-154cf9e5edcc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:20 np0005603609 nova_compute[221550]: 2026-01-31 07:38:20.887 221554 DEBUG oslo_concurrency.lockutils [req-c04d26ba-44bc-4493-a056-df9027610d6b req-ea7c9aa6-6fe7-42b3-a177-154cf9e5edcc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:20 np0005603609 nova_compute[221550]: 2026-01-31 07:38:20.887 221554 DEBUG nova.compute.manager [req-c04d26ba-44bc-4493-a056-df9027610d6b req-ea7c9aa6-6fe7-42b3-a177-154cf9e5edcc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] No waiting events found dispatching network-vif-plugged-e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:38:20 np0005603609 nova_compute[221550]: 2026-01-31 07:38:20.887 221554 WARNING nova.compute.manager [req-c04d26ba-44bc-4493-a056-df9027610d6b req-ea7c9aa6-6fe7-42b3-a177-154cf9e5edcc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Received unexpected event network-vif-plugged-e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:38:21 np0005603609 nova_compute[221550]: 2026-01-31 07:38:21.163 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:21 np0005603609 nova_compute[221550]: 2026-01-31 07:38:21.407 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:21.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:21.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.003000071s ======
Jan 31 02:38:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:23.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Jan 31 02:38:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e159 e159: 3 total, 3 up, 3 in
Jan 31 02:38:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:23.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:24.971 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:38:24 np0005603609 nova_compute[221550]: 2026-01-31 07:38:24.971 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:24.972 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:38:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:25.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:25.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:26 np0005603609 nova_compute[221550]: 2026-01-31 07:38:26.166 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:26 np0005603609 nova_compute[221550]: 2026-01-31 07:38:26.410 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:27.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:38:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:27.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:38:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e160 e160: 3 total, 3 up, 3 in
Jan 31 02:38:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:29.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:38:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:29.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:38:31 np0005603609 nova_compute[221550]: 2026-01-31 07:38:31.168 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:31 np0005603609 nova_compute[221550]: 2026-01-31 07:38:31.412 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:31 np0005603609 ovn_controller[130359]: 2026-01-31T07:38:31Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d7:8d:be 10.100.0.4
Jan 31 02:38:31 np0005603609 ovn_controller[130359]: 2026-01-31T07:38:31Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d7:8d:be 10.100.0.4
Jan 31 02:38:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:38:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:31.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:38:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:31.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:33.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e161 e161: 3 total, 3 up, 3 in
Jan 31 02:38:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:33.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:34.974 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:35.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:35.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:36 np0005603609 nova_compute[221550]: 2026-01-31 07:38:36.209 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:36 np0005603609 nova_compute[221550]: 2026-01-31 07:38:36.415 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:37.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:37.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:38 np0005603609 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 02:38:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:39.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:39.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:38:40 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3320545784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:38:41 np0005603609 nova_compute[221550]: 2026-01-31 07:38:41.214 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:41 np0005603609 nova_compute[221550]: 2026-01-31 07:38:41.417 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:38:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:41.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:38:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:38:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:41.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:38:43 np0005603609 podman[235435]: 2026-01-31 07:38:43.1720607 +0000 UTC m=+0.056488193 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:38:43 np0005603609 podman[235434]: 2026-01-31 07:38:43.194071069 +0000 UTC m=+0.078793029 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:38:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:38:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:43.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:38:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:43.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:38:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3533633623' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:38:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:38:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3533633623' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:38:45 np0005603609 nova_compute[221550]: 2026-01-31 07:38:45.689 221554 INFO nova.compute.manager [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Rebuilding instance#033[00m
Jan 31 02:38:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:45.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:38:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:45.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:38:46 np0005603609 nova_compute[221550]: 2026-01-31 07:38:46.132 221554 DEBUG nova.objects.instance [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'trusted_certs' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:38:46 np0005603609 nova_compute[221550]: 2026-01-31 07:38:46.210 221554 DEBUG nova.compute.manager [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:38:46 np0005603609 nova_compute[221550]: 2026-01-31 07:38:46.216 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:46 np0005603609 nova_compute[221550]: 2026-01-31 07:38:46.418 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:46 np0005603609 nova_compute[221550]: 2026-01-31 07:38:46.504 221554 DEBUG nova.objects.instance [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'pci_requests' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:38:46 np0005603609 nova_compute[221550]: 2026-01-31 07:38:46.529 221554 DEBUG nova.objects.instance [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'pci_devices' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:38:46 np0005603609 nova_compute[221550]: 2026-01-31 07:38:46.581 221554 DEBUG nova.objects.instance [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'resources' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:38:46 np0005603609 nova_compute[221550]: 2026-01-31 07:38:46.675 221554 DEBUG nova.objects.instance [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'migration_context' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:38:46 np0005603609 nova_compute[221550]: 2026-01-31 07:38:46.722 221554 DEBUG nova.objects.instance [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 02:38:46 np0005603609 nova_compute[221550]: 2026-01-31 07:38:46.727 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 02:38:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:47.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:38:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:47.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:38:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e162 e162: 3 total, 3 up, 3 in
Jan 31 02:38:49 np0005603609 kernel: tapbf338813-3c (unregistering): left promiscuous mode
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:38:49 np0005603609 NetworkManager[49064]: <info>  [1769845129.6597] device (tapbf338813-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.666 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:49 np0005603609 ovn_controller[130359]: 2026-01-31T07:38:49Z|00106|binding|INFO|Releasing lport bf338813-3c1d-456b-a6fc-b4b2c8235740 from this chassis (sb_readonly=0)
Jan 31 02:38:49 np0005603609 ovn_controller[130359]: 2026-01-31T07:38:49Z|00107|binding|INFO|Setting lport bf338813-3c1d-456b-a6fc-b4b2c8235740 down in Southbound
Jan 31 02:38:49 np0005603609 ovn_controller[130359]: 2026-01-31T07:38:49Z|00108|binding|INFO|Removing iface tapbf338813-3c ovn-installed in OVS
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.675 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:49 np0005603609 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000021.scope: Deactivated successfully.
Jan 31 02:38:49 np0005603609 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000021.scope: Consumed 15.511s CPU time.
Jan 31 02:38:49 np0005603609 systemd-machined[190912]: Machine qemu-17-instance-00000021 terminated.
Jan 31 02:38:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:49.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:49.753 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:af:41 10.100.0.7'], port_security=['fa:16:3e:e7:af:41 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eed178fc-d0db-47ac-a368-0d3058e94697', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0554655ad0a48c8bf0551298dd31919', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b25af7c2-ecfe-428a-9b4f-51874d47219e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3f23c6f-f389-4487-9d19-0cf4a6c28cbc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=bf338813-3c1d-456b-a6fc-b4b2c8235740) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:38:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:49.756 140058 INFO neutron.agent.ovn.metadata.agent [-] Port bf338813-3c1d-456b-a6fc-b4b2c8235740 in datapath 8c92e27e-f16c-4df2-a299-60ef2ca44f53 unbound from our chassis#033[00m
Jan 31 02:38:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:49.759 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c92e27e-f16c-4df2-a299-60ef2ca44f53#033[00m
Jan 31 02:38:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:49.769 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[70894396-b56d-4943-936d-ad19aca603be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:49.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:49.788 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[276123e9-7499-426f-868a-fda4de0fa6f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:49.790 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[dcfae33d-a2e8-42d0-bc66-1a6c474fedac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:49.806 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[3187628f-ee45-4437-9ae1-5df094ad3b83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:49.818 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cb207b57-9157-43c8-bfaf-c9277b26f4cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c92e27e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:62:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531475, 'reachable_time': 25068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235492, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:49.832 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[de817c52-095d-4b11-8ba2-2c723e50ce18]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c92e27e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531483, 'tstamp': 531483}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235493, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c92e27e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531485, 'tstamp': 531485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235493, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:49.833 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c92e27e-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.870 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:49.875 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c92e27e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.875 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:49.876 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:38:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:49.876 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c92e27e-f0, col_values=(('external_ids', {'iface-id': 'b682c189-93d2-4c14-8b2a-bafbda6df8a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:49.877 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.897 221554 INFO nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.903 221554 INFO nova.virt.libvirt.driver [-] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Instance destroyed successfully.#033[00m
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.910 221554 INFO nova.virt.libvirt.driver [-] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Instance destroyed successfully.#033[00m
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.911 221554 DEBUG nova.virt.libvirt.vif [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:37:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-540870110',display_name='tempest-ServersAdminTestJSON-server-540870110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-540870110',id=33,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:37:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-naihjech',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTestJSON-1156607975-project-member'},tags=<?>,tas
k_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:38:43Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=eed178fc-d0db-47ac-a368-0d3058e94697,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.912 221554 DEBUG nova.network.os_vif_util [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.913 221554 DEBUG nova.network.os_vif_util [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.914 221554 DEBUG os_vif [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.916 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.916 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf338813-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.918 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.919 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:49 np0005603609 nova_compute[221550]: 2026-01-31 07:38:49.923 221554 INFO os_vif [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c')#033[00m
Jan 31 02:38:51 np0005603609 nova_compute[221550]: 2026-01-31 07:38:51.061 221554 DEBUG nova.compute.manager [req-05bf8b2d-9433-4e9b-9384-e1f994e94541 req-149b1622-6acf-4d0b-9119-e33a95598e21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received event network-vif-unplugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:38:51 np0005603609 nova_compute[221550]: 2026-01-31 07:38:51.061 221554 DEBUG oslo_concurrency.lockutils [req-05bf8b2d-9433-4e9b-9384-e1f994e94541 req-149b1622-6acf-4d0b-9119-e33a95598e21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:51 np0005603609 nova_compute[221550]: 2026-01-31 07:38:51.062 221554 DEBUG oslo_concurrency.lockutils [req-05bf8b2d-9433-4e9b-9384-e1f994e94541 req-149b1622-6acf-4d0b-9119-e33a95598e21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:51 np0005603609 nova_compute[221550]: 2026-01-31 07:38:51.062 221554 DEBUG oslo_concurrency.lockutils [req-05bf8b2d-9433-4e9b-9384-e1f994e94541 req-149b1622-6acf-4d0b-9119-e33a95598e21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:51 np0005603609 nova_compute[221550]: 2026-01-31 07:38:51.062 221554 DEBUG nova.compute.manager [req-05bf8b2d-9433-4e9b-9384-e1f994e94541 req-149b1622-6acf-4d0b-9119-e33a95598e21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] No waiting events found dispatching network-vif-unplugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:38:51 np0005603609 nova_compute[221550]: 2026-01-31 07:38:51.062 221554 WARNING nova.compute.manager [req-05bf8b2d-9433-4e9b-9384-e1f994e94541 req-149b1622-6acf-4d0b-9119-e33a95598e21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received unexpected event network-vif-unplugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 for instance with vm_state error and task_state rebuilding.#033[00m
Jan 31 02:38:51 np0005603609 nova_compute[221550]: 2026-01-31 07:38:51.219 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:51 np0005603609 nova_compute[221550]: 2026-01-31 07:38:51.375 221554 INFO nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Deleting instance files /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697_del#033[00m
Jan 31 02:38:51 np0005603609 nova_compute[221550]: 2026-01-31 07:38:51.377 221554 INFO nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Deletion of /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697_del complete#033[00m
Jan 31 02:38:51 np0005603609 nova_compute[221550]: 2026-01-31 07:38:51.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:38:51 np0005603609 nova_compute[221550]: 2026-01-31 07:38:51.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:38:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:38:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:51.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:38:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:38:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:51.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:38:52 np0005603609 nova_compute[221550]: 2026-01-31 07:38:52.015 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:52 np0005603609 nova_compute[221550]: 2026-01-31 07:38:52.015 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:52 np0005603609 nova_compute[221550]: 2026-01-31 07:38:52.016 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:52 np0005603609 nova_compute[221550]: 2026-01-31 07:38:52.016 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:38:52 np0005603609 nova_compute[221550]: 2026-01-31 07:38:52.016 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:52 np0005603609 nova_compute[221550]: 2026-01-31 07:38:52.427 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:38:52 np0005603609 nova_compute[221550]: 2026-01-31 07:38:52.428 221554 INFO nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Creating image(s)#033[00m
Jan 31 02:38:52 np0005603609 nova_compute[221550]: 2026-01-31 07:38:52.459 221554 DEBUG nova.storage.rbd_utils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:38:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:38:52 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3921444299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:38:52 np0005603609 nova_compute[221550]: 2026-01-31 07:38:52.491 221554 DEBUG nova.storage.rbd_utils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:38:52 np0005603609 nova_compute[221550]: 2026-01-31 07:38:52.524 221554 DEBUG nova.storage.rbd_utils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:38:52 np0005603609 nova_compute[221550]: 2026-01-31 07:38:52.528 221554 DEBUG oslo_concurrency.lockutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:52 np0005603609 nova_compute[221550]: 2026-01-31 07:38:52.529 221554 DEBUG oslo_concurrency.lockutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:52 np0005603609 nova_compute[221550]: 2026-01-31 07:38:52.535 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:52 np0005603609 nova_compute[221550]: 2026-01-31 07:38:52.720 221554 DEBUG nova.virt.libvirt.imagebackend [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/40cf2ff3-f7ff-4843-b4ab-b7dcc843006f/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/40cf2ff3-f7ff-4843-b4ab-b7dcc843006f/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.429 221554 DEBUG nova.compute.manager [req-057f9c4f-5c55-423d-99af-406d7847e488 req-aee60327-a578-47e7-a3a4-87a718e4d1e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.430 221554 DEBUG oslo_concurrency.lockutils [req-057f9c4f-5c55-423d-99af-406d7847e488 req-aee60327-a578-47e7-a3a4-87a718e4d1e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.430 221554 DEBUG oslo_concurrency.lockutils [req-057f9c4f-5c55-423d-99af-406d7847e488 req-aee60327-a578-47e7-a3a4-87a718e4d1e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.430 221554 DEBUG oslo_concurrency.lockutils [req-057f9c4f-5c55-423d-99af-406d7847e488 req-aee60327-a578-47e7-a3a4-87a718e4d1e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.431 221554 DEBUG nova.compute.manager [req-057f9c4f-5c55-423d-99af-406d7847e488 req-aee60327-a578-47e7-a3a4-87a718e4d1e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] No waiting events found dispatching network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.431 221554 WARNING nova.compute.manager [req-057f9c4f-5c55-423d-99af-406d7847e488 req-aee60327-a578-47e7-a3a4-87a718e4d1e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received unexpected event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 for instance with vm_state error and task_state rebuild_spawning.#033[00m
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.442 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.443 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:38:53.712526) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845133712619, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2374, "num_deletes": 259, "total_data_size": 5455037, "memory_usage": 5525456, "flush_reason": "Manual Compaction"}
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Jan 31 02:38:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:53.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845133746851, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3527382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25465, "largest_seqno": 27834, "table_properties": {"data_size": 3517726, "index_size": 6022, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20633, "raw_average_key_size": 20, "raw_value_size": 3497941, "raw_average_value_size": 3439, "num_data_blocks": 265, "num_entries": 1017, "num_filter_entries": 1017, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844960, "oldest_key_time": 1769844960, "file_creation_time": 1769845133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 34355 microseconds, and 8816 cpu microseconds.
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:38:53.746897) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3527382 bytes OK
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:38:53.746915) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:38:53.750442) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:38:53.750493) EVENT_LOG_v1 {"time_micros": 1769845133750481, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:38:53.750519) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 5444232, prev total WAL file size 5444232, number of live WAL files 2.
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:38:53.751949) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3444KB)], [51(8785KB)]
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845133752033, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 12523716, "oldest_snapshot_seqno": -1}
Jan 31 02:38:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:53.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.777 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.779 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4634MB free_disk=20.795711517333984GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.779 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.780 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5454 keys, 12404472 bytes, temperature: kUnknown
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845133845666, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 12404472, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12363661, "index_size": 26045, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 137229, "raw_average_key_size": 25, "raw_value_size": 12261311, "raw_average_value_size": 2248, "num_data_blocks": 1075, "num_entries": 5454, "num_filter_entries": 5454, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769845133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:38:53.845907) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 12404472 bytes
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:38:53.863504) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.7 rd, 132.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 8.6 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(7.1) write-amplify(3.5) OK, records in: 5991, records dropped: 537 output_compression: NoCompression
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:38:53.863529) EVENT_LOG_v1 {"time_micros": 1769845133863518, "job": 30, "event": "compaction_finished", "compaction_time_micros": 93699, "compaction_time_cpu_micros": 36295, "output_level": 6, "num_output_files": 1, "total_output_size": 12404472, "num_input_records": 5991, "num_output_records": 5454, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845133864103, "job": 30, "event": "table_file_deletion", "file_number": 53}
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845133865125, "job": 30, "event": "table_file_deletion", "file_number": 51}
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:38:53.751769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:38:53.865315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:38:53.865323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:38:53.865326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:38:53.865329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:38:53 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:38:53.865332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.929 221554 DEBUG oslo_concurrency.processutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.988 221554 DEBUG oslo_concurrency.processutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.part --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.991 221554 DEBUG nova.virt.images [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] 40cf2ff3-f7ff-4843-b4ab-b7dcc843006f was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.992 221554 DEBUG nova.privsep.utils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 31 02:38:53 np0005603609 nova_compute[221550]: 2026-01-31 07:38:53.992 221554 DEBUG oslo_concurrency.processutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.part /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:54 np0005603609 nova_compute[221550]: 2026-01-31 07:38:54.190 221554 DEBUG oslo_concurrency.processutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.part /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.converted" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:54 np0005603609 nova_compute[221550]: 2026-01-31 07:38:54.194 221554 DEBUG oslo_concurrency.processutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:54 np0005603609 nova_compute[221550]: 2026-01-31 07:38:54.268 221554 DEBUG oslo_concurrency.processutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.converted --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:54 np0005603609 nova_compute[221550]: 2026-01-31 07:38:54.270 221554 DEBUG oslo_concurrency.lockutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:54 np0005603609 nova_compute[221550]: 2026-01-31 07:38:54.298 221554 DEBUG nova.storage.rbd_utils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:38:54 np0005603609 nova_compute[221550]: 2026-01-31 07:38:54.301 221554 DEBUG oslo_concurrency.processutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c eed178fc-d0db-47ac-a368-0d3058e94697_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:54 np0005603609 nova_compute[221550]: 2026-01-31 07:38:54.364 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance eed178fc-d0db-47ac-a368-0d3058e94697 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:38:54 np0005603609 nova_compute[221550]: 2026-01-31 07:38:54.365 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 33cde392-20ea-4fd7-88d1-f66b9d14e19a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:38:54 np0005603609 nova_compute[221550]: 2026-01-31 07:38:54.365 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:38:54 np0005603609 nova_compute[221550]: 2026-01-31 07:38:54.365 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:38:54 np0005603609 nova_compute[221550]: 2026-01-31 07:38:54.416 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:38:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1990709756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:38:54 np0005603609 nova_compute[221550]: 2026-01-31 07:38:54.860 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:54 np0005603609 nova_compute[221550]: 2026-01-31 07:38:54.865 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:38:54 np0005603609 nova_compute[221550]: 2026-01-31 07:38:54.919 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:55 np0005603609 nova_compute[221550]: 2026-01-31 07:38:55.028 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:38:55 np0005603609 nova_compute[221550]: 2026-01-31 07:38:55.189 221554 DEBUG oslo_concurrency.processutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c eed178fc-d0db-47ac-a368-0d3058e94697_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.889s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:55 np0005603609 nova_compute[221550]: 2026-01-31 07:38:55.270 221554 DEBUG nova.storage.rbd_utils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] resizing rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:38:55 np0005603609 nova_compute[221550]: 2026-01-31 07:38:55.385 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:38:55 np0005603609 nova_compute[221550]: 2026-01-31 07:38:55.386 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:38:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:55.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:38:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:38:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:55.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.018 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.019 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Ensure instance console log exists: /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.020 221554 DEBUG oslo_concurrency.lockutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.020 221554 DEBUG oslo_concurrency.lockutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.021 221554 DEBUG oslo_concurrency.lockutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.025 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Start _get_guest_xml network_info=[{"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.030 221554 WARNING nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.036 221554 DEBUG nova.virt.libvirt.host [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.037 221554 DEBUG nova.virt.libvirt.host [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.040 221554 DEBUG nova.virt.libvirt.host [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.041 221554 DEBUG nova.virt.libvirt.host [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.043 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.043 221554 DEBUG nova.virt.hardware [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.044 221554 DEBUG nova.virt.hardware [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.045 221554 DEBUG nova.virt.hardware [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.045 221554 DEBUG nova.virt.hardware [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.045 221554 DEBUG nova.virt.hardware [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.046 221554 DEBUG nova.virt.hardware [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.046 221554 DEBUG nova.virt.hardware [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.047 221554 DEBUG nova.virt.hardware [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.047 221554 DEBUG nova.virt.hardware [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.048 221554 DEBUG nova.virt.hardware [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.048 221554 DEBUG nova.virt.hardware [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.049 221554 DEBUG nova.objects.instance [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'vcpu_model' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.151 221554 DEBUG oslo_concurrency.processutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.221 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:38:56 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3481407038' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.615 221554 DEBUG oslo_concurrency.processutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.647 221554 DEBUG nova.storage.rbd_utils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:38:56 np0005603609 nova_compute[221550]: 2026-01-31 07:38:56.651 221554 DEBUG oslo_concurrency.processutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:38:57 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4008658442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.113 221554 DEBUG oslo_concurrency.processutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.115 221554 DEBUG nova.virt.libvirt.vif [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:37:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-540870110',display_name='tempest-ServersAdminTestJSON-server-540870110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-540870110',id=33,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:37:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-naihjech',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTes
tJSON-1156607975-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:38:52Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=eed178fc-d0db-47ac-a368-0d3058e94697,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.116 221554 DEBUG nova.network.os_vif_util [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.117 221554 DEBUG nova.network.os_vif_util [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.121 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  <uuid>eed178fc-d0db-47ac-a368-0d3058e94697</uuid>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  <name>instance-00000021</name>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServersAdminTestJSON-server-540870110</nova:name>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:38:56</nova:creationTime>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        <nova:user uuid="8a44db09acbd4aeb990147dc979f0bfd">tempest-ServersAdminTestJSON-1156607975-project-member</nova:user>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        <nova:project uuid="b0554655ad0a48c8bf0551298dd31919">tempest-ServersAdminTestJSON-1156607975</nova:project>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="40cf2ff3-f7ff-4843-b4ab-b7dcc843006f"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        <nova:port uuid="bf338813-3c1d-456b-a6fc-b4b2c8235740">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <entry name="serial">eed178fc-d0db-47ac-a368-0d3058e94697</entry>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <entry name="uuid">eed178fc-d0db-47ac-a368-0d3058e94697</entry>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/eed178fc-d0db-47ac-a368-0d3058e94697_disk">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/eed178fc-d0db-47ac-a368-0d3058e94697_disk.config">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:e7:af:41"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <target dev="tapbf338813-3c"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/console.log" append="off"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:38:57 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:38:57 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:38:57 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:38:57 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.122 221554 DEBUG nova.compute.manager [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Preparing to wait for external event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.122 221554 DEBUG oslo_concurrency.lockutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.123 221554 DEBUG oslo_concurrency.lockutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.123 221554 DEBUG oslo_concurrency.lockutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.124 221554 DEBUG nova.virt.libvirt.vif [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:37:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-540870110',display_name='tempest-ServersAdminTestJSON-server-540870110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-540870110',id=33,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:37:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-naihjech',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTes
tJSON-1156607975-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:38:52Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=eed178fc-d0db-47ac-a368-0d3058e94697,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.125 221554 DEBUG nova.network.os_vif_util [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.126 221554 DEBUG nova.network.os_vif_util [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.126 221554 DEBUG os_vif [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.127 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.128 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.128 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.132 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.133 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf338813-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.134 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf338813-3c, col_values=(('external_ids', {'iface-id': 'bf338813-3c1d-456b-a6fc-b4b2c8235740', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:af:41', 'vm-uuid': 'eed178fc-d0db-47ac-a368-0d3058e94697'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.136 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:57 np0005603609 NetworkManager[49064]: <info>  [1769845137.1370] manager: (tapbf338813-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.139 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.141 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.142 221554 INFO os_vif [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c')#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.240 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.241 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.241 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No VIF found with MAC fa:16:3e:e7:af:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.243 221554 INFO nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Using config drive#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.277 221554 DEBUG nova.storage.rbd_utils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.357 221554 DEBUG nova.objects.instance [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'ec2_ids' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.389 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.390 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.390 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.390 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.545 221554 DEBUG nova.objects.instance [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'keypairs' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.710 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-33cde392-20ea-4fd7-88d1-f66b9d14e19a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.710 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-33cde392-20ea-4fd7-88d1-f66b9d14e19a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.710 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.711 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 33cde392-20ea-4fd7-88d1-f66b9d14e19a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:38:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 02:38:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:57.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 02:38:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:57.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.983 221554 INFO nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Creating config drive at /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config#033[00m
Jan 31 02:38:57 np0005603609 nova_compute[221550]: 2026-01-31 07:38:57.987 221554 DEBUG oslo_concurrency.processutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgg6lkgu3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:58 np0005603609 nova_compute[221550]: 2026-01-31 07:38:58.115 221554 DEBUG oslo_concurrency.processutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgg6lkgu3" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:58 np0005603609 nova_compute[221550]: 2026-01-31 07:38:58.157 221554 DEBUG nova.storage.rbd_utils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:38:58 np0005603609 nova_compute[221550]: 2026-01-31 07:38:58.162 221554 DEBUG oslo_concurrency.processutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config eed178fc-d0db-47ac-a368-0d3058e94697_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:38:58 np0005603609 nova_compute[221550]: 2026-01-31 07:38:58.953 221554 DEBUG oslo_concurrency.processutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config eed178fc-d0db-47ac-a368-0d3058e94697_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.791s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:38:58 np0005603609 nova_compute[221550]: 2026-01-31 07:38:58.955 221554 INFO nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Deleting local config drive /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config because it was imported into RBD.#033[00m
Jan 31 02:38:59 np0005603609 kernel: tapbf338813-3c: entered promiscuous mode
Jan 31 02:38:59 np0005603609 NetworkManager[49064]: <info>  [1769845139.0072] manager: (tapbf338813-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Jan 31 02:38:59 np0005603609 ovn_controller[130359]: 2026-01-31T07:38:59Z|00109|binding|INFO|Claiming lport bf338813-3c1d-456b-a6fc-b4b2c8235740 for this chassis.
Jan 31 02:38:59 np0005603609 ovn_controller[130359]: 2026-01-31T07:38:59Z|00110|binding|INFO|bf338813-3c1d-456b-a6fc-b4b2c8235740: Claiming fa:16:3e:e7:af:41 10.100.0.7
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.007 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:59 np0005603609 ovn_controller[130359]: 2026-01-31T07:38:59Z|00111|binding|INFO|Setting lport bf338813-3c1d-456b-a6fc-b4b2c8235740 ovn-installed in OVS
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.023 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.027 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:59 np0005603609 systemd-udevd[235883]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:38:59 np0005603609 ovn_controller[130359]: 2026-01-31T07:38:59Z|00112|binding|INFO|Setting lport bf338813-3c1d-456b-a6fc-b4b2c8235740 up in Southbound
Jan 31 02:38:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:59.045 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:af:41 10.100.0.7'], port_security=['fa:16:3e:e7:af:41 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eed178fc-d0db-47ac-a368-0d3058e94697', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0554655ad0a48c8bf0551298dd31919', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b25af7c2-ecfe-428a-9b4f-51874d47219e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3f23c6f-f389-4487-9d19-0cf4a6c28cbc, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=bf338813-3c1d-456b-a6fc-b4b2c8235740) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:38:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:59.046 140058 INFO neutron.agent.ovn.metadata.agent [-] Port bf338813-3c1d-456b-a6fc-b4b2c8235740 in datapath 8c92e27e-f16c-4df2-a299-60ef2ca44f53 bound to our chassis#033[00m
Jan 31 02:38:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:59.048 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c92e27e-f16c-4df2-a299-60ef2ca44f53#033[00m
Jan 31 02:38:59 np0005603609 NetworkManager[49064]: <info>  [1769845139.0499] device (tapbf338813-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:38:59 np0005603609 NetworkManager[49064]: <info>  [1769845139.0508] device (tapbf338813-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:38:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:59.060 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f06c5eef-96bf-4ca4-b388-aebbadf0df7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:59 np0005603609 systemd-machined[190912]: New machine qemu-19-instance-00000021.
Jan 31 02:38:59 np0005603609 systemd[1]: Started Virtual Machine qemu-19-instance-00000021.
Jan 31 02:38:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:59.085 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[bf758304-bb39-46cd-8a46-2cecd34caf6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:59.089 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[dc12474f-b123-494f-845b-38311364d023]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:59.109 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[4b369b3a-241d-4bef-bfcc-f11a7c97b8c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:59.125 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[dbfaa77b-1ebd-45e7-bbf6-094d48414ef5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c92e27e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:62:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531475, 'reachable_time': 25068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235894, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:59.139 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0bab4ffa-766a-45b2-9e6f-7841e5c7cba0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c92e27e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531483, 'tstamp': 531483}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235895, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c92e27e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531485, 'tstamp': 531485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235895, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:38:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:59.140 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c92e27e-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.142 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.143 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:38:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:59.143 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c92e27e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:59.144 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:38:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:59.144 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c92e27e-f0, col_values=(('external_ids', {'iface-id': 'b682c189-93d2-4c14-8b2a-bafbda6df8a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:38:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:38:59.145 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.418 221554 DEBUG nova.compute.manager [req-fddeee8a-a0a0-43fd-b6ad-1976b8077b88 req-2e86a0bc-8947-4fff-9258-3e77f7b32a27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.419 221554 DEBUG oslo_concurrency.lockutils [req-fddeee8a-a0a0-43fd-b6ad-1976b8077b88 req-2e86a0bc-8947-4fff-9258-3e77f7b32a27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.419 221554 DEBUG oslo_concurrency.lockutils [req-fddeee8a-a0a0-43fd-b6ad-1976b8077b88 req-2e86a0bc-8947-4fff-9258-3e77f7b32a27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.420 221554 DEBUG oslo_concurrency.lockutils [req-fddeee8a-a0a0-43fd-b6ad-1976b8077b88 req-2e86a0bc-8947-4fff-9258-3e77f7b32a27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.420 221554 DEBUG nova.compute.manager [req-fddeee8a-a0a0-43fd-b6ad-1976b8077b88 req-2e86a0bc-8947-4fff-9258-3e77f7b32a27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Processing event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:38:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:59.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.743 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Updating instance_info_cache with network_info: [{"id": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "address": "fa:16:3e:d7:8d:be", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape917fc77-97", "ovs_interfaceid": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:38:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:38:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:59.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.791 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-33cde392-20ea-4fd7-88d1-f66b9d14e19a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.791 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.791 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.791 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.792 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.792 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:38:59 np0005603609 nova_compute[221550]: 2026-01-31 07:38:59.792 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:39:00.024848) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140024904, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 329, "num_deletes": 251, "total_data_size": 182791, "memory_usage": 189624, "flush_reason": "Manual Compaction"}
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140027138, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 119857, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27839, "largest_seqno": 28163, "table_properties": {"data_size": 117813, "index_size": 208, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5325, "raw_average_key_size": 18, "raw_value_size": 113791, "raw_average_value_size": 396, "num_data_blocks": 9, "num_entries": 287, "num_filter_entries": 287, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845133, "oldest_key_time": 1769845133, "file_creation_time": 1769845140, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 2334 microseconds, and 893 cpu microseconds.
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:39:00.027192) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 119857 bytes OK
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:39:00.027209) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:39:00.028759) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:39:00.028775) EVENT_LOG_v1 {"time_micros": 1769845140028769, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:39:00.028793) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 180453, prev total WAL file size 180453, number of live WAL files 2.
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:39:00.029125) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(117KB)], [54(11MB)]
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140029184, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 12524329, "oldest_snapshot_seqno": -1}
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.107 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for eed178fc-d0db-47ac-a368-0d3058e94697 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.107 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845140.106616, eed178fc-d0db-47ac-a368-0d3058e94697 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.107 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] VM Started (Lifecycle Event)#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.109 221554 DEBUG nova.compute.manager [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.111 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.114 221554 INFO nova.virt.libvirt.driver [-] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Instance spawned successfully.#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.114 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 5232 keys, 10609165 bytes, temperature: kUnknown
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140152973, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 10609165, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10571512, "index_size": 23463, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 133335, "raw_average_key_size": 25, "raw_value_size": 10474606, "raw_average_value_size": 2002, "num_data_blocks": 960, "num_entries": 5232, "num_filter_entries": 5232, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769845140, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:39:00.153201) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 10609165 bytes
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:39:00.154614) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.1 rd, 85.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(193.0) write-amplify(88.5) OK, records in: 5741, records dropped: 509 output_compression: NoCompression
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:39:00.154638) EVENT_LOG_v1 {"time_micros": 1769845140154629, "job": 32, "event": "compaction_finished", "compaction_time_micros": 123869, "compaction_time_cpu_micros": 21141, "output_level": 6, "num_output_files": 1, "total_output_size": 10609165, "num_input_records": 5741, "num_output_records": 5232, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140154880, "job": 32, "event": "table_file_deletion", "file_number": 56}
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.154 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140156226, "job": 32, "event": "table_file_deletion", "file_number": 54}
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:39:00.029044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:39:00.156292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:39:00.156296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:39:00.156297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:39:00.156299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:39:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:39:00.156300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.161 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.164 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.164 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.164 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.164 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.165 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.165 221554 DEBUG nova.virt.libvirt.driver [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.200 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.201 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845140.1067548, eed178fc-d0db-47ac-a368-0d3058e94697 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.201 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.239 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.243 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845140.111193, eed178fc-d0db-47ac-a368-0d3058e94697 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.243 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.250 221554 DEBUG nova.compute.manager [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.274 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.278 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.307 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.329 221554 DEBUG oslo_concurrency.lockutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.330 221554 DEBUG oslo_concurrency.lockutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.331 221554 DEBUG nova.objects.instance [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 02:39:00 np0005603609 nova_compute[221550]: 2026-01-31 07:39:00.415 221554 DEBUG oslo_concurrency.lockutils [None req-903e4bbc-c612-4dfa-99fe-579be183d06a 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:01 np0005603609 nova_compute[221550]: 2026-01-31 07:39:01.224 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:01 np0005603609 nova_compute[221550]: 2026-01-31 07:39:01.528 221554 DEBUG nova.compute.manager [req-acce7425-2bd1-448a-9e0e-83d7b020c6a3 req-4df373fb-d2c6-4e4f-9a49-57c05b8697d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:39:01 np0005603609 nova_compute[221550]: 2026-01-31 07:39:01.528 221554 DEBUG oslo_concurrency.lockutils [req-acce7425-2bd1-448a-9e0e-83d7b020c6a3 req-4df373fb-d2c6-4e4f-9a49-57c05b8697d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:01 np0005603609 nova_compute[221550]: 2026-01-31 07:39:01.529 221554 DEBUG oslo_concurrency.lockutils [req-acce7425-2bd1-448a-9e0e-83d7b020c6a3 req-4df373fb-d2c6-4e4f-9a49-57c05b8697d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:01 np0005603609 nova_compute[221550]: 2026-01-31 07:39:01.529 221554 DEBUG oslo_concurrency.lockutils [req-acce7425-2bd1-448a-9e0e-83d7b020c6a3 req-4df373fb-d2c6-4e4f-9a49-57c05b8697d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:01 np0005603609 nova_compute[221550]: 2026-01-31 07:39:01.530 221554 DEBUG nova.compute.manager [req-acce7425-2bd1-448a-9e0e-83d7b020c6a3 req-4df373fb-d2c6-4e4f-9a49-57c05b8697d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] No waiting events found dispatching network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:39:01 np0005603609 nova_compute[221550]: 2026-01-31 07:39:01.530 221554 WARNING nova.compute.manager [req-acce7425-2bd1-448a-9e0e-83d7b020c6a3 req-4df373fb-d2c6-4e4f-9a49-57c05b8697d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received unexpected event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:39:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:01.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:39:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:01.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:39:02 np0005603609 nova_compute[221550]: 2026-01-31 07:39:02.136 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:03 np0005603609 nova_compute[221550]: 2026-01-31 07:39:03.065 221554 INFO nova.compute.manager [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Rebuilding instance#033[00m
Jan 31 02:39:03 np0005603609 nova_compute[221550]: 2026-01-31 07:39:03.307 221554 DEBUG nova.objects.instance [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'trusted_certs' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:39:03 np0005603609 nova_compute[221550]: 2026-01-31 07:39:03.329 221554 DEBUG nova.compute.manager [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:39:03 np0005603609 nova_compute[221550]: 2026-01-31 07:39:03.539 221554 DEBUG nova.objects.instance [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'pci_requests' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:39:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:03.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:39:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:03.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:39:03 np0005603609 nova_compute[221550]: 2026-01-31 07:39:03.815 221554 DEBUG nova.objects.instance [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'pci_devices' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:39:04 np0005603609 nova_compute[221550]: 2026-01-31 07:39:04.214 221554 DEBUG nova.objects.instance [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'resources' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:39:04 np0005603609 nova_compute[221550]: 2026-01-31 07:39:04.337 221554 DEBUG nova.objects.instance [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'migration_context' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:39:04 np0005603609 nova_compute[221550]: 2026-01-31 07:39:04.520 221554 DEBUG nova.objects.instance [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 02:39:04 np0005603609 nova_compute[221550]: 2026-01-31 07:39:04.525 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 02:39:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:05.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:05.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:06 np0005603609 nova_compute[221550]: 2026-01-31 07:39:06.226 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:07 np0005603609 nova_compute[221550]: 2026-01-31 07:39:07.179 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:07.471 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:07.471 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:07.472 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:07.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:39:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:07.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:39:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:09.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:09.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e163 e163: 3 total, 3 up, 3 in
Jan 31 02:39:11 np0005603609 nova_compute[221550]: 2026-01-31 07:39:11.228 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:11.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:39:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:11.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:39:12 np0005603609 nova_compute[221550]: 2026-01-31 07:39:12.185 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e164 e164: 3 total, 3 up, 3 in
Jan 31 02:39:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:13.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:13.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:14 np0005603609 podman[235945]: 2026-01-31 07:39:14.181736956 +0000 UTC m=+0.057336644 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:39:14 np0005603609 podman[235944]: 2026-01-31 07:39:14.200818474 +0000 UTC m=+0.077621561 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, 
tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 02:39:14 np0005603609 nova_compute[221550]: 2026-01-31 07:39:14.570 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 02:39:15 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:15Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:af:41 10.100.0.7
Jan 31 02:39:15 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:15Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:af:41 10.100.0.7
Jan 31 02:39:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e165 e165: 3 total, 3 up, 3 in
Jan 31 02:39:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:15.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:15.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:16 np0005603609 nova_compute[221550]: 2026-01-31 07:39:16.266 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:17 np0005603609 nova_compute[221550]: 2026-01-31 07:39:17.187 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:17.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e166 e166: 3 total, 3 up, 3 in
Jan 31 02:39:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:17.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:39:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:39:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:39:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:39:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e167 e167: 3 total, 3 up, 3 in
Jan 31 02:39:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:19.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:19.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:21 np0005603609 nova_compute[221550]: 2026-01-31 07:39:21.318 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:21 np0005603609 nova_compute[221550]: 2026-01-31 07:39:21.601 221554 INFO nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Instance shutdown successfully after 17 seconds.#033[00m
Jan 31 02:39:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:21.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:21.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:22 np0005603609 nova_compute[221550]: 2026-01-31 07:39:22.189 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:23.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:23.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:25.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:25.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:26 np0005603609 nova_compute[221550]: 2026-01-31 07:39:26.362 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.191 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:27 np0005603609 kernel: tapbf338813-3c (unregistering): left promiscuous mode
Jan 31 02:39:27 np0005603609 NetworkManager[49064]: <info>  [1769845167.7412] device (tapbf338813-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:39:27 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:27Z|00113|binding|INFO|Releasing lport bf338813-3c1d-456b-a6fc-b4b2c8235740 from this chassis (sb_readonly=0)
Jan 31 02:39:27 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:27Z|00114|binding|INFO|Setting lport bf338813-3c1d-456b-a6fc-b4b2c8235740 down in Southbound
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.750 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:27 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:27Z|00115|binding|INFO|Removing iface tapbf338813-3c ovn-installed in OVS
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.753 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:27.761 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:af:41 10.100.0.7'], port_security=['fa:16:3e:e7:af:41 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eed178fc-d0db-47ac-a368-0d3058e94697', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0554655ad0a48c8bf0551298dd31919', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b25af7c2-ecfe-428a-9b4f-51874d47219e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3f23c6f-f389-4487-9d19-0cf4a6c28cbc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=bf338813-3c1d-456b-a6fc-b4b2c8235740) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:39:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:27.765 140058 INFO neutron.agent.ovn.metadata.agent [-] Port bf338813-3c1d-456b-a6fc-b4b2c8235740 in datapath 8c92e27e-f16c-4df2-a299-60ef2ca44f53 unbound from our chassis#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.765 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:27.772 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c92e27e-f16c-4df2-a299-60ef2ca44f53#033[00m
Jan 31 02:39:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:27.787 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf13e2a-f95b-47ff-9fc5-0b5adcb9785a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:27.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:27 np0005603609 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000021.scope: Deactivated successfully.
Jan 31 02:39:27 np0005603609 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000021.scope: Consumed 13.511s CPU time.
Jan 31 02:39:27 np0005603609 systemd-machined[190912]: Machine qemu-19-instance-00000021 terminated.
Jan 31 02:39:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:27.811 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[beebeae0-80c8-44ad-8bef-a3a3dde7447d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:27.813 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f87c2211-e8d6-4d73-b4d6-175fe429abca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:27 np0005603609 kernel: tapbf338813-3c: entered promiscuous mode
Jan 31 02:39:27 np0005603609 kernel: tapbf338813-3c (unregistering): left promiscuous mode
Jan 31 02:39:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:27.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:27.837 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c63df499-548b-4266-a193-7c5c19a71b91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.838 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.848 221554 INFO nova.virt.libvirt.driver [-] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Instance destroyed successfully.#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.852 221554 INFO nova.virt.libvirt.driver [-] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Instance destroyed successfully.#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.853 221554 DEBUG nova.virt.libvirt.vif [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:37:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-540870110',display_name='tempest-ServersAdminTestJSON-server-540870110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-540870110',id=33,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:39:00Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-naihjech',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTestJSON-1156607975-project-mem
ber'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:39:02Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=eed178fc-d0db-47ac-a368-0d3058e94697,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.853 221554 DEBUG nova.network.os_vif_util [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.854 221554 DEBUG nova.network.os_vif_util [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.854 221554 DEBUG os_vif [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.855 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.855 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf338813-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.857 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.858 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.860 221554 INFO os_vif [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c')#033[00m
Jan 31 02:39:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:27.861 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bd99d6dd-c5d5-4a8d-9107-8e1796bc7912]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c92e27e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:62:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531475, 'reachable_time': 25068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236140, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:27.873 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4b13e40f-f6f9-48f7-a680-a4ae46c3c0a8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c92e27e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531483, 'tstamp': 531483}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236142, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c92e27e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531485, 'tstamp': 531485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236142, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:27.875 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c92e27e-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:27.879 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c92e27e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:27.879 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:39:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:27.880 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c92e27e-f0, col_values=(('external_ids', {'iface-id': 'b682c189-93d2-4c14-8b2a-bafbda6df8a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:27.880 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.885 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.960 221554 DEBUG nova.compute.manager [req-5a3a6219-fb02-4da6-b13b-8afa7df94c02 req-dca7e7b3-f454-4b3e-adba-b9cfab66112d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received event network-vif-unplugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.960 221554 DEBUG oslo_concurrency.lockutils [req-5a3a6219-fb02-4da6-b13b-8afa7df94c02 req-dca7e7b3-f454-4b3e-adba-b9cfab66112d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.961 221554 DEBUG oslo_concurrency.lockutils [req-5a3a6219-fb02-4da6-b13b-8afa7df94c02 req-dca7e7b3-f454-4b3e-adba-b9cfab66112d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.961 221554 DEBUG oslo_concurrency.lockutils [req-5a3a6219-fb02-4da6-b13b-8afa7df94c02 req-dca7e7b3-f454-4b3e-adba-b9cfab66112d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.961 221554 DEBUG nova.compute.manager [req-5a3a6219-fb02-4da6-b13b-8afa7df94c02 req-dca7e7b3-f454-4b3e-adba-b9cfab66112d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] No waiting events found dispatching network-vif-unplugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:39:27 np0005603609 nova_compute[221550]: 2026-01-31 07:39:27.961 221554 WARNING nova.compute.manager [req-5a3a6219-fb02-4da6-b13b-8afa7df94c02 req-dca7e7b3-f454-4b3e-adba-b9cfab66112d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received unexpected event network-vif-unplugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 for instance with vm_state active and task_state rebuilding.#033[00m
Jan 31 02:39:28 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 31 02:39:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:39:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:39:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e168 e168: 3 total, 3 up, 3 in
Jan 31 02:39:29 np0005603609 nova_compute[221550]: 2026-01-31 07:39:29.392 221554 INFO nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Deleting instance files /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697_del#033[00m
Jan 31 02:39:29 np0005603609 nova_compute[221550]: 2026-01-31 07:39:29.392 221554 INFO nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Deletion of /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697_del complete#033[00m
Jan 31 02:39:29 np0005603609 nova_compute[221550]: 2026-01-31 07:39:29.590 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:39:29 np0005603609 nova_compute[221550]: 2026-01-31 07:39:29.591 221554 INFO nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Creating image(s)#033[00m
Jan 31 02:39:29 np0005603609 nova_compute[221550]: 2026-01-31 07:39:29.628 221554 DEBUG nova.storage.rbd_utils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:39:29 np0005603609 nova_compute[221550]: 2026-01-31 07:39:29.655 221554 DEBUG nova.storage.rbd_utils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:39:29 np0005603609 nova_compute[221550]: 2026-01-31 07:39:29.687 221554 DEBUG nova.storage.rbd_utils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:39:29 np0005603609 nova_compute[221550]: 2026-01-31 07:39:29.692 221554 DEBUG oslo_concurrency.processutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:39:29 np0005603609 nova_compute[221550]: 2026-01-31 07:39:29.754 221554 DEBUG oslo_concurrency.processutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:39:29 np0005603609 nova_compute[221550]: 2026-01-31 07:39:29.755 221554 DEBUG oslo_concurrency.lockutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:29 np0005603609 nova_compute[221550]: 2026-01-31 07:39:29.756 221554 DEBUG oslo_concurrency.lockutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:29 np0005603609 nova_compute[221550]: 2026-01-31 07:39:29.757 221554 DEBUG oslo_concurrency.lockutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:29 np0005603609 nova_compute[221550]: 2026-01-31 07:39:29.789 221554 DEBUG nova.storage.rbd_utils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:39:29 np0005603609 nova_compute[221550]: 2026-01-31 07:39:29.793 221554 DEBUG oslo_concurrency.processutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 eed178fc-d0db-47ac-a368-0d3058e94697_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:39:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:29.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:29.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:30 np0005603609 nova_compute[221550]: 2026-01-31 07:39:30.062 221554 DEBUG nova.compute.manager [req-b5751923-b153-49fa-94cf-0f7663ca6250 req-35599179-11e8-43fe-9ea7-0316511a36bb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:39:30 np0005603609 nova_compute[221550]: 2026-01-31 07:39:30.063 221554 DEBUG oslo_concurrency.lockutils [req-b5751923-b153-49fa-94cf-0f7663ca6250 req-35599179-11e8-43fe-9ea7-0316511a36bb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:39:30 np0005603609 nova_compute[221550]: 2026-01-31 07:39:30.063 221554 DEBUG oslo_concurrency.lockutils [req-b5751923-b153-49fa-94cf-0f7663ca6250 req-35599179-11e8-43fe-9ea7-0316511a36bb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:39:30 np0005603609 nova_compute[221550]: 2026-01-31 07:39:30.064 221554 DEBUG oslo_concurrency.lockutils [req-b5751923-b153-49fa-94cf-0f7663ca6250 req-35599179-11e8-43fe-9ea7-0316511a36bb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:39:30 np0005603609 nova_compute[221550]: 2026-01-31 07:39:30.064 221554 DEBUG nova.compute.manager [req-b5751923-b153-49fa-94cf-0f7663ca6250 req-35599179-11e8-43fe-9ea7-0316511a36bb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] No waiting events found dispatching network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:39:30 np0005603609 nova_compute[221550]: 2026-01-31 07:39:30.065 221554 WARNING nova.compute.manager [req-b5751923-b153-49fa-94cf-0f7663ca6250 req-35599179-11e8-43fe-9ea7-0316511a36bb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received unexpected event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 for instance with vm_state active and task_state rebuild_spawning.
Jan 31 02:39:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:31 np0005603609 nova_compute[221550]: 2026-01-31 07:39:31.393 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:39:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:31.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:31.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:32Z|00116|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 31 02:39:32 np0005603609 nova_compute[221550]: 2026-01-31 07:39:32.654 221554 DEBUG oslo_concurrency.processutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 eed178fc-d0db-47ac-a368-0d3058e94697_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.861s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:39:32 np0005603609 nova_compute[221550]: 2026-01-31 07:39:32.691 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "2622b77a-7331-4b1f-b3f6-b18902724ea8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:39:32 np0005603609 nova_compute[221550]: 2026-01-31 07:39:32.692 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "2622b77a-7331-4b1f-b3f6-b18902724ea8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:39:32 np0005603609 nova_compute[221550]: 2026-01-31 07:39:32.729 221554 DEBUG nova.storage.rbd_utils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] resizing rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 02:39:32 np0005603609 nova_compute[221550]: 2026-01-31 07:39:32.765 221554 DEBUG nova.compute.manager [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 02:39:32 np0005603609 nova_compute[221550]: 2026-01-31 07:39:32.858 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:39:32 np0005603609 nova_compute[221550]: 2026-01-31 07:39:32.878 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:39:32 np0005603609 nova_compute[221550]: 2026-01-31 07:39:32.879 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:39:32 np0005603609 nova_compute[221550]: 2026-01-31 07:39:32.889 221554 DEBUG nova.virt.hardware [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 02:39:32 np0005603609 nova_compute[221550]: 2026-01-31 07:39:32.889 221554 INFO nova.compute.claims [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Claim successful on node compute-1.ctlplane.example.com
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.074 221554 DEBUG oslo_concurrency.processutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.362 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.363 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Ensure instance console log exists: /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.364 221554 DEBUG oslo_concurrency.lockutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.364 221554 DEBUG oslo_concurrency.lockutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.365 221554 DEBUG oslo_concurrency.lockutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.367 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Start _get_guest_xml network_info=[{"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.371 221554 WARNING nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.377 221554 DEBUG nova.virt.libvirt.host [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.377 221554 DEBUG nova.virt.libvirt.host [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.380 221554 DEBUG nova.virt.libvirt.host [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.380 221554 DEBUG nova.virt.libvirt.host [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.381 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.382 221554 DEBUG nova.virt.hardware [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.382 221554 DEBUG nova.virt.hardware [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.382 221554 DEBUG nova.virt.hardware [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.383 221554 DEBUG nova.virt.hardware [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.383 221554 DEBUG nova.virt.hardware [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.383 221554 DEBUG nova.virt.hardware [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.383 221554 DEBUG nova.virt.hardware [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.384 221554 DEBUG nova.virt.hardware [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.384 221554 DEBUG nova.virt.hardware [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.384 221554 DEBUG nova.virt.hardware [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.384 221554 DEBUG nova.virt.hardware [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.385 221554 DEBUG nova.objects.instance [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'vcpu_model' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.434 221554 DEBUG oslo_concurrency.processutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:39:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:39:33 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1659632397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.462 221554 DEBUG oslo_concurrency.processutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.467 221554 DEBUG nova.compute.provider_tree [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.504 221554 DEBUG nova.scheduler.client.report [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.576 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.577 221554 DEBUG nova.compute.manager [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.650 221554 DEBUG nova.compute.manager [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.650 221554 DEBUG nova.network.neutron [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.792 221554 INFO nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.806 221554 DEBUG nova.policy [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '533eaca1e9c4430dabe2b0a39039ca65', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 02:39:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:33.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:39:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:33.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:39:33 np0005603609 nova_compute[221550]: 2026-01-31 07:39:33.844 221554 DEBUG nova.compute.manager [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 02:39:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:39:33 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1211290531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.062 221554 DEBUG nova.compute.manager [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.063 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.063 221554 INFO nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Creating image(s)
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.089 221554 DEBUG nova.storage.rbd_utils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 2622b77a-7331-4b1f-b3f6-b18902724ea8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:39:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e169 e169: 3 total, 3 up, 3 in
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.118 221554 DEBUG nova.storage.rbd_utils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 2622b77a-7331-4b1f-b3f6-b18902724ea8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.143 221554 DEBUG nova.storage.rbd_utils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 2622b77a-7331-4b1f-b3f6-b18902724ea8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.146 221554 DEBUG oslo_concurrency.processutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.158 221554 DEBUG oslo_concurrency.processutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.723s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.183 221554 DEBUG nova.storage.rbd_utils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.188 221554 DEBUG oslo_concurrency.processutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.202 221554 DEBUG oslo_concurrency.processutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.203 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.204 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.204 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.228 221554 DEBUG nova.storage.rbd_utils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 2622b77a-7331-4b1f-b3f6-b18902724ea8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.231 221554 DEBUG oslo_concurrency.processutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 2622b77a-7331-4b1f-b3f6-b18902724ea8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:39:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:39:34 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2681325381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.609 221554 DEBUG oslo_concurrency.processutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.611 221554 DEBUG nova.virt.libvirt.vif [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:37:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-540870110',display_name='tempest-ServersAdminTestJSON-server-540870110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-540870110',id=33,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:39:00Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-naihjech',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTestJSON-1156607975-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:39:29Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=eed178fc-d0db-47ac-a368-0d3058e94697,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.612 221554 DEBUG nova.network.os_vif_util [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.614 221554 DEBUG nova.network.os_vif_util [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.618 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  <uuid>eed178fc-d0db-47ac-a368-0d3058e94697</uuid>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  <name>instance-00000021</name>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServersAdminTestJSON-server-540870110</nova:name>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:39:33</nova:creationTime>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        <nova:user uuid="8a44db09acbd4aeb990147dc979f0bfd">tempest-ServersAdminTestJSON-1156607975-project-member</nova:user>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        <nova:project uuid="b0554655ad0a48c8bf0551298dd31919">tempest-ServersAdminTestJSON-1156607975</nova:project>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        <nova:port uuid="bf338813-3c1d-456b-a6fc-b4b2c8235740">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <entry name="serial">eed178fc-d0db-47ac-a368-0d3058e94697</entry>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <entry name="uuid">eed178fc-d0db-47ac-a368-0d3058e94697</entry>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/eed178fc-d0db-47ac-a368-0d3058e94697_disk">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/eed178fc-d0db-47ac-a368-0d3058e94697_disk.config">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:e7:af:41"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <target dev="tapbf338813-3c"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/console.log" append="off"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:39:34 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:39:34 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:39:34 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:39:34 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.620 221554 DEBUG nova.compute.manager [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Preparing to wait for external event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.621 221554 DEBUG oslo_concurrency.lockutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.621 221554 DEBUG oslo_concurrency.lockutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.621 221554 DEBUG oslo_concurrency.lockutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.622 221554 DEBUG nova.virt.libvirt.vif [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:37:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-540870110',display_name='tempest-ServersAdminTestJSON-server-540870110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-540870110',id=33,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:39:00Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-naihjech',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTestJSON-1156607975-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:39:29Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=eed178fc-d0db-47ac-a368-0d3058e94697,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.623 221554 DEBUG nova.network.os_vif_util [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.624 221554 DEBUG nova.network.os_vif_util [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.624 221554 DEBUG os_vif [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.626 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.626 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.627 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.629 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.629 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf338813-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.630 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf338813-3c, col_values=(('external_ids', {'iface-id': 'bf338813-3c1d-456b-a6fc-b4b2c8235740', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:af:41', 'vm-uuid': 'eed178fc-d0db-47ac-a368-0d3058e94697'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.632 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:34 np0005603609 NetworkManager[49064]: <info>  [1769845174.6329] manager: (tapbf338813-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.635 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.639 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.639 221554 INFO os_vif [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c')#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.757 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.757 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.757 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No VIF found with MAC fa:16:3e:e7:af:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.758 221554 INFO nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Using config drive#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.791 221554 DEBUG nova.storage.rbd_utils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.801 221554 DEBUG nova.network.neutron [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Successfully created port: 2c9a7c4c-6849-408d-a061-7f9109458d84 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.828 221554 DEBUG nova.objects.instance [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'ec2_ids' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:39:34 np0005603609 nova_compute[221550]: 2026-01-31 07:39:34.870 221554 DEBUG nova.objects.instance [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'keypairs' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.338 221554 INFO nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Creating config drive at /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config#033[00m
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.344 221554 DEBUG oslo_concurrency.processutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpat6x19of execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.463 221554 DEBUG oslo_concurrency.processutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpat6x19of" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.500 221554 DEBUG nova.storage.rbd_utils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image eed178fc-d0db-47ac-a368-0d3058e94697_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.507 221554 DEBUG oslo_concurrency.processutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config eed178fc-d0db-47ac-a368-0d3058e94697_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.527 221554 DEBUG oslo_concurrency.processutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 2622b77a-7331-4b1f-b3f6-b18902724ea8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.600 221554 DEBUG nova.storage.rbd_utils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] resizing rbd image 2622b77a-7331-4b1f-b3f6-b18902724ea8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.729 221554 DEBUG oslo_concurrency.processutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config eed178fc-d0db-47ac-a368-0d3058e94697_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.731 221554 INFO nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Deleting local config drive /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697/disk.config because it was imported into RBD.#033[00m
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.738 221554 DEBUG nova.objects.instance [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'migration_context' on Instance uuid 2622b77a-7331-4b1f-b3f6-b18902724ea8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.777 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.777 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Ensure instance console log exists: /var/lib/nova/instances/2622b77a-7331-4b1f-b3f6-b18902724ea8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.779 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:35 np0005603609 kernel: tapbf338813-3c: entered promiscuous mode
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.779 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.779 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:35 np0005603609 NetworkManager[49064]: <info>  [1769845175.7815] manager: (tapbf338813-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.781 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:35 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:35Z|00117|binding|INFO|Claiming lport bf338813-3c1d-456b-a6fc-b4b2c8235740 for this chassis.
Jan 31 02:39:35 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:35Z|00118|binding|INFO|bf338813-3c1d-456b-a6fc-b4b2c8235740: Claiming fa:16:3e:e7:af:41 10.100.0.7
Jan 31 02:39:35 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:35Z|00119|binding|INFO|Setting lport bf338813-3c1d-456b-a6fc-b4b2c8235740 ovn-installed in OVS
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.798 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.803 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:35 np0005603609 systemd-machined[190912]: New machine qemu-20-instance-00000021.
Jan 31 02:39:35 np0005603609 systemd-udevd[236702]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:39:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:35.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:35 np0005603609 systemd[1]: Started Virtual Machine qemu-20-instance-00000021.
Jan 31 02:39:35 np0005603609 NetworkManager[49064]: <info>  [1769845175.8262] device (tapbf338813-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:39:35 np0005603609 NetworkManager[49064]: <info>  [1769845175.8283] device (tapbf338813-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:39:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:35.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:35 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:35Z|00120|binding|INFO|Setting lport bf338813-3c1d-456b-a6fc-b4b2c8235740 up in Southbound
Jan 31 02:39:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:35.879 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:af:41 10.100.0.7'], port_security=['fa:16:3e:e7:af:41 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eed178fc-d0db-47ac-a368-0d3058e94697', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0554655ad0a48c8bf0551298dd31919', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'b25af7c2-ecfe-428a-9b4f-51874d47219e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3f23c6f-f389-4487-9d19-0cf4a6c28cbc, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=bf338813-3c1d-456b-a6fc-b4b2c8235740) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:39:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:35.881 140058 INFO neutron.agent.ovn.metadata.agent [-] Port bf338813-3c1d-456b-a6fc-b4b2c8235740 in datapath 8c92e27e-f16c-4df2-a299-60ef2ca44f53 bound to our chassis#033[00m
Jan 31 02:39:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:35.884 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c92e27e-f16c-4df2-a299-60ef2ca44f53#033[00m
Jan 31 02:39:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:35.896 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c4508916-c035-4aff-b723-c348af55b20f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:35.915 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[456e9225-9972-4281-b18f-fe9b2d6cfeda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:35.920 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef26b9e-d394-4146-a27c-15f5fb533208]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:35.943 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6c878c-ec00-4238-a722-c0ed2d075645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:35.957 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0c96bf7a-2c25-47ae-9799-d67e9e281dcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c92e27e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:62:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531475, 'reachable_time': 25068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236716, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:35.968 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[538213dc-a002-4407-bfa5-e20321698de7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c92e27e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531483, 'tstamp': 531483}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236717, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c92e27e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531485, 'tstamp': 531485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236717, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:35.970 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c92e27e-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:35 np0005603609 nova_compute[221550]: 2026-01-31 07:39:35.972 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:35.973 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c92e27e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:35.973 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:39:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:35.974 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c92e27e-f0, col_values=(('external_ids', {'iface-id': 'b682c189-93d2-4c14-8b2a-bafbda6df8a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:35.974 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:39:36 np0005603609 nova_compute[221550]: 2026-01-31 07:39:36.281 221554 DEBUG nova.network.neutron [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Successfully updated port: 2c9a7c4c-6849-408d-a061-7f9109458d84 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:39:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:36 np0005603609 nova_compute[221550]: 2026-01-31 07:39:36.346 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for eed178fc-d0db-47ac-a368-0d3058e94697 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 02:39:36 np0005603609 nova_compute[221550]: 2026-01-31 07:39:36.347 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845176.3456604, eed178fc-d0db-47ac-a368-0d3058e94697 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:39:36 np0005603609 nova_compute[221550]: 2026-01-31 07:39:36.347 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] VM Started (Lifecycle Event)#033[00m
Jan 31 02:39:36 np0005603609 nova_compute[221550]: 2026-01-31 07:39:36.351 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "refresh_cache-2622b77a-7331-4b1f-b3f6-b18902724ea8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:39:36 np0005603609 nova_compute[221550]: 2026-01-31 07:39:36.352 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquired lock "refresh_cache-2622b77a-7331-4b1f-b3f6-b18902724ea8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:39:36 np0005603609 nova_compute[221550]: 2026-01-31 07:39:36.352 221554 DEBUG nova.network.neutron [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:39:36 np0005603609 nova_compute[221550]: 2026-01-31 07:39:36.430 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:39:36 np0005603609 nova_compute[221550]: 2026-01-31 07:39:36.451 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845176.345818, eed178fc-d0db-47ac-a368-0d3058e94697 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:39:36 np0005603609 nova_compute[221550]: 2026-01-31 07:39:36.452 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:39:36 np0005603609 nova_compute[221550]: 2026-01-31 07:39:36.453 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.027 221554 DEBUG nova.compute.manager [req-4767a178-8921-4ca3-94f2-c68222141110 req-f1eae221-36c5-4471-8b22-848e2de7d105 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.028 221554 DEBUG oslo_concurrency.lockutils [req-4767a178-8921-4ca3-94f2-c68222141110 req-f1eae221-36c5-4471-8b22-848e2de7d105 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.028 221554 DEBUG oslo_concurrency.lockutils [req-4767a178-8921-4ca3-94f2-c68222141110 req-f1eae221-36c5-4471-8b22-848e2de7d105 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.029 221554 DEBUG oslo_concurrency.lockutils [req-4767a178-8921-4ca3-94f2-c68222141110 req-f1eae221-36c5-4471-8b22-848e2de7d105 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.029 221554 DEBUG nova.compute.manager [req-4767a178-8921-4ca3-94f2-c68222141110 req-f1eae221-36c5-4471-8b22-848e2de7d105 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Processing event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.030 221554 DEBUG nova.compute.manager [req-7593b216-9073-43b4-b21c-113792ebe075 req-527d0a4f-456e-47bf-870b-8fa785d785f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Received event network-changed-2c9a7c4c-6849-408d-a061-7f9109458d84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.030 221554 DEBUG nova.compute.manager [req-7593b216-9073-43b4-b21c-113792ebe075 req-527d0a4f-456e-47bf-870b-8fa785d785f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Refreshing instance network info cache due to event network-changed-2c9a7c4c-6849-408d-a061-7f9109458d84. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.030 221554 DEBUG oslo_concurrency.lockutils [req-7593b216-9073-43b4-b21c-113792ebe075 req-527d0a4f-456e-47bf-870b-8fa785d785f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2622b77a-7331-4b1f-b3f6-b18902724ea8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.031 221554 DEBUG nova.compute.manager [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.035 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.038 221554 INFO nova.virt.libvirt.driver [-] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Instance spawned successfully.#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.038 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.065 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.069 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845177.034788, eed178fc-d0db-47ac-a368-0d3058e94697 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.069 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.434 221554 DEBUG nova.network.neutron [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.530 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.531 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.531 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.531 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.532 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.532 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.532 221554 DEBUG nova.virt.libvirt.driver [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.537 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:39:37 np0005603609 nova_compute[221550]: 2026-01-31 07:39:37.630 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 02:39:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:39:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:37.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:39:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:37.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:38 np0005603609 nova_compute[221550]: 2026-01-31 07:39:38.028 221554 DEBUG nova.compute.manager [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:39:38 np0005603609 nova_compute[221550]: 2026-01-31 07:39:38.195 221554 DEBUG oslo_concurrency.lockutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:38 np0005603609 nova_compute[221550]: 2026-01-31 07:39:38.196 221554 DEBUG oslo_concurrency.lockutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:38 np0005603609 nova_compute[221550]: 2026-01-31 07:39:38.196 221554 DEBUG nova.objects.instance [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 02:39:38 np0005603609 nova_compute[221550]: 2026-01-31 07:39:38.385 221554 DEBUG oslo_concurrency.lockutils [None req-6a1e33da-dce6-4f6c-84d1-2561507b86bb 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e170 e170: 3 total, 3 up, 3 in
Jan 31 02:39:39 np0005603609 nova_compute[221550]: 2026-01-31 07:39:39.144 221554 DEBUG nova.compute.manager [req-d198c8ae-5504-4bb1-9f41-c5e1ca414f92 req-ff3715a7-641f-4c81-9df3-6701c0dfa822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:39:39 np0005603609 nova_compute[221550]: 2026-01-31 07:39:39.144 221554 DEBUG oslo_concurrency.lockutils [req-d198c8ae-5504-4bb1-9f41-c5e1ca414f92 req-ff3715a7-641f-4c81-9df3-6701c0dfa822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:39 np0005603609 nova_compute[221550]: 2026-01-31 07:39:39.145 221554 DEBUG oslo_concurrency.lockutils [req-d198c8ae-5504-4bb1-9f41-c5e1ca414f92 req-ff3715a7-641f-4c81-9df3-6701c0dfa822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:39 np0005603609 nova_compute[221550]: 2026-01-31 07:39:39.145 221554 DEBUG oslo_concurrency.lockutils [req-d198c8ae-5504-4bb1-9f41-c5e1ca414f92 req-ff3715a7-641f-4c81-9df3-6701c0dfa822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:39 np0005603609 nova_compute[221550]: 2026-01-31 07:39:39.145 221554 DEBUG nova.compute.manager [req-d198c8ae-5504-4bb1-9f41-c5e1ca414f92 req-ff3715a7-641f-4c81-9df3-6701c0dfa822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] No waiting events found dispatching network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:39:39 np0005603609 nova_compute[221550]: 2026-01-31 07:39:39.145 221554 WARNING nova.compute.manager [req-d198c8ae-5504-4bb1-9f41-c5e1ca414f92 req-ff3715a7-641f-4c81-9df3-6701c0dfa822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received unexpected event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:39:39 np0005603609 nova_compute[221550]: 2026-01-31 07:39:39.632 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:39:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:39.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:39:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:39.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e171 e171: 3 total, 3 up, 3 in
Jan 31 02:39:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:40.565 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:39:40 np0005603609 nova_compute[221550]: 2026-01-31 07:39:40.565 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:40.565 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:39:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.454 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.537 221554 DEBUG nova.network.neutron [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Updating instance_info_cache with network_info: [{"id": "2c9a7c4c-6849-408d-a061-7f9109458d84", "address": "fa:16:3e:09:7f:dc", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9a7c4c-68", "ovs_interfaceid": "2c9a7c4c-6849-408d-a061-7f9109458d84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.752 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Releasing lock "refresh_cache-2622b77a-7331-4b1f-b3f6-b18902724ea8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.753 221554 DEBUG nova.compute.manager [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Instance network_info: |[{"id": "2c9a7c4c-6849-408d-a061-7f9109458d84", "address": "fa:16:3e:09:7f:dc", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9a7c4c-68", "ovs_interfaceid": "2c9a7c4c-6849-408d-a061-7f9109458d84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.753 221554 DEBUG oslo_concurrency.lockutils [req-7593b216-9073-43b4-b21c-113792ebe075 req-527d0a4f-456e-47bf-870b-8fa785d785f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2622b77a-7331-4b1f-b3f6-b18902724ea8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.753 221554 DEBUG nova.network.neutron [req-7593b216-9073-43b4-b21c-113792ebe075 req-527d0a4f-456e-47bf-870b-8fa785d785f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Refreshing network info cache for port 2c9a7c4c-6849-408d-a061-7f9109458d84 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.756 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Start _get_guest_xml network_info=[{"id": "2c9a7c4c-6849-408d-a061-7f9109458d84", "address": "fa:16:3e:09:7f:dc", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9a7c4c-68", "ovs_interfaceid": "2c9a7c4c-6849-408d-a061-7f9109458d84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.760 221554 WARNING nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.765 221554 DEBUG nova.virt.libvirt.host [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.765 221554 DEBUG nova.virt.libvirt.host [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.768 221554 DEBUG nova.virt.libvirt.host [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.768 221554 DEBUG nova.virt.libvirt.host [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.769 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.769 221554 DEBUG nova.virt.hardware [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.770 221554 DEBUG nova.virt.hardware [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.770 221554 DEBUG nova.virt.hardware [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.770 221554 DEBUG nova.virt.hardware [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.770 221554 DEBUG nova.virt.hardware [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.771 221554 DEBUG nova.virt.hardware [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.771 221554 DEBUG nova.virt.hardware [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.771 221554 DEBUG nova.virt.hardware [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.771 221554 DEBUG nova.virt.hardware [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.771 221554 DEBUG nova.virt.hardware [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.772 221554 DEBUG nova.virt.hardware [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:39:41 np0005603609 nova_compute[221550]: 2026-01-31 07:39:41.774 221554 DEBUG oslo_concurrency.processutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:39:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:41.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:41.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:39:42 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2104040955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.266 221554 DEBUG oslo_concurrency.processutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.289 221554 DEBUG nova.storage.rbd_utils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 2622b77a-7331-4b1f-b3f6-b18902724ea8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.293 221554 DEBUG oslo_concurrency.processutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:39:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:39:42 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1549460415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.672 221554 DEBUG oslo_concurrency.processutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.674 221554 DEBUG nova.virt.libvirt.vif [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:39:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-107615132',display_name='tempest-ImagesTestJSON-server-107615132',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-107615132',id=41,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-boo162pk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:39:33Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=2622b77a-7331-4b1f-b3f6-b18902724ea8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c9a7c4c-6849-408d-a061-7f9109458d84", "address": "fa:16:3e:09:7f:dc", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9a7c4c-68", "ovs_interfaceid": "2c9a7c4c-6849-408d-a061-7f9109458d84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.675 221554 DEBUG nova.network.os_vif_util [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "2c9a7c4c-6849-408d-a061-7f9109458d84", "address": "fa:16:3e:09:7f:dc", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9a7c4c-68", "ovs_interfaceid": "2c9a7c4c-6849-408d-a061-7f9109458d84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.676 221554 DEBUG nova.network.os_vif_util [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:7f:dc,bridge_name='br-int',has_traffic_filtering=True,id=2c9a7c4c-6849-408d-a061-7f9109458d84,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c9a7c4c-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.678 221554 DEBUG nova.objects.instance [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2622b77a-7331-4b1f-b3f6-b18902724ea8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.736 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  <uuid>2622b77a-7331-4b1f-b3f6-b18902724ea8</uuid>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  <name>instance-00000029</name>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <nova:name>tempest-ImagesTestJSON-server-107615132</nova:name>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:39:41</nova:creationTime>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        <nova:user uuid="533eaca1e9c4430dabe2b0a39039ca65">tempest-ImagesTestJSON-533495031-project-member</nova:user>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        <nova:project uuid="b3e3e6f216d24c1f9f68777cfb63dbf8">tempest-ImagesTestJSON-533495031</nova:project>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        <nova:port uuid="2c9a7c4c-6849-408d-a061-7f9109458d84">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <entry name="serial">2622b77a-7331-4b1f-b3f6-b18902724ea8</entry>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <entry name="uuid">2622b77a-7331-4b1f-b3f6-b18902724ea8</entry>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/2622b77a-7331-4b1f-b3f6-b18902724ea8_disk">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/2622b77a-7331-4b1f-b3f6-b18902724ea8_disk.config">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:09:7f:dc"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <target dev="tap2c9a7c4c-68"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/2622b77a-7331-4b1f-b3f6-b18902724ea8/console.log" append="off"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:39:42 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:39:42 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:39:42 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:39:42 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.737 221554 DEBUG nova.compute.manager [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Preparing to wait for external event network-vif-plugged-2c9a7c4c-6849-408d-a061-7f9109458d84 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.737 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.737 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.737 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.738 221554 DEBUG nova.virt.libvirt.vif [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:39:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-107615132',display_name='tempest-ImagesTestJSON-server-107615132',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-107615132',id=41,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-boo162pk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:39:33Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=2622b77a-7331-4b1f-b3f6-b18902724ea8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c9a7c4c-6849-408d-a061-7f9109458d84", "address": "fa:16:3e:09:7f:dc", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9a7c4c-68", "ovs_interfaceid": "2c9a7c4c-6849-408d-a061-7f9109458d84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.738 221554 DEBUG nova.network.os_vif_util [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "2c9a7c4c-6849-408d-a061-7f9109458d84", "address": "fa:16:3e:09:7f:dc", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9a7c4c-68", "ovs_interfaceid": "2c9a7c4c-6849-408d-a061-7f9109458d84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.739 221554 DEBUG nova.network.os_vif_util [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:7f:dc,bridge_name='br-int',has_traffic_filtering=True,id=2c9a7c4c-6849-408d-a061-7f9109458d84,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c9a7c4c-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.739 221554 DEBUG os_vif [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:7f:dc,bridge_name='br-int',has_traffic_filtering=True,id=2c9a7c4c-6849-408d-a061-7f9109458d84,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c9a7c4c-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.739 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.740 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.740 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.742 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.743 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c9a7c4c-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.743 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c9a7c4c-68, col_values=(('external_ids', {'iface-id': '2c9a7c4c-6849-408d-a061-7f9109458d84', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:7f:dc', 'vm-uuid': '2622b77a-7331-4b1f-b3f6-b18902724ea8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.744 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:42 np0005603609 NetworkManager[49064]: <info>  [1769845182.7454] manager: (tap2c9a7c4c-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.748 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.751 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.752 221554 INFO os_vif [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:7f:dc,bridge_name='br-int',has_traffic_filtering=True,id=2c9a7c4c-6849-408d-a061-7f9109458d84,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c9a7c4c-68')#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.881 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.882 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.882 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No VIF found with MAC fa:16:3e:09:7f:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.883 221554 INFO nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Using config drive#033[00m
Jan 31 02:39:42 np0005603609 nova_compute[221550]: 2026-01-31 07:39:42.922 221554 DEBUG nova.storage.rbd_utils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 2622b77a-7331-4b1f-b3f6-b18902724ea8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:39:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e172 e172: 3 total, 3 up, 3 in
Jan 31 02:39:43 np0005603609 nova_compute[221550]: 2026-01-31 07:39:43.594 221554 DEBUG nova.network.neutron [req-7593b216-9073-43b4-b21c-113792ebe075 req-527d0a4f-456e-47bf-870b-8fa785d785f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Updated VIF entry in instance network info cache for port 2c9a7c4c-6849-408d-a061-7f9109458d84. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:39:43 np0005603609 nova_compute[221550]: 2026-01-31 07:39:43.596 221554 DEBUG nova.network.neutron [req-7593b216-9073-43b4-b21c-113792ebe075 req-527d0a4f-456e-47bf-870b-8fa785d785f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Updating instance_info_cache with network_info: [{"id": "2c9a7c4c-6849-408d-a061-7f9109458d84", "address": "fa:16:3e:09:7f:dc", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9a7c4c-68", "ovs_interfaceid": "2c9a7c4c-6849-408d-a061-7f9109458d84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:39:43 np0005603609 nova_compute[221550]: 2026-01-31 07:39:43.650 221554 DEBUG oslo_concurrency.lockutils [req-7593b216-9073-43b4-b21c-113792ebe075 req-527d0a4f-456e-47bf-870b-8fa785d785f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2622b77a-7331-4b1f-b3f6-b18902724ea8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:39:43 np0005603609 nova_compute[221550]: 2026-01-31 07:39:43.767 221554 INFO nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Creating config drive at /var/lib/nova/instances/2622b77a-7331-4b1f-b3f6-b18902724ea8/disk.config#033[00m
Jan 31 02:39:43 np0005603609 nova_compute[221550]: 2026-01-31 07:39:43.774 221554 DEBUG oslo_concurrency.processutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2622b77a-7331-4b1f-b3f6-b18902724ea8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpkf1vkpal execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:39:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:43.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:43.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:43 np0005603609 nova_compute[221550]: 2026-01-31 07:39:43.899 221554 DEBUG oslo_concurrency.processutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2622b77a-7331-4b1f-b3f6-b18902724ea8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpkf1vkpal" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:39:43 np0005603609 nova_compute[221550]: 2026-01-31 07:39:43.943 221554 DEBUG nova.storage.rbd_utils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 2622b77a-7331-4b1f-b3f6-b18902724ea8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:39:43 np0005603609 nova_compute[221550]: 2026-01-31 07:39:43.949 221554 DEBUG oslo_concurrency.processutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2622b77a-7331-4b1f-b3f6-b18902724ea8/disk.config 2622b77a-7331-4b1f-b3f6-b18902724ea8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:39:44 np0005603609 nova_compute[221550]: 2026-01-31 07:39:44.228 221554 DEBUG oslo_concurrency.processutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2622b77a-7331-4b1f-b3f6-b18902724ea8/disk.config 2622b77a-7331-4b1f-b3f6-b18902724ea8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:39:44 np0005603609 nova_compute[221550]: 2026-01-31 07:39:44.229 221554 INFO nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Deleting local config drive /var/lib/nova/instances/2622b77a-7331-4b1f-b3f6-b18902724ea8/disk.config because it was imported into RBD.#033[00m
Jan 31 02:39:44 np0005603609 kernel: tap2c9a7c4c-68: entered promiscuous mode
Jan 31 02:39:44 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:44Z|00121|binding|INFO|Claiming lport 2c9a7c4c-6849-408d-a061-7f9109458d84 for this chassis.
Jan 31 02:39:44 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:44Z|00122|binding|INFO|2c9a7c4c-6849-408d-a061-7f9109458d84: Claiming fa:16:3e:09:7f:dc 10.100.0.10
Jan 31 02:39:44 np0005603609 NetworkManager[49064]: <info>  [1769845184.2803] manager: (tap2c9a7c4c-68): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Jan 31 02:39:44 np0005603609 nova_compute[221550]: 2026-01-31 07:39:44.281 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:44 np0005603609 nova_compute[221550]: 2026-01-31 07:39:44.317 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:44 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:44Z|00123|binding|INFO|Setting lport 2c9a7c4c-6849-408d-a061-7f9109458d84 ovn-installed in OVS
Jan 31 02:39:44 np0005603609 nova_compute[221550]: 2026-01-31 07:39:44.322 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e173 e173: 3 total, 3 up, 3 in
Jan 31 02:39:44 np0005603609 systemd-machined[190912]: New machine qemu-21-instance-00000029.
Jan 31 02:39:44 np0005603609 systemd-udevd[236913]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:39:44 np0005603609 systemd[1]: Started Virtual Machine qemu-21-instance-00000029.
Jan 31 02:39:44 np0005603609 NetworkManager[49064]: <info>  [1769845184.3599] device (tap2c9a7c4c-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:39:44 np0005603609 NetworkManager[49064]: <info>  [1769845184.3612] device (tap2c9a7c4c-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:39:44 np0005603609 podman[236894]: 2026-01-31 07:39:44.413798636 +0000 UTC m=+0.102677412 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 02:39:44 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:44Z|00124|binding|INFO|Setting lport 2c9a7c4c-6849-408d-a061-7f9109458d84 up in Southbound
Jan 31 02:39:44 np0005603609 podman[236892]: 2026-01-31 07:39:44.444411375 +0000 UTC m=+0.134436630 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.445 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:7f:dc 10.100.0.10'], port_security=['fa:16:3e:09:7f:dc 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2622b77a-7331-4b1f-b3f6-b18902724ea8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cffffabd-62a6-4362-9315-bd726adce623', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd60d680e-d6aa-48ac-a8a2-519ea9a8ff01', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a503d6-c9cb-4329-87a2-a939359a3572, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=2c9a7c4c-6849-408d-a061-7f9109458d84) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.447 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 2c9a7c4c-6849-408d-a061-7f9109458d84 in datapath cffffabd-62a6-4362-9315-bd726adce623 bound to our chassis#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.450 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cffffabd-62a6-4362-9315-bd726adce623#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.460 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c477fcc1-7a88-40cc-8243-79eb2bdddd72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.461 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcffffabd-61 in ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.464 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcffffabd-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.464 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c35b5f-f702-44c8-81d0-2b2600d60691]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.466 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a3479f75-158f-47ea-ab62-7597fa28e811]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.476 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[fb15c894-106a-4e48-a933-56869e5c5bd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.499 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7f926f-5f8c-4c6e-bd2d-90a0fcce0334]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.522 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[0aebf812-ccd3-4c08-b240-effb1ffb50c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:44 np0005603609 NetworkManager[49064]: <info>  [1769845184.5290] manager: (tapcffffabd-60): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.531 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[af14fd6e-72b3-4924-9f3b-0a4e29fdc472]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.560 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[9725b136-b548-43b6-8416-7799c2a179e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.564 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c229f91b-2264-41ee-bab7-c8d23cac01b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.567 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:44 np0005603609 NetworkManager[49064]: <info>  [1769845184.5816] device (tapcffffabd-60): carrier: link connected
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.587 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[2529bf00-261b-4fd6-92f8-1b796ff180bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.604 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[796c57c4-4ccc-4e5e-bbc0-1336ee13a1f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcffffabd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:96:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542428, 'reachable_time': 17050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236999, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.617 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[959207ff-337d-40b3-95b1-06678ec9a5b2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:96c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 542428, 'tstamp': 542428}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237008, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.629 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c65f66b9-2647-40bc-a917-2439ae4d71f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcffffabd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:96:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542428, 'reachable_time': 17050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 237012, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.654 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2a8dca81-1696-4ace-8830-db474f44c199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.713 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e93102-7fc8-449e-ade8-6e5a6861b0a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.715 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcffffabd-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.715 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.716 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcffffabd-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:44 np0005603609 NetworkManager[49064]: <info>  [1769845184.7441] manager: (tapcffffabd-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Jan 31 02:39:44 np0005603609 kernel: tapcffffabd-60: entered promiscuous mode
Jan 31 02:39:44 np0005603609 nova_compute[221550]: 2026-01-31 07:39:44.743 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:44 np0005603609 nova_compute[221550]: 2026-01-31 07:39:44.747 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845184.7434103, 2622b77a-7331-4b1f-b3f6-b18902724ea8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:39:44 np0005603609 nova_compute[221550]: 2026-01-31 07:39:44.747 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] VM Started (Lifecycle Event)#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.748 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcffffabd-60, col_values=(('external_ids', {'iface-id': '549e70cf-ed02-45f9-9021-3a04088f580f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:44 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:44Z|00125|binding|INFO|Releasing lport 549e70cf-ed02-45f9-9021-3a04088f580f from this chassis (sb_readonly=0)
Jan 31 02:39:44 np0005603609 nova_compute[221550]: 2026-01-31 07:39:44.751 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.754 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.756 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8508b58b-0afd-4571-8820-94d9e9798964]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:44 np0005603609 nova_compute[221550]: 2026-01-31 07:39:44.756 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.758 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-cffffabd-62a6-4362-9315-bd726adce623
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID cffffabd-62a6-4362-9315-bd726adce623
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:39:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:44.760 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'env', 'PROCESS_TAG=haproxy-cffffabd-62a6-4362-9315-bd726adce623', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cffffabd-62a6-4362-9315-bd726adce623.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:39:44 np0005603609 nova_compute[221550]: 2026-01-31 07:39:44.803 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:39:44 np0005603609 nova_compute[221550]: 2026-01-31 07:39:44.808 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845184.750896, 2622b77a-7331-4b1f-b3f6-b18902724ea8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:39:44 np0005603609 nova_compute[221550]: 2026-01-31 07:39:44.808 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:39:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:39:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1935598199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:39:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:39:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1935598199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:39:44 np0005603609 nova_compute[221550]: 2026-01-31 07:39:44.897 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:39:44 np0005603609 nova_compute[221550]: 2026-01-31 07:39:44.899 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:39:44 np0005603609 nova_compute[221550]: 2026-01-31 07:39:44.934 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:39:45 np0005603609 podman[237050]: 2026-01-31 07:39:45.078984201 +0000 UTC m=+0.054292470 container create 7042f82a2576e79e3acd5be486cc2c7a6dc8bb07d6698d96e462c2a9ae1ddd5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:39:45 np0005603609 systemd[1]: Started libpod-conmon-7042f82a2576e79e3acd5be486cc2c7a6dc8bb07d6698d96e462c2a9ae1ddd5d.scope.
Jan 31 02:39:45 np0005603609 podman[237050]: 2026-01-31 07:39:45.045273206 +0000 UTC m=+0.020581515 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:39:45 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:39:45 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16103c84ede444b53d6a89630ea28ab349182b0b6c42991b3e7cafc0ec97a3e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:45 np0005603609 podman[237050]: 2026-01-31 07:39:45.169350542 +0000 UTC m=+0.144658851 container init 7042f82a2576e79e3acd5be486cc2c7a6dc8bb07d6698d96e462c2a9ae1ddd5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:39:45 np0005603609 podman[237050]: 2026-01-31 07:39:45.17460451 +0000 UTC m=+0.149912819 container start 7042f82a2576e79e3acd5be486cc2c7a6dc8bb07d6698d96e462c2a9ae1ddd5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:39:45 np0005603609 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[237066]: [NOTICE]   (237070) : New worker (237072) forked
Jan 31 02:39:45 np0005603609 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[237066]: [NOTICE]   (237070) : Loading success.
Jan 31 02:39:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:45.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:45.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:46 np0005603609 nova_compute[221550]: 2026-01-31 07:39:46.457 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:47 np0005603609 nova_compute[221550]: 2026-01-31 07:39:47.745 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:47.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:47.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.095 221554 DEBUG nova.compute.manager [req-9a94312e-8d35-4a92-8aad-12406d9224d4 req-32556bc6-4334-450c-ba2a-5b40c73ca756 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Received event network-vif-plugged-2c9a7c4c-6849-408d-a061-7f9109458d84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.096 221554 DEBUG oslo_concurrency.lockutils [req-9a94312e-8d35-4a92-8aad-12406d9224d4 req-32556bc6-4334-450c-ba2a-5b40c73ca756 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.097 221554 DEBUG oslo_concurrency.lockutils [req-9a94312e-8d35-4a92-8aad-12406d9224d4 req-32556bc6-4334-450c-ba2a-5b40c73ca756 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.097 221554 DEBUG oslo_concurrency.lockutils [req-9a94312e-8d35-4a92-8aad-12406d9224d4 req-32556bc6-4334-450c-ba2a-5b40c73ca756 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.097 221554 DEBUG nova.compute.manager [req-9a94312e-8d35-4a92-8aad-12406d9224d4 req-32556bc6-4334-450c-ba2a-5b40c73ca756 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Processing event network-vif-plugged-2c9a7c4c-6849-408d-a061-7f9109458d84 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.098 221554 DEBUG nova.compute.manager [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.102 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845188.101835, 2622b77a-7331-4b1f-b3f6-b18902724ea8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.102 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.105 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.108 221554 INFO nova.virt.libvirt.driver [-] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Instance spawned successfully.#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.109 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.219 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.224 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.224 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.225 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.225 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.226 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.226 221554 DEBUG nova.virt.libvirt.driver [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.231 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.313 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.396 221554 INFO nova.compute.manager [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Took 14.33 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.396 221554 DEBUG nova.compute.manager [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.520 221554 INFO nova.compute.manager [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Took 15.67 seconds to build instance.#033[00m
Jan 31 02:39:48 np0005603609 nova_compute[221550]: 2026-01-31 07:39:48.602 221554 DEBUG oslo_concurrency.lockutils [None req-e3859245-86ce-4853-ba82-cf332f68dbad 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "2622b77a-7331-4b1f-b3f6-b18902724ea8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:49 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:49Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:af:41 10.100.0.7
Jan 31 02:39:49 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:49Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:af:41 10.100.0.7
Jan 31 02:39:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e174 e174: 3 total, 3 up, 3 in
Jan 31 02:39:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:49.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:49.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:50 np0005603609 nova_compute[221550]: 2026-01-31 07:39:50.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:39:50 np0005603609 nova_compute[221550]: 2026-01-31 07:39:50.867 221554 DEBUG oslo_concurrency.lockutils [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:50 np0005603609 nova_compute[221550]: 2026-01-31 07:39:50.868 221554 DEBUG oslo_concurrency.lockutils [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:50 np0005603609 nova_compute[221550]: 2026-01-31 07:39:50.868 221554 DEBUG oslo_concurrency.lockutils [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:50 np0005603609 nova_compute[221550]: 2026-01-31 07:39:50.868 221554 DEBUG oslo_concurrency.lockutils [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:50 np0005603609 nova_compute[221550]: 2026-01-31 07:39:50.868 221554 DEBUG oslo_concurrency.lockutils [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:50 np0005603609 nova_compute[221550]: 2026-01-31 07:39:50.869 221554 INFO nova.compute.manager [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Terminating instance#033[00m
Jan 31 02:39:50 np0005603609 nova_compute[221550]: 2026-01-31 07:39:50.870 221554 DEBUG nova.compute.manager [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:39:50 np0005603609 nova_compute[221550]: 2026-01-31 07:39:50.917 221554 DEBUG nova.compute.manager [req-c9d5b820-4a47-40aa-b2b8-d56c09c24923 req-fabdcf70-773f-4f9a-ad72-cf6edf56bb71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Received event network-vif-plugged-2c9a7c4c-6849-408d-a061-7f9109458d84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:39:50 np0005603609 nova_compute[221550]: 2026-01-31 07:39:50.918 221554 DEBUG oslo_concurrency.lockutils [req-c9d5b820-4a47-40aa-b2b8-d56c09c24923 req-fabdcf70-773f-4f9a-ad72-cf6edf56bb71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:50 np0005603609 nova_compute[221550]: 2026-01-31 07:39:50.918 221554 DEBUG oslo_concurrency.lockutils [req-c9d5b820-4a47-40aa-b2b8-d56c09c24923 req-fabdcf70-773f-4f9a-ad72-cf6edf56bb71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:50 np0005603609 nova_compute[221550]: 2026-01-31 07:39:50.918 221554 DEBUG oslo_concurrency.lockutils [req-c9d5b820-4a47-40aa-b2b8-d56c09c24923 req-fabdcf70-773f-4f9a-ad72-cf6edf56bb71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:50 np0005603609 nova_compute[221550]: 2026-01-31 07:39:50.918 221554 DEBUG nova.compute.manager [req-c9d5b820-4a47-40aa-b2b8-d56c09c24923 req-fabdcf70-773f-4f9a-ad72-cf6edf56bb71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] No waiting events found dispatching network-vif-plugged-2c9a7c4c-6849-408d-a061-7f9109458d84 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:39:50 np0005603609 nova_compute[221550]: 2026-01-31 07:39:50.918 221554 WARNING nova.compute.manager [req-c9d5b820-4a47-40aa-b2b8-d56c09c24923 req-fabdcf70-773f-4f9a-ad72-cf6edf56bb71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Received unexpected event network-vif-plugged-2c9a7c4c-6849-408d-a061-7f9109458d84 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:39:50 np0005603609 kernel: tape917fc77-97 (unregistering): left promiscuous mode
Jan 31 02:39:50 np0005603609 NetworkManager[49064]: <info>  [1769845190.9939] device (tape917fc77-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:39:51 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:51Z|00126|binding|INFO|Releasing lport e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 from this chassis (sb_readonly=0)
Jan 31 02:39:51 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:51Z|00127|binding|INFO|Setting lport e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 down in Southbound
Jan 31 02:39:51 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:51Z|00128|binding|INFO|Removing iface tape917fc77-97 ovn-installed in OVS
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.003 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.007 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:51 np0005603609 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000024.scope: Deactivated successfully.
Jan 31 02:39:51 np0005603609 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000024.scope: Consumed 15.977s CPU time.
Jan 31 02:39:51 np0005603609 systemd-machined[190912]: Machine qemu-18-instance-00000024 terminated.
Jan 31 02:39:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:51.049 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:8d:be 10.100.0.4'], port_security=['fa:16:3e:d7:8d:be 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '33cde392-20ea-4fd7-88d1-f66b9d14e19a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0554655ad0a48c8bf0551298dd31919', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b25af7c2-ecfe-428a-9b4f-51874d47219e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3f23c6f-f389-4487-9d19-0cf4a6c28cbc, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=e917fc77-9783-4d44-9cc1-c8ddd25bb8e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:39:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:51.050 140058 INFO neutron.agent.ovn.metadata.agent [-] Port e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 in datapath 8c92e27e-f16c-4df2-a299-60ef2ca44f53 unbound from our chassis#033[00m
Jan 31 02:39:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:51.052 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c92e27e-f16c-4df2-a299-60ef2ca44f53#033[00m
Jan 31 02:39:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:51.060 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1d3bdf8c-9c44-4202-ae6e-73133d5be73c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:51.086 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[891ce129-7c78-4cbb-acf1-439089184aa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:51.090 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[60189f62-059a-40ff-af6f-a89215c02891]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.111 221554 INFO nova.virt.libvirt.driver [-] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Instance destroyed successfully.#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.112 221554 DEBUG nova.objects.instance [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'resources' on Instance uuid 33cde392-20ea-4fd7-88d1-f66b9d14e19a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:39:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:51.113 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[1e33f604-4153-4191-a549-68f0a8f1fcd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:51.124 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2376d0e9-8079-40f1-8eb0-fdf6e1ad761f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c92e27e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:62:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531475, 'reachable_time': 25068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237105, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:51.135 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[623f7eb8-fb1e-44bb-a3eb-6d923c0b7a4f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c92e27e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531483, 'tstamp': 531483}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237106, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c92e27e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531485, 'tstamp': 531485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237106, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:51.137 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c92e27e-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.138 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.142 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:51.142 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c92e27e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:51.143 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:39:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:51.143 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c92e27e-f0, col_values=(('external_ids', {'iface-id': 'b682c189-93d2-4c14-8b2a-bafbda6df8a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:51.143 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.219 221554 DEBUG nova.virt.libvirt.vif [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:38:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1844120546',display_name='tempest-ServersAdminTestJSON-server-1844120546',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1844120546',id=36,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:38:18Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-9kljyvde',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTestJSON-1156607975-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:38:18Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=33cde392-20ea-4fd7-88d1-f66b9d14e19a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "address": "fa:16:3e:d7:8d:be", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape917fc77-97", "ovs_interfaceid": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.220 221554 DEBUG nova.network.os_vif_util [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "address": "fa:16:3e:d7:8d:be", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape917fc77-97", "ovs_interfaceid": "e917fc77-9783-4d44-9cc1-c8ddd25bb8e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.221 221554 DEBUG nova.network.os_vif_util [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=e917fc77-9783-4d44-9cc1-c8ddd25bb8e4,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape917fc77-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.221 221554 DEBUG os_vif [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=e917fc77-9783-4d44-9cc1-c8ddd25bb8e4,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape917fc77-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.223 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.224 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape917fc77-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.229 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.231 221554 INFO os_vif [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=e917fc77-9783-4d44-9cc1-c8ddd25bb8e4,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape917fc77-97')#033[00m
Jan 31 02:39:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.458 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.835 221554 DEBUG nova.objects.instance [None req-9eda091f-0647-4bef-a132-eaab5897f5e8 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2622b77a-7331-4b1f-b3f6-b18902724ea8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:39:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:51.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:51.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.923 221554 DEBUG nova.compute.manager [req-250d2f29-bf2d-4669-af74-b6394e7de691 req-cb2f29b5-5115-4d40-872d-0d391f58bd96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Received event network-vif-unplugged-e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.923 221554 DEBUG oslo_concurrency.lockutils [req-250d2f29-bf2d-4669-af74-b6394e7de691 req-cb2f29b5-5115-4d40-872d-0d391f58bd96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.924 221554 DEBUG oslo_concurrency.lockutils [req-250d2f29-bf2d-4669-af74-b6394e7de691 req-cb2f29b5-5115-4d40-872d-0d391f58bd96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.924 221554 DEBUG oslo_concurrency.lockutils [req-250d2f29-bf2d-4669-af74-b6394e7de691 req-cb2f29b5-5115-4d40-872d-0d391f58bd96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.924 221554 DEBUG nova.compute.manager [req-250d2f29-bf2d-4669-af74-b6394e7de691 req-cb2f29b5-5115-4d40-872d-0d391f58bd96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] No waiting events found dispatching network-vif-unplugged-e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.924 221554 DEBUG nova.compute.manager [req-250d2f29-bf2d-4669-af74-b6394e7de691 req-cb2f29b5-5115-4d40-872d-0d391f58bd96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Received event network-vif-unplugged-e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.941 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845191.9413118, 2622b77a-7331-4b1f-b3f6-b18902724ea8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:39:51 np0005603609 nova_compute[221550]: 2026-01-31 07:39:51.942 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.067 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.069 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.212 221554 INFO nova.virt.libvirt.driver [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Deleting instance files /var/lib/nova/instances/33cde392-20ea-4fd7-88d1-f66b9d14e19a_del#033[00m
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.212 221554 INFO nova.virt.libvirt.driver [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Deletion of /var/lib/nova/instances/33cde392-20ea-4fd7-88d1-f66b9d14e19a_del complete#033[00m
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.215 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 31 02:39:52 np0005603609 kernel: tap2c9a7c4c-68 (unregistering): left promiscuous mode
Jan 31 02:39:52 np0005603609 NetworkManager[49064]: <info>  [1769845192.2470] device (tap2c9a7c4c-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:39:52 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:52Z|00129|binding|INFO|Releasing lport 2c9a7c4c-6849-408d-a061-7f9109458d84 from this chassis (sb_readonly=0)
Jan 31 02:39:52 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:52Z|00130|binding|INFO|Setting lport 2c9a7c4c-6849-408d-a061-7f9109458d84 down in Southbound
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.253 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:52 np0005603609 ovn_controller[130359]: 2026-01-31T07:39:52Z|00131|binding|INFO|Removing iface tap2c9a7c4c-68 ovn-installed in OVS
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.255 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.261 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:52 np0005603609 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000029.scope: Deactivated successfully.
Jan 31 02:39:52 np0005603609 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000029.scope: Consumed 4.325s CPU time.
Jan 31 02:39:52 np0005603609 systemd-machined[190912]: Machine qemu-21-instance-00000029 terminated.
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.429 221554 DEBUG nova.compute.manager [None req-9eda091f-0647-4bef-a132-eaab5897f5e8 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:39:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:52.437 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:7f:dc 10.100.0.10'], port_security=['fa:16:3e:09:7f:dc 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2622b77a-7331-4b1f-b3f6-b18902724ea8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cffffabd-62a6-4362-9315-bd726adce623', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd60d680e-d6aa-48ac-a8a2-519ea9a8ff01', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a503d6-c9cb-4329-87a2-a939359a3572, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=2c9a7c4c-6849-408d-a061-7f9109458d84) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:39:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:52.438 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 2c9a7c4c-6849-408d-a061-7f9109458d84 in datapath cffffabd-62a6-4362-9315-bd726adce623 unbound from our chassis#033[00m
Jan 31 02:39:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:52.440 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cffffabd-62a6-4362-9315-bd726adce623, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:39:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:52.441 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[676b71e9-4388-41d5-8ef0-d923efd620c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:52.441 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 namespace which is not needed anymore#033[00m
Jan 31 02:39:52 np0005603609 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[237066]: [NOTICE]   (237070) : haproxy version is 2.8.14-c23fe91
Jan 31 02:39:52 np0005603609 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[237066]: [NOTICE]   (237070) : path to executable is /usr/sbin/haproxy
Jan 31 02:39:52 np0005603609 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[237066]: [WARNING]  (237070) : Exiting Master process...
Jan 31 02:39:52 np0005603609 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[237066]: [ALERT]    (237070) : Current worker (237072) exited with code 143 (Terminated)
Jan 31 02:39:52 np0005603609 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[237066]: [WARNING]  (237070) : All workers exited. Exiting... (0)
Jan 31 02:39:52 np0005603609 systemd[1]: libpod-7042f82a2576e79e3acd5be486cc2c7a6dc8bb07d6698d96e462c2a9ae1ddd5d.scope: Deactivated successfully.
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.548 221554 INFO nova.compute.manager [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Took 1.68 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.549 221554 DEBUG oslo.service.loopingcall [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.549 221554 DEBUG nova.compute.manager [-] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.549 221554 DEBUG nova.network.neutron [-] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:39:52 np0005603609 podman[237162]: 2026-01-31 07:39:52.552008306 +0000 UTC m=+0.041525907 container died 7042f82a2576e79e3acd5be486cc2c7a6dc8bb07d6698d96e462c2a9ae1ddd5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 02:39:52 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7042f82a2576e79e3acd5be486cc2c7a6dc8bb07d6698d96e462c2a9ae1ddd5d-userdata-shm.mount: Deactivated successfully.
Jan 31 02:39:52 np0005603609 systemd[1]: var-lib-containers-storage-overlay-d16103c84ede444b53d6a89630ea28ab349182b0b6c42991b3e7cafc0ec97a3e-merged.mount: Deactivated successfully.
Jan 31 02:39:52 np0005603609 podman[237162]: 2026-01-31 07:39:52.590278492 +0000 UTC m=+0.079796063 container cleanup 7042f82a2576e79e3acd5be486cc2c7a6dc8bb07d6698d96e462c2a9ae1ddd5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:39:52 np0005603609 systemd[1]: libpod-conmon-7042f82a2576e79e3acd5be486cc2c7a6dc8bb07d6698d96e462c2a9ae1ddd5d.scope: Deactivated successfully.
Jan 31 02:39:52 np0005603609 podman[237193]: 2026-01-31 07:39:52.643100755 +0000 UTC m=+0.036660739 container remove 7042f82a2576e79e3acd5be486cc2c7a6dc8bb07d6698d96e462c2a9ae1ddd5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 02:39:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:52.645 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[41d8a812-a695-45e6-bbb8-6d93bd8f1e2d]: (4, ('Sat Jan 31 07:39:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 (7042f82a2576e79e3acd5be486cc2c7a6dc8bb07d6698d96e462c2a9ae1ddd5d)\n7042f82a2576e79e3acd5be486cc2c7a6dc8bb07d6698d96e462c2a9ae1ddd5d\nSat Jan 31 07:39:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 (7042f82a2576e79e3acd5be486cc2c7a6dc8bb07d6698d96e462c2a9ae1ddd5d)\n7042f82a2576e79e3acd5be486cc2c7a6dc8bb07d6698d96e462c2a9ae1ddd5d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:52.647 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3484891c-ab83-412a-b30f-df1357869f73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:52.647 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcffffabd-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.649 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:52 np0005603609 kernel: tapcffffabd-60: left promiscuous mode
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.657 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:39:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:52.659 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f758ace3-e083-463c-be17-9bf2fb69871f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:52.677 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[af4bff44-6eb3-4bd1-955e-4dd82f72aa83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:52.678 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[586a0109-3a72-43c9-9756-27764b3a9bbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:52.691 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8761358e-d132-4341-bc1c-c88aa0a549b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542421, 'reachable_time': 41001, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237211, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:52.693 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:39:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:39:52.693 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[9314bfb0-7573-4885-a4b2-7b96b62000fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:39:52 np0005603609 systemd[1]: run-netns-ovnmeta\x2dcffffabd\x2d62a6\x2d4362\x2d9315\x2dbd726adce623.mount: Deactivated successfully.
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.715 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.716 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.716 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.716 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:39:52 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.716 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:52.999 221554 DEBUG nova.compute.manager [req-02cff844-65ee-4328-894f-9713cbe9ea73 req-e0051d5f-c13b-44c5-833a-cf344d5eb48c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Received event network-vif-unplugged-2c9a7c4c-6849-408d-a061-7f9109458d84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.000 221554 DEBUG oslo_concurrency.lockutils [req-02cff844-65ee-4328-894f-9713cbe9ea73 req-e0051d5f-c13b-44c5-833a-cf344d5eb48c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.001 221554 DEBUG oslo_concurrency.lockutils [req-02cff844-65ee-4328-894f-9713cbe9ea73 req-e0051d5f-c13b-44c5-833a-cf344d5eb48c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.001 221554 DEBUG oslo_concurrency.lockutils [req-02cff844-65ee-4328-894f-9713cbe9ea73 req-e0051d5f-c13b-44c5-833a-cf344d5eb48c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.002 221554 DEBUG nova.compute.manager [req-02cff844-65ee-4328-894f-9713cbe9ea73 req-e0051d5f-c13b-44c5-833a-cf344d5eb48c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] No waiting events found dispatching network-vif-unplugged-2c9a7c4c-6849-408d-a061-7f9109458d84 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.002 221554 WARNING nova.compute.manager [req-02cff844-65ee-4328-894f-9713cbe9ea73 req-e0051d5f-c13b-44c5-833a-cf344d5eb48c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Received unexpected event network-vif-unplugged-2c9a7c4c-6849-408d-a061-7f9109458d84 for instance with vm_state suspended and task_state None.#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.003 221554 DEBUG nova.compute.manager [req-02cff844-65ee-4328-894f-9713cbe9ea73 req-e0051d5f-c13b-44c5-833a-cf344d5eb48c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Received event network-vif-plugged-2c9a7c4c-6849-408d-a061-7f9109458d84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.003 221554 DEBUG oslo_concurrency.lockutils [req-02cff844-65ee-4328-894f-9713cbe9ea73 req-e0051d5f-c13b-44c5-833a-cf344d5eb48c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.004 221554 DEBUG oslo_concurrency.lockutils [req-02cff844-65ee-4328-894f-9713cbe9ea73 req-e0051d5f-c13b-44c5-833a-cf344d5eb48c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.004 221554 DEBUG oslo_concurrency.lockutils [req-02cff844-65ee-4328-894f-9713cbe9ea73 req-e0051d5f-c13b-44c5-833a-cf344d5eb48c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.005 221554 DEBUG nova.compute.manager [req-02cff844-65ee-4328-894f-9713cbe9ea73 req-e0051d5f-c13b-44c5-833a-cf344d5eb48c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] No waiting events found dispatching network-vif-plugged-2c9a7c4c-6849-408d-a061-7f9109458d84 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.005 221554 WARNING nova.compute.manager [req-02cff844-65ee-4328-894f-9713cbe9ea73 req-e0051d5f-c13b-44c5-833a-cf344d5eb48c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Received unexpected event network-vif-plugged-2c9a7c4c-6849-408d-a061-7f9109458d84 for instance with vm_state suspended and task_state None.#033[00m
Jan 31 02:39:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:39:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2608596935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.133 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.247 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000029 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.247 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000029 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.252 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.253 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.296 221554 DEBUG nova.network.neutron [-] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.355 221554 INFO nova.compute.manager [-] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Took 0.81 seconds to deallocate network for instance.#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.387 221554 DEBUG nova.compute.manager [req-19d103f2-01ad-4da2-9ec5-cfcf786312ec req-03b4326f-dfc3-4fd5-897b-33530a85b791 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Received event network-vif-deleted-e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.416 221554 DEBUG oslo_concurrency.lockutils [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.416 221554 DEBUG oslo_concurrency.lockutils [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.496 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.498 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4615MB free_disk=20.741592407226562GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.498 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.566 221554 DEBUG oslo_concurrency.processutils [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:39:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:53.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:53.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:39:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2782953610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.955 221554 DEBUG oslo_concurrency.processutils [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.960 221554 DEBUG nova.compute.provider_tree [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:39:53 np0005603609 nova_compute[221550]: 2026-01-31 07:39:53.992 221554 DEBUG nova.scheduler.client.report [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.004 221554 DEBUG nova.compute.manager [req-31f6ae5e-5f30-4938-b3c9-21f2786d9a96 req-06965ff9-01fc-43dc-b1ad-ac821775480a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Received event network-vif-plugged-e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.005 221554 DEBUG oslo_concurrency.lockutils [req-31f6ae5e-5f30-4938-b3c9-21f2786d9a96 req-06965ff9-01fc-43dc-b1ad-ac821775480a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.005 221554 DEBUG oslo_concurrency.lockutils [req-31f6ae5e-5f30-4938-b3c9-21f2786d9a96 req-06965ff9-01fc-43dc-b1ad-ac821775480a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.006 221554 DEBUG oslo_concurrency.lockutils [req-31f6ae5e-5f30-4938-b3c9-21f2786d9a96 req-06965ff9-01fc-43dc-b1ad-ac821775480a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.006 221554 DEBUG nova.compute.manager [req-31f6ae5e-5f30-4938-b3c9-21f2786d9a96 req-06965ff9-01fc-43dc-b1ad-ac821775480a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] No waiting events found dispatching network-vif-plugged-e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.007 221554 WARNING nova.compute.manager [req-31f6ae5e-5f30-4938-b3c9-21f2786d9a96 req-06965ff9-01fc-43dc-b1ad-ac821775480a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Received unexpected event network-vif-plugged-e917fc77-9783-4d44-9cc1-c8ddd25bb8e4 for instance with vm_state deleted and task_state None.#033[00m
Jan 31 02:39:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e175 e175: 3 total, 3 up, 3 in
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.057 221554 DEBUG oslo_concurrency.lockutils [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.061 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.108 221554 INFO nova.scheduler.client.report [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Deleted allocations for instance 33cde392-20ea-4fd7-88d1-f66b9d14e19a#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.215 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance eed178fc-d0db-47ac-a368-0d3058e94697 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.215 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 2622b77a-7331-4b1f-b3f6-b18902724ea8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.216 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.216 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.226 221554 DEBUG nova.compute.manager [None req-f85632f9-71f1-4bec-a81f-b097b71e192e 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.285 221554 DEBUG oslo_concurrency.lockutils [None req-083cc3d1-d5bc-47c8-8425-515887808a1e 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "33cde392-20ea-4fd7-88d1-f66b9d14e19a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.296 221554 INFO nova.compute.manager [None req-f85632f9-71f1-4bec-a81f-b097b71e192e 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] instance snapshotting#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.296 221554 WARNING nova.compute.manager [None req-f85632f9-71f1-4bec-a81f-b097b71e192e 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.300 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.630 221554 INFO nova.virt.libvirt.driver [None req-f85632f9-71f1-4bec-a81f-b097b71e192e 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Beginning cold snapshot process#033[00m
Jan 31 02:39:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:39:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/579142581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.795 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.800 221554 DEBUG nova.virt.libvirt.imagebackend [None req-f85632f9-71f1-4bec-a81f-b097b71e192e 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.804 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.830 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.855 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:39:54 np0005603609 nova_compute[221550]: 2026-01-31 07:39:54.856 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:55 np0005603609 nova_compute[221550]: 2026-01-31 07:39:55.087 221554 DEBUG nova.storage.rbd_utils [None req-f85632f9-71f1-4bec-a81f-b097b71e192e 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] creating snapshot(2353963c5d484b52a1afc642ecf4cc78) on rbd image(2622b77a-7331-4b1f-b3f6-b18902724ea8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 02:39:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:55.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:55.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e176 e176: 3 total, 3 up, 3 in
Jan 31 02:39:56 np0005603609 nova_compute[221550]: 2026-01-31 07:39:56.171 221554 DEBUG nova.storage.rbd_utils [None req-f85632f9-71f1-4bec-a81f-b097b71e192e 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] cloning vms/2622b77a-7331-4b1f-b3f6-b18902724ea8_disk@2353963c5d484b52a1afc642ecf4cc78 to images/19e1c9c2-3ea0-4549-b950-2d6bc24c2405 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 02:39:56 np0005603609 nova_compute[221550]: 2026-01-31 07:39:56.227 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:56 np0005603609 nova_compute[221550]: 2026-01-31 07:39:56.401 221554 DEBUG nova.storage.rbd_utils [None req-f85632f9-71f1-4bec-a81f-b097b71e192e 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] flattening images/19e1c9c2-3ea0-4549-b950-2d6bc24c2405 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 02:39:56 np0005603609 nova_compute[221550]: 2026-01-31 07:39:56.461 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:39:56 np0005603609 nova_compute[221550]: 2026-01-31 07:39:56.649 221554 DEBUG nova.storage.rbd_utils [None req-f85632f9-71f1-4bec-a81f-b097b71e192e 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] removing snapshot(2353963c5d484b52a1afc642ecf4cc78) on rbd image(2622b77a-7331-4b1f-b3f6-b18902724ea8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 02:39:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e177 e177: 3 total, 3 up, 3 in
Jan 31 02:39:57 np0005603609 nova_compute[221550]: 2026-01-31 07:39:57.266 221554 DEBUG nova.storage.rbd_utils [None req-f85632f9-71f1-4bec-a81f-b097b71e192e 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] creating snapshot(snap) on rbd image(19e1c9c2-3ea0-4549-b950-2d6bc24c2405) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 02:39:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:57.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:57 np0005603609 nova_compute[221550]: 2026-01-31 07:39:57.852 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:39:57 np0005603609 nova_compute[221550]: 2026-01-31 07:39:57.852 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:39:57 np0005603609 nova_compute[221550]: 2026-01-31 07:39:57.853 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:39:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:39:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:57.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:39:58 np0005603609 nova_compute[221550]: 2026-01-31 07:39:58.232 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-eed178fc-d0db-47ac-a368-0d3058e94697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:39:58 np0005603609 nova_compute[221550]: 2026-01-31 07:39:58.233 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-eed178fc-d0db-47ac-a368-0d3058e94697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:39:58 np0005603609 nova_compute[221550]: 2026-01-31 07:39:58.233 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:39:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e178 e178: 3 total, 3 up, 3 in
Jan 31 02:39:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e179 e179: 3 total, 3 up, 3 in
Jan 31 02:39:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:59.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:39:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:59.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:59 np0005603609 nova_compute[221550]: 2026-01-31 07:39:59.894 221554 DEBUG oslo_concurrency.lockutils [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:59 np0005603609 nova_compute[221550]: 2026-01-31 07:39:59.895 221554 DEBUG oslo_concurrency.lockutils [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:59 np0005603609 nova_compute[221550]: 2026-01-31 07:39:59.895 221554 DEBUG oslo_concurrency.lockutils [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:59 np0005603609 nova_compute[221550]: 2026-01-31 07:39:59.895 221554 DEBUG oslo_concurrency.lockutils [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:59 np0005603609 nova_compute[221550]: 2026-01-31 07:39:59.896 221554 DEBUG oslo_concurrency.lockutils [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:59 np0005603609 nova_compute[221550]: 2026-01-31 07:39:59.897 221554 INFO nova.compute.manager [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Terminating instance#033[00m
Jan 31 02:39:59 np0005603609 nova_compute[221550]: 2026-01-31 07:39:59.898 221554 DEBUG nova.compute.manager [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:40:00 np0005603609 kernel: tapbf338813-3c (unregistering): left promiscuous mode
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.060 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:00 np0005603609 NetworkManager[49064]: <info>  [1769845200.0615] device (tapbf338813-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.066 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:00 np0005603609 ovn_controller[130359]: 2026-01-31T07:40:00Z|00132|binding|INFO|Releasing lport bf338813-3c1d-456b-a6fc-b4b2c8235740 from this chassis (sb_readonly=0)
Jan 31 02:40:00 np0005603609 ovn_controller[130359]: 2026-01-31T07:40:00Z|00133|binding|INFO|Setting lport bf338813-3c1d-456b-a6fc-b4b2c8235740 down in Southbound
Jan 31 02:40:00 np0005603609 ovn_controller[130359]: 2026-01-31T07:40:00Z|00134|binding|INFO|Removing iface tapbf338813-3c ovn-installed in OVS
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.068 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.076 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:00 np0005603609 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000021.scope: Deactivated successfully.
Jan 31 02:40:00 np0005603609 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000021.scope: Consumed 13.160s CPU time.
Jan 31 02:40:00 np0005603609 systemd-machined[190912]: Machine qemu-20-instance-00000021 terminated.
Jan 31 02:40:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:00.113 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:af:41 10.100.0.7'], port_security=['fa:16:3e:e7:af:41 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eed178fc-d0db-47ac-a368-0d3058e94697', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0554655ad0a48c8bf0551298dd31919', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b25af7c2-ecfe-428a-9b4f-51874d47219e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3f23c6f-f389-4487-9d19-0cf4a6c28cbc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=bf338813-3c1d-456b-a6fc-b4b2c8235740) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:40:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:00.114 140058 INFO neutron.agent.ovn.metadata.agent [-] Port bf338813-3c1d-456b-a6fc-b4b2c8235740 in datapath 8c92e27e-f16c-4df2-a299-60ef2ca44f53 unbound from our chassis#033[00m
Jan 31 02:40:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:00.115 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8c92e27e-f16c-4df2-a299-60ef2ca44f53, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:40:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:00.115 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[14717068-127b-4517-9d42-39d2dc4ad5c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:00.116 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53 namespace which is not needed anymore#033[00m
Jan 31 02:40:00 np0005603609 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[234647]: [NOTICE]   (234651) : haproxy version is 2.8.14-c23fe91
Jan 31 02:40:00 np0005603609 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[234647]: [NOTICE]   (234651) : path to executable is /usr/sbin/haproxy
Jan 31 02:40:00 np0005603609 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[234647]: [WARNING]  (234651) : Exiting Master process...
Jan 31 02:40:00 np0005603609 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[234647]: [WARNING]  (234651) : Exiting Master process...
Jan 31 02:40:00 np0005603609 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[234647]: [ALERT]    (234651) : Current worker (234653) exited with code 143 (Terminated)
Jan 31 02:40:00 np0005603609 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[234647]: [WARNING]  (234651) : All workers exited. Exiting... (0)
Jan 31 02:40:00 np0005603609 systemd[1]: libpod-207baca9a485698b518111ca8dcf02c1f720951c1af64060cf97ee370359d083.scope: Deactivated successfully.
Jan 31 02:40:00 np0005603609 podman[237447]: 2026-01-31 07:40:00.218507157 +0000 UTC m=+0.043239130 container died 207baca9a485698b518111ca8dcf02c1f720951c1af64060cf97ee370359d083 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 02:40:00 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-207baca9a485698b518111ca8dcf02c1f720951c1af64060cf97ee370359d083-userdata-shm.mount: Deactivated successfully.
Jan 31 02:40:00 np0005603609 systemd[1]: var-lib-containers-storage-overlay-f34e297e6133047fac1065760f47282aaf59b4f908e210e50bc6a797a1f3f30f-merged.mount: Deactivated successfully.
Jan 31 02:40:00 np0005603609 podman[237447]: 2026-01-31 07:40:00.256547557 +0000 UTC m=+0.081279510 container cleanup 207baca9a485698b518111ca8dcf02c1f720951c1af64060cf97ee370359d083 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:40:00 np0005603609 systemd[1]: libpod-conmon-207baca9a485698b518111ca8dcf02c1f720951c1af64060cf97ee370359d083.scope: Deactivated successfully.
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.321 221554 INFO nova.virt.libvirt.driver [None req-f85632f9-71f1-4bec-a81f-b097b71e192e 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Snapshot image upload complete#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.322 221554 INFO nova.compute.manager [None req-f85632f9-71f1-4bec-a81f-b097b71e192e 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Took 6.02 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.332 221554 INFO nova.virt.libvirt.driver [-] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Instance destroyed successfully.#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.332 221554 DEBUG nova.objects.instance [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'resources' on Instance uuid eed178fc-d0db-47ac-a368-0d3058e94697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:40:00 np0005603609 podman[237475]: 2026-01-31 07:40:00.339197229 +0000 UTC m=+0.062205502 container remove 207baca9a485698b518111ca8dcf02c1f720951c1af64060cf97ee370359d083 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 02:40:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:00.344 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8decefb5-7b5c-49a3-9cbe-75d6bdd24cc6]: (4, ('Sat Jan 31 07:40:00 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53 (207baca9a485698b518111ca8dcf02c1f720951c1af64060cf97ee370359d083)\n207baca9a485698b518111ca8dcf02c1f720951c1af64060cf97ee370359d083\nSat Jan 31 07:40:00 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53 (207baca9a485698b518111ca8dcf02c1f720951c1af64060cf97ee370359d083)\n207baca9a485698b518111ca8dcf02c1f720951c1af64060cf97ee370359d083\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:00.347 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5d153320-ad78-49c1-a53a-63436811b9b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:00.348 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c92e27e-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.351 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:00 np0005603609 kernel: tap8c92e27e-f0: left promiscuous mode
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.362 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.367 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:00.370 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b61bbddd-a229-4f32-bdd5-3e66ce7b6db9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:00.384 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ddcf19-0884-41c2-acfb-b20e09865cb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:00.386 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d40b718b-56b4-4cc3-809e-357ebd71af36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:00 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 02:40:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:00.399 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[36c4a9e5-7e76-4a93-9092-1b784e480160]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531468, 'reachable_time': 15967, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237511, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:00 np0005603609 systemd[1]: run-netns-ovnmeta\x2d8c92e27e\x2df16c\x2d4df2\x2da299\x2d60ef2ca44f53.mount: Deactivated successfully.
Jan 31 02:40:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:00.402 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:40:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:00.402 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce5f0d9-1bea-4a4c-9896-572c79d281f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.580 221554 DEBUG nova.compute.manager [req-434acfd4-3ac3-4fa0-88aa-d86e23bcc955 req-bc30ae83-4e99-457f-a606-b14fdc6cb080 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received event network-vif-unplugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.581 221554 DEBUG oslo_concurrency.lockutils [req-434acfd4-3ac3-4fa0-88aa-d86e23bcc955 req-bc30ae83-4e99-457f-a606-b14fdc6cb080 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.582 221554 DEBUG oslo_concurrency.lockutils [req-434acfd4-3ac3-4fa0-88aa-d86e23bcc955 req-bc30ae83-4e99-457f-a606-b14fdc6cb080 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.582 221554 DEBUG oslo_concurrency.lockutils [req-434acfd4-3ac3-4fa0-88aa-d86e23bcc955 req-bc30ae83-4e99-457f-a606-b14fdc6cb080 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.583 221554 DEBUG nova.compute.manager [req-434acfd4-3ac3-4fa0-88aa-d86e23bcc955 req-bc30ae83-4e99-457f-a606-b14fdc6cb080 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] No waiting events found dispatching network-vif-unplugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.583 221554 DEBUG nova.compute.manager [req-434acfd4-3ac3-4fa0-88aa-d86e23bcc955 req-bc30ae83-4e99-457f-a606-b14fdc6cb080 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received event network-vif-unplugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.587 221554 DEBUG nova.virt.libvirt.vif [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:37:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-540870110',display_name='tempest-ServersAdminTestJSON-server-540870110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-540870110',id=33,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:39:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-naihjech',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vi
rtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTestJSON-1156607975-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:39:41Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=eed178fc-d0db-47ac-a368-0d3058e94697,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.588 221554 DEBUG nova.network.os_vif_util [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.590 221554 DEBUG nova.network.os_vif_util [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.591 221554 DEBUG os_vif [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.594 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.595 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf338813-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.599 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.601 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.608 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.613 221554 INFO os_vif [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:af:41,bridge_name='br-int',has_traffic_filtering=True,id=bf338813-3c1d-456b-a6fc-b4b2c8235740,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf338813-3c')#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.661 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Updating instance_info_cache with network_info: [{"id": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "address": "fa:16:3e:e7:af:41", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf338813-3c", "ovs_interfaceid": "bf338813-3c1d-456b-a6fc-b4b2c8235740", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.813 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-eed178fc-d0db-47ac-a368-0d3058e94697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.814 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.815 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.816 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.816 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.816 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:40:00 np0005603609 nova_compute[221550]: 2026-01-31 07:40:00.817 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:40:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:40:01 np0005603609 nova_compute[221550]: 2026-01-31 07:40:01.462 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:01 np0005603609 nova_compute[221550]: 2026-01-31 07:40:01.619 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:40:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:40:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:01.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:40:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:01.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:02 np0005603609 nova_compute[221550]: 2026-01-31 07:40:02.770 221554 DEBUG nova.compute.manager [req-a6780070-14a0-4dd0-bae1-a6a8521a95b2 req-81a02c99-cc07-427f-ba9d-d11589260f05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:40:02 np0005603609 nova_compute[221550]: 2026-01-31 07:40:02.770 221554 DEBUG oslo_concurrency.lockutils [req-a6780070-14a0-4dd0-bae1-a6a8521a95b2 req-81a02c99-cc07-427f-ba9d-d11589260f05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:40:02 np0005603609 nova_compute[221550]: 2026-01-31 07:40:02.770 221554 DEBUG oslo_concurrency.lockutils [req-a6780070-14a0-4dd0-bae1-a6a8521a95b2 req-81a02c99-cc07-427f-ba9d-d11589260f05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:40:02 np0005603609 nova_compute[221550]: 2026-01-31 07:40:02.770 221554 DEBUG oslo_concurrency.lockutils [req-a6780070-14a0-4dd0-bae1-a6a8521a95b2 req-81a02c99-cc07-427f-ba9d-d11589260f05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:02 np0005603609 nova_compute[221550]: 2026-01-31 07:40:02.771 221554 DEBUG nova.compute.manager [req-a6780070-14a0-4dd0-bae1-a6a8521a95b2 req-81a02c99-cc07-427f-ba9d-d11589260f05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] No waiting events found dispatching network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:40:02 np0005603609 nova_compute[221550]: 2026-01-31 07:40:02.771 221554 WARNING nova.compute.manager [req-a6780070-14a0-4dd0-bae1-a6a8521a95b2 req-81a02c99-cc07-427f-ba9d-d11589260f05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received unexpected event network-vif-plugged-bf338813-3c1d-456b-a6fc-b4b2c8235740 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:40:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e180 e180: 3 total, 3 up, 3 in
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.388 221554 INFO nova.virt.libvirt.driver [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Deleting instance files /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697_del#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.389 221554 INFO nova.virt.libvirt.driver [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Deletion of /var/lib/nova/instances/eed178fc-d0db-47ac-a368-0d3058e94697_del complete#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.535 221554 INFO nova.compute.manager [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Took 3.64 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.535 221554 DEBUG oslo.service.loopingcall [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.536 221554 DEBUG nova.compute.manager [-] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.536 221554 DEBUG nova.network.neutron [-] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:40:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e181 e181: 3 total, 3 up, 3 in
Jan 31 02:40:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:03.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:40:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:03.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.926 221554 DEBUG oslo_concurrency.lockutils [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "2622b77a-7331-4b1f-b3f6-b18902724ea8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.926 221554 DEBUG oslo_concurrency.lockutils [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "2622b77a-7331-4b1f-b3f6-b18902724ea8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.927 221554 DEBUG oslo_concurrency.lockutils [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.927 221554 DEBUG oslo_concurrency.lockutils [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.928 221554 DEBUG oslo_concurrency.lockutils [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "2622b77a-7331-4b1f-b3f6-b18902724ea8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.929 221554 INFO nova.compute.manager [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Terminating instance#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.931 221554 DEBUG nova.compute.manager [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.945 221554 INFO nova.virt.libvirt.driver [-] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Instance destroyed successfully.#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.946 221554 DEBUG nova.objects.instance [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'resources' on Instance uuid 2622b77a-7331-4b1f-b3f6-b18902724ea8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.973 221554 DEBUG nova.virt.libvirt.vif [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:39:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-107615132',display_name='tempest-ImagesTestJSON-server-107615132',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-107615132',id=41,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:39:48Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-boo162pk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:40:00Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=2622b77a-7331-4b1f-b3f6-b18902724ea8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "2c9a7c4c-6849-408d-a061-7f9109458d84", "address": "fa:16:3e:09:7f:dc", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9a7c4c-68", "ovs_interfaceid": "2c9a7c4c-6849-408d-a061-7f9109458d84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.974 221554 DEBUG nova.network.os_vif_util [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "2c9a7c4c-6849-408d-a061-7f9109458d84", "address": "fa:16:3e:09:7f:dc", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9a7c4c-68", "ovs_interfaceid": "2c9a7c4c-6849-408d-a061-7f9109458d84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.975 221554 DEBUG nova.network.os_vif_util [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:7f:dc,bridge_name='br-int',has_traffic_filtering=True,id=2c9a7c4c-6849-408d-a061-7f9109458d84,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c9a7c4c-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.975 221554 DEBUG os_vif [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:7f:dc,bridge_name='br-int',has_traffic_filtering=True,id=2c9a7c4c-6849-408d-a061-7f9109458d84,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c9a7c4c-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.977 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.977 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c9a7c4c-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.980 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:03 np0005603609 nova_compute[221550]: 2026-01-31 07:40:03.984 221554 INFO os_vif [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:7f:dc,bridge_name='br-int',has_traffic_filtering=True,id=2c9a7c4c-6849-408d-a061-7f9109458d84,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c9a7c4c-68')#033[00m
Jan 31 02:40:04 np0005603609 nova_compute[221550]: 2026-01-31 07:40:04.581 221554 INFO nova.virt.libvirt.driver [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Deleting instance files /var/lib/nova/instances/2622b77a-7331-4b1f-b3f6-b18902724ea8_del#033[00m
Jan 31 02:40:04 np0005603609 nova_compute[221550]: 2026-01-31 07:40:04.582 221554 INFO nova.virt.libvirt.driver [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Deletion of /var/lib/nova/instances/2622b77a-7331-4b1f-b3f6-b18902724ea8_del complete#033[00m
Jan 31 02:40:04 np0005603609 nova_compute[221550]: 2026-01-31 07:40:04.698 221554 DEBUG nova.network.neutron [-] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:40:04 np0005603609 nova_compute[221550]: 2026-01-31 07:40:04.712 221554 INFO nova.compute.manager [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:40:04 np0005603609 nova_compute[221550]: 2026-01-31 07:40:04.713 221554 DEBUG oslo.service.loopingcall [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:40:04 np0005603609 nova_compute[221550]: 2026-01-31 07:40:04.714 221554 DEBUG nova.compute.manager [-] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:40:04 np0005603609 nova_compute[221550]: 2026-01-31 07:40:04.714 221554 DEBUG nova.network.neutron [-] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:40:04 np0005603609 nova_compute[221550]: 2026-01-31 07:40:04.728 221554 INFO nova.compute.manager [-] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Took 1.19 seconds to deallocate network for instance.#033[00m
Jan 31 02:40:04 np0005603609 nova_compute[221550]: 2026-01-31 07:40:04.795 221554 DEBUG oslo_concurrency.lockutils [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:40:04 np0005603609 nova_compute[221550]: 2026-01-31 07:40:04.796 221554 DEBUG oslo_concurrency.lockutils [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:40:04 np0005603609 nova_compute[221550]: 2026-01-31 07:40:04.880 221554 DEBUG nova.compute.manager [req-f36843e3-ab0a-44ce-9cfc-86bbf0af5350 req-8a39ad25-b464-4a7d-af95-6745488ffce8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Received event network-vif-deleted-bf338813-3c1d-456b-a6fc-b4b2c8235740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:40:04 np0005603609 nova_compute[221550]: 2026-01-31 07:40:04.882 221554 DEBUG oslo_concurrency.processutils [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:40:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:40:05 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1676011270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:40:05 np0005603609 nova_compute[221550]: 2026-01-31 07:40:05.393 221554 DEBUG oslo_concurrency.processutils [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:40:05 np0005603609 nova_compute[221550]: 2026-01-31 07:40:05.401 221554 DEBUG nova.compute.provider_tree [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:40:05 np0005603609 nova_compute[221550]: 2026-01-31 07:40:05.430 221554 DEBUG nova.scheduler.client.report [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:40:05 np0005603609 nova_compute[221550]: 2026-01-31 07:40:05.488 221554 DEBUG oslo_concurrency.lockutils [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:05 np0005603609 nova_compute[221550]: 2026-01-31 07:40:05.543 221554 INFO nova.scheduler.client.report [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Deleted allocations for instance eed178fc-d0db-47ac-a368-0d3058e94697#033[00m
Jan 31 02:40:05 np0005603609 nova_compute[221550]: 2026-01-31 07:40:05.669 221554 DEBUG oslo_concurrency.lockutils [None req-11c1fc70-66e0-4640-8d29-aa1eec0cbc71 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "eed178fc-d0db-47ac-a368-0d3058e94697" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:05.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:05.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:06 np0005603609 nova_compute[221550]: 2026-01-31 07:40:06.032 221554 DEBUG nova.network.neutron [-] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:40:06 np0005603609 nova_compute[221550]: 2026-01-31 07:40:06.100 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845191.0989127, 33cde392-20ea-4fd7-88d1-f66b9d14e19a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:40:06 np0005603609 nova_compute[221550]: 2026-01-31 07:40:06.101 221554 INFO nova.compute.manager [-] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:40:06 np0005603609 nova_compute[221550]: 2026-01-31 07:40:06.146 221554 INFO nova.compute.manager [-] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Took 1.43 seconds to deallocate network for instance.#033[00m
Jan 31 02:40:06 np0005603609 nova_compute[221550]: 2026-01-31 07:40:06.156 221554 DEBUG nova.compute.manager [None req-e9cd0455-afc3-4479-90a8-faef4a92f97d - - - - - -] [instance: 33cde392-20ea-4fd7-88d1-f66b9d14e19a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:40:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:40:06 np0005603609 nova_compute[221550]: 2026-01-31 07:40:06.327 221554 DEBUG oslo_concurrency.lockutils [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:40:06 np0005603609 nova_compute[221550]: 2026-01-31 07:40:06.328 221554 DEBUG oslo_concurrency.lockutils [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:40:06 np0005603609 nova_compute[221550]: 2026-01-31 07:40:06.378 221554 DEBUG oslo_concurrency.processutils [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:40:06 np0005603609 nova_compute[221550]: 2026-01-31 07:40:06.464 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:40:06 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/979781824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:40:06 np0005603609 nova_compute[221550]: 2026-01-31 07:40:06.796 221554 DEBUG oslo_concurrency.processutils [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:40:06 np0005603609 nova_compute[221550]: 2026-01-31 07:40:06.804 221554 DEBUG nova.compute.provider_tree [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:40:06 np0005603609 nova_compute[221550]: 2026-01-31 07:40:06.845 221554 DEBUG nova.scheduler.client.report [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:40:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e182 e182: 3 total, 3 up, 3 in
Jan 31 02:40:06 np0005603609 nova_compute[221550]: 2026-01-31 07:40:06.908 221554 DEBUG oslo_concurrency.lockutils [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:07 np0005603609 nova_compute[221550]: 2026-01-31 07:40:07.019 221554 DEBUG nova.compute.manager [req-cada5d54-3df7-4263-92fc-24eccdc59521 req-9ffb80db-0cca-4622-88af-a534f004662a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Received event network-vif-deleted-2c9a7c4c-6849-408d-a061-7f9109458d84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:40:07 np0005603609 nova_compute[221550]: 2026-01-31 07:40:07.069 221554 INFO nova.scheduler.client.report [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Deleted allocations for instance 2622b77a-7331-4b1f-b3f6-b18902724ea8#033[00m
Jan 31 02:40:07 np0005603609 nova_compute[221550]: 2026-01-31 07:40:07.316 221554 DEBUG oslo_concurrency.lockutils [None req-72050ea8-e1c6-409b-9512-7aef33aee586 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "2622b77a-7331-4b1f-b3f6-b18902724ea8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.390s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:07 np0005603609 nova_compute[221550]: 2026-01-31 07:40:07.430 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845192.4292703, 2622b77a-7331-4b1f-b3f6-b18902724ea8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:40:07 np0005603609 nova_compute[221550]: 2026-01-31 07:40:07.431 221554 INFO nova.compute.manager [-] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:40:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:07.473 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:40:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:07.473 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:40:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:07.473 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:07 np0005603609 nova_compute[221550]: 2026-01-31 07:40:07.829 221554 DEBUG nova.compute.manager [None req-58c40271-ffb2-447a-a2c2-eea60581fd20 - - - - - -] [instance: 2622b77a-7331-4b1f-b3f6-b18902724ea8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:40:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:07.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:40:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:07.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:40:08 np0005603609 nova_compute[221550]: 2026-01-31 07:40:08.981 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:09.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:40:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:09.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:40:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:40:10 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/694355676' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:40:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:40:10 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/694355676' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:40:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:40:11 np0005603609 nova_compute[221550]: 2026-01-31 07:40:11.466 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:40:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:11.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:40:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:11.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e183 e183: 3 total, 3 up, 3 in
Jan 31 02:40:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:13.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:13.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:13 np0005603609 nova_compute[221550]: 2026-01-31 07:40:13.986 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:15 np0005603609 nova_compute[221550]: 2026-01-31 07:40:15.090 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:15 np0005603609 podman[237596]: 2026-01-31 07:40:15.155354392 +0000 UTC m=+0.040234345 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 02:40:15 np0005603609 podman[237595]: 2026-01-31 07:40:15.18588992 +0000 UTC m=+0.071035279 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:40:15 np0005603609 nova_compute[221550]: 2026-01-31 07:40:15.330 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845200.3288972, eed178fc-d0db-47ac-a368-0d3058e94697 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:40:15 np0005603609 nova_compute[221550]: 2026-01-31 07:40:15.330 221554 INFO nova.compute.manager [-] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:40:15 np0005603609 nova_compute[221550]: 2026-01-31 07:40:15.456 221554 DEBUG nova.compute.manager [None req-888233c4-faa4-4b30-ba20-fae2f6cade71 - - - - - -] [instance: eed178fc-d0db-47ac-a368-0d3058e94697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:40:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:40:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:15.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:40:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:40:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:15.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:40:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:40:16 np0005603609 nova_compute[221550]: 2026-01-31 07:40:16.468 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:17.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:17.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:18 np0005603609 nova_compute[221550]: 2026-01-31 07:40:18.990 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:19.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:40:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:19.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:40:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:40:21 np0005603609 nova_compute[221550]: 2026-01-31 07:40:21.471 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e184 e184: 3 total, 3 up, 3 in
Jan 31 02:40:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:21.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:21.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e185 e185: 3 total, 3 up, 3 in
Jan 31 02:40:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:40:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:23.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:40:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:23.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e186 e186: 3 total, 3 up, 3 in
Jan 31 02:40:23 np0005603609 nova_compute[221550]: 2026-01-31 07:40:23.994 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:24 np0005603609 nova_compute[221550]: 2026-01-31 07:40:24.982 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Acquiring lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:40:24 np0005603609 nova_compute[221550]: 2026-01-31 07:40:24.983 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:40:25 np0005603609 nova_compute[221550]: 2026-01-31 07:40:25.002 221554 DEBUG nova.compute.manager [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:40:25 np0005603609 nova_compute[221550]: 2026-01-31 07:40:25.082 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:40:25 np0005603609 nova_compute[221550]: 2026-01-31 07:40:25.083 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:40:25 np0005603609 nova_compute[221550]: 2026-01-31 07:40:25.089 221554 DEBUG nova.virt.hardware [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:40:25 np0005603609 nova_compute[221550]: 2026-01-31 07:40:25.090 221554 INFO nova.compute.claims [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:40:25 np0005603609 nova_compute[221550]: 2026-01-31 07:40:25.211 221554 DEBUG oslo_concurrency.processutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:40:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:40:25 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3020357619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:40:25 np0005603609 nova_compute[221550]: 2026-01-31 07:40:25.775 221554 DEBUG oslo_concurrency.processutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:40:25 np0005603609 nova_compute[221550]: 2026-01-31 07:40:25.781 221554 DEBUG nova.compute.provider_tree [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:40:25 np0005603609 nova_compute[221550]: 2026-01-31 07:40:25.833 221554 DEBUG nova.scheduler.client.report [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:40:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:25.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:25.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:25 np0005603609 nova_compute[221550]: 2026-01-31 07:40:25.943 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:25 np0005603609 nova_compute[221550]: 2026-01-31 07:40:25.944 221554 DEBUG nova.compute.manager [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.109 221554 DEBUG nova.compute.manager [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.109 221554 DEBUG nova.network.neutron [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:40:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.329 221554 INFO nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.393 221554 DEBUG nova.compute.manager [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.509 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.602 221554 DEBUG nova.policy [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3df48259a0de49fab877d91292b8bbb2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4948c5b7b50412b95fb3ec320267d3a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.615 221554 DEBUG nova.compute.manager [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.617 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.617 221554 INFO nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Creating image(s)#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.648 221554 DEBUG nova.storage.rbd_utils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] rbd image c6bb990f-2315-4d5d-8563-c6d7d6206a58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.679 221554 DEBUG nova.storage.rbd_utils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] rbd image c6bb990f-2315-4d5d-8563-c6d7d6206a58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.707 221554 DEBUG nova.storage.rbd_utils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] rbd image c6bb990f-2315-4d5d-8563-c6d7d6206a58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.710 221554 DEBUG oslo_concurrency.processutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.756 221554 DEBUG oslo_concurrency.processutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.757 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.758 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.759 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.790 221554 DEBUG nova.storage.rbd_utils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] rbd image c6bb990f-2315-4d5d-8563-c6d7d6206a58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:40:26 np0005603609 nova_compute[221550]: 2026-01-31 07:40:26.793 221554 DEBUG oslo_concurrency.processutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c6bb990f-2315-4d5d-8563-c6d7d6206a58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:40:27 np0005603609 nova_compute[221550]: 2026-01-31 07:40:27.050 221554 DEBUG oslo_concurrency.processutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c6bb990f-2315-4d5d-8563-c6d7d6206a58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:40:27 np0005603609 nova_compute[221550]: 2026-01-31 07:40:27.146 221554 DEBUG nova.storage.rbd_utils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] resizing rbd image c6bb990f-2315-4d5d-8563-c6d7d6206a58_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:40:27 np0005603609 nova_compute[221550]: 2026-01-31 07:40:27.259 221554 DEBUG nova.objects.instance [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lazy-loading 'migration_context' on Instance uuid c6bb990f-2315-4d5d-8563-c6d7d6206a58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:40:27 np0005603609 nova_compute[221550]: 2026-01-31 07:40:27.378 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:40:27 np0005603609 nova_compute[221550]: 2026-01-31 07:40:27.379 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Ensure instance console log exists: /var/lib/nova/instances/c6bb990f-2315-4d5d-8563-c6d7d6206a58/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:40:27 np0005603609 nova_compute[221550]: 2026-01-31 07:40:27.379 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:40:27 np0005603609 nova_compute[221550]: 2026-01-31 07:40:27.380 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:40:27 np0005603609 nova_compute[221550]: 2026-01-31 07:40:27.381 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:27.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:27.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:29 np0005603609 nova_compute[221550]: 2026-01-31 07:40:28.998 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:29 np0005603609 nova_compute[221550]: 2026-01-31 07:40:29.139 221554 DEBUG nova.network.neutron [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Successfully created port: d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:40:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:40:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:40:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:40:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:40:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:29.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:40:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:40:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:29.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:40:30 np0005603609 nova_compute[221550]: 2026-01-31 07:40:30.253 221554 DEBUG nova.network.neutron [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Successfully updated port: d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:40:30 np0005603609 nova_compute[221550]: 2026-01-31 07:40:30.276 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Acquiring lock "refresh_cache-c6bb990f-2315-4d5d-8563-c6d7d6206a58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:40:30 np0005603609 nova_compute[221550]: 2026-01-31 07:40:30.276 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Acquired lock "refresh_cache-c6bb990f-2315-4d5d-8563-c6d7d6206a58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:40:30 np0005603609 nova_compute[221550]: 2026-01-31 07:40:30.276 221554 DEBUG nova.network.neutron [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:40:30 np0005603609 nova_compute[221550]: 2026-01-31 07:40:30.432 221554 DEBUG nova.compute.manager [req-8594d072-a429-4465-beeb-b4ddea3f2a62 req-03e7f223-9612-4795-bbe2-3db4c5e60d52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Received event network-changed-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:40:30 np0005603609 nova_compute[221550]: 2026-01-31 07:40:30.432 221554 DEBUG nova.compute.manager [req-8594d072-a429-4465-beeb-b4ddea3f2a62 req-03e7f223-9612-4795-bbe2-3db4c5e60d52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Refreshing instance network info cache due to event network-changed-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:40:30 np0005603609 nova_compute[221550]: 2026-01-31 07:40:30.432 221554 DEBUG oslo_concurrency.lockutils [req-8594d072-a429-4465-beeb-b4ddea3f2a62 req-03e7f223-9612-4795-bbe2-3db4c5e60d52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c6bb990f-2315-4d5d-8563-c6d7d6206a58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:40:31 np0005603609 nova_compute[221550]: 2026-01-31 07:40:31.135 221554 DEBUG nova.network.neutron [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:40:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:40:31 np0005603609 nova_compute[221550]: 2026-01-31 07:40:31.552 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:31.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:31.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.371 221554 DEBUG nova.network.neutron [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Updating instance_info_cache with network_info: [{"id": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "address": "fa:16:3e:db:18:51", "network": {"id": "f3acd4b5-ddea-42f9-a2d5-15d28551c33b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1847664389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4948c5b7b50412b95fb3ec320267d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4c9242f-69", "ovs_interfaceid": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.400 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Releasing lock "refresh_cache-c6bb990f-2315-4d5d-8563-c6d7d6206a58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.401 221554 DEBUG nova.compute.manager [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Instance network_info: |[{"id": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "address": "fa:16:3e:db:18:51", "network": {"id": "f3acd4b5-ddea-42f9-a2d5-15d28551c33b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1847664389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4948c5b7b50412b95fb3ec320267d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4c9242f-69", "ovs_interfaceid": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.401 221554 DEBUG oslo_concurrency.lockutils [req-8594d072-a429-4465-beeb-b4ddea3f2a62 req-03e7f223-9612-4795-bbe2-3db4c5e60d52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c6bb990f-2315-4d5d-8563-c6d7d6206a58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.401 221554 DEBUG nova.network.neutron [req-8594d072-a429-4465-beeb-b4ddea3f2a62 req-03e7f223-9612-4795-bbe2-3db4c5e60d52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Refreshing network info cache for port d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.405 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Start _get_guest_xml network_info=[{"id": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "address": "fa:16:3e:db:18:51", "network": {"id": "f3acd4b5-ddea-42f9-a2d5-15d28551c33b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1847664389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4948c5b7b50412b95fb3ec320267d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4c9242f-69", "ovs_interfaceid": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.410 221554 WARNING nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.414 221554 DEBUG nova.virt.libvirt.host [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.414 221554 DEBUG nova.virt.libvirt.host [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.422 221554 DEBUG nova.virt.libvirt.host [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.423 221554 DEBUG nova.virt.libvirt.host [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.425 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.426 221554 DEBUG nova.virt.hardware [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.427 221554 DEBUG nova.virt.hardware [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.427 221554 DEBUG nova.virt.hardware [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.428 221554 DEBUG nova.virt.hardware [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.428 221554 DEBUG nova.virt.hardware [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.429 221554 DEBUG nova.virt.hardware [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.430 221554 DEBUG nova.virt.hardware [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.430 221554 DEBUG nova.virt.hardware [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.431 221554 DEBUG nova.virt.hardware [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.431 221554 DEBUG nova.virt.hardware [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.432 221554 DEBUG nova.virt.hardware [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:40:32 np0005603609 nova_compute[221550]: 2026-01-31 07:40:32.437 221554 DEBUG oslo_concurrency.processutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:40:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:40:32 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1789897216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:40:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:33.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:33.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:34 np0005603609 nova_compute[221550]: 2026-01-31 07:40:34.001 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:34 np0005603609 nova_compute[221550]: 2026-01-31 07:40:34.223 221554 DEBUG nova.network.neutron [req-8594d072-a429-4465-beeb-b4ddea3f2a62 req-03e7f223-9612-4795-bbe2-3db4c5e60d52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Updated VIF entry in instance network info cache for port d4c9242f-69fc-4fca-8c7c-d7ef12922ec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:40:34 np0005603609 nova_compute[221550]: 2026-01-31 07:40:34.224 221554 DEBUG nova.network.neutron [req-8594d072-a429-4465-beeb-b4ddea3f2a62 req-03e7f223-9612-4795-bbe2-3db4c5e60d52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Updating instance_info_cache with network_info: [{"id": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "address": "fa:16:3e:db:18:51", "network": {"id": "f3acd4b5-ddea-42f9-a2d5-15d28551c33b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1847664389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4948c5b7b50412b95fb3ec320267d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4c9242f-69", "ovs_interfaceid": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:40:34 np0005603609 nova_compute[221550]: 2026-01-31 07:40:34.342 221554 DEBUG oslo_concurrency.lockutils [req-8594d072-a429-4465-beeb-b4ddea3f2a62 req-03e7f223-9612-4795-bbe2-3db4c5e60d52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c6bb990f-2315-4d5d-8563-c6d7d6206a58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:40:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:35.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:35.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:36 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 02:40:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:40:36 np0005603609 nova_compute[221550]: 2026-01-31 07:40:36.609 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 02:40:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:37.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:37 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:37.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:39 np0005603609 nova_compute[221550]: 2026-01-31 07:40:39.043 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).paxos(paxos updating c 2260..2883) lease_timeout -- calling new election
Jan 31 02:40:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 02:40:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:40.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:40 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:40.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:40 np0005603609 ceph-mon[81667]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 31 02:40:40 np0005603609 ceph-mon[81667]: paxos.2).electionLogic(14) init, last seen epoch 14
Jan 31 02:40:40 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 02:40:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:40:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:40:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:40.883 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:40:40 np0005603609 nova_compute[221550]: 2026-01-31 07:40:40.883 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:40.884 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:40:41 np0005603609 nova_compute[221550]: 2026-01-31 07:40:41.611 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:42.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:40:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:42.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:40:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:40:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:44.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:40:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:40:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:44.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:40:44 np0005603609 nova_compute[221550]: 2026-01-31 07:40:44.047 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:40:44 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 02:40:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:40:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:40:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:40:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:40:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:40:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e187 e187: 3 total, 3 up, 3 in
Jan 31 02:40:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:40:45 np0005603609 nova_compute[221550]: 2026-01-31 07:40:45.501 221554 DEBUG oslo_concurrency.processutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 13.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:40:45 np0005603609 podman[238020]: 2026-01-31 07:40:45.853078233 +0000 UTC m=+0.054498684 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 31 02:40:45 np0005603609 podman[238019]: 2026-01-31 07:40:45.882103274 +0000 UTC m=+0.081256399 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 02:40:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:46.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:46.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.186 221554 DEBUG nova.storage.rbd_utils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] rbd image c6bb990f-2315-4d5d-8563-c6d7d6206a58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.191 221554 DEBUG oslo_concurrency.processutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1486128835' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1486128835' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: mon.compute-1 calling monitor election
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: mon.compute-0 calling monitor election
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: mon.compute-0 is new leader, mons compute-0,compute-1 in quorum (ranks 0,2)
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: Health check failed: 1/3 mons down, quorum compute-0,compute-1 (MON_DOWN)
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-1
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-1
Jan 31 02:40:46 np0005603609 ceph-mon[81667]:    mon.compute-2 (rank 1) addr [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] is down (out of quorum)
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2139412687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.587 221554 DEBUG oslo_concurrency.processutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.589 221554 DEBUG nova.virt.libvirt.vif [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:40:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1381501663',display_name='tempest-ServersTestManualDisk-server-1381501663',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1381501663',id=43,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGQlmHmTb3TsM6PyWm6pm5Ps30jDZrow7hg/bOixFhVWU2EkDQ4b1sbnGnRKWfEFdvU53m3ac1kwNTgJ9e8lXFEgqKAKA5DeNKjfgWn2s95AKODiUeTuLh5Av3iAXzVCjw==',key_name='tempest-keypair-1993286664',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4948c5b7b50412b95fb3ec320267d3a',ramdisk_id='',reservation_id='r-98eub7k1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-2020334594',owner_user_name='tempest-ServersTestManualDisk-2020334594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:40:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3df48259a0de49fab877d91292b8bbb2',uuid=c6bb990f-2315-4d5d-8563-c6d7d6206a58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "address": "fa:16:3e:db:18:51", "network": {"id": "f3acd4b5-ddea-42f9-a2d5-15d28551c33b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1847664389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4948c5b7b50412b95fb3ec320267d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4c9242f-69", "ovs_interfaceid": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.590 221554 DEBUG nova.network.os_vif_util [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Converting VIF {"id": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "address": "fa:16:3e:db:18:51", "network": {"id": "f3acd4b5-ddea-42f9-a2d5-15d28551c33b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1847664389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4948c5b7b50412b95fb3ec320267d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4c9242f-69", "ovs_interfaceid": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.591 221554 DEBUG nova.network.os_vif_util [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:18:51,bridge_name='br-int',has_traffic_filtering=True,id=d4c9242f-69fc-4fca-8c7c-d7ef12922ec0,network=Network(f3acd4b5-ddea-42f9-a2d5-15d28551c33b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4c9242f-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.593 221554 DEBUG nova.objects.instance [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lazy-loading 'pci_devices' on Instance uuid c6bb990f-2315-4d5d-8563-c6d7d6206a58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.614 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.620 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  <uuid>c6bb990f-2315-4d5d-8563-c6d7d6206a58</uuid>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  <name>instance-0000002b</name>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServersTestManualDisk-server-1381501663</nova:name>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:40:32</nova:creationTime>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        <nova:user uuid="3df48259a0de49fab877d91292b8bbb2">tempest-ServersTestManualDisk-2020334594-project-member</nova:user>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        <nova:project uuid="c4948c5b7b50412b95fb3ec320267d3a">tempest-ServersTestManualDisk-2020334594</nova:project>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        <nova:port uuid="d4c9242f-69fc-4fca-8c7c-d7ef12922ec0">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <entry name="serial">c6bb990f-2315-4d5d-8563-c6d7d6206a58</entry>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <entry name="uuid">c6bb990f-2315-4d5d-8563-c6d7d6206a58</entry>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/c6bb990f-2315-4d5d-8563-c6d7d6206a58_disk">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/c6bb990f-2315-4d5d-8563-c6d7d6206a58_disk.config">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:db:18:51"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <target dev="tapd4c9242f-69"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/c6bb990f-2315-4d5d-8563-c6d7d6206a58/console.log" append="off"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:40:46 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:40:46 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:40:46 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:40:46 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.622 221554 DEBUG nova.compute.manager [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Preparing to wait for external event network-vif-plugged-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.623 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Acquiring lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.624 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.625 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.626 221554 DEBUG nova.virt.libvirt.vif [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:40:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1381501663',display_name='tempest-ServersTestManualDisk-server-1381501663',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1381501663',id=43,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGQlmHmTb3TsM6PyWm6pm5Ps30jDZrow7hg/bOixFhVWU2EkDQ4b1sbnGnRKWfEFdvU53m3ac1kwNTgJ9e8lXFEgqKAKA5DeNKjfgWn2s95AKODiUeTuLh5Av3iAXzVCjw==',key_name='tempest-keypair-1993286664',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4948c5b7b50412b95fb3ec320267d3a',ramdisk_id='',reservation_id='r-98eub7k1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-2020334594',owner_user_name='tempest-ServersTestManualDisk-2020334594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:40:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3df48259a0de49fab877d91292b8bbb2',uuid=c6bb990f-2315-4d5d-8563-c6d7d6206a58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "address": "fa:16:3e:db:18:51", "network": {"id": "f3acd4b5-ddea-42f9-a2d5-15d28551c33b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1847664389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4948c5b7b50412b95fb3ec320267d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4c9242f-69", "ovs_interfaceid": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.627 221554 DEBUG nova.network.os_vif_util [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Converting VIF {"id": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "address": "fa:16:3e:db:18:51", "network": {"id": "f3acd4b5-ddea-42f9-a2d5-15d28551c33b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1847664389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4948c5b7b50412b95fb3ec320267d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4c9242f-69", "ovs_interfaceid": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.628 221554 DEBUG nova.network.os_vif_util [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:18:51,bridge_name='br-int',has_traffic_filtering=True,id=d4c9242f-69fc-4fca-8c7c-d7ef12922ec0,network=Network(f3acd4b5-ddea-42f9-a2d5-15d28551c33b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4c9242f-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.628 221554 DEBUG os_vif [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:18:51,bridge_name='br-int',has_traffic_filtering=True,id=d4c9242f-69fc-4fca-8c7c-d7ef12922ec0,network=Network(f3acd4b5-ddea-42f9-a2d5-15d28551c33b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4c9242f-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.629 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.630 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.631 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.636 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.637 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4c9242f-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.637 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4c9242f-69, col_values=(('external_ids', {'iface-id': 'd4c9242f-69fc-4fca-8c7c-d7ef12922ec0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:18:51', 'vm-uuid': 'c6bb990f-2315-4d5d-8563-c6d7d6206a58'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.639 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.641 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:40:46 np0005603609 NetworkManager[49064]: <info>  [1769845246.6412] manager: (tapd4c9242f-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.649 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.652 221554 INFO os_vif [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:18:51,bridge_name='br-int',has_traffic_filtering=True,id=d4c9242f-69fc-4fca-8c7c-d7ef12922ec0,network=Network(f3acd4b5-ddea-42f9-a2d5-15d28551c33b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4c9242f-69')#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.744 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.745 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.745 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] No VIF found with MAC fa:16:3e:db:18:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.746 221554 INFO nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Using config drive#033[00m
Jan 31 02:40:46 np0005603609 nova_compute[221550]: 2026-01-31 07:40:46.771 221554 DEBUG nova.storage.rbd_utils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] rbd image c6bb990f-2315-4d5d-8563-c6d7d6206a58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:40:46 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1559322790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:40:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:40:47 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3158256233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:40:47 np0005603609 nova_compute[221550]: 2026-01-31 07:40:47.418 221554 INFO nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Creating config drive at /var/lib/nova/instances/c6bb990f-2315-4d5d-8563-c6d7d6206a58/disk.config#033[00m
Jan 31 02:40:47 np0005603609 nova_compute[221550]: 2026-01-31 07:40:47.423 221554 DEBUG oslo_concurrency.processutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c6bb990f-2315-4d5d-8563-c6d7d6206a58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8p965yr8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:40:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:40:47 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2664734762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:40:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:40:47 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2581716136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:40:47 np0005603609 nova_compute[221550]: 2026-01-31 07:40:47.543 221554 DEBUG oslo_concurrency.processutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c6bb990f-2315-4d5d-8563-c6d7d6206a58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8p965yr8" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:40:47 np0005603609 nova_compute[221550]: 2026-01-31 07:40:47.575 221554 DEBUG nova.storage.rbd_utils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] rbd image c6bb990f-2315-4d5d-8563-c6d7d6206a58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:40:47 np0005603609 nova_compute[221550]: 2026-01-31 07:40:47.579 221554 DEBUG oslo_concurrency.processutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c6bb990f-2315-4d5d-8563-c6d7d6206a58/disk.config c6bb990f-2315-4d5d-8563-c6d7d6206a58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:40:47 np0005603609 nova_compute[221550]: 2026-01-31 07:40:47.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:40:47 np0005603609 nova_compute[221550]: 2026-01-31 07:40:47.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 02:40:47 np0005603609 nova_compute[221550]: 2026-01-31 07:40:47.783 221554 DEBUG oslo_concurrency.processutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c6bb990f-2315-4d5d-8563-c6d7d6206a58/disk.config c6bb990f-2315-4d5d-8563-c6d7d6206a58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:40:47 np0005603609 nova_compute[221550]: 2026-01-31 07:40:47.784 221554 INFO nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Deleting local config drive /var/lib/nova/instances/c6bb990f-2315-4d5d-8563-c6d7d6206a58/disk.config because it was imported into RBD.#033[00m
Jan 31 02:40:47 np0005603609 kernel: tapd4c9242f-69: entered promiscuous mode
Jan 31 02:40:47 np0005603609 NetworkManager[49064]: <info>  [1769845247.8399] manager: (tapd4c9242f-69): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Jan 31 02:40:47 np0005603609 ovn_controller[130359]: 2026-01-31T07:40:47Z|00135|binding|INFO|Claiming lport d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 for this chassis.
Jan 31 02:40:47 np0005603609 ovn_controller[130359]: 2026-01-31T07:40:47Z|00136|binding|INFO|d4c9242f-69fc-4fca-8c7c-d7ef12922ec0: Claiming fa:16:3e:db:18:51 10.100.0.11
Jan 31 02:40:47 np0005603609 nova_compute[221550]: 2026-01-31 07:40:47.909 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:47 np0005603609 nova_compute[221550]: 2026-01-31 07:40:47.914 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:47.925 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:18:51 10.100.0.11'], port_security=['fa:16:3e:db:18:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c6bb990f-2315-4d5d-8563-c6d7d6206a58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3acd4b5-ddea-42f9-a2d5-15d28551c33b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4948c5b7b50412b95fb3ec320267d3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '456943a1-8247-4866-a57e-0b181e0e8a34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb871ec8-0267-4b0e-9bb9-1a86e6634b6e, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=d4c9242f-69fc-4fca-8c7c-d7ef12922ec0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:40:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:47.926 140058 INFO neutron.agent.ovn.metadata.agent [-] Port d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 in datapath f3acd4b5-ddea-42f9-a2d5-15d28551c33b bound to our chassis#033[00m
Jan 31 02:40:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:47.928 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3acd4b5-ddea-42f9-a2d5-15d28551c33b#033[00m
Jan 31 02:40:47 np0005603609 systemd-udevd[238188]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:40:47 np0005603609 ovn_controller[130359]: 2026-01-31T07:40:47Z|00137|binding|INFO|Setting lport d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 ovn-installed in OVS
Jan 31 02:40:47 np0005603609 ovn_controller[130359]: 2026-01-31T07:40:47Z|00138|binding|INFO|Setting lport d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 up in Southbound
Jan 31 02:40:47 np0005603609 systemd-machined[190912]: New machine qemu-22-instance-0000002b.
Jan 31 02:40:47 np0005603609 nova_compute[221550]: 2026-01-31 07:40:47.936 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:47.939 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[da6d4139-4c7b-4c88-999b-cc29d7ce55d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:47.940 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3acd4b5-d1 in ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:40:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:47.941 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3acd4b5-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:40:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:47.941 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d4ae1c-d833-4915-82c8-d1ccc1eb8093]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:47.942 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6d7e20d8-64a0-4388-863c-ee16705b217a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:47 np0005603609 NetworkManager[49064]: <info>  [1769845247.9461] device (tapd4c9242f-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:40:47 np0005603609 NetworkManager[49064]: <info>  [1769845247.9470] device (tapd4c9242f-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:40:47 np0005603609 systemd[1]: Started Virtual Machine qemu-22-instance-0000002b.
Jan 31 02:40:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:47.951 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[a485b7fb-c799-4feb-9b30-c3f7a4efcd9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:47.961 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b2b5a3-88de-4279-bc73-58b565f75c9d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:47.981 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[3289ffad-8468-4958-a899-5fd3addd8eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:47 np0005603609 NetworkManager[49064]: <info>  [1769845247.9869] manager: (tapf3acd4b5-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/65)
Jan 31 02:40:47 np0005603609 systemd-udevd[238192]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:40:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:47.987 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b4efc8b4-dd8b-4282-bbfd-90fae4d90b1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:48.007 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab4dae1-9134-458b-9c6b-9b2395356373]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:48.010 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[53f05fdf-c829-4551-a75e-23e600a1283e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:48.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 02:40:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:48 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:48.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:48 np0005603609 NetworkManager[49064]: <info>  [1769845248.0288] device (tapf3acd4b5-d0): carrier: link connected
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:48.031 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[93df6426-3d7b-4619-977b-1843f9ab574a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:48.044 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb7dab9-fbad-41f1-b492-391fa390c018]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3acd4b5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:1b:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548772, 'reachable_time': 42248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238221, 'error': None, 'target': 'ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:48.053 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[95946aeb-8065-49be-9ca1-8748c7a3a947]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:1b13'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548772, 'tstamp': 548772}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238222, 'error': None, 'target': 'ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:48.065 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0f956b-206c-46b1-bb9d-8f037ee327e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3acd4b5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:1b:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548772, 'reachable_time': 42248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 238223, 'error': None, 'target': 'ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:48.083 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d61bac-8cad-4ba3-adf9-70686c250e3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:48.120 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8838ad96-c5d4-4f68-8660-978b3ec335f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:48.121 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3acd4b5-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:48.121 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:48.122 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3acd4b5-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:40:48 np0005603609 nova_compute[221550]: 2026-01-31 07:40:48.123 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:48 np0005603609 NetworkManager[49064]: <info>  [1769845248.1245] manager: (tapf3acd4b5-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Jan 31 02:40:48 np0005603609 kernel: tapf3acd4b5-d0: entered promiscuous mode
Jan 31 02:40:48 np0005603609 nova_compute[221550]: 2026-01-31 07:40:48.125 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:48.128 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3acd4b5-d0, col_values=(('external_ids', {'iface-id': '5163c10b-b9fa-4c37-95bf-ea62d26f2f26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:40:48 np0005603609 nova_compute[221550]: 2026-01-31 07:40:48.129 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:40:48Z|00139|binding|INFO|Releasing lport 5163c10b-b9fa-4c37-95bf-ea62d26f2f26 from this chassis (sb_readonly=0)
Jan 31 02:40:48 np0005603609 nova_compute[221550]: 2026-01-31 07:40:48.129 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:48.130 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3acd4b5-ddea-42f9-a2d5-15d28551c33b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3acd4b5-ddea-42f9-a2d5-15d28551c33b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:48.132 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3c6763c8-3fe8-41b4-ac12-e93cf33ca2ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:48.132 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-f3acd4b5-ddea-42f9-a2d5-15d28551c33b
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/f3acd4b5-ddea-42f9-a2d5-15d28551c33b.pid.haproxy
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID f3acd4b5-ddea-42f9-a2d5-15d28551c33b
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:40:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:48.133 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b', 'env', 'PROCESS_TAG=haproxy-f3acd4b5-ddea-42f9-a2d5-15d28551c33b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3acd4b5-ddea-42f9-a2d5-15d28551c33b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:40:48 np0005603609 nova_compute[221550]: 2026-01-31 07:40:48.134 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:48 np0005603609 podman[238290]: 2026-01-31 07:40:48.460724091 +0000 UTC m=+0.045194526 container create 8cda2664a615a63e48fb3f2eaa5299962e100f49e28da8d4b470ee59cbde8d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 02:40:48 np0005603609 systemd[1]: Started libpod-conmon-8cda2664a615a63e48fb3f2eaa5299962e100f49e28da8d4b470ee59cbde8d4f.scope.
Jan 31 02:40:48 np0005603609 nova_compute[221550]: 2026-01-31 07:40:48.496 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845248.4954896, c6bb990f-2315-4d5d-8563-c6d7d6206a58 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:40:48 np0005603609 nova_compute[221550]: 2026-01-31 07:40:48.497 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] VM Started (Lifecycle Event)#033[00m
Jan 31 02:40:48 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:40:48 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5da6fb7b494758a0c4e5d7193d35f008127bb61fcec17daf4bd49b8585284c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:48 np0005603609 podman[238290]: 2026-01-31 07:40:48.436321895 +0000 UTC m=+0.020792360 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:40:48 np0005603609 podman[238290]: 2026-01-31 07:40:48.534851475 +0000 UTC m=+0.119321950 container init 8cda2664a615a63e48fb3f2eaa5299962e100f49e28da8d4b470ee59cbde8d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 02:40:48 np0005603609 podman[238290]: 2026-01-31 07:40:48.540115354 +0000 UTC m=+0.124585839 container start 8cda2664a615a63e48fb3f2eaa5299962e100f49e28da8d4b470ee59cbde8d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 02:40:48 np0005603609 neutron-haproxy-ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b[238311]: [NOTICE]   (238315) : New worker (238317) forked
Jan 31 02:40:48 np0005603609 neutron-haproxy-ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b[238311]: [NOTICE]   (238315) : Loading success.
Jan 31 02:40:48 np0005603609 nova_compute[221550]: 2026-01-31 07:40:48.580 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:40:48 np0005603609 nova_compute[221550]: 2026-01-31 07:40:48.586 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845248.495685, c6bb990f-2315-4d5d-8563-c6d7d6206a58 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:40:48 np0005603609 nova_compute[221550]: 2026-01-31 07:40:48.586 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:40:48 np0005603609 nova_compute[221550]: 2026-01-31 07:40:48.607 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:40:48 np0005603609 nova_compute[221550]: 2026-01-31 07:40:48.612 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:40:48 np0005603609 nova_compute[221550]: 2026-01-31 07:40:48.641 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:40:48 np0005603609 ceph-mon[81667]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 31 02:40:48 np0005603609 ceph-mon[81667]: paxos.2).electionLogic(18) init, last seen epoch 18
Jan 31 02:40:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:40:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:40:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:40:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:40:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:40:49 np0005603609 ceph-mon[81667]: mon.compute-2 calling monitor election
Jan 31 02:40:49 np0005603609 ceph-mon[81667]: mon.compute-0 calling monitor election
Jan 31 02:40:49 np0005603609 ceph-mon[81667]: mon.compute-2 calling monitor election
Jan 31 02:40:49 np0005603609 ceph-mon[81667]: mon.compute-1 calling monitor election
Jan 31 02:40:49 np0005603609 ceph-mon[81667]: mon.compute-0 calling monitor election
Jan 31 02:40:49 np0005603609 ceph-mon[81667]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 02:40:49 np0005603609 ceph-mon[81667]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Jan 31 02:40:49 np0005603609 ceph-mon[81667]: Cluster is now healthy
Jan 31 02:40:49 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 02:40:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:40:49.886 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:40:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:40:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:50.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:40:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:50.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.725 221554 DEBUG nova.compute.manager [req-431ac1db-8952-4659-8f45-f4058e61877f req-c6388c5c-b122-46de-9214-f318d689956e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Received event network-vif-plugged-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.726 221554 DEBUG oslo_concurrency.lockutils [req-431ac1db-8952-4659-8f45-f4058e61877f req-c6388c5c-b122-46de-9214-f318d689956e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.726 221554 DEBUG oslo_concurrency.lockutils [req-431ac1db-8952-4659-8f45-f4058e61877f req-c6388c5c-b122-46de-9214-f318d689956e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.726 221554 DEBUG oslo_concurrency.lockutils [req-431ac1db-8952-4659-8f45-f4058e61877f req-c6388c5c-b122-46de-9214-f318d689956e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.726 221554 DEBUG nova.compute.manager [req-431ac1db-8952-4659-8f45-f4058e61877f req-c6388c5c-b122-46de-9214-f318d689956e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Processing event network-vif-plugged-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.727 221554 DEBUG nova.compute.manager [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.730 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845250.7303584, c6bb990f-2315-4d5d-8563-c6d7d6206a58 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.730 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.732 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.734 221554 INFO nova.virt.libvirt.driver [-] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Instance spawned successfully.#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.735 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.785 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.789 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.796 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.796 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.796 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.797 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.797 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.798 221554 DEBUG nova.virt.libvirt.driver [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.911 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.993 221554 INFO nova.compute.manager [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Took 24.38 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:40:50 np0005603609 nova_compute[221550]: 2026-01-31 07:40:50.994 221554 DEBUG nova.compute.manager [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:40:51 np0005603609 nova_compute[221550]: 2026-01-31 07:40:51.087 221554 INFO nova.compute.manager [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Took 26.03 seconds to build instance.#033[00m
Jan 31 02:40:51 np0005603609 nova_compute[221550]: 2026-01-31 07:40:51.191 221554 DEBUG oslo_concurrency.lockutils [None req-043bc2ff-f4b5-49e0-b7c0-a0f52db39420 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 26.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:40:51 np0005603609 nova_compute[221550]: 2026-01-31 07:40:51.617 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:51 np0005603609 nova_compute[221550]: 2026-01-31 07:40:51.639 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:51 np0005603609 nova_compute[221550]: 2026-01-31 07:40:51.676 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:40:51 np0005603609 nova_compute[221550]: 2026-01-31 07:40:51.677 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:40:51 np0005603609 nova_compute[221550]: 2026-01-31 07:40:51.677 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 02:40:51 np0005603609 nova_compute[221550]: 2026-01-31 07:40:51.893 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 02:40:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:52.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:52.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:52 np0005603609 nova_compute[221550]: 2026-01-31 07:40:52.876 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:40:52 np0005603609 nova_compute[221550]: 2026-01-31 07:40:52.890 221554 DEBUG nova.compute.manager [req-14e00c85-2bdb-4a69-98f2-5c0e5b690282 req-56a9a4a4-739f-4825-87ca-bff776afc409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Received event network-vif-plugged-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:40:52 np0005603609 nova_compute[221550]: 2026-01-31 07:40:52.890 221554 DEBUG oslo_concurrency.lockutils [req-14e00c85-2bdb-4a69-98f2-5c0e5b690282 req-56a9a4a4-739f-4825-87ca-bff776afc409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:40:52 np0005603609 nova_compute[221550]: 2026-01-31 07:40:52.890 221554 DEBUG oslo_concurrency.lockutils [req-14e00c85-2bdb-4a69-98f2-5c0e5b690282 req-56a9a4a4-739f-4825-87ca-bff776afc409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:40:52 np0005603609 nova_compute[221550]: 2026-01-31 07:40:52.890 221554 DEBUG oslo_concurrency.lockutils [req-14e00c85-2bdb-4a69-98f2-5c0e5b690282 req-56a9a4a4-739f-4825-87ca-bff776afc409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:52 np0005603609 nova_compute[221550]: 2026-01-31 07:40:52.890 221554 DEBUG nova.compute.manager [req-14e00c85-2bdb-4a69-98f2-5c0e5b690282 req-56a9a4a4-739f-4825-87ca-bff776afc409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] No waiting events found dispatching network-vif-plugged-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:40:52 np0005603609 nova_compute[221550]: 2026-01-31 07:40:52.890 221554 WARNING nova.compute.manager [req-14e00c85-2bdb-4a69-98f2-5c0e5b690282 req-56a9a4a4-739f-4825-87ca-bff776afc409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Received unexpected event network-vif-plugged-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:40:53 np0005603609 nova_compute[221550]: 2026-01-31 07:40:53.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:40:53 np0005603609 nova_compute[221550]: 2026-01-31 07:40:53.695 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:40:53 np0005603609 nova_compute[221550]: 2026-01-31 07:40:53.696 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:40:53 np0005603609 nova_compute[221550]: 2026-01-31 07:40:53.696 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:53 np0005603609 nova_compute[221550]: 2026-01-31 07:40:53.697 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:40:53 np0005603609 nova_compute[221550]: 2026-01-31 07:40:53.697 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:40:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:54.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:54.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:40:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1081832384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:40:54 np0005603609 nova_compute[221550]: 2026-01-31 07:40:54.100 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:40:54 np0005603609 nova_compute[221550]: 2026-01-31 07:40:54.218 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:40:54 np0005603609 nova_compute[221550]: 2026-01-31 07:40:54.218 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:40:54 np0005603609 nova_compute[221550]: 2026-01-31 07:40:54.358 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:40:54 np0005603609 nova_compute[221550]: 2026-01-31 07:40:54.359 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4617MB free_disk=20.901073455810547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:40:54 np0005603609 nova_compute[221550]: 2026-01-31 07:40:54.359 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:40:54 np0005603609 nova_compute[221550]: 2026-01-31 07:40:54.360 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:40:54 np0005603609 nova_compute[221550]: 2026-01-31 07:40:54.669 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance c6bb990f-2315-4d5d-8563-c6d7d6206a58 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:40:54 np0005603609 nova_compute[221550]: 2026-01-31 07:40:54.669 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:40:54 np0005603609 nova_compute[221550]: 2026-01-31 07:40:54.669 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:40:54 np0005603609 nova_compute[221550]: 2026-01-31 07:40:54.869 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:40:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:40:55 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3409420697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:40:55 np0005603609 nova_compute[221550]: 2026-01-31 07:40:55.292 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:40:55 np0005603609 nova_compute[221550]: 2026-01-31 07:40:55.299 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:40:55 np0005603609 nova_compute[221550]: 2026-01-31 07:40:55.392 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:40:55 np0005603609 nova_compute[221550]: 2026-01-31 07:40:55.475 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:40:55 np0005603609 nova_compute[221550]: 2026-01-31 07:40:55.476 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:40:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:56.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:56.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:56 np0005603609 nova_compute[221550]: 2026-01-31 07:40:56.308 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:56 np0005603609 NetworkManager[49064]: <info>  [1769845256.3115] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/67)
Jan 31 02:40:56 np0005603609 NetworkManager[49064]: <info>  [1769845256.3119] device (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:40:56 np0005603609 NetworkManager[49064]: <warn>  [1769845256.3120] device (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 02:40:56 np0005603609 NetworkManager[49064]: <info>  [1769845256.3125] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/68)
Jan 31 02:40:56 np0005603609 NetworkManager[49064]: <info>  [1769845256.3127] device (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:40:56 np0005603609 NetworkManager[49064]: <warn>  [1769845256.3128] device (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 02:40:56 np0005603609 NetworkManager[49064]: <info>  [1769845256.3134] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Jan 31 02:40:56 np0005603609 NetworkManager[49064]: <info>  [1769845256.3139] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Jan 31 02:40:56 np0005603609 NetworkManager[49064]: <info>  [1769845256.3142] device (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 02:40:56 np0005603609 NetworkManager[49064]: <info>  [1769845256.3144] device (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 02:40:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:40:56 np0005603609 nova_compute[221550]: 2026-01-31 07:40:56.367 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:56 np0005603609 ovn_controller[130359]: 2026-01-31T07:40:56Z|00140|binding|INFO|Releasing lport 5163c10b-b9fa-4c37-95bf-ea62d26f2f26 from this chassis (sb_readonly=0)
Jan 31 02:40:56 np0005603609 nova_compute[221550]: 2026-01-31 07:40:56.386 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:56 np0005603609 nova_compute[221550]: 2026-01-31 07:40:56.618 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:56 np0005603609 nova_compute[221550]: 2026-01-31 07:40:56.641 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:40:56 np0005603609 nova_compute[221550]: 2026-01-31 07:40:56.747 221554 DEBUG nova.compute.manager [req-b8f2419a-c1cb-421c-bd3a-1ba752a70c92 req-67ac6c05-b306-4ce2-8128-1e78827c731d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Received event network-changed-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:40:56 np0005603609 nova_compute[221550]: 2026-01-31 07:40:56.747 221554 DEBUG nova.compute.manager [req-b8f2419a-c1cb-421c-bd3a-1ba752a70c92 req-67ac6c05-b306-4ce2-8128-1e78827c731d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Refreshing instance network info cache due to event network-changed-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:40:56 np0005603609 nova_compute[221550]: 2026-01-31 07:40:56.748 221554 DEBUG oslo_concurrency.lockutils [req-b8f2419a-c1cb-421c-bd3a-1ba752a70c92 req-67ac6c05-b306-4ce2-8128-1e78827c731d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c6bb990f-2315-4d5d-8563-c6d7d6206a58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:40:56 np0005603609 nova_compute[221550]: 2026-01-31 07:40:56.748 221554 DEBUG oslo_concurrency.lockutils [req-b8f2419a-c1cb-421c-bd3a-1ba752a70c92 req-67ac6c05-b306-4ce2-8128-1e78827c731d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c6bb990f-2315-4d5d-8563-c6d7d6206a58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:40:56 np0005603609 nova_compute[221550]: 2026-01-31 07:40:56.748 221554 DEBUG nova.network.neutron [req-b8f2419a-c1cb-421c-bd3a-1ba752a70c92 req-67ac6c05-b306-4ce2-8128-1e78827c731d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Refreshing network info cache for port d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:40:57 np0005603609 nova_compute[221550]: 2026-01-31 07:40:57.477 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:40:57 np0005603609 nova_compute[221550]: 2026-01-31 07:40:57.478 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:40:57 np0005603609 nova_compute[221550]: 2026-01-31 07:40:57.479 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:40:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:58.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:40:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:58.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:58 np0005603609 nova_compute[221550]: 2026-01-31 07:40:58.605 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-c6bb990f-2315-4d5d-8563-c6d7d6206a58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:40:59 np0005603609 nova_compute[221550]: 2026-01-31 07:40:59.147 221554 DEBUG nova.network.neutron [req-b8f2419a-c1cb-421c-bd3a-1ba752a70c92 req-67ac6c05-b306-4ce2-8128-1e78827c731d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Updated VIF entry in instance network info cache for port d4c9242f-69fc-4fca-8c7c-d7ef12922ec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:40:59 np0005603609 nova_compute[221550]: 2026-01-31 07:40:59.148 221554 DEBUG nova.network.neutron [req-b8f2419a-c1cb-421c-bd3a-1ba752a70c92 req-67ac6c05-b306-4ce2-8128-1e78827c731d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Updating instance_info_cache with network_info: [{"id": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "address": "fa:16:3e:db:18:51", "network": {"id": "f3acd4b5-ddea-42f9-a2d5-15d28551c33b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1847664389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4948c5b7b50412b95fb3ec320267d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4c9242f-69", "ovs_interfaceid": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:40:59 np0005603609 nova_compute[221550]: 2026-01-31 07:40:59.192 221554 DEBUG oslo_concurrency.lockutils [req-b8f2419a-c1cb-421c-bd3a-1ba752a70c92 req-67ac6c05-b306-4ce2-8128-1e78827c731d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c6bb990f-2315-4d5d-8563-c6d7d6206a58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:40:59 np0005603609 nova_compute[221550]: 2026-01-31 07:40:59.193 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-c6bb990f-2315-4d5d-8563-c6d7d6206a58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:40:59 np0005603609 nova_compute[221550]: 2026-01-31 07:40:59.194 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:40:59 np0005603609 nova_compute[221550]: 2026-01-31 07:40:59.194 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c6bb990f-2315-4d5d-8563-c6d7d6206a58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:40:59 np0005603609 nova_compute[221550]: 2026-01-31 07:40:59.417 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:41:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:00.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:41:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:00.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:01 np0005603609 nova_compute[221550]: 2026-01-31 07:41:01.021 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Updating instance_info_cache with network_info: [{"id": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "address": "fa:16:3e:db:18:51", "network": {"id": "f3acd4b5-ddea-42f9-a2d5-15d28551c33b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1847664389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4948c5b7b50412b95fb3ec320267d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4c9242f-69", "ovs_interfaceid": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:41:01 np0005603609 nova_compute[221550]: 2026-01-31 07:41:01.040 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-c6bb990f-2315-4d5d-8563-c6d7d6206a58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:41:01 np0005603609 nova_compute[221550]: 2026-01-31 07:41:01.041 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:41:01 np0005603609 nova_compute[221550]: 2026-01-31 07:41:01.042 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:01 np0005603609 nova_compute[221550]: 2026-01-31 07:41:01.042 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:01 np0005603609 nova_compute[221550]: 2026-01-31 07:41:01.042 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:01 np0005603609 nova_compute[221550]: 2026-01-31 07:41:01.043 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:01 np0005603609 nova_compute[221550]: 2026-01-31 07:41:01.043 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:41:01 np0005603609 nova_compute[221550]: 2026-01-31 07:41:01.222 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:01 np0005603609 nova_compute[221550]: 2026-01-31 07:41:01.621 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:01 np0005603609 nova_compute[221550]: 2026-01-31 07:41:01.643 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:02.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 02:41:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:02 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:02.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e188 e188: 3 total, 3 up, 3 in
Jan 31 02:41:03 np0005603609 ovn_controller[130359]: 2026-01-31T07:41:03Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:db:18:51 10.100.0.11
Jan 31 02:41:03 np0005603609 ovn_controller[130359]: 2026-01-31T07:41:03Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:db:18:51 10.100.0.11
Jan 31 02:41:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:04.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:04.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:05 np0005603609 nova_compute[221550]: 2026-01-31 07:41:05.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:06.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:41:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:06.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:41:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:06 np0005603609 nova_compute[221550]: 2026-01-31 07:41:06.623 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:06 np0005603609 nova_compute[221550]: 2026-01-31 07:41:06.644 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:07.474 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:41:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:07.475 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:41:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:07.475 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:41:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:41:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:08.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:41:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:08.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e189 e189: 3 total, 3 up, 3 in
Jan 31 02:41:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:10.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 02:41:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:41:10 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:10.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:41:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e190 e190: 3 total, 3 up, 3 in
Jan 31 02:41:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:11 np0005603609 nova_compute[221550]: 2026-01-31 07:41:11.626 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:11 np0005603609 nova_compute[221550]: 2026-01-31 07:41:11.646 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:12.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:41:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:12.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:41:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e191 e191: 3 total, 3 up, 3 in
Jan 31 02:41:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:14.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:14.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.545 221554 DEBUG oslo_concurrency.lockutils [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Acquiring lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.546 221554 DEBUG oslo_concurrency.lockutils [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.546 221554 DEBUG oslo_concurrency.lockutils [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Acquiring lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.546 221554 DEBUG oslo_concurrency.lockutils [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.547 221554 DEBUG oslo_concurrency.lockutils [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.548 221554 INFO nova.compute.manager [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Terminating instance#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.549 221554 DEBUG nova.compute.manager [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:41:14 np0005603609 kernel: tapd4c9242f-69 (unregistering): left promiscuous mode
Jan 31 02:41:14 np0005603609 NetworkManager[49064]: <info>  [1769845274.6910] device (tapd4c9242f-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:41:14 np0005603609 ovn_controller[130359]: 2026-01-31T07:41:14Z|00141|binding|INFO|Releasing lport d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 from this chassis (sb_readonly=0)
Jan 31 02:41:14 np0005603609 ovn_controller[130359]: 2026-01-31T07:41:14Z|00142|binding|INFO|Setting lport d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 down in Southbound
Jan 31 02:41:14 np0005603609 ovn_controller[130359]: 2026-01-31T07:41:14Z|00143|binding|INFO|Removing iface tapd4c9242f-69 ovn-installed in OVS
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.700 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.703 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.712 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:14.732 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:18:51 10.100.0.11'], port_security=['fa:16:3e:db:18:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c6bb990f-2315-4d5d-8563-c6d7d6206a58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3acd4b5-ddea-42f9-a2d5-15d28551c33b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4948c5b7b50412b95fb3ec320267d3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '456943a1-8247-4866-a57e-0b181e0e8a34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.204'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb871ec8-0267-4b0e-9bb9-1a86e6634b6e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=d4c9242f-69fc-4fca-8c7c-d7ef12922ec0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:41:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:14.735 140058 INFO neutron.agent.ovn.metadata.agent [-] Port d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 in datapath f3acd4b5-ddea-42f9-a2d5-15d28551c33b unbound from our chassis#033[00m
Jan 31 02:41:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:14.738 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3acd4b5-ddea-42f9-a2d5-15d28551c33b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:41:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:14.739 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[561ba7cb-1993-430f-9411-3d5f17c5232c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:41:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:14.740 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b namespace which is not needed anymore#033[00m
Jan 31 02:41:14 np0005603609 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Jan 31 02:41:14 np0005603609 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000002b.scope: Consumed 13.051s CPU time.
Jan 31 02:41:14 np0005603609 systemd-machined[190912]: Machine qemu-22-instance-0000002b terminated.
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.774 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.781 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.794 221554 INFO nova.virt.libvirt.driver [-] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Instance destroyed successfully.#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.794 221554 DEBUG nova.objects.instance [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lazy-loading 'resources' on Instance uuid c6bb990f-2315-4d5d-8563-c6d7d6206a58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.827 221554 DEBUG nova.virt.libvirt.vif [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:40:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1381501663',display_name='tempest-ServersTestManualDisk-server-1381501663',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1381501663',id=43,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGQlmHmTb3TsM6PyWm6pm5Ps30jDZrow7hg/bOixFhVWU2EkDQ4b1sbnGnRKWfEFdvU53m3ac1kwNTgJ9e8lXFEgqKAKA5DeNKjfgWn2s95AKODiUeTuLh5Av3iAXzVCjw==',key_name='tempest-keypair-1993286664',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:40:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4948c5b7b50412b95fb3ec320267d3a',ramdisk_id='',reservation_id='r-98eub7k1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-2020334594',owner_user_name='tempest-ServersTestManualDisk-2020334594-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:40:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3df48259a0de49fab877d91292b8bbb2',uuid=c6bb990f-2315-4d5d-8563-c6d7d6206a58,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "address": "fa:16:3e:db:18:51", "network": {"id": "f3acd4b5-ddea-42f9-a2d5-15d28551c33b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1847664389-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4948c5b7b50412b95fb3ec320267d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4c9242f-69", "ovs_interfaceid": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.829 221554 DEBUG nova.network.os_vif_util [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Converting VIF {"id": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "address": "fa:16:3e:db:18:51", "network": {"id": "f3acd4b5-ddea-42f9-a2d5-15d28551c33b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1847664389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4948c5b7b50412b95fb3ec320267d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4c9242f-69", "ovs_interfaceid": "d4c9242f-69fc-4fca-8c7c-d7ef12922ec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.830 221554 DEBUG nova.network.os_vif_util [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:db:18:51,bridge_name='br-int',has_traffic_filtering=True,id=d4c9242f-69fc-4fca-8c7c-d7ef12922ec0,network=Network(f3acd4b5-ddea-42f9-a2d5-15d28551c33b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4c9242f-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.831 221554 DEBUG os_vif [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:18:51,bridge_name='br-int',has_traffic_filtering=True,id=d4c9242f-69fc-4fca-8c7c-d7ef12922ec0,network=Network(f3acd4b5-ddea-42f9-a2d5-15d28551c33b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4c9242f-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.832 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.833 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4c9242f-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.834 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.837 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:14 np0005603609 nova_compute[221550]: 2026-01-31 07:41:14.839 221554 INFO os_vif [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:18:51,bridge_name='br-int',has_traffic_filtering=True,id=d4c9242f-69fc-4fca-8c7c-d7ef12922ec0,network=Network(f3acd4b5-ddea-42f9-a2d5-15d28551c33b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4c9242f-69')#033[00m
Jan 31 02:41:14 np0005603609 neutron-haproxy-ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b[238311]: [NOTICE]   (238315) : haproxy version is 2.8.14-c23fe91
Jan 31 02:41:14 np0005603609 neutron-haproxy-ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b[238311]: [NOTICE]   (238315) : path to executable is /usr/sbin/haproxy
Jan 31 02:41:14 np0005603609 neutron-haproxy-ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b[238311]: [WARNING]  (238315) : Exiting Master process...
Jan 31 02:41:14 np0005603609 neutron-haproxy-ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b[238311]: [ALERT]    (238315) : Current worker (238317) exited with code 143 (Terminated)
Jan 31 02:41:14 np0005603609 neutron-haproxy-ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b[238311]: [WARNING]  (238315) : All workers exited. Exiting... (0)
Jan 31 02:41:14 np0005603609 systemd[1]: libpod-8cda2664a615a63e48fb3f2eaa5299962e100f49e28da8d4b470ee59cbde8d4f.scope: Deactivated successfully.
Jan 31 02:41:14 np0005603609 podman[238409]: 2026-01-31 07:41:14.907901371 +0000 UTC m=+0.077848295 container died 8cda2664a615a63e48fb3f2eaa5299962e100f49e28da8d4b470ee59cbde8d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 02:41:14 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8cda2664a615a63e48fb3f2eaa5299962e100f49e28da8d4b470ee59cbde8d4f-userdata-shm.mount: Deactivated successfully.
Jan 31 02:41:14 np0005603609 systemd[1]: var-lib-containers-storage-overlay-e5da6fb7b494758a0c4e5d7193d35f008127bb61fcec17daf4bd49b8585284c6-merged.mount: Deactivated successfully.
Jan 31 02:41:15 np0005603609 podman[238409]: 2026-01-31 07:41:15.130135829 +0000 UTC m=+0.300082753 container cleanup 8cda2664a615a63e48fb3f2eaa5299962e100f49e28da8d4b470ee59cbde8d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:41:15 np0005603609 systemd[1]: libpod-conmon-8cda2664a615a63e48fb3f2eaa5299962e100f49e28da8d4b470ee59cbde8d4f.scope: Deactivated successfully.
Jan 31 02:41:15 np0005603609 nova_compute[221550]: 2026-01-31 07:41:15.199 221554 DEBUG nova.compute.manager [req-87e2e3f8-8bd9-4b58-82b4-53912b3ec966 req-1b35f935-b005-44e8-8d0e-12581062b817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Received event network-vif-unplugged-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:41:15 np0005603609 nova_compute[221550]: 2026-01-31 07:41:15.200 221554 DEBUG oslo_concurrency.lockutils [req-87e2e3f8-8bd9-4b58-82b4-53912b3ec966 req-1b35f935-b005-44e8-8d0e-12581062b817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:41:15 np0005603609 nova_compute[221550]: 2026-01-31 07:41:15.201 221554 DEBUG oslo_concurrency.lockutils [req-87e2e3f8-8bd9-4b58-82b4-53912b3ec966 req-1b35f935-b005-44e8-8d0e-12581062b817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:41:15 np0005603609 nova_compute[221550]: 2026-01-31 07:41:15.201 221554 DEBUG oslo_concurrency.lockutils [req-87e2e3f8-8bd9-4b58-82b4-53912b3ec966 req-1b35f935-b005-44e8-8d0e-12581062b817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:41:15 np0005603609 nova_compute[221550]: 2026-01-31 07:41:15.201 221554 DEBUG nova.compute.manager [req-87e2e3f8-8bd9-4b58-82b4-53912b3ec966 req-1b35f935-b005-44e8-8d0e-12581062b817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] No waiting events found dispatching network-vif-unplugged-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:41:15 np0005603609 nova_compute[221550]: 2026-01-31 07:41:15.201 221554 DEBUG nova.compute.manager [req-87e2e3f8-8bd9-4b58-82b4-53912b3ec966 req-1b35f935-b005-44e8-8d0e-12581062b817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Received event network-vif-unplugged-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:41:15 np0005603609 podman[238456]: 2026-01-31 07:41:15.339924542 +0000 UTC m=+0.186894054 container remove 8cda2664a615a63e48fb3f2eaa5299962e100f49e28da8d4b470ee59cbde8d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 02:41:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:15.344 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[832ebf0d-1bb7-4ae7-b1d8-ab74399c0303]: (4, ('Sat Jan 31 07:41:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b (8cda2664a615a63e48fb3f2eaa5299962e100f49e28da8d4b470ee59cbde8d4f)\n8cda2664a615a63e48fb3f2eaa5299962e100f49e28da8d4b470ee59cbde8d4f\nSat Jan 31 07:41:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b (8cda2664a615a63e48fb3f2eaa5299962e100f49e28da8d4b470ee59cbde8d4f)\n8cda2664a615a63e48fb3f2eaa5299962e100f49e28da8d4b470ee59cbde8d4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:41:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:15.346 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[896e506a-1644-4118-80ce-531c021fd4bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:41:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:15.347 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3acd4b5-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:41:15 np0005603609 kernel: tapf3acd4b5-d0: left promiscuous mode
Jan 31 02:41:15 np0005603609 nova_compute[221550]: 2026-01-31 07:41:15.398 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:15 np0005603609 nova_compute[221550]: 2026-01-31 07:41:15.403 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:15.405 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[af010252-4250-4015-9049-32c94fb18e83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:41:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:15.423 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3bb8b0-8993-4d4e-8f74-dec5c471dd09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:41:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:15.424 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f649a264-3f4c-4f56-992c-711dceb1a92f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:41:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:15.436 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[193555d5-f3df-40f3-84f6-c6b51c61e92e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548767, 'reachable_time': 29858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238472, 'error': None, 'target': 'ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:41:15 np0005603609 systemd[1]: run-netns-ovnmeta\x2df3acd4b5\x2dddea\x2d42f9\x2da2d5\x2d15d28551c33b.mount: Deactivated successfully.
Jan 31 02:41:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:15.440 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3acd4b5-ddea-42f9-a2d5-15d28551c33b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:41:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:15.440 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[ef910e06-d766-4ea9-8b60-7f9f6f0e6d99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:41:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:16.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:16.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:16 np0005603609 podman[238476]: 2026-01-31 07:41:16.183943362 +0000 UTC m=+0.059435346 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:41:16 np0005603609 podman[238474]: 2026-01-31 07:41:16.216556149 +0000 UTC m=+0.094063842 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 31 02:41:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:16 np0005603609 nova_compute[221550]: 2026-01-31 07:41:16.627 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:16 np0005603609 nova_compute[221550]: 2026-01-31 07:41:16.779 221554 INFO nova.virt.libvirt.driver [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Deleting instance files /var/lib/nova/instances/c6bb990f-2315-4d5d-8563-c6d7d6206a58_del#033[00m
Jan 31 02:41:16 np0005603609 nova_compute[221550]: 2026-01-31 07:41:16.780 221554 INFO nova.virt.libvirt.driver [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Deletion of /var/lib/nova/instances/c6bb990f-2315-4d5d-8563-c6d7d6206a58_del complete#033[00m
Jan 31 02:41:17 np0005603609 nova_compute[221550]: 2026-01-31 07:41:17.065 221554 INFO nova.compute.manager [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Took 2.52 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:41:17 np0005603609 nova_compute[221550]: 2026-01-31 07:41:17.066 221554 DEBUG oslo.service.loopingcall [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:41:17 np0005603609 nova_compute[221550]: 2026-01-31 07:41:17.067 221554 DEBUG nova.compute.manager [-] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:41:17 np0005603609 nova_compute[221550]: 2026-01-31 07:41:17.067 221554 DEBUG nova.network.neutron [-] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:41:17 np0005603609 nova_compute[221550]: 2026-01-31 07:41:17.329 221554 DEBUG nova.compute.manager [req-c9e1e9fe-2fb5-4258-8f04-83e04f430259 req-07f1d1bf-58a3-49d7-85ed-ffbe46e6231a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Received event network-vif-plugged-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:41:17 np0005603609 nova_compute[221550]: 2026-01-31 07:41:17.329 221554 DEBUG oslo_concurrency.lockutils [req-c9e1e9fe-2fb5-4258-8f04-83e04f430259 req-07f1d1bf-58a3-49d7-85ed-ffbe46e6231a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:41:17 np0005603609 nova_compute[221550]: 2026-01-31 07:41:17.330 221554 DEBUG oslo_concurrency.lockutils [req-c9e1e9fe-2fb5-4258-8f04-83e04f430259 req-07f1d1bf-58a3-49d7-85ed-ffbe46e6231a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:41:17 np0005603609 nova_compute[221550]: 2026-01-31 07:41:17.330 221554 DEBUG oslo_concurrency.lockutils [req-c9e1e9fe-2fb5-4258-8f04-83e04f430259 req-07f1d1bf-58a3-49d7-85ed-ffbe46e6231a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:41:17 np0005603609 nova_compute[221550]: 2026-01-31 07:41:17.330 221554 DEBUG nova.compute.manager [req-c9e1e9fe-2fb5-4258-8f04-83e04f430259 req-07f1d1bf-58a3-49d7-85ed-ffbe46e6231a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] No waiting events found dispatching network-vif-plugged-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:41:17 np0005603609 nova_compute[221550]: 2026-01-31 07:41:17.331 221554 WARNING nova.compute.manager [req-c9e1e9fe-2fb5-4258-8f04-83e04f430259 req-07f1d1bf-58a3-49d7-85ed-ffbe46e6231a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Received unexpected event network-vif-plugged-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:41:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:18.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:41:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:18.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:41:18 np0005603609 nova_compute[221550]: 2026-01-31 07:41:18.211 221554 DEBUG nova.compute.manager [req-eba72a5f-3e80-44dc-8eab-34282e05a7aa req-2edc438b-f42f-4852-adf5-a124319e34aa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Received event network-vif-deleted-d4c9242f-69fc-4fca-8c7c-d7ef12922ec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:41:18 np0005603609 nova_compute[221550]: 2026-01-31 07:41:18.211 221554 INFO nova.compute.manager [req-eba72a5f-3e80-44dc-8eab-34282e05a7aa req-2edc438b-f42f-4852-adf5-a124319e34aa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Neutron deleted interface d4c9242f-69fc-4fca-8c7c-d7ef12922ec0; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 02:41:18 np0005603609 nova_compute[221550]: 2026-01-31 07:41:18.212 221554 DEBUG nova.network.neutron [req-eba72a5f-3e80-44dc-8eab-34282e05a7aa req-2edc438b-f42f-4852-adf5-a124319e34aa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:41:18 np0005603609 nova_compute[221550]: 2026-01-31 07:41:18.236 221554 DEBUG nova.network.neutron [-] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:41:18 np0005603609 nova_compute[221550]: 2026-01-31 07:41:18.382 221554 DEBUG nova.compute.manager [req-eba72a5f-3e80-44dc-8eab-34282e05a7aa req-2edc438b-f42f-4852-adf5-a124319e34aa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Detach interface failed, port_id=d4c9242f-69fc-4fca-8c7c-d7ef12922ec0, reason: Instance c6bb990f-2315-4d5d-8563-c6d7d6206a58 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 02:41:18 np0005603609 nova_compute[221550]: 2026-01-31 07:41:18.390 221554 INFO nova.compute.manager [-] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Took 1.32 seconds to deallocate network for instance.#033[00m
Jan 31 02:41:18 np0005603609 nova_compute[221550]: 2026-01-31 07:41:18.572 221554 DEBUG oslo_concurrency.lockutils [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:41:18 np0005603609 nova_compute[221550]: 2026-01-31 07:41:18.573 221554 DEBUG oslo_concurrency.lockutils [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:41:18 np0005603609 nova_compute[221550]: 2026-01-31 07:41:18.606 221554 DEBUG nova.scheduler.client.report [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 02:41:18 np0005603609 nova_compute[221550]: 2026-01-31 07:41:18.631 221554 DEBUG nova.scheduler.client.report [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 02:41:18 np0005603609 nova_compute[221550]: 2026-01-31 07:41:18.632 221554 DEBUG nova.compute.provider_tree [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 02:41:18 np0005603609 nova_compute[221550]: 2026-01-31 07:41:18.650 221554 DEBUG nova.scheduler.client.report [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 02:41:18 np0005603609 nova_compute[221550]: 2026-01-31 07:41:18.670 221554 DEBUG nova.scheduler.client.report [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 02:41:18 np0005603609 nova_compute[221550]: 2026-01-31 07:41:18.730 221554 DEBUG oslo_concurrency.processutils [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:41:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:41:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3533032308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:41:19 np0005603609 nova_compute[221550]: 2026-01-31 07:41:19.258 221554 DEBUG oslo_concurrency.processutils [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:41:19 np0005603609 nova_compute[221550]: 2026-01-31 07:41:19.266 221554 DEBUG nova.compute.provider_tree [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:41:19 np0005603609 nova_compute[221550]: 2026-01-31 07:41:19.315 221554 DEBUG nova.scheduler.client.report [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:41:19 np0005603609 nova_compute[221550]: 2026-01-31 07:41:19.444 221554 DEBUG oslo_concurrency.lockutils [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:41:19 np0005603609 nova_compute[221550]: 2026-01-31 07:41:19.532 221554 INFO nova.scheduler.client.report [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Deleted allocations for instance c6bb990f-2315-4d5d-8563-c6d7d6206a58
Jan 31 02:41:19 np0005603609 nova_compute[221550]: 2026-01-31 07:41:19.710 221554 DEBUG oslo_concurrency.lockutils [None req-5fc8d4ac-2e85-4d44-ac63-fe503c3652f3 3df48259a0de49fab877d91292b8bbb2 c4948c5b7b50412b95fb3ec320267d3a - - default default] Lock "c6bb990f-2315-4d5d-8563-c6d7d6206a58" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:41:19 np0005603609 nova_compute[221550]: 2026-01-31 07:41:19.835 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:41:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:20.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:20.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e192 e192: 3 total, 3 up, 3 in
Jan 31 02:41:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:21 np0005603609 nova_compute[221550]: 2026-01-31 07:41:21.630 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:41:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:22.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:22.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:41:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:24.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:41:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:24.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:24 np0005603609 nova_compute[221550]: 2026-01-31 07:41:24.838 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:41:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:26.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:41:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:26.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:41:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:26 np0005603609 nova_compute[221550]: 2026-01-31 07:41:26.675 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:41:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e193 e193: 3 total, 3 up, 3 in
Jan 31 02:41:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:41:27 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/628568958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:41:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:28.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:41:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:28.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:41:28 np0005603609 nova_compute[221550]: 2026-01-31 07:41:28.290 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:41:28 np0005603609 nova_compute[221550]: 2026-01-31 07:41:28.366 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:41:29 np0005603609 nova_compute[221550]: 2026-01-31 07:41:29.792 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845274.7908287, c6bb990f-2315-4d5d-8563-c6d7d6206a58 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:41:29 np0005603609 nova_compute[221550]: 2026-01-31 07:41:29.793 221554 INFO nova.compute.manager [-] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] VM Stopped (Lifecycle Event)
Jan 31 02:41:29 np0005603609 nova_compute[221550]: 2026-01-31 07:41:29.825 221554 DEBUG nova.compute.manager [None req-ad129068-dc89-48e2-bd0b-370c9c347f72 - - - - - -] [instance: c6bb990f-2315-4d5d-8563-c6d7d6206a58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:41:29 np0005603609 nova_compute[221550]: 2026-01-31 07:41:29.840 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:41:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e194 e194: 3 total, 3 up, 3 in
Jan 31 02:41:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:30.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:30.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e195 e195: 3 total, 3 up, 3 in
Jan 31 02:41:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:31 np0005603609 nova_compute[221550]: 2026-01-31 07:41:31.676 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:41:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:32.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:32.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e196 e196: 3 total, 3 up, 3 in
Jan 31 02:41:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:34.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:34.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:34 np0005603609 nova_compute[221550]: 2026-01-31 07:41:34.878 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:41:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:41:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:36.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:41:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:36.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:36 np0005603609 nova_compute[221550]: 2026-01-31 07:41:36.678 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:41:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:38.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:38.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:39 np0005603609 nova_compute[221550]: 2026-01-31 07:41:39.880 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:41:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:40.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:40.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e197 e197: 3 total, 3 up, 3 in
Jan 31 02:41:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:41 np0005603609 nova_compute[221550]: 2026-01-31 07:41:41.680 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:41:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:42.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:42.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:43.238 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 02:41:43 np0005603609 nova_compute[221550]: 2026-01-31 07:41:43.239 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:41:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:43.240 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 02:41:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:41:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:44.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:41:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:44.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:44 np0005603609 nova_compute[221550]: 2026-01-31 07:41:44.912 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:41:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 02:41:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:46.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 02:41:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:41:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:46.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:41:46 np0005603609 podman[238615]: 2026-01-31 07:41:46.277112765 +0000 UTC m=+0.049973404 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:41:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:46 np0005603609 podman[238656]: 2026-01-31 07:41:46.372584531 +0000 UTC m=+0.075558240 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 02:41:46 np0005603609 nova_compute[221550]: 2026-01-31 07:41:46.682 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:41:46 np0005603609 podman[238758]: 2026-01-31 07:41:46.839211368 +0000 UTC m=+0.144068576 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 02:41:46 np0005603609 podman[238758]: 2026-01-31 07:41:46.91945152 +0000 UTC m=+0.224308678 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:41:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:41:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:48.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:41:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:41:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:48.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:41:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:41:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:41:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:41:49.243 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:41:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:41:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:41:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:41:49 np0005603609 nova_compute[221550]: 2026-01-31 07:41:49.914 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:50.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:41:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:50.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:41:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:51 np0005603609 nova_compute[221550]: 2026-01-31 07:41:51.689 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:51 np0005603609 nova_compute[221550]: 2026-01-31 07:41:51.727 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:41:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:52.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:41:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:52.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:53 np0005603609 nova_compute[221550]: 2026-01-31 07:41:53.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:41:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:54.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:41:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:54.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:54 np0005603609 nova_compute[221550]: 2026-01-31 07:41:54.313 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:41:54 np0005603609 nova_compute[221550]: 2026-01-31 07:41:54.313 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:41:54 np0005603609 nova_compute[221550]: 2026-01-31 07:41:54.314 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:41:54 np0005603609 nova_compute[221550]: 2026-01-31 07:41:54.314 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:41:54 np0005603609 nova_compute[221550]: 2026-01-31 07:41:54.315 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:41:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:41:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/294200435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:41:54 np0005603609 nova_compute[221550]: 2026-01-31 07:41:54.733 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:41:54 np0005603609 nova_compute[221550]: 2026-01-31 07:41:54.888 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:41:54 np0005603609 nova_compute[221550]: 2026-01-31 07:41:54.889 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4759MB free_disk=20.897167205810547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:41:54 np0005603609 nova_compute[221550]: 2026-01-31 07:41:54.889 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:41:54 np0005603609 nova_compute[221550]: 2026-01-31 07:41:54.890 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:41:54 np0005603609 nova_compute[221550]: 2026-01-31 07:41:54.916 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:55 np0005603609 nova_compute[221550]: 2026-01-31 07:41:55.443 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:41:55 np0005603609 nova_compute[221550]: 2026-01-31 07:41:55.443 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:41:55 np0005603609 nova_compute[221550]: 2026-01-31 07:41:55.459 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:41:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:41:55 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/570766913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:41:55 np0005603609 nova_compute[221550]: 2026-01-31 07:41:55.927 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:41:55 np0005603609 nova_compute[221550]: 2026-01-31 07:41:55.934 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:41:56 np0005603609 nova_compute[221550]: 2026-01-31 07:41:56.023 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:41:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:56.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:41:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:56.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:41:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e198 e198: 3 total, 3 up, 3 in
Jan 31 02:41:56 np0005603609 nova_compute[221550]: 2026-01-31 07:41:56.729 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:41:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:41:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:41:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:41:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:58.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:41:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:41:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:41:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:58.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:41:59 np0005603609 nova_compute[221550]: 2026-01-31 07:41:59.918 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:00.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:42:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:00.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:42:01 np0005603609 nova_compute[221550]: 2026-01-31 07:42:01.272 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:42:01 np0005603609 nova_compute[221550]: 2026-01-31 07:42:01.272 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:01 np0005603609 nova_compute[221550]: 2026-01-31 07:42:01.765 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:02.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:02.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:02 np0005603609 nova_compute[221550]: 2026-01-31 07:42:02.273 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:02 np0005603609 nova_compute[221550]: 2026-01-31 07:42:02.273 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:02 np0005603609 nova_compute[221550]: 2026-01-31 07:42:02.320 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:02 np0005603609 nova_compute[221550]: 2026-01-31 07:42:02.320 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:42:02 np0005603609 nova_compute[221550]: 2026-01-31 07:42:02.321 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:42:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e199 e199: 3 total, 3 up, 3 in
Jan 31 02:42:02 np0005603609 nova_compute[221550]: 2026-01-31 07:42:02.625 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:42:02 np0005603609 nova_compute[221550]: 2026-01-31 07:42:02.625 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:02 np0005603609 nova_compute[221550]: 2026-01-31 07:42:02.626 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:02 np0005603609 nova_compute[221550]: 2026-01-31 07:42:02.626 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:02 np0005603609 nova_compute[221550]: 2026-01-31 07:42:02.626 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:02 np0005603609 nova_compute[221550]: 2026-01-31 07:42:02.627 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:02 np0005603609 nova_compute[221550]: 2026-01-31 07:42:02.627 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:42:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e200 e200: 3 total, 3 up, 3 in
Jan 31 02:42:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:04.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:04.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:04 np0005603609 nova_compute[221550]: 2026-01-31 07:42:04.920 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:06.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:42:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:06.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:42:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:06 np0005603609 nova_compute[221550]: 2026-01-31 07:42:06.767 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:07.475 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:07.476 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:07.476 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:08.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:08.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:09 np0005603609 nova_compute[221550]: 2026-01-31 07:42:09.922 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:10.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:10.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e201 e201: 3 total, 3 up, 3 in
Jan 31 02:42:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:11 np0005603609 nova_compute[221550]: 2026-01-31 07:42:11.770 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:12.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:42:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:12.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:42:13 np0005603609 nova_compute[221550]: 2026-01-31 07:42:13.005 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Acquiring lock "51d06818-a334-4589-94a4-c988275bc6b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:13 np0005603609 nova_compute[221550]: 2026-01-31 07:42:13.006 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lock "51d06818-a334-4589-94a4-c988275bc6b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:13 np0005603609 nova_compute[221550]: 2026-01-31 07:42:13.142 221554 DEBUG nova.compute.manager [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:42:13 np0005603609 nova_compute[221550]: 2026-01-31 07:42:13.532 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:13 np0005603609 nova_compute[221550]: 2026-01-31 07:42:13.532 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:13 np0005603609 nova_compute[221550]: 2026-01-31 07:42:13.540 221554 DEBUG nova.virt.hardware [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:42:13 np0005603609 nova_compute[221550]: 2026-01-31 07:42:13.540 221554 INFO nova.compute.claims [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:42:14 np0005603609 nova_compute[221550]: 2026-01-31 07:42:14.033 221554 DEBUG oslo_concurrency.processutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:42:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:42:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:14.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:42:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:14.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:42:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1190926015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:42:14 np0005603609 nova_compute[221550]: 2026-01-31 07:42:14.405 221554 DEBUG oslo_concurrency.processutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:42:14 np0005603609 nova_compute[221550]: 2026-01-31 07:42:14.414 221554 DEBUG nova.compute.provider_tree [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:42:14 np0005603609 nova_compute[221550]: 2026-01-31 07:42:14.506 221554 DEBUG nova.scheduler.client.report [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:42:14 np0005603609 nova_compute[221550]: 2026-01-31 07:42:14.924 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:15 np0005603609 nova_compute[221550]: 2026-01-31 07:42:15.045 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:15 np0005603609 nova_compute[221550]: 2026-01-31 07:42:15.046 221554 DEBUG nova.compute.manager [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:42:15 np0005603609 nova_compute[221550]: 2026-01-31 07:42:15.404 221554 DEBUG nova.compute.manager [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:42:15 np0005603609 nova_compute[221550]: 2026-01-31 07:42:15.404 221554 DEBUG nova.network.neutron [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:42:15 np0005603609 nova_compute[221550]: 2026-01-31 07:42:15.701 221554 DEBUG nova.policy [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'de4e42ee10994085b3fe69f3ad010289', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '82af298aee3f4ddc92520d30b5faf699', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:42:15.717710) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845335717770, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2911, "num_deletes": 517, "total_data_size": 5777886, "memory_usage": 5860160, "flush_reason": "Manual Compaction"}
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845335826841, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 3344978, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28168, "largest_seqno": 31074, "table_properties": {"data_size": 3333983, "index_size": 6529, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 28427, "raw_average_key_size": 21, "raw_value_size": 3309124, "raw_average_value_size": 2456, "num_data_blocks": 282, "num_entries": 1347, "num_filter_entries": 1347, "num_deletions": 517, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845140, "oldest_key_time": 1769845140, "file_creation_time": 1769845335, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 109165 microseconds, and 6080 cpu microseconds.
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:42:15.826883) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 3344978 bytes OK
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:42:15.826900) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:42:15.865024) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:42:15.865071) EVENT_LOG_v1 {"time_micros": 1769845335865062, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:42:15.865095) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 5763856, prev total WAL file size 5763856, number of live WAL files 2.
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:42:15.865982) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(3266KB)], [57(10MB)]
Jan 31 02:42:15 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845335866037, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 13954143, "oldest_snapshot_seqno": -1}
Jan 31 02:42:16 np0005603609 nova_compute[221550]: 2026-01-31 07:42:16.030 221554 INFO nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5556 keys, 8660610 bytes, temperature: kUnknown
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845336127463, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 8660610, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8623334, "index_size": 22254, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13957, "raw_key_size": 141955, "raw_average_key_size": 25, "raw_value_size": 8523218, "raw_average_value_size": 1534, "num_data_blocks": 895, "num_entries": 5556, "num_filter_entries": 5556, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769845335, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:42:16.127773) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8660610 bytes
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:42:16.144807) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 53.3 rd, 33.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.1 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(6.8) write-amplify(2.6) OK, records in: 6579, records dropped: 1023 output_compression: NoCompression
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:42:16.144838) EVENT_LOG_v1 {"time_micros": 1769845336144827, "job": 34, "event": "compaction_finished", "compaction_time_micros": 261600, "compaction_time_cpu_micros": 14861, "output_level": 6, "num_output_files": 1, "total_output_size": 8660610, "num_input_records": 6579, "num_output_records": 5556, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845336145666, "job": 34, "event": "table_file_deletion", "file_number": 59}
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845336146723, "job": 34, "event": "table_file_deletion", "file_number": 57}
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:42:15.865822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:42:16.146842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:42:16.146849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:42:16.146852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:42:16.146855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:42:16.146858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:42:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:42:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:16.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:42:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:16.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:16 np0005603609 nova_compute[221550]: 2026-01-31 07:42:16.279 221554 DEBUG nova.compute.manager [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:42:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:16 np0005603609 nova_compute[221550]: 2026-01-31 07:42:16.772 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:17 np0005603609 podman[239131]: 2026-01-31 07:42:17.209883123 +0000 UTC m=+0.079241907 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:42:17 np0005603609 podman[239130]: 2026-01-31 07:42:17.223657703 +0000 UTC m=+0.093780046 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 02:42:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:42:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:18.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:42:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:18.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.218 221554 DEBUG nova.compute.manager [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.219 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.219 221554 INFO nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Creating image(s)#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.246 221554 DEBUG nova.storage.rbd_utils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] rbd image 51d06818-a334-4589-94a4-c988275bc6b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.274 221554 DEBUG nova.storage.rbd_utils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] rbd image 51d06818-a334-4589-94a4-c988275bc6b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.306 221554 DEBUG nova.storage.rbd_utils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] rbd image 51d06818-a334-4589-94a4-c988275bc6b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.310 221554 DEBUG oslo_concurrency.processutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.379 221554 DEBUG oslo_concurrency.processutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.380 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.381 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.382 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.420 221554 DEBUG nova.storage.rbd_utils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] rbd image 51d06818-a334-4589-94a4-c988275bc6b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.426 221554 DEBUG oslo_concurrency.processutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 51d06818-a334-4589-94a4-c988275bc6b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.643 221554 DEBUG oslo_concurrency.lockutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Acquiring lock "49c2b2d1-3230-4f75-bc49-86230accc637" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.644 221554 DEBUG oslo_concurrency.lockutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lock "49c2b2d1-3230-4f75-bc49-86230accc637" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.644 221554 INFO nova.compute.manager [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Unshelving#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.785 221554 DEBUG nova.network.neutron [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Successfully created port: dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:42:18 np0005603609 nova_compute[221550]: 2026-01-31 07:42:18.959 221554 DEBUG oslo_concurrency.processutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 51d06818-a334-4589-94a4-c988275bc6b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:42:19 np0005603609 nova_compute[221550]: 2026-01-31 07:42:19.077 221554 DEBUG nova.storage.rbd_utils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] resizing rbd image 51d06818-a334-4589-94a4-c988275bc6b0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:42:19 np0005603609 nova_compute[221550]: 2026-01-31 07:42:19.139 221554 DEBUG oslo_concurrency.lockutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:19 np0005603609 nova_compute[221550]: 2026-01-31 07:42:19.140 221554 DEBUG oslo_concurrency.lockutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:19 np0005603609 nova_compute[221550]: 2026-01-31 07:42:19.145 221554 DEBUG nova.objects.instance [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lazy-loading 'pci_requests' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:42:19 np0005603609 nova_compute[221550]: 2026-01-31 07:42:19.211 221554 DEBUG nova.objects.instance [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lazy-loading 'migration_context' on Instance uuid 51d06818-a334-4589-94a4-c988275bc6b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:42:19 np0005603609 nova_compute[221550]: 2026-01-31 07:42:19.252 221554 DEBUG nova.objects.instance [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lazy-loading 'numa_topology' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:42:19 np0005603609 nova_compute[221550]: 2026-01-31 07:42:19.278 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:42:19 np0005603609 nova_compute[221550]: 2026-01-31 07:42:19.278 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Ensure instance console log exists: /var/lib/nova/instances/51d06818-a334-4589-94a4-c988275bc6b0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:42:19 np0005603609 nova_compute[221550]: 2026-01-31 07:42:19.279 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:19 np0005603609 nova_compute[221550]: 2026-01-31 07:42:19.279 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:19 np0005603609 nova_compute[221550]: 2026-01-31 07:42:19.280 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:19 np0005603609 nova_compute[221550]: 2026-01-31 07:42:19.376 221554 DEBUG nova.virt.hardware [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:42:19 np0005603609 nova_compute[221550]: 2026-01-31 07:42:19.376 221554 INFO nova.compute.claims [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:42:19 np0005603609 nova_compute[221550]: 2026-01-31 07:42:19.926 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:42:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:20.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:42:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:42:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:20.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:42:20 np0005603609 nova_compute[221550]: 2026-01-31 07:42:20.574 221554 DEBUG oslo_concurrency.processutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:42:20 np0005603609 nova_compute[221550]: 2026-01-31 07:42:20.895 221554 DEBUG nova.network.neutron [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Successfully updated port: dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:42:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:42:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3650789939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:42:20 np0005603609 nova_compute[221550]: 2026-01-31 07:42:20.986 221554 DEBUG oslo_concurrency.processutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:42:20 np0005603609 nova_compute[221550]: 2026-01-31 07:42:20.991 221554 DEBUG nova.compute.provider_tree [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:42:21 np0005603609 nova_compute[221550]: 2026-01-31 07:42:21.052 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Acquiring lock "refresh_cache-51d06818-a334-4589-94a4-c988275bc6b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:42:21 np0005603609 nova_compute[221550]: 2026-01-31 07:42:21.052 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Acquired lock "refresh_cache-51d06818-a334-4589-94a4-c988275bc6b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:42:21 np0005603609 nova_compute[221550]: 2026-01-31 07:42:21.053 221554 DEBUG nova.network.neutron [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:42:21 np0005603609 nova_compute[221550]: 2026-01-31 07:42:21.129 221554 DEBUG nova.scheduler.client.report [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:42:21 np0005603609 nova_compute[221550]: 2026-01-31 07:42:21.235 221554 DEBUG nova.compute.manager [req-5ca778e5-aabe-42f4-be19-972d5b925927 req-cd8c9bac-140a-4aad-b6cc-0cf4a23de4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Received event network-changed-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:42:21 np0005603609 nova_compute[221550]: 2026-01-31 07:42:21.235 221554 DEBUG nova.compute.manager [req-5ca778e5-aabe-42f4-be19-972d5b925927 req-cd8c9bac-140a-4aad-b6cc-0cf4a23de4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Refreshing instance network info cache due to event network-changed-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:42:21 np0005603609 nova_compute[221550]: 2026-01-31 07:42:21.236 221554 DEBUG oslo_concurrency.lockutils [req-5ca778e5-aabe-42f4-be19-972d5b925927 req-cd8c9bac-140a-4aad-b6cc-0cf4a23de4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-51d06818-a334-4589-94a4-c988275bc6b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:42:21 np0005603609 nova_compute[221550]: 2026-01-31 07:42:21.305 221554 DEBUG oslo_concurrency.lockutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:21 np0005603609 nova_compute[221550]: 2026-01-31 07:42:21.356 221554 DEBUG nova.network.neutron [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:42:21 np0005603609 nova_compute[221550]: 2026-01-31 07:42:21.421 221554 DEBUG oslo_concurrency.lockutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Acquiring lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:42:21 np0005603609 nova_compute[221550]: 2026-01-31 07:42:21.422 221554 DEBUG oslo_concurrency.lockutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Acquired lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:42:21 np0005603609 nova_compute[221550]: 2026-01-31 07:42:21.422 221554 DEBUG nova.network.neutron [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:42:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:21 np0005603609 nova_compute[221550]: 2026-01-31 07:42:21.735 221554 DEBUG nova.network.neutron [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:42:21 np0005603609 nova_compute[221550]: 2026-01-31 07:42:21.775 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:42:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:22.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:42:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:22.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.327 221554 DEBUG nova.network.neutron [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.359 221554 DEBUG oslo_concurrency.lockutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Releasing lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.361 221554 DEBUG nova.virt.libvirt.driver [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.362 221554 INFO nova.virt.libvirt.driver [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Creating image(s)#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.394 221554 DEBUG nova.storage.rbd_utils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.397 221554 DEBUG nova.objects.instance [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.399 221554 DEBUG nova.network.neutron [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Updating instance_info_cache with network_info: [{"id": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "address": "fa:16:3e:9c:71:56", "network": {"id": "af5f0689-533c-4122-8c53-f96f6a00279a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-149970366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82af298aee3f4ddc92520d30b5faf699", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8f8a59-ae", "ovs_interfaceid": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.710 221554 DEBUG nova.storage.rbd_utils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.753 221554 DEBUG nova.storage.rbd_utils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.758 221554 DEBUG oslo_concurrency.lockutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Acquiring lock "81ffb7b15aafcfc03cc7cb189e50c383d2cd996a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.760 221554 DEBUG oslo_concurrency.lockutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lock "81ffb7b15aafcfc03cc7cb189e50c383d2cd996a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.767 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Releasing lock "refresh_cache-51d06818-a334-4589-94a4-c988275bc6b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.767 221554 DEBUG nova.compute.manager [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Instance network_info: |[{"id": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "address": "fa:16:3e:9c:71:56", "network": {"id": "af5f0689-533c-4122-8c53-f96f6a00279a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-149970366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82af298aee3f4ddc92520d30b5faf699", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8f8a59-ae", "ovs_interfaceid": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.768 221554 DEBUG oslo_concurrency.lockutils [req-5ca778e5-aabe-42f4-be19-972d5b925927 req-cd8c9bac-140a-4aad-b6cc-0cf4a23de4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-51d06818-a334-4589-94a4-c988275bc6b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.768 221554 DEBUG nova.network.neutron [req-5ca778e5-aabe-42f4-be19-972d5b925927 req-cd8c9bac-140a-4aad-b6cc-0cf4a23de4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Refreshing network info cache for port dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.773 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Start _get_guest_xml network_info=[{"id": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "address": "fa:16:3e:9c:71:56", "network": {"id": "af5f0689-533c-4122-8c53-f96f6a00279a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-149970366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82af298aee3f4ddc92520d30b5faf699", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8f8a59-ae", "ovs_interfaceid": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.781 221554 WARNING nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.789 221554 DEBUG nova.virt.libvirt.host [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.790 221554 DEBUG nova.virt.libvirt.host [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.795 221554 DEBUG nova.virt.libvirt.host [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.795 221554 DEBUG nova.virt.libvirt.host [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.796 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.797 221554 DEBUG nova.virt.hardware [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.797 221554 DEBUG nova.virt.hardware [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.797 221554 DEBUG nova.virt.hardware [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.798 221554 DEBUG nova.virt.hardware [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.798 221554 DEBUG nova.virt.hardware [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.798 221554 DEBUG nova.virt.hardware [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.799 221554 DEBUG nova.virt.hardware [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.799 221554 DEBUG nova.virt.hardware [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.799 221554 DEBUG nova.virt.hardware [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.799 221554 DEBUG nova.virt.hardware [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.800 221554 DEBUG nova.virt.hardware [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:42:22 np0005603609 nova_compute[221550]: 2026-01-31 07:42:22.802 221554 DEBUG oslo_concurrency.processutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.009 221554 DEBUG nova.virt.libvirt.imagebackend [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/0b8514e3-4251-4bd6-a806-b742c737781f/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/0b8514e3-4251-4bd6-a806-b742c737781f/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.101 221554 DEBUG nova.virt.libvirt.imagebackend [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Selected location: {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/0b8514e3-4251-4bd6-a806-b742c737781f/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.103 221554 DEBUG nova.storage.rbd_utils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] cloning images/0b8514e3-4251-4bd6-a806-b742c737781f@snap to None/49c2b2d1-3230-4f75-bc49-86230accc637_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 02:42:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:42:23 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2103826521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.268 221554 DEBUG oslo_concurrency.processutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.291 221554 DEBUG nova.storage.rbd_utils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] rbd image 51d06818-a334-4589-94a4-c988275bc6b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.295 221554 DEBUG oslo_concurrency.processutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.705 221554 DEBUG oslo_concurrency.lockutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lock "81ffb7b15aafcfc03cc7cb189e50c383d2cd996a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:42:23 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3775045493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.789 221554 DEBUG oslo_concurrency.processutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.791 221554 DEBUG nova.virt.libvirt.vif [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:42:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=49,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE4J7zXPDGyn8VdGEl7qmKSSnOiqxJZTBSl4eMBReI1rPpPbql6Hum2IZ3wG4Gtpz+QLnpxYYiQVM1ypVyqzDao4+3QGrgwQj8ywOjeZ8vqBC1CiXOqeDIF3971tPM9ESQ==',key_name='tempest-keypair-1583844521',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='82af298aee3f4ddc92520d30b5faf699',ramdisk_id='',reservation_id='r-9qkcl17p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-987146001',owner_user_name='tempest-ServersV294TestFqdnHostnames-987146001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:42:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='de4e42ee10994085b3fe69f3ad010289',uuid=51d06818-a334-4589-94a4-c988275bc6b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "address": "fa:16:3e:9c:71:56", "network": {"id": "af5f0689-533c-4122-8c53-f96f6a00279a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-149970366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82af298aee3f4ddc92520d30b5faf699", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8f8a59-ae", "ovs_interfaceid": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.791 221554 DEBUG nova.network.os_vif_util [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Converting VIF {"id": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "address": "fa:16:3e:9c:71:56", "network": {"id": "af5f0689-533c-4122-8c53-f96f6a00279a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-149970366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82af298aee3f4ddc92520d30b5faf699", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8f8a59-ae", "ovs_interfaceid": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.792 221554 DEBUG nova.network.os_vif_util [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:71:56,bridge_name='br-int',has_traffic_filtering=True,id=dd8f8a59-aeae-49bc-9cae-f91d4ea8b393,network=Network(af5f0689-533c-4122-8c53-f96f6a00279a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd8f8a59-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.793 221554 DEBUG nova.objects.instance [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lazy-loading 'pci_devices' on Instance uuid 51d06818-a334-4589-94a4-c988275bc6b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.839 221554 DEBUG nova.objects.instance [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lazy-loading 'migration_context' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.842 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  <uuid>51d06818-a334-4589-94a4-c988275bc6b0</uuid>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  <name>instance-00000031</name>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <nova:name>guest-instance-1</nova:name>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:42:22</nova:creationTime>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        <nova:user uuid="de4e42ee10994085b3fe69f3ad010289">tempest-ServersV294TestFqdnHostnames-987146001-project-member</nova:user>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        <nova:project uuid="82af298aee3f4ddc92520d30b5faf699">tempest-ServersV294TestFqdnHostnames-987146001</nova:project>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        <nova:port uuid="dd8f8a59-aeae-49bc-9cae-f91d4ea8b393">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <entry name="serial">51d06818-a334-4589-94a4-c988275bc6b0</entry>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <entry name="uuid">51d06818-a334-4589-94a4-c988275bc6b0</entry>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/51d06818-a334-4589-94a4-c988275bc6b0_disk">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/51d06818-a334-4589-94a4-c988275bc6b0_disk.config">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:9c:71:56"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <target dev="tapdd8f8a59-ae"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/51d06818-a334-4589-94a4-c988275bc6b0/console.log" append="off"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:42:23 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:42:23 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:42:23 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:42:23 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.843 221554 DEBUG nova.compute.manager [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Preparing to wait for external event network-vif-plugged-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.843 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Acquiring lock "51d06818-a334-4589-94a4-c988275bc6b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.843 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lock "51d06818-a334-4589-94a4-c988275bc6b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.843 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lock "51d06818-a334-4589-94a4-c988275bc6b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.844 221554 DEBUG nova.virt.libvirt.vif [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:42:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=49,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE4J7zXPDGyn8VdGEl7qmKSSnOiqxJZTBSl4eMBReI1rPpPbql6Hum2IZ3wG4Gtpz+QLnpxYYiQVM1ypVyqzDao4+3QGrgwQj8ywOjeZ8vqBC1CiXOqeDIF3971tPM9ESQ==',key_name='tempest-keypair-1583844521',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='82af298aee3f4ddc92520d30b5faf699',ramdisk_id='',reservation_id='r-9qkcl17p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-987146001',owner_user_name='tempest-ServersV294TestFqdnHostnames-987146001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:42:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='de4e42ee10994085b3fe69f3ad010289',uuid=51d06818-a334-4589-94a4-c988275bc6b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "address": "fa:16:3e:9c:71:56", "network": {"id": "af5f0689-533c-4122-8c53-f96f6a00279a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-149970366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82af298aee3f4ddc92520d30b5faf699", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8f8a59-ae", "ovs_interfaceid": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.844 221554 DEBUG nova.network.os_vif_util [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Converting VIF {"id": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "address": "fa:16:3e:9c:71:56", "network": {"id": "af5f0689-533c-4122-8c53-f96f6a00279a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-149970366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82af298aee3f4ddc92520d30b5faf699", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8f8a59-ae", "ovs_interfaceid": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.845 221554 DEBUG nova.network.os_vif_util [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:71:56,bridge_name='br-int',has_traffic_filtering=True,id=dd8f8a59-aeae-49bc-9cae-f91d4ea8b393,network=Network(af5f0689-533c-4122-8c53-f96f6a00279a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd8f8a59-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.845 221554 DEBUG os_vif [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:71:56,bridge_name='br-int',has_traffic_filtering=True,id=dd8f8a59-aeae-49bc-9cae-f91d4ea8b393,network=Network(af5f0689-533c-4122-8c53-f96f6a00279a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd8f8a59-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.846 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.846 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.847 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.849 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.849 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd8f8a59-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.850 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdd8f8a59-ae, col_values=(('external_ids', {'iface-id': 'dd8f8a59-aeae-49bc-9cae-f91d4ea8b393', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:71:56', 'vm-uuid': '51d06818-a334-4589-94a4-c988275bc6b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:42:23 np0005603609 NetworkManager[49064]: <info>  [1769845343.8518] manager: (tapdd8f8a59-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.852 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.855 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.858 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:23 np0005603609 nova_compute[221550]: 2026-01-31 07:42:23.859 221554 INFO os_vif [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:71:56,bridge_name='br-int',has_traffic_filtering=True,id=dd8f8a59-aeae-49bc-9cae-f91d4ea8b393,network=Network(af5f0689-533c-4122-8c53-f96f6a00279a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd8f8a59-ae')#033[00m
Jan 31 02:42:24 np0005603609 nova_compute[221550]: 2026-01-31 07:42:24.022 221554 DEBUG nova.storage.rbd_utils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] flattening vms/49c2b2d1-3230-4f75-bc49-86230accc637_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 02:42:24 np0005603609 nova_compute[221550]: 2026-01-31 07:42:24.128 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:42:24 np0005603609 nova_compute[221550]: 2026-01-31 07:42:24.128 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:42:24 np0005603609 nova_compute[221550]: 2026-01-31 07:42:24.129 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] No VIF found with MAC fa:16:3e:9c:71:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:42:24 np0005603609 nova_compute[221550]: 2026-01-31 07:42:24.129 221554 INFO nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Using config drive#033[00m
Jan 31 02:42:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:24.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:24 np0005603609 nova_compute[221550]: 2026-01-31 07:42:24.170 221554 DEBUG nova.storage.rbd_utils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] rbd image 51d06818-a334-4589-94a4-c988275bc6b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:42:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:24.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:24 np0005603609 nova_compute[221550]: 2026-01-31 07:42:24.297 221554 DEBUG nova.network.neutron [req-5ca778e5-aabe-42f4-be19-972d5b925927 req-cd8c9bac-140a-4aad-b6cc-0cf4a23de4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Updated VIF entry in instance network info cache for port dd8f8a59-aeae-49bc-9cae-f91d4ea8b393. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:42:24 np0005603609 nova_compute[221550]: 2026-01-31 07:42:24.298 221554 DEBUG nova.network.neutron [req-5ca778e5-aabe-42f4-be19-972d5b925927 req-cd8c9bac-140a-4aad-b6cc-0cf4a23de4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Updating instance_info_cache with network_info: [{"id": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "address": "fa:16:3e:9c:71:56", "network": {"id": "af5f0689-533c-4122-8c53-f96f6a00279a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-149970366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82af298aee3f4ddc92520d30b5faf699", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8f8a59-ae", "ovs_interfaceid": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:42:24 np0005603609 nova_compute[221550]: 2026-01-31 07:42:24.395 221554 DEBUG oslo_concurrency.lockutils [req-5ca778e5-aabe-42f4-be19-972d5b925927 req-cd8c9bac-140a-4aad-b6cc-0cf4a23de4b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-51d06818-a334-4589-94a4-c988275bc6b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:42:24 np0005603609 nova_compute[221550]: 2026-01-31 07:42:24.691 221554 INFO nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Creating config drive at /var/lib/nova/instances/51d06818-a334-4589-94a4-c988275bc6b0/disk.config#033[00m
Jan 31 02:42:24 np0005603609 nova_compute[221550]: 2026-01-31 07:42:24.696 221554 DEBUG oslo_concurrency.processutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51d06818-a334-4589-94a4-c988275bc6b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9jajwqjy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:42:24 np0005603609 nova_compute[221550]: 2026-01-31 07:42:24.819 221554 DEBUG oslo_concurrency.processutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51d06818-a334-4589-94a4-c988275bc6b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9jajwqjy" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:42:24 np0005603609 nova_compute[221550]: 2026-01-31 07:42:24.887 221554 DEBUG nova.storage.rbd_utils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] rbd image 51d06818-a334-4589-94a4-c988275bc6b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:42:24 np0005603609 nova_compute[221550]: 2026-01-31 07:42:24.892 221554 DEBUG oslo_concurrency.processutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51d06818-a334-4589-94a4-c988275bc6b0/disk.config 51d06818-a334-4589-94a4-c988275bc6b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.182 221554 DEBUG nova.virt.libvirt.driver [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Image rbd:vms/49c2b2d1-3230-4f75-bc49-86230accc637_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.183 221554 DEBUG nova.virt.libvirt.driver [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.183 221554 DEBUG nova.virt.libvirt.driver [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Ensure instance console log exists: /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.184 221554 DEBUG oslo_concurrency.lockutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.184 221554 DEBUG oslo_concurrency.lockutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.185 221554 DEBUG oslo_concurrency.lockutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.187 221554 DEBUG nova.virt.libvirt.driver [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T07:41:35Z,direct_url=<?>,disk_format='raw',id=0b8514e3-4251-4bd6-a806-b742c737781f,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1773899943-shelved',owner='37a878bbb1224cfeabcbe629345fc85d',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T07:42:06Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.193 221554 WARNING nova.virt.libvirt.driver [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.197 221554 DEBUG nova.virt.libvirt.host [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.198 221554 DEBUG nova.virt.libvirt.host [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.202 221554 DEBUG nova.virt.libvirt.host [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.202 221554 DEBUG nova.virt.libvirt.host [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.204 221554 DEBUG nova.virt.libvirt.driver [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.205 221554 DEBUG nova.virt.hardware [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T07:41:35Z,direct_url=<?>,disk_format='raw',id=0b8514e3-4251-4bd6-a806-b742c737781f,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1773899943-shelved',owner='37a878bbb1224cfeabcbe629345fc85d',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T07:42:06Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.206 221554 DEBUG nova.virt.hardware [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.206 221554 DEBUG nova.virt.hardware [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.207 221554 DEBUG nova.virt.hardware [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.207 221554 DEBUG nova.virt.hardware [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.207 221554 DEBUG nova.virt.hardware [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.208 221554 DEBUG nova.virt.hardware [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.208 221554 DEBUG nova.virt.hardware [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.209 221554 DEBUG nova.virt.hardware [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.209 221554 DEBUG nova.virt.hardware [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.210 221554 DEBUG nova.virt.hardware [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.210 221554 DEBUG nova.objects.instance [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.407 221554 DEBUG oslo_concurrency.processutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.454 221554 DEBUG oslo_concurrency.processutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51d06818-a334-4589-94a4-c988275bc6b0/disk.config 51d06818-a334-4589-94a4-c988275bc6b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.455 221554 INFO nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Deleting local config drive /var/lib/nova/instances/51d06818-a334-4589-94a4-c988275bc6b0/disk.config because it was imported into RBD.#033[00m
Jan 31 02:42:25 np0005603609 kernel: tapdd8f8a59-ae: entered promiscuous mode
Jan 31 02:42:25 np0005603609 NetworkManager[49064]: <info>  [1769845345.5081] manager: (tapdd8f8a59-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Jan 31 02:42:25 np0005603609 ovn_controller[130359]: 2026-01-31T07:42:25Z|00144|binding|INFO|Claiming lport dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 for this chassis.
Jan 31 02:42:25 np0005603609 ovn_controller[130359]: 2026-01-31T07:42:25Z|00145|binding|INFO|dd8f8a59-aeae-49bc-9cae-f91d4ea8b393: Claiming fa:16:3e:9c:71:56 10.100.0.12
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.513 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:25 np0005603609 systemd-machined[190912]: New machine qemu-23-instance-00000031.
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.549 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:25 np0005603609 ovn_controller[130359]: 2026-01-31T07:42:25Z|00146|binding|INFO|Setting lport dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 ovn-installed in OVS
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.553 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:25 np0005603609 systemd[1]: Started Virtual Machine qemu-23-instance-00000031.
Jan 31 02:42:25 np0005603609 systemd-udevd[239733]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:42:25 np0005603609 NetworkManager[49064]: <info>  [1769845345.5785] device (tapdd8f8a59-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:42:25 np0005603609 NetworkManager[49064]: <info>  [1769845345.5791] device (tapdd8f8a59-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:42:25 np0005603609 ovn_controller[130359]: 2026-01-31T07:42:25Z|00147|binding|INFO|Setting lport dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 up in Southbound
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.727 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:71:56 10.100.0.12'], port_security=['fa:16:3e:9c:71:56 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '51d06818-a334-4589-94a4-c988275bc6b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af5f0689-533c-4122-8c53-f96f6a00279a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82af298aee3f4ddc92520d30b5faf699', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6ba52ee7-b809-4855-b967-67249ccec44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa142bef-4343-4ed7-850e-c4d90e095f29, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=dd8f8a59-aeae-49bc-9cae-f91d4ea8b393) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.730 140058 INFO neutron.agent.ovn.metadata.agent [-] Port dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 in datapath af5f0689-533c-4122-8c53-f96f6a00279a bound to our chassis#033[00m
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.735 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network af5f0689-533c-4122-8c53-f96f6a00279a#033[00m
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.745 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b684d160-e243-491a-99ac-97eed0eab1de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.746 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaf5f0689-51 in ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.748 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaf5f0689-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.748 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[98d1ac36-2560-4aae-ab9f-48064a9af1fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.750 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[aae4ed2d-37e6-4f8a-87c6-69ecd794bfda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.758 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[b84dc0c3-40c5-4e65-b221-8936226ca84a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.770 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[00f146b6-f045-4478-8685-dea95bfe6eea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.796 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[69646e3f-b65a-4926-9436-53e01afb1342]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:25 np0005603609 NetworkManager[49064]: <info>  [1769845345.8025] manager: (tapaf5f0689-50): new Veth device (/org/freedesktop/NetworkManager/Devices/73)
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.803 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[19f36e6c-2400-4a2d-b50e-847ce6eae3cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.836 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2cb0b9-5e90-42e0-b20b-c2f1e4fea3d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.840 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b366cd7f-7fc9-4e7f-9c42-d8c0ba1011ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.846 221554 DEBUG oslo_concurrency.processutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:42:25 np0005603609 NetworkManager[49064]: <info>  [1769845345.8613] device (tapaf5f0689-50): carrier: link connected
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.869 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b69d4c9e-de49-46a1-a448-5f476e10eee5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.885 221554 DEBUG nova.storage.rbd_utils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.887 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b913ac-43b0-44b1-91ac-d2999cdde200]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaf5f0689-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:da:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558556, 'reachable_time': 43384, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239783, 'error': None, 'target': 'ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:25 np0005603609 nova_compute[221550]: 2026-01-31 07:42:25.890 221554 DEBUG oslo_concurrency.processutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.904 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8f798e-16f1-452b-9412-cde9f75c8e0e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:daed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558556, 'tstamp': 558556}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239787, 'error': None, 'target': 'ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.925 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5f0756-f4e1-4d94-9748-ff33ec6832e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaf5f0689-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:da:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558556, 'reachable_time': 43384, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239789, 'error': None, 'target': 'ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:25.956 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c39e9323-8d73-44bd-a3cf-4eb6afffd7c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:26.014 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e76e665e-c00d-4b0b-b81a-5b173f920dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:26.015 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf5f0689-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:26.015 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:26.016 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaf5f0689-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:42:26 np0005603609 NetworkManager[49064]: <info>  [1769845346.0183] manager: (tapaf5f0689-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Jan 31 02:42:26 np0005603609 kernel: tapaf5f0689-50: entered promiscuous mode
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:26.022 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaf5f0689-50, col_values=(('external_ids', {'iface-id': '37abe80c-89eb-4287-91a5-a7651951fc72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:42:26 np0005603609 ovn_controller[130359]: 2026-01-31T07:42:26Z|00148|binding|INFO|Releasing lport 37abe80c-89eb-4287-91a5-a7651951fc72 from this chassis (sb_readonly=0)
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.023 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.031 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:26.032 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/af5f0689-533c-4122-8c53-f96f6a00279a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/af5f0689-533c-4122-8c53-f96f6a00279a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:26.034 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[23461dcc-2964-4be0-8f4c-26457b1becbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:26.034 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-af5f0689-533c-4122-8c53-f96f6a00279a
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/af5f0689-533c-4122-8c53-f96f6a00279a.pid.haproxy
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID af5f0689-533c-4122-8c53-f96f6a00279a
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:42:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:26.035 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a', 'env', 'PROCESS_TAG=haproxy-af5f0689-533c-4122-8c53-f96f6a00279a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/af5f0689-533c-4122-8c53-f96f6a00279a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:42:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:26.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:26.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:42:26 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1043671265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.322 221554 DEBUG oslo_concurrency.processutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.325 221554 DEBUG nova.objects.instance [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lazy-loading 'pci_devices' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:42:26 np0005603609 podman[239870]: 2026-01-31 07:42:26.372857313 +0000 UTC m=+0.077224769 container create 222e967e6d3396004339feb2f418a0b6be0668913366ded00a602e339f48abc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.399 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845346.3991997, 51d06818-a334-4589-94a4-c988275bc6b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.400 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] VM Started (Lifecycle Event)#033[00m
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.403 221554 DEBUG nova.virt.libvirt.driver [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  <uuid>49c2b2d1-3230-4f75-bc49-86230accc637</uuid>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  <name>instance-0000002d</name>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1773899943</nova:name>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:42:25</nova:creationTime>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:42:26 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:        <nova:user uuid="cda91adb5caf4eeb81b5a934ccbb1a1e">tempest-UnshelveToHostMultiNodesTest-877324354-project-member</nova:user>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:        <nova:project uuid="37a878bbb1224cfeabcbe629345fc85d">tempest-UnshelveToHostMultiNodesTest-877324354</nova:project>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="0b8514e3-4251-4bd6-a806-b742c737781f"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <entry name="serial">49c2b2d1-3230-4f75-bc49-86230accc637</entry>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <entry name="uuid">49c2b2d1-3230-4f75-bc49-86230accc637</entry>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/49c2b2d1-3230-4f75-bc49-86230accc637_disk">
Jan 31 02:42:26 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:42:26 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/49c2b2d1-3230-4f75-bc49-86230accc637_disk.config">
Jan 31 02:42:26 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:42:26 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/console.log" append="off"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <input type="keyboard" bus="usb"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:42:26 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:42:26 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:42:26 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:42:26 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 02:42:26 np0005603609 systemd[1]: Started libpod-conmon-222e967e6d3396004339feb2f418a0b6be0668913366ded00a602e339f48abc0.scope.
Jan 31 02:42:26 np0005603609 podman[239870]: 2026-01-31 07:42:26.324448525 +0000 UTC m=+0.028815981 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:42:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:26 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:42:26 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6faff16e812dbc94f458a92effda06e3bfcec28fb9a85276ce587277248f8382/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:42:26 np0005603609 podman[239870]: 2026-01-31 07:42:26.484856783 +0000 UTC m=+0.189224299 container init 222e967e6d3396004339feb2f418a0b6be0668913366ded00a602e339f48abc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 02:42:26 np0005603609 podman[239870]: 2026-01-31 07:42:26.49263555 +0000 UTC m=+0.197003006 container start 222e967e6d3396004339feb2f418a0b6be0668913366ded00a602e339f48abc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 02:42:26 np0005603609 neutron-haproxy-ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a[239896]: [NOTICE]   (239901) : New worker (239903) forked
Jan 31 02:42:26 np0005603609 neutron-haproxy-ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a[239896]: [NOTICE]   (239901) : Loading success.
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.542 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.548 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845346.3994489, 51d06818-a334-4589-94a4-c988275bc6b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.548 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] VM Paused (Lifecycle Event)
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.736 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.739 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.741 221554 DEBUG nova.virt.libvirt.driver [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.741 221554 DEBUG nova.virt.libvirt.driver [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.742 221554 INFO nova.virt.libvirt.driver [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Using config drive
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.767 221554 DEBUG nova.storage.rbd_utils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.777 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.919 221554 DEBUG nova.compute.manager [req-9be07d5e-6e4c-4433-aa4a-2273bf3e396b req-be5d2481-3041-42ec-b5be-f31755470e39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Received event network-vif-plugged-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.920 221554 DEBUG oslo_concurrency.lockutils [req-9be07d5e-6e4c-4433-aa4a-2273bf3e396b req-be5d2481-3041-42ec-b5be-f31755470e39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "51d06818-a334-4589-94a4-c988275bc6b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.920 221554 DEBUG oslo_concurrency.lockutils [req-9be07d5e-6e4c-4433-aa4a-2273bf3e396b req-be5d2481-3041-42ec-b5be-f31755470e39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "51d06818-a334-4589-94a4-c988275bc6b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.921 221554 DEBUG oslo_concurrency.lockutils [req-9be07d5e-6e4c-4433-aa4a-2273bf3e396b req-be5d2481-3041-42ec-b5be-f31755470e39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "51d06818-a334-4589-94a4-c988275bc6b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.921 221554 DEBUG nova.compute.manager [req-9be07d5e-6e4c-4433-aa4a-2273bf3e396b req-be5d2481-3041-42ec-b5be-f31755470e39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Processing event network-vif-plugged-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.922 221554 DEBUG nova.compute.manager [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.926 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.930 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.930 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845346.925002, 51d06818-a334-4589-94a4-c988275bc6b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.931 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] VM Resumed (Lifecycle Event)
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.936 221554 DEBUG nova.objects.instance [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.943 221554 INFO nova.virt.libvirt.driver [-] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Instance spawned successfully.
Jan 31 02:42:26 np0005603609 nova_compute[221550]: 2026-01-31 07:42:26.945 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.400 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.406 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.410 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.410 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.411 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.411 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.412 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.412 221554 DEBUG nova.virt.libvirt.driver [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.533 221554 DEBUG nova.objects.instance [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lazy-loading 'keypairs' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.602 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.740 221554 INFO nova.compute.manager [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Took 9.52 seconds to spawn the instance on the hypervisor.
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.741 221554 DEBUG nova.compute.manager [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.805 221554 INFO nova.virt.libvirt.driver [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Creating config drive at /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.812 221554 DEBUG oslo_concurrency.processutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpwishzzx5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.868 221554 INFO nova.compute.manager [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Took 14.42 seconds to build instance.
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.921 221554 DEBUG oslo_concurrency.lockutils [None req-7e48856e-b6f1-4f8f-b887-5525cacfd289 de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lock "51d06818-a334-4589-94a4-c988275bc6b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.936 221554 DEBUG oslo_concurrency.processutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpwishzzx5" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.979 221554 DEBUG nova.storage.rbd_utils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] rbd image 49c2b2d1-3230-4f75-bc49-86230accc637_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:42:27 np0005603609 nova_compute[221550]: 2026-01-31 07:42:27.984 221554 DEBUG oslo_concurrency.processutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config 49c2b2d1-3230-4f75-bc49-86230accc637_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:42:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:28.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:28.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:28 np0005603609 nova_compute[221550]: 2026-01-31 07:42:28.887 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:42:29 np0005603609 nova_compute[221550]: 2026-01-31 07:42:29.007 221554 DEBUG nova.compute.manager [req-3795b55d-93b6-48de-b4f6-c9005ab3f2d5 req-8df32cd4-13c9-4eaa-b71b-7bba0a59dc8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Received event network-vif-plugged-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:42:29 np0005603609 nova_compute[221550]: 2026-01-31 07:42:29.008 221554 DEBUG oslo_concurrency.lockutils [req-3795b55d-93b6-48de-b4f6-c9005ab3f2d5 req-8df32cd4-13c9-4eaa-b71b-7bba0a59dc8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "51d06818-a334-4589-94a4-c988275bc6b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:42:29 np0005603609 nova_compute[221550]: 2026-01-31 07:42:29.008 221554 DEBUG oslo_concurrency.lockutils [req-3795b55d-93b6-48de-b4f6-c9005ab3f2d5 req-8df32cd4-13c9-4eaa-b71b-7bba0a59dc8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "51d06818-a334-4589-94a4-c988275bc6b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:42:29 np0005603609 nova_compute[221550]: 2026-01-31 07:42:29.009 221554 DEBUG oslo_concurrency.lockutils [req-3795b55d-93b6-48de-b4f6-c9005ab3f2d5 req-8df32cd4-13c9-4eaa-b71b-7bba0a59dc8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "51d06818-a334-4589-94a4-c988275bc6b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:42:29 np0005603609 nova_compute[221550]: 2026-01-31 07:42:29.009 221554 DEBUG nova.compute.manager [req-3795b55d-93b6-48de-b4f6-c9005ab3f2d5 req-8df32cd4-13c9-4eaa-b71b-7bba0a59dc8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] No waiting events found dispatching network-vif-plugged-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:42:29 np0005603609 nova_compute[221550]: 2026-01-31 07:42:29.010 221554 WARNING nova.compute.manager [req-3795b55d-93b6-48de-b4f6-c9005ab3f2d5 req-8df32cd4-13c9-4eaa-b71b-7bba0a59dc8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Received unexpected event network-vif-plugged-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 for instance with vm_state active and task_state None.
Jan 31 02:42:29 np0005603609 nova_compute[221550]: 2026-01-31 07:42:29.010 221554 DEBUG oslo_concurrency.processutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config 49c2b2d1-3230-4f75-bc49-86230accc637_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:42:29 np0005603609 nova_compute[221550]: 2026-01-31 07:42:29.011 221554 INFO nova.virt.libvirt.driver [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Deleting local config drive /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637/disk.config because it was imported into RBD.
Jan 31 02:42:29 np0005603609 systemd-machined[190912]: New machine qemu-24-instance-0000002d.
Jan 31 02:42:29 np0005603609 systemd[1]: Started Virtual Machine qemu-24-instance-0000002d.
Jan 31 02:42:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:42:29 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2678096094' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:42:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:42:29 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2678096094' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:42:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:30.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:30.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.201 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845350.2003803, 49c2b2d1-3230-4f75-bc49-86230accc637 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.202 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] VM Resumed (Lifecycle Event)
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.204 221554 DEBUG nova.compute.manager [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.204 221554 DEBUG nova.virt.libvirt.driver [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.209 221554 INFO nova.virt.libvirt.driver [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance spawned successfully.
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.231 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.234 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.259 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.259 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845350.202078, 49c2b2d1-3230-4f75-bc49-86230accc637 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.260 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] VM Started (Lifecycle Event)
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.277 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.280 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.295 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:30 np0005603609 NetworkManager[49064]: <info>  [1769845350.2958] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Jan 31 02:42:30 np0005603609 NetworkManager[49064]: <info>  [1769845350.2967] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.302 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.346 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:30 np0005603609 ovn_controller[130359]: 2026-01-31T07:42:30Z|00149|binding|INFO|Releasing lport 37abe80c-89eb-4287-91a5-a7651951fc72 from this chassis (sb_readonly=0)
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.366 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.762 221554 DEBUG nova.compute.manager [req-3508f7fe-0670-4c77-876d-8ca77ae7eeb2 req-a1982844-fade-41b7-b858-6c32ae5f3b58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Received event network-changed-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.762 221554 DEBUG nova.compute.manager [req-3508f7fe-0670-4c77-876d-8ca77ae7eeb2 req-a1982844-fade-41b7-b858-6c32ae5f3b58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Refreshing instance network info cache due to event network-changed-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.762 221554 DEBUG oslo_concurrency.lockutils [req-3508f7fe-0670-4c77-876d-8ca77ae7eeb2 req-a1982844-fade-41b7-b858-6c32ae5f3b58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-51d06818-a334-4589-94a4-c988275bc6b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.763 221554 DEBUG oslo_concurrency.lockutils [req-3508f7fe-0670-4c77-876d-8ca77ae7eeb2 req-a1982844-fade-41b7-b858-6c32ae5f3b58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-51d06818-a334-4589-94a4-c988275bc6b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:42:30 np0005603609 nova_compute[221550]: 2026-01-31 07:42:30.763 221554 DEBUG nova.network.neutron [req-3508f7fe-0670-4c77-876d-8ca77ae7eeb2 req-a1982844-fade-41b7-b858-6c32ae5f3b58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Refreshing network info cache for port dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:42:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e202 e202: 3 total, 3 up, 3 in
Jan 31 02:42:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:31 np0005603609 nova_compute[221550]: 2026-01-31 07:42:31.778 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:32.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:42:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:32.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:42:33 np0005603609 nova_compute[221550]: 2026-01-31 07:42:33.891 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:42:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:34.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:42:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:42:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:34.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:42:34 np0005603609 nova_compute[221550]: 2026-01-31 07:42:34.272 221554 DEBUG nova.compute.manager [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:42:34 np0005603609 nova_compute[221550]: 2026-01-31 07:42:34.362 221554 DEBUG oslo_concurrency.lockutils [None req-d46f8e01-582c-4c86-9689-039ced7fbc98 ee34d17fc7784148b3999a76522012c1 b4dde04be2af41b58e93cd54a421a74f - - default default] Lock "49c2b2d1-3230-4f75-bc49-86230accc637" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 15.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:34 np0005603609 nova_compute[221550]: 2026-01-31 07:42:34.503 221554 DEBUG nova.network.neutron [req-3508f7fe-0670-4c77-876d-8ca77ae7eeb2 req-a1982844-fade-41b7-b858-6c32ae5f3b58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Updated VIF entry in instance network info cache for port dd8f8a59-aeae-49bc-9cae-f91d4ea8b393. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:42:34 np0005603609 nova_compute[221550]: 2026-01-31 07:42:34.504 221554 DEBUG nova.network.neutron [req-3508f7fe-0670-4c77-876d-8ca77ae7eeb2 req-a1982844-fade-41b7-b858-6c32ae5f3b58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Updating instance_info_cache with network_info: [{"id": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "address": "fa:16:3e:9c:71:56", "network": {"id": "af5f0689-533c-4122-8c53-f96f6a00279a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-149970366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82af298aee3f4ddc92520d30b5faf699", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8f8a59-ae", "ovs_interfaceid": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:42:34 np0005603609 nova_compute[221550]: 2026-01-31 07:42:34.674 221554 DEBUG oslo_concurrency.lockutils [req-3508f7fe-0670-4c77-876d-8ca77ae7eeb2 req-a1982844-fade-41b7-b858-6c32ae5f3b58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-51d06818-a334-4589-94a4-c988275bc6b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:42:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:36.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:42:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:36.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:42:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:36 np0005603609 nova_compute[221550]: 2026-01-31 07:42:36.780 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:36 np0005603609 nova_compute[221550]: 2026-01-31 07:42:36.913 221554 DEBUG oslo_concurrency.lockutils [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquiring lock "49c2b2d1-3230-4f75-bc49-86230accc637" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:36 np0005603609 nova_compute[221550]: 2026-01-31 07:42:36.914 221554 DEBUG oslo_concurrency.lockutils [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "49c2b2d1-3230-4f75-bc49-86230accc637" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:36 np0005603609 nova_compute[221550]: 2026-01-31 07:42:36.914 221554 DEBUG oslo_concurrency.lockutils [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquiring lock "49c2b2d1-3230-4f75-bc49-86230accc637-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:36 np0005603609 nova_compute[221550]: 2026-01-31 07:42:36.915 221554 DEBUG oslo_concurrency.lockutils [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "49c2b2d1-3230-4f75-bc49-86230accc637-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:36 np0005603609 nova_compute[221550]: 2026-01-31 07:42:36.915 221554 DEBUG oslo_concurrency.lockutils [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "49c2b2d1-3230-4f75-bc49-86230accc637-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:36 np0005603609 nova_compute[221550]: 2026-01-31 07:42:36.916 221554 INFO nova.compute.manager [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Terminating instance#033[00m
Jan 31 02:42:36 np0005603609 nova_compute[221550]: 2026-01-31 07:42:36.917 221554 DEBUG oslo_concurrency.lockutils [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquiring lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:42:36 np0005603609 nova_compute[221550]: 2026-01-31 07:42:36.917 221554 DEBUG oslo_concurrency.lockutils [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquired lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:42:36 np0005603609 nova_compute[221550]: 2026-01-31 07:42:36.917 221554 DEBUG nova.network.neutron [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:42:37 np0005603609 nova_compute[221550]: 2026-01-31 07:42:37.205 221554 DEBUG nova.network.neutron [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:42:37 np0005603609 nova_compute[221550]: 2026-01-31 07:42:37.917 221554 DEBUG nova.network.neutron [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:42:37 np0005603609 nova_compute[221550]: 2026-01-31 07:42:37.970 221554 DEBUG oslo_concurrency.lockutils [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Releasing lock "refresh_cache-49c2b2d1-3230-4f75-bc49-86230accc637" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:42:37 np0005603609 nova_compute[221550]: 2026-01-31 07:42:37.971 221554 DEBUG nova.compute.manager [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:42:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:38.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:42:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:38.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:42:38 np0005603609 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Jan 31 02:42:38 np0005603609 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000002d.scope: Consumed 9.164s CPU time.
Jan 31 02:42:38 np0005603609 systemd-machined[190912]: Machine qemu-24-instance-0000002d terminated.
Jan 31 02:42:38 np0005603609 nova_compute[221550]: 2026-01-31 07:42:38.593 221554 INFO nova.virt.libvirt.driver [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance destroyed successfully.#033[00m
Jan 31 02:42:38 np0005603609 nova_compute[221550]: 2026-01-31 07:42:38.594 221554 DEBUG nova.objects.instance [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lazy-loading 'resources' on Instance uuid 49c2b2d1-3230-4f75-bc49-86230accc637 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:42:38 np0005603609 nova_compute[221550]: 2026-01-31 07:42:38.895 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:42:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:40.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:42:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:40.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 e203: 3 total, 3 up, 3 in
Jan 31 02:42:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:41 np0005603609 nova_compute[221550]: 2026-01-31 07:42:41.784 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:42:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:42.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:42:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:42.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:43 np0005603609 ovn_controller[130359]: 2026-01-31T07:42:43Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:71:56 10.100.0.12
Jan 31 02:42:43 np0005603609 ovn_controller[130359]: 2026-01-31T07:42:43Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:71:56 10.100.0.12
Jan 31 02:42:43 np0005603609 nova_compute[221550]: 2026-01-31 07:42:43.900 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:44.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:44.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:44 np0005603609 nova_compute[221550]: 2026-01-31 07:42:44.856 221554 INFO nova.virt.libvirt.driver [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Deleting instance files /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637_del#033[00m
Jan 31 02:42:44 np0005603609 nova_compute[221550]: 2026-01-31 07:42:44.858 221554 INFO nova.virt.libvirt.driver [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Deletion of /var/lib/nova/instances/49c2b2d1-3230-4f75-bc49-86230accc637_del complete#033[00m
Jan 31 02:42:45 np0005603609 nova_compute[221550]: 2026-01-31 07:42:45.377 221554 INFO nova.compute.manager [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Took 7.41 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:42:45 np0005603609 nova_compute[221550]: 2026-01-31 07:42:45.381 221554 DEBUG oslo.service.loopingcall [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:42:45 np0005603609 nova_compute[221550]: 2026-01-31 07:42:45.382 221554 DEBUG nova.compute.manager [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:42:45 np0005603609 nova_compute[221550]: 2026-01-31 07:42:45.382 221554 DEBUG nova.network.neutron [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:42:45 np0005603609 nova_compute[221550]: 2026-01-31 07:42:45.589 221554 DEBUG nova.network.neutron [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:42:45 np0005603609 nova_compute[221550]: 2026-01-31 07:42:45.631 221554 DEBUG nova.network.neutron [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:42:45 np0005603609 nova_compute[221550]: 2026-01-31 07:42:45.685 221554 INFO nova.compute.manager [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Took 0.30 seconds to deallocate network for instance.#033[00m
Jan 31 02:42:45 np0005603609 nova_compute[221550]: 2026-01-31 07:42:45.814 221554 DEBUG oslo_concurrency.lockutils [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:45 np0005603609 nova_compute[221550]: 2026-01-31 07:42:45.815 221554 DEBUG oslo_concurrency.lockutils [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:45 np0005603609 nova_compute[221550]: 2026-01-31 07:42:45.942 221554 DEBUG oslo_concurrency.processutils [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:42:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:46.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:42:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:46.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:42:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:42:46 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3880658820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:42:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:46 np0005603609 nova_compute[221550]: 2026-01-31 07:42:46.442 221554 DEBUG oslo_concurrency.processutils [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:42:46 np0005603609 nova_compute[221550]: 2026-01-31 07:42:46.447 221554 DEBUG nova.compute.provider_tree [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:42:46 np0005603609 nova_compute[221550]: 2026-01-31 07:42:46.545 221554 DEBUG nova.scheduler.client.report [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:42:46 np0005603609 nova_compute[221550]: 2026-01-31 07:42:46.652 221554 DEBUG oslo_concurrency.lockutils [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:46 np0005603609 nova_compute[221550]: 2026-01-31 07:42:46.785 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:46 np0005603609 nova_compute[221550]: 2026-01-31 07:42:46.908 221554 INFO nova.scheduler.client.report [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Deleted allocations for instance 49c2b2d1-3230-4f75-bc49-86230accc637#033[00m
Jan 31 02:42:47 np0005603609 nova_compute[221550]: 2026-01-31 07:42:47.155 221554 DEBUG oslo_concurrency.lockutils [None req-8acac80a-7b08-4569-b381-996b530ef7b3 cda91adb5caf4eeb81b5a934ccbb1a1e 37a878bbb1224cfeabcbe629345fc85d - - default default] Lock "49c2b2d1-3230-4f75-bc49-86230accc637" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:47.703 140167 DEBUG eventlet.wsgi.server [-] (140167) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Jan 31 02:42:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:47.705 140167 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Jan 31 02:42:47 np0005603609 ovn_metadata_agent[140053]: Accept: */*#015
Jan 31 02:42:47 np0005603609 ovn_metadata_agent[140053]: Connection: close#015
Jan 31 02:42:47 np0005603609 ovn_metadata_agent[140053]: Content-Type: text/plain#015
Jan 31 02:42:47 np0005603609 ovn_metadata_agent[140053]: Host: 169.254.169.254#015
Jan 31 02:42:47 np0005603609 ovn_metadata_agent[140053]: User-Agent: curl/7.84.0#015
Jan 31 02:42:47 np0005603609 ovn_metadata_agent[140053]: X-Forwarded-For: 10.100.0.12#015
Jan 31 02:42:47 np0005603609 ovn_metadata_agent[140053]: X-Ovn-Network-Id: af5f0689-533c-4122-8c53-f96f6a00279a __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Jan 31 02:42:48 np0005603609 podman[240077]: 2026-01-31 07:42:48.17275819 +0000 UTC m=+0.052781813 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 02:42:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:42:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:48.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:42:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:48.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:48 np0005603609 podman[240076]: 2026-01-31 07:42:48.230061822 +0000 UTC m=+0.106449649 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, 
org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 02:42:48 np0005603609 nova_compute[221550]: 2026-01-31 07:42:48.903 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:50.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:42:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:50.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:42:51 np0005603609 haproxy-metadata-proxy-af5f0689-533c-4122-8c53-f96f6a00279a[239903]: 10.100.0.12:37046 [31/Jan/2026:07:42:47.702] listener listener/metadata 0/0/0/3654/3654 200 1657 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Jan 31 02:42:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:51.355 140167 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Jan 31 02:42:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:51.356 140167 INFO eventlet.wsgi.server [-] 10.100.0.12,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1673 time: 3.6513417#033[00m
Jan 31 02:42:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:51 np0005603609 nova_compute[221550]: 2026-01-31 07:42:51.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:51 np0005603609 nova_compute[221550]: 2026-01-31 07:42:51.790 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:51 np0005603609 nova_compute[221550]: 2026-01-31 07:42:51.925 221554 DEBUG oslo_concurrency.lockutils [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Acquiring lock "51d06818-a334-4589-94a4-c988275bc6b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:51 np0005603609 nova_compute[221550]: 2026-01-31 07:42:51.926 221554 DEBUG oslo_concurrency.lockutils [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lock "51d06818-a334-4589-94a4-c988275bc6b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:51 np0005603609 nova_compute[221550]: 2026-01-31 07:42:51.926 221554 DEBUG oslo_concurrency.lockutils [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Acquiring lock "51d06818-a334-4589-94a4-c988275bc6b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:51 np0005603609 nova_compute[221550]: 2026-01-31 07:42:51.927 221554 DEBUG oslo_concurrency.lockutils [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lock "51d06818-a334-4589-94a4-c988275bc6b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:51 np0005603609 nova_compute[221550]: 2026-01-31 07:42:51.927 221554 DEBUG oslo_concurrency.lockutils [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lock "51d06818-a334-4589-94a4-c988275bc6b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:51 np0005603609 nova_compute[221550]: 2026-01-31 07:42:51.929 221554 INFO nova.compute.manager [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Terminating instance#033[00m
Jan 31 02:42:51 np0005603609 nova_compute[221550]: 2026-01-31 07:42:51.931 221554 DEBUG nova.compute.manager [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:42:52 np0005603609 kernel: tapdd8f8a59-ae (unregistering): left promiscuous mode
Jan 31 02:42:52 np0005603609 NetworkManager[49064]: <info>  [1769845372.1384] device (tapdd8f8a59-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.138 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:52 np0005603609 ovn_controller[130359]: 2026-01-31T07:42:52Z|00150|binding|INFO|Releasing lport dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 from this chassis (sb_readonly=0)
Jan 31 02:42:52 np0005603609 ovn_controller[130359]: 2026-01-31T07:42:52Z|00151|binding|INFO|Setting lport dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 down in Southbound
Jan 31 02:42:52 np0005603609 ovn_controller[130359]: 2026-01-31T07:42:52Z|00152|binding|INFO|Removing iface tapdd8f8a59-ae ovn-installed in OVS
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.146 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.149 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.155 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:52 np0005603609 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000031.scope: Deactivated successfully.
Jan 31 02:42:52 np0005603609 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000031.scope: Consumed 14.108s CPU time.
Jan 31 02:42:52 np0005603609 systemd-machined[190912]: Machine qemu-23-instance-00000031 terminated.
Jan 31 02:42:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:52.208 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:71:56 10.100.0.12'], port_security=['fa:16:3e:9c:71:56 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '51d06818-a334-4589-94a4-c988275bc6b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af5f0689-533c-4122-8c53-f96f6a00279a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82af298aee3f4ddc92520d30b5faf699', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6ba52ee7-b809-4855-b967-67249ccec44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa142bef-4343-4ed7-850e-c4d90e095f29, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=dd8f8a59-aeae-49bc-9cae-f91d4ea8b393) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:42:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:52.210 140058 INFO neutron.agent.ovn.metadata.agent [-] Port dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 in datapath af5f0689-533c-4122-8c53-f96f6a00279a unbound from our chassis#033[00m
Jan 31 02:42:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:42:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:52.211 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network af5f0689-533c-4122-8c53-f96f6a00279a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:42:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:52.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:42:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:52.212 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b76dd30f-7781-4c91-9589-177712f09832]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:52.213 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a namespace which is not needed anymore#033[00m
Jan 31 02:42:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:52.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:52 np0005603609 neutron-haproxy-ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a[239896]: [NOTICE]   (239901) : haproxy version is 2.8.14-c23fe91
Jan 31 02:42:52 np0005603609 neutron-haproxy-ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a[239896]: [NOTICE]   (239901) : path to executable is /usr/sbin/haproxy
Jan 31 02:42:52 np0005603609 neutron-haproxy-ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a[239896]: [WARNING]  (239901) : Exiting Master process...
Jan 31 02:42:52 np0005603609 neutron-haproxy-ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a[239896]: [ALERT]    (239901) : Current worker (239903) exited with code 143 (Terminated)
Jan 31 02:42:52 np0005603609 neutron-haproxy-ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a[239896]: [WARNING]  (239901) : All workers exited. Exiting... (0)
Jan 31 02:42:52 np0005603609 systemd[1]: libpod-222e967e6d3396004339feb2f418a0b6be0668913366ded00a602e339f48abc0.scope: Deactivated successfully.
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.356 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:52 np0005603609 podman[240145]: 2026-01-31 07:42:52.360409673 +0000 UTC m=+0.066876532 container died 222e967e6d3396004339feb2f418a0b6be0668913366ded00a602e339f48abc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.362 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.378 221554 INFO nova.virt.libvirt.driver [-] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Instance destroyed successfully.#033[00m
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.380 221554 DEBUG nova.objects.instance [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lazy-loading 'resources' on Instance uuid 51d06818-a334-4589-94a4-c988275bc6b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:42:52 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-222e967e6d3396004339feb2f418a0b6be0668913366ded00a602e339f48abc0-userdata-shm.mount: Deactivated successfully.
Jan 31 02:42:52 np0005603609 systemd[1]: var-lib-containers-storage-overlay-6faff16e812dbc94f458a92effda06e3bfcec28fb9a85276ce587277248f8382-merged.mount: Deactivated successfully.
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.435 221554 DEBUG nova.virt.libvirt.vif [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:42:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=49,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE4J7zXPDGyn8VdGEl7qmKSSnOiqxJZTBSl4eMBReI1rPpPbql6Hum2IZ3wG4Gtpz+QLnpxYYiQVM1ypVyqzDao4+3QGrgwQj8ywOjeZ8vqBC1CiXOqeDIF3971tPM9ESQ==',key_name='tempest-keypair-1583844521',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:42:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='82af298aee3f4ddc92520d30b5faf699',ramdisk_id='',reservation_id='r-9qkcl17p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-987146001',owner_user_name='tempest-ServersV294TestFqdnHostnames-987146001-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:42:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='de4e42ee10994085b3fe69f3ad010289',uuid=51d06818-a334-4589-94a4-c988275bc6b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "address": "fa:16:3e:9c:71:56", "network": {"id": "af5f0689-533c-4122-8c53-f96f6a00279a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-149970366-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82af298aee3f4ddc92520d30b5faf699", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8f8a59-ae", "ovs_interfaceid": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.436 221554 DEBUG nova.network.os_vif_util [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Converting VIF {"id": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "address": "fa:16:3e:9c:71:56", "network": {"id": "af5f0689-533c-4122-8c53-f96f6a00279a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-149970366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82af298aee3f4ddc92520d30b5faf699", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd8f8a59-ae", "ovs_interfaceid": "dd8f8a59-aeae-49bc-9cae-f91d4ea8b393", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.437 221554 DEBUG nova.network.os_vif_util [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:71:56,bridge_name='br-int',has_traffic_filtering=True,id=dd8f8a59-aeae-49bc-9cae-f91d4ea8b393,network=Network(af5f0689-533c-4122-8c53-f96f6a00279a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd8f8a59-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.437 221554 DEBUG os_vif [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:71:56,bridge_name='br-int',has_traffic_filtering=True,id=dd8f8a59-aeae-49bc-9cae-f91d4ea8b393,network=Network(af5f0689-533c-4122-8c53-f96f6a00279a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd8f8a59-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.439 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.439 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd8f8a59-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.441 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.442 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.444 221554 INFO os_vif [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:71:56,bridge_name='br-int',has_traffic_filtering=True,id=dd8f8a59-aeae-49bc-9cae-f91d4ea8b393,network=Network(af5f0689-533c-4122-8c53-f96f6a00279a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd8f8a59-ae')#033[00m
Jan 31 02:42:52 np0005603609 podman[240145]: 2026-01-31 07:42:52.484489713 +0000 UTC m=+0.190956682 container cleanup 222e967e6d3396004339feb2f418a0b6be0668913366ded00a602e339f48abc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:42:52 np0005603609 systemd[1]: libpod-conmon-222e967e6d3396004339feb2f418a0b6be0668913366ded00a602e339f48abc0.scope: Deactivated successfully.
Jan 31 02:42:52 np0005603609 podman[240199]: 2026-01-31 07:42:52.659209435 +0000 UTC m=+0.148515046 container remove 222e967e6d3396004339feb2f418a0b6be0668913366ded00a602e339f48abc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 02:42:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:52.664 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5a5b40-7259-402a-b5a1-55636f2def7c]: (4, ('Sat Jan 31 07:42:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a (222e967e6d3396004339feb2f418a0b6be0668913366ded00a602e339f48abc0)\n222e967e6d3396004339feb2f418a0b6be0668913366ded00a602e339f48abc0\nSat Jan 31 07:42:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a (222e967e6d3396004339feb2f418a0b6be0668913366ded00a602e339f48abc0)\n222e967e6d3396004339feb2f418a0b6be0668913366ded00a602e339f48abc0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:52.666 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[182352ad-b345-4dfd-93f6-82522f613a20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:52.667 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf5f0689-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.669 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:52 np0005603609 kernel: tapaf5f0689-50: left promiscuous mode
Jan 31 02:42:52 np0005603609 nova_compute[221550]: 2026-01-31 07:42:52.679 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:52.682 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[941d70cf-3c18-4dbb-bc36-c367dd42bb2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:52.697 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[859f2a87-814f-459b-95df-ca11897a82b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:52.698 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb9657b-26c7-443b-8884-8ffe9321be65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:52.715 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4996148f-4f68-4e57-9eb6-e516bfda644d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558549, 'reachable_time': 16110, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240216, 'error': None, 'target': 'ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:52 np0005603609 systemd[1]: run-netns-ovnmeta\x2daf5f0689\x2d533c\x2d4122\x2d8c53\x2df96f6a00279a.mount: Deactivated successfully.
Jan 31 02:42:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:52.720 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-af5f0689-533c-4122-8c53-f96f6a00279a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:42:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:52.720 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[cea9549c-3137-491b-bc54-ab88e79bff89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:42:53 np0005603609 nova_compute[221550]: 2026-01-31 07:42:53.111 221554 DEBUG nova.compute.manager [req-7104cf76-e1b7-4215-a4df-5f00b2b9e40a req-64e56fa9-0104-4d71-8239-26e4aa364691 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Received event network-vif-unplugged-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:42:53 np0005603609 nova_compute[221550]: 2026-01-31 07:42:53.111 221554 DEBUG oslo_concurrency.lockutils [req-7104cf76-e1b7-4215-a4df-5f00b2b9e40a req-64e56fa9-0104-4d71-8239-26e4aa364691 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "51d06818-a334-4589-94a4-c988275bc6b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:53 np0005603609 nova_compute[221550]: 2026-01-31 07:42:53.111 221554 DEBUG oslo_concurrency.lockutils [req-7104cf76-e1b7-4215-a4df-5f00b2b9e40a req-64e56fa9-0104-4d71-8239-26e4aa364691 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "51d06818-a334-4589-94a4-c988275bc6b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:53 np0005603609 nova_compute[221550]: 2026-01-31 07:42:53.112 221554 DEBUG oslo_concurrency.lockutils [req-7104cf76-e1b7-4215-a4df-5f00b2b9e40a req-64e56fa9-0104-4d71-8239-26e4aa364691 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "51d06818-a334-4589-94a4-c988275bc6b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:53 np0005603609 nova_compute[221550]: 2026-01-31 07:42:53.112 221554 DEBUG nova.compute.manager [req-7104cf76-e1b7-4215-a4df-5f00b2b9e40a req-64e56fa9-0104-4d71-8239-26e4aa364691 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] No waiting events found dispatching network-vif-unplugged-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:42:53 np0005603609 nova_compute[221550]: 2026-01-31 07:42:53.112 221554 DEBUG nova.compute.manager [req-7104cf76-e1b7-4215-a4df-5f00b2b9e40a req-64e56fa9-0104-4d71-8239-26e4aa364691 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Received event network-vif-unplugged-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:42:53 np0005603609 nova_compute[221550]: 2026-01-31 07:42:53.591 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845358.5897267, 49c2b2d1-3230-4f75-bc49-86230accc637 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:42:53 np0005603609 nova_compute[221550]: 2026-01-31 07:42:53.592 221554 INFO nova.compute.manager [-] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:42:53 np0005603609 nova_compute[221550]: 2026-01-31 07:42:53.634 221554 DEBUG nova.compute.manager [None req-e45c9cd4-00a3-4d8c-91cb-12fa769a414b - - - - - -] [instance: 49c2b2d1-3230-4f75-bc49-86230accc637] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:42:53 np0005603609 nova_compute[221550]: 2026-01-31 07:42:53.894 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:53.894 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:42:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:53.895 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:42:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:54.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:54.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:54 np0005603609 nova_compute[221550]: 2026-01-31 07:42:54.868 221554 INFO nova.virt.libvirt.driver [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Deleting instance files /var/lib/nova/instances/51d06818-a334-4589-94a4-c988275bc6b0_del#033[00m
Jan 31 02:42:54 np0005603609 nova_compute[221550]: 2026-01-31 07:42:54.868 221554 INFO nova.virt.libvirt.driver [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Deletion of /var/lib/nova/instances/51d06818-a334-4589-94a4-c988275bc6b0_del complete#033[00m
Jan 31 02:42:54 np0005603609 nova_compute[221550]: 2026-01-31 07:42:54.934 221554 INFO nova.compute.manager [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Took 3.00 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:42:54 np0005603609 nova_compute[221550]: 2026-01-31 07:42:54.935 221554 DEBUG oslo.service.loopingcall [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:42:54 np0005603609 nova_compute[221550]: 2026-01-31 07:42:54.935 221554 DEBUG nova.compute.manager [-] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:42:54 np0005603609 nova_compute[221550]: 2026-01-31 07:42:54.936 221554 DEBUG nova.network.neutron [-] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:42:55 np0005603609 nova_compute[221550]: 2026-01-31 07:42:55.415 221554 DEBUG nova.compute.manager [req-099ed9f5-0073-4431-9c28-8ec3d8c760be req-5830c052-83d1-4eed-af8e-be3e9345b84b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Received event network-vif-plugged-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:42:55 np0005603609 nova_compute[221550]: 2026-01-31 07:42:55.416 221554 DEBUG oslo_concurrency.lockutils [req-099ed9f5-0073-4431-9c28-8ec3d8c760be req-5830c052-83d1-4eed-af8e-be3e9345b84b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "51d06818-a334-4589-94a4-c988275bc6b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:55 np0005603609 nova_compute[221550]: 2026-01-31 07:42:55.416 221554 DEBUG oslo_concurrency.lockutils [req-099ed9f5-0073-4431-9c28-8ec3d8c760be req-5830c052-83d1-4eed-af8e-be3e9345b84b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "51d06818-a334-4589-94a4-c988275bc6b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:55 np0005603609 nova_compute[221550]: 2026-01-31 07:42:55.417 221554 DEBUG oslo_concurrency.lockutils [req-099ed9f5-0073-4431-9c28-8ec3d8c760be req-5830c052-83d1-4eed-af8e-be3e9345b84b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "51d06818-a334-4589-94a4-c988275bc6b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:55 np0005603609 nova_compute[221550]: 2026-01-31 07:42:55.417 221554 DEBUG nova.compute.manager [req-099ed9f5-0073-4431-9c28-8ec3d8c760be req-5830c052-83d1-4eed-af8e-be3e9345b84b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] No waiting events found dispatching network-vif-plugged-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:42:55 np0005603609 nova_compute[221550]: 2026-01-31 07:42:55.418 221554 WARNING nova.compute.manager [req-099ed9f5-0073-4431-9c28-8ec3d8c760be req-5830c052-83d1-4eed-af8e-be3e9345b84b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Received unexpected event network-vif-plugged-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:42:55 np0005603609 nova_compute[221550]: 2026-01-31 07:42:55.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:55 np0005603609 nova_compute[221550]: 2026-01-31 07:42:55.689 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:55 np0005603609 nova_compute[221550]: 2026-01-31 07:42:55.689 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:55 np0005603609 nova_compute[221550]: 2026-01-31 07:42:55.690 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:55 np0005603609 nova_compute[221550]: 2026-01-31 07:42:55.690 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:42:55 np0005603609 nova_compute[221550]: 2026-01-31 07:42:55.691 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:42:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:42:55.898 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:42:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:42:56 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2725692687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:42:56 np0005603609 nova_compute[221550]: 2026-01-31 07:42:56.163 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:42:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:42:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:56.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:42:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:56.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:56 np0005603609 nova_compute[221550]: 2026-01-31 07:42:56.336 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:42:56 np0005603609 nova_compute[221550]: 2026-01-31 07:42:56.338 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4761MB free_disk=20.942718505859375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:42:56 np0005603609 nova_compute[221550]: 2026-01-31 07:42:56.338 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:56 np0005603609 nova_compute[221550]: 2026-01-31 07:42:56.339 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:56 np0005603609 nova_compute[221550]: 2026-01-31 07:42:56.537 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 51d06818-a334-4589-94a4-c988275bc6b0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:42:56 np0005603609 nova_compute[221550]: 2026-01-31 07:42:56.538 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:42:56 np0005603609 nova_compute[221550]: 2026-01-31 07:42:56.539 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:42:56 np0005603609 nova_compute[221550]: 2026-01-31 07:42:56.646 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:42:56 np0005603609 nova_compute[221550]: 2026-01-31 07:42:56.705 221554 DEBUG nova.network.neutron [-] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:42:56 np0005603609 nova_compute[221550]: 2026-01-31 07:42:56.729 221554 INFO nova.compute.manager [-] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Took 1.79 seconds to deallocate network for instance.#033[00m
Jan 31 02:42:56 np0005603609 nova_compute[221550]: 2026-01-31 07:42:56.790 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:56 np0005603609 nova_compute[221550]: 2026-01-31 07:42:56.843 221554 DEBUG oslo_concurrency.lockutils [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:42:57 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2509227913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:42:57 np0005603609 nova_compute[221550]: 2026-01-31 07:42:57.081 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:42:57 np0005603609 nova_compute[221550]: 2026-01-31 07:42:57.086 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:42:57 np0005603609 nova_compute[221550]: 2026-01-31 07:42:57.103 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:42:57 np0005603609 nova_compute[221550]: 2026-01-31 07:42:57.130 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:42:57 np0005603609 nova_compute[221550]: 2026-01-31 07:42:57.130 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:57 np0005603609 nova_compute[221550]: 2026-01-31 07:42:57.130 221554 DEBUG oslo_concurrency.lockutils [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.287s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:57 np0005603609 nova_compute[221550]: 2026-01-31 07:42:57.180 221554 DEBUG oslo_concurrency.processutils [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:42:57 np0005603609 nova_compute[221550]: 2026-01-31 07:42:57.442 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:42:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:42:57 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3908076386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:42:57 np0005603609 nova_compute[221550]: 2026-01-31 07:42:57.566 221554 DEBUG oslo_concurrency.processutils [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:42:57 np0005603609 nova_compute[221550]: 2026-01-31 07:42:57.572 221554 DEBUG nova.compute.provider_tree [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:42:57 np0005603609 nova_compute[221550]: 2026-01-31 07:42:57.619 221554 DEBUG nova.scheduler.client.report [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:42:57 np0005603609 nova_compute[221550]: 2026-01-31 07:42:57.627 221554 DEBUG nova.compute.manager [req-d5a93fdc-fdac-4a14-969e-345e6fa5ac27 req-05d6ee8f-b7d4-4727-9d1b-6e33413534fa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Received event network-vif-deleted-dd8f8a59-aeae-49bc-9cae-f91d4ea8b393 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:42:57 np0005603609 nova_compute[221550]: 2026-01-31 07:42:57.661 221554 DEBUG oslo_concurrency.lockutils [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:57 np0005603609 nova_compute[221550]: 2026-01-31 07:42:57.703 221554 INFO nova.scheduler.client.report [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Deleted allocations for instance 51d06818-a334-4589-94a4-c988275bc6b0#033[00m
Jan 31 02:42:57 np0005603609 nova_compute[221550]: 2026-01-31 07:42:57.823 221554 DEBUG oslo_concurrency.lockutils [None req-323dfec8-c57d-4107-af7d-81ba02bd155d de4e42ee10994085b3fe69f3ad010289 82af298aee3f4ddc92520d30b5faf699 - - default default] Lock "51d06818-a334-4589-94a4-c988275bc6b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:58 np0005603609 nova_compute[221550]: 2026-01-31 07:42:58.129 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:58 np0005603609 nova_compute[221550]: 2026-01-31 07:42:58.130 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:58.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:42:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:58.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:59 np0005603609 nova_compute[221550]: 2026-01-31 07:42:59.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:59 np0005603609 nova_compute[221550]: 2026-01-31 07:42:59.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:42:59 np0005603609 nova_compute[221550]: 2026-01-31 07:42:59.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:42:59 np0005603609 nova_compute[221550]: 2026-01-31 07:42:59.752 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:43:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:43:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:00.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:43:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:00.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:43:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:43:00 np0005603609 nova_compute[221550]: 2026-01-31 07:43:00.599 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:00 np0005603609 nova_compute[221550]: 2026-01-31 07:43:00.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:43:00 np0005603609 nova_compute[221550]: 2026-01-31 07:43:00.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:43:00 np0005603609 nova_compute[221550]: 2026-01-31 07:43:00.662 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:01 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:43:01 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:43:01 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:43:01 np0005603609 nova_compute[221550]: 2026-01-31 07:43:01.793 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:43:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:02.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:43:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:02.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:02 np0005603609 nova_compute[221550]: 2026-01-31 07:43:02.495 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:02 np0005603609 nova_compute[221550]: 2026-01-31 07:43:02.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:43:02 np0005603609 nova_compute[221550]: 2026-01-31 07:43:02.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:43:02 np0005603609 nova_compute[221550]: 2026-01-31 07:43:02.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:43:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:04.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:04.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:43:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:06.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:43:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:06.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:06 np0005603609 nova_compute[221550]: 2026-01-31 07:43:06.795 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:07 np0005603609 nova_compute[221550]: 2026-01-31 07:43:07.377 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845372.3765166, 51d06818-a334-4589-94a4-c988275bc6b0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:43:07 np0005603609 nova_compute[221550]: 2026-01-31 07:43:07.378 221554 INFO nova.compute.manager [-] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:43:07 np0005603609 nova_compute[221550]: 2026-01-31 07:43:07.414 221554 DEBUG nova.compute.manager [None req-7c112d77-a74f-4b53-bc6b-2b08db4f5f21 - - - - - -] [instance: 51d06818-a334-4589-94a4-c988275bc6b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:43:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:43:07.476 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:43:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:43:07.476 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:43:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:43:07.476 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:43:07 np0005603609 nova_compute[221550]: 2026-01-31 07:43:07.538 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:43:07 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:43:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:08.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:43:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:08.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:43:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:10.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:10.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:11 np0005603609 nova_compute[221550]: 2026-01-31 07:43:11.797 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:12.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:12.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:12 np0005603609 nova_compute[221550]: 2026-01-31 07:43:12.540 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:14.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:14.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:16.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:16.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:16 np0005603609 nova_compute[221550]: 2026-01-31 07:43:16.797 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:17 np0005603609 nova_compute[221550]: 2026-01-31 07:43:17.571 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:18.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:18.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:19 np0005603609 podman[240472]: 2026-01-31 07:43:19.207638803 +0000 UTC m=+0.090871886 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:43:19 np0005603609 podman[240473]: 2026-01-31 07:43:19.209122468 +0000 UTC m=+0.085467316 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 02:43:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:43:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:20.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:43:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:20.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:21 np0005603609 nova_compute[221550]: 2026-01-31 07:43:21.800 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:22.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:22.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:22 np0005603609 nova_compute[221550]: 2026-01-31 07:43:22.583 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:24.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:24.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:24 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 02:43:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:26.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:43:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:26.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:43:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:26 np0005603609 nova_compute[221550]: 2026-01-31 07:43:26.803 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).paxos(paxos active c 2511..3089) lease_timeout -- calling new election
Jan 31 02:43:27 np0005603609 ceph-mon[81667]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 31 02:43:27 np0005603609 ceph-mon[81667]: paxos.2).electionLogic(24) init, last seen epoch 24
Jan 31 02:43:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:43:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:43:27 np0005603609 nova_compute[221550]: 2026-01-31 07:43:27.628 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:28.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:28.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:28 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 02:43:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:43:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:43:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:43:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:30.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:43:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:30.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:30 np0005603609 ceph-mon[81667]: mon.compute-1 calling monitor election
Jan 31 02:43:30 np0005603609 ceph-mon[81667]: mon.compute-2 calling monitor election
Jan 31 02:43:30 np0005603609 ceph-mon[81667]: mon.compute-0 calling monitor election
Jan 31 02:43:30 np0005603609 ceph-mon[81667]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 02:43:30 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 02:43:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:31 np0005603609 nova_compute[221550]: 2026-01-31 07:43:31.806 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:43:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:32.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:43:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:43:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:32.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:43:32 np0005603609 nova_compute[221550]: 2026-01-31 07:43:32.629 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:34.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:43:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:34.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:43:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:36.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:36.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:36 np0005603609 nova_compute[221550]: 2026-01-31 07:43:36.808 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:37 np0005603609 nova_compute[221550]: 2026-01-31 07:43:37.631 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:43:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:38.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:43:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:38.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:40.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:40.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:41 np0005603609 nova_compute[221550]: 2026-01-31 07:43:41.810 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:43:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:42.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:43:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:42.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:42 np0005603609 nova_compute[221550]: 2026-01-31 07:43:42.693 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:44.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:44.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:43:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3345716201' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:43:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:43:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3345716201' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:43:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:43:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:46.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:43:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:46.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:46 np0005603609 nova_compute[221550]: 2026-01-31 07:43:46.810 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:47 np0005603609 ovn_controller[130359]: 2026-01-31T07:43:47Z|00153|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 02:43:47 np0005603609 nova_compute[221550]: 2026-01-31 07:43:47.695 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:48.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:48.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:50 np0005603609 podman[240523]: 2026-01-31 07:43:50.162688004 +0000 UTC m=+0.049003754 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:43:50 np0005603609 podman[240522]: 2026-01-31 07:43:50.173895982 +0000 UTC m=+0.064948455 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 02:43:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:50.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:43:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:50.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:43:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:51 np0005603609 nova_compute[221550]: 2026-01-31 07:43:51.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:43:51 np0005603609 nova_compute[221550]: 2026-01-31 07:43:51.813 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:43:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:52.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:43:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:43:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:52.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:43:52 np0005603609 nova_compute[221550]: 2026-01-31 07:43:52.696 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:54.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000046s ======
Jan 31 02:43:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:54.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000046s
Jan 31 02:43:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:56.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:56 np0005603609 nova_compute[221550]: 2026-01-31 07:43:56.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:43:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:56.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:56 np0005603609 nova_compute[221550]: 2026-01-31 07:43:56.849 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:56 np0005603609 nova_compute[221550]: 2026-01-31 07:43:56.865 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:43:56 np0005603609 nova_compute[221550]: 2026-01-31 07:43:56.866 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:43:56 np0005603609 nova_compute[221550]: 2026-01-31 07:43:56.866 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:43:56 np0005603609 nova_compute[221550]: 2026-01-31 07:43:56.866 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:43:56 np0005603609 nova_compute[221550]: 2026-01-31 07:43:56.866 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:43:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:43:57 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/334924200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:43:57 np0005603609 nova_compute[221550]: 2026-01-31 07:43:57.318 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:43:57 np0005603609 nova_compute[221550]: 2026-01-31 07:43:57.468 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:43:57 np0005603609 nova_compute[221550]: 2026-01-31 07:43:57.469 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4798MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:43:57 np0005603609 nova_compute[221550]: 2026-01-31 07:43:57.470 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:43:57 np0005603609 nova_compute[221550]: 2026-01-31 07:43:57.470 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:43:57 np0005603609 nova_compute[221550]: 2026-01-31 07:43:57.698 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:58 np0005603609 nova_compute[221550]: 2026-01-31 07:43:58.150 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:43:58 np0005603609 nova_compute[221550]: 2026-01-31 07:43:58.151 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:43:58 np0005603609 nova_compute[221550]: 2026-01-31 07:43:58.166 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:43:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:58.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:43:58 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/914263149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:43:58 np0005603609 nova_compute[221550]: 2026-01-31 07:43:58.563 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:43:58 np0005603609 nova_compute[221550]: 2026-01-31 07:43:58.568 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:43:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:43:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:58.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:43:59.226 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:43:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:43:59.227 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:43:59 np0005603609 nova_compute[221550]: 2026-01-31 07:43:59.227 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:43:59 np0005603609 nova_compute[221550]: 2026-01-31 07:43:59.359 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:43:59 np0005603609 nova_compute[221550]: 2026-01-31 07:43:59.760 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:43:59 np0005603609 nova_compute[221550]: 2026-01-31 07:43:59.761 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:00.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:44:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:00.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:44:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:01 np0005603609 nova_compute[221550]: 2026-01-31 07:44:01.763 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:01 np0005603609 nova_compute[221550]: 2026-01-31 07:44:01.764 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:01 np0005603609 nova_compute[221550]: 2026-01-31 07:44:01.764 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:44:01 np0005603609 nova_compute[221550]: 2026-01-31 07:44:01.765 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:44:01 np0005603609 nova_compute[221550]: 2026-01-31 07:44:01.852 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:02 np0005603609 nova_compute[221550]: 2026-01-31 07:44:02.308 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:44:02 np0005603609 nova_compute[221550]: 2026-01-31 07:44:02.309 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:02 np0005603609 nova_compute[221550]: 2026-01-31 07:44:02.310 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:02 np0005603609 nova_compute[221550]: 2026-01-31 07:44:02.310 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:02.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:02 np0005603609 nova_compute[221550]: 2026-01-31 07:44:02.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:02 np0005603609 nova_compute[221550]: 2026-01-31 07:44:02.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:44:02 np0005603609 nova_compute[221550]: 2026-01-31 07:44:02.744 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:02.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:02 np0005603609 nova_compute[221550]: 2026-01-31 07:44:02.987 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:02 np0005603609 nova_compute[221550]: 2026-01-31 07:44:02.987 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:03 np0005603609 nova_compute[221550]: 2026-01-31 07:44:03.302 221554 DEBUG nova.compute.manager [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:44:03 np0005603609 nova_compute[221550]: 2026-01-31 07:44:03.717 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:03 np0005603609 nova_compute[221550]: 2026-01-31 07:44:03.718 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:03 np0005603609 nova_compute[221550]: 2026-01-31 07:44:03.727 221554 DEBUG nova.virt.hardware [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:44:03 np0005603609 nova_compute[221550]: 2026-01-31 07:44:03.728 221554 INFO nova.compute.claims [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:44:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:44:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:04.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:44:04 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 02:44:04 np0005603609 nova_compute[221550]: 2026-01-31 07:44:04.363 221554 DEBUG oslo_concurrency.processutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:04 np0005603609 nova_compute[221550]: 2026-01-31 07:44:04.662 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:44:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3402555874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:44:04 np0005603609 nova_compute[221550]: 2026-01-31 07:44:04.816 221554 DEBUG oslo_concurrency.processutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:04 np0005603609 nova_compute[221550]: 2026-01-31 07:44:04.824 221554 DEBUG nova.compute.provider_tree [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:44:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:04.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:04 np0005603609 nova_compute[221550]: 2026-01-31 07:44:04.873 221554 DEBUG nova.scheduler.client.report [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:44:04 np0005603609 nova_compute[221550]: 2026-01-31 07:44:04.941 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:04 np0005603609 nova_compute[221550]: 2026-01-31 07:44:04.943 221554 DEBUG nova.compute.manager [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.079 221554 DEBUG nova.compute.manager [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.080 221554 DEBUG nova.network.neutron [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.133 221554 INFO nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.210 221554 DEBUG nova.compute.manager [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.482 221554 DEBUG nova.compute.manager [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.484 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.484 221554 INFO nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Creating image(s)#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.525 221554 DEBUG nova.storage.rbd_utils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 273e09ce-6fa6-4cb5-a2c6-0eab195d7242_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.567 221554 DEBUG nova.storage.rbd_utils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 273e09ce-6fa6-4cb5-a2c6-0eab195d7242_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.610 221554 DEBUG nova.storage.rbd_utils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 273e09ce-6fa6-4cb5-a2c6-0eab195d7242_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.616 221554 DEBUG oslo_concurrency.processutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.702 221554 DEBUG oslo_concurrency.processutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.704 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.705 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.706 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.749 221554 DEBUG nova.storage.rbd_utils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 273e09ce-6fa6-4cb5-a2c6-0eab195d7242_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.755 221554 DEBUG oslo_concurrency.processutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 273e09ce-6fa6-4cb5-a2c6-0eab195d7242_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:05 np0005603609 nova_compute[221550]: 2026-01-31 07:44:05.969 221554 DEBUG nova.policy [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97abab8eb79247cd89fb2ebff295b890', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:44:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:44:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:06.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:44:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:06 np0005603609 nova_compute[221550]: 2026-01-31 07:44:06.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:44:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:06.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:44:06 np0005603609 nova_compute[221550]: 2026-01-31 07:44:06.897 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:07.230 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:07.478 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:07.478 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:07.479 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:07 np0005603609 nova_compute[221550]: 2026-01-31 07:44:07.747 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:08 np0005603609 nova_compute[221550]: 2026-01-31 07:44:08.003 221554 DEBUG nova.network.neutron [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Successfully created port: 4f95b428-b5b8-4191-b6dc-794f22f27406 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:44:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:44:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:08.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:44:08 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 02:44:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).paxos(paxos active c 2511..3124) lease_timeout -- calling new election
Jan 31 02:44:08 np0005603609 ceph-mon[81667]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 31 02:44:08 np0005603609 ceph-mon[81667]: paxos.2).electionLogic(28) init, last seen epoch 28
Jan 31 02:44:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:44:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:44:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:44:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:08.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:44:10 np0005603609 nova_compute[221550]: 2026-01-31 07:44:10.188 221554 DEBUG nova.network.neutron [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Successfully updated port: 4f95b428-b5b8-4191-b6dc-794f22f27406 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:44:10 np0005603609 nova_compute[221550]: 2026-01-31 07:44:10.244 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "refresh_cache-273e09ce-6fa6-4cb5-a2c6-0eab195d7242" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:44:10 np0005603609 nova_compute[221550]: 2026-01-31 07:44:10.244 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquired lock "refresh_cache-273e09ce-6fa6-4cb5-a2c6-0eab195d7242" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:44:10 np0005603609 nova_compute[221550]: 2026-01-31 07:44:10.244 221554 DEBUG nova.network.neutron [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:44:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:10.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:10 np0005603609 nova_compute[221550]: 2026-01-31 07:44:10.438 221554 DEBUG nova.compute.manager [req-a7bdb6d9-cc77-4aae-9c4e-4fb35d1bdd89 req-f670d2c7-fdfc-4441-838c-7a547fede44c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Received event network-changed-4f95b428-b5b8-4191-b6dc-794f22f27406 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:44:10 np0005603609 nova_compute[221550]: 2026-01-31 07:44:10.438 221554 DEBUG nova.compute.manager [req-a7bdb6d9-cc77-4aae-9c4e-4fb35d1bdd89 req-f670d2c7-fdfc-4441-838c-7a547fede44c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Refreshing instance network info cache due to event network-changed-4f95b428-b5b8-4191-b6dc-794f22f27406. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:44:10 np0005603609 nova_compute[221550]: 2026-01-31 07:44:10.438 221554 DEBUG oslo_concurrency.lockutils [req-a7bdb6d9-cc77-4aae-9c4e-4fb35d1bdd89 req-f670d2c7-fdfc-4441-838c-7a547fede44c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-273e09ce-6fa6-4cb5-a2c6-0eab195d7242" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:44:10 np0005603609 nova_compute[221550]: 2026-01-31 07:44:10.680 221554 DEBUG nova.network.neutron [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:44:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:44:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:10.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:44:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:44:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:44:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:44:11 np0005603609 nova_compute[221550]: 2026-01-31 07:44:11.899 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:12.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:12 np0005603609 nova_compute[221550]: 2026-01-31 07:44:12.749 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:12.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:13 np0005603609 ceph-mon[81667]: mon.compute-1 calling monitor election
Jan 31 02:44:13 np0005603609 ceph-mon[81667]: mon.compute-2 calling monitor election
Jan 31 02:44:13 np0005603609 ceph-mon[81667]: mon.compute-0 calling monitor election
Jan 31 02:44:13 np0005603609 ceph-mon[81667]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 02:44:13 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 02:44:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:44:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:44:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:44:13 np0005603609 nova_compute[221550]: 2026-01-31 07:44:13.223 221554 DEBUG nova.network.neutron [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Updating instance_info_cache with network_info: [{"id": "4f95b428-b5b8-4191-b6dc-794f22f27406", "address": "fa:16:3e:93:95:94", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f95b428-b5", "ovs_interfaceid": "4f95b428-b5b8-4191-b6dc-794f22f27406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:44:13 np0005603609 nova_compute[221550]: 2026-01-31 07:44:13.305 221554 DEBUG oslo_concurrency.processutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 273e09ce-6fa6-4cb5-a2c6-0eab195d7242_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 7.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:13 np0005603609 nova_compute[221550]: 2026-01-31 07:44:13.393 221554 DEBUG nova.storage.rbd_utils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] resizing rbd image 273e09ce-6fa6-4cb5-a2c6-0eab195d7242_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:44:13 np0005603609 nova_compute[221550]: 2026-01-31 07:44:13.580 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Releasing lock "refresh_cache-273e09ce-6fa6-4cb5-a2c6-0eab195d7242" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:44:13 np0005603609 nova_compute[221550]: 2026-01-31 07:44:13.580 221554 DEBUG nova.compute.manager [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Instance network_info: |[{"id": "4f95b428-b5b8-4191-b6dc-794f22f27406", "address": "fa:16:3e:93:95:94", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f95b428-b5", "ovs_interfaceid": "4f95b428-b5b8-4191-b6dc-794f22f27406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:44:13 np0005603609 nova_compute[221550]: 2026-01-31 07:44:13.581 221554 DEBUG oslo_concurrency.lockutils [req-a7bdb6d9-cc77-4aae-9c4e-4fb35d1bdd89 req-f670d2c7-fdfc-4441-838c-7a547fede44c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-273e09ce-6fa6-4cb5-a2c6-0eab195d7242" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:44:13 np0005603609 nova_compute[221550]: 2026-01-31 07:44:13.581 221554 DEBUG nova.network.neutron [req-a7bdb6d9-cc77-4aae-9c4e-4fb35d1bdd89 req-f670d2c7-fdfc-4441-838c-7a547fede44c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Refreshing network info cache for port 4f95b428-b5b8-4191-b6dc-794f22f27406 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:44:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:14.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.430 221554 DEBUG nova.objects.instance [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'migration_context' on Instance uuid 273e09ce-6fa6-4cb5-a2c6-0eab195d7242 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.451 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.451 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Ensure instance console log exists: /var/lib/nova/instances/273e09ce-6fa6-4cb5-a2c6-0eab195d7242/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.452 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.452 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.453 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.456 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Start _get_guest_xml network_info=[{"id": "4f95b428-b5b8-4191-b6dc-794f22f27406", "address": "fa:16:3e:93:95:94", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f95b428-b5", "ovs_interfaceid": "4f95b428-b5b8-4191-b6dc-794f22f27406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.460 221554 WARNING nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.468 221554 DEBUG nova.virt.libvirt.host [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.468 221554 DEBUG nova.virt.libvirt.host [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.474 221554 DEBUG nova.virt.libvirt.host [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.474 221554 DEBUG nova.virt.libvirt.host [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.475 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.476 221554 DEBUG nova.virt.hardware [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.476 221554 DEBUG nova.virt.hardware [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.476 221554 DEBUG nova.virt.hardware [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.476 221554 DEBUG nova.virt.hardware [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.477 221554 DEBUG nova.virt.hardware [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.477 221554 DEBUG nova.virt.hardware [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.477 221554 DEBUG nova.virt.hardware [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.477 221554 DEBUG nova.virt.hardware [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.477 221554 DEBUG nova.virt.hardware [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.478 221554 DEBUG nova.virt.hardware [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.478 221554 DEBUG nova.virt.hardware [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.480 221554 DEBUG oslo_concurrency.processutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:14.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:44:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3142601042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.917 221554 DEBUG oslo_concurrency.processutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.943 221554 DEBUG nova.storage.rbd_utils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 273e09ce-6fa6-4cb5-a2c6-0eab195d7242_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:14 np0005603609 nova_compute[221550]: 2026-01-31 07:44:14.947 221554 DEBUG oslo_concurrency.processutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:44:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1037439228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.388 221554 DEBUG oslo_concurrency.processutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.390 221554 DEBUG nova.virt.libvirt.vif [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:43:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-742290636',display_name='tempest-DeleteServersTestJSON-server-742290636',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-742290636',id=52,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-oqbid7xd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-
2031597701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:44:05Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=273e09ce-6fa6-4cb5-a2c6-0eab195d7242,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f95b428-b5b8-4191-b6dc-794f22f27406", "address": "fa:16:3e:93:95:94", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f95b428-b5", "ovs_interfaceid": "4f95b428-b5b8-4191-b6dc-794f22f27406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.390 221554 DEBUG nova.network.os_vif_util [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "4f95b428-b5b8-4191-b6dc-794f22f27406", "address": "fa:16:3e:93:95:94", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f95b428-b5", "ovs_interfaceid": "4f95b428-b5b8-4191-b6dc-794f22f27406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.391 221554 DEBUG nova.network.os_vif_util [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:95:94,bridge_name='br-int',has_traffic_filtering=True,id=4f95b428-b5b8-4191-b6dc-794f22f27406,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f95b428-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.393 221554 DEBUG nova.objects.instance [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'pci_devices' on Instance uuid 273e09ce-6fa6-4cb5-a2c6-0eab195d7242 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.585 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  <uuid>273e09ce-6fa6-4cb5-a2c6-0eab195d7242</uuid>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  <name>instance-00000034</name>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <nova:name>tempest-DeleteServersTestJSON-server-742290636</nova:name>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:44:14</nova:creationTime>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        <nova:user uuid="97abab8eb79247cd89fb2ebff295b890">tempest-DeleteServersTestJSON-2031597701-project-member</nova:user>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        <nova:project uuid="f299640bb1f64e5fa12b23955e5a2127">tempest-DeleteServersTestJSON-2031597701</nova:project>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        <nova:port uuid="4f95b428-b5b8-4191-b6dc-794f22f27406">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <entry name="serial">273e09ce-6fa6-4cb5-a2c6-0eab195d7242</entry>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <entry name="uuid">273e09ce-6fa6-4cb5-a2c6-0eab195d7242</entry>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/273e09ce-6fa6-4cb5-a2c6-0eab195d7242_disk">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/273e09ce-6fa6-4cb5-a2c6-0eab195d7242_disk.config">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:93:95:94"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <target dev="tap4f95b428-b5"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/273e09ce-6fa6-4cb5-a2c6-0eab195d7242/console.log" append="off"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:44:15 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:44:15 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:44:15 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:44:15 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.587 221554 DEBUG nova.compute.manager [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Preparing to wait for external event network-vif-plugged-4f95b428-b5b8-4191-b6dc-794f22f27406 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.587 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.587 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.588 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.588 221554 DEBUG nova.virt.libvirt.vif [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:43:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-742290636',display_name='tempest-DeleteServersTestJSON-server-742290636',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-742290636',id=52,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-oqbid7xd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:44:05Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=273e09ce-6fa6-4cb5-a2c6-0eab195d7242,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f95b428-b5b8-4191-b6dc-794f22f27406", "address": "fa:16:3e:93:95:94", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f95b428-b5", "ovs_interfaceid": "4f95b428-b5b8-4191-b6dc-794f22f27406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.589 221554 DEBUG nova.network.os_vif_util [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "4f95b428-b5b8-4191-b6dc-794f22f27406", "address": "fa:16:3e:93:95:94", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f95b428-b5", "ovs_interfaceid": "4f95b428-b5b8-4191-b6dc-794f22f27406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.589 221554 DEBUG nova.network.os_vif_util [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:95:94,bridge_name='br-int',has_traffic_filtering=True,id=4f95b428-b5b8-4191-b6dc-794f22f27406,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f95b428-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.589 221554 DEBUG os_vif [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:95:94,bridge_name='br-int',has_traffic_filtering=True,id=4f95b428-b5b8-4191-b6dc-794f22f27406,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f95b428-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.590 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.590 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.591 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.593 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.594 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f95b428-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.594 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f95b428-b5, col_values=(('external_ids', {'iface-id': '4f95b428-b5b8-4191-b6dc-794f22f27406', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:95:94', 'vm-uuid': '273e09ce-6fa6-4cb5-a2c6-0eab195d7242'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.595 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:15 np0005603609 NetworkManager[49064]: <info>  [1769845455.5973] manager: (tap4f95b428-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.598 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.601 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.602 221554 INFO os_vif [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:95:94,bridge_name='br-int',has_traffic_filtering=True,id=4f95b428-b5b8-4191-b6dc-794f22f27406,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f95b428-b5')#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.710 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.711 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.711 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No VIF found with MAC fa:16:3e:93:95:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.711 221554 INFO nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Using config drive#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.738 221554 DEBUG nova.storage.rbd_utils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 273e09ce-6fa6-4cb5-a2c6-0eab195d7242_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.977 221554 DEBUG nova.network.neutron [req-a7bdb6d9-cc77-4aae-9c4e-4fb35d1bdd89 req-f670d2c7-fdfc-4441-838c-7a547fede44c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Updated VIF entry in instance network info cache for port 4f95b428-b5b8-4191-b6dc-794f22f27406. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:44:15 np0005603609 nova_compute[221550]: 2026-01-31 07:44:15.977 221554 DEBUG nova.network.neutron [req-a7bdb6d9-cc77-4aae-9c4e-4fb35d1bdd89 req-f670d2c7-fdfc-4441-838c-7a547fede44c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Updating instance_info_cache with network_info: [{"id": "4f95b428-b5b8-4191-b6dc-794f22f27406", "address": "fa:16:3e:93:95:94", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f95b428-b5", "ovs_interfaceid": "4f95b428-b5b8-4191-b6dc-794f22f27406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:44:16 np0005603609 nova_compute[221550]: 2026-01-31 07:44:16.007 221554 DEBUG oslo_concurrency.lockutils [req-a7bdb6d9-cc77-4aae-9c4e-4fb35d1bdd89 req-f670d2c7-fdfc-4441-838c-7a547fede44c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-273e09ce-6fa6-4cb5-a2c6-0eab195d7242" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:44:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:44:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:16.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:44:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:16.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:16 np0005603609 nova_compute[221550]: 2026-01-31 07:44:16.901 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:17 np0005603609 nova_compute[221550]: 2026-01-31 07:44:17.008 221554 INFO nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Creating config drive at /var/lib/nova/instances/273e09ce-6fa6-4cb5-a2c6-0eab195d7242/disk.config#033[00m
Jan 31 02:44:17 np0005603609 nova_compute[221550]: 2026-01-31 07:44:17.012 221554 DEBUG oslo_concurrency.processutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/273e09ce-6fa6-4cb5-a2c6-0eab195d7242/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpcbxf4ui_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:17 np0005603609 nova_compute[221550]: 2026-01-31 07:44:17.130 221554 DEBUG oslo_concurrency.processutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/273e09ce-6fa6-4cb5-a2c6-0eab195d7242/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpcbxf4ui_" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:17 np0005603609 nova_compute[221550]: 2026-01-31 07:44:17.166 221554 DEBUG nova.storage.rbd_utils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 273e09ce-6fa6-4cb5-a2c6-0eab195d7242_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:17 np0005603609 nova_compute[221550]: 2026-01-31 07:44:17.170 221554 DEBUG oslo_concurrency.processutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/273e09ce-6fa6-4cb5-a2c6-0eab195d7242/disk.config 273e09ce-6fa6-4cb5-a2c6-0eab195d7242_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:17 np0005603609 nova_compute[221550]: 2026-01-31 07:44:17.764 221554 DEBUG oslo_concurrency.processutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/273e09ce-6fa6-4cb5-a2c6-0eab195d7242/disk.config 273e09ce-6fa6-4cb5-a2c6-0eab195d7242_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:17 np0005603609 nova_compute[221550]: 2026-01-31 07:44:17.764 221554 INFO nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Deleting local config drive /var/lib/nova/instances/273e09ce-6fa6-4cb5-a2c6-0eab195d7242/disk.config because it was imported into RBD.#033[00m
Jan 31 02:44:17 np0005603609 kernel: tap4f95b428-b5: entered promiscuous mode
Jan 31 02:44:17 np0005603609 NetworkManager[49064]: <info>  [1769845457.8002] manager: (tap4f95b428-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Jan 31 02:44:17 np0005603609 systemd-udevd[241066]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:44:17 np0005603609 ovn_controller[130359]: 2026-01-31T07:44:17Z|00154|binding|INFO|Claiming lport 4f95b428-b5b8-4191-b6dc-794f22f27406 for this chassis.
Jan 31 02:44:17 np0005603609 ovn_controller[130359]: 2026-01-31T07:44:17Z|00155|binding|INFO|4f95b428-b5b8-4191-b6dc-794f22f27406: Claiming fa:16:3e:93:95:94 10.100.0.12
Jan 31 02:44:17 np0005603609 nova_compute[221550]: 2026-01-31 07:44:17.854 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:17 np0005603609 nova_compute[221550]: 2026-01-31 07:44:17.861 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:17 np0005603609 NetworkManager[49064]: <info>  [1769845457.8658] device (tap4f95b428-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:44:17 np0005603609 NetworkManager[49064]: <info>  [1769845457.8668] device (tap4f95b428-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.870 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:95:94 10.100.0.12'], port_security=['fa:16:3e:93:95:94 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '273e09ce-6fa6-4cb5-a2c6-0eab195d7242', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60244e92-1524-47f0-8207-05d0104afa47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f055dcb-9af1-4cf7-aa74-c819da93756e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70b36ac0-62cb-4e29-924b-93c5ad906bc9, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=4f95b428-b5b8-4191-b6dc-794f22f27406) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.871 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 4f95b428-b5b8-4191-b6dc-794f22f27406 in datapath 60244e92-1524-47f0-8207-05d0104afa47 bound to our chassis#033[00m
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.873 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60244e92-1524-47f0-8207-05d0104afa47#033[00m
Jan 31 02:44:17 np0005603609 systemd-machined[190912]: New machine qemu-25-instance-00000034.
Jan 31 02:44:17 np0005603609 ovn_controller[130359]: 2026-01-31T07:44:17Z|00156|binding|INFO|Setting lport 4f95b428-b5b8-4191-b6dc-794f22f27406 ovn-installed in OVS
Jan 31 02:44:17 np0005603609 ovn_controller[130359]: 2026-01-31T07:44:17Z|00157|binding|INFO|Setting lport 4f95b428-b5b8-4191-b6dc-794f22f27406 up in Southbound
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.879 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[92b08ee6-1441-471c-8d58-c1ab702f8abc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.880 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60244e92-11 in ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:44:17 np0005603609 nova_compute[221550]: 2026-01-31 07:44:17.880 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.882 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60244e92-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.882 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f13c9ead-d615-4da6-8cee-f7b035471962]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.883 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9e5ecd-4723-4c48-96dc-f70f7ed174ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:17 np0005603609 systemd[1]: Started Virtual Machine qemu-25-instance-00000034.
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.890 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[fef82335-ac1e-4696-ad89-51163a0210b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.898 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ccdb63a1-8239-429c-9433-4419f183e47e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.917 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[23b81c27-0905-4803-90a7-defaae99ce93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.922 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e4fc3edb-a315-4a72-b11d-6a0d5d5a2e98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:17 np0005603609 NetworkManager[49064]: <info>  [1769845457.9241] manager: (tap60244e92-10): new Veth device (/org/freedesktop/NetworkManager/Devices/79)
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.941 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[fd17dee7-c5cc-4021-bae2-24a2343a08fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.960 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[2398e266-5083-4bdc-9d60-3a2a3f198936]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:17 np0005603609 NetworkManager[49064]: <info>  [1769845457.9747] device (tap60244e92-10): carrier: link connected
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.979 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[02f66a50-2782-4b41-9f4b-f3cb7650ae6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.988 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[51dc4b73-8ff9-4c0a-b591-9a440d6a2c48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60244e92-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:59:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569767, 'reachable_time': 27237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241102, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:17.998 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1f61bff8-1afd-43d3-ac55-55c70aa39b10]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:59f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 569767, 'tstamp': 569767}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241103, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:18.010 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[364420b7-8b3e-4bb9-bfaa-2b4808950e20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60244e92-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:59:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569767, 'reachable_time': 27237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 241104, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:18.029 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f4647b7b-fbd6-4095-ab23-42c007900a03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:18.071 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f67866e2-eb80-4692-a362-889e5610a52c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:18.072 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60244e92-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:18.072 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:18.073 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60244e92-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.074 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:18 np0005603609 NetworkManager[49064]: <info>  [1769845458.0753] manager: (tap60244e92-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Jan 31 02:44:18 np0005603609 kernel: tap60244e92-10: entered promiscuous mode
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.076 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:18.078 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60244e92-10, col_values=(('external_ids', {'iface-id': '4e20d708-9f46-41f3-86c0-9ab3849f8392'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.078 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:18 np0005603609 ovn_controller[130359]: 2026-01-31T07:44:18Z|00158|binding|INFO|Releasing lport 4e20d708-9f46-41f3-86c0-9ab3849f8392 from this chassis (sb_readonly=0)
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.084 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:18.086 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:18.087 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[56fc7d80-9c24-4f5c-9b72-a894e188e04c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:18.088 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-60244e92-1524-47f0-8207-05d0104afa47
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 60244e92-1524-47f0-8207-05d0104afa47
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:44:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:18.088 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'env', 'PROCESS_TAG=haproxy-60244e92-1524-47f0-8207-05d0104afa47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60244e92-1524-47f0-8207-05d0104afa47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:44:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:18.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.424 221554 DEBUG nova.compute.manager [req-3067b057-7a25-44b1-acae-2dc7e72d4d51 req-b8853c08-0116-4a72-ad49-73b3d40e2068 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Received event network-vif-plugged-4f95b428-b5b8-4191-b6dc-794f22f27406 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.425 221554 DEBUG oslo_concurrency.lockutils [req-3067b057-7a25-44b1-acae-2dc7e72d4d51 req-b8853c08-0116-4a72-ad49-73b3d40e2068 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.425 221554 DEBUG oslo_concurrency.lockutils [req-3067b057-7a25-44b1-acae-2dc7e72d4d51 req-b8853c08-0116-4a72-ad49-73b3d40e2068 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.426 221554 DEBUG oslo_concurrency.lockutils [req-3067b057-7a25-44b1-acae-2dc7e72d4d51 req-b8853c08-0116-4a72-ad49-73b3d40e2068 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.426 221554 DEBUG nova.compute.manager [req-3067b057-7a25-44b1-acae-2dc7e72d4d51 req-b8853c08-0116-4a72-ad49-73b3d40e2068 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Processing event network-vif-plugged-4f95b428-b5b8-4191-b6dc-794f22f27406 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:44:18 np0005603609 podman[241170]: 2026-01-31 07:44:18.380861683 +0000 UTC m=+0.020788979 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.607 221554 DEBUG nova.compute.manager [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.607 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845458.6061814, 273e09ce-6fa6-4cb5-a2c6-0eab195d7242 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.608 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] VM Started (Lifecycle Event)#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.611 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.614 221554 INFO nova.virt.libvirt.driver [-] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Instance spawned successfully.#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.615 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.629 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.634 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.644 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.644 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.645 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.645 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.645 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.646 221554 DEBUG nova.virt.libvirt.driver [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.667 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.667 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845458.6065595, 273e09ce-6fa6-4cb5-a2c6-0eab195d7242 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.668 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.705 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.710 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845458.6100447, 273e09ce-6fa6-4cb5-a2c6-0eab195d7242 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.710 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.720 221554 INFO nova.compute.manager [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Took 13.24 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.720 221554 DEBUG nova.compute.manager [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.750 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.753 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.793 221554 INFO nova.compute.manager [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Took 15.15 seconds to build instance.#033[00m
Jan 31 02:44:18 np0005603609 nova_compute[221550]: 2026-01-31 07:44:18.807 221554 DEBUG oslo_concurrency.lockutils [None req-a5e52c22-5f47-4cce-a456-8d710b930576 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:18.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:19 np0005603609 podman[241170]: 2026-01-31 07:44:19.032469618 +0000 UTC m=+0.672396894 container create 2a97e801cebe8f1b0bfa41f351185c82de618c296bd6c0dbb96d8a7c4a1438d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 31 02:44:19 np0005603609 systemd[1]: Started libpod-conmon-2a97e801cebe8f1b0bfa41f351185c82de618c296bd6c0dbb96d8a7c4a1438d3.scope.
Jan 31 02:44:19 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:44:19 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24360e980791730fe63f1070c24db741449172903b94c5e50b1d6b52301b962f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:44:19 np0005603609 podman[241170]: 2026-01-31 07:44:19.318213417 +0000 UTC m=+0.958140783 container init 2a97e801cebe8f1b0bfa41f351185c82de618c296bd6c0dbb96d8a7c4a1438d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:44:19 np0005603609 podman[241170]: 2026-01-31 07:44:19.328690117 +0000 UTC m=+0.968617383 container start 2a97e801cebe8f1b0bfa41f351185c82de618c296bd6c0dbb96d8a7c4a1438d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 02:44:19 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[241191]: [NOTICE]   (241195) : New worker (241197) forked
Jan 31 02:44:19 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[241191]: [NOTICE]   (241195) : Loading success.
Jan 31 02:44:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:20.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:20 np0005603609 nova_compute[221550]: 2026-01-31 07:44:20.597 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:20 np0005603609 nova_compute[221550]: 2026-01-31 07:44:20.756 221554 DEBUG nova.compute.manager [req-46fe9bf6-7f17-4883-b0fa-2126992212cf req-3525a061-6026-4fc1-acdf-5bf3cbc3e99b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Received event network-vif-plugged-4f95b428-b5b8-4191-b6dc-794f22f27406 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:44:20 np0005603609 nova_compute[221550]: 2026-01-31 07:44:20.757 221554 DEBUG oslo_concurrency.lockutils [req-46fe9bf6-7f17-4883-b0fa-2126992212cf req-3525a061-6026-4fc1-acdf-5bf3cbc3e99b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:20 np0005603609 nova_compute[221550]: 2026-01-31 07:44:20.758 221554 DEBUG oslo_concurrency.lockutils [req-46fe9bf6-7f17-4883-b0fa-2126992212cf req-3525a061-6026-4fc1-acdf-5bf3cbc3e99b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:20 np0005603609 nova_compute[221550]: 2026-01-31 07:44:20.758 221554 DEBUG oslo_concurrency.lockutils [req-46fe9bf6-7f17-4883-b0fa-2126992212cf req-3525a061-6026-4fc1-acdf-5bf3cbc3e99b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:20 np0005603609 nova_compute[221550]: 2026-01-31 07:44:20.758 221554 DEBUG nova.compute.manager [req-46fe9bf6-7f17-4883-b0fa-2126992212cf req-3525a061-6026-4fc1-acdf-5bf3cbc3e99b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] No waiting events found dispatching network-vif-plugged-4f95b428-b5b8-4191-b6dc-794f22f27406 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:44:20 np0005603609 nova_compute[221550]: 2026-01-31 07:44:20.758 221554 WARNING nova.compute.manager [req-46fe9bf6-7f17-4883-b0fa-2126992212cf req-3525a061-6026-4fc1-acdf-5bf3cbc3e99b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Received unexpected event network-vif-plugged-4f95b428-b5b8-4191-b6dc-794f22f27406 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:44:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:44:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:20.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:44:20 np0005603609 podman[241231]: 2026-01-31 07:44:20.871039261 +0000 UTC m=+0.048798979 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 02:44:20 np0005603609 podman[241230]: 2026-01-31 07:44:20.900907736 +0000 UTC m=+0.077110887 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.085 221554 DEBUG oslo_concurrency.lockutils [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.085 221554 DEBUG oslo_concurrency.lockutils [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.085 221554 DEBUG oslo_concurrency.lockutils [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.085 221554 DEBUG oslo_concurrency.lockutils [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.086 221554 DEBUG oslo_concurrency.lockutils [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.087 221554 INFO nova.compute.manager [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Terminating instance#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.087 221554 DEBUG nova.compute.manager [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:44:21 np0005603609 kernel: tap4f95b428-b5 (unregistering): left promiscuous mode
Jan 31 02:44:21 np0005603609 NetworkManager[49064]: <info>  [1769845461.1907] device (tap4f95b428-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.231 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:21 np0005603609 ovn_controller[130359]: 2026-01-31T07:44:21Z|00159|binding|INFO|Releasing lport 4f95b428-b5b8-4191-b6dc-794f22f27406 from this chassis (sb_readonly=0)
Jan 31 02:44:21 np0005603609 ovn_controller[130359]: 2026-01-31T07:44:21Z|00160|binding|INFO|Setting lport 4f95b428-b5b8-4191-b6dc-794f22f27406 down in Southbound
Jan 31 02:44:21 np0005603609 ovn_controller[130359]: 2026-01-31T07:44:21Z|00161|binding|INFO|Removing iface tap4f95b428-b5 ovn-installed in OVS
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.233 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.238 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:21.244 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:95:94 10.100.0.12'], port_security=['fa:16:3e:93:95:94 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '273e09ce-6fa6-4cb5-a2c6-0eab195d7242', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60244e92-1524-47f0-8207-05d0104afa47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f055dcb-9af1-4cf7-aa74-c819da93756e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70b36ac0-62cb-4e29-924b-93c5ad906bc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=4f95b428-b5b8-4191-b6dc-794f22f27406) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:44:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:21.245 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 4f95b428-b5b8-4191-b6dc-794f22f27406 in datapath 60244e92-1524-47f0-8207-05d0104afa47 unbound from our chassis#033[00m
Jan 31 02:44:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:21.246 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60244e92-1524-47f0-8207-05d0104afa47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:44:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:21.247 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3b411577-15a0-414c-9be5-e1c9774293f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:21.248 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 namespace which is not needed anymore#033[00m
Jan 31 02:44:21 np0005603609 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000034.scope: Deactivated successfully.
Jan 31 02:44:21 np0005603609 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000034.scope: Consumed 3.099s CPU time.
Jan 31 02:44:21 np0005603609 systemd-machined[190912]: Machine qemu-25-instance-00000034 terminated.
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.318 221554 INFO nova.virt.libvirt.driver [-] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Instance destroyed successfully.#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.319 221554 DEBUG nova.objects.instance [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'resources' on Instance uuid 273e09ce-6fa6-4cb5-a2c6-0eab195d7242 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.347 221554 DEBUG nova.virt.libvirt.vif [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:43:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-742290636',display_name='tempest-DeleteServersTestJSON-server-742290636',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-742290636',id=52,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:44:18Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-oqbid7xd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:44:18Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=273e09ce-6fa6-4cb5-a2c6-0eab195d7242,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f95b428-b5b8-4191-b6dc-794f22f27406", "address": "fa:16:3e:93:95:94", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f95b428-b5", "ovs_interfaceid": "4f95b428-b5b8-4191-b6dc-794f22f27406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.347 221554 DEBUG nova.network.os_vif_util [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "4f95b428-b5b8-4191-b6dc-794f22f27406", "address": "fa:16:3e:93:95:94", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f95b428-b5", "ovs_interfaceid": "4f95b428-b5b8-4191-b6dc-794f22f27406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.348 221554 DEBUG nova.network.os_vif_util [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:95:94,bridge_name='br-int',has_traffic_filtering=True,id=4f95b428-b5b8-4191-b6dc-794f22f27406,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f95b428-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.348 221554 DEBUG os_vif [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:95:94,bridge_name='br-int',has_traffic_filtering=True,id=4f95b428-b5b8-4191-b6dc-794f22f27406,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f95b428-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.350 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.350 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f95b428-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.352 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.353 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.355 221554 INFO os_vif [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:95:94,bridge_name='br-int',has_traffic_filtering=True,id=4f95b428-b5b8-4191-b6dc-794f22f27406,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f95b428-b5')#033[00m
Jan 31 02:44:21 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[241191]: [NOTICE]   (241195) : haproxy version is 2.8.14-c23fe91
Jan 31 02:44:21 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[241191]: [NOTICE]   (241195) : path to executable is /usr/sbin/haproxy
Jan 31 02:44:21 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[241191]: [WARNING]  (241195) : Exiting Master process...
Jan 31 02:44:21 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[241191]: [WARNING]  (241195) : Exiting Master process...
Jan 31 02:44:21 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[241191]: [ALERT]    (241195) : Current worker (241197) exited with code 143 (Terminated)
Jan 31 02:44:21 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[241191]: [WARNING]  (241195) : All workers exited. Exiting... (0)
Jan 31 02:44:21 np0005603609 systemd[1]: libpod-2a97e801cebe8f1b0bfa41f351185c82de618c296bd6c0dbb96d8a7c4a1438d3.scope: Deactivated successfully.
Jan 31 02:44:21 np0005603609 podman[241330]: 2026-01-31 07:44:21.378597248 +0000 UTC m=+0.043302357 container died 2a97e801cebe8f1b0bfa41f351185c82de618c296bd6c0dbb96d8a7c4a1438d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:44:21 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a97e801cebe8f1b0bfa41f351185c82de618c296bd6c0dbb96d8a7c4a1438d3-userdata-shm.mount: Deactivated successfully.
Jan 31 02:44:21 np0005603609 systemd[1]: var-lib-containers-storage-overlay-24360e980791730fe63f1070c24db741449172903b94c5e50b1d6b52301b962f-merged.mount: Deactivated successfully.
Jan 31 02:44:21 np0005603609 podman[241330]: 2026-01-31 07:44:21.411858904 +0000 UTC m=+0.076564023 container cleanup 2a97e801cebe8f1b0bfa41f351185c82de618c296bd6c0dbb96d8a7c4a1438d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 02:44:21 np0005603609 systemd[1]: libpod-conmon-2a97e801cebe8f1b0bfa41f351185c82de618c296bd6c0dbb96d8a7c4a1438d3.scope: Deactivated successfully.
Jan 31 02:44:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:21 np0005603609 podman[241377]: 2026-01-31 07:44:21.464270788 +0000 UTC m=+0.037256572 container remove 2a97e801cebe8f1b0bfa41f351185c82de618c296bd6c0dbb96d8a7c4a1438d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 02:44:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:21.467 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[626ee604-6ace-4432-9b50-39fd46af3afe]: (4, ('Sat Jan 31 07:44:21 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 (2a97e801cebe8f1b0bfa41f351185c82de618c296bd6c0dbb96d8a7c4a1438d3)\n2a97e801cebe8f1b0bfa41f351185c82de618c296bd6c0dbb96d8a7c4a1438d3\nSat Jan 31 07:44:21 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 (2a97e801cebe8f1b0bfa41f351185c82de618c296bd6c0dbb96d8a7c4a1438d3)\n2a97e801cebe8f1b0bfa41f351185c82de618c296bd6c0dbb96d8a7c4a1438d3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:21.469 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5a922d34-e5dc-4503-a2c3-92faeb3030d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:21.470 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60244e92-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.471 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:21 np0005603609 kernel: tap60244e92-10: left promiscuous mode
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.477 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:21.480 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bff6307d-d178-4a04-901f-e5d44f5fe3c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:21.490 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ff2637d0-8596-4072-a863-c040233c0406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:21.491 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c792eb-14a2-4e4e-9845-a7260e3d1849]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:21.503 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d29c7ced-ef17-485e-8676-81d2fe13ccdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569761, 'reachable_time': 19043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241392, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:21.505 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:44:21 np0005603609 systemd[1]: run-netns-ovnmeta\x2d60244e92\x2d1524\x2d47f0\x2d8207\x2d05d0104afa47.mount: Deactivated successfully.
Jan 31 02:44:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:21.505 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[9600235c-f1d5-43c5-8758-2a263d852e05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:44:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:44:21 np0005603609 nova_compute[221550]: 2026-01-31 07:44:21.903 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:22 np0005603609 nova_compute[221550]: 2026-01-31 07:44:22.116 221554 INFO nova.virt.libvirt.driver [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Deleting instance files /var/lib/nova/instances/273e09ce-6fa6-4cb5-a2c6-0eab195d7242_del#033[00m
Jan 31 02:44:22 np0005603609 nova_compute[221550]: 2026-01-31 07:44:22.117 221554 INFO nova.virt.libvirt.driver [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Deletion of /var/lib/nova/instances/273e09ce-6fa6-4cb5-a2c6-0eab195d7242_del complete#033[00m
Jan 31 02:44:22 np0005603609 nova_compute[221550]: 2026-01-31 07:44:22.247 221554 INFO nova.compute.manager [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Took 1.16 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:44:22 np0005603609 nova_compute[221550]: 2026-01-31 07:44:22.248 221554 DEBUG oslo.service.loopingcall [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:44:22 np0005603609 nova_compute[221550]: 2026-01-31 07:44:22.248 221554 DEBUG nova.compute.manager [-] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:44:22 np0005603609 nova_compute[221550]: 2026-01-31 07:44:22.248 221554 DEBUG nova.network.neutron [-] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:44:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:22.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:22.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:23 np0005603609 nova_compute[221550]: 2026-01-31 07:44:23.049 221554 DEBUG nova.compute.manager [req-d3878426-bbf3-4480-88f3-a7b6f30c122c req-143ddaba-5ad8-4c1e-a2cc-b435c7d0867f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Received event network-vif-unplugged-4f95b428-b5b8-4191-b6dc-794f22f27406 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:44:23 np0005603609 nova_compute[221550]: 2026-01-31 07:44:23.050 221554 DEBUG oslo_concurrency.lockutils [req-d3878426-bbf3-4480-88f3-a7b6f30c122c req-143ddaba-5ad8-4c1e-a2cc-b435c7d0867f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:23 np0005603609 nova_compute[221550]: 2026-01-31 07:44:23.050 221554 DEBUG oslo_concurrency.lockutils [req-d3878426-bbf3-4480-88f3-a7b6f30c122c req-143ddaba-5ad8-4c1e-a2cc-b435c7d0867f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:23 np0005603609 nova_compute[221550]: 2026-01-31 07:44:23.051 221554 DEBUG oslo_concurrency.lockutils [req-d3878426-bbf3-4480-88f3-a7b6f30c122c req-143ddaba-5ad8-4c1e-a2cc-b435c7d0867f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:23 np0005603609 nova_compute[221550]: 2026-01-31 07:44:23.051 221554 DEBUG nova.compute.manager [req-d3878426-bbf3-4480-88f3-a7b6f30c122c req-143ddaba-5ad8-4c1e-a2cc-b435c7d0867f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] No waiting events found dispatching network-vif-unplugged-4f95b428-b5b8-4191-b6dc-794f22f27406 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:44:23 np0005603609 nova_compute[221550]: 2026-01-31 07:44:23.051 221554 DEBUG nova.compute.manager [req-d3878426-bbf3-4480-88f3-a7b6f30c122c req-143ddaba-5ad8-4c1e-a2cc-b435c7d0867f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Received event network-vif-unplugged-4f95b428-b5b8-4191-b6dc-794f22f27406 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:44:23 np0005603609 nova_compute[221550]: 2026-01-31 07:44:23.051 221554 DEBUG nova.compute.manager [req-d3878426-bbf3-4480-88f3-a7b6f30c122c req-143ddaba-5ad8-4c1e-a2cc-b435c7d0867f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Received event network-vif-plugged-4f95b428-b5b8-4191-b6dc-794f22f27406 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:44:23 np0005603609 nova_compute[221550]: 2026-01-31 07:44:23.051 221554 DEBUG oslo_concurrency.lockutils [req-d3878426-bbf3-4480-88f3-a7b6f30c122c req-143ddaba-5ad8-4c1e-a2cc-b435c7d0867f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:23 np0005603609 nova_compute[221550]: 2026-01-31 07:44:23.051 221554 DEBUG oslo_concurrency.lockutils [req-d3878426-bbf3-4480-88f3-a7b6f30c122c req-143ddaba-5ad8-4c1e-a2cc-b435c7d0867f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:23 np0005603609 nova_compute[221550]: 2026-01-31 07:44:23.052 221554 DEBUG oslo_concurrency.lockutils [req-d3878426-bbf3-4480-88f3-a7b6f30c122c req-143ddaba-5ad8-4c1e-a2cc-b435c7d0867f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:23 np0005603609 nova_compute[221550]: 2026-01-31 07:44:23.052 221554 DEBUG nova.compute.manager [req-d3878426-bbf3-4480-88f3-a7b6f30c122c req-143ddaba-5ad8-4c1e-a2cc-b435c7d0867f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] No waiting events found dispatching network-vif-plugged-4f95b428-b5b8-4191-b6dc-794f22f27406 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:44:23 np0005603609 nova_compute[221550]: 2026-01-31 07:44:23.052 221554 WARNING nova.compute.manager [req-d3878426-bbf3-4480-88f3-a7b6f30c122c req-143ddaba-5ad8-4c1e-a2cc-b435c7d0867f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Received unexpected event network-vif-plugged-4f95b428-b5b8-4191-b6dc-794f22f27406 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:44:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:24.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:24 np0005603609 nova_compute[221550]: 2026-01-31 07:44:24.812 221554 DEBUG nova.network.neutron [-] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:44:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:24.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:24 np0005603609 nova_compute[221550]: 2026-01-31 07:44:24.883 221554 INFO nova.compute.manager [-] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Took 2.63 seconds to deallocate network for instance.#033[00m
Jan 31 02:44:25 np0005603609 nova_compute[221550]: 2026-01-31 07:44:25.004 221554 DEBUG oslo_concurrency.lockutils [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:25 np0005603609 nova_compute[221550]: 2026-01-31 07:44:25.004 221554 DEBUG oslo_concurrency.lockutils [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:25 np0005603609 nova_compute[221550]: 2026-01-31 07:44:25.077 221554 DEBUG nova.compute.manager [req-78c06598-db22-4d42-8dde-9ff51e869cb9 req-852bd599-0886-4416-b6da-4bf272f73757 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Received event network-vif-deleted-4f95b428-b5b8-4191-b6dc-794f22f27406 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:44:25 np0005603609 nova_compute[221550]: 2026-01-31 07:44:25.116 221554 DEBUG oslo_concurrency.processutils [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:44:25 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2346548695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:44:25 np0005603609 nova_compute[221550]: 2026-01-31 07:44:25.543 221554 DEBUG oslo_concurrency.processutils [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:25 np0005603609 nova_compute[221550]: 2026-01-31 07:44:25.550 221554 DEBUG nova.compute.provider_tree [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:44:25 np0005603609 nova_compute[221550]: 2026-01-31 07:44:25.694 221554 DEBUG nova.scheduler.client.report [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:44:25 np0005603609 nova_compute[221550]: 2026-01-31 07:44:25.775 221554 DEBUG oslo_concurrency.lockutils [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:25 np0005603609 nova_compute[221550]: 2026-01-31 07:44:25.986 221554 INFO nova.scheduler.client.report [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Deleted allocations for instance 273e09ce-6fa6-4cb5-a2c6-0eab195d7242#033[00m
Jan 31 02:44:26 np0005603609 nova_compute[221550]: 2026-01-31 07:44:26.158 221554 DEBUG oslo_concurrency.lockutils [None req-504c1ca3-5d1a-48cf-a1de-55c756c94f25 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "273e09ce-6fa6-4cb5-a2c6-0eab195d7242" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:44:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:26.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:44:26 np0005603609 nova_compute[221550]: 2026-01-31 07:44:26.355 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:26.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:26 np0005603609 nova_compute[221550]: 2026-01-31 07:44:26.916 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:28.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:28.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:30.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:30.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:31 np0005603609 nova_compute[221550]: 2026-01-31 07:44:31.358 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:31 np0005603609 nova_compute[221550]: 2026-01-31 07:44:31.919 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:32.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:44:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:32.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:44:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:34.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:44:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:34.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:44:36 np0005603609 nova_compute[221550]: 2026-01-31 07:44:36.316 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845461.3148272, 273e09ce-6fa6-4cb5-a2c6-0eab195d7242 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:44:36 np0005603609 nova_compute[221550]: 2026-01-31 07:44:36.316 221554 INFO nova.compute.manager [-] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:44:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:36.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:36 np0005603609 nova_compute[221550]: 2026-01-31 07:44:36.361 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:36.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e204 e204: 3 total, 3 up, 3 in
Jan 31 02:44:36 np0005603609 nova_compute[221550]: 2026-01-31 07:44:36.920 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e205 e205: 3 total, 3 up, 3 in
Jan 31 02:44:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:38.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:38 np0005603609 nova_compute[221550]: 2026-01-31 07:44:38.639 221554 DEBUG nova.compute.manager [None req-a4f8b144-20e7-49e7-8fef-48c470091dd8 - - - - - -] [instance: 273e09ce-6fa6-4cb5-a2c6-0eab195d7242] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:44:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:38.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:39 np0005603609 nova_compute[221550]: 2026-01-31 07:44:39.642 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "3087c480-1e28-4efd-8f41-70490a9295ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:39 np0005603609 nova_compute[221550]: 2026-01-31 07:44:39.643 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e206 e206: 3 total, 3 up, 3 in
Jan 31 02:44:39 np0005603609 nova_compute[221550]: 2026-01-31 07:44:39.783 221554 DEBUG nova.compute.manager [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:44:40 np0005603609 nova_compute[221550]: 2026-01-31 07:44:40.122 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:40 np0005603609 nova_compute[221550]: 2026-01-31 07:44:40.123 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:40 np0005603609 nova_compute[221550]: 2026-01-31 07:44:40.129 221554 DEBUG nova.virt.hardware [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:44:40 np0005603609 nova_compute[221550]: 2026-01-31 07:44:40.129 221554 INFO nova.compute.claims [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:44:40 np0005603609 nova_compute[221550]: 2026-01-31 07:44:40.328 221554 DEBUG oslo_concurrency.processutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:40.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:44:40 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1775213986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:44:40 np0005603609 nova_compute[221550]: 2026-01-31 07:44:40.774 221554 DEBUG oslo_concurrency.processutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:40 np0005603609 nova_compute[221550]: 2026-01-31 07:44:40.779 221554 DEBUG nova.compute.provider_tree [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:44:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e207 e207: 3 total, 3 up, 3 in
Jan 31 02:44:40 np0005603609 nova_compute[221550]: 2026-01-31 07:44:40.824 221554 DEBUG nova.scheduler.client.report [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:44:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:40.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.032 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.033 221554 DEBUG nova.compute.manager [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.153 221554 DEBUG nova.compute.manager [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.154 221554 DEBUG nova.network.neutron [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.208 221554 INFO nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.258 221554 DEBUG nova.compute.manager [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.364 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.618 221554 DEBUG nova.compute.manager [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.619 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.619 221554 INFO nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Creating image(s)#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.652 221554 DEBUG nova.storage.rbd_utils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 3087c480-1e28-4efd-8f41-70490a9295ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.680 221554 DEBUG nova.storage.rbd_utils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 3087c480-1e28-4efd-8f41-70490a9295ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.709 221554 DEBUG nova.storage.rbd_utils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 3087c480-1e28-4efd-8f41-70490a9295ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.713 221554 DEBUG oslo_concurrency.processutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.761 221554 DEBUG oslo_concurrency.processutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.762 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.762 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.763 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.787 221554 DEBUG nova.storage.rbd_utils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 3087c480-1e28-4efd-8f41-70490a9295ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.791 221554 DEBUG oslo_concurrency.processutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 3087c480-1e28-4efd-8f41-70490a9295ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:41 np0005603609 nova_compute[221550]: 2026-01-31 07:44:41.954 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:42 np0005603609 nova_compute[221550]: 2026-01-31 07:44:42.057 221554 DEBUG nova.policy [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97abab8eb79247cd89fb2ebff295b890', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:44:42 np0005603609 nova_compute[221550]: 2026-01-31 07:44:42.076 221554 DEBUG oslo_concurrency.processutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 3087c480-1e28-4efd-8f41-70490a9295ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:42 np0005603609 nova_compute[221550]: 2026-01-31 07:44:42.145 221554 DEBUG nova.storage.rbd_utils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] resizing rbd image 3087c480-1e28-4efd-8f41-70490a9295ef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:44:42 np0005603609 nova_compute[221550]: 2026-01-31 07:44:42.258 221554 DEBUG nova.objects.instance [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'migration_context' on Instance uuid 3087c480-1e28-4efd-8f41-70490a9295ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:44:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:42.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:42.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:44 np0005603609 nova_compute[221550]: 2026-01-31 07:44:44.202 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:44:44 np0005603609 nova_compute[221550]: 2026-01-31 07:44:44.202 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Ensure instance console log exists: /var/lib/nova/instances/3087c480-1e28-4efd-8f41-70490a9295ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:44:44 np0005603609 nova_compute[221550]: 2026-01-31 07:44:44.203 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:44 np0005603609 nova_compute[221550]: 2026-01-31 07:44:44.204 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:44 np0005603609 nova_compute[221550]: 2026-01-31 07:44:44.204 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:44.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:44:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:44.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:44:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e208 e208: 3 total, 3 up, 3 in
Jan 31 02:44:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:46.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:46 np0005603609 nova_compute[221550]: 2026-01-31 07:44:46.367 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:46 np0005603609 nova_compute[221550]: 2026-01-31 07:44:46.521 221554 DEBUG nova.network.neutron [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Successfully created port: 4e76293d-e306-4fb0-a2e3-d93b835dc120 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:44:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:46.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:46 np0005603609 nova_compute[221550]: 2026-01-31 07:44:46.955 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:48.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:48.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:49 np0005603609 nova_compute[221550]: 2026-01-31 07:44:49.445 221554 DEBUG nova.network.neutron [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Successfully updated port: 4e76293d-e306-4fb0-a2e3-d93b835dc120 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:44:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:50.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:50.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e209 e209: 3 total, 3 up, 3 in
Jan 31 02:44:51 np0005603609 nova_compute[221550]: 2026-01-31 07:44:51.082 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "refresh_cache-3087c480-1e28-4efd-8f41-70490a9295ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:44:51 np0005603609 nova_compute[221550]: 2026-01-31 07:44:51.083 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquired lock "refresh_cache-3087c480-1e28-4efd-8f41-70490a9295ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:44:51 np0005603609 nova_compute[221550]: 2026-01-31 07:44:51.083 221554 DEBUG nova.network.neutron [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:44:51 np0005603609 podman[241610]: 2026-01-31 07:44:51.181929517 +0000 UTC m=+0.054909855 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Jan 31 02:44:51 np0005603609 podman[241609]: 2026-01-31 07:44:51.233137783 +0000 UTC m=+0.108549190 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:44:51 np0005603609 nova_compute[221550]: 2026-01-31 07:44:51.369 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:51 np0005603609 nova_compute[221550]: 2026-01-31 07:44:51.816 221554 DEBUG nova.network.neutron [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:44:51 np0005603609 nova_compute[221550]: 2026-01-31 07:44:51.878 221554 DEBUG nova.compute.manager [req-2efb4fb1-1616-415e-809e-539405fe656c req-35dcf4b0-86c9-4e68-90be-c16a16ccf483 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Received event network-changed-4e76293d-e306-4fb0-a2e3-d93b835dc120 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:44:51 np0005603609 nova_compute[221550]: 2026-01-31 07:44:51.878 221554 DEBUG nova.compute.manager [req-2efb4fb1-1616-415e-809e-539405fe656c req-35dcf4b0-86c9-4e68-90be-c16a16ccf483 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Refreshing instance network info cache due to event network-changed-4e76293d-e306-4fb0-a2e3-d93b835dc120. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:44:51 np0005603609 nova_compute[221550]: 2026-01-31 07:44:51.878 221554 DEBUG oslo_concurrency.lockutils [req-2efb4fb1-1616-415e-809e-539405fe656c req-35dcf4b0-86c9-4e68-90be-c16a16ccf483 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-3087c480-1e28-4efd-8f41-70490a9295ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:44:51 np0005603609 nova_compute[221550]: 2026-01-31 07:44:51.957 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:52.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:52 np0005603609 nova_compute[221550]: 2026-01-31 07:44:52.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:52.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.329 221554 DEBUG nova.network.neutron [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Updating instance_info_cache with network_info: [{"id": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "address": "fa:16:3e:13:ba:10", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e76293d-e3", "ovs_interfaceid": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:44:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:54.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.394 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Releasing lock "refresh_cache-3087c480-1e28-4efd-8f41-70490a9295ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.394 221554 DEBUG nova.compute.manager [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Instance network_info: |[{"id": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "address": "fa:16:3e:13:ba:10", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e76293d-e3", "ovs_interfaceid": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.395 221554 DEBUG oslo_concurrency.lockutils [req-2efb4fb1-1616-415e-809e-539405fe656c req-35dcf4b0-86c9-4e68-90be-c16a16ccf483 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-3087c480-1e28-4efd-8f41-70490a9295ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.395 221554 DEBUG nova.network.neutron [req-2efb4fb1-1616-415e-809e-539405fe656c req-35dcf4b0-86c9-4e68-90be-c16a16ccf483 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Refreshing network info cache for port 4e76293d-e306-4fb0-a2e3-d93b835dc120 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.398 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Start _get_guest_xml network_info=[{"id": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "address": "fa:16:3e:13:ba:10", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e76293d-e3", "ovs_interfaceid": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.404 221554 WARNING nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.411 221554 DEBUG nova.virt.libvirt.host [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.412 221554 DEBUG nova.virt.libvirt.host [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.416 221554 DEBUG nova.virt.libvirt.host [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.417 221554 DEBUG nova.virt.libvirt.host [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.418 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.419 221554 DEBUG nova.virt.hardware [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.419 221554 DEBUG nova.virt.hardware [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.419 221554 DEBUG nova.virt.hardware [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.420 221554 DEBUG nova.virt.hardware [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.420 221554 DEBUG nova.virt.hardware [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.420 221554 DEBUG nova.virt.hardware [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.420 221554 DEBUG nova.virt.hardware [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.421 221554 DEBUG nova.virt.hardware [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.421 221554 DEBUG nova.virt.hardware [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.421 221554 DEBUG nova.virt.hardware [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.422 221554 DEBUG nova.virt.hardware [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.425 221554 DEBUG oslo_concurrency.processutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:44:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1208867897' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.842 221554 DEBUG oslo_concurrency.processutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.869 221554 DEBUG nova.storage.rbd_utils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 3087c480-1e28-4efd-8f41-70490a9295ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:54 np0005603609 nova_compute[221550]: 2026-01-31 07:44:54.873 221554 DEBUG oslo_concurrency.processutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:44:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:54.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:44:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:44:55 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/884181136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.343 221554 DEBUG oslo_concurrency.processutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.344 221554 DEBUG nova.virt.libvirt.vif [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:44:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-915379133',display_name='tempest-DeleteServersTestJSON-server-915379133',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-915379133',id=55,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-yiqpc2tx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-
2031597701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:44:41Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=3087c480-1e28-4efd-8f41-70490a9295ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "address": "fa:16:3e:13:ba:10", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e76293d-e3", "ovs_interfaceid": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.345 221554 DEBUG nova.network.os_vif_util [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "address": "fa:16:3e:13:ba:10", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e76293d-e3", "ovs_interfaceid": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.345 221554 DEBUG nova.network.os_vif_util [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ba:10,bridge_name='br-int',has_traffic_filtering=True,id=4e76293d-e306-4fb0-a2e3-d93b835dc120,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e76293d-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.346 221554 DEBUG nova.objects.instance [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3087c480-1e28-4efd-8f41-70490a9295ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.532 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  <uuid>3087c480-1e28-4efd-8f41-70490a9295ef</uuid>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  <name>instance-00000037</name>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <nova:name>tempest-DeleteServersTestJSON-server-915379133</nova:name>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:44:54</nova:creationTime>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        <nova:user uuid="97abab8eb79247cd89fb2ebff295b890">tempest-DeleteServersTestJSON-2031597701-project-member</nova:user>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        <nova:project uuid="f299640bb1f64e5fa12b23955e5a2127">tempest-DeleteServersTestJSON-2031597701</nova:project>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        <nova:port uuid="4e76293d-e306-4fb0-a2e3-d93b835dc120">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <entry name="serial">3087c480-1e28-4efd-8f41-70490a9295ef</entry>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <entry name="uuid">3087c480-1e28-4efd-8f41-70490a9295ef</entry>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/3087c480-1e28-4efd-8f41-70490a9295ef_disk">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/3087c480-1e28-4efd-8f41-70490a9295ef_disk.config">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:13:ba:10"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <target dev="tap4e76293d-e3"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/3087c480-1e28-4efd-8f41-70490a9295ef/console.log" append="off"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:44:55 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:44:55 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:44:55 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:44:55 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.534 221554 DEBUG nova.compute.manager [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Preparing to wait for external event network-vif-plugged-4e76293d-e306-4fb0-a2e3-d93b835dc120 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.534 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.535 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.535 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.536 221554 DEBUG nova.virt.libvirt.vif [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:44:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-915379133',display_name='tempest-DeleteServersTestJSON-server-915379133',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-915379133',id=55,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-yiqpc2tx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:44:41Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=3087c480-1e28-4efd-8f41-70490a9295ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "address": "fa:16:3e:13:ba:10", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e76293d-e3", "ovs_interfaceid": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.537 221554 DEBUG nova.network.os_vif_util [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "address": "fa:16:3e:13:ba:10", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e76293d-e3", "ovs_interfaceid": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.538 221554 DEBUG nova.network.os_vif_util [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ba:10,bridge_name='br-int',has_traffic_filtering=True,id=4e76293d-e306-4fb0-a2e3-d93b835dc120,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e76293d-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.538 221554 DEBUG os_vif [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ba:10,bridge_name='br-int',has_traffic_filtering=True,id=4e76293d-e306-4fb0-a2e3-d93b835dc120,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e76293d-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.539 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.540 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.541 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.545 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.545 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e76293d-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.546 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4e76293d-e3, col_values=(('external_ids', {'iface-id': '4e76293d-e306-4fb0-a2e3-d93b835dc120', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:ba:10', 'vm-uuid': '3087c480-1e28-4efd-8f41-70490a9295ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:55 np0005603609 NetworkManager[49064]: <info>  [1769845495.5728] manager: (tap4e76293d-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.572 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.575 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.577 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.578 221554 INFO os_vif [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ba:10,bridge_name='br-int',has_traffic_filtering=True,id=4e76293d-e306-4fb0-a2e3-d93b835dc120,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e76293d-e3')#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.749 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.750 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.750 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No VIF found with MAC fa:16:3e:13:ba:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.750 221554 INFO nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Using config drive#033[00m
Jan 31 02:44:55 np0005603609 nova_compute[221550]: 2026-01-31 07:44:55.776 221554 DEBUG nova.storage.rbd_utils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 3087c480-1e28-4efd-8f41-70490a9295ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:56.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:56.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:56 np0005603609 nova_compute[221550]: 2026-01-31 07:44:56.958 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:57 np0005603609 nova_compute[221550]: 2026-01-31 07:44:57.707 221554 INFO nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Creating config drive at /var/lib/nova/instances/3087c480-1e28-4efd-8f41-70490a9295ef/disk.config#033[00m
Jan 31 02:44:57 np0005603609 nova_compute[221550]: 2026-01-31 07:44:57.711 221554 DEBUG oslo_concurrency.processutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3087c480-1e28-4efd-8f41-70490a9295ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpz2ezyrhp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:57 np0005603609 nova_compute[221550]: 2026-01-31 07:44:57.846 221554 DEBUG oslo_concurrency.processutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3087c480-1e28-4efd-8f41-70490a9295ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpz2ezyrhp" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:57 np0005603609 nova_compute[221550]: 2026-01-31 07:44:57.882 221554 DEBUG nova.storage.rbd_utils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 3087c480-1e28-4efd-8f41-70490a9295ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:57 np0005603609 nova_compute[221550]: 2026-01-31 07:44:57.887 221554 DEBUG oslo_concurrency.processutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3087c480-1e28-4efd-8f41-70490a9295ef/disk.config 3087c480-1e28-4efd-8f41-70490a9295ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.061 221554 DEBUG oslo_concurrency.processutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3087c480-1e28-4efd-8f41-70490a9295ef/disk.config 3087c480-1e28-4efd-8f41-70490a9295ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.062 221554 INFO nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Deleting local config drive /var/lib/nova/instances/3087c480-1e28-4efd-8f41-70490a9295ef/disk.config because it was imported into RBD.#033[00m
Jan 31 02:44:58 np0005603609 kernel: tap4e76293d-e3: entered promiscuous mode
Jan 31 02:44:58 np0005603609 NetworkManager[49064]: <info>  [1769845498.1031] manager: (tap4e76293d-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/82)
Jan 31 02:44:58 np0005603609 ovn_controller[130359]: 2026-01-31T07:44:58Z|00162|binding|INFO|Claiming lport 4e76293d-e306-4fb0-a2e3-d93b835dc120 for this chassis.
Jan 31 02:44:58 np0005603609 ovn_controller[130359]: 2026-01-31T07:44:58Z|00163|binding|INFO|4e76293d-e306-4fb0-a2e3-d93b835dc120: Claiming fa:16:3e:13:ba:10 10.100.0.4
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.145 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:58 np0005603609 systemd-udevd[241785]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:44:58 np0005603609 ovn_controller[130359]: 2026-01-31T07:44:58Z|00164|binding|INFO|Setting lport 4e76293d-e306-4fb0-a2e3-d93b835dc120 ovn-installed in OVS
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.155 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:58 np0005603609 NetworkManager[49064]: <info>  [1769845498.1561] device (tap4e76293d-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:44:58 np0005603609 NetworkManager[49064]: <info>  [1769845498.1571] device (tap4e76293d-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:44:58 np0005603609 systemd-machined[190912]: New machine qemu-26-instance-00000037.
Jan 31 02:44:58 np0005603609 systemd[1]: Started Virtual Machine qemu-26-instance-00000037.
Jan 31 02:44:58 np0005603609 ovn_controller[130359]: 2026-01-31T07:44:58Z|00165|binding|INFO|Setting lport 4e76293d-e306-4fb0-a2e3-d93b835dc120 up in Southbound
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.225 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:ba:10 10.100.0.4'], port_security=['fa:16:3e:13:ba:10 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3087c480-1e28-4efd-8f41-70490a9295ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60244e92-1524-47f0-8207-05d0104afa47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f055dcb-9af1-4cf7-aa74-c819da93756e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70b36ac0-62cb-4e29-924b-93c5ad906bc9, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=4e76293d-e306-4fb0-a2e3-d93b835dc120) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.226 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 4e76293d-e306-4fb0-a2e3-d93b835dc120 in datapath 60244e92-1524-47f0-8207-05d0104afa47 bound to our chassis#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.228 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60244e92-1524-47f0-8207-05d0104afa47#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.236 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[21709ba0-0bb2-4228-943b-0b4729bdd658]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.237 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60244e92-11 in ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.238 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60244e92-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.238 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd0051e-27ca-4b60-b60a-be2449389390]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.239 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb84ea1-d99e-407e-8fb2-742b23943c19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.247 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[f0eb4851-cbe3-48b0-a36b-d06f0a919142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.256 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[52efd221-c15f-4ddc-9ca7-5fb1ca19f8f1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.275 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d27a442e-4757-46e9-90c6-8bf369b305ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.279 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc2f521-0170-4cb3-91ec-38bf9b57b3d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:58 np0005603609 NetworkManager[49064]: <info>  [1769845498.2807] manager: (tap60244e92-10): new Veth device (/org/freedesktop/NetworkManager/Devices/83)
Jan 31 02:44:58 np0005603609 systemd-udevd[241787]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.301 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[90c059ad-caa2-4e1a-be4a-c87e24b9bf61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.303 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[65340917-4874-4e88-8e3f-244fca5603db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:58 np0005603609 NetworkManager[49064]: <info>  [1769845498.3170] device (tap60244e92-10): carrier: link connected
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.323 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[7033e1cc-ba6b-4c98-b9ab-9e1f54e89a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.333 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8c3cee09-44f8-4707-86d5-e715fa295e1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60244e92-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:59:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573801, 'reachable_time': 16832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241821, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.344 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcda8ba-c5ac-4526-bba2-1e1beef15b2f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:59f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573801, 'tstamp': 573801}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241822, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.354 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[166b13f5-81f2-4b11-ab0f-dc73acf8fa41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60244e92-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:59:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573801, 'reachable_time': 16832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 241823, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.377 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f1da336e-9947-488c-9531-7d0505c50b93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:44:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:58.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.418 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0d4b1c-4b49-4d79-8c3f-54bee832c3de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.420 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60244e92-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.420 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.421 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60244e92-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.422 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:58 np0005603609 NetworkManager[49064]: <info>  [1769845498.4234] manager: (tap60244e92-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Jan 31 02:44:58 np0005603609 kernel: tap60244e92-10: entered promiscuous mode
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.425 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.425 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60244e92-10, col_values=(('external_ids', {'iface-id': '4e20d708-9f46-41f3-86c0-9ab3849f8392'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:58 np0005603609 ovn_controller[130359]: 2026-01-31T07:44:58Z|00166|binding|INFO|Releasing lport 4e20d708-9f46-41f3-86c0-9ab3849f8392 from this chassis (sb_readonly=0)
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.428 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.428 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ae902ec6-74dc-4674-ad77-c170bcdc114f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.430 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-60244e92-1524-47f0-8207-05d0104afa47
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 60244e92-1524-47f0-8207-05d0104afa47
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:44:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:44:58.430 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'env', 'PROCESS_TAG=haproxy-60244e92-1524-47f0-8207-05d0104afa47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60244e92-1524-47f0-8207-05d0104afa47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.433 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.731 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.733 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.734 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.735 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.736 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.841 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845498.8406339, 3087c480-1e28-4efd-8f41-70490a9295ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.842 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] VM Started (Lifecycle Event)#033[00m
Jan 31 02:44:58 np0005603609 podman[241874]: 2026-01-31 07:44:58.751129171 +0000 UTC m=+0.033717458 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.902 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.906 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845498.8420587, 3087c480-1e28-4efd-8f41-70490a9295ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.906 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:44:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:44:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:58.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.963 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.966 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.982 221554 DEBUG nova.network.neutron [req-2efb4fb1-1616-415e-809e-539405fe656c req-35dcf4b0-86c9-4e68-90be-c16a16ccf483 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Updated VIF entry in instance network info cache for port 4e76293d-e306-4fb0-a2e3-d93b835dc120. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:44:58 np0005603609 nova_compute[221550]: 2026-01-31 07:44:58.983 221554 DEBUG nova.network.neutron [req-2efb4fb1-1616-415e-809e-539405fe656c req-35dcf4b0-86c9-4e68-90be-c16a16ccf483 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Updating instance_info_cache with network_info: [{"id": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "address": "fa:16:3e:13:ba:10", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e76293d-e3", "ovs_interfaceid": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.037 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.042 221554 DEBUG oslo_concurrency.lockutils [req-2efb4fb1-1616-415e-809e-539405fe656c req-35dcf4b0-86c9-4e68-90be-c16a16ccf483 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-3087c480-1e28-4efd-8f41-70490a9295ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:44:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:44:59 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2076917329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.173 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.196 221554 DEBUG nova.compute.manager [req-214c5bc9-ceb4-4c5d-ab7c-5450f953ab31 req-dfb11e18-2d93-44b5-9614-2a8424662b12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Received event network-vif-plugged-4e76293d-e306-4fb0-a2e3-d93b835dc120 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.197 221554 DEBUG oslo_concurrency.lockutils [req-214c5bc9-ceb4-4c5d-ab7c-5450f953ab31 req-dfb11e18-2d93-44b5-9614-2a8424662b12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.198 221554 DEBUG oslo_concurrency.lockutils [req-214c5bc9-ceb4-4c5d-ab7c-5450f953ab31 req-dfb11e18-2d93-44b5-9614-2a8424662b12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.198 221554 DEBUG oslo_concurrency.lockutils [req-214c5bc9-ceb4-4c5d-ab7c-5450f953ab31 req-dfb11e18-2d93-44b5-9614-2a8424662b12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.199 221554 DEBUG nova.compute.manager [req-214c5bc9-ceb4-4c5d-ab7c-5450f953ab31 req-dfb11e18-2d93-44b5-9614-2a8424662b12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Processing event network-vif-plugged-4e76293d-e306-4fb0-a2e3-d93b835dc120 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.200 221554 DEBUG nova.compute.manager [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.203 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845499.2032056, 3087c480-1e28-4efd-8f41-70490a9295ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.204 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.208 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.211 221554 INFO nova.virt.libvirt.driver [-] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Instance spawned successfully.#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.212 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.259 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.264 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.276 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.276 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.277 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.277 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.277 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.278 221554 DEBUG nova.virt.libvirt.driver [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.356 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.396 221554 INFO nova.compute.manager [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Took 17.78 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.397 221554 DEBUG nova.compute.manager [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.405 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.405 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.654 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.656 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4621MB free_disk=20.901050567626953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.658 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.658 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.739 221554 INFO nova.compute.manager [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Took 19.66 seconds to build instance.#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.875 221554 DEBUG oslo_concurrency.lockutils [None req-daaef637-ea00-4238-8b07-24f1c1c44b7c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.914 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 3087c480-1e28-4efd-8f41-70490a9295ef actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.915 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:44:59 np0005603609 nova_compute[221550]: 2026-01-31 07:44:59.916 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:44:59 np0005603609 podman[241874]: 2026-01-31 07:44:59.962219167 +0000 UTC m=+1.244807454 container create 7e980ea9fba82b538a09c97103edf688a8f0230087685a63793df9032e5b1a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:45:00 np0005603609 systemd[1]: Started libpod-conmon-7e980ea9fba82b538a09c97103edf688a8f0230087685a63793df9032e5b1a8e.scope.
Jan 31 02:45:00 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:45:00 np0005603609 nova_compute[221550]: 2026-01-31 07:45:00.048 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:00 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0140ff4dbaf91df64b5c3ef0fdf46d08c3dc5970e428125747208e204eff3996/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:45:00 np0005603609 podman[241874]: 2026-01-31 07:45:00.065048748 +0000 UTC m=+1.347637085 container init 7e980ea9fba82b538a09c97103edf688a8f0230087685a63793df9032e5b1a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 02:45:00 np0005603609 podman[241874]: 2026-01-31 07:45:00.07017225 +0000 UTC m=+1.352760537 container start 7e980ea9fba82b538a09c97103edf688a8f0230087685a63793df9032e5b1a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 02:45:00 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[241936]: [NOTICE]   (241941) : New worker (241943) forked
Jan 31 02:45:00 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[241936]: [NOTICE]   (241941) : Loading success.
Jan 31 02:45:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:00.166 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:45:00 np0005603609 nova_compute[221550]: 2026-01-31 07:45:00.166 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:00.168 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:45:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:00.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:45:00 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1610713272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:45:00 np0005603609 nova_compute[221550]: 2026-01-31 07:45:00.521 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:00 np0005603609 nova_compute[221550]: 2026-01-31 07:45:00.525 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:45:00 np0005603609 nova_compute[221550]: 2026-01-31 07:45:00.568 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:45:00 np0005603609 nova_compute[221550]: 2026-01-31 07:45:00.572 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:00 np0005603609 nova_compute[221550]: 2026-01-31 07:45:00.612 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:45:00 np0005603609 nova_compute[221550]: 2026-01-31 07:45:00.612 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:00.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:01 np0005603609 nova_compute[221550]: 2026-01-31 07:45:01.788 221554 DEBUG nova.compute.manager [req-fa47d49d-db6e-4243-b2c0-7e8b4e825857 req-50d6cd9f-1311-4a93-bd87-6ff0c5687cf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Received event network-vif-plugged-4e76293d-e306-4fb0-a2e3-d93b835dc120 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:45:01 np0005603609 nova_compute[221550]: 2026-01-31 07:45:01.788 221554 DEBUG oslo_concurrency.lockutils [req-fa47d49d-db6e-4243-b2c0-7e8b4e825857 req-50d6cd9f-1311-4a93-bd87-6ff0c5687cf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:01 np0005603609 nova_compute[221550]: 2026-01-31 07:45:01.788 221554 DEBUG oslo_concurrency.lockutils [req-fa47d49d-db6e-4243-b2c0-7e8b4e825857 req-50d6cd9f-1311-4a93-bd87-6ff0c5687cf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:01 np0005603609 nova_compute[221550]: 2026-01-31 07:45:01.788 221554 DEBUG oslo_concurrency.lockutils [req-fa47d49d-db6e-4243-b2c0-7e8b4e825857 req-50d6cd9f-1311-4a93-bd87-6ff0c5687cf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:01 np0005603609 nova_compute[221550]: 2026-01-31 07:45:01.788 221554 DEBUG nova.compute.manager [req-fa47d49d-db6e-4243-b2c0-7e8b4e825857 req-50d6cd9f-1311-4a93-bd87-6ff0c5687cf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] No waiting events found dispatching network-vif-plugged-4e76293d-e306-4fb0-a2e3-d93b835dc120 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:45:01 np0005603609 nova_compute[221550]: 2026-01-31 07:45:01.789 221554 WARNING nova.compute.manager [req-fa47d49d-db6e-4243-b2c0-7e8b4e825857 req-50d6cd9f-1311-4a93-bd87-6ff0c5687cf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Received unexpected event network-vif-plugged-4e76293d-e306-4fb0-a2e3-d93b835dc120 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:45:02 np0005603609 nova_compute[221550]: 2026-01-31 07:45:02.006 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:02.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:02 np0005603609 nova_compute[221550]: 2026-01-31 07:45:02.613 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:02 np0005603609 nova_compute[221550]: 2026-01-31 07:45:02.614 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:02 np0005603609 nova_compute[221550]: 2026-01-31 07:45:02.614 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:45:02 np0005603609 nova_compute[221550]: 2026-01-31 07:45:02.614 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:45:02 np0005603609 nova_compute[221550]: 2026-01-31 07:45:02.672 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "10979044-21a0-4f6e-8d6d-bc6d04961714" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:45:02 np0005603609 nova_compute[221550]: 2026-01-31 07:45:02.672 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:45:02 np0005603609 nova_compute[221550]: 2026-01-31 07:45:02.742 221554 DEBUG nova.compute.manager [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 02:45:02 np0005603609 nova_compute[221550]: 2026-01-31 07:45:02.868 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:45:02 np0005603609 nova_compute[221550]: 2026-01-31 07:45:02.868 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:45:02 np0005603609 nova_compute[221550]: 2026-01-31 07:45:02.895 221554 DEBUG nova.virt.hardware [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 02:45:02 np0005603609 nova_compute[221550]: 2026-01-31 07:45:02.895 221554 INFO nova.compute.claims [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Claim successful on node compute-1.ctlplane.example.com
Jan 31 02:45:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:02.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:03 np0005603609 nova_compute[221550]: 2026-01-31 07:45:03.094 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-3087c480-1e28-4efd-8f41-70490a9295ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:45:03 np0005603609 nova_compute[221550]: 2026-01-31 07:45:03.094 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-3087c480-1e28-4efd-8f41-70490a9295ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:45:03 np0005603609 nova_compute[221550]: 2026-01-31 07:45:03.095 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 02:45:03 np0005603609 nova_compute[221550]: 2026-01-31 07:45:03.095 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3087c480-1e28-4efd-8f41-70490a9295ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:45:03 np0005603609 nova_compute[221550]: 2026-01-31 07:45:03.151 221554 DEBUG oslo_concurrency.processutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:45:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:45:03 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/7205415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:45:03 np0005603609 nova_compute[221550]: 2026-01-31 07:45:03.567 221554 DEBUG oslo_concurrency.processutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:45:03 np0005603609 nova_compute[221550]: 2026-01-31 07:45:03.572 221554 DEBUG nova.compute.provider_tree [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:45:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:04.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:04.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:05 np0005603609 nova_compute[221550]: 2026-01-31 07:45:05.359 221554 DEBUG nova.scheduler.client.report [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:45:05 np0005603609 nova_compute[221550]: 2026-01-31 07:45:05.623 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:06 np0005603609 nova_compute[221550]: 2026-01-31 07:45:06.350 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:45:06 np0005603609 nova_compute[221550]: 2026-01-31 07:45:06.351 221554 DEBUG nova.compute.manager [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 02:45:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:45:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:06.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:45:06 np0005603609 nova_compute[221550]: 2026-01-31 07:45:06.466 221554 DEBUG nova.compute.manager [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 02:45:06 np0005603609 nova_compute[221550]: 2026-01-31 07:45:06.466 221554 DEBUG nova.network.neutron [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 02:45:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:06 np0005603609 nova_compute[221550]: 2026-01-31 07:45:06.596 221554 INFO nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 02:45:06 np0005603609 nova_compute[221550]: 2026-01-31 07:45:06.681 221554 DEBUG nova.compute.manager [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 02:45:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:06.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.005 221554 DEBUG nova.compute.manager [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.006 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.007 221554 INFO nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Creating image(s)
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.035 221554 DEBUG nova.storage.rbd_utils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] rbd image 10979044-21a0-4f6e-8d6d-bc6d04961714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.060 221554 DEBUG nova.storage.rbd_utils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] rbd image 10979044-21a0-4f6e-8d6d-bc6d04961714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.086 221554 DEBUG nova.storage.rbd_utils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] rbd image 10979044-21a0-4f6e-8d6d-bc6d04961714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.090 221554 DEBUG oslo_concurrency.processutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.107 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.111 221554 DEBUG nova.policy [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a6a24b8bc028456fa6de79d3b792e79a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '07ac56babd144839be6d08563340e6bd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.141 221554 DEBUG oslo_concurrency.processutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.143 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.144 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.144 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.175 221554 DEBUG nova.storage.rbd_utils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] rbd image 10979044-21a0-4f6e-8d6d-bc6d04961714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.180 221554 DEBUG oslo_concurrency.processutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 10979044-21a0-4f6e-8d6d-bc6d04961714_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:45:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:07.478 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:45:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:07.479 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:45:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:07.480 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.643 221554 DEBUG oslo_concurrency.processutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 10979044-21a0-4f6e-8d6d-bc6d04961714_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:45:07 np0005603609 nova_compute[221550]: 2026-01-31 07:45:07.717 221554 DEBUG nova.storage.rbd_utils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] resizing rbd image 10979044-21a0-4f6e-8d6d-bc6d04961714_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 02:45:08 np0005603609 nova_compute[221550]: 2026-01-31 07:45:08.018 221554 DEBUG oslo_concurrency.lockutils [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "3087c480-1e28-4efd-8f41-70490a9295ef" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:45:08 np0005603609 nova_compute[221550]: 2026-01-31 07:45:08.018 221554 DEBUG oslo_concurrency.lockutils [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:45:08 np0005603609 nova_compute[221550]: 2026-01-31 07:45:08.075 221554 DEBUG nova.objects.instance [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'flavor' on Instance uuid 3087c480-1e28-4efd-8f41-70490a9295ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:45:08 np0005603609 nova_compute[221550]: 2026-01-31 07:45:08.284 221554 DEBUG oslo_concurrency.lockutils [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:45:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:08.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:08.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.042 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Updating instance_info_cache with network_info: [{"id": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "address": "fa:16:3e:13:ba:10", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e76293d-e3", "ovs_interfaceid": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.068 221554 DEBUG nova.network.neutron [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Successfully created port: 301f354a-102e-4d48-8e41-36be449f4e70 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 02:45:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:09.172 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.264 221554 DEBUG oslo_concurrency.lockutils [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "3087c480-1e28-4efd-8f41-70490a9295ef" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.265 221554 DEBUG oslo_concurrency.lockutils [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.266 221554 INFO nova.compute.manager [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Attaching volume 111b00c1-b66c-4cba-bfe9-abb2e4396a68 to /dev/vdb
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.270 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-3087c480-1e28-4efd-8f41-70490a9295ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.270 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.274 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.276 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.280 221554 DEBUG nova.objects.instance [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lazy-loading 'migration_context' on Instance uuid 10979044-21a0-4f6e-8d6d-bc6d04961714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.282 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.283 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.283 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.283 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.334 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.335 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Ensure instance console log exists: /var/lib/nova/instances/10979044-21a0-4f6e-8d6d-bc6d04961714/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.335 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.335 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.335 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.711 221554 DEBUG os_brick.utils [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.713 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.720 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.721 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e8eb98-f165-4a4a-9483-1a523100e354]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.722 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.726 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.726 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6831ab-128a-4d06-bd11-874bf293830d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.728 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.735 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.735 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[43b8f38e-9f38-40f6-a2aa-eaeef7542f3b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.737 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[ca5bc4fe-6f8b-4fad-a54b-f9fb7cefe586]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.737 221554 DEBUG oslo_concurrency.processutils [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.757 221554 DEBUG oslo_concurrency.processutils [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.758 221554 DEBUG os_brick.initiator.connectors.lightos [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.759 221554 DEBUG os_brick.initiator.connectors.lightos [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.759 221554 DEBUG os_brick.initiator.connectors.lightos [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.759 221554 DEBUG os_brick.utils [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] <== get_connector_properties: return (47ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 02:45:09 np0005603609 nova_compute[221550]: 2026-01-31 07:45:09.760 221554 DEBUG nova.virt.block_device [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Updating existing volume attachment record: 5a116532-3ed2-4ade-98d5-9a22d8432372 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 02:45:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:45:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:10.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:45:10 np0005603609 nova_compute[221550]: 2026-01-31 07:45:10.651 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:10.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:11 np0005603609 nova_compute[221550]: 2026-01-31 07:45:11.742 221554 DEBUG nova.objects.instance [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'flavor' on Instance uuid 3087c480-1e28-4efd-8f41-70490a9295ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:45:11 np0005603609 nova_compute[221550]: 2026-01-31 07:45:11.790 221554 DEBUG nova.virt.libvirt.driver [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Attempting to attach volume 111b00c1-b66c-4cba-bfe9-abb2e4396a68 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 02:45:11 np0005603609 nova_compute[221550]: 2026-01-31 07:45:11.792 221554 DEBUG nova.virt.libvirt.guest [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 02:45:11 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 02:45:11 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-111b00c1-b66c-4cba-bfe9-abb2e4396a68">
Jan 31 02:45:11 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 02:45:11 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 02:45:11 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 02:45:11 np0005603609 nova_compute[221550]:  </source>
Jan 31 02:45:11 np0005603609 nova_compute[221550]:  <auth username="openstack">
Jan 31 02:45:11 np0005603609 nova_compute[221550]:    <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:45:11 np0005603609 nova_compute[221550]:  </auth>
Jan 31 02:45:11 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 02:45:11 np0005603609 nova_compute[221550]:  <serial>111b00c1-b66c-4cba-bfe9-abb2e4396a68</serial>
Jan 31 02:45:11 np0005603609 nova_compute[221550]: </disk>
Jan 31 02:45:11 np0005603609 nova_compute[221550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 02:45:12 np0005603609 nova_compute[221550]: 2026-01-31 07:45:12.051 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:12 np0005603609 nova_compute[221550]: 2026-01-31 07:45:12.056 221554 DEBUG nova.virt.libvirt.driver [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:45:12 np0005603609 nova_compute[221550]: 2026-01-31 07:45:12.056 221554 DEBUG nova.virt.libvirt.driver [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:45:12 np0005603609 nova_compute[221550]: 2026-01-31 07:45:12.056 221554 DEBUG nova.virt.libvirt.driver [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:45:12 np0005603609 nova_compute[221550]: 2026-01-31 07:45:12.057 221554 DEBUG nova.virt.libvirt.driver [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No VIF found with MAC fa:16:3e:13:ba:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:45:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:12.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:12.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:13 np0005603609 ovn_controller[130359]: 2026-01-31T07:45:13Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:13:ba:10 10.100.0.4
Jan 31 02:45:13 np0005603609 ovn_controller[130359]: 2026-01-31T07:45:13Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:13:ba:10 10.100.0.4
Jan 31 02:45:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:14.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:14 np0005603609 nova_compute[221550]: 2026-01-31 07:45:14.401 221554 DEBUG nova.network.neutron [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Successfully updated port: 301f354a-102e-4d48-8e41-36be449f4e70 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:45:14 np0005603609 nova_compute[221550]: 2026-01-31 07:45:14.520 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "refresh_cache-10979044-21a0-4f6e-8d6d-bc6d04961714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:45:14 np0005603609 nova_compute[221550]: 2026-01-31 07:45:14.521 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquired lock "refresh_cache-10979044-21a0-4f6e-8d6d-bc6d04961714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:45:14 np0005603609 nova_compute[221550]: 2026-01-31 07:45:14.521 221554 DEBUG nova.network.neutron [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:45:14 np0005603609 nova_compute[221550]: 2026-01-31 07:45:14.541 221554 DEBUG oslo_concurrency.lockutils [None req-51536ffc-fb28-488e-8fd3-635e4cbcb762 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:14 np0005603609 nova_compute[221550]: 2026-01-31 07:45:14.699 221554 DEBUG nova.compute.manager [req-dc3356ae-a886-4046-96b2-6eb527755172 req-f791f05b-2a89-49fa-a0e3-70ad0283a829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Received event network-changed-301f354a-102e-4d48-8e41-36be449f4e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:45:14 np0005603609 nova_compute[221550]: 2026-01-31 07:45:14.699 221554 DEBUG nova.compute.manager [req-dc3356ae-a886-4046-96b2-6eb527755172 req-f791f05b-2a89-49fa-a0e3-70ad0283a829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Refreshing instance network info cache due to event network-changed-301f354a-102e-4d48-8e41-36be449f4e70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:45:14 np0005603609 nova_compute[221550]: 2026-01-31 07:45:14.699 221554 DEBUG oslo_concurrency.lockutils [req-dc3356ae-a886-4046-96b2-6eb527755172 req-f791f05b-2a89-49fa-a0e3-70ad0283a829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-10979044-21a0-4f6e-8d6d-bc6d04961714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:45:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:14.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:15 np0005603609 nova_compute[221550]: 2026-01-31 07:45:15.258 221554 DEBUG nova.network.neutron [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:45:15 np0005603609 nova_compute[221550]: 2026-01-31 07:45:15.701 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:15 np0005603609 nova_compute[221550]: 2026-01-31 07:45:15.993 221554 DEBUG oslo_concurrency.lockutils [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "3087c480-1e28-4efd-8f41-70490a9295ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:15 np0005603609 nova_compute[221550]: 2026-01-31 07:45:15.994 221554 DEBUG oslo_concurrency.lockutils [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:15 np0005603609 nova_compute[221550]: 2026-01-31 07:45:15.994 221554 DEBUG oslo_concurrency.lockutils [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:15 np0005603609 nova_compute[221550]: 2026-01-31 07:45:15.994 221554 DEBUG oslo_concurrency.lockutils [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:15 np0005603609 nova_compute[221550]: 2026-01-31 07:45:15.994 221554 DEBUG oslo_concurrency.lockutils [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:15 np0005603609 nova_compute[221550]: 2026-01-31 07:45:15.995 221554 INFO nova.compute.manager [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Terminating instance#033[00m
Jan 31 02:45:15 np0005603609 nova_compute[221550]: 2026-01-31 07:45:15.997 221554 DEBUG nova.compute.manager [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:45:16 np0005603609 kernel: tap4e76293d-e3 (unregistering): left promiscuous mode
Jan 31 02:45:16 np0005603609 NetworkManager[49064]: <info>  [1769845516.1873] device (tap4e76293d-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.187 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:16 np0005603609 ovn_controller[130359]: 2026-01-31T07:45:16Z|00167|binding|INFO|Releasing lport 4e76293d-e306-4fb0-a2e3-d93b835dc120 from this chassis (sb_readonly=0)
Jan 31 02:45:16 np0005603609 ovn_controller[130359]: 2026-01-31T07:45:16Z|00168|binding|INFO|Setting lport 4e76293d-e306-4fb0-a2e3-d93b835dc120 down in Southbound
Jan 31 02:45:16 np0005603609 ovn_controller[130359]: 2026-01-31T07:45:16Z|00169|binding|INFO|Removing iface tap4e76293d-e3 ovn-installed in OVS
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.195 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.197 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.201 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:16.206 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:ba:10 10.100.0.4'], port_security=['fa:16:3e:13:ba:10 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3087c480-1e28-4efd-8f41-70490a9295ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60244e92-1524-47f0-8207-05d0104afa47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f055dcb-9af1-4cf7-aa74-c819da93756e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70b36ac0-62cb-4e29-924b-93c5ad906bc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=4e76293d-e306-4fb0-a2e3-d93b835dc120) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:45:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:16.207 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 4e76293d-e306-4fb0-a2e3-d93b835dc120 in datapath 60244e92-1524-47f0-8207-05d0104afa47 unbound from our chassis#033[00m
Jan 31 02:45:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:16.209 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60244e92-1524-47f0-8207-05d0104afa47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:45:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:16.210 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[abc11584-ad72-491e-8225-13d6ec7c3507]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:16.210 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 namespace which is not needed anymore#033[00m
Jan 31 02:45:16 np0005603609 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000037.scope: Deactivated successfully.
Jan 31 02:45:16 np0005603609 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000037.scope: Consumed 13.474s CPU time.
Jan 31 02:45:16 np0005603609 systemd-machined[190912]: Machine qemu-26-instance-00000037 terminated.
Jan 31 02:45:16 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[241936]: [NOTICE]   (241941) : haproxy version is 2.8.14-c23fe91
Jan 31 02:45:16 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[241936]: [NOTICE]   (241941) : path to executable is /usr/sbin/haproxy
Jan 31 02:45:16 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[241936]: [WARNING]  (241941) : Exiting Master process...
Jan 31 02:45:16 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[241936]: [ALERT]    (241941) : Current worker (241943) exited with code 143 (Terminated)
Jan 31 02:45:16 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[241936]: [WARNING]  (241941) : All workers exited. Exiting... (0)
Jan 31 02:45:16 np0005603609 systemd[1]: libpod-7e980ea9fba82b538a09c97103edf688a8f0230087685a63793df9032e5b1a8e.scope: Deactivated successfully.
Jan 31 02:45:16 np0005603609 podman[242211]: 2026-01-31 07:45:16.316884348 +0000 UTC m=+0.039580268 container died 7e980ea9fba82b538a09c97103edf688a8f0230087685a63793df9032e5b1a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:45:16 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e980ea9fba82b538a09c97103edf688a8f0230087685a63793df9032e5b1a8e-userdata-shm.mount: Deactivated successfully.
Jan 31 02:45:16 np0005603609 systemd[1]: var-lib-containers-storage-overlay-0140ff4dbaf91df64b5c3ef0fdf46d08c3dc5970e428125747208e204eff3996-merged.mount: Deactivated successfully.
Jan 31 02:45:16 np0005603609 podman[242211]: 2026-01-31 07:45:16.350897512 +0000 UTC m=+0.073593432 container cleanup 7e980ea9fba82b538a09c97103edf688a8f0230087685a63793df9032e5b1a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 02:45:16 np0005603609 systemd[1]: libpod-conmon-7e980ea9fba82b538a09c97103edf688a8f0230087685a63793df9032e5b1a8e.scope: Deactivated successfully.
Jan 31 02:45:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:16.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:16 np0005603609 podman[242240]: 2026-01-31 07:45:16.408749417 +0000 UTC m=+0.043098463 container remove 7e980ea9fba82b538a09c97103edf688a8f0230087685a63793df9032e5b1a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 02:45:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:16.412 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[33387c60-26a8-4f9b-80ac-477d3902b442]: (4, ('Sat Jan 31 07:45:16 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 (7e980ea9fba82b538a09c97103edf688a8f0230087685a63793df9032e5b1a8e)\n7e980ea9fba82b538a09c97103edf688a8f0230087685a63793df9032e5b1a8e\nSat Jan 31 07:45:16 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 (7e980ea9fba82b538a09c97103edf688a8f0230087685a63793df9032e5b1a8e)\n7e980ea9fba82b538a09c97103edf688a8f0230087685a63793df9032e5b1a8e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:16.413 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7cde5b2c-c254-4467-b113-67a82e2c270f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:16.414 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60244e92-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.416 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:16 np0005603609 kernel: tap60244e92-10: left promiscuous mode
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.424 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.426 221554 INFO nova.virt.libvirt.driver [-] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Instance destroyed successfully.#033[00m
Jan 31 02:45:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:16.426 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[53841053-1ba8-4146-8492-1c4e1fe20949]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.426 221554 DEBUG nova.objects.instance [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'resources' on Instance uuid 3087c480-1e28-4efd-8f41-70490a9295ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:45:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:16.439 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c669029f-7dc7-4830-adc5-1fbb94f82a3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:16.441 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[11a80574-5b3d-42cb-ba17-e57e94d7327b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:16.452 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b444e0c2-4298-4c5e-aadc-e0bc942b557c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573797, 'reachable_time': 24829, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242269, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:16.454 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:45:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:16.454 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc6260c-7799-4212-af56-128eb872716d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:16 np0005603609 systemd[1]: run-netns-ovnmeta\x2d60244e92\x2d1524\x2d47f0\x2d8207\x2d05d0104afa47.mount: Deactivated successfully.
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.460 221554 DEBUG nova.virt.libvirt.vif [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:44:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-915379133',display_name='tempest-DeleteServersTestJSON-server-915379133',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-915379133',id=55,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:44:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-yiqpc2tx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:44:59Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=3087c480-1e28-4efd-8f41-70490a9295ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "address": "fa:16:3e:13:ba:10", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e76293d-e3", "ovs_interfaceid": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.460 221554 DEBUG nova.network.os_vif_util [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "address": "fa:16:3e:13:ba:10", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e76293d-e3", "ovs_interfaceid": "4e76293d-e306-4fb0-a2e3-d93b835dc120", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.461 221554 DEBUG nova.network.os_vif_util [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:13:ba:10,bridge_name='br-int',has_traffic_filtering=True,id=4e76293d-e306-4fb0-a2e3-d93b835dc120,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e76293d-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.461 221554 DEBUG os_vif [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:13:ba:10,bridge_name='br-int',has_traffic_filtering=True,id=4e76293d-e306-4fb0-a2e3-d93b835dc120,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e76293d-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.462 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.463 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e76293d-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.464 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.465 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.467 221554 INFO os_vif [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:13:ba:10,bridge_name='br-int',has_traffic_filtering=True,id=4e76293d-e306-4fb0-a2e3-d93b835dc120,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e76293d-e3')#033[00m
Jan 31 02:45:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.894 221554 INFO nova.virt.libvirt.driver [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Deleting instance files /var/lib/nova/instances/3087c480-1e28-4efd-8f41-70490a9295ef_del#033[00m
Jan 31 02:45:16 np0005603609 nova_compute[221550]: 2026-01-31 07:45:16.895 221554 INFO nova.virt.libvirt.driver [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Deletion of /var/lib/nova/instances/3087c480-1e28-4efd-8f41-70490a9295ef_del complete#033[00m
Jan 31 02:45:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:45:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:16.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:45:17 np0005603609 nova_compute[221550]: 2026-01-31 07:45:17.075 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:17 np0005603609 nova_compute[221550]: 2026-01-31 07:45:17.802 221554 INFO nova.compute.manager [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Took 1.81 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:45:17 np0005603609 nova_compute[221550]: 2026-01-31 07:45:17.803 221554 DEBUG oslo.service.loopingcall [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:45:17 np0005603609 nova_compute[221550]: 2026-01-31 07:45:17.804 221554 DEBUG nova.compute.manager [-] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:45:17 np0005603609 nova_compute[221550]: 2026-01-31 07:45:17.804 221554 DEBUG nova.network.neutron [-] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:45:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:18.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:18 np0005603609 nova_compute[221550]: 2026-01-31 07:45:18.446 221554 DEBUG nova.compute.manager [req-4a00d568-8f8d-4b04-8852-573b49ee439f req-a824d560-15af-48b7-8369-393338b30972 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Received event network-vif-unplugged-4e76293d-e306-4fb0-a2e3-d93b835dc120 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:45:18 np0005603609 nova_compute[221550]: 2026-01-31 07:45:18.447 221554 DEBUG oslo_concurrency.lockutils [req-4a00d568-8f8d-4b04-8852-573b49ee439f req-a824d560-15af-48b7-8369-393338b30972 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:18 np0005603609 nova_compute[221550]: 2026-01-31 07:45:18.447 221554 DEBUG oslo_concurrency.lockutils [req-4a00d568-8f8d-4b04-8852-573b49ee439f req-a824d560-15af-48b7-8369-393338b30972 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:18 np0005603609 nova_compute[221550]: 2026-01-31 07:45:18.447 221554 DEBUG oslo_concurrency.lockutils [req-4a00d568-8f8d-4b04-8852-573b49ee439f req-a824d560-15af-48b7-8369-393338b30972 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:18 np0005603609 nova_compute[221550]: 2026-01-31 07:45:18.447 221554 DEBUG nova.compute.manager [req-4a00d568-8f8d-4b04-8852-573b49ee439f req-a824d560-15af-48b7-8369-393338b30972 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] No waiting events found dispatching network-vif-unplugged-4e76293d-e306-4fb0-a2e3-d93b835dc120 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:45:18 np0005603609 nova_compute[221550]: 2026-01-31 07:45:18.447 221554 DEBUG nova.compute.manager [req-4a00d568-8f8d-4b04-8852-573b49ee439f req-a824d560-15af-48b7-8369-393338b30972 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Received event network-vif-unplugged-4e76293d-e306-4fb0-a2e3-d93b835dc120 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:45:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:45:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:18.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.174 221554 DEBUG nova.network.neutron [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Updating instance_info_cache with network_info: [{"id": "301f354a-102e-4d48-8e41-36be449f4e70", "address": "fa:16:3e:c7:bb:e0", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap301f354a-10", "ovs_interfaceid": "301f354a-102e-4d48-8e41-36be449f4e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.224 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Releasing lock "refresh_cache-10979044-21a0-4f6e-8d6d-bc6d04961714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.224 221554 DEBUG nova.compute.manager [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Instance network_info: |[{"id": "301f354a-102e-4d48-8e41-36be449f4e70", "address": "fa:16:3e:c7:bb:e0", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap301f354a-10", "ovs_interfaceid": "301f354a-102e-4d48-8e41-36be449f4e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.224 221554 DEBUG oslo_concurrency.lockutils [req-dc3356ae-a886-4046-96b2-6eb527755172 req-f791f05b-2a89-49fa-a0e3-70ad0283a829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-10979044-21a0-4f6e-8d6d-bc6d04961714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.225 221554 DEBUG nova.network.neutron [req-dc3356ae-a886-4046-96b2-6eb527755172 req-f791f05b-2a89-49fa-a0e3-70ad0283a829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Refreshing network info cache for port 301f354a-102e-4d48-8e41-36be449f4e70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.227 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Start _get_guest_xml network_info=[{"id": "301f354a-102e-4d48-8e41-36be449f4e70", "address": "fa:16:3e:c7:bb:e0", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap301f354a-10", "ovs_interfaceid": "301f354a-102e-4d48-8e41-36be449f4e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.232 221554 WARNING nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.236 221554 DEBUG nova.virt.libvirt.host [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.237 221554 DEBUG nova.virt.libvirt.host [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.243 221554 DEBUG nova.virt.libvirt.host [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.244 221554 DEBUG nova.virt.libvirt.host [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.245 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.245 221554 DEBUG nova.virt.hardware [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.245 221554 DEBUG nova.virt.hardware [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.245 221554 DEBUG nova.virt.hardware [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.246 221554 DEBUG nova.virt.hardware [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.246 221554 DEBUG nova.virt.hardware [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.246 221554 DEBUG nova.virt.hardware [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.246 221554 DEBUG nova.virt.hardware [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.246 221554 DEBUG nova.virt.hardware [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.247 221554 DEBUG nova.virt.hardware [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.247 221554 DEBUG nova.virt.hardware [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.247 221554 DEBUG nova.virt.hardware [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.249 221554 DEBUG oslo_concurrency.processutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:45:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3042256607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.694 221554 DEBUG oslo_concurrency.processutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.720 221554 DEBUG nova.storage.rbd_utils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] rbd image 10979044-21a0-4f6e-8d6d-bc6d04961714_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:45:19 np0005603609 nova_compute[221550]: 2026-01-31 07:45:19.725 221554 DEBUG oslo_concurrency.processutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:45:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1545433916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.183 221554 DEBUG oslo_concurrency.processutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.185 221554 DEBUG nova.virt.libvirt.vif [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:45:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-749773993',display_name='tempest-FloatingIPsAssociationTestJSON-server-749773993',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-749773993',id=57,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07ac56babd144839be6d08563340e6bd',ramdisk_id='',reservation_id='r-krbyjgit',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-656325591',owner_user_n
ame='tempest-FloatingIPsAssociationTestJSON-656325591-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:45:06Z,user_data=None,user_id='a6a24b8bc028456fa6de79d3b792e79a',uuid=10979044-21a0-4f6e-8d6d-bc6d04961714,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "301f354a-102e-4d48-8e41-36be449f4e70", "address": "fa:16:3e:c7:bb:e0", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap301f354a-10", "ovs_interfaceid": "301f354a-102e-4d48-8e41-36be449f4e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.186 221554 DEBUG nova.network.os_vif_util [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Converting VIF {"id": "301f354a-102e-4d48-8e41-36be449f4e70", "address": "fa:16:3e:c7:bb:e0", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap301f354a-10", "ovs_interfaceid": "301f354a-102e-4d48-8e41-36be449f4e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.187 221554 DEBUG nova.network.os_vif_util [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:bb:e0,bridge_name='br-int',has_traffic_filtering=True,id=301f354a-102e-4d48-8e41-36be449f4e70,network=Network(7077deb2-06a0-4e93-8714-7555d93557cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap301f354a-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.189 221554 DEBUG nova.objects.instance [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 10979044-21a0-4f6e-8d6d-bc6d04961714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.327 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  <uuid>10979044-21a0-4f6e-8d6d-bc6d04961714</uuid>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  <name>instance-00000039</name>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-749773993</nova:name>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:45:19</nova:creationTime>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        <nova:user uuid="a6a24b8bc028456fa6de79d3b792e79a">tempest-FloatingIPsAssociationTestJSON-656325591-project-member</nova:user>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        <nova:project uuid="07ac56babd144839be6d08563340e6bd">tempest-FloatingIPsAssociationTestJSON-656325591</nova:project>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        <nova:port uuid="301f354a-102e-4d48-8e41-36be449f4e70">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <entry name="serial">10979044-21a0-4f6e-8d6d-bc6d04961714</entry>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <entry name="uuid">10979044-21a0-4f6e-8d6d-bc6d04961714</entry>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/10979044-21a0-4f6e-8d6d-bc6d04961714_disk">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/10979044-21a0-4f6e-8d6d-bc6d04961714_disk.config">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:c7:bb:e0"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <target dev="tap301f354a-10"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/10979044-21a0-4f6e-8d6d-bc6d04961714/console.log" append="off"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:45:20 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:45:20 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:45:20 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:45:20 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.329 221554 DEBUG nova.compute.manager [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Preparing to wait for external event network-vif-plugged-301f354a-102e-4d48-8e41-36be449f4e70 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.329 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.329 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.330 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.330 221554 DEBUG nova.virt.libvirt.vif [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:45:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-749773993',display_name='tempest-FloatingIPsAssociationTestJSON-server-749773993',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-749773993',id=57,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07ac56babd144839be6d08563340e6bd',ramdisk_id='',reservation_id='r-krbyjgit',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-656325591',ow
ner_user_name='tempest-FloatingIPsAssociationTestJSON-656325591-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:45:06Z,user_data=None,user_id='a6a24b8bc028456fa6de79d3b792e79a',uuid=10979044-21a0-4f6e-8d6d-bc6d04961714,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "301f354a-102e-4d48-8e41-36be449f4e70", "address": "fa:16:3e:c7:bb:e0", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap301f354a-10", "ovs_interfaceid": "301f354a-102e-4d48-8e41-36be449f4e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.331 221554 DEBUG nova.network.os_vif_util [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Converting VIF {"id": "301f354a-102e-4d48-8e41-36be449f4e70", "address": "fa:16:3e:c7:bb:e0", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap301f354a-10", "ovs_interfaceid": "301f354a-102e-4d48-8e41-36be449f4e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.331 221554 DEBUG nova.network.os_vif_util [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:bb:e0,bridge_name='br-int',has_traffic_filtering=True,id=301f354a-102e-4d48-8e41-36be449f4e70,network=Network(7077deb2-06a0-4e93-8714-7555d93557cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap301f354a-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.332 221554 DEBUG os_vif [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:bb:e0,bridge_name='br-int',has_traffic_filtering=True,id=301f354a-102e-4d48-8e41-36be449f4e70,network=Network(7077deb2-06a0-4e93-8714-7555d93557cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap301f354a-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.332 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.333 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.333 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.335 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.336 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap301f354a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.336 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap301f354a-10, col_values=(('external_ids', {'iface-id': '301f354a-102e-4d48-8e41-36be449f4e70', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:bb:e0', 'vm-uuid': '10979044-21a0-4f6e-8d6d-bc6d04961714'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.339 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:20 np0005603609 NetworkManager[49064]: <info>  [1769845520.3395] manager: (tap301f354a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.342 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.343 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.344 221554 INFO os_vif [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:bb:e0,bridge_name='br-int',has_traffic_filtering=True,id=301f354a-102e-4d48-8e41-36be449f4e70,network=Network(7077deb2-06a0-4e93-8714-7555d93557cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap301f354a-10')
Jan 31 02:45:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:20.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.763 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.763 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.763 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] No VIF found with MAC fa:16:3e:c7:bb:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.764 221554 INFO nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Using config drive
Jan 31 02:45:20 np0005603609 nova_compute[221550]: 2026-01-31 07:45:20.901 221554 DEBUG nova.storage.rbd_utils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] rbd image 10979044-21a0-4f6e-8d6d-bc6d04961714_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:45:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:45:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:20.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:45:21 np0005603609 nova_compute[221550]: 2026-01-31 07:45:21.018 221554 DEBUG nova.compute.manager [req-e743a770-8a89-4a8c-b8cb-43b9fb31905e req-b47bccef-692f-4ce6-abff-1e542ba64efa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Received event network-vif-plugged-4e76293d-e306-4fb0-a2e3-d93b835dc120 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:45:21 np0005603609 nova_compute[221550]: 2026-01-31 07:45:21.018 221554 DEBUG oslo_concurrency.lockutils [req-e743a770-8a89-4a8c-b8cb-43b9fb31905e req-b47bccef-692f-4ce6-abff-1e542ba64efa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:45:21 np0005603609 nova_compute[221550]: 2026-01-31 07:45:21.020 221554 DEBUG oslo_concurrency.lockutils [req-e743a770-8a89-4a8c-b8cb-43b9fb31905e req-b47bccef-692f-4ce6-abff-1e542ba64efa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:45:21 np0005603609 nova_compute[221550]: 2026-01-31 07:45:21.021 221554 DEBUG oslo_concurrency.lockutils [req-e743a770-8a89-4a8c-b8cb-43b9fb31905e req-b47bccef-692f-4ce6-abff-1e542ba64efa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:45:21 np0005603609 nova_compute[221550]: 2026-01-31 07:45:21.021 221554 DEBUG nova.compute.manager [req-e743a770-8a89-4a8c-b8cb-43b9fb31905e req-b47bccef-692f-4ce6-abff-1e542ba64efa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] No waiting events found dispatching network-vif-plugged-4e76293d-e306-4fb0-a2e3-d93b835dc120 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:45:21 np0005603609 nova_compute[221550]: 2026-01-31 07:45:21.021 221554 WARNING nova.compute.manager [req-e743a770-8a89-4a8c-b8cb-43b9fb31905e req-b47bccef-692f-4ce6-abff-1e542ba64efa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Received unexpected event network-vif-plugged-4e76293d-e306-4fb0-a2e3-d93b835dc120 for instance with vm_state active and task_state deleting.
Jan 31 02:45:21 np0005603609 podman[242448]: 2026-01-31 07:45:21.277687366 +0000 UTC m=+0.048814420 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 02:45:21 np0005603609 podman[242492]: 2026-01-31 07:45:21.348591253 +0000 UTC m=+0.058263826 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 02:45:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:21 np0005603609 nova_compute[221550]: 2026-01-31 07:45:21.736 221554 DEBUG nova.network.neutron [-] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:45:21 np0005603609 nova_compute[221550]: 2026-01-31 07:45:21.741 221554 INFO nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Creating config drive at /var/lib/nova/instances/10979044-21a0-4f6e-8d6d-bc6d04961714/disk.config
Jan 31 02:45:21 np0005603609 nova_compute[221550]: 2026-01-31 07:45:21.745 221554 DEBUG oslo_concurrency.processutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/10979044-21a0-4f6e-8d6d-bc6d04961714/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8yj8843o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:45:21 np0005603609 nova_compute[221550]: 2026-01-31 07:45:21.778 221554 INFO nova.compute.manager [-] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Took 3.97 seconds to deallocate network for instance.
Jan 31 02:45:21 np0005603609 nova_compute[221550]: 2026-01-31 07:45:21.864 221554 DEBUG oslo_concurrency.processutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/10979044-21a0-4f6e-8d6d-bc6d04961714/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8yj8843o" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:45:21 np0005603609 nova_compute[221550]: 2026-01-31 07:45:21.893 221554 DEBUG nova.storage.rbd_utils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] rbd image 10979044-21a0-4f6e-8d6d-bc6d04961714_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:45:21 np0005603609 nova_compute[221550]: 2026-01-31 07:45:21.898 221554 DEBUG oslo_concurrency.processutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/10979044-21a0-4f6e-8d6d-bc6d04961714/disk.config 10979044-21a0-4f6e-8d6d-bc6d04961714_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.077 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.126 221554 DEBUG oslo_concurrency.processutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/10979044-21a0-4f6e-8d6d-bc6d04961714/disk.config 10979044-21a0-4f6e-8d6d-bc6d04961714_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.127 221554 INFO nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Deleting local config drive /var/lib/nova/instances/10979044-21a0-4f6e-8d6d-bc6d04961714/disk.config because it was imported into RBD.
Jan 31 02:45:22 np0005603609 kernel: tap301f354a-10: entered promiscuous mode
Jan 31 02:45:22 np0005603609 NetworkManager[49064]: <info>  [1769845522.1709] manager: (tap301f354a-10): new Tun device (/org/freedesktop/NetworkManager/Devices/86)
Jan 31 02:45:22 np0005603609 ovn_controller[130359]: 2026-01-31T07:45:22Z|00170|binding|INFO|Claiming lport 301f354a-102e-4d48-8e41-36be449f4e70 for this chassis.
Jan 31 02:45:22 np0005603609 ovn_controller[130359]: 2026-01-31T07:45:22Z|00171|binding|INFO|301f354a-102e-4d48-8e41-36be449f4e70: Claiming fa:16:3e:c7:bb:e0 10.100.0.6
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.171 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.176 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.192 221554 INFO nova.compute.manager [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Took 0.41 seconds to detach 1 volumes for instance.
Jan 31 02:45:22 np0005603609 systemd-machined[190912]: New machine qemu-27-instance-00000039.
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.200 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:22 np0005603609 ovn_controller[130359]: 2026-01-31T07:45:22Z|00172|binding|INFO|Setting lport 301f354a-102e-4d48-8e41-36be449f4e70 ovn-installed in OVS
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.202 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.206 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:bb:e0 10.100.0.6'], port_security=['fa:16:3e:c7:bb:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '10979044-21a0-4f6e-8d6d-bc6d04961714', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7077deb2-06a0-4e93-8714-7555d93557cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07ac56babd144839be6d08563340e6bd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2da1fc29-dd97-45b5-a69f-a954c8d9f902', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2803db5-d904-4d93-a43e-b71357b850fe, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=301f354a-102e-4d48-8e41-36be449f4e70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 02:45:22 np0005603609 ovn_controller[130359]: 2026-01-31T07:45:22Z|00173|binding|INFO|Setting lport 301f354a-102e-4d48-8e41-36be449f4e70 up in Southbound
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.207 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 301f354a-102e-4d48-8e41-36be449f4e70 in datapath 7077deb2-06a0-4e93-8714-7555d93557cf bound to our chassis
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.208 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7077deb2-06a0-4e93-8714-7555d93557cf
Jan 31 02:45:22 np0005603609 systemd[1]: Started Virtual Machine qemu-27-instance-00000039.
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.214 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c791b534-6358-4ca8-929f-7365313ad5e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.215 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7077deb2-01 in ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.216 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7077deb2-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.216 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[70e8c8e0-29b2-401e-b7d6-91bc35c5ce10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.217 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4bcc091d-01e5-4fe1-9c07-737725adeaaa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:45:22 np0005603609 systemd-udevd[242606]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.224 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[41b093ed-2077-4a7e-aa29-789e2bca3469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:45:22 np0005603609 NetworkManager[49064]: <info>  [1769845522.2334] device (tap301f354a-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.232 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f7713c1e-464c-4d23-ae08-239e6237a326]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:45:22 np0005603609 NetworkManager[49064]: <info>  [1769845522.2343] device (tap301f354a-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.251 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[daa75084-435c-4fb0-b753-735b4b5fd645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:45:22 np0005603609 NetworkManager[49064]: <info>  [1769845522.2573] manager: (tap7077deb2-00): new Veth device (/org/freedesktop/NetworkManager/Devices/87)
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.256 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2955ef-8eba-40f5-87cb-d0471242add7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.277 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[71699a7b-9d89-4098-aa5d-135a27d5ed65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.280 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[2eda8e99-b103-41b4-a0a8-3b72d320538d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:45:22 np0005603609 NetworkManager[49064]: <info>  [1769845522.2952] device (tap7077deb2-00): carrier: link connected
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.299 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed4486c-d706-4a34-ae24-198f2a8b071c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.309 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[06058069-8533-40d6-91bb-020a4a0de6e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7077deb2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:9c:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576199, 'reachable_time': 27440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242638, 'error': None, 'target': 'ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.314 221554 DEBUG oslo_concurrency.lockutils [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.314 221554 DEBUG oslo_concurrency.lockutils [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.321 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1c53154b-4944-4e8e-b3c6-0f6cb14781ae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe56:9c01'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576199, 'tstamp': 576199}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242639, 'error': None, 'target': 'ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.330 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[08a5a1a4-9b2f-422c-80aa-b7897d845395]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7077deb2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:9c:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576199, 'reachable_time': 27440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 242640, 'error': None, 'target': 'ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.354 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0597fb-4096-4c0d-88ab-0632ce623a9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.389 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b72eef79-5a66-4e49-b1d6-49924b56d333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.390 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7077deb2-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.390 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.391 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7077deb2-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:22 np0005603609 kernel: tap7077deb2-00: entered promiscuous mode
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.395 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:22 np0005603609 NetworkManager[49064]: <info>  [1769845522.3971] manager: (tap7077deb2-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.397 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7077deb2-00, col_values=(('external_ids', {'iface-id': '5e30ad3f-073b-4a38-b984-d0517ecbf784'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:22 np0005603609 ovn_controller[130359]: 2026-01-31T07:45:22Z|00174|binding|INFO|Releasing lport 5e30ad3f-073b-4a38-b984-d0517ecbf784 from this chassis (sb_readonly=0)
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.398 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.399 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7077deb2-06a0-4e93-8714-7555d93557cf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7077deb2-06a0-4e93-8714-7555d93557cf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.400 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f75f2e69-b929-4f92-bd42-3102dc2069d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.401 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-7077deb2-06a0-4e93-8714-7555d93557cf
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/7077deb2-06a0-4e93-8714-7555d93557cf.pid.haproxy
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 7077deb2-06a0-4e93-8714-7555d93557cf
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:45:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:45:22.401 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf', 'env', 'PROCESS_TAG=haproxy-7077deb2-06a0-4e93-8714-7555d93557cf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7077deb2-06a0-4e93-8714-7555d93557cf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.403 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:22.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.442 221554 DEBUG oslo_concurrency.processutils [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.704 221554 DEBUG nova.network.neutron [req-dc3356ae-a886-4046-96b2-6eb527755172 req-f791f05b-2a89-49fa-a0e3-70ad0283a829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Updated VIF entry in instance network info cache for port 301f354a-102e-4d48-8e41-36be449f4e70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.705 221554 DEBUG nova.network.neutron [req-dc3356ae-a886-4046-96b2-6eb527755172 req-f791f05b-2a89-49fa-a0e3-70ad0283a829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Updating instance_info_cache with network_info: [{"id": "301f354a-102e-4d48-8e41-36be449f4e70", "address": "fa:16:3e:c7:bb:e0", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap301f354a-10", "ovs_interfaceid": "301f354a-102e-4d48-8e41-36be449f4e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.712 221554 DEBUG nova.compute.manager [req-78ea563a-2237-4cd0-b272-84bb06f221bf req-feee85ec-d213-4911-bc79-8973960aef78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Received event network-vif-plugged-301f354a-102e-4d48-8e41-36be449f4e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.713 221554 DEBUG oslo_concurrency.lockutils [req-78ea563a-2237-4cd0-b272-84bb06f221bf req-feee85ec-d213-4911-bc79-8973960aef78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.713 221554 DEBUG oslo_concurrency.lockutils [req-78ea563a-2237-4cd0-b272-84bb06f221bf req-feee85ec-d213-4911-bc79-8973960aef78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.714 221554 DEBUG oslo_concurrency.lockutils [req-78ea563a-2237-4cd0-b272-84bb06f221bf req-feee85ec-d213-4911-bc79-8973960aef78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.714 221554 DEBUG nova.compute.manager [req-78ea563a-2237-4cd0-b272-84bb06f221bf req-feee85ec-d213-4911-bc79-8973960aef78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Processing event network-vif-plugged-301f354a-102e-4d48-8e41-36be449f4e70 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.761 221554 DEBUG oslo_concurrency.lockutils [req-dc3356ae-a886-4046-96b2-6eb527755172 req-f791f05b-2a89-49fa-a0e3-70ad0283a829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-10979044-21a0-4f6e-8d6d-bc6d04961714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:45:22 np0005603609 podman[242692]: 2026-01-31 07:45:22.794653241 +0000 UTC m=+0.119093711 container create 690bf033dad1afb44236a60bfdc2809f1fb06bc429f1108e0e9bee119f0d92d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 02:45:22 np0005603609 podman[242692]: 2026-01-31 07:45:22.723196391 +0000 UTC m=+0.047636891 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:45:22 np0005603609 systemd[1]: Started libpod-conmon-690bf033dad1afb44236a60bfdc2809f1fb06bc429f1108e0e9bee119f0d92d9.scope.
Jan 31 02:45:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:45:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/408421975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:45:22 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:45:22 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ec76a93d4bf9c438d41c0a0e7d4ec847fea013476d4179b8b248baa427a47c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.930 221554 DEBUG oslo_concurrency.processutils [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.936 221554 DEBUG nova.compute.provider_tree [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:45:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:22.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:22 np0005603609 podman[242692]: 2026-01-31 07:45:22.954693762 +0000 UTC m=+0.279134252 container init 690bf033dad1afb44236a60bfdc2809f1fb06bc429f1108e0e9bee119f0d92d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:45:22 np0005603609 podman[242692]: 2026-01-31 07:45:22.958981635 +0000 UTC m=+0.283422105 container start 690bf033dad1afb44236a60bfdc2809f1fb06bc429f1108e0e9bee119f0d92d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:45:22 np0005603609 neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf[242707]: [NOTICE]   (242713) : New worker (242715) forked
Jan 31 02:45:22 np0005603609 neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf[242707]: [NOTICE]   (242713) : Loading success.
Jan 31 02:45:22 np0005603609 nova_compute[221550]: 2026-01-31 07:45:22.993 221554 DEBUG nova.scheduler.client.report [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.185 221554 DEBUG oslo_concurrency.lockutils [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.201 221554 DEBUG nova.compute.manager [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Received event network-vif-deleted-4e76293d-e306-4fb0-a2e3-d93b835dc120 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.248 221554 INFO nova.scheduler.client.report [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Deleted allocations for instance 3087c480-1e28-4efd-8f41-70490a9295ef#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.251 221554 DEBUG nova.compute.manager [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.251 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845523.2507298, 10979044-21a0-4f6e-8d6d-bc6d04961714 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.251 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] VM Started (Lifecycle Event)#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.255 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.258 221554 INFO nova.virt.libvirt.driver [-] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Instance spawned successfully.#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.258 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.316 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.319 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.327 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.328 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.328 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.328 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.329 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.329 221554 DEBUG nova.virt.libvirt.driver [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.367 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.368 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845523.2528336, 10979044-21a0-4f6e-8d6d-bc6d04961714 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.368 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.414 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.417 221554 DEBUG oslo_concurrency.lockutils [None req-b9c208f1-d475-49d7-b86d-aac52906a36b 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "3087c480-1e28-4efd-8f41-70490a9295ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.420 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845523.2542882, 10979044-21a0-4f6e-8d6d-bc6d04961714 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.420 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] VM Resumed (Lifecycle Event)
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.437 221554 INFO nova.compute.manager [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Took 16.43 seconds to spawn the instance on the hypervisor.
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.438 221554 DEBUG nova.compute.manager [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.480 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.482 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.527 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.627 221554 INFO nova.compute.manager [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Took 20.79 seconds to build instance.
Jan 31 02:45:23 np0005603609 nova_compute[221550]: 2026-01-31 07:45:23.662 221554 DEBUG oslo_concurrency.lockutils [None req-041ac1f3-0f79-45e3-8442-742c34f8e825 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:45:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:45:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:45:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:24.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:45:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:24.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:45:24 np0005603609 nova_compute[221550]: 2026-01-31 07:45:24.972 221554 DEBUG nova.compute.manager [req-94052e17-0916-43a1-aee7-1de49b3d9ca6 req-52512515-3fa5-41a6-bc5b-6e9f492205e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Received event network-vif-plugged-301f354a-102e-4d48-8e41-36be449f4e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:45:24 np0005603609 nova_compute[221550]: 2026-01-31 07:45:24.972 221554 DEBUG oslo_concurrency.lockutils [req-94052e17-0916-43a1-aee7-1de49b3d9ca6 req-52512515-3fa5-41a6-bc5b-6e9f492205e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:45:24 np0005603609 nova_compute[221550]: 2026-01-31 07:45:24.972 221554 DEBUG oslo_concurrency.lockutils [req-94052e17-0916-43a1-aee7-1de49b3d9ca6 req-52512515-3fa5-41a6-bc5b-6e9f492205e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:45:24 np0005603609 nova_compute[221550]: 2026-01-31 07:45:24.972 221554 DEBUG oslo_concurrency.lockutils [req-94052e17-0916-43a1-aee7-1de49b3d9ca6 req-52512515-3fa5-41a6-bc5b-6e9f492205e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:45:24 np0005603609 nova_compute[221550]: 2026-01-31 07:45:24.973 221554 DEBUG nova.compute.manager [req-94052e17-0916-43a1-aee7-1de49b3d9ca6 req-52512515-3fa5-41a6-bc5b-6e9f492205e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] No waiting events found dispatching network-vif-plugged-301f354a-102e-4d48-8e41-36be449f4e70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:45:24 np0005603609 nova_compute[221550]: 2026-01-31 07:45:24.973 221554 WARNING nova.compute.manager [req-94052e17-0916-43a1-aee7-1de49b3d9ca6 req-52512515-3fa5-41a6-bc5b-6e9f492205e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Received unexpected event network-vif-plugged-301f354a-102e-4d48-8e41-36be449f4e70 for instance with vm_state active and task_state None.
Jan 31 02:45:25 np0005603609 nova_compute[221550]: 2026-01-31 07:45:25.339 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:25 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:45:25 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:45:25 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Jan 31 02:45:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:45:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:26.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:45:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:26.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:27 np0005603609 nova_compute[221550]: 2026-01-31 07:45:27.079 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:28.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:28.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:29 np0005603609 NetworkManager[49064]: <info>  [1769845529.3246] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Jan 31 02:45:29 np0005603609 NetworkManager[49064]: <info>  [1769845529.3255] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Jan 31 02:45:29 np0005603609 nova_compute[221550]: 2026-01-31 07:45:29.324 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:29 np0005603609 nova_compute[221550]: 2026-01-31 07:45:29.391 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:29 np0005603609 ovn_controller[130359]: 2026-01-31T07:45:29Z|00175|binding|INFO|Releasing lport 5e30ad3f-073b-4a38-b984-d0517ecbf784 from this chassis (sb_readonly=0)
Jan 31 02:45:29 np0005603609 nova_compute[221550]: 2026-01-31 07:45:29.420 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:30 np0005603609 nova_compute[221550]: 2026-01-31 07:45:30.341 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:30.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:30.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:31 np0005603609 nova_compute[221550]: 2026-01-31 07:45:31.425 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845516.4235852, 3087c480-1e28-4efd-8f41-70490a9295ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:45:31 np0005603609 nova_compute[221550]: 2026-01-31 07:45:31.425 221554 INFO nova.compute.manager [-] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] VM Stopped (Lifecycle Event)
Jan 31 02:45:31 np0005603609 nova_compute[221550]: 2026-01-31 07:45:31.462 221554 DEBUG nova.compute.manager [None req-20279a47-37f0-4703-98c2-ce6084eaacbb - - - - - -] [instance: 3087c480-1e28-4efd-8f41-70490a9295ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:45:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:32 np0005603609 nova_compute[221550]: 2026-01-31 07:45:32.081 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:45:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:32.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:45:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:45:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:32.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:45:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:45:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:34.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:45:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:34.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:35 np0005603609 nova_compute[221550]: 2026-01-31 07:45:35.343 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:36.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:36 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:45:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e210 e210: 3 total, 3 up, 3 in
Jan 31 02:45:36 np0005603609 ovn_controller[130359]: 2026-01-31T07:45:36Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c7:bb:e0 10.100.0.6
Jan 31 02:45:36 np0005603609 ovn_controller[130359]: 2026-01-31T07:45:36Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:bb:e0 10.100.0.6
Jan 31 02:45:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:45:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:36.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:45:37 np0005603609 nova_compute[221550]: 2026-01-31 07:45:37.084 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:45:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:38.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:45:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:38.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:39 np0005603609 nova_compute[221550]: 2026-01-31 07:45:39.357 221554 DEBUG nova.compute.manager [req-5f3dc07f-8413-4c0d-978a-15b47ac1d662 req-b8383c41-c41f-479d-a78b-24062522ac7b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Received event network-changed-301f354a-102e-4d48-8e41-36be449f4e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:45:39 np0005603609 nova_compute[221550]: 2026-01-31 07:45:39.358 221554 DEBUG nova.compute.manager [req-5f3dc07f-8413-4c0d-978a-15b47ac1d662 req-b8383c41-c41f-479d-a78b-24062522ac7b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Refreshing instance network info cache due to event network-changed-301f354a-102e-4d48-8e41-36be449f4e70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 02:45:39 np0005603609 nova_compute[221550]: 2026-01-31 07:45:39.358 221554 DEBUG oslo_concurrency.lockutils [req-5f3dc07f-8413-4c0d-978a-15b47ac1d662 req-b8383c41-c41f-479d-a78b-24062522ac7b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-10979044-21a0-4f6e-8d6d-bc6d04961714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:45:39 np0005603609 nova_compute[221550]: 2026-01-31 07:45:39.358 221554 DEBUG oslo_concurrency.lockutils [req-5f3dc07f-8413-4c0d-978a-15b47ac1d662 req-b8383c41-c41f-479d-a78b-24062522ac7b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-10979044-21a0-4f6e-8d6d-bc6d04961714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:45:39 np0005603609 nova_compute[221550]: 2026-01-31 07:45:39.359 221554 DEBUG nova.network.neutron [req-5f3dc07f-8413-4c0d-978a-15b47ac1d662 req-b8383c41-c41f-479d-a78b-24062522ac7b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Refreshing network info cache for port 301f354a-102e-4d48-8e41-36be449f4e70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 02:45:40 np0005603609 nova_compute[221550]: 2026-01-31 07:45:40.345 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:40.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:45:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:40.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:45:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:42 np0005603609 nova_compute[221550]: 2026-01-31 07:45:42.088 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:45:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:42.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:45:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:42.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:44.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:44.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:45 np0005603609 nova_compute[221550]: 2026-01-31 07:45:45.348 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:46 np0005603609 nova_compute[221550]: 2026-01-31 07:45:46.344 221554 DEBUG nova.network.neutron [req-5f3dc07f-8413-4c0d-978a-15b47ac1d662 req-b8383c41-c41f-479d-a78b-24062522ac7b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Updated VIF entry in instance network info cache for port 301f354a-102e-4d48-8e41-36be449f4e70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 02:45:46 np0005603609 nova_compute[221550]: 2026-01-31 07:45:46.344 221554 DEBUG nova.network.neutron [req-5f3dc07f-8413-4c0d-978a-15b47ac1d662 req-b8383c41-c41f-479d-a78b-24062522ac7b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Updating instance_info_cache with network_info: [{"id": "301f354a-102e-4d48-8e41-36be449f4e70", "address": "fa:16:3e:c7:bb:e0", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap301f354a-10", "ovs_interfaceid": "301f354a-102e-4d48-8e41-36be449f4e70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:45:46 np0005603609 nova_compute[221550]: 2026-01-31 07:45:46.389 221554 DEBUG oslo_concurrency.lockutils [req-5f3dc07f-8413-4c0d-978a-15b47ac1d662 req-b8383c41-c41f-479d-a78b-24062522ac7b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-10979044-21a0-4f6e-8d6d-bc6d04961714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:45:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:45:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:46.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:45:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e211 e211: 3 total, 3 up, 3 in
Jan 31 02:45:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:45:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:46.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:45:47 np0005603609 nova_compute[221550]: 2026-01-31 07:45:47.091 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e212 e212: 3 total, 3 up, 3 in
Jan 31 02:45:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:48.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:45:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:48.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:45:50 np0005603609 nova_compute[221550]: 2026-01-31 07:45:50.350 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:45:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:50.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:45:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:50.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e213 e213: 3 total, 3 up, 3 in
Jan 31 02:45:51 np0005603609 nova_compute[221550]: 2026-01-31 07:45:51.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:45:51 np0005603609 nova_compute[221550]: 2026-01-31 07:45:51.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 02:45:51 np0005603609 nova_compute[221550]: 2026-01-31 07:45:51.758 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 02:45:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:52 np0005603609 nova_compute[221550]: 2026-01-31 07:45:52.093 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:52 np0005603609 podman[242822]: 2026-01-31 07:45:52.174055205 +0000 UTC m=+0.051959264 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:45:52 np0005603609 podman[242821]: 2026-01-31 07:45:52.197651269 +0000 UTC m=+0.077341322 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:45:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:52.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:52.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:45:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:54.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:45:54 np0005603609 nova_compute[221550]: 2026-01-31 07:45:54.758 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:54.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:55 np0005603609 nova_compute[221550]: 2026-01-31 07:45:55.352 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e214 e214: 3 total, 3 up, 3 in
Jan 31 02:45:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:56.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:56 np0005603609 nova_compute[221550]: 2026-01-31 07:45:56.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:56 np0005603609 nova_compute[221550]: 2026-01-31 07:45:56.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 02:45:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:56.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:57 np0005603609 nova_compute[221550]: 2026-01-31 07:45:57.095 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:45:57.409170) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557409210, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2756, "num_deletes": 511, "total_data_size": 5744110, "memory_usage": 5816568, "flush_reason": "Manual Compaction"}
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557445200, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 3746998, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31079, "largest_seqno": 33830, "table_properties": {"data_size": 3736143, "index_size": 6453, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 26265, "raw_average_key_size": 20, "raw_value_size": 3712069, "raw_average_value_size": 2842, "num_data_blocks": 280, "num_entries": 1306, "num_filter_entries": 1306, "num_deletions": 511, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845336, "oldest_key_time": 1769845336, "file_creation_time": 1769845557, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 36094 microseconds, and 6384 cpu microseconds.
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:45:57.445259) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 3746998 bytes OK
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:45:57.445282) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:45:57.448846) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:45:57.448881) EVENT_LOG_v1 {"time_micros": 1769845557448872, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:45:57.448899) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 5731075, prev total WAL file size 5731075, number of live WAL files 2.
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:45:57.450047) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3659KB)], [60(8457KB)]
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557450085, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 12407608, "oldest_snapshot_seqno": -1}
Jan 31 02:45:57 np0005603609 nova_compute[221550]: 2026-01-31 07:45:57.462 221554 DEBUG nova.compute.manager [req-047165ba-3c7e-42d3-991e-89f6f2b9ac68 req-e8874693-f055-42db-be42-5869e3530fab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Received event network-changed-301f354a-102e-4d48-8e41-36be449f4e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:45:57 np0005603609 nova_compute[221550]: 2026-01-31 07:45:57.462 221554 DEBUG nova.compute.manager [req-047165ba-3c7e-42d3-991e-89f6f2b9ac68 req-e8874693-f055-42db-be42-5869e3530fab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Refreshing instance network info cache due to event network-changed-301f354a-102e-4d48-8e41-36be449f4e70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:45:57 np0005603609 nova_compute[221550]: 2026-01-31 07:45:57.462 221554 DEBUG oslo_concurrency.lockutils [req-047165ba-3c7e-42d3-991e-89f6f2b9ac68 req-e8874693-f055-42db-be42-5869e3530fab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-10979044-21a0-4f6e-8d6d-bc6d04961714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:45:57 np0005603609 nova_compute[221550]: 2026-01-31 07:45:57.463 221554 DEBUG oslo_concurrency.lockutils [req-047165ba-3c7e-42d3-991e-89f6f2b9ac68 req-e8874693-f055-42db-be42-5869e3530fab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-10979044-21a0-4f6e-8d6d-bc6d04961714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:45:57 np0005603609 nova_compute[221550]: 2026-01-31 07:45:57.463 221554 DEBUG nova.network.neutron [req-047165ba-3c7e-42d3-991e-89f6f2b9ac68 req-e8874693-f055-42db-be42-5869e3530fab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Refreshing network info cache for port 301f354a-102e-4d48-8e41-36be449f4e70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 5819 keys, 10385257 bytes, temperature: kUnknown
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557555137, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 10385257, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10344105, "index_size": 25490, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 149888, "raw_average_key_size": 25, "raw_value_size": 10237233, "raw_average_value_size": 1759, "num_data_blocks": 1025, "num_entries": 5819, "num_filter_entries": 5819, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769845557, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:45:57.555341) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10385257 bytes
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:45:57.569114) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.0 rd, 98.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.3 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 6862, records dropped: 1043 output_compression: NoCompression
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:45:57.569182) EVENT_LOG_v1 {"time_micros": 1769845557569168, "job": 36, "event": "compaction_finished", "compaction_time_micros": 105123, "compaction_time_cpu_micros": 16646, "output_level": 6, "num_output_files": 1, "total_output_size": 10385257, "num_input_records": 6862, "num_output_records": 5819, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557569591, "job": 36, "event": "table_file_deletion", "file_number": 62}
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557570690, "job": 36, "event": "table_file_deletion", "file_number": 60}
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:45:57.449900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:45:57.570742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:45:57.570747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:45:57.570749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:45:57.570751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:45:57 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:45:57.570753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:45:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:58.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:58 np0005603609 nova_compute[221550]: 2026-01-31 07:45:58.704 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:58 np0005603609 nova_compute[221550]: 2026-01-31 07:45:58.744 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Triggering sync for uuid 10979044-21a0-4f6e-8d6d-bc6d04961714 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 02:45:58 np0005603609 nova_compute[221550]: 2026-01-31 07:45:58.744 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "10979044-21a0-4f6e-8d6d-bc6d04961714" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:58 np0005603609 nova_compute[221550]: 2026-01-31 07:45:58.744 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:58 np0005603609 nova_compute[221550]: 2026-01-31 07:45:58.788 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:45:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:45:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:58.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:46:00 np0005603609 nova_compute[221550]: 2026-01-31 07:46:00.354 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:46:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:00.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:46:00 np0005603609 nova_compute[221550]: 2026-01-31 07:46:00.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:00 np0005603609 nova_compute[221550]: 2026-01-31 07:46:00.787 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:00 np0005603609 nova_compute[221550]: 2026-01-31 07:46:00.788 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:00 np0005603609 nova_compute[221550]: 2026-01-31 07:46:00.788 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:00 np0005603609 nova_compute[221550]: 2026-01-31 07:46:00.788 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:46:00 np0005603609 nova_compute[221550]: 2026-01-31 07:46:00.788 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:46:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:46:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:00.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:46:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:46:01 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3982097981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.197 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.352 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.352 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:46:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e215 e215: 3 total, 3 up, 3 in
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.527 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.529 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4483MB free_disk=20.74002456665039GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.529 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.529 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.682 221554 DEBUG nova.network.neutron [req-047165ba-3c7e-42d3-991e-89f6f2b9ac68 req-e8874693-f055-42db-be42-5869e3530fab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Updated VIF entry in instance network info cache for port 301f354a-102e-4d48-8e41-36be449f4e70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.682 221554 DEBUG nova.network.neutron [req-047165ba-3c7e-42d3-991e-89f6f2b9ac68 req-e8874693-f055-42db-be42-5869e3530fab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Updating instance_info_cache with network_info: [{"id": "301f354a-102e-4d48-8e41-36be449f4e70", "address": "fa:16:3e:c7:bb:e0", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap301f354a-10", "ovs_interfaceid": "301f354a-102e-4d48-8e41-36be449f4e70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.704 221554 DEBUG oslo_concurrency.lockutils [req-047165ba-3c7e-42d3-991e-89f6f2b9ac68 req-e8874693-f055-42db-be42-5869e3530fab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-10979044-21a0-4f6e-8d6d-bc6d04961714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.738 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 10979044-21a0-4f6e-8d6d-bc6d04961714 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.738 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.738 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.787 221554 DEBUG oslo_concurrency.lockutils [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "10979044-21a0-4f6e-8d6d-bc6d04961714" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.788 221554 DEBUG oslo_concurrency.lockutils [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.788 221554 DEBUG oslo_concurrency.lockutils [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.788 221554 DEBUG oslo_concurrency.lockutils [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.788 221554 DEBUG oslo_concurrency.lockutils [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.789 221554 INFO nova.compute.manager [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Terminating instance#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.790 221554 DEBUG nova.compute.manager [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:46:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:01 np0005603609 kernel: tap301f354a-10 (unregistering): left promiscuous mode
Jan 31 02:46:01 np0005603609 NetworkManager[49064]: <info>  [1769845561.9595] device (tap301f354a-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:46:01 np0005603609 ovn_controller[130359]: 2026-01-31T07:46:01Z|00176|binding|INFO|Releasing lport 301f354a-102e-4d48-8e41-36be449f4e70 from this chassis (sb_readonly=0)
Jan 31 02:46:01 np0005603609 ovn_controller[130359]: 2026-01-31T07:46:01Z|00177|binding|INFO|Setting lport 301f354a-102e-4d48-8e41-36be449f4e70 down in Southbound
Jan 31 02:46:01 np0005603609 ovn_controller[130359]: 2026-01-31T07:46:01Z|00178|binding|INFO|Removing iface tap301f354a-10 ovn-installed in OVS
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.971 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:01 np0005603609 nova_compute[221550]: 2026-01-31 07:46:01.984 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:02.006 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:bb:e0 10.100.0.6'], port_security=['fa:16:3e:c7:bb:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '10979044-21a0-4f6e-8d6d-bc6d04961714', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7077deb2-06a0-4e93-8714-7555d93557cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07ac56babd144839be6d08563340e6bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2da1fc29-dd97-45b5-a69f-a954c8d9f902', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2803db5-d904-4d93-a43e-b71357b850fe, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=301f354a-102e-4d48-8e41-36be449f4e70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:46:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:02.008 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 301f354a-102e-4d48-8e41-36be449f4e70 in datapath 7077deb2-06a0-4e93-8714-7555d93557cf unbound from our chassis#033[00m
Jan 31 02:46:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:02.012 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7077deb2-06a0-4e93-8714-7555d93557cf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:46:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:02.013 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3d23f1-6624-4ff5-b7a9-facc48cb24db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:02.014 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf namespace which is not needed anymore#033[00m
Jan 31 02:46:02 np0005603609 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000039.scope: Deactivated successfully.
Jan 31 02:46:02 np0005603609 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000039.scope: Consumed 14.907s CPU time.
Jan 31 02:46:02 np0005603609 systemd-machined[190912]: Machine qemu-27-instance-00000039 terminated.
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.071 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.097 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:02 np0005603609 neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf[242707]: [NOTICE]   (242713) : haproxy version is 2.8.14-c23fe91
Jan 31 02:46:02 np0005603609 neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf[242707]: [NOTICE]   (242713) : path to executable is /usr/sbin/haproxy
Jan 31 02:46:02 np0005603609 neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf[242707]: [WARNING]  (242713) : Exiting Master process...
Jan 31 02:46:02 np0005603609 neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf[242707]: [ALERT]    (242713) : Current worker (242715) exited with code 143 (Terminated)
Jan 31 02:46:02 np0005603609 neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf[242707]: [WARNING]  (242713) : All workers exited. Exiting... (0)
Jan 31 02:46:02 np0005603609 systemd[1]: libpod-690bf033dad1afb44236a60bfdc2809f1fb06bc429f1108e0e9bee119f0d92d9.scope: Deactivated successfully.
Jan 31 02:46:02 np0005603609 podman[242917]: 2026-01-31 07:46:02.132592713 +0000 UTC m=+0.039199849 container died 690bf033dad1afb44236a60bfdc2809f1fb06bc429f1108e0e9bee119f0d92d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:46:02 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-690bf033dad1afb44236a60bfdc2809f1fb06bc429f1108e0e9bee119f0d92d9-userdata-shm.mount: Deactivated successfully.
Jan 31 02:46:02 np0005603609 systemd[1]: var-lib-containers-storage-overlay-a0ec76a93d4bf9c438d41c0a0e7d4ec847fea013476d4179b8b248baa427a47c-merged.mount: Deactivated successfully.
Jan 31 02:46:02 np0005603609 podman[242917]: 2026-01-31 07:46:02.167760025 +0000 UTC m=+0.074367161 container cleanup 690bf033dad1afb44236a60bfdc2809f1fb06bc429f1108e0e9bee119f0d92d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 02:46:02 np0005603609 systemd[1]: libpod-conmon-690bf033dad1afb44236a60bfdc2809f1fb06bc429f1108e0e9bee119f0d92d9.scope: Deactivated successfully.
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.233 221554 INFO nova.virt.libvirt.driver [-] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Instance destroyed successfully.#033[00m
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.234 221554 DEBUG nova.objects.instance [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lazy-loading 'resources' on Instance uuid 10979044-21a0-4f6e-8d6d-bc6d04961714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:46:02 np0005603609 podman[242956]: 2026-01-31 07:46:02.234658656 +0000 UTC m=+0.050167252 container remove 690bf033dad1afb44236a60bfdc2809f1fb06bc429f1108e0e9bee119f0d92d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:46:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:02.240 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d4362db5-71cc-4c36-b4bb-d86e990c3c62]: (4, ('Sat Jan 31 07:46:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf (690bf033dad1afb44236a60bfdc2809f1fb06bc429f1108e0e9bee119f0d92d9)\n690bf033dad1afb44236a60bfdc2809f1fb06bc429f1108e0e9bee119f0d92d9\nSat Jan 31 07:46:02 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf (690bf033dad1afb44236a60bfdc2809f1fb06bc429f1108e0e9bee119f0d92d9)\n690bf033dad1afb44236a60bfdc2809f1fb06bc429f1108e0e9bee119f0d92d9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:02.242 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3d08438e-e511-45ca-ab95-44e9d09453f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:02.243 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7077deb2-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.257 221554 DEBUG nova.virt.libvirt.vif [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:45:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-749773993',display_name='tempest-FloatingIPsAssociationTestJSON-server-749773993',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-749773993',id=57,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:45:23Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='07ac56babd144839be6d08563340e6bd',ramdisk_id='',reservation_id='r-krbyjgit',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-656325591',owner_user_name='tempest-FloatingIPsAssociationTestJSON-656325591-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:45:23Z,user_data=None,user_id='a6a24b8bc028456fa6de79d3b792e79a',uuid=10979044-21a0-4f6e-8d6d-bc6d04961714,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "301f354a-102e-4d48-8e41-36be449f4e70", "address": "fa:16:3e:c7:bb:e0", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap301f354a-10", "ovs_interfaceid": "301f354a-102e-4d48-8e41-36be449f4e70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.258 221554 DEBUG nova.network.os_vif_util [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Converting VIF {"id": "301f354a-102e-4d48-8e41-36be449f4e70", "address": "fa:16:3e:c7:bb:e0", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap301f354a-10", "ovs_interfaceid": "301f354a-102e-4d48-8e41-36be449f4e70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.258 221554 DEBUG nova.network.os_vif_util [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:bb:e0,bridge_name='br-int',has_traffic_filtering=True,id=301f354a-102e-4d48-8e41-36be449f4e70,network=Network(7077deb2-06a0-4e93-8714-7555d93557cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap301f354a-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.259 221554 DEBUG os_vif [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:bb:e0,bridge_name='br-int',has_traffic_filtering=True,id=301f354a-102e-4d48-8e41-36be449f4e70,network=Network(7077deb2-06a0-4e93-8714-7555d93557cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap301f354a-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.260 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.260 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap301f354a-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.278 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:02 np0005603609 kernel: tap7077deb2-00: left promiscuous mode
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.280 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.282 221554 INFO os_vif [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:bb:e0,bridge_name='br-int',has_traffic_filtering=True,id=301f354a-102e-4d48-8e41-36be449f4e70,network=Network(7077deb2-06a0-4e93-8714-7555d93557cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap301f354a-10')#033[00m
Jan 31 02:46:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:02.285 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[be891f56-b048-4b36-ad1e-8d0361e35321]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:02.299 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[42951111-de9c-4f45-9a1d-20ad22732ff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.298 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:02.300 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5a35ad5e-3bd3-4eb0-b64e-07f59034625d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:02.310 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[205a9980-250e-41e6-974c-30a1ae141c4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576194, 'reachable_time': 32881, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243005, 'error': None, 'target': 'ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:02 np0005603609 systemd[1]: run-netns-ovnmeta\x2d7077deb2\x2d06a0\x2d4e93\x2d8714\x2d7555d93557cf.mount: Deactivated successfully.
Jan 31 02:46:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:02.314 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:46:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:02.314 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[939fd9bb-ab2a-46e6-bc1c-c641387dead1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:46:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:02.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:46:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:46:02 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2722501756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.522 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.526 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.560 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.606 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:46:02 np0005603609 nova_compute[221550]: 2026-01-31 07:46:02.606 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:46:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:03.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:46:03 np0005603609 nova_compute[221550]: 2026-01-31 07:46:03.607 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:03 np0005603609 nova_compute[221550]: 2026-01-31 07:46:03.607 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:46:03 np0005603609 nova_compute[221550]: 2026-01-31 07:46:03.607 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:46:03 np0005603609 nova_compute[221550]: 2026-01-31 07:46:03.629 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 31 02:46:03 np0005603609 nova_compute[221550]: 2026-01-31 07:46:03.630 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:46:03 np0005603609 nova_compute[221550]: 2026-01-31 07:46:03.630 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:03 np0005603609 nova_compute[221550]: 2026-01-31 07:46:03.630 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:03 np0005603609 nova_compute[221550]: 2026-01-31 07:46:03.630 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:03 np0005603609 nova_compute[221550]: 2026-01-31 07:46:03.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:03 np0005603609 nova_compute[221550]: 2026-01-31 07:46:03.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:03 np0005603609 nova_compute[221550]: 2026-01-31 07:46:03.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:46:04 np0005603609 nova_compute[221550]: 2026-01-31 07:46:04.401 221554 DEBUG nova.compute.manager [req-0c56f680-12d5-4473-b0ce-6014859e6cfc req-fd70c114-c834-4fe0-ba8f-816dd9032c37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Received event network-vif-unplugged-301f354a-102e-4d48-8e41-36be449f4e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:46:04 np0005603609 nova_compute[221550]: 2026-01-31 07:46:04.402 221554 DEBUG oslo_concurrency.lockutils [req-0c56f680-12d5-4473-b0ce-6014859e6cfc req-fd70c114-c834-4fe0-ba8f-816dd9032c37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:04 np0005603609 nova_compute[221550]: 2026-01-31 07:46:04.402 221554 DEBUG oslo_concurrency.lockutils [req-0c56f680-12d5-4473-b0ce-6014859e6cfc req-fd70c114-c834-4fe0-ba8f-816dd9032c37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:04 np0005603609 nova_compute[221550]: 2026-01-31 07:46:04.403 221554 DEBUG oslo_concurrency.lockutils [req-0c56f680-12d5-4473-b0ce-6014859e6cfc req-fd70c114-c834-4fe0-ba8f-816dd9032c37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:04 np0005603609 nova_compute[221550]: 2026-01-31 07:46:04.403 221554 DEBUG nova.compute.manager [req-0c56f680-12d5-4473-b0ce-6014859e6cfc req-fd70c114-c834-4fe0-ba8f-816dd9032c37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] No waiting events found dispatching network-vif-unplugged-301f354a-102e-4d48-8e41-36be449f4e70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:46:04 np0005603609 nova_compute[221550]: 2026-01-31 07:46:04.404 221554 DEBUG nova.compute.manager [req-0c56f680-12d5-4473-b0ce-6014859e6cfc req-fd70c114-c834-4fe0-ba8f-816dd9032c37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Received event network-vif-unplugged-301f354a-102e-4d48-8e41-36be449f4e70 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:46:04 np0005603609 nova_compute[221550]: 2026-01-31 07:46:04.404 221554 DEBUG nova.compute.manager [req-0c56f680-12d5-4473-b0ce-6014859e6cfc req-fd70c114-c834-4fe0-ba8f-816dd9032c37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Received event network-vif-plugged-301f354a-102e-4d48-8e41-36be449f4e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:46:04 np0005603609 nova_compute[221550]: 2026-01-31 07:46:04.405 221554 DEBUG oslo_concurrency.lockutils [req-0c56f680-12d5-4473-b0ce-6014859e6cfc req-fd70c114-c834-4fe0-ba8f-816dd9032c37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:04 np0005603609 nova_compute[221550]: 2026-01-31 07:46:04.405 221554 DEBUG oslo_concurrency.lockutils [req-0c56f680-12d5-4473-b0ce-6014859e6cfc req-fd70c114-c834-4fe0-ba8f-816dd9032c37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:04 np0005603609 nova_compute[221550]: 2026-01-31 07:46:04.406 221554 DEBUG oslo_concurrency.lockutils [req-0c56f680-12d5-4473-b0ce-6014859e6cfc req-fd70c114-c834-4fe0-ba8f-816dd9032c37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:04 np0005603609 nova_compute[221550]: 2026-01-31 07:46:04.406 221554 DEBUG nova.compute.manager [req-0c56f680-12d5-4473-b0ce-6014859e6cfc req-fd70c114-c834-4fe0-ba8f-816dd9032c37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] No waiting events found dispatching network-vif-plugged-301f354a-102e-4d48-8e41-36be449f4e70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:46:04 np0005603609 nova_compute[221550]: 2026-01-31 07:46:04.406 221554 WARNING nova.compute.manager [req-0c56f680-12d5-4473-b0ce-6014859e6cfc req-fd70c114-c834-4fe0-ba8f-816dd9032c37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Received unexpected event network-vif-plugged-301f354a-102e-4d48-8e41-36be449f4e70 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:46:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:04.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:05.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:06.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:06 np0005603609 nova_compute[221550]: 2026-01-31 07:46:06.688 221554 INFO nova.virt.libvirt.driver [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Deleting instance files /var/lib/nova/instances/10979044-21a0-4f6e-8d6d-bc6d04961714_del#033[00m
Jan 31 02:46:06 np0005603609 nova_compute[221550]: 2026-01-31 07:46:06.689 221554 INFO nova.virt.libvirt.driver [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Deletion of /var/lib/nova/instances/10979044-21a0-4f6e-8d6d-bc6d04961714_del complete#033[00m
Jan 31 02:46:06 np0005603609 nova_compute[221550]: 2026-01-31 07:46:06.758 221554 INFO nova.compute.manager [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Took 4.97 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:46:06 np0005603609 nova_compute[221550]: 2026-01-31 07:46:06.759 221554 DEBUG oslo.service.loopingcall [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:46:06 np0005603609 nova_compute[221550]: 2026-01-31 07:46:06.759 221554 DEBUG nova.compute.manager [-] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:46:06 np0005603609 nova_compute[221550]: 2026-01-31 07:46:06.760 221554 DEBUG nova.network.neutron [-] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:46:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000046s ======
Jan 31 02:46:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:07.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000046s
Jan 31 02:46:07 np0005603609 nova_compute[221550]: 2026-01-31 07:46:07.099 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:07 np0005603609 nova_compute[221550]: 2026-01-31 07:46:07.321 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:07.480 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:07.481 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:07.481 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:07 np0005603609 nova_compute[221550]: 2026-01-31 07:46:07.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:07 np0005603609 nova_compute[221550]: 2026-01-31 07:46:07.680 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:08.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:08.683 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:46:08 np0005603609 nova_compute[221550]: 2026-01-31 07:46:08.683 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:08.684 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:46:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:09.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:09 np0005603609 nova_compute[221550]: 2026-01-31 07:46:09.393 221554 DEBUG nova.network.neutron [-] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:46:09 np0005603609 nova_compute[221550]: 2026-01-31 07:46:09.476 221554 INFO nova.compute.manager [-] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Took 2.72 seconds to deallocate network for instance.#033[00m
Jan 31 02:46:09 np0005603609 nova_compute[221550]: 2026-01-31 07:46:09.556 221554 DEBUG oslo_concurrency.lockutils [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:09 np0005603609 nova_compute[221550]: 2026-01-31 07:46:09.557 221554 DEBUG oslo_concurrency.lockutils [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:09 np0005603609 nova_compute[221550]: 2026-01-31 07:46:09.709 221554 DEBUG oslo_concurrency.processutils [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:46:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:46:10 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/254862565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:46:10 np0005603609 nova_compute[221550]: 2026-01-31 07:46:10.171 221554 DEBUG oslo_concurrency.processutils [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:46:10 np0005603609 nova_compute[221550]: 2026-01-31 07:46:10.176 221554 DEBUG nova.compute.provider_tree [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:46:10 np0005603609 nova_compute[221550]: 2026-01-31 07:46:10.216 221554 DEBUG nova.scheduler.client.report [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:46:10 np0005603609 nova_compute[221550]: 2026-01-31 07:46:10.257 221554 DEBUG oslo_concurrency.lockutils [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:10 np0005603609 nova_compute[221550]: 2026-01-31 07:46:10.331 221554 INFO nova.scheduler.client.report [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Deleted allocations for instance 10979044-21a0-4f6e-8d6d-bc6d04961714#033[00m
Jan 31 02:46:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:10.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:10 np0005603609 nova_compute[221550]: 2026-01-31 07:46:10.482 221554 DEBUG oslo_concurrency.lockutils [None req-6ae5ad0e-d88f-4fff-8254-293eaf9dcfa6 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "10979044-21a0-4f6e-8d6d-bc6d04961714" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:10 np0005603609 nova_compute[221550]: 2026-01-31 07:46:10.592 221554 DEBUG nova.compute.manager [req-adc6076c-19fd-4cfa-a847-122aa4e6ea8a req-1a05e75e-7d51-4da5-97fa-1b5e681f7151 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Received event network-vif-deleted-301f354a-102e-4d48-8e41-36be449f4e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:46:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:10.686 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:46:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:11.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.025 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "cef38897-c445-4e7d-83db-06fb75f5fcda" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.026 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "cef38897-c445-4e7d-83db-06fb75f5fcda" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.063 221554 DEBUG nova.compute.manager [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.101 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.153 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.153 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.166 221554 DEBUG nova.virt.hardware [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.167 221554 INFO nova.compute.claims [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.323 221554 DEBUG oslo_concurrency.processutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.342 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:12.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:46:12 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3818271398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.747 221554 DEBUG oslo_concurrency.processutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.753 221554 DEBUG nova.compute.provider_tree [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.797 221554 DEBUG nova.scheduler.client.report [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.832 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.833 221554 DEBUG nova.compute.manager [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.907 221554 DEBUG nova.compute.manager [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.908 221554 DEBUG nova.network.neutron [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.937 221554 INFO nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 02:46:12 np0005603609 nova_compute[221550]: 2026-01-31 07:46:12.977 221554 DEBUG nova.compute.manager [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 02:46:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 02:46:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:13.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 02:46:13 np0005603609 nova_compute[221550]: 2026-01-31 07:46:13.131 221554 DEBUG nova.compute.manager [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 02:46:13 np0005603609 nova_compute[221550]: 2026-01-31 07:46:13.133 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 02:46:13 np0005603609 nova_compute[221550]: 2026-01-31 07:46:13.134 221554 INFO nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Creating image(s)
Jan 31 02:46:13 np0005603609 nova_compute[221550]: 2026-01-31 07:46:13.548 221554 DEBUG nova.storage.rbd_utils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image cef38897-c445-4e7d-83db-06fb75f5fcda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:46:13 np0005603609 nova_compute[221550]: 2026-01-31 07:46:13.588 221554 DEBUG nova.storage.rbd_utils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image cef38897-c445-4e7d-83db-06fb75f5fcda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:46:13 np0005603609 nova_compute[221550]: 2026-01-31 07:46:13.709 221554 DEBUG nova.storage.rbd_utils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image cef38897-c445-4e7d-83db-06fb75f5fcda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:46:13 np0005603609 nova_compute[221550]: 2026-01-31 07:46:13.714 221554 DEBUG oslo_concurrency.processutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:46:13 np0005603609 nova_compute[221550]: 2026-01-31 07:46:13.736 221554 DEBUG nova.policy [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4cdbfeb437d54df89a0fb0f6621b8fdc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9a5c5f11e8f24f898d16bceb9925aaa0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 02:46:13 np0005603609 nova_compute[221550]: 2026-01-31 07:46:13.772 221554 DEBUG oslo_concurrency.processutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:46:13 np0005603609 nova_compute[221550]: 2026-01-31 07:46:13.773 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:46:13 np0005603609 nova_compute[221550]: 2026-01-31 07:46:13.773 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:46:13 np0005603609 nova_compute[221550]: 2026-01-31 07:46:13.774 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:46:13 np0005603609 nova_compute[221550]: 2026-01-31 07:46:13.802 221554 DEBUG nova.storage.rbd_utils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image cef38897-c445-4e7d-83db-06fb75f5fcda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:46:13 np0005603609 nova_compute[221550]: 2026-01-31 07:46:13.806 221554 DEBUG oslo_concurrency.processutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 cef38897-c445-4e7d-83db-06fb75f5fcda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:46:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:46:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:14.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:46:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:15.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:15 np0005603609 nova_compute[221550]: 2026-01-31 07:46:15.589 221554 DEBUG oslo_concurrency.processutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 cef38897-c445-4e7d-83db-06fb75f5fcda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.783s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:46:15 np0005603609 nova_compute[221550]: 2026-01-31 07:46:15.659 221554 DEBUG nova.storage.rbd_utils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] resizing rbd image cef38897-c445-4e7d-83db-06fb75f5fcda_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 02:46:15 np0005603609 nova_compute[221550]: 2026-01-31 07:46:15.804 221554 DEBUG nova.network.neutron [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Successfully created port: c19ce404-77d1-46ea-90da-e996d7dff364 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 02:46:16 np0005603609 nova_compute[221550]: 2026-01-31 07:46:16.183 221554 DEBUG nova.objects.instance [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lazy-loading 'migration_context' on Instance uuid cef38897-c445-4e7d-83db-06fb75f5fcda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:46:16 np0005603609 nova_compute[221550]: 2026-01-31 07:46:16.198 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 02:46:16 np0005603609 nova_compute[221550]: 2026-01-31 07:46:16.198 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Ensure instance console log exists: /var/lib/nova/instances/cef38897-c445-4e7d-83db-06fb75f5fcda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 02:46:16 np0005603609 nova_compute[221550]: 2026-01-31 07:46:16.199 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:46:16 np0005603609 nova_compute[221550]: 2026-01-31 07:46:16.199 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:46:16 np0005603609 nova_compute[221550]: 2026-01-31 07:46:16.200 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:46:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:16.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:46:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:17.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:46:17 np0005603609 nova_compute[221550]: 2026-01-31 07:46:17.103 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:46:17 np0005603609 nova_compute[221550]: 2026-01-31 07:46:17.231 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845562.229364, 10979044-21a0-4f6e-8d6d-bc6d04961714 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:46:17 np0005603609 nova_compute[221550]: 2026-01-31 07:46:17.231 221554 INFO nova.compute.manager [-] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] VM Stopped (Lifecycle Event)
Jan 31 02:46:17 np0005603609 nova_compute[221550]: 2026-01-31 07:46:17.264 221554 DEBUG nova.compute.manager [None req-707766e6-e2bc-4ac3-bc5b-354021022bbb - - - - - -] [instance: 10979044-21a0-4f6e-8d6d-bc6d04961714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:46:17 np0005603609 nova_compute[221550]: 2026-01-31 07:46:17.358 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:46:17 np0005603609 nova_compute[221550]: 2026-01-31 07:46:17.776 221554 DEBUG nova.network.neutron [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Successfully updated port: c19ce404-77d1-46ea-90da-e996d7dff364 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 02:46:17 np0005603609 nova_compute[221550]: 2026-01-31 07:46:17.824 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "refresh_cache-cef38897-c445-4e7d-83db-06fb75f5fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:46:17 np0005603609 nova_compute[221550]: 2026-01-31 07:46:17.824 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquired lock "refresh_cache-cef38897-c445-4e7d-83db-06fb75f5fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:46:17 np0005603609 nova_compute[221550]: 2026-01-31 07:46:17.824 221554 DEBUG nova.network.neutron [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 02:46:17 np0005603609 nova_compute[221550]: 2026-01-31 07:46:17.974 221554 DEBUG nova.compute.manager [req-2744ea4d-c075-4e26-9981-24f648402ffc req-d794e081-8529-4d5c-87c7-cd02fa19c4f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Received event network-changed-c19ce404-77d1-46ea-90da-e996d7dff364 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:46:17 np0005603609 nova_compute[221550]: 2026-01-31 07:46:17.974 221554 DEBUG nova.compute.manager [req-2744ea4d-c075-4e26-9981-24f648402ffc req-d794e081-8529-4d5c-87c7-cd02fa19c4f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Refreshing instance network info cache due to event network-changed-c19ce404-77d1-46ea-90da-e996d7dff364. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 02:46:17 np0005603609 nova_compute[221550]: 2026-01-31 07:46:17.975 221554 DEBUG oslo_concurrency.lockutils [req-2744ea4d-c075-4e26-9981-24f648402ffc req-d794e081-8529-4d5c-87c7-cd02fa19c4f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-cef38897-c445-4e7d-83db-06fb75f5fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:46:18 np0005603609 nova_compute[221550]: 2026-01-31 07:46:18.444 221554 DEBUG nova.network.neutron [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 02:46:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:46:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:18.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:46:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:19.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:19 np0005603609 nova_compute[221550]: 2026-01-31 07:46:19.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:46:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:20.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.871 221554 DEBUG nova.network.neutron [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Updating instance_info_cache with network_info: [{"id": "c19ce404-77d1-46ea-90da-e996d7dff364", "address": "fa:16:3e:47:be:cc", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc19ce404-77", "ovs_interfaceid": "c19ce404-77d1-46ea-90da-e996d7dff364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.930 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Releasing lock "refresh_cache-cef38897-c445-4e7d-83db-06fb75f5fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.931 221554 DEBUG nova.compute.manager [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Instance network_info: |[{"id": "c19ce404-77d1-46ea-90da-e996d7dff364", "address": "fa:16:3e:47:be:cc", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc19ce404-77", "ovs_interfaceid": "c19ce404-77d1-46ea-90da-e996d7dff364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.931 221554 DEBUG oslo_concurrency.lockutils [req-2744ea4d-c075-4e26-9981-24f648402ffc req-d794e081-8529-4d5c-87c7-cd02fa19c4f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-cef38897-c445-4e7d-83db-06fb75f5fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.932 221554 DEBUG nova.network.neutron [req-2744ea4d-c075-4e26-9981-24f648402ffc req-d794e081-8529-4d5c-87c7-cd02fa19c4f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Refreshing network info cache for port c19ce404-77d1-46ea-90da-e996d7dff364 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.935 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Start _get_guest_xml network_info=[{"id": "c19ce404-77d1-46ea-90da-e996d7dff364", "address": "fa:16:3e:47:be:cc", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc19ce404-77", "ovs_interfaceid": "c19ce404-77d1-46ea-90da-e996d7dff364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.940 221554 WARNING nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.945 221554 DEBUG nova.virt.libvirt.host [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.945 221554 DEBUG nova.virt.libvirt.host [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.948 221554 DEBUG nova.virt.libvirt.host [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.949 221554 DEBUG nova.virt.libvirt.host [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.950 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.950 221554 DEBUG nova.virt.hardware [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.951 221554 DEBUG nova.virt.hardware [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.951 221554 DEBUG nova.virt.hardware [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.951 221554 DEBUG nova.virt.hardware [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.951 221554 DEBUG nova.virt.hardware [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.951 221554 DEBUG nova.virt.hardware [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.951 221554 DEBUG nova.virt.hardware [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.951 221554 DEBUG nova.virt.hardware [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.952 221554 DEBUG nova.virt.hardware [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.952 221554 DEBUG nova.virt.hardware [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.952 221554 DEBUG nova.virt.hardware [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 02:46:20 np0005603609 nova_compute[221550]: 2026-01-31 07:46:20.954 221554 DEBUG oslo_concurrency.processutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:46:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:21.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:46:21 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3836211079' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.397 221554 DEBUG oslo_concurrency.processutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.418 221554 DEBUG nova.storage.rbd_utils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image cef38897-c445-4e7d-83db-06fb75f5fcda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.423 221554 DEBUG oslo_concurrency.processutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:46:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:46:21 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1517547197' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.856 221554 DEBUG oslo_concurrency.processutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.859 221554 DEBUG nova.virt.libvirt.vif [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:46:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1343363810',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1343363810',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1343363810',id=60,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9a5c5f11e8f24f898d16bceb9925aaa0',ramdisk_id='',reservation_id='r-36qzlnvh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-536491326',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-536491326-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:46:13Z,user_data=None,user_id='4cdbfeb437d54df89a0fb0f6621b8fdc',uuid=cef38897-c445-4e7d-83db-06fb75f5fcda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c19ce404-77d1-46ea-90da-e996d7dff364", "address": "fa:16:3e:47:be:cc", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc19ce404-77", "ovs_interfaceid": "c19ce404-77d1-46ea-90da-e996d7dff364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.860 221554 DEBUG nova.network.os_vif_util [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converting VIF {"id": "c19ce404-77d1-46ea-90da-e996d7dff364", "address": "fa:16:3e:47:be:cc", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc19ce404-77", "ovs_interfaceid": "c19ce404-77d1-46ea-90da-e996d7dff364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.861 221554 DEBUG nova.network.os_vif_util [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:be:cc,bridge_name='br-int',has_traffic_filtering=True,id=c19ce404-77d1-46ea-90da-e996d7dff364,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc19ce404-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.863 221554 DEBUG nova.objects.instance [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lazy-loading 'pci_devices' on Instance uuid cef38897-c445-4e7d-83db-06fb75f5fcda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.911 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  <uuid>cef38897-c445-4e7d-83db-06fb75f5fcda</uuid>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  <name>instance-0000003c</name>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1343363810</nova:name>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:46:20</nova:creationTime>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        <nova:user uuid="4cdbfeb437d54df89a0fb0f6621b8fdc">tempest-ImagesOneServerNegativeTestJSON-536491326-project-member</nova:user>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        <nova:project uuid="9a5c5f11e8f24f898d16bceb9925aaa0">tempest-ImagesOneServerNegativeTestJSON-536491326</nova:project>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        <nova:port uuid="c19ce404-77d1-46ea-90da-e996d7dff364">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <entry name="serial">cef38897-c445-4e7d-83db-06fb75f5fcda</entry>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <entry name="uuid">cef38897-c445-4e7d-83db-06fb75f5fcda</entry>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/cef38897-c445-4e7d-83db-06fb75f5fcda_disk">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/cef38897-c445-4e7d-83db-06fb75f5fcda_disk.config">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:47:be:cc"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <target dev="tapc19ce404-77"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/cef38897-c445-4e7d-83db-06fb75f5fcda/console.log" append="off"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:46:21 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:46:21 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:46:21 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:46:21 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.912 221554 DEBUG nova.compute.manager [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Preparing to wait for external event network-vif-plugged-c19ce404-77d1-46ea-90da-e996d7dff364 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.913 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.914 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.914 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.915 221554 DEBUG nova.virt.libvirt.vif [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:46:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1343363810',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1343363810',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1343363810',id=60,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9a5c5f11e8f24f898d16bceb9925aaa0',ramdisk_id='',reservation_id='r-36qzlnvh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-536491326',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-536491326-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:46:13Z,user_data=None,user_id='4cdbfeb437d54df89a0fb0f6621b8fdc',uuid=cef38897-c445-4e7d-83db-06fb75f5fcda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c19ce404-77d1-46ea-90da-e996d7dff364", "address": "fa:16:3e:47:be:cc", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc19ce404-77", "ovs_interfaceid": "c19ce404-77d1-46ea-90da-e996d7dff364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.915 221554 DEBUG nova.network.os_vif_util [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converting VIF {"id": "c19ce404-77d1-46ea-90da-e996d7dff364", "address": "fa:16:3e:47:be:cc", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc19ce404-77", "ovs_interfaceid": "c19ce404-77d1-46ea-90da-e996d7dff364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.916 221554 DEBUG nova.network.os_vif_util [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:be:cc,bridge_name='br-int',has_traffic_filtering=True,id=c19ce404-77d1-46ea-90da-e996d7dff364,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc19ce404-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.917 221554 DEBUG os_vif [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:be:cc,bridge_name='br-int',has_traffic_filtering=True,id=c19ce404-77d1-46ea-90da-e996d7dff364,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc19ce404-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.918 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.919 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.920 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.924 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.925 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc19ce404-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.926 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc19ce404-77, col_values=(('external_ids', {'iface-id': 'c19ce404-77d1-46ea-90da-e996d7dff364', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:be:cc', 'vm-uuid': 'cef38897-c445-4e7d-83db-06fb75f5fcda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.927 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:21 np0005603609 NetworkManager[49064]: <info>  [1769845581.9290] manager: (tapc19ce404-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.930 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.937 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:21 np0005603609 nova_compute[221550]: 2026-01-31 07:46:21.938 221554 INFO os_vif [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:be:cc,bridge_name='br-int',has_traffic_filtering=True,id=c19ce404-77d1-46ea-90da-e996d7dff364,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc19ce404-77')#033[00m
Jan 31 02:46:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:22 np0005603609 nova_compute[221550]: 2026-01-31 07:46:22.080 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:46:22 np0005603609 nova_compute[221550]: 2026-01-31 07:46:22.080 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:46:22 np0005603609 nova_compute[221550]: 2026-01-31 07:46:22.080 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] No VIF found with MAC fa:16:3e:47:be:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:46:22 np0005603609 nova_compute[221550]: 2026-01-31 07:46:22.081 221554 INFO nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Using config drive#033[00m
Jan 31 02:46:22 np0005603609 nova_compute[221550]: 2026-01-31 07:46:22.105 221554 DEBUG nova.storage.rbd_utils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image cef38897-c445-4e7d-83db-06fb75f5fcda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:46:22 np0005603609 nova_compute[221550]: 2026-01-31 07:46:22.142 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:22.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:22 np0005603609 nova_compute[221550]: 2026-01-31 07:46:22.948 221554 INFO nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Creating config drive at /var/lib/nova/instances/cef38897-c445-4e7d-83db-06fb75f5fcda/disk.config#033[00m
Jan 31 02:46:22 np0005603609 nova_compute[221550]: 2026-01-31 07:46:22.951 221554 DEBUG oslo_concurrency.processutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cef38897-c445-4e7d-83db-06fb75f5fcda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8wp9td5s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:46:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:23.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:23 np0005603609 nova_compute[221550]: 2026-01-31 07:46:23.074 221554 DEBUG oslo_concurrency.processutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cef38897-c445-4e7d-83db-06fb75f5fcda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8wp9td5s" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:46:23 np0005603609 nova_compute[221550]: 2026-01-31 07:46:23.100 221554 DEBUG nova.storage.rbd_utils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image cef38897-c445-4e7d-83db-06fb75f5fcda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:46:23 np0005603609 nova_compute[221550]: 2026-01-31 07:46:23.105 221554 DEBUG oslo_concurrency.processutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cef38897-c445-4e7d-83db-06fb75f5fcda/disk.config cef38897-c445-4e7d-83db-06fb75f5fcda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:46:23 np0005603609 podman[243328]: 2026-01-31 07:46:23.162132917 +0000 UTC m=+0.046017581 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 02:46:23 np0005603609 podman[243324]: 2026-01-31 07:46:23.24159944 +0000 UTC m=+0.126956749 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:46:23 np0005603609 nova_compute[221550]: 2026-01-31 07:46:23.717 221554 DEBUG nova.network.neutron [req-2744ea4d-c075-4e26-9981-24f648402ffc req-d794e081-8529-4d5c-87c7-cd02fa19c4f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Updated VIF entry in instance network info cache for port c19ce404-77d1-46ea-90da-e996d7dff364. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:46:23 np0005603609 nova_compute[221550]: 2026-01-31 07:46:23.717 221554 DEBUG nova.network.neutron [req-2744ea4d-c075-4e26-9981-24f648402ffc req-d794e081-8529-4d5c-87c7-cd02fa19c4f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Updating instance_info_cache with network_info: [{"id": "c19ce404-77d1-46ea-90da-e996d7dff364", "address": "fa:16:3e:47:be:cc", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc19ce404-77", "ovs_interfaceid": "c19ce404-77d1-46ea-90da-e996d7dff364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:46:23 np0005603609 nova_compute[221550]: 2026-01-31 07:46:23.815 221554 DEBUG oslo_concurrency.lockutils [req-2744ea4d-c075-4e26-9981-24f648402ffc req-d794e081-8529-4d5c-87c7-cd02fa19c4f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-cef38897-c445-4e7d-83db-06fb75f5fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:46:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:24.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:24 np0005603609 nova_compute[221550]: 2026-01-31 07:46:24.986 221554 DEBUG oslo_concurrency.processutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cef38897-c445-4e7d-83db-06fb75f5fcda/disk.config cef38897-c445-4e7d-83db-06fb75f5fcda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.881s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:46:24 np0005603609 nova_compute[221550]: 2026-01-31 07:46:24.987 221554 INFO nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Deleting local config drive /var/lib/nova/instances/cef38897-c445-4e7d-83db-06fb75f5fcda/disk.config because it was imported into RBD.#033[00m
Jan 31 02:46:25 np0005603609 kernel: tapc19ce404-77: entered promiscuous mode
Jan 31 02:46:25 np0005603609 NetworkManager[49064]: <info>  [1769845585.0256] manager: (tapc19ce404-77): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Jan 31 02:46:25 np0005603609 ovn_controller[130359]: 2026-01-31T07:46:25Z|00179|binding|INFO|Claiming lport c19ce404-77d1-46ea-90da-e996d7dff364 for this chassis.
Jan 31 02:46:25 np0005603609 ovn_controller[130359]: 2026-01-31T07:46:25Z|00180|binding|INFO|c19ce404-77d1-46ea-90da-e996d7dff364: Claiming fa:16:3e:47:be:cc 10.100.0.7
Jan 31 02:46:25 np0005603609 nova_compute[221550]: 2026-01-31 07:46:25.027 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:25 np0005603609 ovn_controller[130359]: 2026-01-31T07:46:25Z|00181|binding|INFO|Setting lport c19ce404-77d1-46ea-90da-e996d7dff364 ovn-installed in OVS
Jan 31 02:46:25 np0005603609 ovn_controller[130359]: 2026-01-31T07:46:25Z|00182|binding|INFO|Setting lport c19ce404-77d1-46ea-90da-e996d7dff364 up in Southbound
Jan 31 02:46:25 np0005603609 nova_compute[221550]: 2026-01-31 07:46:25.035 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.035 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:be:cc 10.100.0.7'], port_security=['fa:16:3e:47:be:cc 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'cef38897-c445-4e7d-83db-06fb75f5fcda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7130ed58-0d3f-4534-9498-e2d59204c82c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a5c5f11e8f24f898d16bceb9925aaa0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d514bed-4c59-42dc-a403-a5a9a9cfa795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0203aeab-482d-423f-9cbc-afbc1fe3631d, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=c19ce404-77d1-46ea-90da-e996d7dff364) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:46:25 np0005603609 nova_compute[221550]: 2026-01-31 07:46:25.036 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.038 140058 INFO neutron.agent.ovn.metadata.agent [-] Port c19ce404-77d1-46ea-90da-e996d7dff364 in datapath 7130ed58-0d3f-4534-9498-e2d59204c82c bound to our chassis#033[00m
Jan 31 02:46:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.041 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7130ed58-0d3f-4534-9498-e2d59204c82c#033[00m
Jan 31 02:46:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:25.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.050 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7888235b-4142-4605-8049-4be224261e98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.051 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7130ed58-01 in ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.053 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7130ed58-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.053 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[12cd1a02-b03e-42a6-8197-8155d7d03475]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.054 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9d87c50f-132e-46df-83e5-676464aa420a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:25 np0005603609 systemd-udevd[243404]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:46:25 np0005603609 systemd-machined[190912]: New machine qemu-28-instance-0000003c.
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.064 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[5e663859-0377-4006-b4e8-5eb3b65a144e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:25 np0005603609 NetworkManager[49064]: <info>  [1769845585.0699] device (tapc19ce404-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:46:25 np0005603609 NetworkManager[49064]: <info>  [1769845585.0706] device (tapc19ce404-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:46:25 np0005603609 systemd[1]: Started Virtual Machine qemu-28-instance-0000003c.
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.084 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a3dcc33b-329b-4f74-bac9-355a21c1251a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.102 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[1a748048-f01e-4689-8ad8-feb1a1d215a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:25 np0005603609 NetworkManager[49064]: <info>  [1769845585.1084] manager: (tap7130ed58-00): new Veth device (/org/freedesktop/NetworkManager/Devices/93)
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.108 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2ded2aee-8128-47b7-ab59-3f539a4a624a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.127 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a1600dde-3fcd-4519-a410-b05e22c0d149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.130 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f4273059-7269-4b3b-95f0-c6ed023d1938]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:25 np0005603609 NetworkManager[49064]: <info>  [1769845585.1486] device (tap7130ed58-00): carrier: link connected
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.154 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e5ee00-8fc1-46e7-9930-786a8668e338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.167 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4f316a6a-81d2-4e1f-b435-1f4362201326]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7130ed58-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:8a:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582484, 'reachable_time': 25890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243436, 'error': None, 'target': 'ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.181 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[70f6ed50-966d-40d5-8807-4bf4789a2e15]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:8a47'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 582484, 'tstamp': 582484}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243437, 'error': None, 'target': 'ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.191 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[53c64d15-003e-4d4b-8e86-47abdfb66ef3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7130ed58-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:8a:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582484, 'reachable_time': 25890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243438, 'error': None, 'target': 'ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.211 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cb72c145-322a-47bc-8a4d-ac0966271fe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.253 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5d4ebe0e-40d6-4158-b0b6-3537453e68fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.254 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7130ed58-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.254 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.255 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7130ed58-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:46:25 np0005603609 nova_compute[221550]: 2026-01-31 07:46:25.256 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:25 np0005603609 NetworkManager[49064]: <info>  [1769845585.2571] manager: (tap7130ed58-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Jan 31 02:46:25 np0005603609 kernel: tap7130ed58-00: entered promiscuous mode
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.259 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7130ed58-00, col_values=(('external_ids', {'iface-id': '498a34f8-98b0-44b9-8d4d-24ad7111bb4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:46:25 np0005603609 nova_compute[221550]: 2026-01-31 07:46:25.260 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:25 np0005603609 ovn_controller[130359]: 2026-01-31T07:46:25Z|00183|binding|INFO|Releasing lport 498a34f8-98b0-44b9-8d4d-24ad7111bb4f from this chassis (sb_readonly=0)
Jan 31 02:46:25 np0005603609 nova_compute[221550]: 2026-01-31 07:46:25.265 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:25 np0005603609 nova_compute[221550]: 2026-01-31 07:46:25.265 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.265 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7130ed58-0d3f-4534-9498-e2d59204c82c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7130ed58-0d3f-4534-9498-e2d59204c82c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.266 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[96f6abb7-c90b-4fb8-bbab-9287d56b48c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.267 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-7130ed58-0d3f-4534-9498-e2d59204c82c
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/7130ed58-0d3f-4534-9498-e2d59204c82c.pid.haproxy
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 7130ed58-0d3f-4534-9498-e2d59204c82c
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:46:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:25.268 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c', 'env', 'PROCESS_TAG=haproxy-7130ed58-0d3f-4534-9498-e2d59204c82c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7130ed58-0d3f-4534-9498-e2d59204c82c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:46:25 np0005603609 podman[243488]: 2026-01-31 07:46:25.526529677 +0000 UTC m=+0.017667341 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:46:25 np0005603609 podman[243488]: 2026-01-31 07:46:25.861713034 +0000 UTC m=+0.352850728 container create a409d095ce0535e4c4c3a42ea29f4995189849d28115e193f4fe065e2997e380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 02:46:25 np0005603609 systemd[1]: Started libpod-conmon-a409d095ce0535e4c4c3a42ea29f4995189849d28115e193f4fe065e2997e380.scope.
Jan 31 02:46:25 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:46:25 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdda2fc09c17eee66c935c6a6297cbc54a809233fdf18e31ca660412ec6092e3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:46:26 np0005603609 nova_compute[221550]: 2026-01-31 07:46:26.004 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845586.0037584, cef38897-c445-4e7d-83db-06fb75f5fcda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:46:26 np0005603609 nova_compute[221550]: 2026-01-31 07:46:26.006 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] VM Started (Lifecycle Event)#033[00m
Jan 31 02:46:26 np0005603609 podman[243488]: 2026-01-31 07:46:26.014939893 +0000 UTC m=+0.506077567 container init a409d095ce0535e4c4c3a42ea29f4995189849d28115e193f4fe065e2997e380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 02:46:26 np0005603609 podman[243488]: 2026-01-31 07:46:26.019086124 +0000 UTC m=+0.510223778 container start a409d095ce0535e4c4c3a42ea29f4995189849d28115e193f4fe065e2997e380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:46:26 np0005603609 nova_compute[221550]: 2026-01-31 07:46:26.041 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:46:26 np0005603609 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[243526]: [NOTICE]   (243531) : New worker (243533) forked
Jan 31 02:46:26 np0005603609 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[243526]: [NOTICE]   (243531) : Loading success.
Jan 31 02:46:26 np0005603609 nova_compute[221550]: 2026-01-31 07:46:26.047 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845586.0046093, cef38897-c445-4e7d-83db-06fb75f5fcda => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:46:26 np0005603609 nova_compute[221550]: 2026-01-31 07:46:26.047 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:46:26 np0005603609 nova_compute[221550]: 2026-01-31 07:46:26.095 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:46:26 np0005603609 nova_compute[221550]: 2026-01-31 07:46:26.098 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:46:26 np0005603609 nova_compute[221550]: 2026-01-31 07:46:26.169 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:46:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:26.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:26 np0005603609 nova_compute[221550]: 2026-01-31 07:46:26.929 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:46:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:27.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.141 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.203 221554 DEBUG nova.compute.manager [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Received event network-vif-plugged-c19ce404-77d1-46ea-90da-e996d7dff364 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.203 221554 DEBUG oslo_concurrency.lockutils [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.203 221554 DEBUG oslo_concurrency.lockutils [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.203 221554 DEBUG oslo_concurrency.lockutils [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.204 221554 DEBUG nova.compute.manager [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Processing event network-vif-plugged-c19ce404-77d1-46ea-90da-e996d7dff364 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.204 221554 DEBUG nova.compute.manager [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Received event network-vif-plugged-c19ce404-77d1-46ea-90da-e996d7dff364 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.204 221554 DEBUG oslo_concurrency.lockutils [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.204 221554 DEBUG oslo_concurrency.lockutils [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.205 221554 DEBUG oslo_concurrency.lockutils [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.205 221554 DEBUG nova.compute.manager [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] No waiting events found dispatching network-vif-plugged-c19ce404-77d1-46ea-90da-e996d7dff364 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.205 221554 WARNING nova.compute.manager [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Received unexpected event network-vif-plugged-c19ce404-77d1-46ea-90da-e996d7dff364 for instance with vm_state building and task_state spawning.#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.208 221554 DEBUG nova.compute.manager [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.213 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845587.2121663, cef38897-c445-4e7d-83db-06fb75f5fcda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.213 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.214 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.217 221554 INFO nova.virt.libvirt.driver [-] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Instance spawned successfully.#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.217 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.254 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.258 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.259 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.260 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.261 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.261 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.262 221554 DEBUG nova.virt.libvirt.driver [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.269 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.302 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.344 221554 INFO nova.compute.manager [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Took 14.21 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.345 221554 DEBUG nova.compute.manager [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.458 221554 INFO nova.compute.manager [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Took 15.33 seconds to build instance.#033[00m
Jan 31 02:46:27 np0005603609 nova_compute[221550]: 2026-01-31 07:46:27.495 221554 DEBUG oslo_concurrency.lockutils [None req-65465f9b-fc7f-4db4-b352-fa8f2e48ea82 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "cef38897-c445-4e7d-83db-06fb75f5fcda" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:28.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:29.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:46:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 21K writes, 87K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.04 MB/s#012Cumulative WAL: 21K writes, 7212 syncs, 3.03 writes per sync, written: 0.08 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 42.82 MB, 0.07 MB/s#012Interval WAL: 10K writes, 4130 syncs, 2.56 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 02:46:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:30.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:31.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:46:31 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/50917927' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:46:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:46:31 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/50917927' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.290 221554 DEBUG oslo_concurrency.lockutils [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "cef38897-c445-4e7d-83db-06fb75f5fcda" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.291 221554 DEBUG oslo_concurrency.lockutils [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "cef38897-c445-4e7d-83db-06fb75f5fcda" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.291 221554 DEBUG oslo_concurrency.lockutils [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.292 221554 DEBUG oslo_concurrency.lockutils [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.292 221554 DEBUG oslo_concurrency.lockutils [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.293 221554 INFO nova.compute.manager [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Terminating instance#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.294 221554 DEBUG nova.compute.manager [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:46:31 np0005603609 kernel: tapc19ce404-77 (unregistering): left promiscuous mode
Jan 31 02:46:31 np0005603609 NetworkManager[49064]: <info>  [1769845591.4762] device (tapc19ce404-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:46:31 np0005603609 ovn_controller[130359]: 2026-01-31T07:46:31Z|00184|binding|INFO|Releasing lport c19ce404-77d1-46ea-90da-e996d7dff364 from this chassis (sb_readonly=0)
Jan 31 02:46:31 np0005603609 ovn_controller[130359]: 2026-01-31T07:46:31Z|00185|binding|INFO|Setting lport c19ce404-77d1-46ea-90da-e996d7dff364 down in Southbound
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.496 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:31 np0005603609 ovn_controller[130359]: 2026-01-31T07:46:31Z|00186|binding|INFO|Removing iface tapc19ce404-77 ovn-installed in OVS
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.501 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.504 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:31.510 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:be:cc 10.100.0.7'], port_security=['fa:16:3e:47:be:cc 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'cef38897-c445-4e7d-83db-06fb75f5fcda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7130ed58-0d3f-4534-9498-e2d59204c82c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a5c5f11e8f24f898d16bceb9925aaa0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d514bed-4c59-42dc-a403-a5a9a9cfa795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0203aeab-482d-423f-9cbc-afbc1fe3631d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=c19ce404-77d1-46ea-90da-e996d7dff364) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:46:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:31.511 140058 INFO neutron.agent.ovn.metadata.agent [-] Port c19ce404-77d1-46ea-90da-e996d7dff364 in datapath 7130ed58-0d3f-4534-9498-e2d59204c82c unbound from our chassis#033[00m
Jan 31 02:46:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:31.513 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7130ed58-0d3f-4534-9498-e2d59204c82c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:46:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:31.513 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[19a0840b-d368-49f4-8807-e693fbf24350]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:31.514 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c namespace which is not needed anymore#033[00m
Jan 31 02:46:31 np0005603609 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Jan 31 02:46:31 np0005603609 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000003c.scope: Consumed 4.669s CPU time.
Jan 31 02:46:31 np0005603609 systemd-machined[190912]: Machine qemu-28-instance-0000003c terminated.
Jan 31 02:46:31 np0005603609 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[243526]: [NOTICE]   (243531) : haproxy version is 2.8.14-c23fe91
Jan 31 02:46:31 np0005603609 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[243526]: [NOTICE]   (243531) : path to executable is /usr/sbin/haproxy
Jan 31 02:46:31 np0005603609 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[243526]: [WARNING]  (243531) : Exiting Master process...
Jan 31 02:46:31 np0005603609 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[243526]: [WARNING]  (243531) : Exiting Master process...
Jan 31 02:46:31 np0005603609 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[243526]: [ALERT]    (243531) : Current worker (243533) exited with code 143 (Terminated)
Jan 31 02:46:31 np0005603609 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[243526]: [WARNING]  (243531) : All workers exited. Exiting... (0)
Jan 31 02:46:31 np0005603609 systemd[1]: libpod-a409d095ce0535e4c4c3a42ea29f4995189849d28115e193f4fe065e2997e380.scope: Deactivated successfully.
Jan 31 02:46:31 np0005603609 podman[243569]: 2026-01-31 07:46:31.672942105 +0000 UTC m=+0.076845931 container died a409d095ce0535e4c4c3a42ea29f4995189849d28115e193f4fe065e2997e380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.717 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.722 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.734 221554 INFO nova.virt.libvirt.driver [-] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Instance destroyed successfully.#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.734 221554 DEBUG nova.objects.instance [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lazy-loading 'resources' on Instance uuid cef38897-c445-4e7d-83db-06fb75f5fcda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.772 221554 DEBUG nova.virt.libvirt.vif [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:46:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1343363810',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1343363810',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1343363810',id=60,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:46:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9a5c5f11e8f24f898d16bceb9925aaa0',ramdisk_id='',reservation_id='r-36qzlnvh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',imag
e_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-536491326',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-536491326-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:46:27Z,user_data=None,user_id='4cdbfeb437d54df89a0fb0f6621b8fdc',uuid=cef38897-c445-4e7d-83db-06fb75f5fcda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c19ce404-77d1-46ea-90da-e996d7dff364", "address": "fa:16:3e:47:be:cc", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc19ce404-77", "ovs_interfaceid": "c19ce404-77d1-46ea-90da-e996d7dff364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.773 221554 DEBUG nova.network.os_vif_util [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converting VIF {"id": "c19ce404-77d1-46ea-90da-e996d7dff364", "address": "fa:16:3e:47:be:cc", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc19ce404-77", "ovs_interfaceid": "c19ce404-77d1-46ea-90da-e996d7dff364", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.774 221554 DEBUG nova.network.os_vif_util [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:be:cc,bridge_name='br-int',has_traffic_filtering=True,id=c19ce404-77d1-46ea-90da-e996d7dff364,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc19ce404-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.774 221554 DEBUG os_vif [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:be:cc,bridge_name='br-int',has_traffic_filtering=True,id=c19ce404-77d1-46ea-90da-e996d7dff364,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc19ce404-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.777 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.777 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc19ce404-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.779 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.781 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:31 np0005603609 nova_compute[221550]: 2026-01-31 07:46:31.784 221554 INFO os_vif [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:be:cc,bridge_name='br-int',has_traffic_filtering=True,id=c19ce404-77d1-46ea-90da-e996d7dff364,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc19ce404-77')#033[00m
Jan 31 02:46:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:32 np0005603609 nova_compute[221550]: 2026-01-31 07:46:32.144 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:32 np0005603609 nova_compute[221550]: 2026-01-31 07:46:32.194 221554 DEBUG nova.compute.manager [req-58d7d358-41d1-44d5-b114-d1fe43b9296b req-568668b7-c868-4616-a9cd-9912de9380ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Received event network-vif-unplugged-c19ce404-77d1-46ea-90da-e996d7dff364 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:46:32 np0005603609 nova_compute[221550]: 2026-01-31 07:46:32.195 221554 DEBUG oslo_concurrency.lockutils [req-58d7d358-41d1-44d5-b114-d1fe43b9296b req-568668b7-c868-4616-a9cd-9912de9380ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:32 np0005603609 nova_compute[221550]: 2026-01-31 07:46:32.196 221554 DEBUG oslo_concurrency.lockutils [req-58d7d358-41d1-44d5-b114-d1fe43b9296b req-568668b7-c868-4616-a9cd-9912de9380ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:32 np0005603609 nova_compute[221550]: 2026-01-31 07:46:32.196 221554 DEBUG oslo_concurrency.lockutils [req-58d7d358-41d1-44d5-b114-d1fe43b9296b req-568668b7-c868-4616-a9cd-9912de9380ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:32 np0005603609 nova_compute[221550]: 2026-01-31 07:46:32.197 221554 DEBUG nova.compute.manager [req-58d7d358-41d1-44d5-b114-d1fe43b9296b req-568668b7-c868-4616-a9cd-9912de9380ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] No waiting events found dispatching network-vif-unplugged-c19ce404-77d1-46ea-90da-e996d7dff364 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:46:32 np0005603609 nova_compute[221550]: 2026-01-31 07:46:32.197 221554 DEBUG nova.compute.manager [req-58d7d358-41d1-44d5-b114-d1fe43b9296b req-568668b7-c868-4616-a9cd-9912de9380ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Received event network-vif-unplugged-c19ce404-77d1-46ea-90da-e996d7dff364 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:46:32 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a409d095ce0535e4c4c3a42ea29f4995189849d28115e193f4fe065e2997e380-userdata-shm.mount: Deactivated successfully.
Jan 31 02:46:32 np0005603609 systemd[1]: var-lib-containers-storage-overlay-bdda2fc09c17eee66c935c6a6297cbc54a809233fdf18e31ca660412ec6092e3-merged.mount: Deactivated successfully.
Jan 31 02:46:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:32.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:32 np0005603609 podman[243569]: 2026-01-31 07:46:32.572105917 +0000 UTC m=+0.976009763 container cleanup a409d095ce0535e4c4c3a42ea29f4995189849d28115e193f4fe065e2997e380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 02:46:32 np0005603609 systemd[1]: libpod-conmon-a409d095ce0535e4c4c3a42ea29f4995189849d28115e193f4fe065e2997e380.scope: Deactivated successfully.
Jan 31 02:46:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:33.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:33 np0005603609 podman[243628]: 2026-01-31 07:46:33.198229925 +0000 UTC m=+0.613007539 container remove a409d095ce0535e4c4c3a42ea29f4995189849d28115e193f4fe065e2997e380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:46:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:33.202 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba333d8-1137-46b0-9492-77e8dbec6da1]: (4, ('Sat Jan 31 07:46:31 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c (a409d095ce0535e4c4c3a42ea29f4995189849d28115e193f4fe065e2997e380)\na409d095ce0535e4c4c3a42ea29f4995189849d28115e193f4fe065e2997e380\nSat Jan 31 07:46:32 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c (a409d095ce0535e4c4c3a42ea29f4995189849d28115e193f4fe065e2997e380)\na409d095ce0535e4c4c3a42ea29f4995189849d28115e193f4fe065e2997e380\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:33.204 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[34cff78b-e8a4-4858-b17d-c0b32bad40a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:33.206 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7130ed58-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:46:33 np0005603609 nova_compute[221550]: 2026-01-31 07:46:33.209 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:33 np0005603609 kernel: tap7130ed58-00: left promiscuous mode
Jan 31 02:46:33 np0005603609 nova_compute[221550]: 2026-01-31 07:46:33.219 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:33.222 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2e11eb57-478d-40e5-9556-e10887bb07da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:33.238 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[250e665e-4c0a-4bca-a95f-b575bac6e19b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:33.239 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb8ed90-e68b-419c-b1f2-c0ed1ede2754]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:33.251 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[24e21f2f-31b9-49c4-b057-fe6b8b159ccc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582479, 'reachable_time': 35066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243644, 'error': None, 'target': 'ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:33.254 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:46:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:46:33.254 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[11173ab1-3b72-4153-9850-fe76d0d0e5b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:46:33 np0005603609 systemd[1]: run-netns-ovnmeta\x2d7130ed58\x2d0d3f\x2d4534\x2d9498\x2de2d59204c82c.mount: Deactivated successfully.
Jan 31 02:46:34 np0005603609 nova_compute[221550]: 2026-01-31 07:46:34.357 221554 DEBUG nova.compute.manager [req-51acee84-e873-4240-8833-a2cb54ec0fd6 req-cf7eb1e8-3b55-4963-966d-a55bfa80777d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Received event network-vif-plugged-c19ce404-77d1-46ea-90da-e996d7dff364 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:46:34 np0005603609 nova_compute[221550]: 2026-01-31 07:46:34.357 221554 DEBUG oslo_concurrency.lockutils [req-51acee84-e873-4240-8833-a2cb54ec0fd6 req-cf7eb1e8-3b55-4963-966d-a55bfa80777d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:34 np0005603609 nova_compute[221550]: 2026-01-31 07:46:34.358 221554 DEBUG oslo_concurrency.lockutils [req-51acee84-e873-4240-8833-a2cb54ec0fd6 req-cf7eb1e8-3b55-4963-966d-a55bfa80777d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:34 np0005603609 nova_compute[221550]: 2026-01-31 07:46:34.358 221554 DEBUG oslo_concurrency.lockutils [req-51acee84-e873-4240-8833-a2cb54ec0fd6 req-cf7eb1e8-3b55-4963-966d-a55bfa80777d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cef38897-c445-4e7d-83db-06fb75f5fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:34 np0005603609 nova_compute[221550]: 2026-01-31 07:46:34.359 221554 DEBUG nova.compute.manager [req-51acee84-e873-4240-8833-a2cb54ec0fd6 req-cf7eb1e8-3b55-4963-966d-a55bfa80777d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] No waiting events found dispatching network-vif-plugged-c19ce404-77d1-46ea-90da-e996d7dff364 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:46:34 np0005603609 nova_compute[221550]: 2026-01-31 07:46:34.359 221554 WARNING nova.compute.manager [req-51acee84-e873-4240-8833-a2cb54ec0fd6 req-cf7eb1e8-3b55-4963-966d-a55bfa80777d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Received unexpected event network-vif-plugged-c19ce404-77d1-46ea-90da-e996d7dff364 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:46:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:34.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:46:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:35.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:46:35 np0005603609 nova_compute[221550]: 2026-01-31 07:46:35.901 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:35 np0005603609 nova_compute[221550]: 2026-01-31 07:46:35.978 221554 INFO nova.virt.libvirt.driver [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Deleting instance files /var/lib/nova/instances/cef38897-c445-4e7d-83db-06fb75f5fcda_del#033[00m
Jan 31 02:46:35 np0005603609 nova_compute[221550]: 2026-01-31 07:46:35.979 221554 INFO nova.virt.libvirt.driver [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Deletion of /var/lib/nova/instances/cef38897-c445-4e7d-83db-06fb75f5fcda_del complete#033[00m
Jan 31 02:46:36 np0005603609 nova_compute[221550]: 2026-01-31 07:46:36.067 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:36 np0005603609 nova_compute[221550]: 2026-01-31 07:46:36.149 221554 INFO nova.compute.manager [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Took 4.85 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:46:36 np0005603609 nova_compute[221550]: 2026-01-31 07:46:36.150 221554 DEBUG oslo.service.loopingcall [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:46:36 np0005603609 nova_compute[221550]: 2026-01-31 07:46:36.150 221554 DEBUG nova.compute.manager [-] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:46:36 np0005603609 nova_compute[221550]: 2026-01-31 07:46:36.150 221554 DEBUG nova.network.neutron [-] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:46:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:36.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:36 np0005603609 nova_compute[221550]: 2026-01-31 07:46:36.780 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:46:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:37.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:46:37 np0005603609 nova_compute[221550]: 2026-01-31 07:46:37.146 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:46:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:46:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:46:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:38.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:46:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:39.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:46:39 np0005603609 nova_compute[221550]: 2026-01-31 07:46:39.103 221554 DEBUG nova.network.neutron [-] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:46:39 np0005603609 nova_compute[221550]: 2026-01-31 07:46:39.141 221554 INFO nova.compute.manager [-] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Took 2.99 seconds to deallocate network for instance.#033[00m
Jan 31 02:46:39 np0005603609 nova_compute[221550]: 2026-01-31 07:46:39.293 221554 DEBUG oslo_concurrency.lockutils [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:39 np0005603609 nova_compute[221550]: 2026-01-31 07:46:39.293 221554 DEBUG oslo_concurrency.lockutils [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:39 np0005603609 nova_compute[221550]: 2026-01-31 07:46:39.308 221554 DEBUG nova.compute.manager [req-aef6cf6e-88ab-475b-81da-843c02f78a15 req-679be09f-524f-47be-8a65-ca8c416bbd58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Received event network-vif-deleted-c19ce404-77d1-46ea-90da-e996d7dff364 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:46:39 np0005603609 nova_compute[221550]: 2026-01-31 07:46:39.336 221554 DEBUG nova.scheduler.client.report [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 02:46:39 np0005603609 nova_compute[221550]: 2026-01-31 07:46:39.360 221554 DEBUG nova.scheduler.client.report [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 02:46:39 np0005603609 nova_compute[221550]: 2026-01-31 07:46:39.360 221554 DEBUG nova.compute.provider_tree [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 02:46:39 np0005603609 nova_compute[221550]: 2026-01-31 07:46:39.402 221554 DEBUG nova.scheduler.client.report [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 02:46:39 np0005603609 nova_compute[221550]: 2026-01-31 07:46:39.451 221554 DEBUG nova.scheduler.client.report [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 02:46:39 np0005603609 nova_compute[221550]: 2026-01-31 07:46:39.511 221554 DEBUG oslo_concurrency.processutils [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:46:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:46:40 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1379675515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:46:40 np0005603609 nova_compute[221550]: 2026-01-31 07:46:40.460 221554 DEBUG oslo_concurrency.processutils [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.949s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:46:40 np0005603609 nova_compute[221550]: 2026-01-31 07:46:40.468 221554 DEBUG nova.compute.provider_tree [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:46:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:46:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:40.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:46:40 np0005603609 nova_compute[221550]: 2026-01-31 07:46:40.679 221554 DEBUG nova.scheduler.client.report [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:46:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:41.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:41 np0005603609 nova_compute[221550]: 2026-01-31 07:46:41.074 221554 DEBUG oslo_concurrency.lockutils [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:41 np0005603609 nova_compute[221550]: 2026-01-31 07:46:41.321 221554 INFO nova.scheduler.client.report [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Deleted allocations for instance cef38897-c445-4e7d-83db-06fb75f5fcda#033[00m
Jan 31 02:46:41 np0005603609 nova_compute[221550]: 2026-01-31 07:46:41.785 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:42 np0005603609 nova_compute[221550]: 2026-01-31 07:46:42.147 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:42 np0005603609 nova_compute[221550]: 2026-01-31 07:46:42.341 221554 DEBUG oslo_concurrency.lockutils [None req-3bca6758-83d7-4eda-bab3-6d8c387f13f6 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "cef38897-c445-4e7d-83db-06fb75f5fcda" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:42.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:43.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:46:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:44.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:46:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:46:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:45.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:46:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:46.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:46 np0005603609 nova_compute[221550]: 2026-01-31 07:46:46.732 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845591.7319846, cef38897-c445-4e7d-83db-06fb75f5fcda => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:46:46 np0005603609 nova_compute[221550]: 2026-01-31 07:46:46.733 221554 INFO nova.compute.manager [-] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:46:46 np0005603609 nova_compute[221550]: 2026-01-31 07:46:46.776 221554 DEBUG nova.compute.manager [None req-c31c5e3f-20b5-43d4-bb4a-d906edd5bbca - - - - - -] [instance: cef38897-c445-4e7d-83db-06fb75f5fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:46:46 np0005603609 nova_compute[221550]: 2026-01-31 07:46:46.788 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:47.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:47 np0005603609 nova_compute[221550]: 2026-01-31 07:46:47.148 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:47 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:46:47 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:46:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:48.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:49.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:50.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:51.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:51 np0005603609 nova_compute[221550]: 2026-01-31 07:46:51.792 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:52 np0005603609 nova_compute[221550]: 2026-01-31 07:46:52.152 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:46:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:52.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:46:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:53.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:54 np0005603609 podman[243850]: 2026-01-31 07:46:54.171654482 +0000 UTC m=+0.054105107 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:46:54 np0005603609 podman[243849]: 2026-01-31 07:46:54.192897349 +0000 UTC m=+0.080066409 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller)
Jan 31 02:46:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:46:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:54.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:46:54 np0005603609 nova_compute[221550]: 2026-01-31 07:46:54.712 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:55.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:56.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:56 np0005603609 nova_compute[221550]: 2026-01-31 07:46:56.796 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:57.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:57 np0005603609 nova_compute[221550]: 2026-01-31 07:46:57.154 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:46:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:58.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:46:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:46:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:59.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e216 e216: 3 total, 3 up, 3 in
Jan 31 02:47:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:47:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:00.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:47:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:47:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:01.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:47:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e217 e217: 3 total, 3 up, 3 in
Jan 31 02:47:01 np0005603609 nova_compute[221550]: 2026-01-31 07:47:01.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:01 np0005603609 nova_compute[221550]: 2026-01-31 07:47:01.799 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:02 np0005603609 nova_compute[221550]: 2026-01-31 07:47:02.158 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:02.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:47:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.3 total, 600.0 interval#012Cumulative writes: 6591 writes, 34K keys, 6591 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 6591 writes, 6591 syncs, 1.00 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1650 writes, 8270 keys, 1650 commit groups, 1.0 writes per commit group, ingest: 16.09 MB, 0.03 MB/s#012Interval WAL: 1650 writes, 1650 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     70.0      0.58              0.09        18    0.032       0      0       0.0       0.0#012  L6      1/0    9.90 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.6     83.6     69.1      2.14              0.33        17    0.126     87K   9942       0.0       0.0#012 Sum      1/0    9.90 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.6     65.7     69.3      2.72              0.42        35    0.078     87K   9942       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9     64.0     65.7      0.77              0.11         8    0.096     25K   3112       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     83.6     69.1      2.14              0.33        17    0.126     87K   9942       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     74.0      0.55              0.09        17    0.032       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.3 total, 600.0 interval#012Flush(GB): cumulative 0.040, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.18 GB write, 0.08 MB/s write, 0.17 GB read, 0.07 MB/s read, 2.7 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619ad8ed1f0#2 capacity: 304.00 MB usage: 19.63 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000244 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1142,18.94 MB,6.23051%) FilterBlock(35,250.23 KB,0.0803847%) IndexBlock(35,453.28 KB,0.145611%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 02:47:02 np0005603609 nova_compute[221550]: 2026-01-31 07:47:02.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:02 np0005603609 nova_compute[221550]: 2026-01-31 07:47:02.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:02 np0005603609 nova_compute[221550]: 2026-01-31 07:47:02.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:02 np0005603609 nova_compute[221550]: 2026-01-31 07:47:02.754 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:02 np0005603609 nova_compute[221550]: 2026-01-31 07:47:02.754 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:02 np0005603609 nova_compute[221550]: 2026-01-31 07:47:02.754 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:47:02 np0005603609 nova_compute[221550]: 2026-01-31 07:47:02.754 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:47:02 np0005603609 nova_compute[221550]: 2026-01-31 07:47:02.755 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:47:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:03.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:47:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:47:03 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/677423198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:47:03 np0005603609 nova_compute[221550]: 2026-01-31 07:47:03.193 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:03 np0005603609 nova_compute[221550]: 2026-01-31 07:47:03.343 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:47:03 np0005603609 nova_compute[221550]: 2026-01-31 07:47:03.344 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4707MB free_disk=20.942768096923828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:47:03 np0005603609 nova_compute[221550]: 2026-01-31 07:47:03.344 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:03 np0005603609 nova_compute[221550]: 2026-01-31 07:47:03.345 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e218 e218: 3 total, 3 up, 3 in
Jan 31 02:47:03 np0005603609 nova_compute[221550]: 2026-01-31 07:47:03.615 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:47:03 np0005603609 nova_compute[221550]: 2026-01-31 07:47:03.616 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:47:03 np0005603609 nova_compute[221550]: 2026-01-31 07:47:03.643 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:47:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/446125117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:47:04 np0005603609 nova_compute[221550]: 2026-01-31 07:47:04.057 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:04 np0005603609 nova_compute[221550]: 2026-01-31 07:47:04.062 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:47:04 np0005603609 nova_compute[221550]: 2026-01-31 07:47:04.400 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:47:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:47:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:04.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:47:04 np0005603609 nova_compute[221550]: 2026-01-31 07:47:04.636 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:47:04 np0005603609 nova_compute[221550]: 2026-01-31 07:47:04.637 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:47:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e219 e219: 3 total, 3 up, 3 in
Jan 31 02:47:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:05.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:05 np0005603609 nova_compute[221550]: 2026-01-31 07:47:05.632 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:05 np0005603609 nova_compute[221550]: 2026-01-31 07:47:05.632 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:05 np0005603609 nova_compute[221550]: 2026-01-31 07:47:05.632 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:47:05 np0005603609 nova_compute[221550]: 2026-01-31 07:47:05.632 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:47:05 np0005603609 nova_compute[221550]: 2026-01-31 07:47:05.705 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:47:05 np0005603609 nova_compute[221550]: 2026-01-31 07:47:05.705 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:05 np0005603609 nova_compute[221550]: 2026-01-31 07:47:05.706 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:47:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:06.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:06 np0005603609 nova_compute[221550]: 2026-01-31 07:47:06.802 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:07.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:07 np0005603609 nova_compute[221550]: 2026-01-31 07:47:07.159 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:07.481 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:07.481 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:07.481 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:47:07 np0005603609 nova_compute[221550]: 2026-01-31 07:47:07.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:47:08 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/991230648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:47:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:08.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:09.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e220 e220: 3 total, 3 up, 3 in
Jan 31 02:47:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:10.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:11.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e221 e221: 3 total, 3 up, 3 in
Jan 31 02:47:11 np0005603609 nova_compute[221550]: 2026-01-31 07:47:11.806 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:12 np0005603609 nova_compute[221550]: 2026-01-31 07:47:12.195 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:47:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:12.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:47:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:13 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:47:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:13.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:13 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:47:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:47:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:14.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:47:15 np0005603609 nova_compute[221550]: 2026-01-31 07:47:15.049 221554 DEBUG oslo_concurrency.lockutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquiring lock "2d280afb-69f5-4511-8134-19901686f5fc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:15 np0005603609 nova_compute[221550]: 2026-01-31 07:47:15.050 221554 DEBUG oslo_concurrency.lockutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "2d280afb-69f5-4511-8134-19901686f5fc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:47:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:15.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:47:15 np0005603609 nova_compute[221550]: 2026-01-31 07:47:15.180 221554 DEBUG nova.compute.manager [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:47:15 np0005603609 nova_compute[221550]: 2026-01-31 07:47:15.367 221554 DEBUG oslo_concurrency.lockutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:15 np0005603609 nova_compute[221550]: 2026-01-31 07:47:15.368 221554 DEBUG oslo_concurrency.lockutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:15 np0005603609 nova_compute[221550]: 2026-01-31 07:47:15.380 221554 DEBUG nova.virt.hardware [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:47:15 np0005603609 nova_compute[221550]: 2026-01-31 07:47:15.380 221554 INFO nova.compute.claims [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:47:15 np0005603609 nova_compute[221550]: 2026-01-31 07:47:15.760 221554 DEBUG oslo_concurrency.processutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:47:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2462762902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:47:16 np0005603609 nova_compute[221550]: 2026-01-31 07:47:16.195 221554 DEBUG oslo_concurrency.processutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:16 np0005603609 nova_compute[221550]: 2026-01-31 07:47:16.200 221554 DEBUG nova.compute.provider_tree [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:47:16 np0005603609 nova_compute[221550]: 2026-01-31 07:47:16.398 221554 DEBUG nova.scheduler.client.report [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:47:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:16.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:16 np0005603609 nova_compute[221550]: 2026-01-31 07:47:16.613 221554 DEBUG oslo_concurrency.lockutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:47:16 np0005603609 nova_compute[221550]: 2026-01-31 07:47:16.614 221554 DEBUG nova.compute.manager [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:47:16 np0005603609 nova_compute[221550]: 2026-01-31 07:47:16.836 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:16 np0005603609 nova_compute[221550]: 2026-01-31 07:47:16.909 221554 DEBUG nova.compute.manager [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:47:16 np0005603609 nova_compute[221550]: 2026-01-31 07:47:16.910 221554 DEBUG nova.network.neutron [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:47:16 np0005603609 nova_compute[221550]: 2026-01-31 07:47:16.965 221554 INFO nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:47:16 np0005603609 nova_compute[221550]: 2026-01-31 07:47:16.995 221554 DEBUG nova.compute.manager [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:47:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:17.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:17 np0005603609 nova_compute[221550]: 2026-01-31 07:47:17.197 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:17 np0005603609 nova_compute[221550]: 2026-01-31 07:47:17.363 221554 DEBUG nova.compute.manager [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:47:17 np0005603609 nova_compute[221550]: 2026-01-31 07:47:17.365 221554 DEBUG nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:47:17 np0005603609 nova_compute[221550]: 2026-01-31 07:47:17.366 221554 INFO nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Creating image(s)#033[00m
Jan 31 02:47:17 np0005603609 nova_compute[221550]: 2026-01-31 07:47:17.394 221554 DEBUG nova.storage.rbd_utils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] rbd image 2d280afb-69f5-4511-8134-19901686f5fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:47:17 np0005603609 nova_compute[221550]: 2026-01-31 07:47:17.422 221554 DEBUG nova.storage.rbd_utils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] rbd image 2d280afb-69f5-4511-8134-19901686f5fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:47:17 np0005603609 nova_compute[221550]: 2026-01-31 07:47:17.448 221554 DEBUG nova.storage.rbd_utils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] rbd image 2d280afb-69f5-4511-8134-19901686f5fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:47:17 np0005603609 nova_compute[221550]: 2026-01-31 07:47:17.452 221554 DEBUG oslo_concurrency.processutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:17 np0005603609 nova_compute[221550]: 2026-01-31 07:47:17.499 221554 DEBUG oslo_concurrency.processutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:17 np0005603609 nova_compute[221550]: 2026-01-31 07:47:17.500 221554 DEBUG oslo_concurrency.lockutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:17 np0005603609 nova_compute[221550]: 2026-01-31 07:47:17.500 221554 DEBUG oslo_concurrency.lockutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:17 np0005603609 nova_compute[221550]: 2026-01-31 07:47:17.501 221554 DEBUG oslo_concurrency.lockutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:47:17 np0005603609 nova_compute[221550]: 2026-01-31 07:47:17.522 221554 DEBUG nova.storage.rbd_utils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] rbd image 2d280afb-69f5-4511-8134-19901686f5fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:47:17 np0005603609 nova_compute[221550]: 2026-01-31 07:47:17.525 221554 DEBUG oslo_concurrency.processutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 2d280afb-69f5-4511-8134-19901686f5fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:17 np0005603609 nova_compute[221550]: 2026-01-31 07:47:17.799 221554 DEBUG nova.network.neutron [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 31 02:47:17 np0005603609 nova_compute[221550]: 2026-01-31 07:47:17.800 221554 DEBUG nova.compute.manager [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.293 221554 DEBUG oslo_concurrency.processutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 2d280afb-69f5-4511-8134-19901686f5fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.768s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.458 221554 DEBUG nova.storage.rbd_utils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] resizing rbd image 2d280afb-69f5-4511-8134-19901686f5fc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:47:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:47:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:18.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.786 221554 DEBUG nova.objects.instance [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lazy-loading 'migration_context' on Instance uuid 2d280afb-69f5-4511-8134-19901686f5fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.806 221554 DEBUG nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.806 221554 DEBUG nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Ensure instance console log exists: /var/lib/nova/instances/2d280afb-69f5-4511-8134-19901686f5fc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.807 221554 DEBUG oslo_concurrency.lockutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.807 221554 DEBUG oslo_concurrency.lockutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.808 221554 DEBUG oslo_concurrency.lockutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.810 221554 DEBUG nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.816 221554 WARNING nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.823 221554 DEBUG nova.virt.libvirt.host [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.824 221554 DEBUG nova.virt.libvirt.host [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.829 221554 DEBUG nova.virt.libvirt.host [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.829 221554 DEBUG nova.virt.libvirt.host [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.831 221554 DEBUG nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.832 221554 DEBUG nova.virt.hardware [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.833 221554 DEBUG nova.virt.hardware [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.833 221554 DEBUG nova.virt.hardware [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.834 221554 DEBUG nova.virt.hardware [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.834 221554 DEBUG nova.virt.hardware [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.834 221554 DEBUG nova.virt.hardware [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.835 221554 DEBUG nova.virt.hardware [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.835 221554 DEBUG nova.virt.hardware [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.836 221554 DEBUG nova.virt.hardware [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.836 221554 DEBUG nova.virt.hardware [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.837 221554 DEBUG nova.virt.hardware [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:47:18 np0005603609 nova_compute[221550]: 2026-01-31 07:47:18.841 221554 DEBUG oslo_concurrency.processutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:19.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:47:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2357984663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:47:19 np0005603609 nova_compute[221550]: 2026-01-31 07:47:19.266 221554 DEBUG oslo_concurrency.processutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:19 np0005603609 nova_compute[221550]: 2026-01-31 07:47:19.293 221554 DEBUG nova.storage.rbd_utils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] rbd image 2d280afb-69f5-4511-8134-19901686f5fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:47:19 np0005603609 nova_compute[221550]: 2026-01-31 07:47:19.297 221554 DEBUG oslo_concurrency.processutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:47:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3624245809' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:47:19 np0005603609 nova_compute[221550]: 2026-01-31 07:47:19.720 221554 DEBUG oslo_concurrency.processutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:19 np0005603609 nova_compute[221550]: 2026-01-31 07:47:19.721 221554 DEBUG nova.objects.instance [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d280afb-69f5-4511-8134-19901686f5fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:47:19 np0005603609 nova_compute[221550]: 2026-01-31 07:47:19.871 221554 DEBUG nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  <uuid>2d280afb-69f5-4511-8134-19901686f5fc</uuid>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  <name>instance-0000003f</name>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <nova:name>tempest-ListImageFiltersTestJSON-server-519900585</nova:name>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:47:18</nova:creationTime>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:47:19 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:        <nova:user uuid="aeb97fa2b1284c3faf0028734652a72c">tempest-ListImageFiltersTestJSON-1086962866-project-member</nova:user>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:        <nova:project uuid="d8503864fef643f698a175cc6364101c">tempest-ListImageFiltersTestJSON-1086962866</nova:project>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <entry name="serial">2d280afb-69f5-4511-8134-19901686f5fc</entry>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <entry name="uuid">2d280afb-69f5-4511-8134-19901686f5fc</entry>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/2d280afb-69f5-4511-8134-19901686f5fc_disk">
Jan 31 02:47:19 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:47:19 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/2d280afb-69f5-4511-8134-19901686f5fc_disk.config">
Jan 31 02:47:19 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:47:19 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/2d280afb-69f5-4511-8134-19901686f5fc/console.log" append="off"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:47:19 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:47:19 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:47:19 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:47:19 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:47:20 np0005603609 nova_compute[221550]: 2026-01-31 07:47:20.207 221554 DEBUG nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:47:20 np0005603609 nova_compute[221550]: 2026-01-31 07:47:20.208 221554 DEBUG nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:47:20 np0005603609 nova_compute[221550]: 2026-01-31 07:47:20.208 221554 INFO nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Using config drive#033[00m
Jan 31 02:47:20 np0005603609 nova_compute[221550]: 2026-01-31 07:47:20.231 221554 DEBUG nova.storage.rbd_utils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] rbd image 2d280afb-69f5-4511-8134-19901686f5fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:47:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:20.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:20 np0005603609 nova_compute[221550]: 2026-01-31 07:47:20.631 221554 INFO nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Creating config drive at /var/lib/nova/instances/2d280afb-69f5-4511-8134-19901686f5fc/disk.config#033[00m
Jan 31 02:47:20 np0005603609 nova_compute[221550]: 2026-01-31 07:47:20.635 221554 DEBUG oslo_concurrency.processutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d280afb-69f5-4511-8134-19901686f5fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpj215mru8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:20 np0005603609 nova_compute[221550]: 2026-01-31 07:47:20.756 221554 DEBUG oslo_concurrency.processutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d280afb-69f5-4511-8134-19901686f5fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpj215mru8" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:20 np0005603609 nova_compute[221550]: 2026-01-31 07:47:20.785 221554 DEBUG nova.storage.rbd_utils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] rbd image 2d280afb-69f5-4511-8134-19901686f5fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:47:20 np0005603609 nova_compute[221550]: 2026-01-31 07:47:20.789 221554 DEBUG oslo_concurrency.processutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d280afb-69f5-4511-8134-19901686f5fc/disk.config 2d280afb-69f5-4511-8134-19901686f5fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:20.931 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:47:20 np0005603609 nova_compute[221550]: 2026-01-31 07:47:20.930 221554 DEBUG oslo_concurrency.processutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d280afb-69f5-4511-8134-19901686f5fc/disk.config 2d280afb-69f5-4511-8134-19901686f5fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:20 np0005603609 nova_compute[221550]: 2026-01-31 07:47:20.931 221554 INFO nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Deleting local config drive /var/lib/nova/instances/2d280afb-69f5-4511-8134-19901686f5fc/disk.config because it was imported into RBD.#033[00m
Jan 31 02:47:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:20.932 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:47:20 np0005603609 nova_compute[221550]: 2026-01-31 07:47:20.932 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:21 np0005603609 systemd-machined[190912]: New machine qemu-29-instance-0000003f.
Jan 31 02:47:21 np0005603609 systemd[1]: Started Virtual Machine qemu-29-instance-0000003f.
Jan 31 02:47:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:47:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:21.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.357 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845641.356579, 2d280afb-69f5-4511-8134-19901686f5fc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.357 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.360 221554 DEBUG nova.compute.manager [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.360 221554 DEBUG nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.364 221554 INFO nova.virt.libvirt.driver [-] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Instance spawned successfully.#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.364 221554 DEBUG nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.454 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:47:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e222 e222: 3 total, 3 up, 3 in
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.464 221554 DEBUG nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.465 221554 DEBUG nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.465 221554 DEBUG nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.466 221554 DEBUG nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.466 221554 DEBUG nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.467 221554 DEBUG nova.virt.libvirt.driver [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.472 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.594 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.595 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845641.357882, 2d280afb-69f5-4511-8134-19901686f5fc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.595 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] VM Started (Lifecycle Event)#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.658 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.664 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.689 221554 INFO nova.compute.manager [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Took 4.33 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.690 221554 DEBUG nova.compute.manager [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.705 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.840 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.893 221554 INFO nova.compute.manager [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Took 6.58 seconds to build instance.#033[00m
Jan 31 02:47:21 np0005603609 nova_compute[221550]: 2026-01-31 07:47:21.959 221554 DEBUG oslo_concurrency.lockutils [None req-56b1001a-ec0e-4760-bf7b-d60b95c3dde4 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "2d280afb-69f5-4511-8134-19901686f5fc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:47:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:22 np0005603609 nova_compute[221550]: 2026-01-31 07:47:22.200 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:22.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:22.934 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:47:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:47:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:23.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:47:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:24.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:47:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:25.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:47:25 np0005603609 podman[244307]: 2026-01-31 07:47:25.170132609 +0000 UTC m=+0.047448115 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 02:47:25 np0005603609 podman[244306]: 2026-01-31 07:47:25.195961998 +0000 UTC m=+0.077682181 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 02:47:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:26.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:26 np0005603609 nova_compute[221550]: 2026-01-31 07:47:26.843 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:27.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:27 np0005603609 nova_compute[221550]: 2026-01-31 07:47:27.240 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:47:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:28.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:47:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:47:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:29.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:47:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:47:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:30.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:47:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:31.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e223 e223: 3 total, 3 up, 3 in
Jan 31 02:47:31 np0005603609 nova_compute[221550]: 2026-01-31 07:47:31.847 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:32 np0005603609 nova_compute[221550]: 2026-01-31 07:47:32.242 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:32.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:33.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:34 np0005603609 nova_compute[221550]: 2026-01-31 07:47:34.001 221554 DEBUG nova.compute.manager [None req-db6c448e-0a4b-4a99-9a33-4638626cabbc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:47:34 np0005603609 nova_compute[221550]: 2026-01-31 07:47:34.362 221554 INFO nova.compute.manager [None req-db6c448e-0a4b-4a99-9a33-4638626cabbc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] instance snapshotting#033[00m
Jan 31 02:47:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:34.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:34 np0005603609 nova_compute[221550]: 2026-01-31 07:47:34.838 221554 INFO nova.virt.libvirt.driver [None req-db6c448e-0a4b-4a99-9a33-4638626cabbc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Beginning live snapshot process#033[00m
Jan 31 02:47:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:35.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:35 np0005603609 nova_compute[221550]: 2026-01-31 07:47:35.785 221554 DEBUG nova.virt.libvirt.imagebackend [None req-db6c448e-0a4b-4a99-9a33-4638626cabbc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 02:47:36 np0005603609 nova_compute[221550]: 2026-01-31 07:47:36.258 221554 DEBUG nova.storage.rbd_utils [None req-db6c448e-0a4b-4a99-9a33-4638626cabbc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] creating snapshot(a748f38347a240f48d0dd486ba73ca15) on rbd image(2d280afb-69f5-4511-8134-19901686f5fc_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 02:47:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:47:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:36.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:47:36 np0005603609 nova_compute[221550]: 2026-01-31 07:47:36.850 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e224 e224: 3 total, 3 up, 3 in
Jan 31 02:47:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:47:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:37.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:47:37 np0005603609 nova_compute[221550]: 2026-01-31 07:47:37.244 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:37 np0005603609 nova_compute[221550]: 2026-01-31 07:47:37.275 221554 DEBUG nova.storage.rbd_utils [None req-db6c448e-0a4b-4a99-9a33-4638626cabbc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] cloning vms/2d280afb-69f5-4511-8134-19901686f5fc_disk@a748f38347a240f48d0dd486ba73ca15 to images/32133469-37ac-412d-b940-c908b55cf858 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 02:47:38 np0005603609 nova_compute[221550]: 2026-01-31 07:47:38.059 221554 DEBUG nova.storage.rbd_utils [None req-db6c448e-0a4b-4a99-9a33-4638626cabbc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] flattening images/32133469-37ac-412d-b940-c908b55cf858 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 02:47:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:38.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:39 np0005603609 nova_compute[221550]: 2026-01-31 07:47:39.136 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:39 np0005603609 nova_compute[221550]: 2026-01-31 07:47:39.137 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:47:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:39.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:47:39 np0005603609 nova_compute[221550]: 2026-01-31 07:47:39.650 221554 DEBUG nova.compute.manager [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:47:40 np0005603609 nova_compute[221550]: 2026-01-31 07:47:40.208 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:40 np0005603609 nova_compute[221550]: 2026-01-31 07:47:40.209 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:40 np0005603609 nova_compute[221550]: 2026-01-31 07:47:40.222 221554 DEBUG nova.virt.hardware [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:47:40 np0005603609 nova_compute[221550]: 2026-01-31 07:47:40.222 221554 INFO nova.compute.claims [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:47:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:40.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:41 np0005603609 nova_compute[221550]: 2026-01-31 07:47:41.040 221554 DEBUG oslo_concurrency.processutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:41.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:47:41 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4165145417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:47:41 np0005603609 nova_compute[221550]: 2026-01-31 07:47:41.460 221554 DEBUG oslo_concurrency.processutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:41 np0005603609 nova_compute[221550]: 2026-01-31 07:47:41.465 221554 DEBUG nova.storage.rbd_utils [None req-db6c448e-0a4b-4a99-9a33-4638626cabbc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] removing snapshot(a748f38347a240f48d0dd486ba73ca15) on rbd image(2d280afb-69f5-4511-8134-19901686f5fc_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 02:47:41 np0005603609 nova_compute[221550]: 2026-01-31 07:47:41.466 221554 DEBUG nova.compute.provider_tree [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:47:41 np0005603609 nova_compute[221550]: 2026-01-31 07:47:41.569 221554 DEBUG nova.scheduler.client.report [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:47:41 np0005603609 nova_compute[221550]: 2026-01-31 07:47:41.854 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:42 np0005603609 nova_compute[221550]: 2026-01-31 07:47:42.246 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:42 np0005603609 nova_compute[221550]: 2026-01-31 07:47:42.273 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:47:42 np0005603609 nova_compute[221550]: 2026-01-31 07:47:42.274 221554 DEBUG nova.compute.manager [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:47:42 np0005603609 nova_compute[221550]: 2026-01-31 07:47:42.589 221554 DEBUG nova.compute.manager [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:47:42 np0005603609 nova_compute[221550]: 2026-01-31 07:47:42.590 221554 DEBUG nova.network.neutron [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:47:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:42.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:42 np0005603609 nova_compute[221550]: 2026-01-31 07:47:42.854 221554 INFO nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:47:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e225 e225: 3 total, 3 up, 3 in
Jan 31 02:47:43 np0005603609 nova_compute[221550]: 2026-01-31 07:47:43.159 221554 DEBUG nova.compute.manager [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:47:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:47:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:43.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:47:43 np0005603609 nova_compute[221550]: 2026-01-31 07:47:43.327 221554 DEBUG nova.storage.rbd_utils [None req-db6c448e-0a4b-4a99-9a33-4638626cabbc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] creating snapshot(snap) on rbd image(32133469-37ac-412d-b940-c908b55cf858) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 02:47:43 np0005603609 nova_compute[221550]: 2026-01-31 07:47:43.759 221554 DEBUG nova.compute.manager [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:47:43 np0005603609 nova_compute[221550]: 2026-01-31 07:47:43.761 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:47:43 np0005603609 nova_compute[221550]: 2026-01-31 07:47:43.762 221554 INFO nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Creating image(s)#033[00m
Jan 31 02:47:43 np0005603609 nova_compute[221550]: 2026-01-31 07:47:43.794 221554 DEBUG nova.storage.rbd_utils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 9e3b0e76-62cb-4d16-888a-a1574d9ea228_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:47:43 np0005603609 nova_compute[221550]: 2026-01-31 07:47:43.822 221554 DEBUG nova.storage.rbd_utils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 9e3b0e76-62cb-4d16-888a-a1574d9ea228_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:47:43 np0005603609 nova_compute[221550]: 2026-01-31 07:47:43.849 221554 DEBUG nova.storage.rbd_utils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 9e3b0e76-62cb-4d16-888a-a1574d9ea228_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:47:43 np0005603609 nova_compute[221550]: 2026-01-31 07:47:43.852 221554 DEBUG oslo_concurrency.processutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:43 np0005603609 nova_compute[221550]: 2026-01-31 07:47:43.900 221554 DEBUG oslo_concurrency.processutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:43 np0005603609 nova_compute[221550]: 2026-01-31 07:47:43.902 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:43 np0005603609 nova_compute[221550]: 2026-01-31 07:47:43.904 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:43 np0005603609 nova_compute[221550]: 2026-01-31 07:47:43.904 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:47:43 np0005603609 nova_compute[221550]: 2026-01-31 07:47:43.934 221554 DEBUG nova.storage.rbd_utils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 9e3b0e76-62cb-4d16-888a-a1574d9ea228_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:47:43 np0005603609 nova_compute[221550]: 2026-01-31 07:47:43.938 221554 DEBUG oslo_concurrency.processutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 9e3b0e76-62cb-4d16-888a-a1574d9ea228_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:44 np0005603609 nova_compute[221550]: 2026-01-31 07:47:44.308 221554 DEBUG nova.policy [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97abab8eb79247cd89fb2ebff295b890', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:47:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e226 e226: 3 total, 3 up, 3 in
Jan 31 02:47:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:44.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:45.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:45 np0005603609 nova_compute[221550]: 2026-01-31 07:47:45.404 221554 DEBUG oslo_concurrency.processutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 9e3b0e76-62cb-4d16-888a-a1574d9ea228_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:45 np0005603609 nova_compute[221550]: 2026-01-31 07:47:45.484 221554 DEBUG nova.storage.rbd_utils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] resizing rbd image 9e3b0e76-62cb-4d16-888a-a1574d9ea228_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:47:46 np0005603609 nova_compute[221550]: 2026-01-31 07:47:46.177 221554 DEBUG nova.objects.instance [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'migration_context' on Instance uuid 9e3b0e76-62cb-4d16-888a-a1574d9ea228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:47:46 np0005603609 nova_compute[221550]: 2026-01-31 07:47:46.290 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:47:46 np0005603609 nova_compute[221550]: 2026-01-31 07:47:46.290 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Ensure instance console log exists: /var/lib/nova/instances/9e3b0e76-62cb-4d16-888a-a1574d9ea228/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:47:46 np0005603609 nova_compute[221550]: 2026-01-31 07:47:46.291 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:46 np0005603609 nova_compute[221550]: 2026-01-31 07:47:46.291 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:46 np0005603609 nova_compute[221550]: 2026-01-31 07:47:46.291 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:47:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:46.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:46 np0005603609 nova_compute[221550]: 2026-01-31 07:47:46.704 221554 DEBUG nova.network.neutron [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Successfully created port: b17c792c-aad7-417b-9cfe-859ce32b3d37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:47:46 np0005603609 nova_compute[221550]: 2026-01-31 07:47:46.858 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:47:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:47.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:47:47 np0005603609 nova_compute[221550]: 2026-01-31 07:47:47.247 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:47 np0005603609 nova_compute[221550]: 2026-01-31 07:47:47.430 221554 INFO nova.virt.libvirt.driver [None req-db6c448e-0a4b-4a99-9a33-4638626cabbc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Snapshot image upload complete#033[00m
Jan 31 02:47:47 np0005603609 nova_compute[221550]: 2026-01-31 07:47:47.431 221554 INFO nova.compute.manager [None req-db6c448e-0a4b-4a99-9a33-4638626cabbc aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Took 13.07 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 02:47:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:47:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:47:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:47:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:47:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:47:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:47:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:48.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:47:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:49.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:50 np0005603609 nova_compute[221550]: 2026-01-31 07:47:50.312 221554 DEBUG nova.network.neutron [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Successfully updated port: b17c792c-aad7-417b-9cfe-859ce32b3d37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:47:50 np0005603609 nova_compute[221550]: 2026-01-31 07:47:50.567 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "refresh_cache-9e3b0e76-62cb-4d16-888a-a1574d9ea228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:47:50 np0005603609 nova_compute[221550]: 2026-01-31 07:47:50.568 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquired lock "refresh_cache-9e3b0e76-62cb-4d16-888a-a1574d9ea228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:47:50 np0005603609 nova_compute[221550]: 2026-01-31 07:47:50.568 221554 DEBUG nova.network.neutron [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:47:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:47:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:50.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:47:50 np0005603609 nova_compute[221550]: 2026-01-31 07:47:50.709 221554 DEBUG nova.compute.manager [req-e19c4520-5f1c-4db5-adcc-43a77737857f req-576e280e-f0ef-4109-b754-6e266f807c02 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Received event network-changed-b17c792c-aad7-417b-9cfe-859ce32b3d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:47:50 np0005603609 nova_compute[221550]: 2026-01-31 07:47:50.709 221554 DEBUG nova.compute.manager [req-e19c4520-5f1c-4db5-adcc-43a77737857f req-576e280e-f0ef-4109-b754-6e266f807c02 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Refreshing instance network info cache due to event network-changed-b17c792c-aad7-417b-9cfe-859ce32b3d37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:47:50 np0005603609 nova_compute[221550]: 2026-01-31 07:47:50.709 221554 DEBUG oslo_concurrency.lockutils [req-e19c4520-5f1c-4db5-adcc-43a77737857f req-576e280e-f0ef-4109-b754-6e266f807c02 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9e3b0e76-62cb-4d16-888a-a1574d9ea228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:47:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:47:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:51.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:47:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e227 e227: 3 total, 3 up, 3 in
Jan 31 02:47:51 np0005603609 nova_compute[221550]: 2026-01-31 07:47:51.464 221554 DEBUG nova.network.neutron [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:47:51 np0005603609 nova_compute[221550]: 2026-01-31 07:47:51.862 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:52 np0005603609 nova_compute[221550]: 2026-01-31 07:47:52.250 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:47:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:52.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.050 221554 DEBUG nova.network.neutron [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Updating instance_info_cache with network_info: [{"id": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "address": "fa:16:3e:35:94:6a", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb17c792c-aa", "ovs_interfaceid": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:47:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:47:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:53.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.285 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Releasing lock "refresh_cache-9e3b0e76-62cb-4d16-888a-a1574d9ea228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.285 221554 DEBUG nova.compute.manager [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Instance network_info: |[{"id": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "address": "fa:16:3e:35:94:6a", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb17c792c-aa", "ovs_interfaceid": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.286 221554 DEBUG oslo_concurrency.lockutils [req-e19c4520-5f1c-4db5-adcc-43a77737857f req-576e280e-f0ef-4109-b754-6e266f807c02 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9e3b0e76-62cb-4d16-888a-a1574d9ea228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.286 221554 DEBUG nova.network.neutron [req-e19c4520-5f1c-4db5-adcc-43a77737857f req-576e280e-f0ef-4109-b754-6e266f807c02 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Refreshing network info cache for port b17c792c-aad7-417b-9cfe-859ce32b3d37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.288 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Start _get_guest_xml network_info=[{"id": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "address": "fa:16:3e:35:94:6a", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb17c792c-aa", "ovs_interfaceid": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.292 221554 WARNING nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.297 221554 DEBUG nova.virt.libvirt.host [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.297 221554 DEBUG nova.virt.libvirt.host [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.300 221554 DEBUG nova.virt.libvirt.host [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.300 221554 DEBUG nova.virt.libvirt.host [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.301 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.301 221554 DEBUG nova.virt.hardware [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.302 221554 DEBUG nova.virt.hardware [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.302 221554 DEBUG nova.virt.hardware [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.302 221554 DEBUG nova.virt.hardware [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.302 221554 DEBUG nova.virt.hardware [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.303 221554 DEBUG nova.virt.hardware [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.303 221554 DEBUG nova.virt.hardware [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.303 221554 DEBUG nova.virt.hardware [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.303 221554 DEBUG nova.virt.hardware [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.304 221554 DEBUG nova.virt.hardware [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.304 221554 DEBUG nova.virt.hardware [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:47:53 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.306 221554 DEBUG oslo_concurrency.processutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:47:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/376835805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:47:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e228 e228: 3 total, 3 up, 3 in
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:53.999 221554 DEBUG oslo_concurrency.processutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.023 221554 DEBUG nova.storage.rbd_utils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 9e3b0e76-62cb-4d16-888a-a1574d9ea228_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.027 221554 DEBUG oslo_concurrency.processutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:47:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3959903741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.468 221554 DEBUG oslo_concurrency.processutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.470 221554 DEBUG nova.virt.libvirt.vif [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:47:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1127391059',display_name='tempest-DeleteServersTestJSON-server-1127391059',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1127391059',id=65,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-g2vqvavk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:47:43Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=9e3b0e76-62cb-4d16-888a-a1574d9ea228,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "address": "fa:16:3e:35:94:6a", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb17c792c-aa", "ovs_interfaceid": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.471 221554 DEBUG nova.network.os_vif_util [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "address": "fa:16:3e:35:94:6a", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb17c792c-aa", "ovs_interfaceid": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.472 221554 DEBUG nova.network.os_vif_util [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:94:6a,bridge_name='br-int',has_traffic_filtering=True,id=b17c792c-aad7-417b-9cfe-859ce32b3d37,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb17c792c-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.472 221554 DEBUG nova.objects.instance [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9e3b0e76-62cb-4d16-888a-a1574d9ea228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:47:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:54.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.647 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  <uuid>9e3b0e76-62cb-4d16-888a-a1574d9ea228</uuid>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  <name>instance-00000041</name>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <nova:name>tempest-DeleteServersTestJSON-server-1127391059</nova:name>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:47:53</nova:creationTime>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        <nova:user uuid="97abab8eb79247cd89fb2ebff295b890">tempest-DeleteServersTestJSON-2031597701-project-member</nova:user>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        <nova:project uuid="f299640bb1f64e5fa12b23955e5a2127">tempest-DeleteServersTestJSON-2031597701</nova:project>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        <nova:port uuid="b17c792c-aad7-417b-9cfe-859ce32b3d37">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <entry name="serial">9e3b0e76-62cb-4d16-888a-a1574d9ea228</entry>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <entry name="uuid">9e3b0e76-62cb-4d16-888a-a1574d9ea228</entry>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/9e3b0e76-62cb-4d16-888a-a1574d9ea228_disk">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/9e3b0e76-62cb-4d16-888a-a1574d9ea228_disk.config">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:35:94:6a"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <target dev="tapb17c792c-aa"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/9e3b0e76-62cb-4d16-888a-a1574d9ea228/console.log" append="off"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:47:54 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:47:54 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:47:54 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:47:54 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.648 221554 DEBUG nova.compute.manager [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Preparing to wait for external event network-vif-plugged-b17c792c-aad7-417b-9cfe-859ce32b3d37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.648 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.649 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.649 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.650 221554 DEBUG nova.virt.libvirt.vif [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:47:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1127391059',display_name='tempest-DeleteServersTestJSON-server-1127391059',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1127391059',id=65,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-g2vqvavk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:47:43Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=9e3b0e76-62cb-4d16-888a-a1574d9ea228,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "address": "fa:16:3e:35:94:6a", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb17c792c-aa", "ovs_interfaceid": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.650 221554 DEBUG nova.network.os_vif_util [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "address": "fa:16:3e:35:94:6a", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb17c792c-aa", "ovs_interfaceid": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.651 221554 DEBUG nova.network.os_vif_util [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:94:6a,bridge_name='br-int',has_traffic_filtering=True,id=b17c792c-aad7-417b-9cfe-859ce32b3d37,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb17c792c-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.651 221554 DEBUG os_vif [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:94:6a,bridge_name='br-int',has_traffic_filtering=True,id=b17c792c-aad7-417b-9cfe-859ce32b3d37,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb17c792c-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.652 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.653 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.653 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.657 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.657 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb17c792c-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.658 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb17c792c-aa, col_values=(('external_ids', {'iface-id': 'b17c792c-aad7-417b-9cfe-859ce32b3d37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:35:94:6a', 'vm-uuid': '9e3b0e76-62cb-4d16-888a-a1574d9ea228'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.700 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:54 np0005603609 NetworkManager[49064]: <info>  [1769845674.7030] manager: (tapb17c792c-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.703 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.707 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:54 np0005603609 nova_compute[221550]: 2026-01-31 07:47:54.708 221554 INFO os_vif [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:94:6a,bridge_name='br-int',has_traffic_filtering=True,id=b17c792c-aad7-417b-9cfe-859ce32b3d37,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb17c792c-aa')#033[00m
Jan 31 02:47:54 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:47:54 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:47:55 np0005603609 nova_compute[221550]: 2026-01-31 07:47:55.044 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:47:55 np0005603609 nova_compute[221550]: 2026-01-31 07:47:55.044 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:47:55 np0005603609 nova_compute[221550]: 2026-01-31 07:47:55.044 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No VIF found with MAC fa:16:3e:35:94:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:47:55 np0005603609 nova_compute[221550]: 2026-01-31 07:47:55.045 221554 INFO nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Using config drive#033[00m
Jan 31 02:47:55 np0005603609 nova_compute[221550]: 2026-01-31 07:47:55.072 221554 DEBUG nova.storage.rbd_utils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 9e3b0e76-62cb-4d16-888a-a1574d9ea228_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:47:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:47:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:55.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:47:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e229 e229: 3 total, 3 up, 3 in
Jan 31 02:47:56 np0005603609 podman[244948]: 2026-01-31 07:47:56.208026094 +0000 UTC m=+0.087037509 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:47:56 np0005603609 podman[244947]: 2026-01-31 07:47:56.241750064 +0000 UTC m=+0.119898758 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 02:47:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e230 e230: 3 total, 3 up, 3 in
Jan 31 02:47:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:56.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:57.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:57 np0005603609 nova_compute[221550]: 2026-01-31 07:47:57.296 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:57 np0005603609 ovn_controller[130359]: 2026-01-31T07:47:57Z|00187|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 02:47:58 np0005603609 nova_compute[221550]: 2026-01-31 07:47:58.527 221554 INFO nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Creating config drive at /var/lib/nova/instances/9e3b0e76-62cb-4d16-888a-a1574d9ea228/disk.config#033[00m
Jan 31 02:47:58 np0005603609 nova_compute[221550]: 2026-01-31 07:47:58.534 221554 DEBUG oslo_concurrency.processutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9e3b0e76-62cb-4d16-888a-a1574d9ea228/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp74ti1baj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:58.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:58 np0005603609 nova_compute[221550]: 2026-01-31 07:47:58.666 221554 DEBUG oslo_concurrency.processutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9e3b0e76-62cb-4d16-888a-a1574d9ea228/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp74ti1baj" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:58 np0005603609 nova_compute[221550]: 2026-01-31 07:47:58.704 221554 DEBUG nova.storage.rbd_utils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 9e3b0e76-62cb-4d16-888a-a1574d9ea228_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:47:58 np0005603609 nova_compute[221550]: 2026-01-31 07:47:58.710 221554 DEBUG oslo_concurrency.processutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9e3b0e76-62cb-4d16-888a-a1574d9ea228/disk.config 9e3b0e76-62cb-4d16-888a-a1574d9ea228_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:58 np0005603609 nova_compute[221550]: 2026-01-31 07:47:58.878 221554 DEBUG oslo_concurrency.processutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9e3b0e76-62cb-4d16-888a-a1574d9ea228/disk.config 9e3b0e76-62cb-4d16-888a-a1574d9ea228_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:58 np0005603609 nova_compute[221550]: 2026-01-31 07:47:58.879 221554 INFO nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Deleting local config drive /var/lib/nova/instances/9e3b0e76-62cb-4d16-888a-a1574d9ea228/disk.config because it was imported into RBD.#033[00m
Jan 31 02:47:58 np0005603609 kernel: tapb17c792c-aa: entered promiscuous mode
Jan 31 02:47:58 np0005603609 NetworkManager[49064]: <info>  [1769845678.9221] manager: (tapb17c792c-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Jan 31 02:47:58 np0005603609 ovn_controller[130359]: 2026-01-31T07:47:58Z|00188|binding|INFO|Claiming lport b17c792c-aad7-417b-9cfe-859ce32b3d37 for this chassis.
Jan 31 02:47:58 np0005603609 nova_compute[221550]: 2026-01-31 07:47:58.923 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:58 np0005603609 ovn_controller[130359]: 2026-01-31T07:47:58Z|00189|binding|INFO|b17c792c-aad7-417b-9cfe-859ce32b3d37: Claiming fa:16:3e:35:94:6a 10.100.0.6
Jan 31 02:47:58 np0005603609 nova_compute[221550]: 2026-01-31 07:47:58.926 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:58 np0005603609 nova_compute[221550]: 2026-01-31 07:47:58.929 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:58 np0005603609 systemd-machined[190912]: New machine qemu-30-instance-00000041.
Jan 31 02:47:58 np0005603609 nova_compute[221550]: 2026-01-31 07:47:58.954 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:58 np0005603609 ovn_controller[130359]: 2026-01-31T07:47:58Z|00190|binding|INFO|Setting lport b17c792c-aad7-417b-9cfe-859ce32b3d37 ovn-installed in OVS
Jan 31 02:47:58 np0005603609 ovn_controller[130359]: 2026-01-31T07:47:58Z|00191|binding|INFO|Setting lport b17c792c-aad7-417b-9cfe-859ce32b3d37 up in Southbound
Jan 31 02:47:58 np0005603609 nova_compute[221550]: 2026-01-31 07:47:58.957 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:58.958 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:94:6a 10.100.0.6'], port_security=['fa:16:3e:35:94:6a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9e3b0e76-62cb-4d16-888a-a1574d9ea228', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60244e92-1524-47f0-8207-05d0104afa47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f055dcb-9af1-4cf7-aa74-c819da93756e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70b36ac0-62cb-4e29-924b-93c5ad906bc9, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=b17c792c-aad7-417b-9cfe-859ce32b3d37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:47:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:58.959 140058 INFO neutron.agent.ovn.metadata.agent [-] Port b17c792c-aad7-417b-9cfe-859ce32b3d37 in datapath 60244e92-1524-47f0-8207-05d0104afa47 bound to our chassis#033[00m
Jan 31 02:47:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:58.961 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60244e92-1524-47f0-8207-05d0104afa47#033[00m
Jan 31 02:47:58 np0005603609 systemd[1]: Started Virtual Machine qemu-30-instance-00000041.
Jan 31 02:47:58 np0005603609 systemd-udevd[245045]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:47:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:58.969 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9f35bc-570e-4ea8-af73-1629d74d4164]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:47:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:58.970 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60244e92-11 in ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:47:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:58.972 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60244e92-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:47:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:58.972 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7aba0f-de77-455d-a014-89c4a3c1bd8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:47:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:58.972 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[96e55b23-b263-4943-b244-caf2af7cb4f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:47:58 np0005603609 NetworkManager[49064]: <info>  [1769845678.9793] device (tapb17c792c-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:47:58 np0005603609 NetworkManager[49064]: <info>  [1769845678.9804] device (tapb17c792c-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:47:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:58.981 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[c94536d8-9ea6-47f9-a78f-db3fcd4ddaf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:47:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:58.991 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b80d6956-3540-45cb-bf89-9807729c7ac4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.008 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d595d296-fed0-494b-be4d-d7c92568e4cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.014 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[99835ecf-1310-41f4-99a8-fc65b4ab0812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:47:59 np0005603609 NetworkManager[49064]: <info>  [1769845679.0148] manager: (tap60244e92-10): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.041 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[0803afa7-2c59-48b9-9c52-f53275dbb5ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.045 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d77aff20-4b34-4e37-a318-335a3be8cae6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:47:59 np0005603609 NetworkManager[49064]: <info>  [1769845679.0620] device (tap60244e92-10): carrier: link connected
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.064 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[180c9c6f-1af7-4124-bb65-3d0abb9bd949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.076 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f5902420-7a65-4267-acf2-c829185ba0da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60244e92-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:59:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591876, 'reachable_time': 15668, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245078, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.084 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e16cf652-6b5b-408a-a8c1-9a90e1cee915]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:59f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591876, 'tstamp': 591876}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245079, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.096 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b616b096-63ea-4342-8593-f9d9eba7a63f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60244e92-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:59:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591876, 'reachable_time': 15668, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 245080, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.116 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[debc30ca-82d8-4e7f-ae2a-173dca79d35f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.150 221554 DEBUG nova.network.neutron [req-e19c4520-5f1c-4db5-adcc-43a77737857f req-576e280e-f0ef-4109-b754-6e266f807c02 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Updated VIF entry in instance network info cache for port b17c792c-aad7-417b-9cfe-859ce32b3d37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.151 221554 DEBUG nova.network.neutron [req-e19c4520-5f1c-4db5-adcc-43a77737857f req-576e280e-f0ef-4109-b754-6e266f807c02 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Updating instance_info_cache with network_info: [{"id": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "address": "fa:16:3e:35:94:6a", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb17c792c-aa", "ovs_interfaceid": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.154 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[646e74c9-5cff-4275-b058-6a1090b1714d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.156 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60244e92-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.156 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.157 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60244e92-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:47:59 np0005603609 kernel: tap60244e92-10: entered promiscuous mode
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.158 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:59 np0005603609 NetworkManager[49064]: <info>  [1769845679.1588] manager: (tap60244e92-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.160 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.163 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60244e92-10, col_values=(('external_ids', {'iface-id': '4e20d708-9f46-41f3-86c0-9ab3849f8392'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:47:59 np0005603609 ovn_controller[130359]: 2026-01-31T07:47:59Z|00192|binding|INFO|Releasing lport 4e20d708-9f46-41f3-86c0-9ab3849f8392 from this chassis (sb_readonly=0)
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.164 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.165 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.167 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.169 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.169 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c5645941-104f-4293-a52f-b1367e843917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.170 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-60244e92-1524-47f0-8207-05d0104afa47
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 60244e92-1524-47f0-8207-05d0104afa47
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
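The haproxy_cfg dump above is rendered from a fixed template by `neutron.agent.ovn.metadata.driver.create_config_file`. A minimal sketch of that rendering, useful when reproducing or diffing these configs offline — the template text is copied from the dump, but the function name, signature, and defaults are illustrative assumptions, not Neutron's actual code:

```python
# Hypothetical re-creation of the config seen in the haproxy_cfg debug dump
# above; NOT Neutron's actual template code (the real defaults section with
# timeouts is omitted here for brevity).

_HAPROXY_TEMPLATE = """global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-{network_id}
    user        {user}
    group       {group}
    maxconn     1024
    pidfile     {pidfile}
    daemon

listen listener
    bind {bind_ip}:{bind_port}
    server metadata {unix_socket_path}
    http-request add-header X-OVN-Network-ID {network_id}
"""


def render_metadata_proxy_config(network_id, user='root', group='root',
                                 bind_ip='169.254.169.254', bind_port=80,
                                 unix_socket_path='/var/lib/neutron/metadata_proxy'):
    """Render an haproxy config shaped like the one logged above."""
    # Pidfile path layout matches the one probed earlier in the log
    # (/var/lib/neutron/external/pids/<network_id>.pid.haproxy).
    pidfile = f'/var/lib/neutron/external/pids/{network_id}.pid.haproxy'
    return _HAPROXY_TEMPLATE.format(network_id=network_id, user=user,
                                    group=group, pidfile=pidfile,
                                    bind_ip=bind_ip, bind_port=bind_port,
                                    unix_socket_path=unix_socket_path)
```

The rendered file is what the subsequent rootwrap command passes to `haproxy -f` inside the `ovnmeta-<network_id>` namespace.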
Jan 31 02:47:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:47:59.172 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'env', 'PROCESS_TAG=haproxy-60244e92-1524-47f0-8207-05d0104afa47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60244e92-1524-47f0-8207-05d0104afa47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:47:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:47:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:59.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
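The radosgw `beast:` access-log lines above recur throughout this capture (they are the haproxy health checks hitting RGW every second or so). When grepping them in bulk, a small parser helps; the field layout below is inferred from these lines (request pointer, client IP, user, bracketed timestamp, quoted request, status, bytes, three dash fields, latency), so treat it as a sketch rather than a documented format:

```python
import re

# Field layout assumed from the "beast:" access-log lines in this capture.
BEAST_RE = re.compile(
    r'beast: (?P<req>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s')


def parse_beast_line(line):
    """Return a dict of fields from a radosgw beast access-log line, or None."""
    m = BEAST_RE.search(line)
    if not m:
        return None
    d = m.groupdict()
    d['status'] = int(d['status'])
    d['bytes'] = int(d['bytes'])
    d['latency'] = float(d['latency'])
    return d
```

For the line above this yields ip `192.168.122.102`, user `anonymous`, status `200`, latency `0.0`.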
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.199 221554 DEBUG oslo_concurrency.lockutils [req-e19c4520-5f1c-4db5-adcc-43a77737857f req-576e280e-f0ef-4109-b754-6e266f807c02 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9e3b0e76-62cb-4d16-888a-a1574d9ea228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.482 221554 DEBUG nova.compute.manager [req-b1f94ebb-d0ea-486d-892d-4b45d7898211 req-5efb97cf-60d6-4455-8ac7-3584170cdbe9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Received event network-vif-plugged-b17c792c-aad7-417b-9cfe-859ce32b3d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.482 221554 DEBUG oslo_concurrency.lockutils [req-b1f94ebb-d0ea-486d-892d-4b45d7898211 req-5efb97cf-60d6-4455-8ac7-3584170cdbe9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.483 221554 DEBUG oslo_concurrency.lockutils [req-b1f94ebb-d0ea-486d-892d-4b45d7898211 req-5efb97cf-60d6-4455-8ac7-3584170cdbe9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.483 221554 DEBUG oslo_concurrency.lockutils [req-b1f94ebb-d0ea-486d-892d-4b45d7898211 req-5efb97cf-60d6-4455-8ac7-3584170cdbe9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.483 221554 DEBUG nova.compute.manager [req-b1f94ebb-d0ea-486d-892d-4b45d7898211 req-5efb97cf-60d6-4455-8ac7-3584170cdbe9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Processing event network-vif-plugged-b17c792c-aad7-417b-9cfe-859ce32b3d37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:47:59 np0005603609 podman[245112]: 2026-01-31 07:47:59.521729626 +0000 UTC m=+0.074487683 container create 0d9b2e39a19f5093b41709a01164d2c2475bb79143efd7d17d56e48a0f19a944 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 02:47:59 np0005603609 podman[245112]: 2026-01-31 07:47:59.471592406 +0000 UTC m=+0.024350483 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
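Note that the two podman events above arrive in the journal out of causal order: the `container create` line carries monotonic offset `m=+0.074487683` but is logged before the `image pull` at `m=+0.024350483`. Sorting by the `m=+<seconds>` offset restores event order; a minimal sketch:

```python
import re

# Each podman event line carries a monotonic offset like "m=+0.074487683";
# journal arrival order may differ from it, as in the two lines above.
M_OFFSET_RE = re.compile(r'm=\+(\d+\.\d+)')


def sort_podman_events(lines):
    """Sort podman event lines by their m=+<seconds> monotonic offset.

    Lines without an offset sort last, preserving their relative order.
    """
    def offset(line):
        m = M_OFFSET_RE.search(line)
        return float(m.group(1)) if m else float('inf')
    return sorted(lines, key=offset)
```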
Jan 31 02:47:59 np0005603609 systemd[1]: Started libpod-conmon-0d9b2e39a19f5093b41709a01164d2c2475bb79143efd7d17d56e48a0f19a944.scope.
Jan 31 02:47:59 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:47:59 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a13b590afc81945310ccf971b8a3970ed8c2c0caba3cd9aadd5cfa41b33243a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:47:59 np0005603609 podman[245112]: 2026-01-31 07:47:59.669899662 +0000 UTC m=+0.222657739 container init 0d9b2e39a19f5093b41709a01164d2c2475bb79143efd7d17d56e48a0f19a944 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 02:47:59 np0005603609 podman[245112]: 2026-01-31 07:47:59.67434164 +0000 UTC m=+0.227099697 container start 0d9b2e39a19f5093b41709a01164d2c2475bb79143efd7d17d56e48a0f19a944 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:47:59 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[245152]: [NOTICE]   (245167) : New worker (245169) forked
Jan 31 02:47:59 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[245152]: [NOTICE]   (245167) : Loading success.
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.744 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.872 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845679.871916, 9e3b0e76-62cb-4d16-888a-a1574d9ea228 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.872 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] VM Started (Lifecycle Event)#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.875 221554 DEBUG nova.compute.manager [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.879 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.883 221554 INFO nova.virt.libvirt.driver [-] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Instance spawned successfully.#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.884 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.947 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.954 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.958 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.958 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.959 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.960 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.960 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.961 221554 DEBUG nova.virt.libvirt.driver [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:47:59 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.999 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:48:00 np0005603609 nova_compute[221550]: 2026-01-31 07:47:59.999 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845679.8720722, 9e3b0e76-62cb-4d16-888a-a1574d9ea228 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:48:00 np0005603609 nova_compute[221550]: 2026-01-31 07:48:00.000 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:48:00 np0005603609 nova_compute[221550]: 2026-01-31 07:48:00.051 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:48:00 np0005603609 nova_compute[221550]: 2026-01-31 07:48:00.055 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845679.8783774, 9e3b0e76-62cb-4d16-888a-a1574d9ea228 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:48:00 np0005603609 nova_compute[221550]: 2026-01-31 07:48:00.055 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:48:00 np0005603609 nova_compute[221550]: 2026-01-31 07:48:00.080 221554 INFO nova.compute.manager [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Took 16.32 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:48:00 np0005603609 nova_compute[221550]: 2026-01-31 07:48:00.080 221554 DEBUG nova.compute.manager [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:48:00 np0005603609 nova_compute[221550]: 2026-01-31 07:48:00.167 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:48:00 np0005603609 nova_compute[221550]: 2026-01-31 07:48:00.170 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:48:00 np0005603609 nova_compute[221550]: 2026-01-31 07:48:00.231 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
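The `sync_power_state` messages above compare a DB power_state of 0 against a VM power_state of 1. These are the numeric codes from `nova.compute.power_state`; assuming the mapping used by recent Nova releases (verify against the Nova version actually deployed), a small decoder for reading these lines:

```python
# Numeric codes from nova.compute.power_state, as seen in the
# "current DB power_state: 0, VM power_state: 1" messages above.
# Mapping assumed from recent Nova releases; verify for other versions.
POWER_STATE = {
    0: 'NOSTATE',
    1: 'RUNNING',
    3: 'PAUSED',
    4: 'SHUTDOWN',
    6: 'CRASHED',
    7: 'SUSPENDED',
}


def describe_power_state(code):
    """Translate a numeric power_state code into its symbolic name."""
    return POWER_STATE.get(code, f'UNKNOWN({code})')
```

So the lines above read as: the database still records NOSTATE (the instance row has not been updated yet) while libvirt reports the guest RUNNING, and the sync is skipped because a `spawning` task is pending.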
Jan 31 02:48:00 np0005603609 nova_compute[221550]: 2026-01-31 07:48:00.309 221554 INFO nova.compute.manager [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Took 20.14 seconds to build instance.#033[00m
Jan 31 02:48:00 np0005603609 nova_compute[221550]: 2026-01-31 07:48:00.357 221554 DEBUG oslo_concurrency.lockutils [None req-b2c4c0ba-f9c8-4f09-b5fd-dcf86ca48e56 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:00.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:48:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:01.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:48:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:02 np0005603609 nova_compute[221550]: 2026-01-31 07:48:02.298 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:02 np0005603609 nova_compute[221550]: 2026-01-31 07:48:02.367 221554 DEBUG oslo_concurrency.lockutils [None req-6fee509e-4f9b-40ea-8a28-e27ec22ef421 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:02 np0005603609 nova_compute[221550]: 2026-01-31 07:48:02.368 221554 DEBUG oslo_concurrency.lockutils [None req-6fee509e-4f9b-40ea-8a28-e27ec22ef421 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:02 np0005603609 nova_compute[221550]: 2026-01-31 07:48:02.369 221554 DEBUG nova.compute.manager [None req-6fee509e-4f9b-40ea-8a28-e27ec22ef421 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:48:02 np0005603609 nova_compute[221550]: 2026-01-31 07:48:02.374 221554 DEBUG nova.compute.manager [None req-6fee509e-4f9b-40ea-8a28-e27ec22ef421 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 31 02:48:02 np0005603609 nova_compute[221550]: 2026-01-31 07:48:02.375 221554 DEBUG nova.objects.instance [None req-6fee509e-4f9b-40ea-8a28-e27ec22ef421 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'flavor' on Instance uuid 9e3b0e76-62cb-4d16-888a-a1574d9ea228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:48:02 np0005603609 nova_compute[221550]: 2026-01-31 07:48:02.430 221554 DEBUG nova.virt.libvirt.driver [None req-6fee509e-4f9b-40ea-8a28-e27ec22ef421 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 02:48:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:02.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:02 np0005603609 nova_compute[221550]: 2026-01-31 07:48:02.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:48:02 np0005603609 nova_compute[221550]: 2026-01-31 07:48:02.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:48:02 np0005603609 nova_compute[221550]: 2026-01-31 07:48:02.820 221554 DEBUG nova.compute.manager [None req-8987aa17-2b13-436e-bdde-39b0ddd51c52 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:48:02 np0005603609 nova_compute[221550]: 2026-01-31 07:48:02.930 221554 DEBUG nova.compute.manager [req-6e07ff76-311d-4528-a306-bc1ed08e1b30 req-77482540-b4b2-4d6a-a072-2a5e7f869e9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Received event network-vif-plugged-b17c792c-aad7-417b-9cfe-859ce32b3d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:48:02 np0005603609 nova_compute[221550]: 2026-01-31 07:48:02.930 221554 DEBUG oslo_concurrency.lockutils [req-6e07ff76-311d-4528-a306-bc1ed08e1b30 req-77482540-b4b2-4d6a-a072-2a5e7f869e9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:02 np0005603609 nova_compute[221550]: 2026-01-31 07:48:02.930 221554 DEBUG oslo_concurrency.lockutils [req-6e07ff76-311d-4528-a306-bc1ed08e1b30 req-77482540-b4b2-4d6a-a072-2a5e7f869e9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:02 np0005603609 nova_compute[221550]: 2026-01-31 07:48:02.931 221554 DEBUG oslo_concurrency.lockutils [req-6e07ff76-311d-4528-a306-bc1ed08e1b30 req-77482540-b4b2-4d6a-a072-2a5e7f869e9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:02 np0005603609 nova_compute[221550]: 2026-01-31 07:48:02.931 221554 DEBUG nova.compute.manager [req-6e07ff76-311d-4528-a306-bc1ed08e1b30 req-77482540-b4b2-4d6a-a072-2a5e7f869e9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] No waiting events found dispatching network-vif-plugged-b17c792c-aad7-417b-9cfe-859ce32b3d37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:48:02 np0005603609 nova_compute[221550]: 2026-01-31 07:48:02.931 221554 WARNING nova.compute.manager [req-6e07ff76-311d-4528-a306-bc1ed08e1b30 req-77482540-b4b2-4d6a-a072-2a5e7f869e9d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Received unexpected event network-vif-plugged-b17c792c-aad7-417b-9cfe-859ce32b3d37 for instance with vm_state active and task_state powering-off.#033[00m
Jan 31 02:48:03 np0005603609 nova_compute[221550]: 2026-01-31 07:48:03.039 221554 INFO nova.compute.manager [None req-8987aa17-2b13-436e-bdde-39b0ddd51c52 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] instance snapshotting#033[00m
Jan 31 02:48:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:48:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:03.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:48:03 np0005603609 nova_compute[221550]: 2026-01-31 07:48:03.622 221554 INFO nova.virt.libvirt.driver [None req-8987aa17-2b13-436e-bdde-39b0ddd51c52 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Beginning live snapshot process#033[00m
Jan 31 02:48:03 np0005603609 nova_compute[221550]: 2026-01-31 07:48:03.916 221554 DEBUG nova.virt.libvirt.imagebackend [None req-8987aa17-2b13-436e-bdde-39b0ddd51c52 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 02:48:04 np0005603609 nova_compute[221550]: 2026-01-31 07:48:04.366 221554 DEBUG nova.storage.rbd_utils [None req-8987aa17-2b13-436e-bdde-39b0ddd51c52 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] creating snapshot(a05f0d42a6854479aee53652a066e741) on rbd image(2d280afb-69f5-4511-8134-19901686f5fc_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 02:48:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e231 e231: 3 total, 3 up, 3 in
Jan 31 02:48:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:48:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:04.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:48:04 np0005603609 nova_compute[221550]: 2026-01-31 07:48:04.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:48:04 np0005603609 nova_compute[221550]: 2026-01-31 07:48:04.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:48:04 np0005603609 nova_compute[221550]: 2026-01-31 07:48:04.721 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:48:04 np0005603609 nova_compute[221550]: 2026-01-31 07:48:04.721 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:48:04 np0005603609 nova_compute[221550]: 2026-01-31 07:48:04.722 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:48:04 np0005603609 nova_compute[221550]: 2026-01-31 07:48:04.722 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 02:48:04 np0005603609 nova_compute[221550]: 2026-01-31 07:48:04.722 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:48:04 np0005603609 nova_compute[221550]: 2026-01-31 07:48:04.747 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:48:05 np0005603609 nova_compute[221550]: 2026-01-31 07:48:05.072 221554 DEBUG nova.storage.rbd_utils [None req-8987aa17-2b13-436e-bdde-39b0ddd51c52 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] cloning vms/2d280afb-69f5-4511-8134-19901686f5fc_disk@a05f0d42a6854479aee53652a066e741 to images/ba20a523-9c85-4f5a-958b-3ab7efcbb3ba clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 02:48:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:48:05 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1783155386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:48:05 np0005603609 nova_compute[221550]: 2026-01-31 07:48:05.147 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:48:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:05.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:05 np0005603609 nova_compute[221550]: 2026-01-31 07:48:05.295 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000041 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 02:48:05 np0005603609 nova_compute[221550]: 2026-01-31 07:48:05.295 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000041 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 02:48:05 np0005603609 nova_compute[221550]: 2026-01-31 07:48:05.299 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 02:48:05 np0005603609 nova_compute[221550]: 2026-01-31 07:48:05.299 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 02:48:05 np0005603609 nova_compute[221550]: 2026-01-31 07:48:05.456 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 02:48:05 np0005603609 nova_compute[221550]: 2026-01-31 07:48:05.457 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4349MB free_disk=20.83094024658203GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 02:48:05 np0005603609 nova_compute[221550]: 2026-01-31 07:48:05.458 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:48:05 np0005603609 nova_compute[221550]: 2026-01-31 07:48:05.458 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:48:05 np0005603609 nova_compute[221550]: 2026-01-31 07:48:05.743 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 2d280afb-69f5-4511-8134-19901686f5fc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 02:48:05 np0005603609 nova_compute[221550]: 2026-01-31 07:48:05.743 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 9e3b0e76-62cb-4d16-888a-a1574d9ea228 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 02:48:05 np0005603609 nova_compute[221550]: 2026-01-31 07:48:05.744 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 02:48:05 np0005603609 nova_compute[221550]: 2026-01-31 07:48:05.744 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 02:48:05 np0005603609 nova_compute[221550]: 2026-01-31 07:48:05.813 221554 DEBUG nova.storage.rbd_utils [None req-8987aa17-2b13-436e-bdde-39b0ddd51c52 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] flattening images/ba20a523-9c85-4f5a-958b-3ab7efcbb3ba flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 02:48:05 np0005603609 nova_compute[221550]: 2026-01-31 07:48:05.868 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:48:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:48:06 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1085148854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:48:06 np0005603609 nova_compute[221550]: 2026-01-31 07:48:06.302 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:48:06 np0005603609 nova_compute[221550]: 2026-01-31 07:48:06.307 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:48:06 np0005603609 nova_compute[221550]: 2026-01-31 07:48:06.372 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:48:06 np0005603609 nova_compute[221550]: 2026-01-31 07:48:06.488 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 02:48:06 np0005603609 nova_compute[221550]: 2026-01-31 07:48:06.489 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:48:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e232 e232: 3 total, 3 up, 3 in
Jan 31 02:48:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:06.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:07 np0005603609 nova_compute[221550]: 2026-01-31 07:48:07.084 221554 DEBUG nova.storage.rbd_utils [None req-8987aa17-2b13-436e-bdde-39b0ddd51c52 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] removing snapshot(a05f0d42a6854479aee53652a066e741) on rbd image(2d280afb-69f5-4511-8134-19901686f5fc_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 02:48:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:07.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:07 np0005603609 nova_compute[221550]: 2026-01-31 07:48:07.300 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:48:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:07.482 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:48:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:07.483 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:48:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:07.484 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:48:07 np0005603609 nova_compute[221550]: 2026-01-31 07:48:07.485 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:48:07 np0005603609 nova_compute[221550]: 2026-01-31 07:48:07.486 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:48:07 np0005603609 nova_compute[221550]: 2026-01-31 07:48:07.486 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 02:48:07 np0005603609 nova_compute[221550]: 2026-01-31 07:48:07.487 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 02:48:07 np0005603609 nova_compute[221550]: 2026-01-31 07:48:07.891 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-2d280afb-69f5-4511-8134-19901686f5fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:48:07 np0005603609 nova_compute[221550]: 2026-01-31 07:48:07.892 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-2d280afb-69f5-4511-8134-19901686f5fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:48:07 np0005603609 nova_compute[221550]: 2026-01-31 07:48:07.892 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 02:48:07 np0005603609 nova_compute[221550]: 2026-01-31 07:48:07.893 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2d280afb-69f5-4511-8134-19901686f5fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:48:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e233 e233: 3 total, 3 up, 3 in
Jan 31 02:48:07 np0005603609 nova_compute[221550]: 2026-01-31 07:48:07.947 221554 DEBUG nova.storage.rbd_utils [None req-8987aa17-2b13-436e-bdde-39b0ddd51c52 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] creating snapshot(snap) on rbd image(ba20a523-9c85-4f5a-958b-3ab7efcbb3ba) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 02:48:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:48:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:08.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:48:08 np0005603609 nova_compute[221550]: 2026-01-31 07:48:08.754 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 02:48:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e234 e234: 3 total, 3 up, 3 in
Jan 31 02:48:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:09.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:09 np0005603609 nova_compute[221550]: 2026-01-31 07:48:09.803 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:48:10 np0005603609 nova_compute[221550]: 2026-01-31 07:48:10.473 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:48:10 np0005603609 nova_compute[221550]: 2026-01-31 07:48:10.520 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-2d280afb-69f5-4511-8134-19901686f5fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:48:10 np0005603609 nova_compute[221550]: 2026-01-31 07:48:10.521 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 02:48:10 np0005603609 nova_compute[221550]: 2026-01-31 07:48:10.521 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:48:10 np0005603609 nova_compute[221550]: 2026-01-31 07:48:10.521 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:48:10 np0005603609 nova_compute[221550]: 2026-01-31 07:48:10.521 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 02:48:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:10.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:11.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:12 np0005603609 nova_compute[221550]: 2026-01-31 07:48:12.301 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:48:12 np0005603609 nova_compute[221550]: 2026-01-31 07:48:12.473 221554 DEBUG nova.virt.libvirt.driver [None req-6fee509e-4f9b-40ea-8a28-e27ec22ef421 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 02:48:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:12.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:12 np0005603609 nova_compute[221550]: 2026-01-31 07:48:12.964 221554 INFO nova.virt.libvirt.driver [None req-8987aa17-2b13-436e-bdde-39b0ddd51c52 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Snapshot image upload complete
Jan 31 02:48:12 np0005603609 nova_compute[221550]: 2026-01-31 07:48:12.964 221554 INFO nova.compute.manager [None req-8987aa17-2b13-436e-bdde-39b0ddd51c52 aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Took 9.92 seconds to snapshot the instance on the hypervisor.
Jan 31 02:48:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:13.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:14.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:14 np0005603609 ovn_controller[130359]: 2026-01-31T07:48:14Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:35:94:6a 10.100.0.6
Jan 31 02:48:14 np0005603609 ovn_controller[130359]: 2026-01-31T07:48:14Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:35:94:6a 10.100.0.6
Jan 31 02:48:14 np0005603609 nova_compute[221550]: 2026-01-31 07:48:14.805 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:48:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:15.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:15 np0005603609 nova_compute[221550]: 2026-01-31 07:48:15.690 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:48:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e235 e235: 3 total, 3 up, 3 in
Jan 31 02:48:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:16.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:48:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:17.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:48:17 np0005603609 nova_compute[221550]: 2026-01-31 07:48:17.303 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:48:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:18.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:19.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:19 np0005603609 nova_compute[221550]: 2026-01-31 07:48:19.807 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:48:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:20.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:21.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:21.267 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 02:48:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:21.268 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 02:48:21 np0005603609 nova_compute[221550]: 2026-01-31 07:48:21.268 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:48:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:22 np0005603609 nova_compute[221550]: 2026-01-31 07:48:22.337 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:22.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:23.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:23 np0005603609 nova_compute[221550]: 2026-01-31 07:48:23.513 221554 DEBUG nova.virt.libvirt.driver [None req-6fee509e-4f9b-40ea-8a28-e27ec22ef421 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 02:48:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:24.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:24 np0005603609 nova_compute[221550]: 2026-01-31 07:48:24.847 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:25.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:25 np0005603609 kernel: tapb17c792c-aa (unregistering): left promiscuous mode
Jan 31 02:48:25 np0005603609 NetworkManager[49064]: <info>  [1769845705.8287] device (tapb17c792c-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:48:25 np0005603609 nova_compute[221550]: 2026-01-31 07:48:25.836 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:25 np0005603609 ovn_controller[130359]: 2026-01-31T07:48:25Z|00193|binding|INFO|Releasing lport b17c792c-aad7-417b-9cfe-859ce32b3d37 from this chassis (sb_readonly=0)
Jan 31 02:48:25 np0005603609 ovn_controller[130359]: 2026-01-31T07:48:25Z|00194|binding|INFO|Setting lport b17c792c-aad7-417b-9cfe-859ce32b3d37 down in Southbound
Jan 31 02:48:25 np0005603609 ovn_controller[130359]: 2026-01-31T07:48:25Z|00195|binding|INFO|Removing iface tapb17c792c-aa ovn-installed in OVS
Jan 31 02:48:25 np0005603609 nova_compute[221550]: 2026-01-31 07:48:25.839 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:25 np0005603609 nova_compute[221550]: 2026-01-31 07:48:25.848 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:25.852 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:94:6a 10.100.0.6'], port_security=['fa:16:3e:35:94:6a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9e3b0e76-62cb-4d16-888a-a1574d9ea228', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60244e92-1524-47f0-8207-05d0104afa47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f055dcb-9af1-4cf7-aa74-c819da93756e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70b36ac0-62cb-4e29-924b-93c5ad906bc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=b17c792c-aad7-417b-9cfe-859ce32b3d37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:48:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:25.854 140058 INFO neutron.agent.ovn.metadata.agent [-] Port b17c792c-aad7-417b-9cfe-859ce32b3d37 in datapath 60244e92-1524-47f0-8207-05d0104afa47 unbound from our chassis#033[00m
Jan 31 02:48:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:25.855 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60244e92-1524-47f0-8207-05d0104afa47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:48:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:25.857 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[52a4bb77-1d78-4a4e-b6a3-1c2f5485f336]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:48:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:25.858 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 namespace which is not needed anymore#033[00m
Jan 31 02:48:25 np0005603609 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000041.scope: Deactivated successfully.
Jan 31 02:48:25 np0005603609 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000041.scope: Consumed 15.564s CPU time.
Jan 31 02:48:25 np0005603609 systemd-machined[190912]: Machine qemu-30-instance-00000041 terminated.
Jan 31 02:48:25 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[245152]: [NOTICE]   (245167) : haproxy version is 2.8.14-c23fe91
Jan 31 02:48:25 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[245152]: [NOTICE]   (245167) : path to executable is /usr/sbin/haproxy
Jan 31 02:48:25 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[245152]: [WARNING]  (245167) : Exiting Master process...
Jan 31 02:48:25 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[245152]: [ALERT]    (245167) : Current worker (245169) exited with code 143 (Terminated)
Jan 31 02:48:25 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[245152]: [WARNING]  (245167) : All workers exited. Exiting... (0)
Jan 31 02:48:25 np0005603609 systemd[1]: libpod-0d9b2e39a19f5093b41709a01164d2c2475bb79143efd7d17d56e48a0f19a944.scope: Deactivated successfully.
Jan 31 02:48:25 np0005603609 podman[245397]: 2026-01-31 07:48:25.974652344 +0000 UTC m=+0.044001561 container died 0d9b2e39a19f5093b41709a01164d2c2475bb79143efd7d17d56e48a0f19a944 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:48:26 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d9b2e39a19f5093b41709a01164d2c2475bb79143efd7d17d56e48a0f19a944-userdata-shm.mount: Deactivated successfully.
Jan 31 02:48:26 np0005603609 systemd[1]: var-lib-containers-storage-overlay-6a13b590afc81945310ccf971b8a3970ed8c2c0caba3cd9aadd5cfa41b33243a-merged.mount: Deactivated successfully.
Jan 31 02:48:26 np0005603609 podman[245397]: 2026-01-31 07:48:26.032655396 +0000 UTC m=+0.102004613 container cleanup 0d9b2e39a19f5093b41709a01164d2c2475bb79143efd7d17d56e48a0f19a944 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 02:48:26 np0005603609 systemd[1]: libpod-conmon-0d9b2e39a19f5093b41709a01164d2c2475bb79143efd7d17d56e48a0f19a944.scope: Deactivated successfully.
Jan 31 02:48:26 np0005603609 podman[245426]: 2026-01-31 07:48:26.116612979 +0000 UTC m=+0.064404748 container remove 0d9b2e39a19f5093b41709a01164d2c2475bb79143efd7d17d56e48a0f19a944 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:48:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e236 e236: 3 total, 3 up, 3 in
Jan 31 02:48:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:26.124 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2a78d9f0-c534-4f32-bb9b-580ce4ac0f9e]: (4, ('Sat Jan 31 07:48:25 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 (0d9b2e39a19f5093b41709a01164d2c2475bb79143efd7d17d56e48a0f19a944)\n0d9b2e39a19f5093b41709a01164d2c2475bb79143efd7d17d56e48a0f19a944\nSat Jan 31 07:48:26 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 (0d9b2e39a19f5093b41709a01164d2c2475bb79143efd7d17d56e48a0f19a944)\n0d9b2e39a19f5093b41709a01164d2c2475bb79143efd7d17d56e48a0f19a944\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:48:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:26.125 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b6fe2fd1-e9ae-4207-96b8-7ba5058b12b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:48:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:26.126 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60244e92-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:48:26 np0005603609 nova_compute[221550]: 2026-01-31 07:48:26.129 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:26 np0005603609 kernel: tap60244e92-10: left promiscuous mode
Jan 31 02:48:26 np0005603609 nova_compute[221550]: 2026-01-31 07:48:26.143 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:26.147 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[af81ce50-6370-4705-b7f5-886ee2e0462e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:48:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:26.157 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bfdb2267-1923-4e28-83e3-533ea5a18e7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:48:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:26.158 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b4d31c89-a2c9-4449-b59c-f9f8908f68e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:48:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:26.170 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b464ffbc-820f-44b2-b756-6d8528ab8982]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591870, 'reachable_time': 36586, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245454, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:48:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:26.173 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:48:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:26.173 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[9815f4ba-4c0a-40a5-a0e0-76ac5773a4ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:48:26 np0005603609 systemd[1]: run-netns-ovnmeta\x2d60244e92\x2d1524\x2d47f0\x2d8207\x2d05d0104afa47.mount: Deactivated successfully.
Jan 31 02:48:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:26.270 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:48:26 np0005603609 nova_compute[221550]: 2026-01-31 07:48:26.525 221554 INFO nova.virt.libvirt.driver [None req-6fee509e-4f9b-40ea-8a28-e27ec22ef421 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Instance shutdown successfully after 24 seconds.#033[00m
Jan 31 02:48:26 np0005603609 nova_compute[221550]: 2026-01-31 07:48:26.530 221554 INFO nova.virt.libvirt.driver [-] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Instance destroyed successfully.#033[00m
Jan 31 02:48:26 np0005603609 nova_compute[221550]: 2026-01-31 07:48:26.530 221554 DEBUG nova.objects.instance [None req-6fee509e-4f9b-40ea-8a28-e27ec22ef421 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9e3b0e76-62cb-4d16-888a-a1574d9ea228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:48:26 np0005603609 nova_compute[221550]: 2026-01-31 07:48:26.563 221554 DEBUG nova.compute.manager [None req-6fee509e-4f9b-40ea-8a28-e27ec22ef421 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:48:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:26.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:26 np0005603609 nova_compute[221550]: 2026-01-31 07:48:26.668 221554 DEBUG oslo_concurrency.lockutils [None req-6fee509e-4f9b-40ea-8a28-e27ec22ef421 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.299s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:27 np0005603609 nova_compute[221550]: 2026-01-31 07:48:27.045 221554 DEBUG nova.compute.manager [req-817b39e4-f6e5-4670-802e-06e809062437 req-f6f051be-4318-4106-8b2a-10f5b0b3458e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Received event network-vif-unplugged-b17c792c-aad7-417b-9cfe-859ce32b3d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:48:27 np0005603609 nova_compute[221550]: 2026-01-31 07:48:27.046 221554 DEBUG oslo_concurrency.lockutils [req-817b39e4-f6e5-4670-802e-06e809062437 req-f6f051be-4318-4106-8b2a-10f5b0b3458e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:27 np0005603609 nova_compute[221550]: 2026-01-31 07:48:27.047 221554 DEBUG oslo_concurrency.lockutils [req-817b39e4-f6e5-4670-802e-06e809062437 req-f6f051be-4318-4106-8b2a-10f5b0b3458e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:27 np0005603609 nova_compute[221550]: 2026-01-31 07:48:27.047 221554 DEBUG oslo_concurrency.lockutils [req-817b39e4-f6e5-4670-802e-06e809062437 req-f6f051be-4318-4106-8b2a-10f5b0b3458e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:27 np0005603609 nova_compute[221550]: 2026-01-31 07:48:27.047 221554 DEBUG nova.compute.manager [req-817b39e4-f6e5-4670-802e-06e809062437 req-f6f051be-4318-4106-8b2a-10f5b0b3458e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] No waiting events found dispatching network-vif-unplugged-b17c792c-aad7-417b-9cfe-859ce32b3d37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:48:27 np0005603609 nova_compute[221550]: 2026-01-31 07:48:27.047 221554 WARNING nova.compute.manager [req-817b39e4-f6e5-4670-802e-06e809062437 req-f6f051be-4318-4106-8b2a-10f5b0b3458e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Received unexpected event network-vif-unplugged-b17c792c-aad7-417b-9cfe-859ce32b3d37 for instance with vm_state stopped and task_state None.#033[00m
Jan 31 02:48:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:27 np0005603609 podman[245456]: 2026-01-31 07:48:27.164740566 +0000 UTC m=+0.053193945 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 31 02:48:27 np0005603609 podman[245455]: 2026-01-31 07:48:27.2006571 +0000 UTC m=+0.089224532 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 02:48:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:27.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:27 np0005603609 nova_compute[221550]: 2026-01-31 07:48:27.339 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e237 e237: 3 total, 3 up, 3 in
Jan 31 02:48:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:48:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:28.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:48:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:29.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.581 221554 DEBUG nova.compute.manager [req-f35c7535-0e6b-4658-b7ca-17aa7a1c242b req-9cecb790-fe53-470a-9fcc-ca1f0469050b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Received event network-vif-plugged-b17c792c-aad7-417b-9cfe-859ce32b3d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.581 221554 DEBUG oslo_concurrency.lockutils [req-f35c7535-0e6b-4658-b7ca-17aa7a1c242b req-9cecb790-fe53-470a-9fcc-ca1f0469050b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.581 221554 DEBUG oslo_concurrency.lockutils [req-f35c7535-0e6b-4658-b7ca-17aa7a1c242b req-9cecb790-fe53-470a-9fcc-ca1f0469050b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.582 221554 DEBUG oslo_concurrency.lockutils [req-f35c7535-0e6b-4658-b7ca-17aa7a1c242b req-9cecb790-fe53-470a-9fcc-ca1f0469050b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.582 221554 DEBUG nova.compute.manager [req-f35c7535-0e6b-4658-b7ca-17aa7a1c242b req-9cecb790-fe53-470a-9fcc-ca1f0469050b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] No waiting events found dispatching network-vif-plugged-b17c792c-aad7-417b-9cfe-859ce32b3d37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.582 221554 WARNING nova.compute.manager [req-f35c7535-0e6b-4658-b7ca-17aa7a1c242b req-9cecb790-fe53-470a-9fcc-ca1f0469050b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Received unexpected event network-vif-plugged-b17c792c-aad7-417b-9cfe-859ce32b3d37 for instance with vm_state stopped and task_state deleting.#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.676 221554 DEBUG oslo_concurrency.lockutils [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.677 221554 DEBUG oslo_concurrency.lockutils [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.677 221554 DEBUG oslo_concurrency.lockutils [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.677 221554 DEBUG oslo_concurrency.lockutils [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.678 221554 DEBUG oslo_concurrency.lockutils [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.678 221554 INFO nova.compute.manager [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Terminating instance#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.679 221554 DEBUG nova.compute.manager [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.686 221554 INFO nova.virt.libvirt.driver [-] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Instance destroyed successfully.#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.686 221554 DEBUG nova.objects.instance [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'resources' on Instance uuid 9e3b0e76-62cb-4d16-888a-a1574d9ea228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.720 221554 DEBUG nova.virt.libvirt.vif [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:47:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1127391059',display_name='tempest-DeleteServersTestJSON-server-1127391059',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1127391059',id=65,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:48:00Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-g2vqvavk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:48:26Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=9e3b0e76-62cb-4d16-888a-a1574d9ea228,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "address": "fa:16:3e:35:94:6a", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb17c792c-aa", "ovs_interfaceid": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.721 221554 DEBUG nova.network.os_vif_util [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "address": "fa:16:3e:35:94:6a", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb17c792c-aa", "ovs_interfaceid": "b17c792c-aad7-417b-9cfe-859ce32b3d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.722 221554 DEBUG nova.network.os_vif_util [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:94:6a,bridge_name='br-int',has_traffic_filtering=True,id=b17c792c-aad7-417b-9cfe-859ce32b3d37,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb17c792c-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.722 221554 DEBUG os_vif [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:94:6a,bridge_name='br-int',has_traffic_filtering=True,id=b17c792c-aad7-417b-9cfe-859ce32b3d37,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb17c792c-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.723 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.724 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb17c792c-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.756 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.758 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:48:29 np0005603609 nova_compute[221550]: 2026-01-31 07:48:29.761 221554 INFO os_vif [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:94:6a,bridge_name='br-int',has_traffic_filtering=True,id=b17c792c-aad7-417b-9cfe-859ce32b3d37,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb17c792c-aa')#033[00m
Jan 31 02:48:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:30.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:31.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:31 np0005603609 nova_compute[221550]: 2026-01-31 07:48:31.626 221554 INFO nova.virt.libvirt.driver [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Deleting instance files /var/lib/nova/instances/9e3b0e76-62cb-4d16-888a-a1574d9ea228_del#033[00m
Jan 31 02:48:31 np0005603609 nova_compute[221550]: 2026-01-31 07:48:31.627 221554 INFO nova.virt.libvirt.driver [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Deletion of /var/lib/nova/instances/9e3b0e76-62cb-4d16-888a-a1574d9ea228_del complete#033[00m
Jan 31 02:48:31 np0005603609 nova_compute[221550]: 2026-01-31 07:48:31.734 221554 INFO nova.compute.manager [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Took 2.05 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:48:31 np0005603609 nova_compute[221550]: 2026-01-31 07:48:31.734 221554 DEBUG oslo.service.loopingcall [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:48:31 np0005603609 nova_compute[221550]: 2026-01-31 07:48:31.735 221554 DEBUG nova.compute.manager [-] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:48:31 np0005603609 nova_compute[221550]: 2026-01-31 07:48:31.735 221554 DEBUG nova.network.neutron [-] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:48:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:32 np0005603609 nova_compute[221550]: 2026-01-31 07:48:32.343 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e238 e238: 3 total, 3 up, 3 in
Jan 31 02:48:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:32.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:32 np0005603609 nova_compute[221550]: 2026-01-31 07:48:32.695 221554 DEBUG nova.network.neutron [-] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:48:32 np0005603609 nova_compute[221550]: 2026-01-31 07:48:32.735 221554 INFO nova.compute.manager [-] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Took 1.00 seconds to deallocate network for instance.#033[00m
Jan 31 02:48:32 np0005603609 nova_compute[221550]: 2026-01-31 07:48:32.806 221554 DEBUG oslo_concurrency.lockutils [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:32 np0005603609 nova_compute[221550]: 2026-01-31 07:48:32.807 221554 DEBUG oslo_concurrency.lockutils [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:32 np0005603609 nova_compute[221550]: 2026-01-31 07:48:32.871 221554 DEBUG nova.compute.manager [req-341f2a5a-9f02-4d64-bde7-a05a2cda710b req-53a461be-096f-4cc9-8fe5-b72c5763020f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Received event network-vif-deleted-b17c792c-aad7-417b-9cfe-859ce32b3d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:48:32 np0005603609 nova_compute[221550]: 2026-01-31 07:48:32.923 221554 DEBUG oslo_concurrency.processutils [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:48:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:33.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:48:33 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1982302776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:48:33 np0005603609 nova_compute[221550]: 2026-01-31 07:48:33.357 221554 DEBUG oslo_concurrency.processutils [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:48:33 np0005603609 nova_compute[221550]: 2026-01-31 07:48:33.363 221554 DEBUG nova.compute.provider_tree [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:48:33 np0005603609 nova_compute[221550]: 2026-01-31 07:48:33.381 221554 DEBUG nova.scheduler.client.report [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:48:33 np0005603609 nova_compute[221550]: 2026-01-31 07:48:33.414 221554 DEBUG oslo_concurrency.lockutils [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:33 np0005603609 nova_compute[221550]: 2026-01-31 07:48:33.523 221554 INFO nova.scheduler.client.report [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Deleted allocations for instance 9e3b0e76-62cb-4d16-888a-a1574d9ea228#033[00m
Jan 31 02:48:33 np0005603609 nova_compute[221550]: 2026-01-31 07:48:33.937 221554 DEBUG oslo_concurrency.lockutils [None req-6de75540-ff47-42fb-b091-15149844d13c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "9e3b0e76-62cb-4d16-888a-a1574d9ea228" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:34.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:34 np0005603609 nova_compute[221550]: 2026-01-31 07:48:34.757 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:48:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:35.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:48:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e239 e239: 3 total, 3 up, 3 in
Jan 31 02:48:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:36.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:37.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:37 np0005603609 nova_compute[221550]: 2026-01-31 07:48:37.386 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:38.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:48:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:39.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:48:39 np0005603609 nova_compute[221550]: 2026-01-31 07:48:39.807 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:40.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:41 np0005603609 nova_compute[221550]: 2026-01-31 07:48:41.071 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845706.069882, 9e3b0e76-62cb-4d16-888a-a1574d9ea228 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:48:41 np0005603609 nova_compute[221550]: 2026-01-31 07:48:41.072 221554 INFO nova.compute.manager [-] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:48:41 np0005603609 nova_compute[221550]: 2026-01-31 07:48:41.092 221554 DEBUG nova.compute.manager [None req-6f4f7615-72ff-45db-b85c-1535bc24a2cb - - - - - -] [instance: 9e3b0e76-62cb-4d16-888a-a1574d9ea228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:48:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:41.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e240 e240: 3 total, 3 up, 3 in
Jan 31 02:48:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:42 np0005603609 nova_compute[221550]: 2026-01-31 07:48:42.390 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:48:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:42.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:48:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:43.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:43 np0005603609 nova_compute[221550]: 2026-01-31 07:48:43.955 221554 DEBUG oslo_concurrency.lockutils [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquiring lock "2d280afb-69f5-4511-8134-19901686f5fc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:43 np0005603609 nova_compute[221550]: 2026-01-31 07:48:43.955 221554 DEBUG oslo_concurrency.lockutils [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "2d280afb-69f5-4511-8134-19901686f5fc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:43 np0005603609 nova_compute[221550]: 2026-01-31 07:48:43.955 221554 DEBUG oslo_concurrency.lockutils [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquiring lock "2d280afb-69f5-4511-8134-19901686f5fc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:43 np0005603609 nova_compute[221550]: 2026-01-31 07:48:43.956 221554 DEBUG oslo_concurrency.lockutils [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "2d280afb-69f5-4511-8134-19901686f5fc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:43 np0005603609 nova_compute[221550]: 2026-01-31 07:48:43.956 221554 DEBUG oslo_concurrency.lockutils [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "2d280afb-69f5-4511-8134-19901686f5fc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:43 np0005603609 nova_compute[221550]: 2026-01-31 07:48:43.957 221554 INFO nova.compute.manager [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Terminating instance#033[00m
Jan 31 02:48:43 np0005603609 nova_compute[221550]: 2026-01-31 07:48:43.957 221554 DEBUG oslo_concurrency.lockutils [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquiring lock "refresh_cache-2d280afb-69f5-4511-8134-19901686f5fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:48:43 np0005603609 nova_compute[221550]: 2026-01-31 07:48:43.958 221554 DEBUG oslo_concurrency.lockutils [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquired lock "refresh_cache-2d280afb-69f5-4511-8134-19901686f5fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:48:43 np0005603609 nova_compute[221550]: 2026-01-31 07:48:43.958 221554 DEBUG nova.network.neutron [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:48:44 np0005603609 nova_compute[221550]: 2026-01-31 07:48:44.150 221554 DEBUG nova.network.neutron [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:48:44 np0005603609 nova_compute[221550]: 2026-01-31 07:48:44.531 221554 DEBUG nova.network.neutron [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:48:44 np0005603609 nova_compute[221550]: 2026-01-31 07:48:44.572 221554 DEBUG oslo_concurrency.lockutils [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Releasing lock "refresh_cache-2d280afb-69f5-4511-8134-19901686f5fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:48:44 np0005603609 nova_compute[221550]: 2026-01-31 07:48:44.573 221554 DEBUG nova.compute.manager [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:48:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:48:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:44.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:48:44 np0005603609 nova_compute[221550]: 2026-01-31 07:48:44.811 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:45 np0005603609 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Jan 31 02:48:45 np0005603609 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000003f.scope: Consumed 15.406s CPU time.
Jan 31 02:48:45 np0005603609 systemd-machined[190912]: Machine qemu-29-instance-0000003f terminated.
Jan 31 02:48:45 np0005603609 nova_compute[221550]: 2026-01-31 07:48:45.198 221554 INFO nova.virt.libvirt.driver [-] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Instance destroyed successfully.#033[00m
Jan 31 02:48:45 np0005603609 nova_compute[221550]: 2026-01-31 07:48:45.199 221554 DEBUG nova.objects.instance [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lazy-loading 'resources' on Instance uuid 2d280afb-69f5-4511-8134-19901686f5fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:48:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:45.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:46.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:47.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:47 np0005603609 nova_compute[221550]: 2026-01-31 07:48:47.448 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:48:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:48.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:48:48 np0005603609 nova_compute[221550]: 2026-01-31 07:48:48.849 221554 INFO nova.virt.libvirt.driver [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Deleting instance files /var/lib/nova/instances/2d280afb-69f5-4511-8134-19901686f5fc_del#033[00m
Jan 31 02:48:48 np0005603609 nova_compute[221550]: 2026-01-31 07:48:48.850 221554 INFO nova.virt.libvirt.driver [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Deletion of /var/lib/nova/instances/2d280afb-69f5-4511-8134-19901686f5fc_del complete#033[00m
Jan 31 02:48:48 np0005603609 nova_compute[221550]: 2026-01-31 07:48:48.968 221554 INFO nova.compute.manager [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Took 4.39 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:48:48 np0005603609 nova_compute[221550]: 2026-01-31 07:48:48.969 221554 DEBUG oslo.service.loopingcall [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:48:48 np0005603609 nova_compute[221550]: 2026-01-31 07:48:48.969 221554 DEBUG nova.compute.manager [-] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:48:48 np0005603609 nova_compute[221550]: 2026-01-31 07:48:48.969 221554 DEBUG nova.network.neutron [-] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:48:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:48:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:49.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:48:49 np0005603609 nova_compute[221550]: 2026-01-31 07:48:49.536 221554 DEBUG nova.network.neutron [-] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:48:49 np0005603609 nova_compute[221550]: 2026-01-31 07:48:49.551 221554 DEBUG nova.network.neutron [-] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:48:49 np0005603609 nova_compute[221550]: 2026-01-31 07:48:49.577 221554 INFO nova.compute.manager [-] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Took 0.61 seconds to deallocate network for instance.#033[00m
Jan 31 02:48:49 np0005603609 nova_compute[221550]: 2026-01-31 07:48:49.664 221554 DEBUG oslo_concurrency.lockutils [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:49 np0005603609 nova_compute[221550]: 2026-01-31 07:48:49.664 221554 DEBUG oslo_concurrency.lockutils [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:49 np0005603609 nova_compute[221550]: 2026-01-31 07:48:49.731 221554 DEBUG oslo_concurrency.processutils [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:48:49 np0005603609 nova_compute[221550]: 2026-01-31 07:48:49.816 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:48:50 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3811372512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:48:50 np0005603609 nova_compute[221550]: 2026-01-31 07:48:50.122 221554 DEBUG oslo_concurrency.processutils [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:48:50 np0005603609 nova_compute[221550]: 2026-01-31 07:48:50.130 221554 DEBUG nova.compute.provider_tree [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:48:50 np0005603609 nova_compute[221550]: 2026-01-31 07:48:50.145 221554 DEBUG nova.scheduler.client.report [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:48:50 np0005603609 nova_compute[221550]: 2026-01-31 07:48:50.163 221554 DEBUG oslo_concurrency.lockutils [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:50 np0005603609 nova_compute[221550]: 2026-01-31 07:48:50.185 221554 INFO nova.scheduler.client.report [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Deleted allocations for instance 2d280afb-69f5-4511-8134-19901686f5fc#033[00m
Jan 31 02:48:50 np0005603609 nova_compute[221550]: 2026-01-31 07:48:50.252 221554 DEBUG oslo_concurrency.lockutils [None req-589a3339-405b-4787-8f2e-943c7822268a aeb97fa2b1284c3faf0028734652a72c d8503864fef643f698a175cc6364101c - - default default] Lock "2d280afb-69f5-4511-8134-19901686f5fc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:48:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:50.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:48:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:48:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:51.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:48:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e241 e241: 3 total, 3 up, 3 in
Jan 31 02:48:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:52 np0005603609 nova_compute[221550]: 2026-01-31 07:48:52.449 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e242 e242: 3 total, 3 up, 3 in
Jan 31 02:48:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:48:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:52.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:48:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:53.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:54.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:54 np0005603609 nova_compute[221550]: 2026-01-31 07:48:54.820 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e243 e243: 3 total, 3 up, 3 in
Jan 31 02:48:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:55.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:55 np0005603609 nova_compute[221550]: 2026-01-31 07:48:55.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:48:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:48:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:48:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:48:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:48:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:48:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:48:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:56.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:48:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:48:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:48:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:48:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:48:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:57.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:57 np0005603609 nova_compute[221550]: 2026-01-31 07:48:57.497 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:58 np0005603609 podman[245836]: 2026-01-31 07:48:58.169848331 +0000 UTC m=+0.052643491 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:48:58 np0005603609 podman[245835]: 2026-01-31 07:48:58.18866891 +0000 UTC m=+0.074185337 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:48:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:58.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:58.763 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:48:58 np0005603609 nova_compute[221550]: 2026-01-31 07:48:58.763 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:48:58.765 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:48:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:48:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:59.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:59 np0005603609 nova_compute[221550]: 2026-01-31 07:48:59.875 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:00 np0005603609 nova_compute[221550]: 2026-01-31 07:49:00.195 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845725.1925874, 2d280afb-69f5-4511-8134-19901686f5fc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:49:00 np0005603609 nova_compute[221550]: 2026-01-31 07:49:00.196 221554 INFO nova.compute.manager [-] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:49:00 np0005603609 nova_compute[221550]: 2026-01-31 07:49:00.218 221554 DEBUG nova.compute.manager [None req-4bd04c8d-5871-428f-8240-c4b15e9cbf75 - - - - - -] [instance: 2d280afb-69f5-4511-8134-19901686f5fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:49:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:00.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:00.768 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:00.873554) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740873587, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2535, "num_deletes": 262, "total_data_size": 5638990, "memory_usage": 5725856, "flush_reason": "Manual Compaction"}
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740901854, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 3691359, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33836, "largest_seqno": 36365, "table_properties": {"data_size": 3680901, "index_size": 6760, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 22153, "raw_average_key_size": 21, "raw_value_size": 3659861, "raw_average_value_size": 3492, "num_data_blocks": 291, "num_entries": 1048, "num_filter_entries": 1048, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845557, "oldest_key_time": 1769845557, "file_creation_time": 1769845740, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 28344 microseconds, and 5770 cpu microseconds.
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:00.901896) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 3691359 bytes OK
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:00.901913) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:00.904558) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:00.904568) EVENT_LOG_v1 {"time_micros": 1769845740904565, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:00.904583) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 5627799, prev total WAL file size 5627799, number of live WAL files 2.
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:00.905216) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(3604KB)], [63(10141KB)]
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740905257, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 14076616, "oldest_snapshot_seqno": -1}
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6330 keys, 12110243 bytes, temperature: kUnknown
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740990061, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 12110243, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12064089, "index_size": 29212, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 161597, "raw_average_key_size": 25, "raw_value_size": 11946687, "raw_average_value_size": 1887, "num_data_blocks": 1174, "num_entries": 6330, "num_filter_entries": 6330, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769845740, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:00.990314) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 12110243 bytes
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:00.992707) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.9 rd, 142.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.9 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 6867, records dropped: 537 output_compression: NoCompression
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:00.992779) EVENT_LOG_v1 {"time_micros": 1769845740992751, "job": 38, "event": "compaction_finished", "compaction_time_micros": 84875, "compaction_time_cpu_micros": 19132, "output_level": 6, "num_output_files": 1, "total_output_size": 12110243, "num_input_records": 6867, "num_output_records": 6330, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740993860, "job": 38, "event": "table_file_deletion", "file_number": 65}
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740995696, "job": 38, "event": "table_file_deletion", "file_number": 63}
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:00.905119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:00.995828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:00.995836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:00.995839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:00.995841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:49:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:00.995843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:49:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:01.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e244 e244: 3 total, 3 up, 3 in
Jan 31 02:49:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:02 np0005603609 nova_compute[221550]: 2026-01-31 07:49:02.499 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:02 np0005603609 nova_compute[221550]: 2026-01-31 07:49:02.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:02.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e245 e245: 3 total, 3 up, 3 in
Jan 31 02:49:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:03.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:03 np0005603609 nova_compute[221550]: 2026-01-31 07:49:03.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:49:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:49:04 np0005603609 nova_compute[221550]: 2026-01-31 07:49:04.197 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:04 np0005603609 nova_compute[221550]: 2026-01-31 07:49:04.198 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:04 np0005603609 nova_compute[221550]: 2026-01-31 07:49:04.235 221554 DEBUG nova.compute.manager [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:49:04 np0005603609 nova_compute[221550]: 2026-01-31 07:49:04.322 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:04 np0005603609 nova_compute[221550]: 2026-01-31 07:49:04.322 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:04 np0005603609 nova_compute[221550]: 2026-01-31 07:49:04.330 221554 DEBUG nova.virt.hardware [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:49:04 np0005603609 nova_compute[221550]: 2026-01-31 07:49:04.331 221554 INFO nova.compute.claims [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:49:04 np0005603609 nova_compute[221550]: 2026-01-31 07:49:04.452 221554 DEBUG oslo_concurrency.processutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:04 np0005603609 nova_compute[221550]: 2026-01-31 07:49:04.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:04.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:49:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1096458877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:49:04 np0005603609 nova_compute[221550]: 2026-01-31 07:49:04.929 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:04 np0005603609 nova_compute[221550]: 2026-01-31 07:49:04.940 221554 DEBUG oslo_concurrency.processutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:04 np0005603609 nova_compute[221550]: 2026-01-31 07:49:04.949 221554 DEBUG nova.compute.provider_tree [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:49:04 np0005603609 nova_compute[221550]: 2026-01-31 07:49:04.968 221554 DEBUG nova.scheduler.client.report [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:49:04 np0005603609 nova_compute[221550]: 2026-01-31 07:49:04.996 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:04 np0005603609 nova_compute[221550]: 2026-01-31 07:49:04.997 221554 DEBUG nova.compute.manager [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.061 221554 DEBUG nova.compute.manager [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.061 221554 DEBUG nova.network.neutron [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.109 221554 INFO nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.142 221554 DEBUG nova.compute.manager [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.258 221554 DEBUG nova.compute.manager [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.259 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.260 221554 INFO nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Creating image(s)#033[00m
Jan 31 02:49:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:05.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.289 221554 DEBUG nova.storage.rbd_utils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 4a932d78-e0fe-4c23-a756-916144472a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.319 221554 DEBUG nova.storage.rbd_utils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 4a932d78-e0fe-4c23-a756-916144472a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.355 221554 DEBUG nova.storage.rbd_utils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 4a932d78-e0fe-4c23-a756-916144472a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.360 221554 DEBUG oslo_concurrency.processutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.390 221554 DEBUG nova.policy [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97abab8eb79247cd89fb2ebff295b890', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.446 221554 DEBUG oslo_concurrency.processutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.447 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.448 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.448 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.479 221554 DEBUG nova.storage.rbd_utils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 4a932d78-e0fe-4c23-a756-916144472a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.483 221554 DEBUG oslo_concurrency.processutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4a932d78-e0fe-4c23-a756-916144472a64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:05 np0005603609 nova_compute[221550]: 2026-01-31 07:49:05.933 221554 DEBUG oslo_concurrency.processutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4a932d78-e0fe-4c23-a756-916144472a64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.008 221554 DEBUG nova.storage.rbd_utils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] resizing rbd image 4a932d78-e0fe-4c23-a756-916144472a64_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.056 221554 DEBUG nova.network.neutron [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Successfully created port: b1f53ff3-6ad6-4599-8b83-463239ecd8c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.117 221554 DEBUG nova.objects.instance [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'migration_context' on Instance uuid 4a932d78-e0fe-4c23-a756-916144472a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.137 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.137 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Ensure instance console log exists: /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.138 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.138 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.138 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.658 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.681 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.681 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:06.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.701 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.702 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.702 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.702 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:49:06 np0005603609 nova_compute[221550]: 2026-01-31 07:49:06.702 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.041 221554 DEBUG nova.network.neutron [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Successfully updated port: b1f53ff3-6ad6-4599-8b83-463239ecd8c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:49:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.067 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.067 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquired lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.068 221554 DEBUG nova.network.neutron [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:49:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:49:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/968428789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.090 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
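The resource audit shells out to `ceph df --format=json` (via oslo_concurrency.processutils) to learn pool capacity for the RBD image backend. A hedged sketch of that flow; the JSON field names follow `ceph df` output but may vary by Ceph release, and `run_ceph_df` obviously needs a reachable cluster:

```python
import json
import subprocess


def get_pool_free_gb(ceph_df_json, pool_name):
    """Parse `ceph df --format=json` output and return the available
    space for one pool, in GiB. Field names assumed from `ceph df`."""
    data = json.loads(ceph_df_json)
    for pool in data.get("pools", []):
        if pool["name"] == pool_name:
            return pool["stats"]["max_avail"] / (1024 ** 3)
    raise KeyError(pool_name)


def run_ceph_df(conf="/etc/ceph/ceph.conf", client_id="openstack"):
    """Invoke the same command the log shows; requires a live Ceph cluster."""
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf])
    return out.decode()
```

The 0.388s the log reports is dominated by the round-trip to the monitor (visible as the `handle_command mon_command({"prefix": "df" ...})` line on the ceph-mon side).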
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.220 221554 DEBUG nova.compute.manager [req-cd3967c8-0b74-42ce-8ad0-afa657a64d63 req-7d3b6da7-018a-4053-9379-9bf00807f130 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received event network-changed-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.221 221554 DEBUG nova.compute.manager [req-cd3967c8-0b74-42ce-8ad0-afa657a64d63 req-7d3b6da7-018a-4053-9379-9bf00807f130 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Refreshing instance network info cache due to event network-changed-b1f53ff3-6ad6-4599-8b83-463239ecd8c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.222 221554 DEBUG oslo_concurrency.lockutils [req-cd3967c8-0b74-42ce-8ad0-afa657a64d63 req-7d3b6da7-018a-4053-9379-9bf00807f130 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.237 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.238 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4612MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.238 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.239 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:07.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.294 221554 DEBUG nova.network.neutron [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.298 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 4a932d78-e0fe-4c23-a756-916144472a64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.298 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.298 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.343 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:07.483 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:07.483 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:07.484 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.501 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:49:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2089316534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.797 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.802 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.824 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
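The inventory line above carries the numbers Placement schedules against. The effective schedulable capacity per resource class is `(total - reserved) * allocation_ratio`; a small sketch using the exact values from this log line:

```python
def schedulable_capacity(inventory):
    """Capacity Placement will actually allocate against, per resource
    class: (total - reserved) * allocation_ratio."""
    return {
        rc: (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        for rc, inv in inventory.items()
    }


# Values copied from the set_inventory_for_provider log line above.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
}

capacity = schedulable_capacity(inventory)
```

So this 8-core host can be oversubscribed to 32 schedulable vCPUs, while disk is undersubscribed (ratio 0.9) to leave Ceph headroom, yielding roughly 17.1 schedulable GB.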
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.849 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:49:07 np0005603609 nova_compute[221550]: 2026-01-31 07:49:07.849 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:08.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:08 np0005603609 nova_compute[221550]: 2026-01-31 07:49:08.828 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:08 np0005603609 nova_compute[221550]: 2026-01-31 07:49:08.828 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:08 np0005603609 nova_compute[221550]: 2026-01-31 07:49:08.829 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.090 221554 DEBUG nova.network.neutron [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Updating instance_info_cache with network_info: [{"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.113 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Releasing lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.114 221554 DEBUG nova.compute.manager [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Instance network_info: |[{"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.114 221554 DEBUG oslo_concurrency.lockutils [req-cd3967c8-0b74-42ce-8ad0-afa657a64d63 req-7d3b6da7-018a-4053-9379-9bf00807f130 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.114 221554 DEBUG nova.network.neutron [req-cd3967c8-0b74-42ce-8ad0-afa657a64d63 req-7d3b6da7-018a-4053-9379-9bf00807f130 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Refreshing network info cache for port b1f53ff3-6ad6-4599-8b83-463239ecd8c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.117 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Start _get_guest_xml network_info=[{"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.121 221554 WARNING nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.127 221554 DEBUG nova.virt.libvirt.host [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.128 221554 DEBUG nova.virt.libvirt.host [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.134 221554 DEBUG nova.virt.libvirt.host [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.135 221554 DEBUG nova.virt.libvirt.host [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.136 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.136 221554 DEBUG nova.virt.hardware [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.137 221554 DEBUG nova.virt.hardware [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.137 221554 DEBUG nova.virt.hardware [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.137 221554 DEBUG nova.virt.hardware [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.137 221554 DEBUG nova.virt.hardware [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.138 221554 DEBUG nova.virt.hardware [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.138 221554 DEBUG nova.virt.hardware [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.138 221554 DEBUG nova.virt.hardware [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.139 221554 DEBUG nova.virt.hardware [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.139 221554 DEBUG nova.virt.hardware [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.139 221554 DEBUG nova.virt.hardware [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
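The hardware.py lines above enumerate guest CPU topologies: with no flavor or image constraints (preferred 0:0:0, limits 65536 each), the candidates are every (sockets, cores, threads) triple whose product equals the vCPU count, so 1 vCPU yields only (1,1,1). A simplified sketch of that enumeration (the real `nova.virt.hardware._get_possible_cpu_topologies` has a different signature and ordering logic):

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Enumerate (sockets, cores, threads) triples whose product equals
    the vCPU count, within the given limits -- a simplified take on
    Nova's topology enumeration."""
    topos = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    topos.append((s, c, t))
    return topos
```

For the m1.nano flavor in this log (1 vCPU) the only candidate is `(1, 1, 1)`, matching the "Got 1 possible topologies" line; a 4-vCPU flavor would also produce split layouts such as `(2, 2, 1)`.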
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.141 221554 DEBUG oslo_concurrency.processutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:09.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:49:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3542332933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.532 221554 DEBUG oslo_concurrency.processutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.556 221554 DEBUG nova.storage.rbd_utils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 4a932d78-e0fe-4c23-a756-916144472a64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.559 221554 DEBUG oslo_concurrency.processutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.934 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:49:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/937353425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.981 221554 DEBUG oslo_concurrency.processutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.984 221554 DEBUG nova.virt.libvirt.vif [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:49:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1393865902',display_name='tempest-DeleteServersTestJSON-server-1393865902',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1393865902',id=67,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-4aq9tghe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJS
ON-2031597701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:49:05Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=4a932d78-e0fe-4c23-a756-916144472a64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.984 221554 DEBUG nova.network.os_vif_util [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.985 221554 DEBUG nova.network.os_vif_util [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:49:09 np0005603609 nova_compute[221550]: 2026-01-31 07:49:09.986 221554 DEBUG nova.objects.instance [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a932d78-e0fe-4c23-a756-916144472a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.004 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  <uuid>4a932d78-e0fe-4c23-a756-916144472a64</uuid>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  <name>instance-00000043</name>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <nova:name>tempest-DeleteServersTestJSON-server-1393865902</nova:name>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:49:09</nova:creationTime>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        <nova:user uuid="97abab8eb79247cd89fb2ebff295b890">tempest-DeleteServersTestJSON-2031597701-project-member</nova:user>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        <nova:project uuid="f299640bb1f64e5fa12b23955e5a2127">tempest-DeleteServersTestJSON-2031597701</nova:project>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        <nova:port uuid="b1f53ff3-6ad6-4599-8b83-463239ecd8c8">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <entry name="serial">4a932d78-e0fe-4c23-a756-916144472a64</entry>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <entry name="uuid">4a932d78-e0fe-4c23-a756-916144472a64</entry>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/4a932d78-e0fe-4c23-a756-916144472a64_disk">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/4a932d78-e0fe-4c23-a756-916144472a64_disk.config">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:e2:12:46"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <target dev="tapb1f53ff3-6a"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64/console.log" append="off"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:49:10 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:49:10 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:49:10 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:49:10 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.007 221554 DEBUG nova.compute.manager [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Preparing to wait for external event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.008 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.008 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.009 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.010 221554 DEBUG nova.virt.libvirt.vif [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:49:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1393865902',display_name='tempest-DeleteServersTestJSON-server-1393865902',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1393865902',id=67,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-4aq9tghe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteSer
versTestJSON-2031597701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:49:05Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=4a932d78-e0fe-4c23-a756-916144472a64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.011 221554 DEBUG nova.network.os_vif_util [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.012 221554 DEBUG nova.network.os_vif_util [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.013 221554 DEBUG os_vif [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.014 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.014 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.015 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.020 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.021 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1f53ff3-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.021 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb1f53ff3-6a, col_values=(('external_ids', {'iface-id': 'b1f53ff3-6ad6-4599-8b83-463239ecd8c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:12:46', 'vm-uuid': '4a932d78-e0fe-4c23-a756-916144472a64'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.023 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:10 np0005603609 NetworkManager[49064]: <info>  [1769845750.0249] manager: (tapb1f53ff3-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.025 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.032 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.035 221554 INFO os_vif [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a')#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.082 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.084 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.084 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No VIF found with MAC fa:16:3e:e2:12:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.085 221554 INFO nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Using config drive#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.112 221554 DEBUG nova.storage.rbd_utils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 4a932d78-e0fe-4c23-a756-916144472a64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.478 221554 INFO nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Creating config drive at /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64/disk.config#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.481 221554 DEBUG oslo_concurrency.processutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjafxfsp2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.602 221554 DEBUG oslo_concurrency.processutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjafxfsp2" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.633 221554 DEBUG nova.storage.rbd_utils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 4a932d78-e0fe-4c23-a756-916144472a64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.637 221554 DEBUG oslo_concurrency.processutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64/disk.config 4a932d78-e0fe-4c23-a756-916144472a64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:49:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:10.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.773 221554 DEBUG oslo_concurrency.processutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64/disk.config 4a932d78-e0fe-4c23-a756-916144472a64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.774 221554 INFO nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Deleting local config drive /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64/disk.config because it was imported into RBD.#033[00m
Jan 31 02:49:10 np0005603609 kernel: tapb1f53ff3-6a: entered promiscuous mode
Jan 31 02:49:10 np0005603609 NetworkManager[49064]: <info>  [1769845750.8219] manager: (tapb1f53ff3-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/100)
Jan 31 02:49:10 np0005603609 ovn_controller[130359]: 2026-01-31T07:49:10Z|00196|binding|INFO|Claiming lport b1f53ff3-6ad6-4599-8b83-463239ecd8c8 for this chassis.
Jan 31 02:49:10 np0005603609 ovn_controller[130359]: 2026-01-31T07:49:10Z|00197|binding|INFO|b1f53ff3-6ad6-4599-8b83-463239ecd8c8: Claiming fa:16:3e:e2:12:46 10.100.0.12
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.822 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.828 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:10 np0005603609 ovn_controller[130359]: 2026-01-31T07:49:10Z|00198|binding|INFO|Setting lport b1f53ff3-6ad6-4599-8b83-463239ecd8c8 ovn-installed in OVS
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.829 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:10 np0005603609 nova_compute[221550]: 2026-01-31 07:49:10.831 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:10 np0005603609 ovn_controller[130359]: 2026-01-31T07:49:10Z|00199|binding|INFO|Setting lport b1f53ff3-6ad6-4599-8b83-463239ecd8c8 up in Southbound
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.837 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:12:46 10.100.0.12'], port_security=['fa:16:3e:e2:12:46 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4a932d78-e0fe-4c23-a756-916144472a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60244e92-1524-47f0-8207-05d0104afa47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f055dcb-9af1-4cf7-aa74-c819da93756e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70b36ac0-62cb-4e29-924b-93c5ad906bc9, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=b1f53ff3-6ad6-4599-8b83-463239ecd8c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.839 140058 INFO neutron.agent.ovn.metadata.agent [-] Port b1f53ff3-6ad6-4599-8b83-463239ecd8c8 in datapath 60244e92-1524-47f0-8207-05d0104afa47 bound to our chassis#033[00m
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.840 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60244e92-1524-47f0-8207-05d0104afa47#033[00m
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.849 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[27e93cda-9abe-4a4f-b115-844c8f9d4052]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.850 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60244e92-11 in ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.852 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60244e92-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.852 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e975c7ed-4652-44bc-bc69-4679ef4b1860]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.853 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0586e2-953a-4e4b-8c5e-fc728f8dc4f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:10 np0005603609 systemd-udevd[246303]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.863 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[921c8c6c-9be6-4188-bcbf-ce345fdc88b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:10 np0005603609 systemd-machined[190912]: New machine qemu-31-instance-00000043.
Jan 31 02:49:10 np0005603609 NetworkManager[49064]: <info>  [1769845750.8681] device (tapb1f53ff3-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:49:10 np0005603609 NetworkManager[49064]: <info>  [1769845750.8686] device (tapb1f53ff3-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:49:10 np0005603609 systemd[1]: Started Virtual Machine qemu-31-instance-00000043.
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.875 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[207ba1f0-6e79-488f-ba31-ac9fb5547971]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.897 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb40430-7c70-41d9-b0e4-566ebdf2559b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.902 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[80809dba-e639-47c9-9b99-68a95b360cd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:10 np0005603609 NetworkManager[49064]: <info>  [1769845750.9036] manager: (tap60244e92-10): new Veth device (/org/freedesktop/NetworkManager/Devices/101)
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.926 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[00f87dc2-38cf-4283-ae98-1249d8023b2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.929 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[72240ee9-05b5-415d-832f-c43f7d2378a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:10 np0005603609 NetworkManager[49064]: <info>  [1769845750.9456] device (tap60244e92-10): carrier: link connected
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.949 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[7e00c9f5-6e3d-4784-9ee1-31f59dae1c4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.961 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d2b713-b2e1-4662-993e-5ecefb2e3f5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60244e92-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:59:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599064, 'reachable_time': 28814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246334, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.969 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[297a508e-e0f9-4843-aad1-f3c32345956a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:59f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599064, 'tstamp': 599064}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246335, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.981 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[09543522-0167-463c-92b7-0f3a62072b0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60244e92-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:59:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599064, 'reachable_time': 28814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 246336, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:10.998 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fbb141af-42e8-4bd6-bf3e-6a274158d18e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:11.048 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3baa62d8-e15c-444b-9a4c-d5898d0a1c1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:11.050 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60244e92-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:11.050 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:11.050 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60244e92-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:49:11 np0005603609 NetworkManager[49064]: <info>  [1769845751.0527] manager: (tap60244e92-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Jan 31 02:49:11 np0005603609 kernel: tap60244e92-10: entered promiscuous mode
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:11.056 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60244e92-10, col_values=(('external_ids', {'iface-id': '4e20d708-9f46-41f3-86c0-9ab3849f8392'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.057 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:11 np0005603609 ovn_controller[130359]: 2026-01-31T07:49:11Z|00200|binding|INFO|Releasing lport 4e20d708-9f46-41f3-86c0-9ab3849f8392 from this chassis (sb_readonly=0)
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:11.059 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:11.060 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[70167f02-ca32-40ce-91b7-b55dc4a1ca01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:11.062 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-60244e92-1524-47f0-8207-05d0104afa47
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 60244e92-1524-47f0-8207-05d0104afa47
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:49:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:11.063 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'env', 'PROCESS_TAG=haproxy-60244e92-1524-47f0-8207-05d0104afa47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60244e92-1524-47f0-8207-05d0104afa47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.173 221554 DEBUG nova.compute.manager [req-deaaef40-8267-4ce0-a5e0-af6a5918a39d req-eff70a72-057a-443d-9905-dea4c5e8cd0d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.173 221554 DEBUG oslo_concurrency.lockutils [req-deaaef40-8267-4ce0-a5e0-af6a5918a39d req-eff70a72-057a-443d-9905-dea4c5e8cd0d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.173 221554 DEBUG oslo_concurrency.lockutils [req-deaaef40-8267-4ce0-a5e0-af6a5918a39d req-eff70a72-057a-443d-9905-dea4c5e8cd0d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.174 221554 DEBUG oslo_concurrency.lockutils [req-deaaef40-8267-4ce0-a5e0-af6a5918a39d req-eff70a72-057a-443d-9905-dea4c5e8cd0d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.174 221554 DEBUG nova.compute.manager [req-deaaef40-8267-4ce0-a5e0-af6a5918a39d req-eff70a72-057a-443d-9905-dea4c5e8cd0d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Processing event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.183 221554 DEBUG nova.network.neutron [req-cd3967c8-0b74-42ce-8ad0-afa657a64d63 req-7d3b6da7-018a-4053-9379-9bf00807f130 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Updated VIF entry in instance network info cache for port b1f53ff3-6ad6-4599-8b83-463239ecd8c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.184 221554 DEBUG nova.network.neutron [req-cd3967c8-0b74-42ce-8ad0-afa657a64d63 req-7d3b6da7-018a-4053-9379-9bf00807f130 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Updating instance_info_cache with network_info: [{"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.200 221554 DEBUG oslo_concurrency.lockutils [req-cd3967c8-0b74-42ce-8ad0-afa657a64d63 req-7d3b6da7-018a-4053-9379-9bf00807f130 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:49:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:49:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:11.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:49:11 np0005603609 podman[246392]: 2026-01-31 07:49:11.522337888 +0000 UTC m=+0.099767489 container create 2ba7dda985dbc2ad18ff8ab80a5b73a532bb89372de4b8e447d21c155a1b5eb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:49:11 np0005603609 podman[246392]: 2026-01-31 07:49:11.44104739 +0000 UTC m=+0.018476951 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.545 221554 DEBUG nova.compute.manager [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.546 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845751.5449889, 4a932d78-e0fe-4c23-a756-916144472a64 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.547 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] VM Started (Lifecycle Event)#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.550 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.555 221554 INFO nova.virt.libvirt.driver [-] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Instance spawned successfully.#033[00m
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.555 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:11.556280) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751556347, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 423, "num_deletes": 252, "total_data_size": 446390, "memory_usage": 455032, "flush_reason": "Manual Compaction"}
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751567805, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 293730, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36370, "largest_seqno": 36788, "table_properties": {"data_size": 291266, "index_size": 564, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6848, "raw_average_key_size": 21, "raw_value_size": 286145, "raw_average_value_size": 877, "num_data_blocks": 24, "num_entries": 326, "num_filter_entries": 326, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845741, "oldest_key_time": 1769845741, "file_creation_time": 1769845751, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 11544 microseconds, and 1174 cpu microseconds.
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.578 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:11.567836) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 293730 bytes OK
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:11.567849) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:11.578753) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:11.578766) EVENT_LOG_v1 {"time_micros": 1769845751578762, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:11.578781) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 443681, prev total WAL file size 443681, number of live WAL files 2.
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:11.579155) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303038' seq:72057594037927935, type:22 .. '6D6772737461740031323539' seq:0, type:0; will stop at (end)
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(286KB)], [66(11MB)]
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751579213, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 12403973, "oldest_snapshot_seqno": -1}
Jan 31 02:49:11 np0005603609 systemd[1]: Started libpod-conmon-2ba7dda985dbc2ad18ff8ab80a5b73a532bb89372de4b8e447d21c155a1b5eb2.scope.
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.583 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.583 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.584 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.584 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.585 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.585 221554 DEBUG nova.virt.libvirt.driver [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.589 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:49:11 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:49:11 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e26bd2d58a05c57e9f75ce904cbc7735b43f9fbc80c6cadee8105c0f08cfe400/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.620 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.620 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845751.5461495, 4a932d78-e0fe-4c23-a756-916144472a64 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.620 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.642 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.645 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845751.5505176, 4a932d78-e0fe-4c23-a756-916144472a64 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.645 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.653 221554 INFO nova.compute.manager [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Took 6.39 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.653 221554 DEBUG nova.compute.manager [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.663 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.665 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 6136 keys, 8560567 bytes, temperature: kUnknown
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751686064, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 8560567, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8520394, "index_size": 23743, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 157749, "raw_average_key_size": 25, "raw_value_size": 8410932, "raw_average_value_size": 1370, "num_data_blocks": 946, "num_entries": 6136, "num_filter_entries": 6136, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769845751, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.701 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.714 221554 INFO nova.compute.manager [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Took 7.42 seconds to build instance.#033[00m
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:11.686271) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8560567 bytes
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:11.729303) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.0 rd, 80.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.5 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(71.4) write-amplify(29.1) OK, records in: 6656, records dropped: 520 output_compression: NoCompression
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:11.729344) EVENT_LOG_v1 {"time_micros": 1769845751729330, "job": 40, "event": "compaction_finished", "compaction_time_micros": 106914, "compaction_time_cpu_micros": 16045, "output_level": 6, "num_output_files": 1, "total_output_size": 8560567, "num_input_records": 6656, "num_output_records": 6136, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751729809, "job": 40, "event": "table_file_deletion", "file_number": 68}
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751731164, "job": 40, "event": "table_file_deletion", "file_number": 66}
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:11.579080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:11.731316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:11.731327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:11.731332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:11.731336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:49:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:49:11.731340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:49:11 np0005603609 podman[246392]: 2026-01-31 07:49:11.731638141 +0000 UTC m=+0.309067782 container init 2ba7dda985dbc2ad18ff8ab80a5b73a532bb89372de4b8e447d21c155a1b5eb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:49:11 np0005603609 podman[246392]: 2026-01-31 07:49:11.735861334 +0000 UTC m=+0.313290935 container start 2ba7dda985dbc2ad18ff8ab80a5b73a532bb89372de4b8e447d21c155a1b5eb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:49:11 np0005603609 nova_compute[221550]: 2026-01-31 07:49:11.736 221554 DEBUG oslo_concurrency.lockutils [None req-6add563f-8ab7-4f83-9dbd-8a944b22d707 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:11 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[246424]: [NOTICE]   (246428) : New worker (246430) forked
Jan 31 02:49:11 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[246424]: [NOTICE]   (246428) : Loading success.
Jan 31 02:49:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:12 np0005603609 nova_compute[221550]: 2026-01-31 07:49:12.503 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:49:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:12.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:49:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:13.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:13 np0005603609 nova_compute[221550]: 2026-01-31 07:49:13.855 221554 DEBUG nova.compute.manager [req-04b2da1f-df05-4a22-a2b3-0be923002576 req-88cd2601-4767-4d31-b009-ed37eef2ec57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:49:13 np0005603609 nova_compute[221550]: 2026-01-31 07:49:13.855 221554 DEBUG oslo_concurrency.lockutils [req-04b2da1f-df05-4a22-a2b3-0be923002576 req-88cd2601-4767-4d31-b009-ed37eef2ec57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:13 np0005603609 nova_compute[221550]: 2026-01-31 07:49:13.855 221554 DEBUG oslo_concurrency.lockutils [req-04b2da1f-df05-4a22-a2b3-0be923002576 req-88cd2601-4767-4d31-b009-ed37eef2ec57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:13 np0005603609 nova_compute[221550]: 2026-01-31 07:49:13.855 221554 DEBUG oslo_concurrency.lockutils [req-04b2da1f-df05-4a22-a2b3-0be923002576 req-88cd2601-4767-4d31-b009-ed37eef2ec57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:13 np0005603609 nova_compute[221550]: 2026-01-31 07:49:13.855 221554 DEBUG nova.compute.manager [req-04b2da1f-df05-4a22-a2b3-0be923002576 req-88cd2601-4767-4d31-b009-ed37eef2ec57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] No waiting events found dispatching network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:49:13 np0005603609 nova_compute[221550]: 2026-01-31 07:49:13.856 221554 WARNING nova.compute.manager [req-04b2da1f-df05-4a22-a2b3-0be923002576 req-88cd2601-4767-4d31-b009-ed37eef2ec57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received unexpected event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:49:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:14.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:15 np0005603609 nova_compute[221550]: 2026-01-31 07:49:15.026 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:15.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:16.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:17.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:17 np0005603609 nova_compute[221550]: 2026-01-31 07:49:17.505 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:49:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:18.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:49:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:19.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:20 np0005603609 nova_compute[221550]: 2026-01-31 07:49:20.030 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:49:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:20.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:49:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:49:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:21.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:49:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:22 np0005603609 nova_compute[221550]: 2026-01-31 07:49:22.507 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:49:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:22.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:49:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:23.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:23 np0005603609 ovn_controller[130359]: 2026-01-31T07:49:23Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:12:46 10.100.0.12
Jan 31 02:49:23 np0005603609 ovn_controller[130359]: 2026-01-31T07:49:23Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:12:46 10.100.0.12
Jan 31 02:49:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e246 e246: 3 total, 3 up, 3 in
Jan 31 02:49:24 np0005603609 nova_compute[221550]: 2026-01-31 07:49:24.131 221554 DEBUG oslo_concurrency.lockutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:49:24 np0005603609 nova_compute[221550]: 2026-01-31 07:49:24.131 221554 DEBUG oslo_concurrency.lockutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquired lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:49:24 np0005603609 nova_compute[221550]: 2026-01-31 07:49:24.132 221554 DEBUG nova.network.neutron [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:49:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:24.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:25 np0005603609 nova_compute[221550]: 2026-01-31 07:49:25.033 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:49:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:25.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:49:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:26.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:27 np0005603609 nova_compute[221550]: 2026-01-31 07:49:27.293 221554 DEBUG nova.network.neutron [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Updating instance_info_cache with network_info: [{"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:49:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:49:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:27.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:49:27 np0005603609 nova_compute[221550]: 2026-01-31 07:49:27.510 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:27 np0005603609 nova_compute[221550]: 2026-01-31 07:49:27.688 221554 DEBUG oslo_concurrency.lockutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Releasing lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:49:28 np0005603609 nova_compute[221550]: 2026-01-31 07:49:28.356 221554 DEBUG nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Jan 31 02:49:28 np0005603609 nova_compute[221550]: 2026-01-31 07:49:28.356 221554 DEBUG nova.virt.libvirt.volume.remotefs [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Creating file /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64/795dad7731854e628dc3020c757e202b.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Jan 31 02:49:28 np0005603609 nova_compute[221550]: 2026-01-31 07:49:28.357 221554 DEBUG oslo_concurrency.processutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64/795dad7731854e628dc3020c757e202b.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:28.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:28 np0005603609 nova_compute[221550]: 2026-01-31 07:49:28.800 221554 DEBUG oslo_concurrency.processutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64/795dad7731854e628dc3020c757e202b.tmp" returned: 1 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:28 np0005603609 nova_compute[221550]: 2026-01-31 07:49:28.801 221554 DEBUG oslo_concurrency.processutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64/795dad7731854e628dc3020c757e202b.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 02:49:28 np0005603609 nova_compute[221550]: 2026-01-31 07:49:28.802 221554 DEBUG nova.virt.libvirt.volume.remotefs [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Creating directory /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Jan 31 02:49:28 np0005603609 nova_compute[221550]: 2026-01-31 07:49:28.802 221554 DEBUG oslo_concurrency.processutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:29 np0005603609 nova_compute[221550]: 2026-01-31 07:49:29.013 221554 DEBUG oslo_concurrency.processutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/4a932d78-e0fe-4c23-a756-916144472a64" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:29 np0005603609 nova_compute[221550]: 2026-01-31 07:49:29.019 221554 DEBUG nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 02:49:29 np0005603609 podman[246442]: 2026-01-31 07:49:29.196773671 +0000 UTC m=+0.075245222 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 02:49:29 np0005603609 podman[246441]: 2026-01-31 07:49:29.213005656 +0000 UTC m=+0.097479023 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 02:49:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:49:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:29.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:49:30 np0005603609 nova_compute[221550]: 2026-01-31 07:49:30.037 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:49:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:30.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:49:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:49:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:31.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:49:31 np0005603609 kernel: tapb1f53ff3-6a (unregistering): left promiscuous mode
Jan 31 02:49:31 np0005603609 nova_compute[221550]: 2026-01-31 07:49:31.491 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:31 np0005603609 NetworkManager[49064]: <info>  [1769845771.4935] device (tapb1f53ff3-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:49:31 np0005603609 nova_compute[221550]: 2026-01-31 07:49:31.504 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:31 np0005603609 ovn_controller[130359]: 2026-01-31T07:49:31Z|00201|binding|INFO|Releasing lport b1f53ff3-6ad6-4599-8b83-463239ecd8c8 from this chassis (sb_readonly=0)
Jan 31 02:49:31 np0005603609 ovn_controller[130359]: 2026-01-31T07:49:31Z|00202|binding|INFO|Setting lport b1f53ff3-6ad6-4599-8b83-463239ecd8c8 down in Southbound
Jan 31 02:49:31 np0005603609 ovn_controller[130359]: 2026-01-31T07:49:31Z|00203|binding|INFO|Removing iface tapb1f53ff3-6a ovn-installed in OVS
Jan 31 02:49:31 np0005603609 nova_compute[221550]: 2026-01-31 07:49:31.507 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:31 np0005603609 nova_compute[221550]: 2026-01-31 07:49:31.512 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:31 np0005603609 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000043.scope: Deactivated successfully.
Jan 31 02:49:31 np0005603609 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000043.scope: Consumed 13.298s CPU time.
Jan 31 02:49:31 np0005603609 systemd-machined[190912]: Machine qemu-31-instance-00000043 terminated.
Jan 31 02:49:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e247 e247: 3 total, 3 up, 3 in
Jan 31 02:49:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:31.636 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:12:46 10.100.0.12'], port_security=['fa:16:3e:e2:12:46 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4a932d78-e0fe-4c23-a756-916144472a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60244e92-1524-47f0-8207-05d0104afa47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f055dcb-9af1-4cf7-aa74-c819da93756e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70b36ac0-62cb-4e29-924b-93c5ad906bc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=b1f53ff3-6ad6-4599-8b83-463239ecd8c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:49:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:31.638 140058 INFO neutron.agent.ovn.metadata.agent [-] Port b1f53ff3-6ad6-4599-8b83-463239ecd8c8 in datapath 60244e92-1524-47f0-8207-05d0104afa47 unbound from our chassis#033[00m
Jan 31 02:49:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:31.640 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60244e92-1524-47f0-8207-05d0104afa47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:49:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:31.641 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[06a502ef-3013-48ed-97c2-3af65e13137b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:31.642 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 namespace which is not needed anymore#033[00m
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.039 221554 INFO nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.046 221554 INFO nova.virt.libvirt.driver [-] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Instance destroyed successfully.#033[00m
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.048 221554 DEBUG nova.virt.libvirt.vif [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:49:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1393865902',display_name='tempest-DeleteServersTestJSON-server-1393865902',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1393865902',id=67,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:49:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-4aq9tghe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:49:19Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=4a932d78-e0fe-4c23-a756-916144472a64,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-1527532153-network", "vif_mac": "fa:16:3e:e2:12:46"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.050 221554 DEBUG nova.network.os_vif_util [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-1527532153-network", "vif_mac": "fa:16:3e:e2:12:46"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.052 221554 DEBUG nova.network.os_vif_util [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.053 221554 DEBUG os_vif [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:49:32 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[246424]: [NOTICE]   (246428) : haproxy version is 2.8.14-c23fe91
Jan 31 02:49:32 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[246424]: [NOTICE]   (246428) : path to executable is /usr/sbin/haproxy
Jan 31 02:49:32 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[246424]: [WARNING]  (246428) : Exiting Master process...
Jan 31 02:49:32 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[246424]: [WARNING]  (246428) : Exiting Master process...
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.056 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.057 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1f53ff3-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:49:32 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[246424]: [ALERT]    (246428) : Current worker (246430) exited with code 143 (Terminated)
Jan 31 02:49:32 np0005603609 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[246424]: [WARNING]  (246428) : All workers exited. Exiting... (0)
Jan 31 02:49:32 np0005603609 systemd[1]: libpod-2ba7dda985dbc2ad18ff8ab80a5b73a532bb89372de4b8e447d21c155a1b5eb2.scope: Deactivated successfully.
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.059 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.063 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.065 221554 INFO os_vif [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a')#033[00m
Jan 31 02:49:32 np0005603609 podman[246507]: 2026-01-31 07:49:32.066795465 +0000 UTC m=+0.349320831 container died 2ba7dda985dbc2ad18ff8ab80a5b73a532bb89372de4b8e447d21c155a1b5eb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.071 221554 DEBUG nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.071 221554 DEBUG nova.virt.libvirt.driver [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.403 221554 DEBUG neutronclient.v2_0.client [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port b1f53ff3-6ad6-4599-8b83-463239ecd8c8 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.511 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:32.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.948 221554 DEBUG oslo_concurrency.lockutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.949 221554 DEBUG oslo_concurrency.lockutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:32 np0005603609 nova_compute[221550]: 2026-01-31 07:49:32.949 221554 DEBUG oslo_concurrency.lockutils [None req-0fff8fb4-a960-4a3c-96d6-e3ec2227d24a 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:32 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ba7dda985dbc2ad18ff8ab80a5b73a532bb89372de4b8e447d21c155a1b5eb2-userdata-shm.mount: Deactivated successfully.
Jan 31 02:49:32 np0005603609 systemd[1]: var-lib-containers-storage-overlay-e26bd2d58a05c57e9f75ce904cbc7735b43f9fbc80c6cadee8105c0f08cfe400-merged.mount: Deactivated successfully.
Jan 31 02:49:33 np0005603609 podman[246507]: 2026-01-31 07:49:33.004227768 +0000 UTC m=+1.286753144 container cleanup 2ba7dda985dbc2ad18ff8ab80a5b73a532bb89372de4b8e447d21c155a1b5eb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:49:33 np0005603609 systemd[1]: libpod-conmon-2ba7dda985dbc2ad18ff8ab80a5b73a532bb89372de4b8e447d21c155a1b5eb2.scope: Deactivated successfully.
Jan 31 02:49:33 np0005603609 podman[246548]: 2026-01-31 07:49:33.127634971 +0000 UTC m=+0.102791633 container remove 2ba7dda985dbc2ad18ff8ab80a5b73a532bb89372de4b8e447d21c155a1b5eb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 02:49:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:33.134 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8089c5-8a29-48a1-848a-2f1641f69c1e]: (4, ('Sat Jan 31 07:49:31 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 (2ba7dda985dbc2ad18ff8ab80a5b73a532bb89372de4b8e447d21c155a1b5eb2)\n2ba7dda985dbc2ad18ff8ab80a5b73a532bb89372de4b8e447d21c155a1b5eb2\nSat Jan 31 07:49:33 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 (2ba7dda985dbc2ad18ff8ab80a5b73a532bb89372de4b8e447d21c155a1b5eb2)\n2ba7dda985dbc2ad18ff8ab80a5b73a532bb89372de4b8e447d21c155a1b5eb2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:33.136 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ce6217-2de2-46c0-941c-ec4a8e44a298]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:33.138 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60244e92-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:49:33 np0005603609 nova_compute[221550]: 2026-01-31 07:49:33.141 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:33 np0005603609 kernel: tap60244e92-10: left promiscuous mode
Jan 31 02:49:33 np0005603609 nova_compute[221550]: 2026-01-31 07:49:33.155 221554 DEBUG nova.compute.manager [req-c8dc6299-8fc8-4b0b-b482-02fc0981b98b req-99cfee2c-61c2-41e4-853f-03325c1dd09a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received event network-vif-unplugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:49:33 np0005603609 nova_compute[221550]: 2026-01-31 07:49:33.156 221554 DEBUG oslo_concurrency.lockutils [req-c8dc6299-8fc8-4b0b-b482-02fc0981b98b req-99cfee2c-61c2-41e4-853f-03325c1dd09a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:33 np0005603609 nova_compute[221550]: 2026-01-31 07:49:33.156 221554 DEBUG oslo_concurrency.lockutils [req-c8dc6299-8fc8-4b0b-b482-02fc0981b98b req-99cfee2c-61c2-41e4-853f-03325c1dd09a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:33 np0005603609 nova_compute[221550]: 2026-01-31 07:49:33.156 221554 DEBUG oslo_concurrency.lockutils [req-c8dc6299-8fc8-4b0b-b482-02fc0981b98b req-99cfee2c-61c2-41e4-853f-03325c1dd09a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:33 np0005603609 nova_compute[221550]: 2026-01-31 07:49:33.156 221554 DEBUG nova.compute.manager [req-c8dc6299-8fc8-4b0b-b482-02fc0981b98b req-99cfee2c-61c2-41e4-853f-03325c1dd09a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] No waiting events found dispatching network-vif-unplugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:49:33 np0005603609 nova_compute[221550]: 2026-01-31 07:49:33.156 221554 WARNING nova.compute.manager [req-c8dc6299-8fc8-4b0b-b482-02fc0981b98b req-99cfee2c-61c2-41e4-853f-03325c1dd09a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received unexpected event network-vif-unplugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 02:49:33 np0005603609 nova_compute[221550]: 2026-01-31 07:49:33.157 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:33.157 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[381e8478-a427-4bce-957e-8629d49cc665]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:33.184 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d24e21ea-869a-4fe8-be9d-0c0d922ce517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:33.186 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a47e99c3-a046-4b66-ab71-0b6da8d0bb0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:33.199 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9672505e-412f-4de8-b93b-36fe6a2ec3ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599059, 'reachable_time': 19619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246564, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:33.202 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:49:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:33.202 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[7de664d6-7932-49f1-b719-fc5a1e149fe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:49:33 np0005603609 systemd[1]: run-netns-ovnmeta\x2d60244e92\x2d1524\x2d47f0\x2d8207\x2d05d0104afa47.mount: Deactivated successfully.
Jan 31 02:49:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:33.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:34.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:49:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:35.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:49:35 np0005603609 nova_compute[221550]: 2026-01-31 07:49:35.558 221554 DEBUG nova.compute.manager [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:49:35 np0005603609 nova_compute[221550]: 2026-01-31 07:49:35.558 221554 DEBUG oslo_concurrency.lockutils [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:35 np0005603609 nova_compute[221550]: 2026-01-31 07:49:35.558 221554 DEBUG oslo_concurrency.lockutils [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:35 np0005603609 nova_compute[221550]: 2026-01-31 07:49:35.558 221554 DEBUG oslo_concurrency.lockutils [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:35 np0005603609 nova_compute[221550]: 2026-01-31 07:49:35.558 221554 DEBUG nova.compute.manager [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] No waiting events found dispatching network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:49:35 np0005603609 nova_compute[221550]: 2026-01-31 07:49:35.559 221554 WARNING nova.compute.manager [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received unexpected event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 02:49:35 np0005603609 nova_compute[221550]: 2026-01-31 07:49:35.559 221554 DEBUG nova.compute.manager [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received event network-changed-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:49:35 np0005603609 nova_compute[221550]: 2026-01-31 07:49:35.559 221554 DEBUG nova.compute.manager [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Refreshing instance network info cache due to event network-changed-b1f53ff3-6ad6-4599-8b83-463239ecd8c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:49:35 np0005603609 nova_compute[221550]: 2026-01-31 07:49:35.559 221554 DEBUG oslo_concurrency.lockutils [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:49:35 np0005603609 nova_compute[221550]: 2026-01-31 07:49:35.559 221554 DEBUG oslo_concurrency.lockutils [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:49:35 np0005603609 nova_compute[221550]: 2026-01-31 07:49:35.559 221554 DEBUG nova.network.neutron [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Refreshing network info cache for port b1f53ff3-6ad6-4599-8b83-463239ecd8c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:49:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:36.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:37 np0005603609 nova_compute[221550]: 2026-01-31 07:49:37.062 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:37.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:37 np0005603609 nova_compute[221550]: 2026-01-31 07:49:37.514 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:38 np0005603609 nova_compute[221550]: 2026-01-31 07:49:38.637 221554 DEBUG nova.network.neutron [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Updated VIF entry in instance network info cache for port b1f53ff3-6ad6-4599-8b83-463239ecd8c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:49:38 np0005603609 nova_compute[221550]: 2026-01-31 07:49:38.638 221554 DEBUG nova.network.neutron [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Updating instance_info_cache with network_info: [{"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:49:38 np0005603609 nova_compute[221550]: 2026-01-31 07:49:38.654 221554 DEBUG oslo_concurrency.lockutils [req-b1c3ef87-be34-4c2f-afb5-d033dbebad7d req-431d0c50-cf46-4cab-a474-9526278376b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:49:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:38.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:38 np0005603609 nova_compute[221550]: 2026-01-31 07:49:38.744 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:38.744 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:49:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:38.745 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:49:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e248 e248: 3 total, 3 up, 3 in
Jan 31 02:49:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:39.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:40 np0005603609 nova_compute[221550]: 2026-01-31 07:49:40.574 221554 DEBUG nova.compute.manager [req-0a492273-9d9d-462e-926d-aedf6b39756e req-2d5db2d1-a7d0-4819-b47f-7053a3dfb86b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:49:40 np0005603609 nova_compute[221550]: 2026-01-31 07:49:40.574 221554 DEBUG oslo_concurrency.lockutils [req-0a492273-9d9d-462e-926d-aedf6b39756e req-2d5db2d1-a7d0-4819-b47f-7053a3dfb86b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:40 np0005603609 nova_compute[221550]: 2026-01-31 07:49:40.575 221554 DEBUG oslo_concurrency.lockutils [req-0a492273-9d9d-462e-926d-aedf6b39756e req-2d5db2d1-a7d0-4819-b47f-7053a3dfb86b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:40 np0005603609 nova_compute[221550]: 2026-01-31 07:49:40.575 221554 DEBUG oslo_concurrency.lockutils [req-0a492273-9d9d-462e-926d-aedf6b39756e req-2d5db2d1-a7d0-4819-b47f-7053a3dfb86b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:40 np0005603609 nova_compute[221550]: 2026-01-31 07:49:40.575 221554 DEBUG nova.compute.manager [req-0a492273-9d9d-462e-926d-aedf6b39756e req-2d5db2d1-a7d0-4819-b47f-7053a3dfb86b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] No waiting events found dispatching network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:49:40 np0005603609 nova_compute[221550]: 2026-01-31 07:49:40.575 221554 WARNING nova.compute.manager [req-0a492273-9d9d-462e-926d-aedf6b39756e req-2d5db2d1-a7d0-4819-b47f-7053a3dfb86b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received unexpected event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 for instance with vm_state active and task_state resize_finish.#033[00m
Jan 31 02:49:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:49:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:40.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:49:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:41.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:49:41.746 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:49:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:42 np0005603609 nova_compute[221550]: 2026-01-31 07:49:42.065 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:42 np0005603609 nova_compute[221550]: 2026-01-31 07:49:42.114 221554 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:42 np0005603609 nova_compute[221550]: 2026-01-31 07:49:42.115 221554 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:42 np0005603609 nova_compute[221550]: 2026-01-31 07:49:42.115 221554 DEBUG nova.compute.manager [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Going to confirm migration 11 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Jan 31 02:49:42 np0005603609 nova_compute[221550]: 2026-01-31 07:49:42.515 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:42.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:42 np0005603609 nova_compute[221550]: 2026-01-31 07:49:42.774 221554 DEBUG nova.compute.manager [req-aba1c92c-1ba1-4e6b-a11e-84b5766c40ba req-7530c002-b45e-4c62-8808-e8d412d4cbe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:49:42 np0005603609 nova_compute[221550]: 2026-01-31 07:49:42.775 221554 DEBUG oslo_concurrency.lockutils [req-aba1c92c-1ba1-4e6b-a11e-84b5766c40ba req-7530c002-b45e-4c62-8808-e8d412d4cbe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4a932d78-e0fe-4c23-a756-916144472a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:42 np0005603609 nova_compute[221550]: 2026-01-31 07:49:42.775 221554 DEBUG oslo_concurrency.lockutils [req-aba1c92c-1ba1-4e6b-a11e-84b5766c40ba req-7530c002-b45e-4c62-8808-e8d412d4cbe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:42 np0005603609 nova_compute[221550]: 2026-01-31 07:49:42.775 221554 DEBUG oslo_concurrency.lockutils [req-aba1c92c-1ba1-4e6b-a11e-84b5766c40ba req-7530c002-b45e-4c62-8808-e8d412d4cbe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:42 np0005603609 nova_compute[221550]: 2026-01-31 07:49:42.775 221554 DEBUG nova.compute.manager [req-aba1c92c-1ba1-4e6b-a11e-84b5766c40ba req-7530c002-b45e-4c62-8808-e8d412d4cbe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] No waiting events found dispatching network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:49:42 np0005603609 nova_compute[221550]: 2026-01-31 07:49:42.776 221554 WARNING nova.compute.manager [req-aba1c92c-1ba1-4e6b-a11e-84b5766c40ba req-7530c002-b45e-4c62-8808-e8d412d4cbe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Received unexpected event network-vif-plugged-b1f53ff3-6ad6-4599-8b83-463239ecd8c8 for instance with vm_state resized and task_state deleting.#033[00m
Jan 31 02:49:42 np0005603609 nova_compute[221550]: 2026-01-31 07:49:42.795 221554 DEBUG neutronclient.v2_0.client [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port b1f53ff3-6ad6-4599-8b83-463239ecd8c8 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 31 02:49:42 np0005603609 nova_compute[221550]: 2026-01-31 07:49:42.796 221554 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:49:42 np0005603609 nova_compute[221550]: 2026-01-31 07:49:42.796 221554 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquired lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:49:42 np0005603609 nova_compute[221550]: 2026-01-31 07:49:42.796 221554 DEBUG nova.network.neutron [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:49:42 np0005603609 nova_compute[221550]: 2026-01-31 07:49:42.797 221554 DEBUG nova.objects.instance [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'info_cache' on Instance uuid 4a932d78-e0fe-4c23-a756-916144472a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:49:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:49:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:43.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:49:44 np0005603609 nova_compute[221550]: 2026-01-31 07:49:44.515 221554 DEBUG nova.network.neutron [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Updating instance_info_cache with network_info: [{"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:49:44 np0005603609 nova_compute[221550]: 2026-01-31 07:49:44.554 221554 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Releasing lock "refresh_cache-4a932d78-e0fe-4c23-a756-916144472a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:49:44 np0005603609 nova_compute[221550]: 2026-01-31 07:49:44.554 221554 DEBUG nova.objects.instance [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'migration_context' on Instance uuid 4a932d78-e0fe-4c23-a756-916144472a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:49:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:49:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:44.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:49:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:49:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:45.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:49:45 np0005603609 nova_compute[221550]: 2026-01-31 07:49:45.559 221554 DEBUG nova.storage.rbd_utils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] removing snapshot(nova-resize) on rbd image(4a932d78-e0fe-4c23-a756-916144472a64_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 02:49:46 np0005603609 nova_compute[221550]: 2026-01-31 07:49:46.723 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845771.722861, 4a932d78-e0fe-4c23-a756-916144472a64 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:49:46 np0005603609 nova_compute[221550]: 2026-01-31 07:49:46.724 221554 INFO nova.compute.manager [-] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:49:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:46.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:46 np0005603609 nova_compute[221550]: 2026-01-31 07:49:46.746 221554 DEBUG nova.compute.manager [None req-861b26b7-902b-4085-9321-7898562edf0e - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:49:46 np0005603609 nova_compute[221550]: 2026-01-31 07:49:46.751 221554 DEBUG nova.compute.manager [None req-861b26b7-902b-4085-9321-7898562edf0e - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:49:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e249 e249: 3 total, 3 up, 3 in
Jan 31 02:49:46 np0005603609 nova_compute[221550]: 2026-01-31 07:49:46.773 221554 INFO nova.compute.manager [None req-861b26b7-902b-4085-9321-7898562edf0e - - - - - -] [instance: 4a932d78-e0fe-4c23-a756-916144472a64] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Jan 31 02:49:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:47 np0005603609 nova_compute[221550]: 2026-01-31 07:49:47.068 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:47.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:47 np0005603609 nova_compute[221550]: 2026-01-31 07:49:47.519 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:47 np0005603609 nova_compute[221550]: 2026-01-31 07:49:47.741 221554 DEBUG nova.virt.libvirt.vif [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:49:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1393865902',display_name='tempest-DeleteServersTestJSON-server-1393865902',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1393865902',id=67,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:49:41Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-4aq9tghe',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:49:42Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=4a932d78-e0fe-4c23-a756-916144472a64,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:49:47 np0005603609 nova_compute[221550]: 2026-01-31 07:49:47.742 221554 DEBUG nova.network.os_vif_util [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "address": "fa:16:3e:e2:12:46", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1f53ff3-6a", "ovs_interfaceid": "b1f53ff3-6ad6-4599-8b83-463239ecd8c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:49:47 np0005603609 nova_compute[221550]: 2026-01-31 07:49:47.743 221554 DEBUG nova.network.os_vif_util [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:49:47 np0005603609 nova_compute[221550]: 2026-01-31 07:49:47.743 221554 DEBUG os_vif [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:49:47 np0005603609 nova_compute[221550]: 2026-01-31 07:49:47.745 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:47 np0005603609 nova_compute[221550]: 2026-01-31 07:49:47.746 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1f53ff3-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:49:47 np0005603609 nova_compute[221550]: 2026-01-31 07:49:47.746 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:49:47 np0005603609 nova_compute[221550]: 2026-01-31 07:49:47.748 221554 INFO os_vif [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:12:46,bridge_name='br-int',has_traffic_filtering=True,id=b1f53ff3-6ad6-4599-8b83-463239ecd8c8,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1f53ff3-6a')#033[00m
Jan 31 02:49:47 np0005603609 nova_compute[221550]: 2026-01-31 07:49:47.749 221554 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:47 np0005603609 nova_compute[221550]: 2026-01-31 07:49:47.749 221554 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:47 np0005603609 nova_compute[221550]: 2026-01-31 07:49:47.848 221554 DEBUG oslo_concurrency.processutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:49:48 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3597531801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:49:48 np0005603609 nova_compute[221550]: 2026-01-31 07:49:48.273 221554 DEBUG oslo_concurrency.processutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:48 np0005603609 nova_compute[221550]: 2026-01-31 07:49:48.278 221554 DEBUG nova.compute.provider_tree [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:49:48 np0005603609 nova_compute[221550]: 2026-01-31 07:49:48.296 221554 DEBUG nova.scheduler.client.report [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:49:48 np0005603609 nova_compute[221550]: 2026-01-31 07:49:48.360 221554 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:48 np0005603609 nova_compute[221550]: 2026-01-31 07:49:48.471 221554 INFO nova.scheduler.client.report [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Deleted allocation for migration af6e1651-e4b2-4410-bf6e-1d810c0a085c#033[00m
Jan 31 02:49:48 np0005603609 nova_compute[221550]: 2026-01-31 07:49:48.565 221554 DEBUG oslo_concurrency.lockutils [None req-5e575c9b-54c6-44bd-8847-dec387587204 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4a932d78-e0fe-4c23-a756-916144472a64" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 6.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:48.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:49.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:50.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:51.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 e250: 3 total, 3 up, 3 in
Jan 31 02:49:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:52 np0005603609 nova_compute[221550]: 2026-01-31 07:49:52.071 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:52 np0005603609 nova_compute[221550]: 2026-01-31 07:49:52.521 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:49:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:52.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:49:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:49:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:53.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:49:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:54.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:49:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1330305516' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:49:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:49:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1330305516' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:49:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:49:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:55.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:49:56 np0005603609 nova_compute[221550]: 2026-01-31 07:49:56.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:56.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:57 np0005603609 nova_compute[221550]: 2026-01-31 07:49:57.074 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:57.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:57 np0005603609 nova_compute[221550]: 2026-01-31 07:49:57.524 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:49:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:58.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:49:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:49:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:59.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:00 np0005603609 podman[246626]: 2026-01-31 07:50:00.173677313 +0000 UTC m=+0.052516619 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 31 02:50:00 np0005603609 podman[246625]: 2026-01-31 07:50:00.195257958 +0000 UTC m=+0.074452102 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:50:00 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 02:50:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:50:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:00.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:50:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:01.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:02 np0005603609 nova_compute[221550]: 2026-01-31 07:50:02.078 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:02 np0005603609 nova_compute[221550]: 2026-01-31 07:50:02.528 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:02.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:03 np0005603609 nova_compute[221550]: 2026-01-31 07:50:03.158 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:03.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:03 np0005603609 nova_compute[221550]: 2026-01-31 07:50:03.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:03 np0005603609 nova_compute[221550]: 2026-01-31 07:50:03.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:04 np0005603609 nova_compute[221550]: 2026-01-31 07:50:04.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:04.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:05.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:05 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:50:05 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:50:05 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:50:05 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:50:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:06.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:07 np0005603609 nova_compute[221550]: 2026-01-31 07:50:07.080 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:07.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:07.484 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:50:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:07.484 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:50:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:07.484 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:50:07 np0005603609 nova_compute[221550]: 2026-01-31 07:50:07.567 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:07 np0005603609 nova_compute[221550]: 2026-01-31 07:50:07.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:07 np0005603609 nova_compute[221550]: 2026-01-31 07:50:07.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:07 np0005603609 nova_compute[221550]: 2026-01-31 07:50:07.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:50:07 np0005603609 nova_compute[221550]: 2026-01-31 07:50:07.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:50:07 np0005603609 nova_compute[221550]: 2026-01-31 07:50:07.681 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:50:07 np0005603609 nova_compute[221550]: 2026-01-31 07:50:07.681 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:07 np0005603609 nova_compute[221550]: 2026-01-31 07:50:07.682 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:50:08 np0005603609 nova_compute[221550]: 2026-01-31 07:50:08.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:08 np0005603609 nova_compute[221550]: 2026-01-31 07:50:08.679 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:50:08 np0005603609 nova_compute[221550]: 2026-01-31 07:50:08.680 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:50:08 np0005603609 nova_compute[221550]: 2026-01-31 07:50:08.680 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:50:08 np0005603609 nova_compute[221550]: 2026-01-31 07:50:08.680 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:50:08 np0005603609 nova_compute[221550]: 2026-01-31 07:50:08.681 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:50:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:08.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:50:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2475019247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:50:09 np0005603609 nova_compute[221550]: 2026-01-31 07:50:09.135 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:50:09 np0005603609 nova_compute[221550]: 2026-01-31 07:50:09.256 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:50:09 np0005603609 nova_compute[221550]: 2026-01-31 07:50:09.257 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4720MB free_disk=20.942955017089844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:50:09 np0005603609 nova_compute[221550]: 2026-01-31 07:50:09.258 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:50:09 np0005603609 nova_compute[221550]: 2026-01-31 07:50:09.258 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:50:09 np0005603609 nova_compute[221550]: 2026-01-31 07:50:09.341 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:50:09 np0005603609 nova_compute[221550]: 2026-01-31 07:50:09.341 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:50:09 np0005603609 nova_compute[221550]: 2026-01-31 07:50:09.365 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:50:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:09.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:50:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3085610044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:50:09 np0005603609 nova_compute[221550]: 2026-01-31 07:50:09.775 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:50:09 np0005603609 nova_compute[221550]: 2026-01-31 07:50:09.780 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:50:09 np0005603609 nova_compute[221550]: 2026-01-31 07:50:09.794 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:50:09 np0005603609 nova_compute[221550]: 2026-01-31 07:50:09.814 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 02:50:09 np0005603609 nova_compute[221550]: 2026-01-31 07:50:09.814 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:50:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:50:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:10.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:50:10 np0005603609 nova_compute[221550]: 2026-01-31 07:50:10.815 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:50:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:50:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:50:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:50:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:11.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:50:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:12 np0005603609 nova_compute[221550]: 2026-01-31 07:50:12.084 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:50:12 np0005603609 nova_compute[221550]: 2026-01-31 07:50:12.571 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:50:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:12.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:13.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:14.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:50:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:15.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:50:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:16.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:17 np0005603609 nova_compute[221550]: 2026-01-31 07:50:17.089 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:50:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:50:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:17.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:50:17 np0005603609 nova_compute[221550]: 2026-01-31 07:50:17.619 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:50:17 np0005603609 nova_compute[221550]: 2026-01-31 07:50:17.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:50:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:18.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:50:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:19.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:50:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:20.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:21.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:22 np0005603609 nova_compute[221550]: 2026-01-31 07:50:22.092 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:50:22 np0005603609 nova_compute[221550]: 2026-01-31 07:50:22.621 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:50:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:22.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:23.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:24 np0005603609 nova_compute[221550]: 2026-01-31 07:50:24.394 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:50:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:24.395 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 02:50:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:24.396 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 02:50:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:24.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:50:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:25.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:50:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:26.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:27 np0005603609 nova_compute[221550]: 2026-01-31 07:50:27.097 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:50:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:50:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:27.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:50:27 np0005603609 nova_compute[221550]: 2026-01-31 07:50:27.655 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:50:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:28.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:29.399 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:50:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:50:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:29.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:50:30 np0005603609 nova_compute[221550]: 2026-01-31 07:50:30.261 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:50:30 np0005603609 nova_compute[221550]: 2026-01-31 07:50:30.261 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:50:30 np0005603609 nova_compute[221550]: 2026-01-31 07:50:30.296 221554 DEBUG nova.compute.manager [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 02:50:30 np0005603609 nova_compute[221550]: 2026-01-31 07:50:30.434 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:50:30 np0005603609 nova_compute[221550]: 2026-01-31 07:50:30.435 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:50:30 np0005603609 nova_compute[221550]: 2026-01-31 07:50:30.442 221554 DEBUG nova.virt.hardware [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 02:50:30 np0005603609 nova_compute[221550]: 2026-01-31 07:50:30.442 221554 INFO nova.compute.claims [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Claim successful on node compute-1.ctlplane.example.com
Jan 31 02:50:30 np0005603609 nova_compute[221550]: 2026-01-31 07:50:30.577 221554 DEBUG oslo_concurrency.processutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:50:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:30.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:50:30 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4063991903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:50:30 np0005603609 nova_compute[221550]: 2026-01-31 07:50:30.963 221554 DEBUG oslo_concurrency.processutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:50:30 np0005603609 nova_compute[221550]: 2026-01-31 07:50:30.971 221554 DEBUG nova.compute.provider_tree [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.090 221554 DEBUG nova.scheduler.client.report [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.168 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.168 221554 DEBUG nova.compute.manager [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 02:50:31 np0005603609 podman[246922]: 2026-01-31 07:50:31.187774157 +0000 UTC m=+0.060853861 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 02:50:31 np0005603609 podman[246921]: 2026-01-31 07:50:31.212751435 +0000 UTC m=+0.088854794 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.255 221554 DEBUG nova.compute.manager [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.255 221554 DEBUG nova.network.neutron [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.304 221554 INFO nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.370 221554 DEBUG nova.compute.manager [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 02:50:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:31.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.526 221554 DEBUG nova.compute.manager [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.528 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.528 221554 INFO nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Creating image(s)
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.563 221554 DEBUG nova.storage.rbd_utils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.601 221554 DEBUG nova.storage.rbd_utils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.628 221554 DEBUG nova.storage.rbd_utils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.632 221554 DEBUG oslo_concurrency.processutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.657 221554 DEBUG nova.policy [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1e1c5eef3d264666bc90735dd338d82a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a8cbd6cc22654dfab04487522a63426c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.696 221554 DEBUG oslo_concurrency.processutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.697 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.698 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.698 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.725 221554 DEBUG nova.storage.rbd_utils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.728 221554 DEBUG oslo_concurrency.processutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:50:31 np0005603609 nova_compute[221550]: 2026-01-31 07:50:31.976 221554 DEBUG oslo_concurrency.processutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:50:32 np0005603609 nova_compute[221550]: 2026-01-31 07:50:32.043 221554 DEBUG nova.storage.rbd_utils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] resizing rbd image 537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:50:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:32 np0005603609 nova_compute[221550]: 2026-01-31 07:50:32.159 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:32 np0005603609 nova_compute[221550]: 2026-01-31 07:50:32.166 221554 DEBUG nova.objects.instance [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'migration_context' on Instance uuid 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:50:32 np0005603609 nova_compute[221550]: 2026-01-31 07:50:32.216 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:50:32 np0005603609 nova_compute[221550]: 2026-01-31 07:50:32.217 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Ensure instance console log exists: /var/lib/nova/instances/537adcb0-1fba-4e29-9bb3-b33ba7e3e523/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:50:32 np0005603609 nova_compute[221550]: 2026-01-31 07:50:32.218 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:50:32 np0005603609 nova_compute[221550]: 2026-01-31 07:50:32.218 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:50:32 np0005603609 nova_compute[221550]: 2026-01-31 07:50:32.218 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:50:32 np0005603609 nova_compute[221550]: 2026-01-31 07:50:32.656 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:32.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:33 np0005603609 nova_compute[221550]: 2026-01-31 07:50:33.071 221554 DEBUG nova.network.neutron [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Successfully created port: 80de277a-9f3a-4b8a-a272-23c1948aaa56 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:50:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:50:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:33.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:50:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:34.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:50:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:35.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:50:36 np0005603609 nova_compute[221550]: 2026-01-31 07:50:36.044 221554 DEBUG nova.network.neutron [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Successfully updated port: 80de277a-9f3a-4b8a-a272-23c1948aaa56 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:50:36 np0005603609 nova_compute[221550]: 2026-01-31 07:50:36.124 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:50:36 np0005603609 nova_compute[221550]: 2026-01-31 07:50:36.125 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquired lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:50:36 np0005603609 nova_compute[221550]: 2026-01-31 07:50:36.125 221554 DEBUG nova.network.neutron [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:50:36 np0005603609 nova_compute[221550]: 2026-01-31 07:50:36.210 221554 DEBUG nova.compute.manager [req-444f85f3-1022-40ee-8eb5-ec9566d8f884 req-b5bca603-f5dc-4d2b-8719-439feb228e6d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received event network-changed-80de277a-9f3a-4b8a-a272-23c1948aaa56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:50:36 np0005603609 nova_compute[221550]: 2026-01-31 07:50:36.211 221554 DEBUG nova.compute.manager [req-444f85f3-1022-40ee-8eb5-ec9566d8f884 req-b5bca603-f5dc-4d2b-8719-439feb228e6d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Refreshing instance network info cache due to event network-changed-80de277a-9f3a-4b8a-a272-23c1948aaa56. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:50:36 np0005603609 nova_compute[221550]: 2026-01-31 07:50:36.211 221554 DEBUG oslo_concurrency.lockutils [req-444f85f3-1022-40ee-8eb5-ec9566d8f884 req-b5bca603-f5dc-4d2b-8719-439feb228e6d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:50:36 np0005603609 nova_compute[221550]: 2026-01-31 07:50:36.313 221554 DEBUG nova.network.neutron [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:50:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:36.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.162 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.224 221554 DEBUG nova.network.neutron [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Updating instance_info_cache with network_info: [{"id": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "address": "fa:16:3e:69:b6:7b", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80de277a-9f", "ovs_interfaceid": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.280 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Releasing lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.280 221554 DEBUG nova.compute.manager [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Instance network_info: |[{"id": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "address": "fa:16:3e:69:b6:7b", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80de277a-9f", "ovs_interfaceid": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.281 221554 DEBUG oslo_concurrency.lockutils [req-444f85f3-1022-40ee-8eb5-ec9566d8f884 req-b5bca603-f5dc-4d2b-8719-439feb228e6d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.281 221554 DEBUG nova.network.neutron [req-444f85f3-1022-40ee-8eb5-ec9566d8f884 req-b5bca603-f5dc-4d2b-8719-439feb228e6d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Refreshing network info cache for port 80de277a-9f3a-4b8a-a272-23c1948aaa56 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.285 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Start _get_guest_xml network_info=[{"id": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "address": "fa:16:3e:69:b6:7b", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80de277a-9f", "ovs_interfaceid": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.289 221554 WARNING nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.294 221554 DEBUG nova.virt.libvirt.host [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.295 221554 DEBUG nova.virt.libvirt.host [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.298 221554 DEBUG nova.virt.libvirt.host [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.298 221554 DEBUG nova.virt.libvirt.host [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.299 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.300 221554 DEBUG nova.virt.hardware [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.300 221554 DEBUG nova.virt.hardware [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.301 221554 DEBUG nova.virt.hardware [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.301 221554 DEBUG nova.virt.hardware [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.301 221554 DEBUG nova.virt.hardware [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.301 221554 DEBUG nova.virt.hardware [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.302 221554 DEBUG nova.virt.hardware [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.302 221554 DEBUG nova.virt.hardware [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.302 221554 DEBUG nova.virt.hardware [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.302 221554 DEBUG nova.virt.hardware [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.303 221554 DEBUG nova.virt.hardware [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.306 221554 DEBUG oslo_concurrency.processutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:50:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:37.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.658 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:50:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2732920842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.700 221554 DEBUG oslo_concurrency.processutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.721 221554 DEBUG nova.storage.rbd_utils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:50:37 np0005603609 nova_compute[221550]: 2026-01-31 07:50:37.725 221554 DEBUG oslo_concurrency.processutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:50:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:50:38 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2809114543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.126 221554 DEBUG oslo_concurrency.processutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.127 221554 DEBUG nova.virt.libvirt.vif [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:50:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1077414172',display_name='tempest-AttachInterfacesTestJSON-server-1077414172',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1077414172',id=70,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAcDMT21wqfH8JLIQ1A1rchIYyXu+jv2taZ98rEzLlbMI8L3f1ONZJQxWBx/yKSKBA7FUnPq2XK4gO+skhzO2F8qgCgP1dHat5l6TbqZGCAz/thGBq96oAIJMHIEagTjug==',key_name='tempest-keypair-807659227',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-g7pm4i0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:50:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=537adcb0-1fba-4e29-9bb3-b33ba7e3e523,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "address": "fa:16:3e:69:b6:7b", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80de277a-9f", "ovs_interfaceid": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.128 221554 DEBUG nova.network.os_vif_util [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "address": "fa:16:3e:69:b6:7b", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80de277a-9f", "ovs_interfaceid": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.128 221554 DEBUG nova.network.os_vif_util [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:b6:7b,bridge_name='br-int',has_traffic_filtering=True,id=80de277a-9f3a-4b8a-a272-23c1948aaa56,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80de277a-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.129 221554 DEBUG nova.objects.instance [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'pci_devices' on Instance uuid 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.149 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  <uuid>537adcb0-1fba-4e29-9bb3-b33ba7e3e523</uuid>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  <name>instance-00000046</name>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1077414172</nova:name>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:50:37</nova:creationTime>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        <nova:port uuid="80de277a-9f3a-4b8a-a272-23c1948aaa56">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <entry name="serial">537adcb0-1fba-4e29-9bb3-b33ba7e3e523</entry>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <entry name="uuid">537adcb0-1fba-4e29-9bb3-b33ba7e3e523</entry>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk.config">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:69:b6:7b"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <target dev="tap80de277a-9f"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/537adcb0-1fba-4e29-9bb3-b33ba7e3e523/console.log" append="off"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:50:38 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:50:38 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:50:38 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:50:38 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.150 221554 DEBUG nova.compute.manager [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Preparing to wait for external event network-vif-plugged-80de277a-9f3a-4b8a-a272-23c1948aaa56 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.150 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.150 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.150 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.151 221554 DEBUG nova.virt.libvirt.vif [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:50:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1077414172',display_name='tempest-AttachInterfacesTestJSON-server-1077414172',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1077414172',id=70,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAcDMT21wqfH8JLIQ1A1rchIYyXu+jv2taZ98rEzLlbMI8L3f1ONZJQxWBx/yKSKBA7FUnPq2XK4gO+skhzO2F8qgCgP1dHat5l6TbqZGCAz/thGBq96oAIJMHIEagTjug==',key_name='tempest-keypair-807659227',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-g7pm4i0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:50:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=537adcb0-1fba-4e29-9bb3-b33ba7e3e523,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "address": "fa:16:3e:69:b6:7b", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80de277a-9f", "ovs_interfaceid": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.151 221554 DEBUG nova.network.os_vif_util [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "address": "fa:16:3e:69:b6:7b", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80de277a-9f", "ovs_interfaceid": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.152 221554 DEBUG nova.network.os_vif_util [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:b6:7b,bridge_name='br-int',has_traffic_filtering=True,id=80de277a-9f3a-4b8a-a272-23c1948aaa56,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80de277a-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.152 221554 DEBUG os_vif [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:b6:7b,bridge_name='br-int',has_traffic_filtering=True,id=80de277a-9f3a-4b8a-a272-23c1948aaa56,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80de277a-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.153 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.153 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.154 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.158 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.158 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80de277a-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.159 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap80de277a-9f, col_values=(('external_ids', {'iface-id': '80de277a-9f3a-4b8a-a272-23c1948aaa56', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:b6:7b', 'vm-uuid': '537adcb0-1fba-4e29-9bb3-b33ba7e3e523'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.160 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:38 np0005603609 NetworkManager[49064]: <info>  [1769845838.1620] manager: (tap80de277a-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.163 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.167 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.168 221554 INFO os_vif [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:b6:7b,bridge_name='br-int',has_traffic_filtering=True,id=80de277a-9f3a-4b8a-a272-23c1948aaa56,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80de277a-9f')#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.280 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.280 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.281 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:69:b6:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.281 221554 INFO nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Using config drive#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.299 221554 DEBUG nova.storage.rbd_utils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:50:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:38.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.986 221554 INFO nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Creating config drive at /var/lib/nova/instances/537adcb0-1fba-4e29-9bb3-b33ba7e3e523/disk.config#033[00m
Jan 31 02:50:38 np0005603609 nova_compute[221550]: 2026-01-31 07:50:38.990 221554 DEBUG oslo_concurrency.processutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/537adcb0-1fba-4e29-9bb3-b33ba7e3e523/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpunu0by9q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:50:39 np0005603609 nova_compute[221550]: 2026-01-31 07:50:39.109 221554 DEBUG oslo_concurrency.processutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/537adcb0-1fba-4e29-9bb3-b33ba7e3e523/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpunu0by9q" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:50:39 np0005603609 nova_compute[221550]: 2026-01-31 07:50:39.131 221554 DEBUG nova.storage.rbd_utils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:50:39 np0005603609 nova_compute[221550]: 2026-01-31 07:50:39.134 221554 DEBUG oslo_concurrency.processutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/537adcb0-1fba-4e29-9bb3-b33ba7e3e523/disk.config 537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:50:39 np0005603609 nova_compute[221550]: 2026-01-31 07:50:39.161 221554 DEBUG nova.network.neutron [req-444f85f3-1022-40ee-8eb5-ec9566d8f884 req-b5bca603-f5dc-4d2b-8719-439feb228e6d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Updated VIF entry in instance network info cache for port 80de277a-9f3a-4b8a-a272-23c1948aaa56. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:50:39 np0005603609 nova_compute[221550]: 2026-01-31 07:50:39.161 221554 DEBUG nova.network.neutron [req-444f85f3-1022-40ee-8eb5-ec9566d8f884 req-b5bca603-f5dc-4d2b-8719-439feb228e6d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Updating instance_info_cache with network_info: [{"id": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "address": "fa:16:3e:69:b6:7b", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80de277a-9f", "ovs_interfaceid": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:50:39 np0005603609 nova_compute[221550]: 2026-01-31 07:50:39.192 221554 DEBUG oslo_concurrency.lockutils [req-444f85f3-1022-40ee-8eb5-ec9566d8f884 req-b5bca603-f5dc-4d2b-8719-439feb228e6d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:50:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:50:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:39.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:50:39 np0005603609 nova_compute[221550]: 2026-01-31 07:50:39.756 221554 DEBUG oslo_concurrency.processutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/537adcb0-1fba-4e29-9bb3-b33ba7e3e523/disk.config 537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:50:39 np0005603609 nova_compute[221550]: 2026-01-31 07:50:39.757 221554 INFO nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Deleting local config drive /var/lib/nova/instances/537adcb0-1fba-4e29-9bb3-b33ba7e3e523/disk.config because it was imported into RBD.#033[00m
Jan 31 02:50:39 np0005603609 kernel: tap80de277a-9f: entered promiscuous mode
Jan 31 02:50:39 np0005603609 NetworkManager[49064]: <info>  [1769845839.8086] manager: (tap80de277a-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Jan 31 02:50:39 np0005603609 ovn_controller[130359]: 2026-01-31T07:50:39Z|00204|binding|INFO|Claiming lport 80de277a-9f3a-4b8a-a272-23c1948aaa56 for this chassis.
Jan 31 02:50:39 np0005603609 ovn_controller[130359]: 2026-01-31T07:50:39Z|00205|binding|INFO|80de277a-9f3a-4b8a-a272-23c1948aaa56: Claiming fa:16:3e:69:b6:7b 10.100.0.14
Jan 31 02:50:39 np0005603609 nova_compute[221550]: 2026-01-31 07:50:39.847 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:39 np0005603609 systemd-udevd[247264]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:50:39 np0005603609 nova_compute[221550]: 2026-01-31 07:50:39.851 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:39 np0005603609 nova_compute[221550]: 2026-01-31 07:50:39.854 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:39 np0005603609 NetworkManager[49064]: <info>  [1769845839.8618] device (tap80de277a-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:50:39 np0005603609 NetworkManager[49064]: <info>  [1769845839.8626] device (tap80de277a-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.867 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:b6:7b 10.100.0.14'], port_security=['fa:16:3e:69:b6:7b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '537adcb0-1fba-4e29-9bb3-b33ba7e3e523', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5324d833-550d-4984-912a-15a1917a1b00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=80de277a-9f3a-4b8a-a272-23c1948aaa56) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.868 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 80de277a-9f3a-4b8a-a272-23c1948aaa56 in datapath 485494d9-5360-41c3-a10e-ef5098af0809 bound to our chassis#033[00m
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.870 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809#033[00m
Jan 31 02:50:39 np0005603609 ovn_controller[130359]: 2026-01-31T07:50:39Z|00206|binding|INFO|Setting lport 80de277a-9f3a-4b8a-a272-23c1948aaa56 ovn-installed in OVS
Jan 31 02:50:39 np0005603609 ovn_controller[130359]: 2026-01-31T07:50:39Z|00207|binding|INFO|Setting lport 80de277a-9f3a-4b8a-a272-23c1948aaa56 up in Southbound
Jan 31 02:50:39 np0005603609 nova_compute[221550]: 2026-01-31 07:50:39.874 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:39 np0005603609 systemd-machined[190912]: New machine qemu-32-instance-00000046.
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.877 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[290e58de-e3e6-4149-89ef-f17963717dd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.878 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap485494d9-51 in ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.880 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap485494d9-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.880 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff27878-44c6-4443-90ef-b2c0c5971769]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.881 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[05e2fa2d-fdf4-4e3b-a817-0eda35939727]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.889 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[0cbf9d11-2e50-43af-b4ef-f1e878aa4fe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:50:39 np0005603609 systemd[1]: Started Virtual Machine qemu-32-instance-00000046.
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.900 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[11014057-97ee-4cbe-abf3-2707f61fc9da]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.919 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[aee4948a-7745-418d-a031-556e6bf29ea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:50:39 np0005603609 systemd-udevd[247268]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:50:39 np0005603609 NetworkManager[49064]: <info>  [1769845839.9245] manager: (tap485494d9-50): new Veth device (/org/freedesktop/NetworkManager/Devices/105)
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.923 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d63b7e49-9499-488b-8d7c-03d1b277d6d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.948 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[bf938d45-b03b-4fb4-a8a9-2fbb450ce7fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.951 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[747cfd11-afb1-4310-8870-d14340154dfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:50:39 np0005603609 NetworkManager[49064]: <info>  [1769845839.9672] device (tap485494d9-50): carrier: link connected
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.971 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[88da502d-2e45-423e-8711-3ec98ce2eae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.983 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e82fbfde-351a-4177-98fb-1cf20ec82501]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607966, 'reachable_time': 27803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247300, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:50:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:39.995 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1a8cfae0-00b3-4317-a224-a66351a31f22]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:4b05'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607966, 'tstamp': 607966}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247301, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:40.008 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[978bd762-5549-4db8-8628-485bfe143e0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607966, 'reachable_time': 27803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247302, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:40.027 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[492393f0-b7cb-435e-b847-d378c9ae057a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:40.056 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe8e844-01e2-42ed-a18f-b8abfd25e18f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:40.057 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:40.057 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:40.058 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:50:40 np0005603609 nova_compute[221550]: 2026-01-31 07:50:40.059 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:40 np0005603609 kernel: tap485494d9-50: entered promiscuous mode
Jan 31 02:50:40 np0005603609 NetworkManager[49064]: <info>  [1769845840.0602] manager: (tap485494d9-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Jan 31 02:50:40 np0005603609 nova_compute[221550]: 2026-01-31 07:50:40.061 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:40.062 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:50:40 np0005603609 nova_compute[221550]: 2026-01-31 07:50:40.063 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:40 np0005603609 ovn_controller[130359]: 2026-01-31T07:50:40Z|00208|binding|INFO|Releasing lport cbb039b2-15df-45bd-8800-81213ecc7011 from this chassis (sb_readonly=0)
Jan 31 02:50:40 np0005603609 nova_compute[221550]: 2026-01-31 07:50:40.063 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:40.064 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/485494d9-5360-41c3-a10e-ef5098af0809.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/485494d9-5360-41c3-a10e-ef5098af0809.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:40.065 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[595930ee-6bb7-44f9-a5b7-62784c8933a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:40.066 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-485494d9-5360-41c3-a10e-ef5098af0809
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/485494d9-5360-41c3-a10e-ef5098af0809.pid.haproxy
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 485494d9-5360-41c3-a10e-ef5098af0809
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:50:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:50:40.066 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'env', 'PROCESS_TAG=haproxy-485494d9-5360-41c3-a10e-ef5098af0809', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/485494d9-5360-41c3-a10e-ef5098af0809.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:50:40 np0005603609 nova_compute[221550]: 2026-01-31 07:50:40.068 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:40 np0005603609 nova_compute[221550]: 2026-01-31 07:50:40.359 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845840.3590903, 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:50:40 np0005603609 nova_compute[221550]: 2026-01-31 07:50:40.360 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] VM Started (Lifecycle Event)#033[00m
Jan 31 02:50:40 np0005603609 nova_compute[221550]: 2026-01-31 07:50:40.414 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:50:40 np0005603609 nova_compute[221550]: 2026-01-31 07:50:40.417 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845840.3593695, 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:50:40 np0005603609 nova_compute[221550]: 2026-01-31 07:50:40.418 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:50:40 np0005603609 nova_compute[221550]: 2026-01-31 07:50:40.435 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:50:40 np0005603609 nova_compute[221550]: 2026-01-31 07:50:40.440 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:50:40 np0005603609 podman[247376]: 2026-01-31 07:50:40.442427361 +0000 UTC m=+0.077202653 container create d4d284cdd515c0613f4493bb0ef68c3d357a08e1b2bfcab98146e94499205571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 02:50:40 np0005603609 nova_compute[221550]: 2026-01-31 07:50:40.467 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:50:40 np0005603609 systemd[1]: Started libpod-conmon-d4d284cdd515c0613f4493bb0ef68c3d357a08e1b2bfcab98146e94499205571.scope.
Jan 31 02:50:40 np0005603609 podman[247376]: 2026-01-31 07:50:40.384932064 +0000 UTC m=+0.019707376 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:50:40 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:50:40 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1963041eef7e9150ad6b95885d364d24cf0e73f5c85a39491b5adb1750382cb1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:50:40 np0005603609 podman[247376]: 2026-01-31 07:50:40.510124603 +0000 UTC m=+0.144899925 container init d4d284cdd515c0613f4493bb0ef68c3d357a08e1b2bfcab98146e94499205571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:50:40 np0005603609 podman[247376]: 2026-01-31 07:50:40.515016751 +0000 UTC m=+0.149792043 container start d4d284cdd515c0613f4493bb0ef68c3d357a08e1b2bfcab98146e94499205571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 02:50:40 np0005603609 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[247391]: [NOTICE]   (247395) : New worker (247397) forked
Jan 31 02:50:40 np0005603609 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[247391]: [NOTICE]   (247395) : Loading success.
Jan 31 02:50:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:50:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:40.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:50:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:41.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.199 221554 DEBUG nova.compute.manager [req-623e984e-42ee-4f68-8e07-69454cd8d760 req-da734665-a774-4608-a9b4-0d156695b63f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received event network-vif-plugged-80de277a-9f3a-4b8a-a272-23c1948aaa56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.199 221554 DEBUG oslo_concurrency.lockutils [req-623e984e-42ee-4f68-8e07-69454cd8d760 req-da734665-a774-4608-a9b4-0d156695b63f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.200 221554 DEBUG oslo_concurrency.lockutils [req-623e984e-42ee-4f68-8e07-69454cd8d760 req-da734665-a774-4608-a9b4-0d156695b63f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.200 221554 DEBUG oslo_concurrency.lockutils [req-623e984e-42ee-4f68-8e07-69454cd8d760 req-da734665-a774-4608-a9b4-0d156695b63f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.200 221554 DEBUG nova.compute.manager [req-623e984e-42ee-4f68-8e07-69454cd8d760 req-da734665-a774-4608-a9b4-0d156695b63f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Processing event network-vif-plugged-80de277a-9f3a-4b8a-a272-23c1948aaa56 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.201 221554 DEBUG nova.compute.manager [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.204 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845842.2047205, 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.205 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.206 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.211 221554 INFO nova.virt.libvirt.driver [-] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Instance spawned successfully.#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.212 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.230 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.236 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.239 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.239 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.240 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.240 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.241 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.241 221554 DEBUG nova.virt.libvirt.driver [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.265 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.294 221554 INFO nova.compute.manager [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Took 10.77 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.295 221554 DEBUG nova.compute.manager [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.370 221554 INFO nova.compute.manager [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Took 11.99 seconds to build instance.#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.387 221554 DEBUG oslo_concurrency.lockutils [None req-26ae2f52-2fa8-4170-a1ac-b449b45ca897 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:50:42 np0005603609 nova_compute[221550]: 2026-01-31 07:50:42.660 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:42.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:43 np0005603609 nova_compute[221550]: 2026-01-31 07:50:43.196 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:43.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:43 np0005603609 nova_compute[221550]: 2026-01-31 07:50:43.939 221554 DEBUG nova.compute.manager [req-58c787f3-7492-4771-b075-e34eacd039e4 req-1ac3164a-d2d2-4487-99d5-7bade9d21e44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received event network-vif-plugged-80de277a-9f3a-4b8a-a272-23c1948aaa56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:50:43 np0005603609 nova_compute[221550]: 2026-01-31 07:50:43.940 221554 DEBUG oslo_concurrency.lockutils [req-58c787f3-7492-4771-b075-e34eacd039e4 req-1ac3164a-d2d2-4487-99d5-7bade9d21e44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:50:43 np0005603609 nova_compute[221550]: 2026-01-31 07:50:43.940 221554 DEBUG oslo_concurrency.lockutils [req-58c787f3-7492-4771-b075-e34eacd039e4 req-1ac3164a-d2d2-4487-99d5-7bade9d21e44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:50:43 np0005603609 nova_compute[221550]: 2026-01-31 07:50:43.940 221554 DEBUG oslo_concurrency.lockutils [req-58c787f3-7492-4771-b075-e34eacd039e4 req-1ac3164a-d2d2-4487-99d5-7bade9d21e44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:50:43 np0005603609 nova_compute[221550]: 2026-01-31 07:50:43.940 221554 DEBUG nova.compute.manager [req-58c787f3-7492-4771-b075-e34eacd039e4 req-1ac3164a-d2d2-4487-99d5-7bade9d21e44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] No waiting events found dispatching network-vif-plugged-80de277a-9f3a-4b8a-a272-23c1948aaa56 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:50:43 np0005603609 nova_compute[221550]: 2026-01-31 07:50:43.940 221554 WARNING nova.compute.manager [req-58c787f3-7492-4771-b075-e34eacd039e4 req-1ac3164a-d2d2-4487-99d5-7bade9d21e44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received unexpected event network-vif-plugged-80de277a-9f3a-4b8a-a272-23c1948aaa56 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:50:44 np0005603609 nova_compute[221550]: 2026-01-31 07:50:44.150 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:44 np0005603609 NetworkManager[49064]: <info>  [1769845844.1512] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Jan 31 02:50:44 np0005603609 NetworkManager[49064]: <info>  [1769845844.1523] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Jan 31 02:50:44 np0005603609 nova_compute[221550]: 2026-01-31 07:50:44.226 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:44 np0005603609 ovn_controller[130359]: 2026-01-31T07:50:44Z|00209|binding|INFO|Releasing lport cbb039b2-15df-45bd-8800-81213ecc7011 from this chassis (sb_readonly=0)
Jan 31 02:50:44 np0005603609 nova_compute[221550]: 2026-01-31 07:50:44.253 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:44.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:44 np0005603609 nova_compute[221550]: 2026-01-31 07:50:44.893 221554 DEBUG nova.compute.manager [req-4d82f2b2-e0b6-4ce0-a18f-548e4ab8ae24 req-45274573-39f8-4f61-944b-09da5b4cb3e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received event network-changed-80de277a-9f3a-4b8a-a272-23c1948aaa56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:50:44 np0005603609 nova_compute[221550]: 2026-01-31 07:50:44.893 221554 DEBUG nova.compute.manager [req-4d82f2b2-e0b6-4ce0-a18f-548e4ab8ae24 req-45274573-39f8-4f61-944b-09da5b4cb3e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Refreshing instance network info cache due to event network-changed-80de277a-9f3a-4b8a-a272-23c1948aaa56. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:50:44 np0005603609 nova_compute[221550]: 2026-01-31 07:50:44.894 221554 DEBUG oslo_concurrency.lockutils [req-4d82f2b2-e0b6-4ce0-a18f-548e4ab8ae24 req-45274573-39f8-4f61-944b-09da5b4cb3e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:50:44 np0005603609 nova_compute[221550]: 2026-01-31 07:50:44.894 221554 DEBUG oslo_concurrency.lockutils [req-4d82f2b2-e0b6-4ce0-a18f-548e4ab8ae24 req-45274573-39f8-4f61-944b-09da5b4cb3e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:50:44 np0005603609 nova_compute[221550]: 2026-01-31 07:50:44.895 221554 DEBUG nova.network.neutron [req-4d82f2b2-e0b6-4ce0-a18f-548e4ab8ae24 req-45274573-39f8-4f61-944b-09da5b4cb3e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Refreshing network info cache for port 80de277a-9f3a-4b8a-a272-23c1948aaa56 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:50:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:50:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:45.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:50:46 np0005603609 nova_compute[221550]: 2026-01-31 07:50:46.625 221554 DEBUG nova.network.neutron [req-4d82f2b2-e0b6-4ce0-a18f-548e4ab8ae24 req-45274573-39f8-4f61-944b-09da5b4cb3e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Updated VIF entry in instance network info cache for port 80de277a-9f3a-4b8a-a272-23c1948aaa56. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:50:46 np0005603609 nova_compute[221550]: 2026-01-31 07:50:46.625 221554 DEBUG nova.network.neutron [req-4d82f2b2-e0b6-4ce0-a18f-548e4ab8ae24 req-45274573-39f8-4f61-944b-09da5b4cb3e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Updating instance_info_cache with network_info: [{"id": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "address": "fa:16:3e:69:b6:7b", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80de277a-9f", "ovs_interfaceid": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:50:46 np0005603609 nova_compute[221550]: 2026-01-31 07:50:46.717 221554 DEBUG oslo_concurrency.lockutils [req-4d82f2b2-e0b6-4ce0-a18f-548e4ab8ae24 req-45274573-39f8-4f61-944b-09da5b4cb3e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:50:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:50:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:46.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:50:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:50:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:47.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:50:47 np0005603609 nova_compute[221550]: 2026-01-31 07:50:47.663 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:48 np0005603609 nova_compute[221550]: 2026-01-31 07:50:48.198 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:50:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:48.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:50:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:50:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:49.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:50:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:50:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:50.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:50:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:50:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:51.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:50:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:52 np0005603609 nova_compute[221550]: 2026-01-31 07:50:52.664 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:52.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:53 np0005603609 nova_compute[221550]: 2026-01-31 07:50:53.200 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:53.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:50:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:54.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:50:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:55.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:56 np0005603609 nova_compute[221550]: 2026-01-31 07:50:56.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:56.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:50:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:57.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:50:57 np0005603609 nova_compute[221550]: 2026-01-31 07:50:57.666 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:58 np0005603609 nova_compute[221550]: 2026-01-31 07:50:58.202 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:58.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:50:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:59.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:00 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 02:51:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:00.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:01.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:02 np0005603609 podman[247410]: 2026-01-31 07:51:02.179060988 +0000 UTC m=+0.055208012 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 02:51:02 np0005603609 podman[247409]: 2026-01-31 07:51:02.227633849 +0000 UTC m=+0.103595239 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 02:51:02 np0005603609 nova_compute[221550]: 2026-01-31 07:51:02.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:02 np0005603609 nova_compute[221550]: 2026-01-31 07:51:02.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 02:51:02 np0005603609 nova_compute[221550]: 2026-01-31 07:51:02.717 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:02.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:03 np0005603609 nova_compute[221550]: 2026-01-31 07:51:03.203 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:03.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).paxos(paxos updating c 3013..3655) lease_timeout -- calling new election
Jan 31 02:51:03 np0005603609 ceph-mon[81667]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 31 02:51:03 np0005603609 ceph-mon[81667]: paxos.2).electionLogic(32) init, last seen epoch 32
Jan 31 02:51:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:51:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:51:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:51:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:04.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:05 np0005603609 ceph-mon[81667]: mon.compute-0 calling monitor election
Jan 31 02:51:05 np0005603609 ceph-mon[81667]: mon.compute-1 calling monitor election
Jan 31 02:51:05 np0005603609 ceph-mon[81667]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 02:51:05 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 02:51:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:05.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:06 np0005603609 nova_compute[221550]: 2026-01-31 07:51:06.221 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:06 np0005603609 nova_compute[221550]: 2026-01-31 07:51:06.221 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:06 np0005603609 nova_compute[221550]: 2026-01-31 07:51:06.222 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:06 np0005603609 nova_compute[221550]: 2026-01-31 07:51:06.222 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:06 np0005603609 nova_compute[221550]: 2026-01-31 07:51:06.222 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 02:51:06 np0005603609 nova_compute[221550]: 2026-01-31 07:51:06.287 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 02:51:06 np0005603609 ovn_controller[130359]: 2026-01-31T07:51:06Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:b6:7b 10.100.0.14
Jan 31 02:51:06 np0005603609 ovn_controller[130359]: 2026-01-31T07:51:06Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:b6:7b 10.100.0.14
Jan 31 02:51:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:06.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:07.414 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:51:07 np0005603609 nova_compute[221550]: 2026-01-31 07:51:07.415 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:07.418 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:51:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:51:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:07.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:51:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:07.485 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:07.486 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:51:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:07.486 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:51:07 np0005603609 nova_compute[221550]: 2026-01-31 07:51:07.720 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:08 np0005603609 nova_compute[221550]: 2026-01-31 07:51:08.209 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:08 np0005603609 nova_compute[221550]: 2026-01-31 07:51:08.721 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:08 np0005603609 nova_compute[221550]: 2026-01-31 07:51:08.721 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:08 np0005603609 nova_compute[221550]: 2026-01-31 07:51:08.722 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:51:08 np0005603609 nova_compute[221550]: 2026-01-31 07:51:08.722 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:51:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:51:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:08.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:51:09 np0005603609 ovn_controller[130359]: 2026-01-31T07:51:09Z|00210|binding|INFO|Releasing lport cbb039b2-15df-45bd-8800-81213ecc7011 from this chassis (sb_readonly=0)
Jan 31 02:51:09 np0005603609 nova_compute[221550]: 2026-01-31 07:51:09.324 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:09.420 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:51:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:09.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:09 np0005603609 nova_compute[221550]: 2026-01-31 07:51:09.633 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:51:09 np0005603609 nova_compute[221550]: 2026-01-31 07:51:09.633 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:51:09 np0005603609 nova_compute[221550]: 2026-01-31 07:51:09.633 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:51:09 np0005603609 nova_compute[221550]: 2026-01-31 07:51:09.634 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:51:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:10.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:11.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:12 np0005603609 nova_compute[221550]: 2026-01-31 07:51:12.462 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Updating instance_info_cache with network_info: [{"id": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "address": "fa:16:3e:69:b6:7b", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80de277a-9f", "ovs_interfaceid": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:51:12 np0005603609 nova_compute[221550]: 2026-01-31 07:51:12.628 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:51:12 np0005603609 nova_compute[221550]: 2026-01-31 07:51:12.628 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:51:12 np0005603609 nova_compute[221550]: 2026-01-31 07:51:12.629 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:12 np0005603609 nova_compute[221550]: 2026-01-31 07:51:12.630 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:12 np0005603609 nova_compute[221550]: 2026-01-31 07:51:12.630 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:51:12 np0005603609 nova_compute[221550]: 2026-01-31 07:51:12.630 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:12 np0005603609 nova_compute[221550]: 2026-01-31 07:51:12.723 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:12.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:12 np0005603609 nova_compute[221550]: 2026-01-31 07:51:12.921 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:12 np0005603609 nova_compute[221550]: 2026-01-31 07:51:12.922 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:51:12 np0005603609 nova_compute[221550]: 2026-01-31 07:51:12.923 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:51:12 np0005603609 nova_compute[221550]: 2026-01-31 07:51:12.923 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:51:12 np0005603609 nova_compute[221550]: 2026-01-31 07:51:12.923 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:51:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:51:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:51:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:51:13 np0005603609 nova_compute[221550]: 2026-01-31 07:51:13.211 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:51:13 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3780460574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:51:13 np0005603609 nova_compute[221550]: 2026-01-31 07:51:13.339 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:51:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:51:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:13.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:51:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:14.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:15 np0005603609 nova_compute[221550]: 2026-01-31 07:51:15.361 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:51:15 np0005603609 nova_compute[221550]: 2026-01-31 07:51:15.362 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:51:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:51:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:15.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:51:15 np0005603609 nova_compute[221550]: 2026-01-31 07:51:15.533 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:51:15 np0005603609 nova_compute[221550]: 2026-01-31 07:51:15.534 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4505MB free_disk=20.921932220458984GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:51:15 np0005603609 nova_compute[221550]: 2026-01-31 07:51:15.535 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:15 np0005603609 nova_compute[221550]: 2026-01-31 07:51:15.535 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:51:16 np0005603609 nova_compute[221550]: 2026-01-31 07:51:16.024 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:51:16 np0005603609 nova_compute[221550]: 2026-01-31 07:51:16.024 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:51:16 np0005603609 nova_compute[221550]: 2026-01-31 07:51:16.024 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:51:16 np0005603609 nova_compute[221550]: 2026-01-31 07:51:16.143 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:51:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:51:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2099143682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:51:16 np0005603609 nova_compute[221550]: 2026-01-31 07:51:16.587 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:51:16 np0005603609 nova_compute[221550]: 2026-01-31 07:51:16.592 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:51:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:16.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:17 np0005603609 nova_compute[221550]: 2026-01-31 07:51:17.159 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:51:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:17.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:17 np0005603609 nova_compute[221550]: 2026-01-31 07:51:17.726 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:17 np0005603609 nova_compute[221550]: 2026-01-31 07:51:17.941 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:51:17 np0005603609 nova_compute[221550]: 2026-01-31 07:51:17.942 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.407s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:51:18 np0005603609 nova_compute[221550]: 2026-01-31 07:51:18.213 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:51:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:18.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:51:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:19.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:20 np0005603609 nova_compute[221550]: 2026-01-31 07:51:20.511 221554 DEBUG oslo_concurrency.lockutils [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "interface-537adcb0-1fba-4e29-9bb3-b33ba7e3e523-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:20 np0005603609 nova_compute[221550]: 2026-01-31 07:51:20.512 221554 DEBUG oslo_concurrency.lockutils [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-537adcb0-1fba-4e29-9bb3-b33ba7e3e523-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:51:20 np0005603609 nova_compute[221550]: 2026-01-31 07:51:20.512 221554 DEBUG nova.objects.instance [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'flavor' on Instance uuid 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:51:20 np0005603609 nova_compute[221550]: 2026-01-31 07:51:20.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:20.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:51:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:21.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:51:21 np0005603609 nova_compute[221550]: 2026-01-31 07:51:21.854 221554 DEBUG nova.objects.instance [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'pci_requests' on Instance uuid 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:51:21 np0005603609 nova_compute[221550]: 2026-01-31 07:51:21.908 221554 DEBUG nova.network.neutron [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:51:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:22 np0005603609 nova_compute[221550]: 2026-01-31 07:51:22.299 221554 DEBUG nova.policy [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1e1c5eef3d264666bc90735dd338d82a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a8cbd6cc22654dfab04487522a63426c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:51:22 np0005603609 nova_compute[221550]: 2026-01-31 07:51:22.727 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:22.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:23 np0005603609 nova_compute[221550]: 2026-01-31 07:51:23.214 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:51:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:51:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:51:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:23.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:51:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:24.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:51:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:25.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:51:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:51:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:26.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:51:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:51:27 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1851526475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:51:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:51:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:27.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:51:27 np0005603609 nova_compute[221550]: 2026-01-31 07:51:27.730 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:27 np0005603609 nova_compute[221550]: 2026-01-31 07:51:27.906 221554 DEBUG nova.network.neutron [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Successfully created port: 9ec6f03b-475c-4295-8014-f37bdda5832f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:51:28 np0005603609 nova_compute[221550]: 2026-01-31 07:51:28.216 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:51:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:28.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:51:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:51:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:29.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:51:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:51:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:30.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:51:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:31.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:32 np0005603609 nova_compute[221550]: 2026-01-31 07:51:32.732 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:51:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:32.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:51:33 np0005603609 podman[247683]: 2026-01-31 07:51:33.172618408 +0000 UTC m=+0.052277092 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 02:51:33 np0005603609 podman[247682]: 2026-01-31 07:51:33.19886549 +0000 UTC m=+0.079660842 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 02:51:33 np0005603609 nova_compute[221550]: 2026-01-31 07:51:33.217 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:33.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:34 np0005603609 nova_compute[221550]: 2026-01-31 07:51:34.754 221554 DEBUG nova.network.neutron [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Successfully updated port: 9ec6f03b-475c-4295-8014-f37bdda5832f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:51:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:34.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:35 np0005603609 nova_compute[221550]: 2026-01-31 07:51:35.388 221554 DEBUG oslo_concurrency.lockutils [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:51:35 np0005603609 nova_compute[221550]: 2026-01-31 07:51:35.389 221554 DEBUG oslo_concurrency.lockutils [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquired lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:51:35 np0005603609 nova_compute[221550]: 2026-01-31 07:51:35.389 221554 DEBUG nova.network.neutron [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:51:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:51:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:35.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:51:35 np0005603609 nova_compute[221550]: 2026-01-31 07:51:35.961 221554 DEBUG nova.compute.manager [req-48ba3986-e3a9-4069-a2ef-eeaf985d82f4 req-b66a8322-fd03-4e52-82e2-99a1a8a05dd9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received event network-changed-9ec6f03b-475c-4295-8014-f37bdda5832f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:51:35 np0005603609 nova_compute[221550]: 2026-01-31 07:51:35.961 221554 DEBUG nova.compute.manager [req-48ba3986-e3a9-4069-a2ef-eeaf985d82f4 req-b66a8322-fd03-4e52-82e2-99a1a8a05dd9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Refreshing instance network info cache due to event network-changed-9ec6f03b-475c-4295-8014-f37bdda5832f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:51:35 np0005603609 nova_compute[221550]: 2026-01-31 07:51:35.962 221554 DEBUG oslo_concurrency.lockutils [req-48ba3986-e3a9-4069-a2ef-eeaf985d82f4 req-b66a8322-fd03-4e52-82e2-99a1a8a05dd9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:51:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:51:36 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/132925161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:51:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:51:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:36.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:51:36 np0005603609 nova_compute[221550]: 2026-01-31 07:51:36.984 221554 WARNING nova.network.neutron [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] 485494d9-5360-41c3-a10e-ef5098af0809 already exists in list: networks containing: ['485494d9-5360-41c3-a10e-ef5098af0809']. ignoring it#033[00m
Jan 31 02:51:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:37.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:37 np0005603609 nova_compute[221550]: 2026-01-31 07:51:37.734 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:38 np0005603609 nova_compute[221550]: 2026-01-31 07:51:38.219 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:51:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:38.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:51:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:51:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:39.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.642 221554 DEBUG nova.network.neutron [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Updating instance_info_cache with network_info: [{"id": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "address": "fa:16:3e:69:b6:7b", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80de277a-9f", "ovs_interfaceid": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9ec6f03b-475c-4295-8014-f37bdda5832f", "address": "fa:16:3e:1a:f6:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec6f03b-47", "ovs_interfaceid": "9ec6f03b-475c-4295-8014-f37bdda5832f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:51:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:51:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:40.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.892 221554 DEBUG oslo_concurrency.lockutils [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Releasing lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.893 221554 DEBUG oslo_concurrency.lockutils [req-48ba3986-e3a9-4069-a2ef-eeaf985d82f4 req-b66a8322-fd03-4e52-82e2-99a1a8a05dd9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.894 221554 DEBUG nova.network.neutron [req-48ba3986-e3a9-4069-a2ef-eeaf985d82f4 req-b66a8322-fd03-4e52-82e2-99a1a8a05dd9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Refreshing network info cache for port 9ec6f03b-475c-4295-8014-f37bdda5832f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.897 221554 DEBUG nova.virt.libvirt.vif [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:50:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1077414172',display_name='tempest-AttachInterfacesTestJSON-server-1077414172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1077414172',id=70,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAcDMT21wqfH8JLIQ1A1rchIYyXu+jv2taZ98rEzLlbMI8L3f1ONZJQxWBx/yKSKBA7FUnPq2XK4gO+skhzO2F8qgCgP1dHat5l6TbqZGCAz/thGBq96oAIJMHIEagTjug==',key_name='tempest-keypair-807659227',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:50:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-g7pm4i0k',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:50:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=537adcb0-1fba-4e29-9bb3-b33ba7e3e523,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ec6f03b-475c-4295-8014-f37bdda5832f", "address": "fa:16:3e:1a:f6:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec6f03b-47", "ovs_interfaceid": "9ec6f03b-475c-4295-8014-f37bdda5832f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.898 221554 DEBUG nova.network.os_vif_util [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "9ec6f03b-475c-4295-8014-f37bdda5832f", "address": "fa:16:3e:1a:f6:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec6f03b-47", "ovs_interfaceid": "9ec6f03b-475c-4295-8014-f37bdda5832f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.898 221554 DEBUG nova.network.os_vif_util [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:f6:82,bridge_name='br-int',has_traffic_filtering=True,id=9ec6f03b-475c-4295-8014-f37bdda5832f,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec6f03b-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.899 221554 DEBUG os_vif [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:f6:82,bridge_name='br-int',has_traffic_filtering=True,id=9ec6f03b-475c-4295-8014-f37bdda5832f,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec6f03b-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.900 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.901 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.901 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.905 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.906 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ec6f03b-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.906 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9ec6f03b-47, col_values=(('external_ids', {'iface-id': '9ec6f03b-475c-4295-8014-f37bdda5832f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:f6:82', 'vm-uuid': '537adcb0-1fba-4e29-9bb3-b33ba7e3e523'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.909 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:40 np0005603609 NetworkManager[49064]: <info>  [1769845900.9098] manager: (tap9ec6f03b-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.912 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.914 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.915 221554 INFO os_vif [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:f6:82,bridge_name='br-int',has_traffic_filtering=True,id=9ec6f03b-475c-4295-8014-f37bdda5832f,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec6f03b-47')#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.916 221554 DEBUG nova.virt.libvirt.vif [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:50:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1077414172',display_name='tempest-AttachInterfacesTestJSON-server-1077414172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1077414172',id=70,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAcDMT21wqfH8JLIQ1A1rchIYyXu+jv2taZ98rEzLlbMI8L3f1ONZJQxWBx/yKSKBA7FUnPq2XK4gO+skhzO2F8qgCgP1dHat5l6TbqZGCAz/thGBq96oAIJMHIEagTjug==',key_name='tempest-keypair-807659227',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:50:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-g7pm4i0k',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:50:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=537adcb0-1fba-4e29-9bb3-b33ba7e3e523,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ec6f03b-475c-4295-8014-f37bdda5832f", "address": "fa:16:3e:1a:f6:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec6f03b-47", "ovs_interfaceid": "9ec6f03b-475c-4295-8014-f37bdda5832f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.917 221554 DEBUG nova.network.os_vif_util [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "9ec6f03b-475c-4295-8014-f37bdda5832f", "address": "fa:16:3e:1a:f6:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec6f03b-47", "ovs_interfaceid": "9ec6f03b-475c-4295-8014-f37bdda5832f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.917 221554 DEBUG nova.network.os_vif_util [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:f6:82,bridge_name='br-int',has_traffic_filtering=True,id=9ec6f03b-475c-4295-8014-f37bdda5832f,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec6f03b-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.920 221554 DEBUG nova.virt.libvirt.guest [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] attach device xml: <interface type="ethernet">
Jan 31 02:51:40 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:1a:f6:82"/>
Jan 31 02:51:40 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 02:51:40 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:51:40 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 02:51:40 np0005603609 nova_compute[221550]:  <target dev="tap9ec6f03b-47"/>
Jan 31 02:51:40 np0005603609 nova_compute[221550]: </interface>
Jan 31 02:51:40 np0005603609 nova_compute[221550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 02:51:40 np0005603609 kernel: tap9ec6f03b-47: entered promiscuous mode
Jan 31 02:51:40 np0005603609 NetworkManager[49064]: <info>  [1769845900.9325] manager: (tap9ec6f03b-47): new Tun device (/org/freedesktop/NetworkManager/Devices/110)
Jan 31 02:51:40 np0005603609 ovn_controller[130359]: 2026-01-31T07:51:40Z|00211|binding|INFO|Claiming lport 9ec6f03b-475c-4295-8014-f37bdda5832f for this chassis.
Jan 31 02:51:40 np0005603609 ovn_controller[130359]: 2026-01-31T07:51:40Z|00212|binding|INFO|9ec6f03b-475c-4295-8014-f37bdda5832f: Claiming fa:16:3e:1a:f6:82 10.100.0.13
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.935 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:40 np0005603609 ovn_controller[130359]: 2026-01-31T07:51:40Z|00213|binding|INFO|Setting lport 9ec6f03b-475c-4295-8014-f37bdda5832f ovn-installed in OVS
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.939 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.940 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:40 np0005603609 nova_compute[221550]: 2026-01-31 07:51:40.946 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:40 np0005603609 systemd-udevd[247729]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:51:40 np0005603609 NetworkManager[49064]: <info>  [1769845900.9757] device (tap9ec6f03b-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:51:40 np0005603609 NetworkManager[49064]: <info>  [1769845900.9764] device (tap9ec6f03b-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:51:40 np0005603609 ovn_controller[130359]: 2026-01-31T07:51:40Z|00214|binding|INFO|Setting lport 9ec6f03b-475c-4295-8014-f37bdda5832f up in Southbound
Jan 31 02:51:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:40.997 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:f6:82 10.100.0.13'], port_security=['fa:16:3e:1a:f6:82 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '537adcb0-1fba-4e29-9bb3-b33ba7e3e523', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7e4e426-0b76-41c4-8dcb-ea007c31db76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=9ec6f03b-475c-4295-8014-f37bdda5832f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:51:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:40.999 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 9ec6f03b-475c-4295-8014-f37bdda5832f in datapath 485494d9-5360-41c3-a10e-ef5098af0809 bound to our chassis#033[00m
Jan 31 02:51:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:41.000 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809#033[00m
Jan 31 02:51:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:41.010 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3b65e27c-a361-4df1-9f5d-e78f0feed839]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:41.031 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[84c15381-57ca-4f3c-bfd8-23ed70b92f49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:41.034 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[40c98e9e-4be7-4d90-9cee-f78bedefaca2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:41.051 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae540d8-aeb0-4ef0-b656-472affe0c017]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:41.064 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[55f8dd4a-7e94-4327-a477-38728d30b799]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607966, 'reachable_time': 42681, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247737, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:41.076 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3a0d1aee-7ff0-4466-af70-b2c6b7f993d1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607974, 'tstamp': 607974}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247738, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607975, 'tstamp': 607975}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247738, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:41.078 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:51:41 np0005603609 nova_compute[221550]: 2026-01-31 07:51:41.079 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:41 np0005603609 nova_compute[221550]: 2026-01-31 07:51:41.080 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:41.080 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:51:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:41.080 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:51:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:41.081 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:51:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:41.081 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:51:41 np0005603609 nova_compute[221550]: 2026-01-31 07:51:41.198 221554 DEBUG nova.virt.libvirt.driver [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:51:41 np0005603609 nova_compute[221550]: 2026-01-31 07:51:41.199 221554 DEBUG nova.virt.libvirt.driver [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:51:41 np0005603609 nova_compute[221550]: 2026-01-31 07:51:41.199 221554 DEBUG nova.virt.libvirt.driver [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:69:b6:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:51:41 np0005603609 nova_compute[221550]: 2026-01-31 07:51:41.199 221554 DEBUG nova.virt.libvirt.driver [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:1a:f6:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:51:41 np0005603609 nova_compute[221550]: 2026-01-31 07:51:41.408 221554 DEBUG nova.virt.libvirt.guest [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:51:41 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1077414172</nova:name>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 07:51:41</nova:creationTime>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 02:51:41 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:    <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:    <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:    <nova:port uuid="80de277a-9f3a-4b8a-a272-23c1948aaa56">
Jan 31 02:51:41 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:    <nova:port uuid="9ec6f03b-475c-4295-8014-f37bdda5832f">
Jan 31 02:51:41 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:51:41 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 02:51:41 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 02:51:41 np0005603609 nova_compute[221550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 02:51:41 np0005603609 nova_compute[221550]: 2026-01-31 07:51:41.499 221554 DEBUG oslo_concurrency.lockutils [None req-ea32e54b-e8fb-4b9d-b09a-6de9ac5e2470 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-537adcb0-1fba-4e29-9bb3-b33ba7e3e523-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 20.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:51:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:51:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:41.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:51:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:42 np0005603609 nova_compute[221550]: 2026-01-31 07:51:42.764 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:42.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:42 np0005603609 nova_compute[221550]: 2026-01-31 07:51:42.883 221554 DEBUG nova.compute.manager [req-ebe86a77-fd4c-4c04-8e44-038794883f10 req-b2dc32b8-cdd0-4ea1-9a8e-c6a40d2af0dd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received event network-vif-plugged-9ec6f03b-475c-4295-8014-f37bdda5832f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:51:42 np0005603609 nova_compute[221550]: 2026-01-31 07:51:42.883 221554 DEBUG oslo_concurrency.lockutils [req-ebe86a77-fd4c-4c04-8e44-038794883f10 req-b2dc32b8-cdd0-4ea1-9a8e-c6a40d2af0dd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:42 np0005603609 nova_compute[221550]: 2026-01-31 07:51:42.883 221554 DEBUG oslo_concurrency.lockutils [req-ebe86a77-fd4c-4c04-8e44-038794883f10 req-b2dc32b8-cdd0-4ea1-9a8e-c6a40d2af0dd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:51:42 np0005603609 nova_compute[221550]: 2026-01-31 07:51:42.884 221554 DEBUG oslo_concurrency.lockutils [req-ebe86a77-fd4c-4c04-8e44-038794883f10 req-b2dc32b8-cdd0-4ea1-9a8e-c6a40d2af0dd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:51:42 np0005603609 nova_compute[221550]: 2026-01-31 07:51:42.884 221554 DEBUG nova.compute.manager [req-ebe86a77-fd4c-4c04-8e44-038794883f10 req-b2dc32b8-cdd0-4ea1-9a8e-c6a40d2af0dd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] No waiting events found dispatching network-vif-plugged-9ec6f03b-475c-4295-8014-f37bdda5832f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:51:42 np0005603609 nova_compute[221550]: 2026-01-31 07:51:42.884 221554 WARNING nova.compute.manager [req-ebe86a77-fd4c-4c04-8e44-038794883f10 req-b2dc32b8-cdd0-4ea1-9a8e-c6a40d2af0dd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received unexpected event network-vif-plugged-9ec6f03b-475c-4295-8014-f37bdda5832f for instance with vm_state active and task_state None.#033[00m
Jan 31 02:51:43 np0005603609 ovn_controller[130359]: 2026-01-31T07:51:43Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:f6:82 10.100.0.13
Jan 31 02:51:43 np0005603609 ovn_controller[130359]: 2026-01-31T07:51:43Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:f6:82 10.100.0.13
Jan 31 02:51:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:43.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:44.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:45.300 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:51:45 np0005603609 nova_compute[221550]: 2026-01-31 07:51:45.301 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:45.302 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:51:45 np0005603609 nova_compute[221550]: 2026-01-31 07:51:45.309 221554 DEBUG nova.network.neutron [req-48ba3986-e3a9-4069-a2ef-eeaf985d82f4 req-b66a8322-fd03-4e52-82e2-99a1a8a05dd9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Updated VIF entry in instance network info cache for port 9ec6f03b-475c-4295-8014-f37bdda5832f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:51:45 np0005603609 nova_compute[221550]: 2026-01-31 07:51:45.309 221554 DEBUG nova.network.neutron [req-48ba3986-e3a9-4069-a2ef-eeaf985d82f4 req-b66a8322-fd03-4e52-82e2-99a1a8a05dd9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Updating instance_info_cache with network_info: [{"id": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "address": "fa:16:3e:69:b6:7b", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80de277a-9f", "ovs_interfaceid": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9ec6f03b-475c-4295-8014-f37bdda5832f", "address": "fa:16:3e:1a:f6:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec6f03b-47", "ovs_interfaceid": "9ec6f03b-475c-4295-8014-f37bdda5832f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:51:45 np0005603609 nova_compute[221550]: 2026-01-31 07:51:45.374 221554 DEBUG nova.compute.manager [req-4434a2c7-cf70-4cc7-9dc1-5da587e372ad req-0c0cceac-5488-4734-9c21-50099c901f01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received event network-vif-plugged-9ec6f03b-475c-4295-8014-f37bdda5832f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:51:45 np0005603609 nova_compute[221550]: 2026-01-31 07:51:45.374 221554 DEBUG oslo_concurrency.lockutils [req-4434a2c7-cf70-4cc7-9dc1-5da587e372ad req-0c0cceac-5488-4734-9c21-50099c901f01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:45 np0005603609 nova_compute[221550]: 2026-01-31 07:51:45.374 221554 DEBUG oslo_concurrency.lockutils [req-4434a2c7-cf70-4cc7-9dc1-5da587e372ad req-0c0cceac-5488-4734-9c21-50099c901f01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:51:45 np0005603609 nova_compute[221550]: 2026-01-31 07:51:45.374 221554 DEBUG oslo_concurrency.lockutils [req-4434a2c7-cf70-4cc7-9dc1-5da587e372ad req-0c0cceac-5488-4734-9c21-50099c901f01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:51:45 np0005603609 nova_compute[221550]: 2026-01-31 07:51:45.375 221554 DEBUG nova.compute.manager [req-4434a2c7-cf70-4cc7-9dc1-5da587e372ad req-0c0cceac-5488-4734-9c21-50099c901f01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] No waiting events found dispatching network-vif-plugged-9ec6f03b-475c-4295-8014-f37bdda5832f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:51:45 np0005603609 nova_compute[221550]: 2026-01-31 07:51:45.375 221554 WARNING nova.compute.manager [req-4434a2c7-cf70-4cc7-9dc1-5da587e372ad req-0c0cceac-5488-4734-9c21-50099c901f01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received unexpected event network-vif-plugged-9ec6f03b-475c-4295-8014-f37bdda5832f for instance with vm_state active and task_state None.#033[00m
Jan 31 02:51:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:45.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:45 np0005603609 nova_compute[221550]: 2026-01-31 07:51:45.540 221554 DEBUG oslo_concurrency.lockutils [req-48ba3986-e3a9-4069-a2ef-eeaf985d82f4 req-b66a8322-fd03-4e52-82e2-99a1a8a05dd9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:51:45 np0005603609 nova_compute[221550]: 2026-01-31 07:51:45.955 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:46 np0005603609 nova_compute[221550]: 2026-01-31 07:51:46.433 221554 DEBUG oslo_concurrency.lockutils [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "interface-537adcb0-1fba-4e29-9bb3-b33ba7e3e523-9ec6f03b-475c-4295-8014-f37bdda5832f" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:46 np0005603609 nova_compute[221550]: 2026-01-31 07:51:46.433 221554 DEBUG oslo_concurrency.lockutils [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-537adcb0-1fba-4e29-9bb3-b33ba7e3e523-9ec6f03b-475c-4295-8014-f37bdda5832f" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:51:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:46.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:51:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:47.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:51:47 np0005603609 nova_compute[221550]: 2026-01-31 07:51:47.705 221554 DEBUG nova.objects.instance [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'flavor' on Instance uuid 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:51:47 np0005603609 nova_compute[221550]: 2026-01-31 07:51:47.766 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.142 221554 DEBUG nova.virt.libvirt.vif [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:50:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1077414172',display_name='tempest-AttachInterfacesTestJSON-server-1077414172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1077414172',id=70,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAcDMT21wqfH8JLIQ1A1rchIYyXu+jv2taZ98rEzLlbMI8L3f1ONZJQxWBx/yKSKBA7FUnPq2XK4gO+skhzO2F8qgCgP1dHat5l6TbqZGCAz/thGBq96oAIJMHIEagTjug==',key_name='tempest-keypair-807659227',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:50:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-g7pm4i0k',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:50:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=537adcb0-1fba-4e29-9bb3-b33ba7e3e523,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ec6f03b-475c-4295-8014-f37bdda5832f", "address": "fa:16:3e:1a:f6:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec6f03b-47", "ovs_interfaceid": "9ec6f03b-475c-4295-8014-f37bdda5832f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.142 221554 DEBUG nova.network.os_vif_util [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "9ec6f03b-475c-4295-8014-f37bdda5832f", "address": "fa:16:3e:1a:f6:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec6f03b-47", "ovs_interfaceid": "9ec6f03b-475c-4295-8014-f37bdda5832f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.143 221554 DEBUG nova.network.os_vif_util [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:f6:82,bridge_name='br-int',has_traffic_filtering=True,id=9ec6f03b-475c-4295-8014-f37bdda5832f,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec6f03b-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.148 221554 DEBUG nova.virt.libvirt.guest [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:f6:82"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ec6f03b-47"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.151 221554 DEBUG nova.virt.libvirt.guest [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:f6:82"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ec6f03b-47"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.155 221554 DEBUG nova.virt.libvirt.driver [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Attempting to detach device tap9ec6f03b-47 from instance 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.156 221554 DEBUG nova.virt.libvirt.guest [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] detach device xml: <interface type="ethernet">
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:1a:f6:82"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <target dev="tap9ec6f03b-47"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]: </interface>
Jan 31 02:51:48 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.224 221554 DEBUG nova.virt.libvirt.guest [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:f6:82"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ec6f03b-47"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.227 221554 DEBUG nova.virt.libvirt.guest [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1a:f6:82"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ec6f03b-47"/></interface> not found in domain: <domain type='kvm' id='32'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <name>instance-00000046</name>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <uuid>537adcb0-1fba-4e29-9bb3-b33ba7e3e523</uuid>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1077414172</nova:name>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 07:51:41</nova:creationTime>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:port uuid="80de277a-9f3a-4b8a-a272-23c1948aaa56">
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:port uuid="9ec6f03b-475c-4295-8014-f37bdda5832f">
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 02:51:48 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <memory unit='KiB'>131072</memory>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <vcpu placement='static'>1</vcpu>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <resource>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <partition>/machine</partition>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </resource>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <sysinfo type='smbios'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <entry name='manufacturer'>RDO</entry>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <entry name='serial'>537adcb0-1fba-4e29-9bb3-b33ba7e3e523</entry>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <entry name='uuid'>537adcb0-1fba-4e29-9bb3-b33ba7e3e523</entry>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <entry name='family'>Virtual Machine</entry>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <boot dev='hd'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <smbios mode='sysinfo'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <vmcoreinfo state='on'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <model fallback='forbid'>Nehalem</model>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <feature policy='require' name='x2apic'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <feature policy='require' name='hypervisor'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <feature policy='require' name='vme'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <clock offset='utc'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <timer name='hpet' present='no'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <on_poweroff>destroy</on_poweroff>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <on_reboot>restart</on_reboot>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <on_crash>destroy</on_crash>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <disk type='network' device='disk'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk' index='2'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target dev='vda' bus='virtio'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='virtio-disk0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <disk type='network' device='cdrom'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk.config' index='1'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target dev='sda' bus='sata'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <readonly/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='sata0-0-0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pcie.0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='1' port='0x10'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.1'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='2' port='0x11'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.2'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='3' port='0x12'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.3'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='4' port='0x13'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.4'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='5' port='0x14'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.5'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='6' port='0x15'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.6'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='7' port='0x16'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.7'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='8' port='0x17'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.8'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='9' port='0x18'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.9'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='10' port='0x19'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.10'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='11' port='0x1a'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.11'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='12' port='0x1b'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.12'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='13' port='0x1c'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.13'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='14' port='0x1d'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.14'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='15' port='0x1e'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.15'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='16' port='0x1f'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.16'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='17' port='0x20'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.17'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='18' port='0x21'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.18'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='19' port='0x22'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.19'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='20' port='0x23'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.20'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='21' port='0x24'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.21'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='22' port='0x25'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.22'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='23' port='0x26'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.23'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='24' port='0x27'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.24'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='25' port='0x28'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.25'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-pci-bridge'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.26'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='usb'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='sata' index='0'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='ide'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:69:b6:7b'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target dev='tap80de277a-9f'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='net0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:1a:f6:82'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target dev='tap9ec6f03b-47'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='net1'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <serial type='pty'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/537adcb0-1fba-4e29-9bb3-b33ba7e3e523/console.log' append='off'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target type='isa-serial' port='0'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <model name='isa-serial'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      </target>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/537adcb0-1fba-4e29-9bb3-b33ba7e3e523/console.log' append='off'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target type='serial' port='0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </console>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <input type='tablet' bus='usb'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='input0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='usb' bus='0' port='1'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <input type='mouse' bus='ps2'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='input1'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <input type='keyboard' bus='ps2'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='input2'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <listen type='address' address='::0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <audio id='1' type='none'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='video0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <watchdog model='itco' action='reset'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='watchdog0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </watchdog>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <memballoon model='virtio'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <stats period='10'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='balloon0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <rng model='virtio'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <backend model='random'>/dev/urandom</backend>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='rng0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <label>system_u:system_r:svirt_t:s0:c764,c937</label>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c764,c937</imagelabel>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <label>+107:+107</label>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <imagelabel>+107:+107</imagelabel>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 02:51:48 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:51:48 np0005603609 nova_compute[221550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.228 221554 INFO nova.virt.libvirt.driver [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully detached device tap9ec6f03b-47 from instance 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 from the persistent domain config.#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.229 221554 DEBUG nova.virt.libvirt.driver [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] (1/8): Attempting to detach device tap9ec6f03b-47 with device alias net1 from instance 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.229 221554 DEBUG nova.virt.libvirt.guest [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] detach device xml: <interface type="ethernet">
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:1a:f6:82"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <target dev="tap9ec6f03b-47"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]: </interface>
Jan 31 02:51:48 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 02:51:48 np0005603609 kernel: tap9ec6f03b-47 (unregistering): left promiscuous mode
Jan 31 02:51:48 np0005603609 NetworkManager[49064]: <info>  [1769845908.3166] device (tap9ec6f03b-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.319 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:51:48Z|00215|binding|INFO|Releasing lport 9ec6f03b-475c-4295-8014-f37bdda5832f from this chassis (sb_readonly=0)
Jan 31 02:51:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:51:48Z|00216|binding|INFO|Setting lport 9ec6f03b-475c-4295-8014-f37bdda5832f down in Southbound
Jan 31 02:51:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:51:48Z|00217|binding|INFO|Removing iface tap9ec6f03b-47 ovn-installed in OVS
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.326 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.330 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.332 221554 DEBUG nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Received event <DeviceRemovedEvent: 1769845908.3325415, 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.334 221554 DEBUG nova.virt.libvirt.driver [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Start waiting for the detach event from libvirt for device tap9ec6f03b-47 with device alias net1 for instance 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.334 221554 DEBUG nova.virt.libvirt.guest [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:f6:82"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ec6f03b-47"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.337 221554 DEBUG nova.virt.libvirt.guest [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1a:f6:82"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ec6f03b-47"/></interface>not found in domain: <domain type='kvm' id='32'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <name>instance-00000046</name>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <uuid>537adcb0-1fba-4e29-9bb3-b33ba7e3e523</uuid>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1077414172</nova:name>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 07:51:41</nova:creationTime>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:port uuid="80de277a-9f3a-4b8a-a272-23c1948aaa56">
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:port uuid="9ec6f03b-475c-4295-8014-f37bdda5832f">
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 02:51:48 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <memory unit='KiB'>131072</memory>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <vcpu placement='static'>1</vcpu>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <resource>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <partition>/machine</partition>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </resource>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <sysinfo type='smbios'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <entry name='manufacturer'>RDO</entry>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <entry name='serial'>537adcb0-1fba-4e29-9bb3-b33ba7e3e523</entry>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <entry name='uuid'>537adcb0-1fba-4e29-9bb3-b33ba7e3e523</entry>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <entry name='family'>Virtual Machine</entry>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <boot dev='hd'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <smbios mode='sysinfo'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <vmcoreinfo state='on'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <model fallback='forbid'>Nehalem</model>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <feature policy='require' name='x2apic'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <feature policy='require' name='hypervisor'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <feature policy='require' name='vme'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <clock offset='utc'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <timer name='hpet' present='no'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <on_poweroff>destroy</on_poweroff>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <on_reboot>restart</on_reboot>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <on_crash>destroy</on_crash>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <disk type='network' device='disk'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk' index='2'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target dev='vda' bus='virtio'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='virtio-disk0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <disk type='network' device='cdrom'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk.config' index='1'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target dev='sda' bus='sata'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <readonly/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='sata0-0-0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pcie.0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='1' port='0x10'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.1'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='2' port='0x11'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.2'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='3' port='0x12'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.3'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='4' port='0x13'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.4'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='5' port='0x14'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.5'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='6' port='0x15'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.6'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='7' port='0x16'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.7'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='8' port='0x17'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.8'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='9' port='0x18'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.9'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='10' port='0x19'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.10'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='11' port='0x1a'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.11'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='12' port='0x1b'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.12'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='13' port='0x1c'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.13'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='14' port='0x1d'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.14'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='15' port='0x1e'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.15'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='16' port='0x1f'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.16'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='17' port='0x20'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.17'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='18' port='0x21'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.18'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='19' port='0x22'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.19'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='20' port='0x23'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.20'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='21' port='0x24'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.21'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='22' port='0x25'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.22'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='23' port='0x26'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.23'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='24' port='0x27'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.24'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target chassis='25' port='0x28'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.25'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model name='pcie-pci-bridge'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='pci.26'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='usb'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <controller type='sata' index='0'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='ide'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:69:b6:7b'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target dev='tap80de277a-9f'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='net0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <serial type='pty'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/537adcb0-1fba-4e29-9bb3-b33ba7e3e523/console.log' append='off'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target type='isa-serial' port='0'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:        <model name='isa-serial'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      </target>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/537adcb0-1fba-4e29-9bb3-b33ba7e3e523/console.log' append='off'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <target type='serial' port='0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </console>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <input type='tablet' bus='usb'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='input0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='usb' bus='0' port='1'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <input type='mouse' bus='ps2'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='input1'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <input type='keyboard' bus='ps2'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='input2'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <listen type='address' address='::0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <audio id='1' type='none'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='video0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <watchdog model='itco' action='reset'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='watchdog0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </watchdog>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <memballoon model='virtio'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <stats period='10'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='balloon0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <rng model='virtio'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <backend model='random'>/dev/urandom</backend>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <alias name='rng0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <label>system_u:system_r:svirt_t:s0:c764,c937</label>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c764,c937</imagelabel>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <label>+107:+107</label>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <imagelabel>+107:+107</imagelabel>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 02:51:48 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:51:48 np0005603609 nova_compute[221550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.338 221554 INFO nova.virt.libvirt.driver [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully detached device tap9ec6f03b-47 from instance 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 from the live domain config.#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.339 221554 DEBUG nova.virt.libvirt.vif [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:50:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1077414172',display_name='tempest-AttachInterfacesTestJSON-server-1077414172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1077414172',id=70,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAcDMT21wqfH8JLIQ1A1rchIYyXu+jv2taZ98rEzLlbMI8L3f1ONZJQxWBx/yKSKBA7FUnPq2XK4gO+skhzO2F8qgCgP1dHat5l6TbqZGCAz/thGBq96oAIJMHIEagTjug==',key_name='tempest-keypair-807659227',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:50:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-g7pm4i0k',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:50:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=537adcb0-1fba-4e29-9bb3-b33ba7e3e523,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ec6f03b-475c-4295-8014-f37bdda5832f", "address": "fa:16:3e:1a:f6:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec6f03b-47", "ovs_interfaceid": "9ec6f03b-475c-4295-8014-f37bdda5832f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.339 221554 DEBUG nova.network.os_vif_util [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "9ec6f03b-475c-4295-8014-f37bdda5832f", "address": "fa:16:3e:1a:f6:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec6f03b-47", "ovs_interfaceid": "9ec6f03b-475c-4295-8014-f37bdda5832f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.340 221554 DEBUG nova.network.os_vif_util [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:f6:82,bridge_name='br-int',has_traffic_filtering=True,id=9ec6f03b-475c-4295-8014-f37bdda5832f,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec6f03b-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.340 221554 DEBUG os_vif [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:f6:82,bridge_name='br-int',has_traffic_filtering=True,id=9ec6f03b-475c-4295-8014-f37bdda5832f,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec6f03b-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.342 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.343 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ec6f03b-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.344 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.346 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.347 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.349 221554 INFO os_vif [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:f6:82,bridge_name='br-int',has_traffic_filtering=True,id=9ec6f03b-475c-4295-8014-f37bdda5832f,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec6f03b-47')#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.350 221554 DEBUG nova.virt.libvirt.guest [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1077414172</nova:name>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 07:51:48</nova:creationTime>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    <nova:port uuid="80de277a-9f3a-4b8a-a272-23c1948aaa56">
Jan 31 02:51:48 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:51:48 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 02:51:48 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 02:51:48 np0005603609 nova_compute[221550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 02:51:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:48.382 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:f6:82 10.100.0.13'], port_security=['fa:16:3e:1a:f6:82 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '537adcb0-1fba-4e29-9bb3-b33ba7e3e523', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7e4e426-0b76-41c4-8dcb-ea007c31db76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=9ec6f03b-475c-4295-8014-f37bdda5832f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:51:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:48.383 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 9ec6f03b-475c-4295-8014-f37bdda5832f in datapath 485494d9-5360-41c3-a10e-ef5098af0809 unbound from our chassis#033[00m
Jan 31 02:51:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:48.384 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809#033[00m
Jan 31 02:51:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:48.397 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3f72cf63-f6e1-4a1b-b963-b4e3e86f7a2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:48.423 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[06db52f9-2dc6-4729-936e-e8aadc62979f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:48.426 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[475d7fbd-8715-432c-bd67-ca9ad60528b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:48.446 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f8907b6f-f9df-4fca-bd7b-f6e74d667ef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:48.457 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf2b65f-dc18-4172-8fd6-d28832528332]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607966, 'reachable_time': 42681, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247751, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:48.469 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[19a9149e-0f7c-446f-875f-8c5d3ab6e861]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607974, 'tstamp': 607974}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247752, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607975, 'tstamp': 607975}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247752, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:48.471 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:51:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:48.473 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:51:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:48.473 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:51:48 np0005603609 nova_compute[221550]: 2026-01-31 07:51:48.473 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:48.474 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:51:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:48.474 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:51:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:48.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:49 np0005603609 nova_compute[221550]: 2026-01-31 07:51:49.076 221554 DEBUG nova.compute.manager [req-b9406a02-0f41-4bae-ab79-27519b386743 req-f2a51a7c-069c-464c-a30a-6fbf4c851f95 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received event network-vif-unplugged-9ec6f03b-475c-4295-8014-f37bdda5832f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:51:49 np0005603609 nova_compute[221550]: 2026-01-31 07:51:49.076 221554 DEBUG oslo_concurrency.lockutils [req-b9406a02-0f41-4bae-ab79-27519b386743 req-f2a51a7c-069c-464c-a30a-6fbf4c851f95 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:49 np0005603609 nova_compute[221550]: 2026-01-31 07:51:49.076 221554 DEBUG oslo_concurrency.lockutils [req-b9406a02-0f41-4bae-ab79-27519b386743 req-f2a51a7c-069c-464c-a30a-6fbf4c851f95 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:51:49 np0005603609 nova_compute[221550]: 2026-01-31 07:51:49.076 221554 DEBUG oslo_concurrency.lockutils [req-b9406a02-0f41-4bae-ab79-27519b386743 req-f2a51a7c-069c-464c-a30a-6fbf4c851f95 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:51:49 np0005603609 nova_compute[221550]: 2026-01-31 07:51:49.076 221554 DEBUG nova.compute.manager [req-b9406a02-0f41-4bae-ab79-27519b386743 req-f2a51a7c-069c-464c-a30a-6fbf4c851f95 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] No waiting events found dispatching network-vif-unplugged-9ec6f03b-475c-4295-8014-f37bdda5832f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:51:49 np0005603609 nova_compute[221550]: 2026-01-31 07:51:49.077 221554 WARNING nova.compute.manager [req-b9406a02-0f41-4bae-ab79-27519b386743 req-f2a51a7c-069c-464c-a30a-6fbf4c851f95 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received unexpected event network-vif-unplugged-9ec6f03b-475c-4295-8014-f37bdda5832f for instance with vm_state active and task_state None.#033[00m
Jan 31 02:51:49 np0005603609 nova_compute[221550]: 2026-01-31 07:51:49.533 221554 DEBUG oslo_concurrency.lockutils [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:51:49 np0005603609 nova_compute[221550]: 2026-01-31 07:51:49.534 221554 DEBUG oslo_concurrency.lockutils [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquired lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:51:49 np0005603609 nova_compute[221550]: 2026-01-31 07:51:49.535 221554 DEBUG nova.network.neutron [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:51:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:49.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:50 np0005603609 nova_compute[221550]: 2026-01-31 07:51:50.867 221554 DEBUG oslo_concurrency.lockutils [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:50 np0005603609 nova_compute[221550]: 2026-01-31 07:51:50.868 221554 DEBUG oslo_concurrency.lockutils [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:51:50 np0005603609 nova_compute[221550]: 2026-01-31 07:51:50.868 221554 DEBUG oslo_concurrency.lockutils [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:50 np0005603609 nova_compute[221550]: 2026-01-31 07:51:50.868 221554 DEBUG oslo_concurrency.lockutils [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:51:50 np0005603609 nova_compute[221550]: 2026-01-31 07:51:50.869 221554 DEBUG oslo_concurrency.lockutils [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:51:50 np0005603609 nova_compute[221550]: 2026-01-31 07:51:50.870 221554 INFO nova.compute.manager [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Terminating instance#033[00m
Jan 31 02:51:50 np0005603609 nova_compute[221550]: 2026-01-31 07:51:50.871 221554 DEBUG nova.compute.manager [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:51:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:50.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:50 np0005603609 nova_compute[221550]: 2026-01-31 07:51:50.922 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:51 np0005603609 kernel: tap80de277a-9f (unregistering): left promiscuous mode
Jan 31 02:51:51 np0005603609 NetworkManager[49064]: <info>  [1769845911.1994] device (tap80de277a-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.204 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:51 np0005603609 ovn_controller[130359]: 2026-01-31T07:51:51Z|00218|binding|INFO|Releasing lport 80de277a-9f3a-4b8a-a272-23c1948aaa56 from this chassis (sb_readonly=0)
Jan 31 02:51:51 np0005603609 ovn_controller[130359]: 2026-01-31T07:51:51Z|00219|binding|INFO|Setting lport 80de277a-9f3a-4b8a-a272-23c1948aaa56 down in Southbound
Jan 31 02:51:51 np0005603609 ovn_controller[130359]: 2026-01-31T07:51:51Z|00220|binding|INFO|Removing iface tap80de277a-9f ovn-installed in OVS
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.207 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.214 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:51 np0005603609 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000046.scope: Deactivated successfully.
Jan 31 02:51:51 np0005603609 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000046.scope: Consumed 15.508s CPU time.
Jan 31 02:51:51 np0005603609 systemd-machined[190912]: Machine qemu-32-instance-00000046 terminated.
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.282 221554 DEBUG nova.compute.manager [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received event network-vif-plugged-9ec6f03b-475c-4295-8014-f37bdda5832f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.283 221554 DEBUG oslo_concurrency.lockutils [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.283 221554 DEBUG oslo_concurrency.lockutils [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.283 221554 DEBUG oslo_concurrency.lockutils [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.284 221554 DEBUG nova.compute.manager [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] No waiting events found dispatching network-vif-plugged-9ec6f03b-475c-4295-8014-f37bdda5832f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.284 221554 WARNING nova.compute.manager [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received unexpected event network-vif-plugged-9ec6f03b-475c-4295-8014-f37bdda5832f for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.284 221554 DEBUG nova.compute.manager [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received event network-vif-deleted-9ec6f03b-475c-4295-8014-f37bdda5832f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.284 221554 INFO nova.compute.manager [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Neutron deleted interface 9ec6f03b-475c-4295-8014-f37bdda5832f; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.285 221554 DEBUG nova.network.neutron [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Updating instance_info_cache with network_info: [{"id": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "address": "fa:16:3e:69:b6:7b", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80de277a-9f", "ovs_interfaceid": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:51:51 np0005603609 NetworkManager[49064]: <info>  [1769845911.2858] manager: (tap80de277a-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/111)
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.287 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.294 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.300 221554 INFO nova.virt.libvirt.driver [-] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Instance destroyed successfully.#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.300 221554 DEBUG nova.objects.instance [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'resources' on Instance uuid 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:51.404 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:b6:7b 10.100.0.14'], port_security=['fa:16:3e:69:b6:7b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '537adcb0-1fba-4e29-9bb3-b33ba7e3e523', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5324d833-550d-4984-912a-15a1917a1b00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=80de277a-9f3a-4b8a-a272-23c1948aaa56) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:51.406 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 80de277a-9f3a-4b8a-a272-23c1948aaa56 in datapath 485494d9-5360-41c3-a10e-ef5098af0809 unbound from our chassis#033[00m
Jan 31 02:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:51.409 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 485494d9-5360-41c3-a10e-ef5098af0809, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:51.410 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0e19b30e-39b6-4d94-9012-d6110a051cc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:51.411 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809 namespace which is not needed anymore#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.441 221554 INFO nova.network.neutron [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Port 9ec6f03b-475c-4295-8014-f37bdda5832f from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.441 221554 DEBUG nova.network.neutron [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Updating instance_info_cache with network_info: [{"id": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "address": "fa:16:3e:69:b6:7b", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80de277a-9f", "ovs_interfaceid": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.485 221554 DEBUG nova.virt.libvirt.vif [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:50:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1077414172',display_name='tempest-AttachInterfacesTestJSON-server-1077414172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1077414172',id=70,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAcDMT21wqfH8JLIQ1A1rchIYyXu+jv2taZ98rEzLlbMI8L3f1ONZJQxWBx/yKSKBA7FUnPq2XK4gO+skhzO2F8qgCgP1dHat5l6TbqZGCAz/thGBq96oAIJMHIEagTjug==',key_name='tempest-keypair-807659227',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:50:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-g7pm4i0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:50:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=537adcb0-1fba-4e29-9bb3-b33ba7e3e523,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "address": "fa:16:3e:69:b6:7b", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80de277a-9f", "ovs_interfaceid": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.486 221554 DEBUG nova.network.os_vif_util [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "address": "fa:16:3e:69:b6:7b", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80de277a-9f", "ovs_interfaceid": "80de277a-9f3a-4b8a-a272-23c1948aaa56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.487 221554 DEBUG nova.network.os_vif_util [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:b6:7b,bridge_name='br-int',has_traffic_filtering=True,id=80de277a-9f3a-4b8a-a272-23c1948aaa56,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80de277a-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.487 221554 DEBUG os_vif [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:b6:7b,bridge_name='br-int',has_traffic_filtering=True,id=80de277a-9f3a-4b8a-a272-23c1948aaa56,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80de277a-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.489 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.489 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80de277a-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.526 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.527 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.530 221554 INFO os_vif [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:b6:7b,bridge_name='br-int',has_traffic_filtering=True,id=80de277a-9f3a-4b8a-a272-23c1948aaa56,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80de277a-9f')#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.531 221554 DEBUG nova.virt.libvirt.vif [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:50:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1077414172',display_name='tempest-AttachInterfacesTestJSON-server-1077414172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1077414172',id=70,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAcDMT21wqfH8JLIQ1A1rchIYyXu+jv2taZ98rEzLlbMI8L3f1ONZJQxWBx/yKSKBA7FUnPq2XK4gO+skhzO2F8qgCgP1dHat5l6TbqZGCAz/thGBq96oAIJMHIEagTjug==',key_name='tempest-keypair-807659227',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:50:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-g7pm4i0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:50:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=537adcb0-1fba-4e29-9bb3-b33ba7e3e523,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ec6f03b-475c-4295-8014-f37bdda5832f", "address": "fa:16:3e:1a:f6:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec6f03b-47", "ovs_interfaceid": "9ec6f03b-475c-4295-8014-f37bdda5832f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.531 221554 DEBUG nova.network.os_vif_util [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "9ec6f03b-475c-4295-8014-f37bdda5832f", "address": "fa:16:3e:1a:f6:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec6f03b-47", "ovs_interfaceid": "9ec6f03b-475c-4295-8014-f37bdda5832f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.532 221554 DEBUG nova.network.os_vif_util [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:f6:82,bridge_name='br-int',has_traffic_filtering=True,id=9ec6f03b-475c-4295-8014-f37bdda5832f,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec6f03b-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.533 221554 DEBUG os_vif [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:f6:82,bridge_name='br-int',has_traffic_filtering=True,id=9ec6f03b-475c-4295-8014-f37bdda5832f,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec6f03b-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.534 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.535 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ec6f03b-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.535 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:51:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:51:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:51.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.541 221554 INFO os_vif [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:f6:82,bridge_name='br-int',has_traffic_filtering=True,id=9ec6f03b-475c-4295-8014-f37bdda5832f,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec6f03b-47')#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.561 221554 DEBUG nova.objects.instance [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lazy-loading 'system_metadata' on Instance uuid 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.565 221554 DEBUG oslo_concurrency.lockutils [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Releasing lock "refresh_cache-537adcb0-1fba-4e29-9bb3-b33ba7e3e523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:51:51 np0005603609 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[247391]: [NOTICE]   (247395) : haproxy version is 2.8.14-c23fe91
Jan 31 02:51:51 np0005603609 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[247391]: [NOTICE]   (247395) : path to executable is /usr/sbin/haproxy
Jan 31 02:51:51 np0005603609 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[247391]: [WARNING]  (247395) : Exiting Master process...
Jan 31 02:51:51 np0005603609 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[247391]: [ALERT]    (247395) : Current worker (247397) exited with code 143 (Terminated)
Jan 31 02:51:51 np0005603609 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[247391]: [WARNING]  (247395) : All workers exited. Exiting... (0)
Jan 31 02:51:51 np0005603609 systemd[1]: libpod-d4d284cdd515c0613f4493bb0ef68c3d357a08e1b2bfcab98146e94499205571.scope: Deactivated successfully.
Jan 31 02:51:51 np0005603609 podman[247785]: 2026-01-31 07:51:51.609097633 +0000 UTC m=+0.120399505 container died d4d284cdd515c0613f4493bb0ef68c3d357a08e1b2bfcab98146e94499205571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.656 221554 DEBUG nova.objects.instance [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lazy-loading 'flavor' on Instance uuid 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.662 221554 DEBUG oslo_concurrency.lockutils [None req-b4fbf0a5-8ec6-4ec4-a774-f7d3032d5439 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-537adcb0-1fba-4e29-9bb3-b33ba7e3e523-9ec6f03b-475c-4295-8014-f37bdda5832f" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.703 221554 DEBUG nova.virt.libvirt.vif [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:50:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1077414172',display_name='tempest-AttachInterfacesTestJSON-server-1077414172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1077414172',id=70,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAcDMT21wqfH8JLIQ1A1rchIYyXu+jv2taZ98rEzLlbMI8L3f1ONZJQxWBx/yKSKBA7FUnPq2XK4gO+skhzO2F8qgCgP1dHat5l6TbqZGCAz/thGBq96oAIJMHIEagTjug==',key_name='tempest-keypair-807659227',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:50:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-g7pm4i0k',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:51:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=537adcb0-1fba-4e29-9bb3-b33ba7e3e523,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ec6f03b-475c-4295-8014-f37bdda5832f", "address": "fa:16:3e:1a:f6:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec6f03b-47", "ovs_interfaceid": "9ec6f03b-475c-4295-8014-f37bdda5832f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.704 221554 DEBUG nova.network.os_vif_util [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converting VIF {"id": "9ec6f03b-475c-4295-8014-f37bdda5832f", "address": "fa:16:3e:1a:f6:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec6f03b-47", "ovs_interfaceid": "9ec6f03b-475c-4295-8014-f37bdda5832f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.705 221554 DEBUG nova.network.os_vif_util [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:f6:82,bridge_name='br-int',has_traffic_filtering=True,id=9ec6f03b-475c-4295-8014-f37bdda5832f,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec6f03b-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.708 221554 DEBUG nova.virt.libvirt.guest [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:f6:82"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ec6f03b-47"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.712 221554 DEBUG nova.virt.libvirt.guest [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1a:f6:82"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ec6f03b-47"/></interface>not found in domain: <domain type='kvm'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <name>instance-00000046</name>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <uuid>537adcb0-1fba-4e29-9bb3-b33ba7e3e523</uuid>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1077414172</nova:name>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:50:37</nova:creationTime>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <nova:port uuid="80de277a-9f3a-4b8a-a272-23c1948aaa56">
Jan 31 02:51:51 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <memory unit='KiB'>131072</memory>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <vcpu placement='static'>1</vcpu>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <sysinfo type='smbios'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <entry name='manufacturer'>RDO</entry>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <entry name='serial'>537adcb0-1fba-4e29-9bb3-b33ba7e3e523</entry>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <entry name='uuid'>537adcb0-1fba-4e29-9bb3-b33ba7e3e523</entry>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <entry name='family'>Virtual Machine</entry>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <boot dev='hd'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <smbios mode='sysinfo'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <vmcoreinfo state='on'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <cpu mode='custom' match='exact' check='partial'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <model fallback='allow'>Nehalem</model>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <clock offset='utc'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <timer name='hpet' present='no'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <on_poweroff>destroy</on_poweroff>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <on_reboot>restart</on_reboot>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <on_crash>destroy</on_crash>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <disk type='network' device='disk'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target dev='vda' bus='virtio'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <disk type='network' device='cdrom'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/537adcb0-1fba-4e29-9bb3-b33ba7e3e523_disk.config'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target dev='sda' bus='sata'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <readonly/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='0' model='pcie-root'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='1' port='0x10'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='2' port='0x11'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='3' port='0x12'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='4' port='0x13'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='5' port='0x14'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='6' port='0x15'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='7' port='0x16'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='8' port='0x17'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='9' port='0x18'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='10' port='0x19'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='11' port='0x1a'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='12' port='0x1b'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='13' port='0x1c'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='14' port='0x1d'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='15' port='0x1e'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='16' port='0x1f'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='17' port='0x20'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='18' port='0x21'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='19' port='0x22'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='20' port='0x23'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='21' port='0x24'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='22' port='0x25'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='23' port='0x26'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='24' port='0x27'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target chassis='25' port='0x28'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model name='pcie-pci-bridge'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <controller type='sata' index='0'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:69:b6:7b'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target dev='tap80de277a-9f'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <serial type='pty'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/537adcb0-1fba-4e29-9bb3-b33ba7e3e523/console.log' append='off'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target type='isa-serial' port='0'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:        <model name='isa-serial'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      </target>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <console type='pty'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/537adcb0-1fba-4e29-9bb3-b33ba7e3e523/console.log' append='off'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <target type='serial' port='0'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </console>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <input type='tablet' bus='usb'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='usb' bus='0' port='1'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <input type='mouse' bus='ps2'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <input type='keyboard' bus='ps2'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <listen type='address' address='::0'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <audio id='1' type='none'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <watchdog model='itco' action='reset'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <memballoon model='virtio'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <stats period='10'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <rng model='virtio'>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <backend model='random'>/dev/urandom</backend>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:51:51 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:51:51 np0005603609 nova_compute[221550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.713 221554 WARNING nova.virt.libvirt.driver [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Detaching interface fa:16:3e:1a:f6:82 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap9ec6f03b-47' not found.#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.714 221554 DEBUG nova.virt.libvirt.vif [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:50:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1077414172',display_name='tempest-AttachInterfacesTestJSON-server-1077414172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1077414172',id=70,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAcDMT21wqfH8JLIQ1A1rchIYyXu+jv2taZ98rEzLlbMI8L3f1ONZJQxWBx/yKSKBA7FUnPq2XK4gO+skhzO2F8qgCgP1dHat5l6TbqZGCAz/thGBq96oAIJMHIEagTjug==',key_name='tempest-keypair-807659227',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:50:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-g7pm4i0k',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:51:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=537adcb0-1fba-4e29-9bb3-b33ba7e3e523,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ec6f03b-475c-4295-8014-f37bdda5832f", "address": "fa:16:3e:1a:f6:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec6f03b-47", "ovs_interfaceid": "9ec6f03b-475c-4295-8014-f37bdda5832f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.714 221554 DEBUG nova.network.os_vif_util [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converting VIF {"id": "9ec6f03b-475c-4295-8014-f37bdda5832f", "address": "fa:16:3e:1a:f6:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec6f03b-47", "ovs_interfaceid": "9ec6f03b-475c-4295-8014-f37bdda5832f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.715 221554 DEBUG nova.network.os_vif_util [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:f6:82,bridge_name='br-int',has_traffic_filtering=True,id=9ec6f03b-475c-4295-8014-f37bdda5832f,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec6f03b-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.716 221554 DEBUG os_vif [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:f6:82,bridge_name='br-int',has_traffic_filtering=True,id=9ec6f03b-475c-4295-8014-f37bdda5832f,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec6f03b-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.717 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.718 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ec6f03b-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.718 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.720 221554 INFO os_vif [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:f6:82,bridge_name='br-int',has_traffic_filtering=True,id=9ec6f03b-475c-4295-8014-f37bdda5832f,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec6f03b-47')#033[00m
Jan 31 02:51:51 np0005603609 nova_compute[221550]: 2026-01-31 07:51:51.721 221554 DEBUG nova.virt.libvirt.guest [req-ba7fec9b-1000-473b-8f01-f1a40ebf75ac req-707b1432-341f-4568-9c8c-0dd2c728137e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1077414172</nova:name>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 07:51:51</nova:creationTime>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    <nova:port uuid="80de277a-9f3a-4b8a-a272-23c1948aaa56">
Jan 31 02:51:51 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:51:51 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 02:51:51 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 02:51:51 np0005603609 nova_compute[221550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 02:51:51 np0005603609 systemd[1]: var-lib-containers-storage-overlay-1963041eef7e9150ad6b95885d364d24cf0e73f5c85a39491b5adb1750382cb1-merged.mount: Deactivated successfully.
Jan 31 02:51:51 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d4d284cdd515c0613f4493bb0ef68c3d357a08e1b2bfcab98146e94499205571-userdata-shm.mount: Deactivated successfully.
Jan 31 02:51:52 np0005603609 podman[247785]: 2026-01-31 07:51:52.042927934 +0000 UTC m=+0.554229816 container cleanup d4d284cdd515c0613f4493bb0ef68c3d357a08e1b2bfcab98146e94499205571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:51:52 np0005603609 systemd[1]: libpod-conmon-d4d284cdd515c0613f4493bb0ef68c3d357a08e1b2bfcab98146e94499205571.scope: Deactivated successfully.
Jan 31 02:51:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:52 np0005603609 podman[247832]: 2026-01-31 07:51:52.487571876 +0000 UTC m=+0.431497217 container remove d4d284cdd515c0613f4493bb0ef68c3d357a08e1b2bfcab98146e94499205571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:51:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:52.491 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[093f8182-39af-4317-b40d-ff75627d0a34]: (4, ('Sat Jan 31 07:51:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809 (d4d284cdd515c0613f4493bb0ef68c3d357a08e1b2bfcab98146e94499205571)\nd4d284cdd515c0613f4493bb0ef68c3d357a08e1b2bfcab98146e94499205571\nSat Jan 31 07:51:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809 (d4d284cdd515c0613f4493bb0ef68c3d357a08e1b2bfcab98146e94499205571)\nd4d284cdd515c0613f4493bb0ef68c3d357a08e1b2bfcab98146e94499205571\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:52.493 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb2f24b-1a48-430b-8ad6-0bede985e68c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:52.494 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:51:52 np0005603609 nova_compute[221550]: 2026-01-31 07:51:52.495 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:52 np0005603609 kernel: tap485494d9-50: left promiscuous mode
Jan 31 02:51:52 np0005603609 nova_compute[221550]: 2026-01-31 07:51:52.500 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:52.504 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[426336c1-5c32-4256-ad54-382beec3dd4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:52.520 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[10d69057-08e5-4840-86ec-363ebebe337b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:52.521 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1cbf4316-a941-48c5-8bd7-b5ab64ff12ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:52.535 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[685ce61f-adc1-4d59-8c7b-b6a8943ff1ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607961, 'reachable_time': 41744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247846, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:52 np0005603609 systemd[1]: run-netns-ovnmeta\x2d485494d9\x2d5360\x2d41c3\x2da10e\x2def5098af0809.mount: Deactivated successfully.
Jan 31 02:51:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:52.538 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:51:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:52.538 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[73ff58ee-b473-4731-8c20-bc1d09ef3fa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:51:52 np0005603609 nova_compute[221550]: 2026-01-31 07:51:52.768 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:52.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:53.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:53 np0005603609 nova_compute[221550]: 2026-01-31 07:51:53.981 221554 DEBUG nova.compute.manager [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received event network-vif-unplugged-80de277a-9f3a-4b8a-a272-23c1948aaa56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:51:53 np0005603609 nova_compute[221550]: 2026-01-31 07:51:53.982 221554 DEBUG oslo_concurrency.lockutils [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:53 np0005603609 nova_compute[221550]: 2026-01-31 07:51:53.982 221554 DEBUG oslo_concurrency.lockutils [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:51:53 np0005603609 nova_compute[221550]: 2026-01-31 07:51:53.983 221554 DEBUG oslo_concurrency.lockutils [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:51:53 np0005603609 nova_compute[221550]: 2026-01-31 07:51:53.983 221554 DEBUG nova.compute.manager [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] No waiting events found dispatching network-vif-unplugged-80de277a-9f3a-4b8a-a272-23c1948aaa56 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:51:53 np0005603609 nova_compute[221550]: 2026-01-31 07:51:53.984 221554 DEBUG nova.compute.manager [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received event network-vif-unplugged-80de277a-9f3a-4b8a-a272-23c1948aaa56 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 02:51:53 np0005603609 nova_compute[221550]: 2026-01-31 07:51:53.984 221554 DEBUG nova.compute.manager [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received event network-vif-plugged-80de277a-9f3a-4b8a-a272-23c1948aaa56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:51:53 np0005603609 nova_compute[221550]: 2026-01-31 07:51:53.984 221554 DEBUG oslo_concurrency.lockutils [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:51:53 np0005603609 nova_compute[221550]: 2026-01-31 07:51:53.985 221554 DEBUG oslo_concurrency.lockutils [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:51:53 np0005603609 nova_compute[221550]: 2026-01-31 07:51:53.985 221554 DEBUG oslo_concurrency.lockutils [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:51:53 np0005603609 nova_compute[221550]: 2026-01-31 07:51:53.986 221554 DEBUG nova.compute.manager [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] No waiting events found dispatching network-vif-plugged-80de277a-9f3a-4b8a-a272-23c1948aaa56 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:51:53 np0005603609 nova_compute[221550]: 2026-01-31 07:51:53.986 221554 WARNING nova.compute.manager [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received unexpected event network-vif-plugged-80de277a-9f3a-4b8a-a272-23c1948aaa56 for instance with vm_state active and task_state deleting.
Jan 31 02:51:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:54.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:51:55.304 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:51:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:51:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:55.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:51:56 np0005603609 nova_compute[221550]: 2026-01-31 07:51:56.497 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:51:56 np0005603609 nova_compute[221550]: 2026-01-31 07:51:56.525 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:51:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:56.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:57.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:57 np0005603609 nova_compute[221550]: 2026-01-31 07:51:57.771 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:51:57 np0005603609 nova_compute[221550]: 2026-01-31 07:51:57.918 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:51:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:58.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:51:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:59.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:00.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:01 np0005603609 nova_compute[221550]: 2026-01-31 07:52:01.528 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:01.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:02 np0005603609 nova_compute[221550]: 2026-01-31 07:52:02.772 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:02.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:03.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:04 np0005603609 podman[247856]: 2026-01-31 07:52:04.166361734 +0000 UTC m=+0.050528399 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 02:52:04 np0005603609 podman[247855]: 2026-01-31 07:52:04.181679924 +0000 UTC m=+0.070615204 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 02:52:04 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 02:52:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:04.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:05.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:05 np0005603609 nova_compute[221550]: 2026-01-31 07:52:05.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:52:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).paxos(paxos updating c 3013..3716) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 3.205664396s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Jan 31 02:52:05 np0005603609 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-1[81663]: 2026-01-31T07:52:05.781+0000 7f39bcfc4640 -1 mon.compute-1@2(peon).paxos(paxos updating c 3013..3716) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 3.205664396s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Jan 31 02:52:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:06 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 8.476012230s
Jan 31 02:52:06 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 8.476012230s
Jan 31 02:52:06 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.476189613s, txc = 0x55f200988c00
Jan 31 02:52:06 np0005603609 nova_compute[221550]: 2026-01-31 07:52:06.299 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845911.298521, 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:52:06 np0005603609 nova_compute[221550]: 2026-01-31 07:52:06.299 221554 INFO nova.compute.manager [-] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] VM Stopped (Lifecycle Event)
Jan 31 02:52:06 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.444864273s, txc = 0x55f201350600
Jan 31 02:52:06 np0005603609 nova_compute[221550]: 2026-01-31 07:52:06.376 221554 INFO nova.virt.libvirt.driver [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Deleting instance files /var/lib/nova/instances/537adcb0-1fba-4e29-9bb3-b33ba7e3e523_del
Jan 31 02:52:06 np0005603609 nova_compute[221550]: 2026-01-31 07:52:06.377 221554 INFO nova.virt.libvirt.driver [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Deletion of /var/lib/nova/instances/537adcb0-1fba-4e29-9bb3-b33ba7e3e523_del complete
Jan 31 02:52:06 np0005603609 nova_compute[221550]: 2026-01-31 07:52:06.383 221554 DEBUG nova.compute.manager [None req-dacad6a3-18af-450f-977f-70ff21d9a7f1 - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:52:06 np0005603609 nova_compute[221550]: 2026-01-31 07:52:06.387 221554 DEBUG nova.compute.manager [None req-dacad6a3-18af-450f-977f-70ff21d9a7f1 - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:52:06 np0005603609 nova_compute[221550]: 2026-01-31 07:52:06.532 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:06 np0005603609 nova_compute[221550]: 2026-01-31 07:52:06.558 221554 INFO nova.compute.manager [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Took 15.69 seconds to destroy the instance on the hypervisor.
Jan 31 02:52:06 np0005603609 nova_compute[221550]: 2026-01-31 07:52:06.559 221554 DEBUG oslo.service.loopingcall [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 02:52:06 np0005603609 nova_compute[221550]: 2026-01-31 07:52:06.559 221554 DEBUG nova.compute.manager [-] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 02:52:06 np0005603609 nova_compute[221550]: 2026-01-31 07:52:06.559 221554 DEBUG nova.network.neutron [-] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 02:52:06 np0005603609 nova_compute[221550]: 2026-01-31 07:52:06.622 221554 INFO nova.compute.manager [None req-dacad6a3-18af-450f-977f-70ff21d9a7f1 - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] During sync_power_state the instance has a pending task (deleting). Skip.
Jan 31 02:52:06 np0005603609 nova_compute[221550]: 2026-01-31 07:52:06.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:52:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:06.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:07.486 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:52:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:07.486 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:52:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:07.486 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:52:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:07.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:07 np0005603609 nova_compute[221550]: 2026-01-31 07:52:07.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:52:07 np0005603609 nova_compute[221550]: 2026-01-31 07:52:07.774 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:08 np0005603609 nova_compute[221550]: 2026-01-31 07:52:08.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:52:08 np0005603609 nova_compute[221550]: 2026-01-31 07:52:08.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:52:08 np0005603609 nova_compute[221550]: 2026-01-31 07:52:08.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 02:52:08 np0005603609 nova_compute[221550]: 2026-01-31 07:52:08.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 02:52:08 np0005603609 nova_compute[221550]: 2026-01-31 07:52:08.707 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 02:52:08 np0005603609 nova_compute[221550]: 2026-01-31 07:52:08.707 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 02:52:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:08.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:52:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:09.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:52:10 np0005603609 nova_compute[221550]: 2026-01-31 07:52:10.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:52:10 np0005603609 nova_compute[221550]: 2026-01-31 07:52:10.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 02:52:10 np0005603609 nova_compute[221550]: 2026-01-31 07:52:10.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:52:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:10.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:11 np0005603609 nova_compute[221550]: 2026-01-31 07:52:11.083 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:52:11 np0005603609 nova_compute[221550]: 2026-01-31 07:52:11.083 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:52:11 np0005603609 nova_compute[221550]: 2026-01-31 07:52:11.084 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:52:11 np0005603609 nova_compute[221550]: 2026-01-31 07:52:11.084 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 02:52:11 np0005603609 nova_compute[221550]: 2026-01-31 07:52:11.084 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:52:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:11 np0005603609 nova_compute[221550]: 2026-01-31 07:52:11.534 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:52:11 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2746839228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:52:11 np0005603609 nova_compute[221550]: 2026-01-31 07:52:11.568 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:52:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:11.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:11 np0005603609 nova_compute[221550]: 2026-01-31 07:52:11.693 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 02:52:11 np0005603609 nova_compute[221550]: 2026-01-31 07:52:11.694 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4719MB free_disk=20.98541259765625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 02:52:11 np0005603609 nova_compute[221550]: 2026-01-31 07:52:11.694 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:11 np0005603609 nova_compute[221550]: 2026-01-31 07:52:11.694 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:12 np0005603609 nova_compute[221550]: 2026-01-31 07:52:12.617 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:52:12 np0005603609 nova_compute[221550]: 2026-01-31 07:52:12.618 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:52:12 np0005603609 nova_compute[221550]: 2026-01-31 07:52:12.618 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:52:12 np0005603609 nova_compute[221550]: 2026-01-31 07:52:12.658 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 02:52:12 np0005603609 nova_compute[221550]: 2026-01-31 07:52:12.726 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 02:52:12 np0005603609 nova_compute[221550]: 2026-01-31 07:52:12.726 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 02:52:12 np0005603609 nova_compute[221550]: 2026-01-31 07:52:12.757 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 02:52:12 np0005603609 nova_compute[221550]: 2026-01-31 07:52:12.776 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:12 np0005603609 nova_compute[221550]: 2026-01-31 07:52:12.780 221554 DEBUG nova.network.neutron [-] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:52:12 np0005603609 nova_compute[221550]: 2026-01-31 07:52:12.799 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 02:52:12 np0005603609 nova_compute[221550]: 2026-01-31 07:52:12.846 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:52:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:12.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:52:12 np0005603609 nova_compute[221550]: 2026-01-31 07:52:12.976 221554 DEBUG nova.compute.manager [req-55c18170-6370-4e39-9328-afb126978382 req-6ab49230-a687-41df-b4fd-1120cb2e5ed2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Received event network-vif-deleted-80de277a-9f3a-4b8a-a272-23c1948aaa56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:52:12 np0005603609 nova_compute[221550]: 2026-01-31 07:52:12.977 221554 INFO nova.compute.manager [req-55c18170-6370-4e39-9328-afb126978382 req-6ab49230-a687-41df-b4fd-1120cb2e5ed2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Neutron deleted interface 80de277a-9f3a-4b8a-a272-23c1948aaa56; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 02:52:12 np0005603609 nova_compute[221550]: 2026-01-31 07:52:12.978 221554 DEBUG nova.network.neutron [req-55c18170-6370-4e39-9328-afb126978382 req-6ab49230-a687-41df-b4fd-1120cb2e5ed2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:52:13 np0005603609 nova_compute[221550]: 2026-01-31 07:52:13.132 221554 INFO nova.compute.manager [-] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Took 6.57 seconds to deallocate network for instance.#033[00m
Jan 31 02:52:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:52:13 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2589437897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:52:13 np0005603609 nova_compute[221550]: 2026-01-31 07:52:13.280 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:13 np0005603609 nova_compute[221550]: 2026-01-31 07:52:13.285 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:52:13 np0005603609 nova_compute[221550]: 2026-01-31 07:52:13.376 221554 DEBUG nova.compute.manager [req-55c18170-6370-4e39-9328-afb126978382 req-6ab49230-a687-41df-b4fd-1120cb2e5ed2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 537adcb0-1fba-4e29-9bb3-b33ba7e3e523] Detach interface failed, port_id=80de277a-9f3a-4b8a-a272-23c1948aaa56, reason: Instance 537adcb0-1fba-4e29-9bb3-b33ba7e3e523 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 02:52:13 np0005603609 nova_compute[221550]: 2026-01-31 07:52:13.379 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:52:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:13.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:13 np0005603609 nova_compute[221550]: 2026-01-31 07:52:13.614 221554 DEBUG oslo_concurrency.lockutils [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:13 np0005603609 nova_compute[221550]: 2026-01-31 07:52:13.699 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:52:13 np0005603609 nova_compute[221550]: 2026-01-31 07:52:13.700 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:52:13 np0005603609 nova_compute[221550]: 2026-01-31 07:52:13.701 221554 DEBUG oslo_concurrency.lockutils [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:13 np0005603609 nova_compute[221550]: 2026-01-31 07:52:13.759 221554 DEBUG oslo_concurrency.processutils [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:52:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4102396379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:52:14 np0005603609 nova_compute[221550]: 2026-01-31 07:52:14.452 221554 DEBUG oslo_concurrency.processutils [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:14 np0005603609 nova_compute[221550]: 2026-01-31 07:52:14.461 221554 DEBUG nova.compute.provider_tree [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:52:14 np0005603609 nova_compute[221550]: 2026-01-31 07:52:14.783 221554 DEBUG nova.scheduler.client.report [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:52:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:14.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:15 np0005603609 nova_compute[221550]: 2026-01-31 07:52:15.482 221554 DEBUG oslo_concurrency.lockutils [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:52:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:15.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:15 np0005603609 nova_compute[221550]: 2026-01-31 07:52:15.703 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:52:16 np0005603609 nova_compute[221550]: 2026-01-31 07:52:16.023 221554 INFO nova.scheduler.client.report [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Deleted allocations for instance 537adcb0-1fba-4e29-9bb3-b33ba7e3e523#033[00m
Jan 31 02:52:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:16 np0005603609 nova_compute[221550]: 2026-01-31 07:52:16.537 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:16 np0005603609 nova_compute[221550]: 2026-01-31 07:52:16.759 221554 DEBUG oslo_concurrency.lockutils [None req-7a59dab5-23bd-4746-9aa9-7ccfd56e2942 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "537adcb0-1fba-4e29-9bb3-b33ba7e3e523" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 25.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:52:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:16.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:52:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:17.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:52:17 np0005603609 nova_compute[221550]: 2026-01-31 07:52:17.821 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:18.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:19.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:20 np0005603609 nova_compute[221550]: 2026-01-31 07:52:20.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:52:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:20.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:21 np0005603609 nova_compute[221550]: 2026-01-31 07:52:21.539 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:21.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:22 np0005603609 nova_compute[221550]: 2026-01-31 07:52:22.822 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:52:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:22.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:52:23 np0005603609 podman[248143]: 2026-01-31 07:52:23.175463456 +0000 UTC m=+0.092795399 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 02:52:23 np0005603609 podman[248143]: 2026-01-31 07:52:23.26727743 +0000 UTC m=+0.184609373 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:52:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:23.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:24 np0005603609 nova_compute[221550]: 2026-01-31 07:52:24.929 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:24.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:25 np0005603609 nova_compute[221550]: 2026-01-31 07:52:25.000 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:25.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:25 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:52:25 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:52:25 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:52:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:26 np0005603609 nova_compute[221550]: 2026-01-31 07:52:26.542 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:26.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:27 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:52:27 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:52:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:52:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:27.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:52:27 np0005603609 nova_compute[221550]: 2026-01-31 07:52:27.823 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:28.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:52:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:29.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:52:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:52:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:30.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:52:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:31.251 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:52:31 np0005603609 nova_compute[221550]: 2026-01-31 07:52:31.251 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:31.253 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:52:31 np0005603609 nova_compute[221550]: 2026-01-31 07:52:31.544 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:52:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:31.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:52:32 np0005603609 nova_compute[221550]: 2026-01-31 07:52:32.825 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:32.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:33.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:52:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:34.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:52:35 np0005603609 podman[248400]: 2026-01-31 07:52:35.167937937 +0000 UTC m=+0.051067922 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Jan 31 02:52:35 np0005603609 podman[248399]: 2026-01-31 07:52:35.191661199 +0000 UTC m=+0.074728903 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, 
container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 31 02:52:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:35.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:36 np0005603609 nova_compute[221550]: 2026-01-31 07:52:36.546 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:36 np0005603609 ceph-mgr[82033]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:36.837584) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845956837622, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2185, "num_deletes": 257, "total_data_size": 5074291, "memory_usage": 5141136, "flush_reason": "Manual Compaction"}
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845956893036, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 3324359, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36793, "largest_seqno": 38973, "table_properties": {"data_size": 3315458, "index_size": 5459, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19024, "raw_average_key_size": 20, "raw_value_size": 3297377, "raw_average_value_size": 3553, "num_data_blocks": 238, "num_entries": 928, "num_filter_entries": 928, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845752, "oldest_key_time": 1769845752, "file_creation_time": 1769845956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 55787 microseconds, and 5027 cpu microseconds.
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:36.893371) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 3324359 bytes OK
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:36.893455) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:36.902525) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:36.902542) EVENT_LOG_v1 {"time_micros": 1769845956902536, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:36.902559) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 5064631, prev total WAL file size 5064631, number of live WAL files 2.
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:36.903665) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(3246KB)], [69(8359KB)]
Jan 31 02:52:36 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845956903687, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 11884926, "oldest_snapshot_seqno": -1}
Jan 31 02:52:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:36.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:36 np0005603609 nova_compute[221550]: 2026-01-31 07:52:36.985 221554 DEBUG oslo_concurrency.lockutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Acquiring lock "c440f73c-e6b3-46be-a2e3-702703379890" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:36 np0005603609 nova_compute[221550]: 2026-01-31 07:52:36.985 221554 DEBUG oslo_concurrency.lockutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Lock "c440f73c-e6b3-46be-a2e3-702703379890" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.001 221554 DEBUG nova.compute.manager [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 6528 keys, 11725384 bytes, temperature: kUnknown
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845957048536, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 11725384, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11679444, "index_size": 28550, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 167134, "raw_average_key_size": 25, "raw_value_size": 11560048, "raw_average_value_size": 1770, "num_data_blocks": 1149, "num_entries": 6528, "num_filter_entries": 6528, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769845956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:37.048776) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11725384 bytes
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:37.051623) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 82.0 rd, 80.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.2 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(7.1) write-amplify(3.5) OK, records in: 7064, records dropped: 536 output_compression: NoCompression
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:37.051645) EVENT_LOG_v1 {"time_micros": 1769845957051635, "job": 42, "event": "compaction_finished", "compaction_time_micros": 144947, "compaction_time_cpu_micros": 17267, "output_level": 6, "num_output_files": 1, "total_output_size": 11725384, "num_input_records": 7064, "num_output_records": 6528, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845957052140, "job": 42, "event": "table_file_deletion", "file_number": 71}
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845957052969, "job": 42, "event": "table_file_deletion", "file_number": 69}
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:36.903607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:37.053018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:37.053024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:37.053026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:37.053028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:37.053030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.072 221554 DEBUG oslo_concurrency.lockutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.072 221554 DEBUG oslo_concurrency.lockutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.079 221554 DEBUG nova.virt.hardware [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.079 221554 INFO nova.compute.claims [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.188 221554 DEBUG oslo_concurrency.processutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:37.254 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:52:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1357689423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.617 221554 DEBUG oslo_concurrency.processutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.622 221554 DEBUG nova.compute.provider_tree [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.640 221554 DEBUG nova.scheduler.client.report [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:52:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:37.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.667 221554 DEBUG oslo_concurrency.lockutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.668 221554 DEBUG nova.compute.manager [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.718 221554 DEBUG nova.compute.manager [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.718 221554 DEBUG nova.network.neutron [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.741 221554 INFO nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.762 221554 DEBUG nova.compute.manager [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.827 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:37 np0005603609 nova_compute[221550]: 2026-01-31 07:52:37.844 221554 INFO nova.virt.block_device [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Booting with volume a1b4a963-bab3-4e88-b617-96814bf6094a at /dev/vda#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.043 221554 DEBUG os_brick.utils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.044 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.052 221554 DEBUG nova.policy [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93403129e78e44f3916d321433a31bd2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4a9ee1af66824e368ddad5ea68a3201b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.053 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.054 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[20e3f49f-b44e-4abf-abc7-03be0ccd8bb7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.055 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.060 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.061 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[ee050a61-6478-4114-8884-1d1467be5382]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.062 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.068 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.068 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[9d01db6c-b4d1-4145-8a35-f10a71443d27]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.069 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[ccb2c994-02ce-4edd-b12c-8e62a5fdf54a]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.069 221554 DEBUG oslo_concurrency.processutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.086 221554 DEBUG oslo_concurrency.processutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.088 221554 DEBUG os_brick.initiator.connectors.lightos [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.089 221554 DEBUG os_brick.initiator.connectors.lightos [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.089 221554 DEBUG os_brick.initiator.connectors.lightos [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.090 221554 DEBUG os_brick.utils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] <== get_connector_properties: return (46ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 02:52:38 np0005603609 nova_compute[221550]: 2026-01-31 07:52:38.090 221554 DEBUG nova.virt.block_device [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Updating existing volume attachment record: e45281b4-66a6-4aa0-89ef-c871e880bb7d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 02:52:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:52:38 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2684906466' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:52:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:52:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:52:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:52:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:38.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:52:39 np0005603609 nova_compute[221550]: 2026-01-31 07:52:39.135 221554 DEBUG nova.compute.manager [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:52:39 np0005603609 nova_compute[221550]: 2026-01-31 07:52:39.137 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:52:39 np0005603609 nova_compute[221550]: 2026-01-31 07:52:39.138 221554 INFO nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Creating image(s)#033[00m
Jan 31 02:52:39 np0005603609 nova_compute[221550]: 2026-01-31 07:52:39.140 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 02:52:39 np0005603609 nova_compute[221550]: 2026-01-31 07:52:39.140 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Ensure instance console log exists: /var/lib/nova/instances/c440f73c-e6b3-46be-a2e3-702703379890/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:52:39 np0005603609 nova_compute[221550]: 2026-01-31 07:52:39.140 221554 DEBUG oslo_concurrency.lockutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:39 np0005603609 nova_compute[221550]: 2026-01-31 07:52:39.141 221554 DEBUG oslo_concurrency.lockutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:39 np0005603609 nova_compute[221550]: 2026-01-31 07:52:39.141 221554 DEBUG oslo_concurrency.lockutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:52:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:39.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:40 np0005603609 nova_compute[221550]: 2026-01-31 07:52:40.711 221554 DEBUG nova.network.neutron [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Successfully created port: 9b72c5b5-be8a-41f5-abfe-94cdbd6de794 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:52:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:41.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:41 np0005603609 nova_compute[221550]: 2026-01-31 07:52:41.549 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:52:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:41.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:52:42 np0005603609 nova_compute[221550]: 2026-01-31 07:52:42.711 221554 DEBUG nova.network.neutron [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Successfully updated port: 9b72c5b5-be8a-41f5-abfe-94cdbd6de794 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:52:42 np0005603609 nova_compute[221550]: 2026-01-31 07:52:42.779 221554 DEBUG oslo_concurrency.lockutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Acquiring lock "refresh_cache-c440f73c-e6b3-46be-a2e3-702703379890" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:52:42 np0005603609 nova_compute[221550]: 2026-01-31 07:52:42.779 221554 DEBUG oslo_concurrency.lockutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Acquired lock "refresh_cache-c440f73c-e6b3-46be-a2e3-702703379890" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:52:42 np0005603609 nova_compute[221550]: 2026-01-31 07:52:42.780 221554 DEBUG nova.network.neutron [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:52:42 np0005603609 nova_compute[221550]: 2026-01-31 07:52:42.832 221554 DEBUG nova.compute.manager [req-fb6cb117-adcc-46b3-8230-564f684324f8 req-5a3e343a-ad8a-4929-94a5-56fc436dbad2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Received event network-changed-9b72c5b5-be8a-41f5-abfe-94cdbd6de794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:52:42 np0005603609 nova_compute[221550]: 2026-01-31 07:52:42.832 221554 DEBUG nova.compute.manager [req-fb6cb117-adcc-46b3-8230-564f684324f8 req-5a3e343a-ad8a-4929-94a5-56fc436dbad2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Refreshing instance network info cache due to event network-changed-9b72c5b5-be8a-41f5-abfe-94cdbd6de794. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:52:42 np0005603609 nova_compute[221550]: 2026-01-31 07:52:42.833 221554 DEBUG oslo_concurrency.lockutils [req-fb6cb117-adcc-46b3-8230-564f684324f8 req-5a3e343a-ad8a-4929-94a5-56fc436dbad2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c440f73c-e6b3-46be-a2e3-702703379890" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:52:42 np0005603609 nova_compute[221550]: 2026-01-31 07:52:42.835 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:43 np0005603609 nova_compute[221550]: 2026-01-31 07:52:43.000 221554 DEBUG nova.network.neutron [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:52:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:43.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:43.294631) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963294677, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 337, "num_deletes": 251, "total_data_size": 292925, "memory_usage": 300136, "flush_reason": "Manual Compaction"}
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963300321, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 193425, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38979, "largest_seqno": 39310, "table_properties": {"data_size": 191219, "index_size": 370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5457, "raw_average_key_size": 18, "raw_value_size": 186894, "raw_average_value_size": 644, "num_data_blocks": 16, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845957, "oldest_key_time": 1769845957, "file_creation_time": 1769845963, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 5724 microseconds, and 1032 cpu microseconds.
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:43.300355) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 193425 bytes OK
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:43.300379) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:43.303533) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:43.303552) EVENT_LOG_v1 {"time_micros": 1769845963303545, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:43.303569) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 290579, prev total WAL file size 290579, number of live WAL files 2.
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:43.303909) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(188KB)], [72(11MB)]
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963304037, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 11918809, "oldest_snapshot_seqno": -1}
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6305 keys, 9986935 bytes, temperature: kUnknown
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963461406, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 9986935, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9944007, "index_size": 26050, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 163204, "raw_average_key_size": 25, "raw_value_size": 9830030, "raw_average_value_size": 1559, "num_data_blocks": 1037, "num_entries": 6305, "num_filter_entries": 6305, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769845963, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:43.461623) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 9986935 bytes
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:43.491499) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 75.7 rd, 63.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.2 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(113.3) write-amplify(51.6) OK, records in: 6818, records dropped: 513 output_compression: NoCompression
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:43.491527) EVENT_LOG_v1 {"time_micros": 1769845963491515, "job": 44, "event": "compaction_finished", "compaction_time_micros": 157420, "compaction_time_cpu_micros": 16546, "output_level": 6, "num_output_files": 1, "total_output_size": 9986935, "num_input_records": 6818, "num_output_records": 6305, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963491712, "job": 44, "event": "table_file_deletion", "file_number": 74}
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963492839, "job": 44, "event": "table_file_deletion", "file_number": 72}
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:43.303854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:43.492861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:43.492865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:43.492866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:43.492867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:52:43 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:52:43.492869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:52:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:43.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.870 221554 DEBUG nova.network.neutron [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Updating instance_info_cache with network_info: [{"id": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "address": "fa:16:3e:dc:4a:f7", "network": {"id": "868fa2eb-4f72-481a-9c66-8c69ae0a2f4a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-281656863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a9ee1af66824e368ddad5ea68a3201b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b72c5b5-be", "ovs_interfaceid": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.903 221554 DEBUG oslo_concurrency.lockutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Releasing lock "refresh_cache-c440f73c-e6b3-46be-a2e3-702703379890" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.903 221554 DEBUG nova.compute.manager [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Instance network_info: |[{"id": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "address": "fa:16:3e:dc:4a:f7", "network": {"id": "868fa2eb-4f72-481a-9c66-8c69ae0a2f4a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-281656863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a9ee1af66824e368ddad5ea68a3201b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b72c5b5-be", "ovs_interfaceid": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.904 221554 DEBUG oslo_concurrency.lockutils [req-fb6cb117-adcc-46b3-8230-564f684324f8 req-5a3e343a-ad8a-4929-94a5-56fc436dbad2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c440f73c-e6b3-46be-a2e3-702703379890" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.904 221554 DEBUG nova.network.neutron [req-fb6cb117-adcc-46b3-8230-564f684324f8 req-5a3e343a-ad8a-4929-94a5-56fc436dbad2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Refreshing network info cache for port 9b72c5b5-be8a-41f5-abfe-94cdbd6de794 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.907 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Start _get_guest_xml network_info=[{"id": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "address": "fa:16:3e:dc:4a:f7", "network": {"id": "868fa2eb-4f72-481a-9c66-8c69ae0a2f4a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-281656863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a9ee1af66824e368ddad5ea68a3201b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b72c5b5-be", "ovs_interfaceid": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': 'e45281b4-66a6-4aa0-89ef-c871e880bb7d', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a1b4a963-bab3-4e88-b617-96814bf6094a', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a1b4a963-bab3-4e88-b617-96814bf6094a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'c440f73c-e6b3-46be-a2e3-702703379890', 'attached_at': '', 'detached_at': '', 'volume_id': 'a1b4a963-bab3-4e88-b617-96814bf6094a', 'serial': 'a1b4a963-bab3-4e88-b617-96814bf6094a'}, 'delete_on_termination': True, 'device_type': 'disk', 'disk_bus': 'virtio', 'guest_format': None, 'mount_device': '/dev/vda', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.913 221554 WARNING nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.917 221554 DEBUG nova.virt.libvirt.host [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.918 221554 DEBUG nova.virt.libvirt.host [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.921 221554 DEBUG nova.virt.libvirt.host [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.921 221554 DEBUG nova.virt.libvirt.host [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.922 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.923 221554 DEBUG nova.virt.hardware [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.923 221554 DEBUG nova.virt.hardware [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.924 221554 DEBUG nova.virt.hardware [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.924 221554 DEBUG nova.virt.hardware [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.924 221554 DEBUG nova.virt.hardware [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.924 221554 DEBUG nova.virt.hardware [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.924 221554 DEBUG nova.virt.hardware [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.924 221554 DEBUG nova.virt.hardware [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.924 221554 DEBUG nova.virt.hardware [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.925 221554 DEBUG nova.virt.hardware [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.925 221554 DEBUG nova.virt.hardware [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.955 221554 DEBUG nova.storage.rbd_utils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] rbd image c440f73c-e6b3-46be-a2e3-702703379890_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:52:44 np0005603609 nova_compute[221550]: 2026-01-31 07:52:44.960 221554 DEBUG oslo_concurrency.processutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:45.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:52:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2159163348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.378 221554 DEBUG oslo_concurrency.processutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.442 221554 DEBUG nova.virt.libvirt.vif [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:52:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-1483795707',display_name='tempest-ServersTestBootFromVolume-server-1483795707',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-1483795707',id=74,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLghy76daco7LPdR7LJ4ZpR9HlKVlfpSkn4hjpGPHUnSgqF+p/LRarFMcdh5UhCtNMFVsTCUdmFk5AqPUdvfsTHsDylq9sjh7Eic0az0zDNGmUXVEUFm9c1a9Gyj3iNSNg==',key_name='tempest-keypair-714413144',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4a9ee1af66824e368ddad5ea68a3201b',ramdisk_id='',reservation_id='r-60d1v4ja',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-591669822',owner_user_name='tempest-ServersTestBootFromVolume-591669822-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:52:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93403129e78e44f3916d321433a31bd2',uuid=c440f73c-e6b3-46be-a2e3-702703379890,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "address": "fa:16:3e:dc:4a:f7", "network": {"id": "868fa2eb-4f72-481a-9c66-8c69ae0a2f4a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-281656863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a9ee1af66824e368ddad5ea68a3201b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b72c5b5-be", "ovs_interfaceid": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.443 221554 DEBUG nova.network.os_vif_util [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Converting VIF {"id": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "address": "fa:16:3e:dc:4a:f7", "network": {"id": "868fa2eb-4f72-481a-9c66-8c69ae0a2f4a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-281656863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a9ee1af66824e368ddad5ea68a3201b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b72c5b5-be", "ovs_interfaceid": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.444 221554 DEBUG nova.network.os_vif_util [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:4a:f7,bridge_name='br-int',has_traffic_filtering=True,id=9b72c5b5-be8a-41f5-abfe-94cdbd6de794,network=Network(868fa2eb-4f72-481a-9c66-8c69ae0a2f4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b72c5b5-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.445 221554 DEBUG nova.objects.instance [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Lazy-loading 'pci_devices' on Instance uuid c440f73c-e6b3-46be-a2e3-702703379890 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.522 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  <uuid>c440f73c-e6b3-46be-a2e3-702703379890</uuid>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  <name>instance-0000004a</name>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServersTestBootFromVolume-server-1483795707</nova:name>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:52:44</nova:creationTime>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        <nova:user uuid="93403129e78e44f3916d321433a31bd2">tempest-ServersTestBootFromVolume-591669822-project-member</nova:user>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        <nova:project uuid="4a9ee1af66824e368ddad5ea68a3201b">tempest-ServersTestBootFromVolume-591669822</nova:project>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        <nova:port uuid="9b72c5b5-be8a-41f5-abfe-94cdbd6de794">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <entry name="serial">c440f73c-e6b3-46be-a2e3-702703379890</entry>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <entry name="uuid">c440f73c-e6b3-46be-a2e3-702703379890</entry>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/c440f73c-e6b3-46be-a2e3-702703379890_disk.config">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="volumes/volume-a1b4a963-bab3-4e88-b617-96814bf6094a">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <serial>a1b4a963-bab3-4e88-b617-96814bf6094a</serial>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:dc:4a:f7"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <target dev="tap9b72c5b5-be"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/c440f73c-e6b3-46be-a2e3-702703379890/console.log" append="off"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:52:45 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:52:45 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:52:45 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:52:45 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.523 221554 DEBUG nova.compute.manager [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Preparing to wait for external event network-vif-plugged-9b72c5b5-be8a-41f5-abfe-94cdbd6de794 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.523 221554 DEBUG oslo_concurrency.lockutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Acquiring lock "c440f73c-e6b3-46be-a2e3-702703379890-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.523 221554 DEBUG oslo_concurrency.lockutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Lock "c440f73c-e6b3-46be-a2e3-702703379890-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.524 221554 DEBUG oslo_concurrency.lockutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Lock "c440f73c-e6b3-46be-a2e3-702703379890-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.524 221554 DEBUG nova.virt.libvirt.vif [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:52:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-1483795707',display_name='tempest-ServersTestBootFromVolume-server-1483795707',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-1483795707',id=74,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLghy76daco7LPdR7LJ4ZpR9HlKVlfpSkn4hjpGPHUnSgqF+p/LRarFMcdh5UhCtNMFVsTCUdmFk5AqPUdvfsTHsDylq9sjh7Eic0az0zDNGmUXVEUFm9c1a9Gyj3iNSNg==',key_name='tempest-keypair-714413144',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4a9ee1af66824e368ddad5ea68a3201b',ramdisk_id='',reservation_id='r-60d1v4ja',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-591669822',owner_user_name='tempest-ServersTestBootFromVolume-591669822-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:52:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93403129e78e44f3916d321433a31bd2',uuid=c440f73c-e6b3-46be-a2e3-702703379890,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "address": "fa:16:3e:dc:4a:f7", "network": {"id": "868fa2eb-4f72-481a-9c66-8c69ae0a2f4a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-281656863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a9ee1af66824e368ddad5ea68a3201b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b72c5b5-be", "ovs_interfaceid": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.524 221554 DEBUG nova.network.os_vif_util [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Converting VIF {"id": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "address": "fa:16:3e:dc:4a:f7", "network": {"id": "868fa2eb-4f72-481a-9c66-8c69ae0a2f4a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-281656863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a9ee1af66824e368ddad5ea68a3201b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b72c5b5-be", "ovs_interfaceid": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.525 221554 DEBUG nova.network.os_vif_util [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:4a:f7,bridge_name='br-int',has_traffic_filtering=True,id=9b72c5b5-be8a-41f5-abfe-94cdbd6de794,network=Network(868fa2eb-4f72-481a-9c66-8c69ae0a2f4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b72c5b5-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.525 221554 DEBUG os_vif [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:4a:f7,bridge_name='br-int',has_traffic_filtering=True,id=9b72c5b5-be8a-41f5-abfe-94cdbd6de794,network=Network(868fa2eb-4f72-481a-9c66-8c69ae0a2f4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b72c5b5-be') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.526 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.526 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.527 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.529 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.529 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b72c5b5-be, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.530 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b72c5b5-be, col_values=(('external_ids', {'iface-id': '9b72c5b5-be8a-41f5-abfe-94cdbd6de794', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:4a:f7', 'vm-uuid': 'c440f73c-e6b3-46be-a2e3-702703379890'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.531 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:45 np0005603609 NetworkManager[49064]: <info>  [1769845965.5327] manager: (tap9b72c5b5-be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.533 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.536 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.537 221554 INFO os_vif [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:4a:f7,bridge_name='br-int',has_traffic_filtering=True,id=9b72c5b5-be8a-41f5-abfe-94cdbd6de794,network=Network(868fa2eb-4f72-481a-9c66-8c69ae0a2f4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b72c5b5-be')#033[00m
Jan 31 02:52:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:45.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.944 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.944 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.945 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] No VIF found with MAC fa:16:3e:dc:4a:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.945 221554 INFO nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Using config drive#033[00m
Jan 31 02:52:45 np0005603609 nova_compute[221550]: 2026-01-31 07:52:45.972 221554 DEBUG nova.storage.rbd_utils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] rbd image c440f73c-e6b3-46be-a2e3-702703379890_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:52:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:46 np0005603609 nova_compute[221550]: 2026-01-31 07:52:46.536 221554 INFO nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Creating config drive at /var/lib/nova/instances/c440f73c-e6b3-46be-a2e3-702703379890/disk.config#033[00m
Jan 31 02:52:46 np0005603609 nova_compute[221550]: 2026-01-31 07:52:46.540 221554 DEBUG oslo_concurrency.processutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c440f73c-e6b3-46be-a2e3-702703379890/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpr_k2hohr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:46 np0005603609 nova_compute[221550]: 2026-01-31 07:52:46.658 221554 DEBUG oslo_concurrency.processutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c440f73c-e6b3-46be-a2e3-702703379890/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpr_k2hohr" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:46 np0005603609 nova_compute[221550]: 2026-01-31 07:52:46.686 221554 DEBUG nova.storage.rbd_utils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] rbd image c440f73c-e6b3-46be-a2e3-702703379890_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:52:46 np0005603609 nova_compute[221550]: 2026-01-31 07:52:46.690 221554 DEBUG oslo_concurrency.processutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c440f73c-e6b3-46be-a2e3-702703379890/disk.config c440f73c-e6b3-46be-a2e3-702703379890_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:46 np0005603609 nova_compute[221550]: 2026-01-31 07:52:46.704 221554 DEBUG nova.network.neutron [req-fb6cb117-adcc-46b3-8230-564f684324f8 req-5a3e343a-ad8a-4929-94a5-56fc436dbad2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Updated VIF entry in instance network info cache for port 9b72c5b5-be8a-41f5-abfe-94cdbd6de794. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:52:46 np0005603609 nova_compute[221550]: 2026-01-31 07:52:46.705 221554 DEBUG nova.network.neutron [req-fb6cb117-adcc-46b3-8230-564f684324f8 req-5a3e343a-ad8a-4929-94a5-56fc436dbad2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Updating instance_info_cache with network_info: [{"id": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "address": "fa:16:3e:dc:4a:f7", "network": {"id": "868fa2eb-4f72-481a-9c66-8c69ae0a2f4a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-281656863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a9ee1af66824e368ddad5ea68a3201b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b72c5b5-be", "ovs_interfaceid": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:52:46 np0005603609 nova_compute[221550]: 2026-01-31 07:52:46.757 221554 DEBUG oslo_concurrency.lockutils [req-fb6cb117-adcc-46b3-8230-564f684324f8 req-5a3e343a-ad8a-4929-94a5-56fc436dbad2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c440f73c-e6b3-46be-a2e3-702703379890" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:52:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:47.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:47.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:47 np0005603609 nova_compute[221550]: 2026-01-31 07:52:47.789 221554 DEBUG oslo_concurrency.processutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c440f73c-e6b3-46be-a2e3-702703379890/disk.config c440f73c-e6b3-46be-a2e3-702703379890_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:47 np0005603609 nova_compute[221550]: 2026-01-31 07:52:47.790 221554 INFO nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Deleting local config drive /var/lib/nova/instances/c440f73c-e6b3-46be-a2e3-702703379890/disk.config because it was imported into RBD.#033[00m
Jan 31 02:52:47 np0005603609 nova_compute[221550]: 2026-01-31 07:52:47.832 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:47 np0005603609 kernel: tap9b72c5b5-be: entered promiscuous mode
Jan 31 02:52:47 np0005603609 NetworkManager[49064]: <info>  [1769845967.8608] manager: (tap9b72c5b5-be): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Jan 31 02:52:47 np0005603609 ovn_controller[130359]: 2026-01-31T07:52:47Z|00221|binding|INFO|Claiming lport 9b72c5b5-be8a-41f5-abfe-94cdbd6de794 for this chassis.
Jan 31 02:52:47 np0005603609 ovn_controller[130359]: 2026-01-31T07:52:47Z|00222|binding|INFO|9b72c5b5-be8a-41f5-abfe-94cdbd6de794: Claiming fa:16:3e:dc:4a:f7 10.100.0.8
Jan 31 02:52:47 np0005603609 nova_compute[221550]: 2026-01-31 07:52:47.862 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:47 np0005603609 nova_compute[221550]: 2026-01-31 07:52:47.867 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:47 np0005603609 nova_compute[221550]: 2026-01-31 07:52:47.872 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:47.889 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:4a:f7 10.100.0.8'], port_security=['fa:16:3e:dc:4a:f7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c440f73c-e6b3-46be-a2e3-702703379890', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4a9ee1af66824e368ddad5ea68a3201b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1af7dded-ba4c-464c-a37d-082e55c9ea5f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4986fb78-ee27-4ba6-b733-7a8bc7b4bbd0, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=9b72c5b5-be8a-41f5-abfe-94cdbd6de794) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:52:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:47.891 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 9b72c5b5-be8a-41f5-abfe-94cdbd6de794 in datapath 868fa2eb-4f72-481a-9c66-8c69ae0a2f4a bound to our chassis#033[00m
Jan 31 02:52:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:47.893 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 868fa2eb-4f72-481a-9c66-8c69ae0a2f4a#033[00m
Jan 31 02:52:47 np0005603609 systemd-udevd[248635]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:52:47 np0005603609 systemd-machined[190912]: New machine qemu-33-instance-0000004a.
Jan 31 02:52:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:47.903 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5955c539-420e-4f73-af45-077628f1e8be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:47.904 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap868fa2eb-41 in ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:52:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:47.908 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap868fa2eb-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:52:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:47.908 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e0fa7725-c6c9-4d54-9648-786b49374fb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:47 np0005603609 NetworkManager[49064]: <info>  [1769845967.9101] device (tap9b72c5b5-be): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:52:47 np0005603609 NetworkManager[49064]: <info>  [1769845967.9108] device (tap9b72c5b5-be): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:52:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:47.910 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[99e41a2c-01c6-44d6-a456-d63b4ff6472f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:47 np0005603609 nova_compute[221550]: 2026-01-31 07:52:47.911 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:47 np0005603609 ovn_controller[130359]: 2026-01-31T07:52:47Z|00223|binding|INFO|Setting lport 9b72c5b5-be8a-41f5-abfe-94cdbd6de794 ovn-installed in OVS
Jan 31 02:52:47 np0005603609 ovn_controller[130359]: 2026-01-31T07:52:47Z|00224|binding|INFO|Setting lport 9b72c5b5-be8a-41f5-abfe-94cdbd6de794 up in Southbound
Jan 31 02:52:47 np0005603609 nova_compute[221550]: 2026-01-31 07:52:47.917 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:47 np0005603609 systemd[1]: Started Virtual Machine qemu-33-instance-0000004a.
Jan 31 02:52:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:47.922 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[7859c70a-fd88-4a57-bfcd-4f44da098a7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:47.938 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[19f16e4d-193a-44e8-b363-fd63db1b5b28]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:47.969 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[37b89086-1b7d-4017-9bb9-627f0888f1f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:47 np0005603609 systemd-udevd[248639]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:52:47 np0005603609 NetworkManager[49064]: <info>  [1769845967.9808] manager: (tap868fa2eb-40): new Veth device (/org/freedesktop/NetworkManager/Devices/114)
Jan 31 02:52:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:47.979 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e114ea-6043-494e-a9e5-a226a6866e65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:48.006 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0fd342-a626-4525-a8f3-7572b1799cdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:48.011 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[73015a21-a9c2-406e-a205-7d01b361aa7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:48 np0005603609 NetworkManager[49064]: <info>  [1769845968.0332] device (tap868fa2eb-40): carrier: link connected
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:48.039 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[cf046002-e9a6-4476-8048-9eac6bafecf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:48.067 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[06e936c3-3271-42ed-80a9-23430db92edd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap868fa2eb-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:6a:9a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620773, 'reachable_time': 16543, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248669, 'error': None, 'target': 'ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:48.086 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[495a797e-a144-46f6-98b5-9e6261bc2c57]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0a:6a9a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 620773, 'tstamp': 620773}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248670, 'error': None, 'target': 'ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:48.109 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4c181d14-19b6-4df9-8d1e-296a71362703]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap868fa2eb-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:6a:9a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620773, 'reachable_time': 16543, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248671, 'error': None, 'target': 'ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:48.144 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[87702b5d-9685-406b-8336-78ca6167d9f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:48.200 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd7b07c-71dc-40b8-910d-1e582f9f50fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:48.202 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap868fa2eb-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:48.202 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:48.203 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap868fa2eb-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:52:48 np0005603609 nova_compute[221550]: 2026-01-31 07:52:48.204 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:48 np0005603609 NetworkManager[49064]: <info>  [1769845968.2055] manager: (tap868fa2eb-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Jan 31 02:52:48 np0005603609 kernel: tap868fa2eb-40: entered promiscuous mode
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:48.207 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap868fa2eb-40, col_values=(('external_ids', {'iface-id': '09a2a890-f48a-4bdd-82d5-e0d14979a802'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:52:48 np0005603609 ovn_controller[130359]: 2026-01-31T07:52:48Z|00225|binding|INFO|Releasing lport 09a2a890-f48a-4bdd-82d5-e0d14979a802 from this chassis (sb_readonly=0)
Jan 31 02:52:48 np0005603609 nova_compute[221550]: 2026-01-31 07:52:48.208 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:48 np0005603609 nova_compute[221550]: 2026-01-31 07:52:48.214 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:48.214 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/868fa2eb-4f72-481a-9c66-8c69ae0a2f4a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/868fa2eb-4f72-481a-9c66-8c69ae0a2f4a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:48.215 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[09ec487e-5999-40fd-a5dd-6cb4cd033ea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:48.215 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/868fa2eb-4f72-481a-9c66-8c69ae0a2f4a.pid.haproxy
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 868fa2eb-4f72-481a-9c66-8c69ae0a2f4a
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:52:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:52:48.216 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a', 'env', 'PROCESS_TAG=haproxy-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/868fa2eb-4f72-481a-9c66-8c69ae0a2f4a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:52:48 np0005603609 nova_compute[221550]: 2026-01-31 07:52:48.574 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845968.5728014, c440f73c-e6b3-46be-a2e3-702703379890 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:52:48 np0005603609 nova_compute[221550]: 2026-01-31 07:52:48.575 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c440f73c-e6b3-46be-a2e3-702703379890] VM Started (Lifecycle Event)#033[00m
Jan 31 02:52:48 np0005603609 podman[248741]: 2026-01-31 07:52:48.5476597 +0000 UTC m=+0.033607412 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:52:48 np0005603609 nova_compute[221550]: 2026-01-31 07:52:48.647 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:52:48 np0005603609 nova_compute[221550]: 2026-01-31 07:52:48.653 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845968.5730376, c440f73c-e6b3-46be-a2e3-702703379890 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:52:48 np0005603609 nova_compute[221550]: 2026-01-31 07:52:48.653 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c440f73c-e6b3-46be-a2e3-702703379890] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:52:48 np0005603609 nova_compute[221550]: 2026-01-31 07:52:48.734 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:52:48 np0005603609 nova_compute[221550]: 2026-01-31 07:52:48.738 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:52:48 np0005603609 nova_compute[221550]: 2026-01-31 07:52:48.799 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c440f73c-e6b3-46be-a2e3-702703379890] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:52:49 np0005603609 podman[248741]: 2026-01-31 07:52:49.014569359 +0000 UTC m=+0.500517071 container create d4172129a0330ee6e5cd63c0a3ac0adeb321865b04c895e1ba05434d804932d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:52:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:49.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:49 np0005603609 systemd[1]: Started libpod-conmon-d4172129a0330ee6e5cd63c0a3ac0adeb321865b04c895e1ba05434d804932d1.scope.
Jan 31 02:52:49 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:52:49 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac238a29fa6e4cb130ada719f9be05d38f113f2f0a50447f14d2f8d487352f24/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:52:49 np0005603609 podman[248741]: 2026-01-31 07:52:49.476805555 +0000 UTC m=+0.962753307 container init d4172129a0330ee6e5cd63c0a3ac0adeb321865b04c895e1ba05434d804932d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 02:52:49 np0005603609 podman[248741]: 2026-01-31 07:52:49.485372071 +0000 UTC m=+0.971319783 container start d4172129a0330ee6e5cd63c0a3ac0adeb321865b04c895e1ba05434d804932d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:52:49 np0005603609 neutron-haproxy-ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a[248758]: [NOTICE]   (248762) : New worker (248764) forked
Jan 31 02:52:49 np0005603609 neutron-haproxy-ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a[248758]: [NOTICE]   (248762) : Loading success.
Jan 31 02:52:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:49.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:50 np0005603609 nova_compute[221550]: 2026-01-31 07:52:50.533 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:51.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.347 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Acquiring lock "a214f153-3adc-4183-8407-d768f3fbf70e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.347 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.368 221554 DEBUG nova.compute.manager [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.466 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.467 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.475 221554 DEBUG nova.virt.hardware [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.476 221554 INFO nova.compute.claims [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.486 221554 DEBUG nova.compute.manager [req-b6b0f59d-d6d1-4ffb-a2b0-bccf881740d3 req-8d102653-5f8b-4dd6-bc12-38dfa4250e78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Received event network-vif-plugged-9b72c5b5-be8a-41f5-abfe-94cdbd6de794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.487 221554 DEBUG oslo_concurrency.lockutils [req-b6b0f59d-d6d1-4ffb-a2b0-bccf881740d3 req-8d102653-5f8b-4dd6-bc12-38dfa4250e78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c440f73c-e6b3-46be-a2e3-702703379890-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.488 221554 DEBUG oslo_concurrency.lockutils [req-b6b0f59d-d6d1-4ffb-a2b0-bccf881740d3 req-8d102653-5f8b-4dd6-bc12-38dfa4250e78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c440f73c-e6b3-46be-a2e3-702703379890-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.488 221554 DEBUG oslo_concurrency.lockutils [req-b6b0f59d-d6d1-4ffb-a2b0-bccf881740d3 req-8d102653-5f8b-4dd6-bc12-38dfa4250e78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c440f73c-e6b3-46be-a2e3-702703379890-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.488 221554 DEBUG nova.compute.manager [req-b6b0f59d-d6d1-4ffb-a2b0-bccf881740d3 req-8d102653-5f8b-4dd6-bc12-38dfa4250e78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Processing event network-vif-plugged-9b72c5b5-be8a-41f5-abfe-94cdbd6de794 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.489 221554 DEBUG nova.compute.manager [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.494 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845971.4938588, c440f73c-e6b3-46be-a2e3-702703379890 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.495 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c440f73c-e6b3-46be-a2e3-702703379890] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.497 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.502 221554 INFO nova.virt.libvirt.driver [-] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Instance spawned successfully.#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.502 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.582 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.590 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.594 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.594 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.595 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.595 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.595 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.595 221554 DEBUG nova.virt.libvirt.driver [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.638 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c440f73c-e6b3-46be-a2e3-702703379890] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:52:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:51.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.714 221554 DEBUG oslo_concurrency.processutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.732 221554 INFO nova.compute.manager [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Took 12.60 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.732 221554 DEBUG nova.compute.manager [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.830 221554 INFO nova.compute.manager [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Took 14.78 seconds to build instance.#033[00m
Jan 31 02:52:51 np0005603609 nova_compute[221550]: 2026-01-31 07:52:51.879 221554 DEBUG oslo_concurrency.lockutils [None req-a5a47114-b9e0-412d-936b-c85020d8850d 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Lock "c440f73c-e6b3-46be-a2e3-702703379890" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:52:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:52:52 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3956740207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:52:52 np0005603609 nova_compute[221550]: 2026-01-31 07:52:52.363 221554 DEBUG oslo_concurrency.processutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:52 np0005603609 nova_compute[221550]: 2026-01-31 07:52:52.367 221554 DEBUG nova.compute.provider_tree [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:52:52 np0005603609 nova_compute[221550]: 2026-01-31 07:52:52.395 221554 DEBUG nova.scheduler.client.report [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:52:52 np0005603609 nova_compute[221550]: 2026-01-31 07:52:52.444 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.977s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:52:52 np0005603609 nova_compute[221550]: 2026-01-31 07:52:52.444 221554 DEBUG nova.compute.manager [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 02:52:52 np0005603609 nova_compute[221550]: 2026-01-31 07:52:52.608 221554 DEBUG nova.compute.manager [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 02:52:52 np0005603609 nova_compute[221550]: 2026-01-31 07:52:52.608 221554 DEBUG nova.network.neutron [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 02:52:52 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Jan 31 02:52:52 np0005603609 nova_compute[221550]: 2026-01-31 07:52:52.796 221554 INFO nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 02:52:52 np0005603609 nova_compute[221550]: 2026-01-31 07:52:52.810 221554 DEBUG nova.policy [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'db9fe8a6ca1443d099966af09b0a5402', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ce2602aa08c4766b5f575340b920fd7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 02:52:52 np0005603609 nova_compute[221550]: 2026-01-31 07:52:52.833 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:52 np0005603609 nova_compute[221550]: 2026-01-31 07:52:52.878 221554 DEBUG nova.compute.manager [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 02:52:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:53.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:53 np0005603609 nova_compute[221550]: 2026-01-31 07:52:53.203 221554 DEBUG nova.compute.manager [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 02:52:53 np0005603609 nova_compute[221550]: 2026-01-31 07:52:53.204 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 02:52:53 np0005603609 nova_compute[221550]: 2026-01-31 07:52:53.205 221554 INFO nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Creating image(s)
Jan 31 02:52:53 np0005603609 nova_compute[221550]: 2026-01-31 07:52:53.404 221554 DEBUG nova.storage.rbd_utils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] rbd image a214f153-3adc-4183-8407-d768f3fbf70e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:52:53 np0005603609 nova_compute[221550]: 2026-01-31 07:52:53.451 221554 DEBUG nova.storage.rbd_utils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] rbd image a214f153-3adc-4183-8407-d768f3fbf70e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:52:53 np0005603609 nova_compute[221550]: 2026-01-31 07:52:53.482 221554 DEBUG nova.storage.rbd_utils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] rbd image a214f153-3adc-4183-8407-d768f3fbf70e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:52:53 np0005603609 nova_compute[221550]: 2026-01-31 07:52:53.486 221554 DEBUG oslo_concurrency.processutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:52:53 np0005603609 nova_compute[221550]: 2026-01-31 07:52:53.537 221554 DEBUG oslo_concurrency.processutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:52:53 np0005603609 nova_compute[221550]: 2026-01-31 07:52:53.538 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:52:53 np0005603609 nova_compute[221550]: 2026-01-31 07:52:53.539 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:52:53 np0005603609 nova_compute[221550]: 2026-01-31 07:52:53.540 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:52:53 np0005603609 nova_compute[221550]: 2026-01-31 07:52:53.568 221554 DEBUG nova.storage.rbd_utils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] rbd image a214f153-3adc-4183-8407-d768f3fbf70e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:52:53 np0005603609 nova_compute[221550]: 2026-01-31 07:52:53.573 221554 DEBUG oslo_concurrency.processutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 a214f153-3adc-4183-8407-d768f3fbf70e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:52:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:53.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:54 np0005603609 nova_compute[221550]: 2026-01-31 07:52:54.066 221554 DEBUG nova.compute.manager [req-158fa318-e047-4bdf-ba8b-fa9b623f4747 req-9d98ffb4-c5e7-4c5d-bcd4-4892d17e3ecf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Received event network-vif-plugged-9b72c5b5-be8a-41f5-abfe-94cdbd6de794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:52:54 np0005603609 nova_compute[221550]: 2026-01-31 07:52:54.067 221554 DEBUG oslo_concurrency.lockutils [req-158fa318-e047-4bdf-ba8b-fa9b623f4747 req-9d98ffb4-c5e7-4c5d-bcd4-4892d17e3ecf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c440f73c-e6b3-46be-a2e3-702703379890-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:52:54 np0005603609 nova_compute[221550]: 2026-01-31 07:52:54.067 221554 DEBUG oslo_concurrency.lockutils [req-158fa318-e047-4bdf-ba8b-fa9b623f4747 req-9d98ffb4-c5e7-4c5d-bcd4-4892d17e3ecf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c440f73c-e6b3-46be-a2e3-702703379890-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:52:54 np0005603609 nova_compute[221550]: 2026-01-31 07:52:54.067 221554 DEBUG oslo_concurrency.lockutils [req-158fa318-e047-4bdf-ba8b-fa9b623f4747 req-9d98ffb4-c5e7-4c5d-bcd4-4892d17e3ecf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c440f73c-e6b3-46be-a2e3-702703379890-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:52:54 np0005603609 nova_compute[221550]: 2026-01-31 07:52:54.068 221554 DEBUG nova.compute.manager [req-158fa318-e047-4bdf-ba8b-fa9b623f4747 req-9d98ffb4-c5e7-4c5d-bcd4-4892d17e3ecf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] No waiting events found dispatching network-vif-plugged-9b72c5b5-be8a-41f5-abfe-94cdbd6de794 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:52:54 np0005603609 nova_compute[221550]: 2026-01-31 07:52:54.068 221554 WARNING nova.compute.manager [req-158fa318-e047-4bdf-ba8b-fa9b623f4747 req-9d98ffb4-c5e7-4c5d-bcd4-4892d17e3ecf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Received unexpected event network-vif-plugged-9b72c5b5-be8a-41f5-abfe-94cdbd6de794 for instance with vm_state active and task_state None.
Jan 31 02:52:54 np0005603609 nova_compute[221550]: 2026-01-31 07:52:54.575 221554 DEBUG nova.network.neutron [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Successfully created port: 25c1ece7-cf11-445a-b58a-c1d924dadd91 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 02:52:54 np0005603609 nova_compute[221550]: 2026-01-31 07:52:54.731 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:54 np0005603609 NetworkManager[49064]: <info>  [1769845974.7323] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Jan 31 02:52:54 np0005603609 NetworkManager[49064]: <info>  [1769845974.7331] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Jan 31 02:52:54 np0005603609 nova_compute[221550]: 2026-01-31 07:52:54.782 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:54 np0005603609 ovn_controller[130359]: 2026-01-31T07:52:54Z|00226|binding|INFO|Releasing lport 09a2a890-f48a-4bdd-82d5-e0d14979a802 from this chassis (sb_readonly=0)
Jan 31 02:52:54 np0005603609 nova_compute[221550]: 2026-01-31 07:52:54.814 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:52:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:55.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:52:55 np0005603609 nova_compute[221550]: 2026-01-31 07:52:55.081 221554 DEBUG nova.compute.manager [req-a90098ac-4457-4e06-96f3-98aa5b32ecd0 req-6fe164a0-fe01-4cc1-bb44-d9c661f2f5c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Received event network-changed-9b72c5b5-be8a-41f5-abfe-94cdbd6de794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:52:55 np0005603609 nova_compute[221550]: 2026-01-31 07:52:55.082 221554 DEBUG nova.compute.manager [req-a90098ac-4457-4e06-96f3-98aa5b32ecd0 req-6fe164a0-fe01-4cc1-bb44-d9c661f2f5c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Refreshing instance network info cache due to event network-changed-9b72c5b5-be8a-41f5-abfe-94cdbd6de794. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 02:52:55 np0005603609 nova_compute[221550]: 2026-01-31 07:52:55.082 221554 DEBUG oslo_concurrency.lockutils [req-a90098ac-4457-4e06-96f3-98aa5b32ecd0 req-6fe164a0-fe01-4cc1-bb44-d9c661f2f5c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c440f73c-e6b3-46be-a2e3-702703379890" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:52:55 np0005603609 nova_compute[221550]: 2026-01-31 07:52:55.083 221554 DEBUG oslo_concurrency.lockutils [req-a90098ac-4457-4e06-96f3-98aa5b32ecd0 req-6fe164a0-fe01-4cc1-bb44-d9c661f2f5c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c440f73c-e6b3-46be-a2e3-702703379890" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:52:55 np0005603609 nova_compute[221550]: 2026-01-31 07:52:55.083 221554 DEBUG nova.network.neutron [req-a90098ac-4457-4e06-96f3-98aa5b32ecd0 req-6fe164a0-fe01-4cc1-bb44-d9c661f2f5c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Refreshing network info cache for port 9b72c5b5-be8a-41f5-abfe-94cdbd6de794 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 02:52:55 np0005603609 nova_compute[221550]: 2026-01-31 07:52:55.534 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:52:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:55.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:52:56 np0005603609 nova_compute[221550]: 2026-01-31 07:52:56.031 221554 DEBUG nova.network.neutron [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Successfully updated port: 25c1ece7-cf11-445a-b58a-c1d924dadd91 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 02:52:56 np0005603609 nova_compute[221550]: 2026-01-31 07:52:56.059 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Acquiring lock "refresh_cache-a214f153-3adc-4183-8407-d768f3fbf70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:52:56 np0005603609 nova_compute[221550]: 2026-01-31 07:52:56.059 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Acquired lock "refresh_cache-a214f153-3adc-4183-8407-d768f3fbf70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:52:56 np0005603609 nova_compute[221550]: 2026-01-31 07:52:56.059 221554 DEBUG nova.network.neutron [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 02:52:56 np0005603609 nova_compute[221550]: 2026-01-31 07:52:56.191 221554 DEBUG nova.compute.manager [req-6ce8cda2-9b2f-4d8a-81fa-e5f5fe1e9f7d req-70dac4aa-6818-47a7-b90b-b8edb208df39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received event network-changed-25c1ece7-cf11-445a-b58a-c1d924dadd91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:52:56 np0005603609 nova_compute[221550]: 2026-01-31 07:52:56.191 221554 DEBUG nova.compute.manager [req-6ce8cda2-9b2f-4d8a-81fa-e5f5fe1e9f7d req-70dac4aa-6818-47a7-b90b-b8edb208df39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Refreshing instance network info cache due to event network-changed-25c1ece7-cf11-445a-b58a-c1d924dadd91. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 02:52:56 np0005603609 nova_compute[221550]: 2026-01-31 07:52:56.192 221554 DEBUG oslo_concurrency.lockutils [req-6ce8cda2-9b2f-4d8a-81fa-e5f5fe1e9f7d req-70dac4aa-6818-47a7-b90b-b8edb208df39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-a214f153-3adc-4183-8407-d768f3fbf70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:52:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:56 np0005603609 nova_compute[221550]: 2026-01-31 07:52:56.345 221554 DEBUG nova.network.neutron [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 02:52:56 np0005603609 nova_compute[221550]: 2026-01-31 07:52:56.433 221554 DEBUG oslo_concurrency.processutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 a214f153-3adc-4183-8407-d768f3fbf70e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.861s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:52:56 np0005603609 nova_compute[221550]: 2026-01-31 07:52:56.495 221554 DEBUG nova.storage.rbd_utils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] resizing rbd image a214f153-3adc-4183-8407-d768f3fbf70e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 02:52:56 np0005603609 nova_compute[221550]: 2026-01-31 07:52:56.864 221554 DEBUG nova.network.neutron [req-a90098ac-4457-4e06-96f3-98aa5b32ecd0 req-6fe164a0-fe01-4cc1-bb44-d9c661f2f5c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Updated VIF entry in instance network info cache for port 9b72c5b5-be8a-41f5-abfe-94cdbd6de794. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 02:52:56 np0005603609 nova_compute[221550]: 2026-01-31 07:52:56.865 221554 DEBUG nova.network.neutron [req-a90098ac-4457-4e06-96f3-98aa5b32ecd0 req-6fe164a0-fe01-4cc1-bb44-d9c661f2f5c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Updating instance_info_cache with network_info: [{"id": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "address": "fa:16:3e:dc:4a:f7", "network": {"id": "868fa2eb-4f72-481a-9c66-8c69ae0a2f4a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-281656863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a9ee1af66824e368ddad5ea68a3201b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b72c5b5-be", "ovs_interfaceid": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:52:56 np0005603609 nova_compute[221550]: 2026-01-31 07:52:56.888 221554 DEBUG oslo_concurrency.lockutils [req-a90098ac-4457-4e06-96f3-98aa5b32ecd0 req-6fe164a0-fe01-4cc1-bb44-d9c661f2f5c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c440f73c-e6b3-46be-a2e3-702703379890" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:52:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:57.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.234 221554 DEBUG nova.network.neutron [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Updating instance_info_cache with network_info: [{"id": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "address": "fa:16:3e:ae:db:d1", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c1ece7-cf", "ovs_interfaceid": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:52:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:57.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.674 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.678 221554 DEBUG nova.objects.instance [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lazy-loading 'migration_context' on Instance uuid a214f153-3adc-4183-8407-d768f3fbf70e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.754 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Releasing lock "refresh_cache-a214f153-3adc-4183-8407-d768f3fbf70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.755 221554 DEBUG nova.compute.manager [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Instance network_info: |[{"id": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "address": "fa:16:3e:ae:db:d1", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c1ece7-cf", "ovs_interfaceid": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.756 221554 DEBUG oslo_concurrency.lockutils [req-6ce8cda2-9b2f-4d8a-81fa-e5f5fe1e9f7d req-70dac4aa-6818-47a7-b90b-b8edb208df39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-a214f153-3adc-4183-8407-d768f3fbf70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.756 221554 DEBUG nova.network.neutron [req-6ce8cda2-9b2f-4d8a-81fa-e5f5fe1e9f7d req-70dac4aa-6818-47a7-b90b-b8edb208df39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Refreshing network info cache for port 25c1ece7-cf11-445a-b58a-c1d924dadd91 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.788 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.789 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Ensure instance console log exists: /var/lib/nova/instances/a214f153-3adc-4183-8407-d768f3fbf70e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.789 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.790 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.790 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.793 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Start _get_guest_xml network_info=[{"id": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "address": "fa:16:3e:ae:db:d1", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c1ece7-cf", "ovs_interfaceid": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.802 221554 WARNING nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.810 221554 DEBUG nova.virt.libvirt.host [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.811 221554 DEBUG nova.virt.libvirt.host [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.814 221554 DEBUG nova.virt.libvirt.host [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.814 221554 DEBUG nova.virt.libvirt.host [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.816 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.816 221554 DEBUG nova.virt.hardware [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.816 221554 DEBUG nova.virt.hardware [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.817 221554 DEBUG nova.virt.hardware [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.817 221554 DEBUG nova.virt.hardware [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.817 221554 DEBUG nova.virt.hardware [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.817 221554 DEBUG nova.virt.hardware [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.817 221554 DEBUG nova.virt.hardware [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.818 221554 DEBUG nova.virt.hardware [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.818 221554 DEBUG nova.virt.hardware [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.818 221554 DEBUG nova.virt.hardware [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.818 221554 DEBUG nova.virt.hardware [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.821 221554 DEBUG oslo_concurrency.processutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:57 np0005603609 nova_compute[221550]: 2026-01-31 07:52:57.843 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:52:58 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1258146455' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.223 221554 DEBUG oslo_concurrency.processutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.243 221554 DEBUG nova.storage.rbd_utils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] rbd image a214f153-3adc-4183-8407-d768f3fbf70e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.246 221554 DEBUG oslo_concurrency.processutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:52:58 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1360236136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.684 221554 DEBUG oslo_concurrency.processutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.686 221554 DEBUG nova.virt.libvirt.vif [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:52:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1617535596',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1617535596',id=75,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ce2602aa08c4766b5f575340b920fd7',ramdisk_id='',reservation_id='r-hgx0nnqm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-667072375',owner_user_name='tempest-AttachInterfacesV270Test-667072375-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:52:52Z,user_data=None,user_id='db9fe8a6ca1443d099966af09b0a5402',uuid=a214f153-3adc-4183-8407-d768f3fbf70e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "address": "fa:16:3e:ae:db:d1", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c1ece7-cf", "ovs_interfaceid": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.686 221554 DEBUG nova.network.os_vif_util [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Converting VIF {"id": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "address": "fa:16:3e:ae:db:d1", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c1ece7-cf", "ovs_interfaceid": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.687 221554 DEBUG nova.network.os_vif_util [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:db:d1,bridge_name='br-int',has_traffic_filtering=True,id=25c1ece7-cf11-445a-b58a-c1d924dadd91,network=Network(6b3fc65f-f4fb-4d63-a01a-0637865c750d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c1ece7-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.688 221554 DEBUG nova.objects.instance [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lazy-loading 'pci_devices' on Instance uuid a214f153-3adc-4183-8407-d768f3fbf70e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.703 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  <uuid>a214f153-3adc-4183-8407-d768f3fbf70e</uuid>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  <name>instance-0000004b</name>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <nova:name>tempest-AttachInterfacesV270Test-server-1617535596</nova:name>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:52:57</nova:creationTime>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        <nova:user uuid="db9fe8a6ca1443d099966af09b0a5402">tempest-AttachInterfacesV270Test-667072375-project-member</nova:user>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        <nova:project uuid="4ce2602aa08c4766b5f575340b920fd7">tempest-AttachInterfacesV270Test-667072375</nova:project>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        <nova:port uuid="25c1ece7-cf11-445a-b58a-c1d924dadd91">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <entry name="serial">a214f153-3adc-4183-8407-d768f3fbf70e</entry>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <entry name="uuid">a214f153-3adc-4183-8407-d768f3fbf70e</entry>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/a214f153-3adc-4183-8407-d768f3fbf70e_disk">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/a214f153-3adc-4183-8407-d768f3fbf70e_disk.config">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:ae:db:d1"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <target dev="tap25c1ece7-cf"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/a214f153-3adc-4183-8407-d768f3fbf70e/console.log" append="off"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:52:58 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:52:58 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:52:58 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:52:58 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.705 221554 DEBUG nova.compute.manager [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Preparing to wait for external event network-vif-plugged-25c1ece7-cf11-445a-b58a-c1d924dadd91 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.705 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Acquiring lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.705 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.706 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.706 221554 DEBUG nova.virt.libvirt.vif [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:52:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1617535596',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1617535596',id=75,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ce2602aa08c4766b5f575340b920fd7',ramdisk_id='',reservation_id='r-hgx0nnqm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-667072375',owner_user_name='tempest-AttachInterfacesV270Test-667072375-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:52:52Z,user_data=None,user_id='db9fe8a6ca1443d099966af09b0a5402',uuid=a214f153-3adc-4183-8407-d768f3fbf70e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "address": "fa:16:3e:ae:db:d1", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c1ece7-cf", "ovs_interfaceid": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.707 221554 DEBUG nova.network.os_vif_util [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Converting VIF {"id": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "address": "fa:16:3e:ae:db:d1", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c1ece7-cf", "ovs_interfaceid": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.707 221554 DEBUG nova.network.os_vif_util [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:db:d1,bridge_name='br-int',has_traffic_filtering=True,id=25c1ece7-cf11-445a-b58a-c1d924dadd91,network=Network(6b3fc65f-f4fb-4d63-a01a-0637865c750d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c1ece7-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.708 221554 DEBUG os_vif [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:db:d1,bridge_name='br-int',has_traffic_filtering=True,id=25c1ece7-cf11-445a-b58a-c1d924dadd91,network=Network(6b3fc65f-f4fb-4d63-a01a-0637865c750d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c1ece7-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.709 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.709 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.709 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.713 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.714 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25c1ece7-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.714 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25c1ece7-cf, col_values=(('external_ids', {'iface-id': '25c1ece7-cf11-445a-b58a-c1d924dadd91', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:db:d1', 'vm-uuid': 'a214f153-3adc-4183-8407-d768f3fbf70e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.716 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:58 np0005603609 NetworkManager[49064]: <info>  [1769845978.7183] manager: (tap25c1ece7-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.719 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.724 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.725 221554 INFO os_vif [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:db:d1,bridge_name='br-int',has_traffic_filtering=True,id=25c1ece7-cf11-445a-b58a-c1d924dadd91,network=Network(6b3fc65f-f4fb-4d63-a01a-0637865c750d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c1ece7-cf')#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.878 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.879 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.880 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] No VIF found with MAC fa:16:3e:ae:db:d1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.881 221554 INFO nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Using config drive#033[00m
Jan 31 02:52:58 np0005603609 nova_compute[221550]: 2026-01-31 07:52:58.912 221554 DEBUG nova.storage.rbd_utils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] rbd image a214f153-3adc-4183-8407-d768f3fbf70e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:52:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:52:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:59.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:52:59 np0005603609 nova_compute[221550]: 2026-01-31 07:52:59.285 221554 INFO nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Creating config drive at /var/lib/nova/instances/a214f153-3adc-4183-8407-d768f3fbf70e/disk.config#033[00m
Jan 31 02:52:59 np0005603609 nova_compute[221550]: 2026-01-31 07:52:59.290 221554 DEBUG oslo_concurrency.processutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a214f153-3adc-4183-8407-d768f3fbf70e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0wla9s90 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:59 np0005603609 nova_compute[221550]: 2026-01-31 07:52:59.418 221554 DEBUG oslo_concurrency.processutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a214f153-3adc-4183-8407-d768f3fbf70e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0wla9s90" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:59 np0005603609 nova_compute[221550]: 2026-01-31 07:52:59.440 221554 DEBUG nova.storage.rbd_utils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] rbd image a214f153-3adc-4183-8407-d768f3fbf70e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:52:59 np0005603609 nova_compute[221550]: 2026-01-31 07:52:59.444 221554 DEBUG oslo_concurrency.processutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a214f153-3adc-4183-8407-d768f3fbf70e/disk.config a214f153-3adc-4183-8407-d768f3fbf70e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:52:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:59.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:00 np0005603609 nova_compute[221550]: 2026-01-31 07:53:00.119 221554 DEBUG nova.network.neutron [req-6ce8cda2-9b2f-4d8a-81fa-e5f5fe1e9f7d req-70dac4aa-6818-47a7-b90b-b8edb208df39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Updated VIF entry in instance network info cache for port 25c1ece7-cf11-445a-b58a-c1d924dadd91. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:53:00 np0005603609 nova_compute[221550]: 2026-01-31 07:53:00.120 221554 DEBUG nova.network.neutron [req-6ce8cda2-9b2f-4d8a-81fa-e5f5fe1e9f7d req-70dac4aa-6818-47a7-b90b-b8edb208df39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Updating instance_info_cache with network_info: [{"id": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "address": "fa:16:3e:ae:db:d1", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c1ece7-cf", "ovs_interfaceid": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:53:00 np0005603609 nova_compute[221550]: 2026-01-31 07:53:00.136 221554 DEBUG oslo_concurrency.lockutils [req-6ce8cda2-9b2f-4d8a-81fa-e5f5fe1e9f7d req-70dac4aa-6818-47a7-b90b-b8edb208df39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-a214f153-3adc-4183-8407-d768f3fbf70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:53:00 np0005603609 nova_compute[221550]: 2026-01-31 07:53:00.243 221554 DEBUG oslo_concurrency.processutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a214f153-3adc-4183-8407-d768f3fbf70e/disk.config a214f153-3adc-4183-8407-d768f3fbf70e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.799s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:00 np0005603609 nova_compute[221550]: 2026-01-31 07:53:00.244 221554 INFO nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Deleting local config drive /var/lib/nova/instances/a214f153-3adc-4183-8407-d768f3fbf70e/disk.config because it was imported into RBD.#033[00m
Jan 31 02:53:00 np0005603609 kernel: tap25c1ece7-cf: entered promiscuous mode
Jan 31 02:53:00 np0005603609 NetworkManager[49064]: <info>  [1769845980.3001] manager: (tap25c1ece7-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/119)
Jan 31 02:53:00 np0005603609 nova_compute[221550]: 2026-01-31 07:53:00.302 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:00 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:00Z|00227|binding|INFO|Claiming lport 25c1ece7-cf11-445a-b58a-c1d924dadd91 for this chassis.
Jan 31 02:53:00 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:00Z|00228|binding|INFO|25c1ece7-cf11-445a-b58a-c1d924dadd91: Claiming fa:16:3e:ae:db:d1 10.100.0.4
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.315 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:db:d1 10.100.0.4'], port_security=['fa:16:3e:ae:db:d1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a214f153-3adc-4183-8407-d768f3fbf70e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b3fc65f-f4fb-4d63-a01a-0637865c750d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce2602aa08c4766b5f575340b920fd7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee6f790d-cbbe-4563-b40a-f23eb2235db6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9f7b22-8a34-401c-9a90-a1268249d403, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=25c1ece7-cf11-445a-b58a-c1d924dadd91) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:53:00 np0005603609 nova_compute[221550]: 2026-01-31 07:53:00.317 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.316 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 25c1ece7-cf11-445a-b58a-c1d924dadd91 in datapath 6b3fc65f-f4fb-4d63-a01a-0637865c750d bound to our chassis#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.318 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6b3fc65f-f4fb-4d63-a01a-0637865c750d#033[00m
Jan 31 02:53:00 np0005603609 nova_compute[221550]: 2026-01-31 07:53:00.319 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:00 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:00Z|00229|binding|INFO|Setting lport 25c1ece7-cf11-445a-b58a-c1d924dadd91 ovn-installed in OVS
Jan 31 02:53:00 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:00Z|00230|binding|INFO|Setting lport 25c1ece7-cf11-445a-b58a-c1d924dadd91 up in Southbound
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.328 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[67c75bf8-e396-4d92-88a2-9cae116353bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.329 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6b3fc65f-f1 in ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:53:00 np0005603609 systemd-udevd[249098]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.331 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6b3fc65f-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.331 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[139350da-a3da-404f-afed-a476eb324e6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.335 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[caebd424-1f71-431f-871e-d797027361f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:00 np0005603609 NetworkManager[49064]: <info>  [1769845980.3423] device (tap25c1ece7-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:53:00 np0005603609 NetworkManager[49064]: <info>  [1769845980.3432] device (tap25c1ece7-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.346 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[fff3e10a-3d31-4ed6-beb6-8cb7bbd7344b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:00 np0005603609 systemd-machined[190912]: New machine qemu-34-instance-0000004b.
Jan 31 02:53:00 np0005603609 systemd[1]: Started Virtual Machine qemu-34-instance-0000004b.
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.373 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4c560acc-aa25-4cec-82fb-bc8257ea02ad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.405 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b4cdd9b1-7d15-4ac4-9815-1d5376b500d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.410 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c203de89-b632-4dca-9708-218afbdbd8b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:00 np0005603609 NetworkManager[49064]: <info>  [1769845980.4115] manager: (tap6b3fc65f-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/120)
Jan 31 02:53:00 np0005603609 systemd-udevd[249103]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.440 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[50f17b89-36b0-438a-8b36-a146c95dfbc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.444 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[921ff3c8-47d0-44a2-ae00-0582af6cf553]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:00 np0005603609 NetworkManager[49064]: <info>  [1769845980.4662] device (tap6b3fc65f-f0): carrier: link connected
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.471 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[83e962ef-2370-4068-b7a4-9ff26f5b22a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.483 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[57572429-76ce-4056-bdc2-3dc89a1e3187]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6b3fc65f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:11:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622016, 'reachable_time': 41216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249133, 'error': None, 'target': 'ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.492 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7e18423c-18ce-4ed0-be2f-0d349f63b5e4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb3:11a1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622016, 'tstamp': 622016}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249134, 'error': None, 'target': 'ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.501 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9f5b4484-2286-4ec1-abd0-90d099450d67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6b3fc65f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:11:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622016, 'reachable_time': 41216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249135, 'error': None, 'target': 'ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.527 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[56271a4b-0750-4cb0-9f6c-11d5d39d24f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.582 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[db88e66b-d6c2-46a9-a1b7-c053a975e1a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.583 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b3fc65f-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.583 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.583 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b3fc65f-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:00 np0005603609 nova_compute[221550]: 2026-01-31 07:53:00.586 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:00 np0005603609 NetworkManager[49064]: <info>  [1769845980.5867] manager: (tap6b3fc65f-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Jan 31 02:53:00 np0005603609 kernel: tap6b3fc65f-f0: entered promiscuous mode
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.588 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6b3fc65f-f0, col_values=(('external_ids', {'iface-id': '8c6feaf2-1214-4d46-b7c0-b68a620c5b7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:00 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:00Z|00231|binding|INFO|Releasing lport 8c6feaf2-1214-4d46-b7c0-b68a620c5b7c from this chassis (sb_readonly=0)
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.590 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6b3fc65f-f4fb-4d63-a01a-0637865c750d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6b3fc65f-f4fb-4d63-a01a-0637865c750d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.591 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a561adee-a812-429f-9d1f-aa893817b19a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.591 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-6b3fc65f-f4fb-4d63-a01a-0637865c750d
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/6b3fc65f-f4fb-4d63-a01a-0637865c750d.pid.haproxy
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 6b3fc65f-f4fb-4d63-a01a-0637865c750d
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:53:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:00.592 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d', 'env', 'PROCESS_TAG=haproxy-6b3fc65f-f4fb-4d63-a01a-0637865c750d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6b3fc65f-f4fb-4d63-a01a-0637865c750d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:53:00 np0005603609 nova_compute[221550]: 2026-01-31 07:53:00.593 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:00 np0005603609 nova_compute[221550]: 2026-01-31 07:53:00.931 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845980.9309084, a214f153-3adc-4183-8407-d768f3fbf70e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:53:00 np0005603609 nova_compute[221550]: 2026-01-31 07:53:00.932 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] VM Started (Lifecycle Event)#033[00m
Jan 31 02:53:00 np0005603609 podman[249203]: 2026-01-31 07:53:00.879540605 +0000 UTC m=+0.021370446 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:53:00 np0005603609 nova_compute[221550]: 2026-01-31 07:53:00.976 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:53:00 np0005603609 nova_compute[221550]: 2026-01-31 07:53:00.981 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845980.9315708, a214f153-3adc-4183-8407-d768f3fbf70e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:53:00 np0005603609 nova_compute[221550]: 2026-01-31 07:53:00.981 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:53:00 np0005603609 podman[249203]: 2026-01-31 07:53:00.995752757 +0000 UTC m=+0.137582578 container create 5da1ab5fb2d59da827630fcaed1076ad569cd434266bf1c835a23bea01a43748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.013 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.017 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:53:01 np0005603609 systemd[1]: Started libpod-conmon-5da1ab5fb2d59da827630fcaed1076ad569cd434266bf1c835a23bea01a43748.scope.
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.049 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:53:01 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:53:01 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8307503229fbc7fc539562f2260b124c593a0c1f690d9a37fc9f896d82b7f5af/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:53:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:01.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:53:01 np0005603609 podman[249203]: 2026-01-31 07:53:01.088554735 +0000 UTC m=+0.230384586 container init 5da1ab5fb2d59da827630fcaed1076ad569cd434266bf1c835a23bea01a43748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 02:53:01 np0005603609 podman[249203]: 2026-01-31 07:53:01.092564811 +0000 UTC m=+0.234394622 container start 5da1ab5fb2d59da827630fcaed1076ad569cd434266bf1c835a23bea01a43748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 02:53:01 np0005603609 neutron-haproxy-ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d[249223]: [NOTICE]   (249228) : New worker (249230) forked
Jan 31 02:53:01 np0005603609 neutron-haproxy-ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d[249223]: [NOTICE]   (249228) : Loading success.
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.131 221554 DEBUG nova.compute.manager [req-5a847ffd-26a3-47ef-9efe-f9d3f41af32c req-02a717e4-cc37-42e9-8ad6-3bf0620a894d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received event network-vif-plugged-25c1ece7-cf11-445a-b58a-c1d924dadd91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.132 221554 DEBUG oslo_concurrency.lockutils [req-5a847ffd-26a3-47ef-9efe-f9d3f41af32c req-02a717e4-cc37-42e9-8ad6-3bf0620a894d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.132 221554 DEBUG oslo_concurrency.lockutils [req-5a847ffd-26a3-47ef-9efe-f9d3f41af32c req-02a717e4-cc37-42e9-8ad6-3bf0620a894d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.132 221554 DEBUG oslo_concurrency.lockutils [req-5a847ffd-26a3-47ef-9efe-f9d3f41af32c req-02a717e4-cc37-42e9-8ad6-3bf0620a894d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.132 221554 DEBUG nova.compute.manager [req-5a847ffd-26a3-47ef-9efe-f9d3f41af32c req-02a717e4-cc37-42e9-8ad6-3bf0620a894d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Processing event network-vif-plugged-25c1ece7-cf11-445a-b58a-c1d924dadd91 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.133 221554 DEBUG nova.compute.manager [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.137 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769845981.136821, a214f153-3adc-4183-8407-d768f3fbf70e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.137 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.139 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.141 221554 INFO nova.virt.libvirt.driver [-] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Instance spawned successfully.#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.141 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.161 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.164 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.169 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.169 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.169 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.170 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.170 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.170 221554 DEBUG nova.virt.libvirt.driver [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.187 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:53:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.286 221554 INFO nova.compute.manager [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Took 8.08 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.286 221554 DEBUG nova.compute.manager [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.395 221554 INFO nova.compute.manager [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Took 9.96 seconds to build instance.#033[00m
Jan 31 02:53:01 np0005603609 nova_compute[221550]: 2026-01-31 07:53:01.506 221554 DEBUG oslo_concurrency.lockutils [None req-7de8be3f-429e-4afe-91ba-93ccc0b2bc2e db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:01.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:02 np0005603609 nova_compute[221550]: 2026-01-31 07:53:02.847 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:03.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:03 np0005603609 nova_compute[221550]: 2026-01-31 07:53:03.253 221554 DEBUG nova.compute.manager [req-7ec8c949-e25b-4058-9dfc-cb6276e7a9bd req-170c6f07-4f3b-4f8a-b5e3-434fca6f21fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received event network-vif-plugged-25c1ece7-cf11-445a-b58a-c1d924dadd91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:53:03 np0005603609 nova_compute[221550]: 2026-01-31 07:53:03.253 221554 DEBUG oslo_concurrency.lockutils [req-7ec8c949-e25b-4058-9dfc-cb6276e7a9bd req-170c6f07-4f3b-4f8a-b5e3-434fca6f21fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:03 np0005603609 nova_compute[221550]: 2026-01-31 07:53:03.253 221554 DEBUG oslo_concurrency.lockutils [req-7ec8c949-e25b-4058-9dfc-cb6276e7a9bd req-170c6f07-4f3b-4f8a-b5e3-434fca6f21fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:03 np0005603609 nova_compute[221550]: 2026-01-31 07:53:03.254 221554 DEBUG oslo_concurrency.lockutils [req-7ec8c949-e25b-4058-9dfc-cb6276e7a9bd req-170c6f07-4f3b-4f8a-b5e3-434fca6f21fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:03 np0005603609 nova_compute[221550]: 2026-01-31 07:53:03.254 221554 DEBUG nova.compute.manager [req-7ec8c949-e25b-4058-9dfc-cb6276e7a9bd req-170c6f07-4f3b-4f8a-b5e3-434fca6f21fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] No waiting events found dispatching network-vif-plugged-25c1ece7-cf11-445a-b58a-c1d924dadd91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:53:03 np0005603609 nova_compute[221550]: 2026-01-31 07:53:03.254 221554 WARNING nova.compute.manager [req-7ec8c949-e25b-4058-9dfc-cb6276e7a9bd req-170c6f07-4f3b-4f8a-b5e3-434fca6f21fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received unexpected event network-vif-plugged-25c1ece7-cf11-445a-b58a-c1d924dadd91 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:53:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:03.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:03 np0005603609 nova_compute[221550]: 2026-01-31 07:53:03.716 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:04 np0005603609 nova_compute[221550]: 2026-01-31 07:53:04.633 221554 DEBUG oslo_concurrency.lockutils [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Acquiring lock "interface-a214f153-3adc-4183-8407-d768f3fbf70e-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:04 np0005603609 nova_compute[221550]: 2026-01-31 07:53:04.633 221554 DEBUG oslo_concurrency.lockutils [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "interface-a214f153-3adc-4183-8407-d768f3fbf70e-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:04 np0005603609 nova_compute[221550]: 2026-01-31 07:53:04.634 221554 DEBUG nova.objects.instance [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lazy-loading 'flavor' on Instance uuid a214f153-3adc-4183-8407-d768f3fbf70e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:53:04 np0005603609 nova_compute[221550]: 2026-01-31 07:53:04.654 221554 DEBUG nova.objects.instance [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lazy-loading 'pci_requests' on Instance uuid a214f153-3adc-4183-8407-d768f3fbf70e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:53:04 np0005603609 nova_compute[221550]: 2026-01-31 07:53:04.665 221554 DEBUG nova.network.neutron [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:53:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:05.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:05 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:05Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:4a:f7 10.100.0.8
Jan 31 02:53:05 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:05Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:4a:f7 10.100.0.8
Jan 31 02:53:05 np0005603609 nova_compute[221550]: 2026-01-31 07:53:05.278 221554 DEBUG nova.policy [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'db9fe8a6ca1443d099966af09b0a5402', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ce2602aa08c4766b5f575340b920fd7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:53:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:53:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:05.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:53:06 np0005603609 podman[249241]: 2026-01-31 07:53:06.176657942 +0000 UTC m=+0.055852008 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 02:53:06 np0005603609 podman[249240]: 2026-01-31 07:53:06.209425623 +0000 UTC m=+0.089413957 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 31 02:53:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:06 np0005603609 nova_compute[221550]: 2026-01-31 07:53:06.257 221554 DEBUG nova.network.neutron [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Successfully created port: 6b5e1733-ec2f-449c-bea0-144c6665f129 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:53:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:07.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:07 np0005603609 nova_compute[221550]: 2026-01-31 07:53:07.316 221554 DEBUG nova.network.neutron [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Successfully updated port: 6b5e1733-ec2f-449c-bea0-144c6665f129 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:53:07 np0005603609 nova_compute[221550]: 2026-01-31 07:53:07.446 221554 DEBUG oslo_concurrency.lockutils [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Acquiring lock "refresh_cache-a214f153-3adc-4183-8407-d768f3fbf70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:53:07 np0005603609 nova_compute[221550]: 2026-01-31 07:53:07.447 221554 DEBUG oslo_concurrency.lockutils [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Acquired lock "refresh_cache-a214f153-3adc-4183-8407-d768f3fbf70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:53:07 np0005603609 nova_compute[221550]: 2026-01-31 07:53:07.447 221554 DEBUG nova.network.neutron [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:53:07 np0005603609 nova_compute[221550]: 2026-01-31 07:53:07.452 221554 DEBUG nova.compute.manager [req-47357688-c624-4709-b1b7-9e406105d442 req-7f30833d-2c07-4596-a661-781a9ab3de46 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received event network-changed-6b5e1733-ec2f-449c-bea0-144c6665f129 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:53:07 np0005603609 nova_compute[221550]: 2026-01-31 07:53:07.453 221554 DEBUG nova.compute.manager [req-47357688-c624-4709-b1b7-9e406105d442 req-7f30833d-2c07-4596-a661-781a9ab3de46 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Refreshing instance network info cache due to event network-changed-6b5e1733-ec2f-449c-bea0-144c6665f129. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:53:07 np0005603609 nova_compute[221550]: 2026-01-31 07:53:07.453 221554 DEBUG oslo_concurrency.lockutils [req-47357688-c624-4709-b1b7-9e406105d442 req-7f30833d-2c07-4596-a661-781a9ab3de46 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-a214f153-3adc-4183-8407-d768f3fbf70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:53:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:07.487 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:07.488 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:07.488 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:07 np0005603609 nova_compute[221550]: 2026-01-31 07:53:07.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:07 np0005603609 nova_compute[221550]: 2026-01-31 07:53:07.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:07.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:07 np0005603609 nova_compute[221550]: 2026-01-31 07:53:07.809 221554 WARNING nova.network.neutron [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] 6b3fc65f-f4fb-4d63-a01a-0637865c750d already exists in list: networks containing: ['6b3fc65f-f4fb-4d63-a01a-0637865c750d']. ignoring it#033[00m
Jan 31 02:53:07 np0005603609 nova_compute[221550]: 2026-01-31 07:53:07.847 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:08 np0005603609 nova_compute[221550]: 2026-01-31 07:53:08.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:08 np0005603609 nova_compute[221550]: 2026-01-31 07:53:08.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:53:08 np0005603609 nova_compute[221550]: 2026-01-31 07:53:08.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:53:08 np0005603609 nova_compute[221550]: 2026-01-31 07:53:08.718 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:08 np0005603609 nova_compute[221550]: 2026-01-31 07:53:08.920 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-c440f73c-e6b3-46be-a2e3-702703379890" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:53:08 np0005603609 nova_compute[221550]: 2026-01-31 07:53:08.921 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-c440f73c-e6b3-46be-a2e3-702703379890" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:53:08 np0005603609 nova_compute[221550]: 2026-01-31 07:53:08.921 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:53:08 np0005603609 nova_compute[221550]: 2026-01-31 07:53:08.921 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c440f73c-e6b3-46be-a2e3-702703379890 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:53:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:09.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:09.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:11.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.321 221554 DEBUG nova.network.neutron [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Updating instance_info_cache with network_info: [{"id": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "address": "fa:16:3e:ae:db:d1", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c1ece7-cf", "ovs_interfaceid": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6b5e1733-ec2f-449c-bea0-144c6665f129", "address": "fa:16:3e:89:d3:5f", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5e1733-ec", "ovs_interfaceid": "6b5e1733-ec2f-449c-bea0-144c6665f129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.343 221554 DEBUG oslo_concurrency.lockutils [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Releasing lock "refresh_cache-a214f153-3adc-4183-8407-d768f3fbf70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.344 221554 DEBUG oslo_concurrency.lockutils [req-47357688-c624-4709-b1b7-9e406105d442 req-7f30833d-2c07-4596-a661-781a9ab3de46 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-a214f153-3adc-4183-8407-d768f3fbf70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.344 221554 DEBUG nova.network.neutron [req-47357688-c624-4709-b1b7-9e406105d442 req-7f30833d-2c07-4596-a661-781a9ab3de46 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Refreshing network info cache for port 6b5e1733-ec2f-449c-bea0-144c6665f129 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.347 221554 DEBUG nova.virt.libvirt.vif [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1617535596',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1617535596',id=75,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:53:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4ce2602aa08c4766b5f575340b920fd7',ramdisk_id='',reservation_id='r-hgx0nnqm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-667072375',owner_user_name='tempest-AttachInterfacesV270Test-667072375-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:53:01Z,user_data=None,user_id='db9fe8a6ca1443d099966af09b0a5402',uuid=a214f153-3adc-4183-8407-d768f3fbf70e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b5e1733-ec2f-449c-bea0-144c6665f129", "address": "fa:16:3e:89:d3:5f", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5e1733-ec", "ovs_interfaceid": "6b5e1733-ec2f-449c-bea0-144c6665f129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.347 221554 DEBUG nova.network.os_vif_util [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Converting VIF {"id": "6b5e1733-ec2f-449c-bea0-144c6665f129", "address": "fa:16:3e:89:d3:5f", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5e1733-ec", "ovs_interfaceid": "6b5e1733-ec2f-449c-bea0-144c6665f129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.348 221554 DEBUG nova.network.os_vif_util [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:d3:5f,bridge_name='br-int',has_traffic_filtering=True,id=6b5e1733-ec2f-449c-bea0-144c6665f129,network=Network(6b3fc65f-f4fb-4d63-a01a-0637865c750d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5e1733-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.348 221554 DEBUG os_vif [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:d3:5f,bridge_name='br-int',has_traffic_filtering=True,id=6b5e1733-ec2f-449c-bea0-144c6665f129,network=Network(6b3fc65f-f4fb-4d63-a01a-0637865c750d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5e1733-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.349 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.349 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.350 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.353 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.353 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b5e1733-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.354 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b5e1733-ec, col_values=(('external_ids', {'iface-id': '6b5e1733-ec2f-449c-bea0-144c6665f129', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:d3:5f', 'vm-uuid': 'a214f153-3adc-4183-8407-d768f3fbf70e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:11 np0005603609 NetworkManager[49064]: <info>  [1769845991.3574] manager: (tap6b5e1733-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.358 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.365 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.366 221554 INFO os_vif [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:d3:5f,bridge_name='br-int',has_traffic_filtering=True,id=6b5e1733-ec2f-449c-bea0-144c6665f129,network=Network(6b3fc65f-f4fb-4d63-a01a-0637865c750d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5e1733-ec')#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.367 221554 DEBUG nova.virt.libvirt.vif [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1617535596',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1617535596',id=75,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:53:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4ce2602aa08c4766b5f575340b920fd7',ramdisk_id='',reservation_id='r-hgx0nnqm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ra
m='0',owner_project_name='tempest-AttachInterfacesV270Test-667072375',owner_user_name='tempest-AttachInterfacesV270Test-667072375-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:53:01Z,user_data=None,user_id='db9fe8a6ca1443d099966af09b0a5402',uuid=a214f153-3adc-4183-8407-d768f3fbf70e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b5e1733-ec2f-449c-bea0-144c6665f129", "address": "fa:16:3e:89:d3:5f", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5e1733-ec", "ovs_interfaceid": "6b5e1733-ec2f-449c-bea0-144c6665f129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.368 221554 DEBUG nova.network.os_vif_util [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Converting VIF {"id": "6b5e1733-ec2f-449c-bea0-144c6665f129", "address": "fa:16:3e:89:d3:5f", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5e1733-ec", "ovs_interfaceid": "6b5e1733-ec2f-449c-bea0-144c6665f129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.368 221554 DEBUG nova.network.os_vif_util [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:d3:5f,bridge_name='br-int',has_traffic_filtering=True,id=6b5e1733-ec2f-449c-bea0-144c6665f129,network=Network(6b3fc65f-f4fb-4d63-a01a-0637865c750d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5e1733-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.371 221554 DEBUG nova.virt.libvirt.guest [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] attach device xml: <interface type="ethernet">
Jan 31 02:53:11 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:89:d3:5f"/>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:  <target dev="tap6b5e1733-ec"/>
Jan 31 02:53:11 np0005603609 nova_compute[221550]: </interface>
Jan 31 02:53:11 np0005603609 nova_compute[221550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 02:53:11 np0005603609 kernel: tap6b5e1733-ec: entered promiscuous mode
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.386 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:11 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:11Z|00232|binding|INFO|Claiming lport 6b5e1733-ec2f-449c-bea0-144c6665f129 for this chassis.
Jan 31 02:53:11 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:11Z|00233|binding|INFO|6b5e1733-ec2f-449c-bea0-144c6665f129: Claiming fa:16:3e:89:d3:5f 10.100.0.5
Jan 31 02:53:11 np0005603609 NetworkManager[49064]: <info>  [1769845991.3979] manager: (tap6b5e1733-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/123)
Jan 31 02:53:11 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:11Z|00234|binding|INFO|Setting lport 6b5e1733-ec2f-449c-bea0-144c6665f129 ovn-installed in OVS
Jan 31 02:53:11 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:11Z|00235|binding|INFO|Setting lport 6b5e1733-ec2f-449c-bea0-144c6665f129 up in Southbound
Jan 31 02:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:11.402 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:d3:5f 10.100.0.5'], port_security=['fa:16:3e:89:d3:5f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a214f153-3adc-4183-8407-d768f3fbf70e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b3fc65f-f4fb-4d63-a01a-0637865c750d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce2602aa08c4766b5f575340b920fd7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee6f790d-cbbe-4563-b40a-f23eb2235db6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9f7b22-8a34-401c-9a90-a1268249d403, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=6b5e1733-ec2f-449c-bea0-144c6665f129) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:11.403 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 6b5e1733-ec2f-449c-bea0-144c6665f129 in datapath 6b3fc65f-f4fb-4d63-a01a-0637865c750d bound to our chassis#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.403 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:11.405 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6b3fc65f-f4fb-4d63-a01a-0637865c750d#033[00m
Jan 31 02:53:11 np0005603609 systemd-udevd[249293]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:11.419 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[40bd6e8a-a09e-4e23-ba1a-24023f24875a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:11 np0005603609 NetworkManager[49064]: <info>  [1769845991.4233] device (tap6b5e1733-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:53:11 np0005603609 NetworkManager[49064]: <info>  [1769845991.4239] device (tap6b5e1733-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:11.442 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[3a40888d-d47b-43b5-8092-112646c77405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:11.446 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[8415434e-a718-4523-81dd-b421900ee2f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:11.469 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[66aa62a1-be98-4add-bad9-93b7845b24ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:11.483 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3709d3c8-fd99-4cb9-87cd-9b2dfde371fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6b3fc65f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:11:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622016, 'reachable_time': 41216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249300, 'error': None, 'target': 'ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:11.493 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[10b79eca-7a94-4c7c-9071-07e2f3f4e9d9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6b3fc65f-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622025, 'tstamp': 622025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249301, 'error': None, 'target': 'ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6b3fc65f-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622027, 'tstamp': 622027}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249301, 'error': None, 'target': 'ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:11.494 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b3fc65f-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.495 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.497 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:11.497 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b3fc65f-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:11.497 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:11.498 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6b3fc65f-f0, col_values=(('external_ids', {'iface-id': '8c6feaf2-1214-4d46-b7c0-b68a620c5b7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:11.499 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.533 221554 DEBUG nova.virt.libvirt.driver [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.534 221554 DEBUG nova.virt.libvirt.driver [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.534 221554 DEBUG nova.virt.libvirt.driver [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] No VIF found with MAC fa:16:3e:ae:db:d1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.534 221554 DEBUG nova.virt.libvirt.driver [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] No VIF found with MAC fa:16:3e:89:d3:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.564 221554 DEBUG nova.virt.libvirt.guest [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:53:11 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:  <nova:name>tempest-AttachInterfacesV270Test-server-1617535596</nova:name>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 07:53:11</nova:creationTime>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 02:53:11 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:    <nova:user uuid="db9fe8a6ca1443d099966af09b0a5402">tempest-AttachInterfacesV270Test-667072375-project-member</nova:user>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:    <nova:project uuid="4ce2602aa08c4766b5f575340b920fd7">tempest-AttachInterfacesV270Test-667072375</nova:project>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:    <nova:port uuid="25c1ece7-cf11-445a-b58a-c1d924dadd91">
Jan 31 02:53:11 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:    <nova:port uuid="6b5e1733-ec2f-449c-bea0-144c6665f129">
Jan 31 02:53:11 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:53:11 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 02:53:11 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 02:53:11 np0005603609 nova_compute[221550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 02:53:11 np0005603609 nova_compute[221550]: 2026-01-31 07:53:11.600 221554 DEBUG oslo_concurrency.lockutils [None req-c7cbcb0a-d035-4855-828c-4af59e208b91 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "interface-a214f153-3adc-4183-8407-d768f3fbf70e-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:11.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:12 np0005603609 nova_compute[221550]: 2026-01-31 07:53:12.753 221554 DEBUG nova.compute.manager [req-75afc55d-95c6-4120-9264-cbd196a9e93d req-5b78417d-2c02-4445-bb2f-3318b1446f1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received event network-vif-plugged-6b5e1733-ec2f-449c-bea0-144c6665f129 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:53:12 np0005603609 nova_compute[221550]: 2026-01-31 07:53:12.753 221554 DEBUG oslo_concurrency.lockutils [req-75afc55d-95c6-4120-9264-cbd196a9e93d req-5b78417d-2c02-4445-bb2f-3318b1446f1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:12 np0005603609 nova_compute[221550]: 2026-01-31 07:53:12.754 221554 DEBUG oslo_concurrency.lockutils [req-75afc55d-95c6-4120-9264-cbd196a9e93d req-5b78417d-2c02-4445-bb2f-3318b1446f1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:12 np0005603609 nova_compute[221550]: 2026-01-31 07:53:12.754 221554 DEBUG oslo_concurrency.lockutils [req-75afc55d-95c6-4120-9264-cbd196a9e93d req-5b78417d-2c02-4445-bb2f-3318b1446f1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:12 np0005603609 nova_compute[221550]: 2026-01-31 07:53:12.754 221554 DEBUG nova.compute.manager [req-75afc55d-95c6-4120-9264-cbd196a9e93d req-5b78417d-2c02-4445-bb2f-3318b1446f1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] No waiting events found dispatching network-vif-plugged-6b5e1733-ec2f-449c-bea0-144c6665f129 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:53:12 np0005603609 nova_compute[221550]: 2026-01-31 07:53:12.754 221554 WARNING nova.compute.manager [req-75afc55d-95c6-4120-9264-cbd196a9e93d req-5b78417d-2c02-4445-bb2f-3318b1446f1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received unexpected event network-vif-plugged-6b5e1733-ec2f-449c-bea0-144c6665f129 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:53:12 np0005603609 nova_compute[221550]: 2026-01-31 07:53:12.849 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:13.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.382 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Updating instance_info_cache with network_info: [{"id": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "address": "fa:16:3e:dc:4a:f7", "network": {"id": "868fa2eb-4f72-481a-9c66-8c69ae0a2f4a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-281656863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a9ee1af66824e368ddad5ea68a3201b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b72c5b5-be", "ovs_interfaceid": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.399 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-c440f73c-e6b3-46be-a2e3-702703379890" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.399 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.400 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.400 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.400 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.401 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.423 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.424 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.424 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.424 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.424 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.536 221554 DEBUG oslo_concurrency.lockutils [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Acquiring lock "c440f73c-e6b3-46be-a2e3-702703379890" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.537 221554 DEBUG oslo_concurrency.lockutils [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Lock "c440f73c-e6b3-46be-a2e3-702703379890" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.537 221554 DEBUG oslo_concurrency.lockutils [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Acquiring lock "c440f73c-e6b3-46be-a2e3-702703379890-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.537 221554 DEBUG oslo_concurrency.lockutils [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Lock "c440f73c-e6b3-46be-a2e3-702703379890-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.537 221554 DEBUG oslo_concurrency.lockutils [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Lock "c440f73c-e6b3-46be-a2e3-702703379890-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.539 221554 INFO nova.compute.manager [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Terminating instance#033[00m
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.540 221554 DEBUG nova.compute.manager [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:53:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:13.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:53:13 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3915967884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:53:13 np0005603609 nova_compute[221550]: 2026-01-31 07:53:13.835 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.033 221554 DEBUG nova.network.neutron [req-47357688-c624-4709-b1b7-9e406105d442 req-7f30833d-2c07-4596-a661-781a9ab3de46 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Updated VIF entry in instance network info cache for port 6b5e1733-ec2f-449c-bea0-144c6665f129. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.033 221554 DEBUG nova.network.neutron [req-47357688-c624-4709-b1b7-9e406105d442 req-7f30833d-2c07-4596-a661-781a9ab3de46 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Updating instance_info_cache with network_info: [{"id": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "address": "fa:16:3e:ae:db:d1", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c1ece7-cf", "ovs_interfaceid": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6b5e1733-ec2f-449c-bea0-144c6665f129", "address": "fa:16:3e:89:d3:5f", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5e1733-ec", "ovs_interfaceid": "6b5e1733-ec2f-449c-bea0-144c6665f129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.049 221554 DEBUG oslo_concurrency.lockutils [req-47357688-c624-4709-b1b7-9e406105d442 req-7f30833d-2c07-4596-a661-781a9ab3de46 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-a214f153-3adc-4183-8407-d768f3fbf70e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:53:14 np0005603609 kernel: tap9b72c5b5-be (unregistering): left promiscuous mode
Jan 31 02:53:14 np0005603609 NetworkManager[49064]: <info>  [1769845994.1656] device (tap9b72c5b5-be): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.168 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:14Z|00236|binding|INFO|Releasing lport 9b72c5b5-be8a-41f5-abfe-94cdbd6de794 from this chassis (sb_readonly=0)
Jan 31 02:53:14 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:14Z|00237|binding|INFO|Setting lport 9b72c5b5-be8a-41f5-abfe-94cdbd6de794 down in Southbound
Jan 31 02:53:14 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:14Z|00238|binding|INFO|Removing iface tap9b72c5b5-be ovn-installed in OVS
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.173 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Jan 31 02:53:14 np0005603609 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004a.scope: Consumed 13.268s CPU time.
Jan 31 02:53:14 np0005603609 systemd-machined[190912]: Machine qemu-33-instance-0000004a terminated.
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.220 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:4a:f7 10.100.0.8'], port_security=['fa:16:3e:dc:4a:f7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c440f73c-e6b3-46be-a2e3-702703379890', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4a9ee1af66824e368ddad5ea68a3201b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1af7dded-ba4c-464c-a37d-082e55c9ea5f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.230'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4986fb78-ee27-4ba6-b733-7a8bc7b4bbd0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=9b72c5b5-be8a-41f5-abfe-94cdbd6de794) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.221 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 9b72c5b5-be8a-41f5-abfe-94cdbd6de794 in datapath 868fa2eb-4f72-481a-9c66-8c69ae0a2f4a unbound from our chassis#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.224 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 868fa2eb-4f72-481a-9c66-8c69ae0a2f4a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.225 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e9510926-a4c7-46f6-9d1b-19573f5fc69a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.225 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a namespace which is not needed anymore#033[00m
Jan 31 02:53:14 np0005603609 neutron-haproxy-ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a[248758]: [NOTICE]   (248762) : haproxy version is 2.8.14-c23fe91
Jan 31 02:53:14 np0005603609 neutron-haproxy-ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a[248758]: [NOTICE]   (248762) : path to executable is /usr/sbin/haproxy
Jan 31 02:53:14 np0005603609 neutron-haproxy-ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a[248758]: [WARNING]  (248762) : Exiting Master process...
Jan 31 02:53:14 np0005603609 neutron-haproxy-ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a[248758]: [ALERT]    (248762) : Current worker (248764) exited with code 143 (Terminated)
Jan 31 02:53:14 np0005603609 neutron-haproxy-ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a[248758]: [WARNING]  (248762) : All workers exited. Exiting... (0)
Jan 31 02:53:14 np0005603609 systemd[1]: libpod-d4172129a0330ee6e5cd63c0a3ac0adeb321865b04c895e1ba05434d804932d1.scope: Deactivated successfully.
Jan 31 02:53:14 np0005603609 podman[249346]: 2026-01-31 07:53:14.341812609 +0000 UTC m=+0.047254871 container died d4172129a0330ee6e5cd63c0a3ac0adeb321865b04c895e1ba05434d804932d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.358 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.363 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.376 221554 INFO nova.virt.libvirt.driver [-] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Instance destroyed successfully.#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.376 221554 DEBUG nova.objects.instance [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Lazy-loading 'resources' on Instance uuid c440f73c-e6b3-46be-a2e3-702703379890 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:53:14 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d4172129a0330ee6e5cd63c0a3ac0adeb321865b04c895e1ba05434d804932d1-userdata-shm.mount: Deactivated successfully.
Jan 31 02:53:14 np0005603609 systemd[1]: var-lib-containers-storage-overlay-ac238a29fa6e4cb130ada719f9be05d38f113f2f0a50447f14d2f8d487352f24-merged.mount: Deactivated successfully.
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.391 221554 DEBUG nova.virt.libvirt.vif [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-1483795707',display_name='tempest-ServersTestBootFromVolume-server-1483795707',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-1483795707',id=74,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLghy76daco7LPdR7LJ4ZpR9HlKVlfpSkn4hjpGPHUnSgqF+p/LRarFMcdh5UhCtNMFVsTCUdmFk5AqPUdvfsTHsDylq9sjh7Eic0az0zDNGmUXVEUFm9c1a9Gyj3iNSNg==',key_name='tempest-keypair-714413144',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4a9ee1af66824e368ddad5ea68a3201b',ramdisk_id='',reservation_id='r-60d1v4ja',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServersTestBootFromVolume-591669822',owner_user_name='tempest-ServersTestBootFromVolume-591669822-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93403129e78e44f3916d321433a31bd2',uuid=c440f73c-e6b3-46be-a2e3-702703379890,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "address": "fa:16:3e:dc:4a:f7", "network": {"id": "868fa2eb-4f72-481a-9c66-8c69ae0a2f4a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-281656863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": 
"gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a9ee1af66824e368ddad5ea68a3201b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b72c5b5-be", "ovs_interfaceid": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.392 221554 DEBUG nova.network.os_vif_util [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Converting VIF {"id": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "address": "fa:16:3e:dc:4a:f7", "network": {"id": "868fa2eb-4f72-481a-9c66-8c69ae0a2f4a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-281656863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a9ee1af66824e368ddad5ea68a3201b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b72c5b5-be", "ovs_interfaceid": "9b72c5b5-be8a-41f5-abfe-94cdbd6de794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.393 221554 DEBUG nova.network.os_vif_util [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:4a:f7,bridge_name='br-int',has_traffic_filtering=True,id=9b72c5b5-be8a-41f5-abfe-94cdbd6de794,network=Network(868fa2eb-4f72-481a-9c66-8c69ae0a2f4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b72c5b5-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.393 221554 DEBUG os_vif [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:4a:f7,bridge_name='br-int',has_traffic_filtering=True,id=9b72c5b5-be8a-41f5-abfe-94cdbd6de794,network=Network(868fa2eb-4f72-481a-9c66-8c69ae0a2f4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b72c5b5-be') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.395 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.395 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b72c5b5-be, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.398 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.399 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.402 221554 INFO os_vif [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:4a:f7,bridge_name='br-int',has_traffic_filtering=True,id=9b72c5b5-be8a-41f5-abfe-94cdbd6de794,network=Network(868fa2eb-4f72-481a-9c66-8c69ae0a2f4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b72c5b5-be')#033[00m
Jan 31 02:53:14 np0005603609 podman[249346]: 2026-01-31 07:53:14.419885341 +0000 UTC m=+0.125327603 container cleanup d4172129a0330ee6e5cd63c0a3ac0adeb321865b04c895e1ba05434d804932d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 02:53:14 np0005603609 systemd[1]: libpod-conmon-d4172129a0330ee6e5cd63c0a3ac0adeb321865b04c895e1ba05434d804932d1.scope: Deactivated successfully.
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.452 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.452 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.457 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.457 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:53:14 np0005603609 podman[249393]: 2026-01-31 07:53:14.490981236 +0000 UTC m=+0.055236434 container remove d4172129a0330ee6e5cd63c0a3ac0adeb321865b04c895e1ba05434d804932d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.496 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[30e84e48-960a-463f-a0b2-aec459d8b8a2]: (4, ('Sat Jan 31 07:53:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a (d4172129a0330ee6e5cd63c0a3ac0adeb321865b04c895e1ba05434d804932d1)\nd4172129a0330ee6e5cd63c0a3ac0adeb321865b04c895e1ba05434d804932d1\nSat Jan 31 07:53:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a (d4172129a0330ee6e5cd63c0a3ac0adeb321865b04c895e1ba05434d804932d1)\nd4172129a0330ee6e5cd63c0a3ac0adeb321865b04c895e1ba05434d804932d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.497 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a56937-2d58-4dda-9e02-90962acdad83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.498 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap868fa2eb-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.500 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 kernel: tap868fa2eb-40: left promiscuous mode
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.566 221554 DEBUG oslo_concurrency.lockutils [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Acquiring lock "a214f153-3adc-4183-8407-d768f3fbf70e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.566 221554 DEBUG oslo_concurrency.lockutils [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.566 221554 DEBUG oslo_concurrency.lockutils [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Acquiring lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.567 221554 DEBUG oslo_concurrency.lockutils [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.567 221554 DEBUG oslo_concurrency.lockutils [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.568 221554 INFO nova.compute.manager [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Terminating instance#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.569 221554 DEBUG nova.compute.manager [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.570 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.570 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[09787749-d8e4-4c07-8dff-64ee3833325d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.582 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3a253b-5975-47b6-8a5f-6f61ca337910]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.583 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[16165014-c432-4478-b72e-bc975e05be10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:14 np0005603609 virtqemud[221292]: An error occurred, but the cause is unknown
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.593 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2d32b9d7-b4f7-4398-bc0f-717c2358dc8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620766, 'reachable_time': 25406, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249410, 'error': None, 'target': 'ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:14 np0005603609 systemd[1]: run-netns-ovnmeta\x2d868fa2eb\x2d4f72\x2d481a\x2d9c66\x2d8c69ae0a2f4a.mount: Deactivated successfully.
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.597 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-868fa2eb-4f72-481a-9c66-8c69ae0a2f4a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.597 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[299ff439-39ce-4532-a645-8dd3b5813be1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.678 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.678 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4434MB free_disk=20.921649932861328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.679 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.679 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.759 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance c440f73c-e6b3-46be-a2e3-702703379890 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.759 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance a214f153-3adc-4183-8407-d768f3fbf70e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.759 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.759 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:53:14 np0005603609 kernel: tap25c1ece7-cf (unregistering): left promiscuous mode
Jan 31 02:53:14 np0005603609 NetworkManager[49064]: <info>  [1769845994.7857] device (tap25c1ece7-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.788 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:14Z|00239|binding|INFO|Releasing lport 25c1ece7-cf11-445a-b58a-c1d924dadd91 from this chassis (sb_readonly=0)
Jan 31 02:53:14 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:14Z|00240|binding|INFO|Setting lport 25c1ece7-cf11-445a-b58a-c1d924dadd91 down in Southbound
Jan 31 02:53:14 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:14Z|00241|binding|INFO|Removing iface tap25c1ece7-cf ovn-installed in OVS
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.790 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.796 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 kernel: tap6b5e1733-ec (unregistering): left promiscuous mode
Jan 31 02:53:14 np0005603609 NetworkManager[49064]: <info>  [1769845994.8059] device (tap6b5e1733-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.807 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.808 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:db:d1 10.100.0.4'], port_security=['fa:16:3e:ae:db:d1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a214f153-3adc-4183-8407-d768f3fbf70e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b3fc65f-f4fb-4d63-a01a-0637865c750d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce2602aa08c4766b5f575340b920fd7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee6f790d-cbbe-4563-b40a-f23eb2235db6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9f7b22-8a34-401c-9a90-a1268249d403, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=25c1ece7-cf11-445a-b58a-c1d924dadd91) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.809 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 25c1ece7-cf11-445a-b58a-c1d924dadd91 in datapath 6b3fc65f-f4fb-4d63-a01a-0637865c750d unbound from our chassis#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.810 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6b3fc65f-f4fb-4d63-a01a-0637865c750d#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.814 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:14Z|00242|binding|INFO|Releasing lport 6b5e1733-ec2f-449c-bea0-144c6665f129 from this chassis (sb_readonly=0)
Jan 31 02:53:14 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:14Z|00243|binding|INFO|Setting lport 6b5e1733-ec2f-449c-bea0-144c6665f129 down in Southbound
Jan 31 02:53:14 np0005603609 ovn_controller[130359]: 2026-01-31T07:53:14Z|00244|binding|INFO|Removing iface tap6b5e1733-ec ovn-installed in OVS
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.816 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.819 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[78c6e551-b4ad-4ba3-a3aa-c09bfe170743]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.820 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.830 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:d3:5f 10.100.0.5'], port_security=['fa:16:3e:89:d3:5f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a214f153-3adc-4183-8407-d768f3fbf70e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b3fc65f-f4fb-4d63-a01a-0637865c750d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce2602aa08c4766b5f575340b920fd7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee6f790d-cbbe-4563-b40a-f23eb2235db6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9f7b22-8a34-401c-9a90-a1268249d403, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=6b5e1733-ec2f-449c-bea0-144c6665f129) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.844 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[e4082c46-880f-4170-b02c-e923a12f3d1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.848 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[055dd265-dbb7-454f-9294-242ac7244c42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:14 np0005603609 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Jan 31 02:53:14 np0005603609 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000004b.scope: Consumed 11.783s CPU time.
Jan 31 02:53:14 np0005603609 systemd-machined[190912]: Machine qemu-34-instance-0000004b terminated.
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.875 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b4d571-04ac-45fe-b80d-cca87f89e719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.883 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.895 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[81e9efeb-2cbe-43f9-8979-67b9a042e073]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6b3fc65f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:11:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622016, 'reachable_time': 41216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249429, 'error': None, 'target': 'ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.912 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9cbeb1ac-b0ba-4d3e-a8a5-3f681b98e672]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6b3fc65f-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622025, 'tstamp': 622025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249431, 'error': None, 'target': 'ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6b3fc65f-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622027, 'tstamp': 622027}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249431, 'error': None, 'target': 'ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.913 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b3fc65f-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.915 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.921 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.922 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b3fc65f-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.922 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.923 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6b3fc65f-f0, col_values=(('external_ids', {'iface-id': '8c6feaf2-1214-4d46-b7c0-b68a620c5b7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.923 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.924 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 6b5e1733-ec2f-449c-bea0-144c6665f129 in datapath 6b3fc65f-f4fb-4d63-a01a-0637865c750d unbound from our chassis#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.926 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6b3fc65f-f4fb-4d63-a01a-0637865c750d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.926 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[96c470a1-5659-40b4-8daa-64552909e991]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:14.927 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d namespace which is not needed anymore#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.966 221554 INFO nova.virt.libvirt.driver [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Deleting instance files /var/lib/nova/instances/c440f73c-e6b3-46be-a2e3-702703379890_del#033[00m
Jan 31 02:53:14 np0005603609 nova_compute[221550]: 2026-01-31 07:53:14.967 221554 INFO nova.virt.libvirt.driver [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Deletion of /var/lib/nova/instances/c440f73c-e6b3-46be-a2e3-702703379890_del complete#033[00m
Jan 31 02:53:14 np0005603609 NetworkManager[49064]: <info>  [1769845994.9914] manager: (tap6b5e1733-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.003 221554 INFO nova.virt.libvirt.driver [-] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Instance destroyed successfully.#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.004 221554 DEBUG nova.objects.instance [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lazy-loading 'resources' on Instance uuid a214f153-3adc-4183-8407-d768f3fbf70e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.036 221554 DEBUG nova.virt.libvirt.vif [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1617535596',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1617535596',id=75,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:53:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ce2602aa08c4766b5f575340b920fd7',ramdisk_id='',reservation_id='r-hgx0nnqm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_pr
oject_name='tempest-AttachInterfacesV270Test-667072375',owner_user_name='tempest-AttachInterfacesV270Test-667072375-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:53:01Z,user_data=None,user_id='db9fe8a6ca1443d099966af09b0a5402',uuid=a214f153-3adc-4183-8407-d768f3fbf70e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "address": "fa:16:3e:ae:db:d1", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c1ece7-cf", "ovs_interfaceid": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.037 221554 DEBUG nova.network.os_vif_util [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Converting VIF {"id": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "address": "fa:16:3e:ae:db:d1", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c1ece7-cf", "ovs_interfaceid": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.038 221554 DEBUG nova.network.os_vif_util [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:db:d1,bridge_name='br-int',has_traffic_filtering=True,id=25c1ece7-cf11-445a-b58a-c1d924dadd91,network=Network(6b3fc65f-f4fb-4d63-a01a-0637865c750d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c1ece7-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.038 221554 DEBUG os_vif [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:db:d1,bridge_name='br-int',has_traffic_filtering=True,id=25c1ece7-cf11-445a-b58a-c1d924dadd91,network=Network(6b3fc65f-f4fb-4d63-a01a-0637865c750d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c1ece7-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.040 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.040 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25c1ece7-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.041 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.043 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.044 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.045 221554 INFO nova.compute.manager [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Took 1.51 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.046 221554 DEBUG oslo.service.loopingcall [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.046 221554 DEBUG nova.compute.manager [-] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.046 221554 DEBUG nova.network.neutron [-] [instance: c440f73c-e6b3-46be-a2e3-702703379890] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.049 221554 INFO os_vif [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:db:d1,bridge_name='br-int',has_traffic_filtering=True,id=25c1ece7-cf11-445a-b58a-c1d924dadd91,network=Network(6b3fc65f-f4fb-4d63-a01a-0637865c750d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25c1ece7-cf')#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.049 221554 DEBUG nova.virt.libvirt.vif [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:52:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1617535596',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1617535596',id=75,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:53:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ce2602aa08c4766b5f575340b920fd7',ramdisk_id='',reservation_id='r-hgx0nnqm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_pr
oject_name='tempest-AttachInterfacesV270Test-667072375',owner_user_name='tempest-AttachInterfacesV270Test-667072375-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:53:01Z,user_data=None,user_id='db9fe8a6ca1443d099966af09b0a5402',uuid=a214f153-3adc-4183-8407-d768f3fbf70e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b5e1733-ec2f-449c-bea0-144c6665f129", "address": "fa:16:3e:89:d3:5f", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5e1733-ec", "ovs_interfaceid": "6b5e1733-ec2f-449c-bea0-144c6665f129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.050 221554 DEBUG nova.network.os_vif_util [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Converting VIF {"id": "6b5e1733-ec2f-449c-bea0-144c6665f129", "address": "fa:16:3e:89:d3:5f", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5e1733-ec", "ovs_interfaceid": "6b5e1733-ec2f-449c-bea0-144c6665f129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.050 221554 DEBUG nova.network.os_vif_util [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:d3:5f,bridge_name='br-int',has_traffic_filtering=True,id=6b5e1733-ec2f-449c-bea0-144c6665f129,network=Network(6b3fc65f-f4fb-4d63-a01a-0637865c750d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5e1733-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:53:15 np0005603609 neutron-haproxy-ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d[249223]: [NOTICE]   (249228) : haproxy version is 2.8.14-c23fe91
Jan 31 02:53:15 np0005603609 neutron-haproxy-ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d[249223]: [NOTICE]   (249228) : path to executable is /usr/sbin/haproxy
Jan 31 02:53:15 np0005603609 neutron-haproxy-ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d[249223]: [WARNING]  (249228) : Exiting Master process...
Jan 31 02:53:15 np0005603609 neutron-haproxy-ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d[249223]: [WARNING]  (249228) : Exiting Master process...
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.051 221554 DEBUG os_vif [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:d3:5f,bridge_name='br-int',has_traffic_filtering=True,id=6b5e1733-ec2f-449c-bea0-144c6665f129,network=Network(6b3fc65f-f4fb-4d63-a01a-0637865c750d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5e1733-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:53:15 np0005603609 neutron-haproxy-ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d[249223]: [ALERT]    (249228) : Current worker (249230) exited with code 143 (Terminated)
Jan 31 02:53:15 np0005603609 neutron-haproxy-ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d[249223]: [WARNING]  (249228) : All workers exited. Exiting... (0)
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.052 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.053 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b5e1733-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:15 np0005603609 systemd[1]: libpod-5da1ab5fb2d59da827630fcaed1076ad569cd434266bf1c835a23bea01a43748.scope: Deactivated successfully.
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.055 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.056 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.058 221554 INFO os_vif [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:d3:5f,bridge_name='br-int',has_traffic_filtering=True,id=6b5e1733-ec2f-449c-bea0-144c6665f129,network=Network(6b3fc65f-f4fb-4d63-a01a-0637865c750d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5e1733-ec')#033[00m
Jan 31 02:53:15 np0005603609 podman[249487]: 2026-01-31 07:53:15.062098576 +0000 UTC m=+0.043605972 container died 5da1ab5fb2d59da827630fcaed1076ad569cd434266bf1c835a23bea01a43748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 02:53:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:53:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:15.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:53:15 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5da1ab5fb2d59da827630fcaed1076ad569cd434266bf1c835a23bea01a43748-userdata-shm.mount: Deactivated successfully.
Jan 31 02:53:15 np0005603609 systemd[1]: var-lib-containers-storage-overlay-8307503229fbc7fc539562f2260b124c593a0c1f690d9a37fc9f896d82b7f5af-merged.mount: Deactivated successfully.
Jan 31 02:53:15 np0005603609 podman[249487]: 2026-01-31 07:53:15.13191189 +0000 UTC m=+0.113419286 container cleanup 5da1ab5fb2d59da827630fcaed1076ad569cd434266bf1c835a23bea01a43748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:53:15 np0005603609 systemd[1]: libpod-conmon-5da1ab5fb2d59da827630fcaed1076ad569cd434266bf1c835a23bea01a43748.scope: Deactivated successfully.
Jan 31 02:53:15 np0005603609 podman[249536]: 2026-01-31 07:53:15.219753718 +0000 UTC m=+0.069914397 container remove 5da1ab5fb2d59da827630fcaed1076ad569cd434266bf1c835a23bea01a43748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 02:53:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:15.223 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1e53921e-2fbd-4004-be12-c2746d4b89e9]: (4, ('Sat Jan 31 07:53:15 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d (5da1ab5fb2d59da827630fcaed1076ad569cd434266bf1c835a23bea01a43748)\n5da1ab5fb2d59da827630fcaed1076ad569cd434266bf1c835a23bea01a43748\nSat Jan 31 07:53:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d (5da1ab5fb2d59da827630fcaed1076ad569cd434266bf1c835a23bea01a43748)\n5da1ab5fb2d59da827630fcaed1076ad569cd434266bf1c835a23bea01a43748\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:15.225 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[26b3e6fb-7b45-4dc9-ba25-302fa053465c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:15.226 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b3fc65f-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.228 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:15 np0005603609 kernel: tap6b3fc65f-f0: left promiscuous mode
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.239 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:15.242 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ca5d3ce1-826b-4d5d-834f-b27f25908336]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:15.262 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[87b744c4-4d9d-401e-8c42-af18d1c31bd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:15.263 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc14d54-c1af-44cf-a010-227d938d1617]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:15.274 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[733c1edd-6da8-4924-90e4-c5d37375a5b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622009, 'reachable_time': 40064, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249552, 'error': None, 'target': 'ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:15.276 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6b3fc65f-f4fb-4d63-a01a-0637865c750d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:53:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:15.276 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[261baf56-fd6f-4cdb-a611-f680c9a976b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.356 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.361 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:53:15 np0005603609 systemd[1]: run-netns-ovnmeta\x2d6b3fc65f\x2df4fb\x2d4d63\x2da01a\x2d0637865c750d.mount: Deactivated successfully.
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.385 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.407 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.407 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.415 221554 DEBUG nova.compute.manager [req-46709fdf-8222-475c-a172-8efc8a39c401 req-51ff3d65-81cc-44a2-9f3d-7771d72dba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Received event network-vif-unplugged-9b72c5b5-be8a-41f5-abfe-94cdbd6de794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.415 221554 DEBUG oslo_concurrency.lockutils [req-46709fdf-8222-475c-a172-8efc8a39c401 req-51ff3d65-81cc-44a2-9f3d-7771d72dba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c440f73c-e6b3-46be-a2e3-702703379890-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.416 221554 DEBUG oslo_concurrency.lockutils [req-46709fdf-8222-475c-a172-8efc8a39c401 req-51ff3d65-81cc-44a2-9f3d-7771d72dba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c440f73c-e6b3-46be-a2e3-702703379890-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.416 221554 DEBUG oslo_concurrency.lockutils [req-46709fdf-8222-475c-a172-8efc8a39c401 req-51ff3d65-81cc-44a2-9f3d-7771d72dba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c440f73c-e6b3-46be-a2e3-702703379890-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.416 221554 DEBUG nova.compute.manager [req-46709fdf-8222-475c-a172-8efc8a39c401 req-51ff3d65-81cc-44a2-9f3d-7771d72dba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] No waiting events found dispatching network-vif-unplugged-9b72c5b5-be8a-41f5-abfe-94cdbd6de794 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.417 221554 DEBUG nova.compute.manager [req-46709fdf-8222-475c-a172-8efc8a39c401 req-51ff3d65-81cc-44a2-9f3d-7771d72dba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Received event network-vif-unplugged-9b72c5b5-be8a-41f5-abfe-94cdbd6de794 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.417 221554 DEBUG nova.compute.manager [req-46709fdf-8222-475c-a172-8efc8a39c401 req-51ff3d65-81cc-44a2-9f3d-7771d72dba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Received event network-vif-plugged-9b72c5b5-be8a-41f5-abfe-94cdbd6de794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.417 221554 DEBUG oslo_concurrency.lockutils [req-46709fdf-8222-475c-a172-8efc8a39c401 req-51ff3d65-81cc-44a2-9f3d-7771d72dba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c440f73c-e6b3-46be-a2e3-702703379890-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.417 221554 DEBUG oslo_concurrency.lockutils [req-46709fdf-8222-475c-a172-8efc8a39c401 req-51ff3d65-81cc-44a2-9f3d-7771d72dba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c440f73c-e6b3-46be-a2e3-702703379890-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.418 221554 DEBUG oslo_concurrency.lockutils [req-46709fdf-8222-475c-a172-8efc8a39c401 req-51ff3d65-81cc-44a2-9f3d-7771d72dba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c440f73c-e6b3-46be-a2e3-702703379890-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.418 221554 DEBUG nova.compute.manager [req-46709fdf-8222-475c-a172-8efc8a39c401 req-51ff3d65-81cc-44a2-9f3d-7771d72dba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] No waiting events found dispatching network-vif-plugged-9b72c5b5-be8a-41f5-abfe-94cdbd6de794 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.418 221554 WARNING nova.compute.manager [req-46709fdf-8222-475c-a172-8efc8a39c401 req-51ff3d65-81cc-44a2-9f3d-7771d72dba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Received unexpected event network-vif-plugged-9b72c5b5-be8a-41f5-abfe-94cdbd6de794 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.468 221554 DEBUG nova.compute.manager [req-83c2a6f4-7b31-4e79-b233-d8ce66131fe5 req-ee6be7e3-2489-4db5-843a-2fdb1ee54993 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received event network-vif-plugged-6b5e1733-ec2f-449c-bea0-144c6665f129 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.468 221554 DEBUG oslo_concurrency.lockutils [req-83c2a6f4-7b31-4e79-b233-d8ce66131fe5 req-ee6be7e3-2489-4db5-843a-2fdb1ee54993 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.469 221554 DEBUG oslo_concurrency.lockutils [req-83c2a6f4-7b31-4e79-b233-d8ce66131fe5 req-ee6be7e3-2489-4db5-843a-2fdb1ee54993 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.469 221554 DEBUG oslo_concurrency.lockutils [req-83c2a6f4-7b31-4e79-b233-d8ce66131fe5 req-ee6be7e3-2489-4db5-843a-2fdb1ee54993 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.469 221554 DEBUG nova.compute.manager [req-83c2a6f4-7b31-4e79-b233-d8ce66131fe5 req-ee6be7e3-2489-4db5-843a-2fdb1ee54993 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] No waiting events found dispatching network-vif-plugged-6b5e1733-ec2f-449c-bea0-144c6665f129 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.469 221554 WARNING nova.compute.manager [req-83c2a6f4-7b31-4e79-b233-d8ce66131fe5 req-ee6be7e3-2489-4db5-843a-2fdb1ee54993 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received unexpected event network-vif-plugged-6b5e1733-ec2f-449c-bea0-144c6665f129 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.598 221554 DEBUG nova.compute.manager [req-f90d405e-525b-4fa1-8876-7e8a945e2f32 req-078d8897-961f-4b48-8518-9b59357462e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received event network-vif-unplugged-25c1ece7-cf11-445a-b58a-c1d924dadd91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.598 221554 DEBUG oslo_concurrency.lockutils [req-f90d405e-525b-4fa1-8876-7e8a945e2f32 req-078d8897-961f-4b48-8518-9b59357462e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.598 221554 DEBUG oslo_concurrency.lockutils [req-f90d405e-525b-4fa1-8876-7e8a945e2f32 req-078d8897-961f-4b48-8518-9b59357462e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.599 221554 DEBUG oslo_concurrency.lockutils [req-f90d405e-525b-4fa1-8876-7e8a945e2f32 req-078d8897-961f-4b48-8518-9b59357462e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.599 221554 DEBUG nova.compute.manager [req-f90d405e-525b-4fa1-8876-7e8a945e2f32 req-078d8897-961f-4b48-8518-9b59357462e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] No waiting events found dispatching network-vif-unplugged-25c1ece7-cf11-445a-b58a-c1d924dadd91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.599 221554 DEBUG nova.compute.manager [req-f90d405e-525b-4fa1-8876-7e8a945e2f32 req-078d8897-961f-4b48-8518-9b59357462e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received event network-vif-unplugged-25c1ece7-cf11-445a-b58a-c1d924dadd91 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.665 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.666 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:15.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:15 np0005603609 nova_compute[221550]: 2026-01-31 07:53:15.721 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:15.721 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:53:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:15.722 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:53:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:16 np0005603609 nova_compute[221550]: 2026-01-31 07:53:16.412 221554 DEBUG nova.network.neutron [-] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:53:16 np0005603609 nova_compute[221550]: 2026-01-31 07:53:16.430 221554 INFO nova.compute.manager [-] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Took 1.38 seconds to deallocate network for instance.#033[00m
Jan 31 02:53:16 np0005603609 nova_compute[221550]: 2026-01-31 07:53:16.662 221554 INFO nova.compute.manager [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Took 0.23 seconds to detach 1 volumes for instance.#033[00m
Jan 31 02:53:16 np0005603609 nova_compute[221550]: 2026-01-31 07:53:16.663 221554 DEBUG nova.compute.manager [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Deleting volume: a1b4a963-bab3-4e88-b617-96814bf6094a _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Jan 31 02:53:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:16.724 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:16 np0005603609 nova_compute[221550]: 2026-01-31 07:53:16.929 221554 DEBUG oslo_concurrency.lockutils [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:16 np0005603609 nova_compute[221550]: 2026-01-31 07:53:16.930 221554 DEBUG oslo_concurrency.lockutils [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:16 np0005603609 nova_compute[221550]: 2026-01-31 07:53:16.990 221554 DEBUG oslo_concurrency.processutils [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:17.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:53:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2228712792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.403 221554 DEBUG oslo_concurrency.processutils [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.411 221554 DEBUG nova.compute.provider_tree [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.432 221554 DEBUG nova.scheduler.client.report [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.475 221554 DEBUG oslo_concurrency.lockutils [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.565 221554 INFO nova.scheduler.client.report [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Deleted allocations for instance c440f73c-e6b3-46be-a2e3-702703379890#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.609 221554 DEBUG nova.compute.manager [req-281c1ff3-722b-4487-9e8d-3b5a93a673fe req-be7a9d20-7e53-47a1-af69-ae29a058ba23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received event network-vif-unplugged-6b5e1733-ec2f-449c-bea0-144c6665f129 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.609 221554 DEBUG oslo_concurrency.lockutils [req-281c1ff3-722b-4487-9e8d-3b5a93a673fe req-be7a9d20-7e53-47a1-af69-ae29a058ba23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.610 221554 DEBUG oslo_concurrency.lockutils [req-281c1ff3-722b-4487-9e8d-3b5a93a673fe req-be7a9d20-7e53-47a1-af69-ae29a058ba23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.610 221554 DEBUG oslo_concurrency.lockutils [req-281c1ff3-722b-4487-9e8d-3b5a93a673fe req-be7a9d20-7e53-47a1-af69-ae29a058ba23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.610 221554 DEBUG nova.compute.manager [req-281c1ff3-722b-4487-9e8d-3b5a93a673fe req-be7a9d20-7e53-47a1-af69-ae29a058ba23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] No waiting events found dispatching network-vif-unplugged-6b5e1733-ec2f-449c-bea0-144c6665f129 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.610 221554 DEBUG nova.compute.manager [req-281c1ff3-722b-4487-9e8d-3b5a93a673fe req-be7a9d20-7e53-47a1-af69-ae29a058ba23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received event network-vif-unplugged-6b5e1733-ec2f-449c-bea0-144c6665f129 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.610 221554 DEBUG nova.compute.manager [req-281c1ff3-722b-4487-9e8d-3b5a93a673fe req-be7a9d20-7e53-47a1-af69-ae29a058ba23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received event network-vif-plugged-6b5e1733-ec2f-449c-bea0-144c6665f129 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.611 221554 DEBUG oslo_concurrency.lockutils [req-281c1ff3-722b-4487-9e8d-3b5a93a673fe req-be7a9d20-7e53-47a1-af69-ae29a058ba23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.611 221554 DEBUG oslo_concurrency.lockutils [req-281c1ff3-722b-4487-9e8d-3b5a93a673fe req-be7a9d20-7e53-47a1-af69-ae29a058ba23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.611 221554 DEBUG oslo_concurrency.lockutils [req-281c1ff3-722b-4487-9e8d-3b5a93a673fe req-be7a9d20-7e53-47a1-af69-ae29a058ba23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.611 221554 DEBUG nova.compute.manager [req-281c1ff3-722b-4487-9e8d-3b5a93a673fe req-be7a9d20-7e53-47a1-af69-ae29a058ba23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] No waiting events found dispatching network-vif-plugged-6b5e1733-ec2f-449c-bea0-144c6665f129 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.611 221554 WARNING nova.compute.manager [req-281c1ff3-722b-4487-9e8d-3b5a93a673fe req-be7a9d20-7e53-47a1-af69-ae29a058ba23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received unexpected event network-vif-plugged-6b5e1733-ec2f-449c-bea0-144c6665f129 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.612 221554 DEBUG nova.compute.manager [req-281c1ff3-722b-4487-9e8d-3b5a93a673fe req-be7a9d20-7e53-47a1-af69-ae29a058ba23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Received event network-vif-deleted-9b72c5b5-be8a-41f5-abfe-94cdbd6de794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.628 221554 DEBUG oslo_concurrency.lockutils [None req-a16c276a-c2c4-4142-9214-a093e91a11da 93403129e78e44f3916d321433a31bd2 4a9ee1af66824e368ddad5ea68a3201b - - default default] Lock "c440f73c-e6b3-46be-a2e3-702703379890" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:17.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.695 221554 DEBUG nova.compute.manager [req-4b2b2ad9-7b04-460c-8b82-ef1be1441d81 req-631b0616-d6a4-49fe-b65d-65d4f0df5ac1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received event network-vif-plugged-25c1ece7-cf11-445a-b58a-c1d924dadd91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.695 221554 DEBUG oslo_concurrency.lockutils [req-4b2b2ad9-7b04-460c-8b82-ef1be1441d81 req-631b0616-d6a4-49fe-b65d-65d4f0df5ac1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.696 221554 DEBUG oslo_concurrency.lockutils [req-4b2b2ad9-7b04-460c-8b82-ef1be1441d81 req-631b0616-d6a4-49fe-b65d-65d4f0df5ac1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.696 221554 DEBUG oslo_concurrency.lockutils [req-4b2b2ad9-7b04-460c-8b82-ef1be1441d81 req-631b0616-d6a4-49fe-b65d-65d4f0df5ac1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.696 221554 DEBUG nova.compute.manager [req-4b2b2ad9-7b04-460c-8b82-ef1be1441d81 req-631b0616-d6a4-49fe-b65d-65d4f0df5ac1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] No waiting events found dispatching network-vif-plugged-25c1ece7-cf11-445a-b58a-c1d924dadd91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.697 221554 WARNING nova.compute.manager [req-4b2b2ad9-7b04-460c-8b82-ef1be1441d81 req-631b0616-d6a4-49fe-b65d-65d4f0df5ac1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received unexpected event network-vif-plugged-25c1ece7-cf11-445a-b58a-c1d924dadd91 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:53:17 np0005603609 nova_compute[221550]: 2026-01-31 07:53:17.851 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:18 np0005603609 nova_compute[221550]: 2026-01-31 07:53:18.115 221554 INFO nova.virt.libvirt.driver [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Deleting instance files /var/lib/nova/instances/a214f153-3adc-4183-8407-d768f3fbf70e_del#033[00m
Jan 31 02:53:18 np0005603609 nova_compute[221550]: 2026-01-31 07:53:18.115 221554 INFO nova.virt.libvirt.driver [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Deletion of /var/lib/nova/instances/a214f153-3adc-4183-8407-d768f3fbf70e_del complete#033[00m
Jan 31 02:53:18 np0005603609 nova_compute[221550]: 2026-01-31 07:53:18.256 221554 INFO nova.compute.manager [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Took 3.69 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:53:18 np0005603609 nova_compute[221550]: 2026-01-31 07:53:18.256 221554 DEBUG oslo.service.loopingcall [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:53:18 np0005603609 nova_compute[221550]: 2026-01-31 07:53:18.257 221554 DEBUG nova.compute.manager [-] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:53:18 np0005603609 nova_compute[221550]: 2026-01-31 07:53:18.257 221554 DEBUG nova.network.neutron [-] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:53:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:53:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:19.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:53:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:19.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:19 np0005603609 nova_compute[221550]: 2026-01-31 07:53:19.857 221554 DEBUG nova.compute.manager [req-65cda0af-7eb0-450d-88f0-b4158c61d5e2 req-9cdf9bcb-0a02-4723-adb1-3ffb8a6b65b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received event network-vif-deleted-6b5e1733-ec2f-449c-bea0-144c6665f129 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:53:19 np0005603609 nova_compute[221550]: 2026-01-31 07:53:19.858 221554 INFO nova.compute.manager [req-65cda0af-7eb0-450d-88f0-b4158c61d5e2 req-9cdf9bcb-0a02-4723-adb1-3ffb8a6b65b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Neutron deleted interface 6b5e1733-ec2f-449c-bea0-144c6665f129; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 02:53:19 np0005603609 nova_compute[221550]: 2026-01-31 07:53:19.858 221554 DEBUG nova.network.neutron [req-65cda0af-7eb0-450d-88f0-b4158c61d5e2 req-9cdf9bcb-0a02-4723-adb1-3ffb8a6b65b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Updating instance_info_cache with network_info: [{"id": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "address": "fa:16:3e:ae:db:d1", "network": {"id": "6b3fc65f-f4fb-4d63-a01a-0637865c750d", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1104914402-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce2602aa08c4766b5f575340b920fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25c1ece7-cf", "ovs_interfaceid": "25c1ece7-cf11-445a-b58a-c1d924dadd91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:53:19 np0005603609 nova_compute[221550]: 2026-01-31 07:53:19.892 221554 DEBUG nova.compute.manager [req-65cda0af-7eb0-450d-88f0-b4158c61d5e2 req-9cdf9bcb-0a02-4723-adb1-3ffb8a6b65b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Detach interface failed, port_id=6b5e1733-ec2f-449c-bea0-144c6665f129, reason: Instance a214f153-3adc-4183-8407-d768f3fbf70e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 02:53:20 np0005603609 nova_compute[221550]: 2026-01-31 07:53:20.057 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:20 np0005603609 nova_compute[221550]: 2026-01-31 07:53:20.481 221554 DEBUG nova.network.neutron [-] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:53:20 np0005603609 nova_compute[221550]: 2026-01-31 07:53:20.530 221554 INFO nova.compute.manager [-] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Took 2.27 seconds to deallocate network for instance.#033[00m
Jan 31 02:53:20 np0005603609 nova_compute[221550]: 2026-01-31 07:53:20.607 221554 DEBUG oslo_concurrency.lockutils [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:20 np0005603609 nova_compute[221550]: 2026-01-31 07:53:20.608 221554 DEBUG oslo_concurrency.lockutils [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:20 np0005603609 nova_compute[221550]: 2026-01-31 07:53:20.649 221554 DEBUG oslo_concurrency.processutils [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:53:21 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1717920373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:53:21 np0005603609 nova_compute[221550]: 2026-01-31 07:53:21.064 221554 DEBUG oslo_concurrency.processutils [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:21 np0005603609 nova_compute[221550]: 2026-01-31 07:53:21.070 221554 DEBUG nova.compute.provider_tree [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:53:21 np0005603609 nova_compute[221550]: 2026-01-31 07:53:21.092 221554 DEBUG nova.scheduler.client.report [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:53:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:53:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:21.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:53:21 np0005603609 nova_compute[221550]: 2026-01-31 07:53:21.125 221554 DEBUG oslo_concurrency.lockutils [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:21 np0005603609 nova_compute[221550]: 2026-01-31 07:53:21.161 221554 INFO nova.scheduler.client.report [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Deleted allocations for instance a214f153-3adc-4183-8407-d768f3fbf70e#033[00m
Jan 31 02:53:21 np0005603609 nova_compute[221550]: 2026-01-31 07:53:21.252 221554 DEBUG oslo_concurrency.lockutils [None req-53675f00-5964-40d6-a56f-0a04c666b2c7 db9fe8a6ca1443d099966af09b0a5402 4ce2602aa08c4766b5f575340b920fd7 - - default default] Lock "a214f153-3adc-4183-8407-d768f3fbf70e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:21.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:21 np0005603609 nova_compute[221550]: 2026-01-31 07:53:21.943 221554 DEBUG nova.compute.manager [req-3e86d22d-919a-4e54-877e-c715ac7b481c req-c427c519-fc77-43da-a42a-9bfe9c37fa5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Received event network-vif-deleted-25c1ece7-cf11-445a-b58a-c1d924dadd91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:53:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:53:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2052022062' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:53:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:53:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2052022062' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:53:22 np0005603609 nova_compute[221550]: 2026-01-31 07:53:22.854 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:23.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.003000070s ======
Jan 31 02:53:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:23.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000070s
Jan 31 02:53:25 np0005603609 nova_compute[221550]: 2026-01-31 07:53:25.060 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:25.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:25.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:27.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:27 np0005603609 nova_compute[221550]: 2026-01-31 07:53:27.377 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:27 np0005603609 nova_compute[221550]: 2026-01-31 07:53:27.533 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:27.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:27 np0005603609 nova_compute[221550]: 2026-01-31 07:53:27.856 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:29.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:29 np0005603609 nova_compute[221550]: 2026-01-31 07:53:29.375 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845994.3731544, c440f73c-e6b3-46be-a2e3-702703379890 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:53:29 np0005603609 nova_compute[221550]: 2026-01-31 07:53:29.375 221554 INFO nova.compute.manager [-] [instance: c440f73c-e6b3-46be-a2e3-702703379890] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:53:29 np0005603609 nova_compute[221550]: 2026-01-31 07:53:29.412 221554 DEBUG nova.compute.manager [None req-29a164ce-acb6-4280-b673-253b2d92e66d - - - - - -] [instance: c440f73c-e6b3-46be-a2e3-702703379890] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:53:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:29.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:30 np0005603609 nova_compute[221550]: 2026-01-31 07:53:30.001 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845994.9998586, a214f153-3adc-4183-8407-d768f3fbf70e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:53:30 np0005603609 nova_compute[221550]: 2026-01-31 07:53:30.001 221554 INFO nova.compute.manager [-] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:53:30 np0005603609 nova_compute[221550]: 2026-01-31 07:53:30.055 221554 DEBUG nova.compute.manager [None req-569de0b1-5b59-4be3-a9a6-f5a2d27e0288 - - - - - -] [instance: a214f153-3adc-4183-8407-d768f3fbf70e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:53:30 np0005603609 nova_compute[221550]: 2026-01-31 07:53:30.063 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:31.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:31.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:32 np0005603609 nova_compute[221550]: 2026-01-31 07:53:32.872 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:33.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:53:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:33.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:53:35 np0005603609 nova_compute[221550]: 2026-01-31 07:53:35.066 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:35.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:35.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:53:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:37.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:53:37 np0005603609 podman[249605]: 2026-01-31 07:53:37.222652151 +0000 UTC m=+0.092240495 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:53:37 np0005603609 podman[249604]: 2026-01-31 07:53:37.273778833 +0000 UTC m=+0.143572002 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 02:53:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:37.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:37 np0005603609 nova_compute[221550]: 2026-01-31 07:53:37.874 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:39.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:40 np0005603609 nova_compute[221550]: 2026-01-31 07:53:40.069 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:40.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:41.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:53:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:53:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:53:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:53:41 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:53:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:42.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:42 np0005603609 nova_compute[221550]: 2026-01-31 07:53:42.876 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:53:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:43.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:53:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:53:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:44.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:53:45 np0005603609 nova_compute[221550]: 2026-01-31 07:53:45.115 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:53:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:45.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:53:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:46.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:47.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:47 np0005603609 nova_compute[221550]: 2026-01-31 07:53:47.877 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:53:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:53:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:53:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:48.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:53:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:53:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:49.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:53:50 np0005603609 nova_compute[221550]: 2026-01-31 07:53:50.119 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:50.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:51.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:52.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:52 np0005603609 nova_compute[221550]: 2026-01-31 07:53:52.879 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:53.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:54.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:55 np0005603609 nova_compute[221550]: 2026-01-31 07:53:55.122 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:53:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:55.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:53:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:56.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:57.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:57 np0005603609 nova_compute[221550]: 2026-01-31 07:53:57.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:57 np0005603609 nova_compute[221550]: 2026-01-31 07:53:57.881 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:57 np0005603609 nova_compute[221550]: 2026-01-31 07:53:57.941 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:57.940 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:53:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:53:57.943 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:53:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:58.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:53:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:53:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:59.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:54:00 np0005603609 nova_compute[221550]: 2026-01-31 07:54:00.126 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:00.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:01.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:54:01.945 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:54:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:02.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:02 np0005603609 nova_compute[221550]: 2026-01-31 07:54:02.883 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:54:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:03.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:54:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:04.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:05 np0005603609 nova_compute[221550]: 2026-01-31 07:54:05.130 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:05.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:06.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:54:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:07.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:54:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:54:07.488 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:54:07.489 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:54:07.489 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:07 np0005603609 nova_compute[221550]: 2026-01-31 07:54:07.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:07 np0005603609 nova_compute[221550]: 2026-01-31 07:54:07.885 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:08 np0005603609 podman[249835]: 2026-01-31 07:54:08.176907258 +0000 UTC m=+0.054255300 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 02:54:08 np0005603609 podman[249834]: 2026-01-31 07:54:08.226750569 +0000 UTC m=+0.104537161 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 02:54:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:08.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:08 np0005603609 nova_compute[221550]: 2026-01-31 07:54:08.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:09.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:10 np0005603609 nova_compute[221550]: 2026-01-31 07:54:10.131 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:10.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:10 np0005603609 nova_compute[221550]: 2026-01-31 07:54:10.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:10 np0005603609 nova_compute[221550]: 2026-01-31 07:54:10.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:54:10 np0005603609 nova_compute[221550]: 2026-01-31 07:54:10.903 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:54:10 np0005603609 nova_compute[221550]: 2026-01-31 07:54:10.903 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:11.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:12.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:12 np0005603609 nova_compute[221550]: 2026-01-31 07:54:12.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:12 np0005603609 nova_compute[221550]: 2026-01-31 07:54:12.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:12 np0005603609 nova_compute[221550]: 2026-01-31 07:54:12.886 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:54:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:13.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:54:13 np0005603609 nova_compute[221550]: 2026-01-31 07:54:13.810 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:13 np0005603609 nova_compute[221550]: 2026-01-31 07:54:13.811 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:13 np0005603609 nova_compute[221550]: 2026-01-31 07:54:13.811 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:13 np0005603609 nova_compute[221550]: 2026-01-31 07:54:13.811 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:54:13 np0005603609 nova_compute[221550]: 2026-01-31 07:54:13.811 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:54:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:54:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/895703752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:54:14 np0005603609 nova_compute[221550]: 2026-01-31 07:54:14.216 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:54:14 np0005603609 nova_compute[221550]: 2026-01-31 07:54:14.361 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:54:14 np0005603609 nova_compute[221550]: 2026-01-31 07:54:14.362 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4697MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:54:14 np0005603609 nova_compute[221550]: 2026-01-31 07:54:14.362 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:14 np0005603609 nova_compute[221550]: 2026-01-31 07:54:14.362 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:14.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:15.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:15 np0005603609 nova_compute[221550]: 2026-01-31 07:54:15.179 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:54:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:16.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:54:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:17.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:17 np0005603609 nova_compute[221550]: 2026-01-31 07:54:17.757 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:54:17 np0005603609 nova_compute[221550]: 2026-01-31 07:54:17.757 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:54:17 np0005603609 nova_compute[221550]: 2026-01-31 07:54:17.888 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:18.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:18 np0005603609 nova_compute[221550]: 2026-01-31 07:54:18.488 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:54:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:54:18 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3230444530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:54:18 np0005603609 nova_compute[221550]: 2026-01-31 07:54:18.859 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:54:18 np0005603609 nova_compute[221550]: 2026-01-31 07:54:18.863 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:54:19 np0005603609 nova_compute[221550]: 2026-01-31 07:54:19.099 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:54:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:54:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:19.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:54:19 np0005603609 nova_compute[221550]: 2026-01-31 07:54:19.724 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:54:19 np0005603609 nova_compute[221550]: 2026-01-31 07:54:19.724 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:20 np0005603609 nova_compute[221550]: 2026-01-31 07:54:20.183 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:20.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:20 np0005603609 nova_compute[221550]: 2026-01-31 07:54:20.723 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:20 np0005603609 nova_compute[221550]: 2026-01-31 07:54:20.723 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:20 np0005603609 nova_compute[221550]: 2026-01-31 07:54:20.723 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:54:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:21.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:54:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:22.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:54:22 np0005603609 nova_compute[221550]: 2026-01-31 07:54:22.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:22 np0005603609 nova_compute[221550]: 2026-01-31 07:54:22.890 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:23.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:24.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:25 np0005603609 nova_compute[221550]: 2026-01-31 07:54:25.187 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:25.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:54:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:26.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:54:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:54:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:27.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:54:27 np0005603609 nova_compute[221550]: 2026-01-31 07:54:27.892 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:28.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:29.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:30 np0005603609 nova_compute[221550]: 2026-01-31 07:54:30.191 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:30.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:31.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:31 np0005603609 ovn_controller[130359]: 2026-01-31T07:54:31Z|00245|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 02:54:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:32.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:32 np0005603609 nova_compute[221550]: 2026-01-31 07:54:32.894 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:33.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:34.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:35 np0005603609 nova_compute[221550]: 2026-01-31 07:54:35.195 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:35.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:36.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:54:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:37.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:54:37 np0005603609 nova_compute[221550]: 2026-01-31 07:54:37.898 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:54:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:38.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:54:39 np0005603609 podman[249926]: 2026-01-31 07:54:39.177036084 +0000 UTC m=+0.060353807 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Jan 31 02:54:39 np0005603609 podman[249925]: 2026-01-31 07:54:39.201314929 +0000 UTC m=+0.084547230 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, 
org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 31 02:54:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:54:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:39.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:54:40 np0005603609 nova_compute[221550]: 2026-01-31 07:54:40.198 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:40.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:41.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:42.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:42 np0005603609 nova_compute[221550]: 2026-01-31 07:54:42.900 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:43.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:44.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:54:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2192602938' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:54:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:54:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2192602938' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:54:45 np0005603609 nova_compute[221550]: 2026-01-31 07:54:45.202 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:45.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:46.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:54:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:47.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:54:47 np0005603609 nova_compute[221550]: 2026-01-31 07:54:47.902 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:48.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:49.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:54:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:54:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:54:50 np0005603609 nova_compute[221550]: 2026-01-31 07:54:50.115 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "2d377db9-047f-4be1-a0cf-8189254def22" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:50 np0005603609 nova_compute[221550]: 2026-01-31 07:54:50.116 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:50 np0005603609 nova_compute[221550]: 2026-01-31 07:54:50.204 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:50.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:50 np0005603609 nova_compute[221550]: 2026-01-31 07:54:50.657 221554 DEBUG nova.compute.manager [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:54:50 np0005603609 nova_compute[221550]: 2026-01-31 07:54:50.898 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:50 np0005603609 nova_compute[221550]: 2026-01-31 07:54:50.898 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:50 np0005603609 nova_compute[221550]: 2026-01-31 07:54:50.905 221554 DEBUG nova.virt.hardware [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:54:50 np0005603609 nova_compute[221550]: 2026-01-31 07:54:50.905 221554 INFO nova.compute.claims [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:54:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:51.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:51 np0005603609 nova_compute[221550]: 2026-01-31 07:54:51.270 221554 DEBUG oslo_concurrency.processutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:54:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:54:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1505041532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:54:51 np0005603609 nova_compute[221550]: 2026-01-31 07:54:51.742 221554 DEBUG oslo_concurrency.processutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:54:51 np0005603609 nova_compute[221550]: 2026-01-31 07:54:51.748 221554 DEBUG nova.compute.provider_tree [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:54:51 np0005603609 nova_compute[221550]: 2026-01-31 07:54:51.784 221554 DEBUG nova.scheduler.client.report [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:54:51 np0005603609 nova_compute[221550]: 2026-01-31 07:54:51.836 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:51 np0005603609 nova_compute[221550]: 2026-01-31 07:54:51.837 221554 DEBUG nova.compute.manager [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:54:51 np0005603609 nova_compute[221550]: 2026-01-31 07:54:51.928 221554 DEBUG nova.compute.manager [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:54:51 np0005603609 nova_compute[221550]: 2026-01-31 07:54:51.929 221554 DEBUG nova.network.neutron [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:54:51 np0005603609 nova_compute[221550]: 2026-01-31 07:54:51.987 221554 INFO nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.043 221554 DEBUG nova.compute.manager [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.127 221554 DEBUG nova.policy [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1e1c5eef3d264666bc90735dd338d82a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a8cbd6cc22654dfab04487522a63426c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.188 221554 DEBUG nova.compute.manager [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.190 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.191 221554 INFO nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Creating image(s)#033[00m
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.370 221554 DEBUG nova.storage.rbd_utils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 2d377db9-047f-4be1-a0cf-8189254def22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.401 221554 DEBUG nova.storage.rbd_utils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 2d377db9-047f-4be1-a0cf-8189254def22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.433 221554 DEBUG nova.storage.rbd_utils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 2d377db9-047f-4be1-a0cf-8189254def22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.437 221554 DEBUG oslo_concurrency.processutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.501 221554 DEBUG oslo_concurrency.processutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.502 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.503 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.503 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:52.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.528 221554 DEBUG nova.storage.rbd_utils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 2d377db9-047f-4be1-a0cf-8189254def22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.531 221554 DEBUG oslo_concurrency.processutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 2d377db9-047f-4be1-a0cf-8189254def22_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:54:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:54:52.732 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:54:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:54:52.733 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.733 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:52 np0005603609 nova_compute[221550]: 2026-01-31 07:54:52.905 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:54:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:53.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:54:53 np0005603609 nova_compute[221550]: 2026-01-31 07:54:53.727 221554 DEBUG nova.network.neutron [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Successfully created port: 46923f1f-6cb6-4592-94c9-5a95246f0b99 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:54:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:54.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:54:54.735 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:54:55 np0005603609 nova_compute[221550]: 2026-01-31 07:54:55.207 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:55.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:55 np0005603609 nova_compute[221550]: 2026-01-31 07:54:55.458 221554 DEBUG oslo_concurrency.processutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 2d377db9-047f-4be1-a0cf-8189254def22_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.926s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:54:55 np0005603609 nova_compute[221550]: 2026-01-31 07:54:55.543 221554 DEBUG nova.storage.rbd_utils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] resizing rbd image 2d377db9-047f-4be1-a0cf-8189254def22_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:54:56 np0005603609 nova_compute[221550]: 2026-01-31 07:54:56.023 221554 DEBUG nova.network.neutron [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Successfully updated port: 46923f1f-6cb6-4592-94c9-5a95246f0b99 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:54:56 np0005603609 nova_compute[221550]: 2026-01-31 07:54:56.066 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:54:56 np0005603609 nova_compute[221550]: 2026-01-31 07:54:56.066 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquired lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:54:56 np0005603609 nova_compute[221550]: 2026-01-31 07:54:56.067 221554 DEBUG nova.network.neutron [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:54:56 np0005603609 nova_compute[221550]: 2026-01-31 07:54:56.169 221554 DEBUG nova.compute.manager [req-0a015b4a-5736-40e6-8527-ac282c24e02a req-f7455f3a-4487-4337-9b2c-b66021415183 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received event network-changed-46923f1f-6cb6-4592-94c9-5a95246f0b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:54:56 np0005603609 nova_compute[221550]: 2026-01-31 07:54:56.170 221554 DEBUG nova.compute.manager [req-0a015b4a-5736-40e6-8527-ac282c24e02a req-f7455f3a-4487-4337-9b2c-b66021415183 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Refreshing instance network info cache due to event network-changed-46923f1f-6cb6-4592-94c9-5a95246f0b99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:54:56 np0005603609 nova_compute[221550]: 2026-01-31 07:54:56.170 221554 DEBUG oslo_concurrency.lockutils [req-0a015b4a-5736-40e6-8527-ac282c24e02a req-f7455f3a-4487-4337-9b2c-b66021415183 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:54:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:56 np0005603609 nova_compute[221550]: 2026-01-31 07:54:56.350 221554 DEBUG nova.network.neutron [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:54:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:56.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:56 np0005603609 nova_compute[221550]: 2026-01-31 07:54:56.896 221554 DEBUG nova.objects.instance [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'migration_context' on Instance uuid 2d377db9-047f-4be1-a0cf-8189254def22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:54:56 np0005603609 nova_compute[221550]: 2026-01-31 07:54:56.919 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:54:56 np0005603609 nova_compute[221550]: 2026-01-31 07:54:56.920 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Ensure instance console log exists: /var/lib/nova/instances/2d377db9-047f-4be1-a0cf-8189254def22/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:54:56 np0005603609 nova_compute[221550]: 2026-01-31 07:54:56.920 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:56 np0005603609 nova_compute[221550]: 2026-01-31 07:54:56.921 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:56 np0005603609 nova_compute[221550]: 2026-01-31 07:54:56.921 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:54:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:57.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:54:57 np0005603609 nova_compute[221550]: 2026-01-31 07:54:57.907 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:58.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:54:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:54:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:59.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:54:59 np0005603609 nova_compute[221550]: 2026-01-31 07:54:59.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.094 221554 DEBUG nova.network.neutron [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updating instance_info_cache with network_info: [{"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.210 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.304 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Releasing lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.305 221554 DEBUG nova.compute.manager [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Instance network_info: |[{"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.306 221554 DEBUG oslo_concurrency.lockutils [req-0a015b4a-5736-40e6-8527-ac282c24e02a req-f7455f3a-4487-4337-9b2c-b66021415183 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.306 221554 DEBUG nova.network.neutron [req-0a015b4a-5736-40e6-8527-ac282c24e02a req-f7455f3a-4487-4337-9b2c-b66021415183 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Refreshing network info cache for port 46923f1f-6cb6-4592-94c9-5a95246f0b99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.308 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Start _get_guest_xml network_info=[{"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.313 221554 WARNING nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.319 221554 DEBUG nova.virt.libvirt.host [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.320 221554 DEBUG nova.virt.libvirt.host [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.323 221554 DEBUG nova.virt.libvirt.host [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.324 221554 DEBUG nova.virt.libvirt.host [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.325 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.325 221554 DEBUG nova.virt.hardware [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.326 221554 DEBUG nova.virt.hardware [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.326 221554 DEBUG nova.virt.hardware [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.327 221554 DEBUG nova.virt.hardware [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.327 221554 DEBUG nova.virt.hardware [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.327 221554 DEBUG nova.virt.hardware [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.327 221554 DEBUG nova.virt.hardware [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.328 221554 DEBUG nova.virt.hardware [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.328 221554 DEBUG nova.virt.hardware [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.328 221554 DEBUG nova.virt.hardware [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.329 221554 DEBUG nova.virt.hardware [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:55:00 np0005603609 nova_compute[221550]: 2026-01-31 07:55:00.332 221554 DEBUG oslo_concurrency.processutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:55:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:00.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:55:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:55:01 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/263086270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.073 221554 DEBUG oslo_concurrency.processutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.741s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.113 221554 DEBUG nova.storage.rbd_utils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 2d377db9-047f-4be1-a0cf-8189254def22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.119 221554 DEBUG oslo_concurrency.processutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:01.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:55:01 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3259012295' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.549 221554 DEBUG oslo_concurrency.processutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.550 221554 DEBUG nova.virt.libvirt.vif [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:54:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-984890616',display_name='tempest-tempest.common.compute-instance-984890616',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-984890616',id=77,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnxydFFT/bcqoM03GknAV6VVEv+uI9/M2thVO454uMQmK087L6GvpajnIifCsqOX+pET7jqX84pd3d5htElL3GbJr344TJNF3ti2SMNzp16z2rTGy4GU5WRv9em26Ul9Q==',key_name='tempest-keypair-1143543931',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-6f0xru8d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:54:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=2d377db9-047f-4be1-a0cf-8189254def22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.551 221554 DEBUG nova.network.os_vif_util [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.552 221554 DEBUG nova.network.os_vif_util [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:00:82,bridge_name='br-int',has_traffic_filtering=True,id=46923f1f-6cb6-4592-94c9-5a95246f0b99,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46923f1f-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.553 221554 DEBUG nova.objects.instance [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d377db9-047f-4be1-a0cf-8189254def22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.680 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  <uuid>2d377db9-047f-4be1-a0cf-8189254def22</uuid>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  <name>instance-0000004d</name>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <nova:name>tempest-tempest.common.compute-instance-984890616</nova:name>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:55:00</nova:creationTime>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        <nova:port uuid="46923f1f-6cb6-4592-94c9-5a95246f0b99">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <entry name="serial">2d377db9-047f-4be1-a0cf-8189254def22</entry>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <entry name="uuid">2d377db9-047f-4be1-a0cf-8189254def22</entry>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/2d377db9-047f-4be1-a0cf-8189254def22_disk">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/2d377db9-047f-4be1-a0cf-8189254def22_disk.config">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:28:00:82"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <target dev="tap46923f1f-6c"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/2d377db9-047f-4be1-a0cf-8189254def22/console.log" append="off"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:55:01 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:55:01 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:55:01 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:55:01 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.682 221554 DEBUG nova.compute.manager [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Preparing to wait for external event network-vif-plugged-46923f1f-6cb6-4592-94c9-5a95246f0b99 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.683 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "2d377db9-047f-4be1-a0cf-8189254def22-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.683 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.684 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.685 221554 DEBUG nova.virt.libvirt.vif [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:54:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-984890616',display_name='tempest-tempest.common.compute-instance-984890616',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-984890616',id=77,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnxydFFT/bcqoM03GknAV6VVEv+uI9/M2thVO454uMQmK087L6GvpajnIifCsqOX+pET7jqX84pd3d5htElL3GbJr344TJNF3ti2SMNzp16z2rTGy4GU5WRv9em26Ul9Q==',key_name='tempest-keypair-1143543931',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-6f0xru8d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:54:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=2d377db9-047f-4be1-a0cf-8189254def22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.685 221554 DEBUG nova.network.os_vif_util [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.686 221554 DEBUG nova.network.os_vif_util [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:00:82,bridge_name='br-int',has_traffic_filtering=True,id=46923f1f-6cb6-4592-94c9-5a95246f0b99,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46923f1f-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.687 221554 DEBUG os_vif [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:00:82,bridge_name='br-int',has_traffic_filtering=True,id=46923f1f-6cb6-4592-94c9-5a95246f0b99,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46923f1f-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.688 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.689 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.689 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.698 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.699 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap46923f1f-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.699 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap46923f1f-6c, col_values=(('external_ids', {'iface-id': '46923f1f-6cb6-4592-94c9-5a95246f0b99', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:00:82', 'vm-uuid': '2d377db9-047f-4be1-a0cf-8189254def22'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:01 np0005603609 NetworkManager[49064]: <info>  [1769846101.7414] manager: (tap46923f1f-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.741 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.744 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.747 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:01 np0005603609 nova_compute[221550]: 2026-01-31 07:55:01.748 221554 INFO os_vif [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:00:82,bridge_name='br-int',has_traffic_filtering=True,id=46923f1f-6cb6-4592-94c9-5a95246f0b99,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46923f1f-6c')#033[00m
Jan 31 02:55:02 np0005603609 nova_compute[221550]: 2026-01-31 07:55:02.053 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:55:02 np0005603609 nova_compute[221550]: 2026-01-31 07:55:02.053 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:55:02 np0005603609 nova_compute[221550]: 2026-01-31 07:55:02.053 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:28:00:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:55:02 np0005603609 nova_compute[221550]: 2026-01-31 07:55:02.054 221554 INFO nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Using config drive#033[00m
Jan 31 02:55:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:02.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:03.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:03 np0005603609 nova_compute[221550]: 2026-01-31 07:55:03.384 221554 DEBUG nova.storage.rbd_utils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 2d377db9-047f-4be1-a0cf-8189254def22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:55:03 np0005603609 nova_compute[221550]: 2026-01-31 07:55:03.392 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:04 np0005603609 nova_compute[221550]: 2026-01-31 07:55:04.430 221554 INFO nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Creating config drive at /var/lib/nova/instances/2d377db9-047f-4be1-a0cf-8189254def22/disk.config#033[00m
Jan 31 02:55:04 np0005603609 nova_compute[221550]: 2026-01-31 07:55:04.440 221554 DEBUG oslo_concurrency.processutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d377db9-047f-4be1-a0cf-8189254def22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8lfpysce execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:04.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:04 np0005603609 nova_compute[221550]: 2026-01-31 07:55:04.568 221554 DEBUG oslo_concurrency.processutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d377db9-047f-4be1-a0cf-8189254def22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8lfpysce" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:04 np0005603609 nova_compute[221550]: 2026-01-31 07:55:04.630 221554 DEBUG nova.storage.rbd_utils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 2d377db9-047f-4be1-a0cf-8189254def22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:55:04 np0005603609 nova_compute[221550]: 2026-01-31 07:55:04.634 221554 DEBUG oslo_concurrency.processutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d377db9-047f-4be1-a0cf-8189254def22/disk.config 2d377db9-047f-4be1-a0cf-8189254def22_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:05.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:05 np0005603609 nova_compute[221550]: 2026-01-31 07:55:05.427 221554 DEBUG nova.network.neutron [req-0a015b4a-5736-40e6-8527-ac282c24e02a req-f7455f3a-4487-4337-9b2c-b66021415183 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updated VIF entry in instance network info cache for port 46923f1f-6cb6-4592-94c9-5a95246f0b99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:55:05 np0005603609 nova_compute[221550]: 2026-01-31 07:55:05.428 221554 DEBUG nova.network.neutron [req-0a015b4a-5736-40e6-8527-ac282c24e02a req-f7455f3a-4487-4337-9b2c-b66021415183 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updating instance_info_cache with network_info: [{"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:55:05 np0005603609 nova_compute[221550]: 2026-01-31 07:55:05.465 221554 DEBUG oslo_concurrency.lockutils [req-0a015b4a-5736-40e6-8527-ac282c24e02a req-f7455f3a-4487-4337-9b2c-b66021415183 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:55:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:06.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:06 np0005603609 nova_compute[221550]: 2026-01-31 07:55:06.741 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:07.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:07.490 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:07.490 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:07.490 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:07 np0005603609 nova_compute[221550]: 2026-01-31 07:55:07.912 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:08.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:08 np0005603609 nova_compute[221550]: 2026-01-31 07:55:08.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:08 np0005603609 nova_compute[221550]: 2026-01-31 07:55:08.727 221554 DEBUG oslo_concurrency.processutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d377db9-047f-4be1-a0cf-8189254def22/disk.config 2d377db9-047f-4be1-a0cf-8189254def22_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:08 np0005603609 nova_compute[221550]: 2026-01-31 07:55:08.728 221554 INFO nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Deleting local config drive /var/lib/nova/instances/2d377db9-047f-4be1-a0cf-8189254def22/disk.config because it was imported into RBD.#033[00m
Jan 31 02:55:08 np0005603609 kernel: tap46923f1f-6c: entered promiscuous mode
Jan 31 02:55:08 np0005603609 NetworkManager[49064]: <info>  [1769846108.7748] manager: (tap46923f1f-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Jan 31 02:55:08 np0005603609 ovn_controller[130359]: 2026-01-31T07:55:08Z|00246|binding|INFO|Claiming lport 46923f1f-6cb6-4592-94c9-5a95246f0b99 for this chassis.
Jan 31 02:55:08 np0005603609 ovn_controller[130359]: 2026-01-31T07:55:08Z|00247|binding|INFO|46923f1f-6cb6-4592-94c9-5a95246f0b99: Claiming fa:16:3e:28:00:82 10.100.0.13
Jan 31 02:55:08 np0005603609 nova_compute[221550]: 2026-01-31 07:55:08.778 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:08 np0005603609 nova_compute[221550]: 2026-01-31 07:55:08.781 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:08 np0005603609 nova_compute[221550]: 2026-01-31 07:55:08.783 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:08 np0005603609 systemd-udevd[250472]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:55:08 np0005603609 nova_compute[221550]: 2026-01-31 07:55:08.799 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:08 np0005603609 ovn_controller[130359]: 2026-01-31T07:55:08Z|00248|binding|INFO|Setting lport 46923f1f-6cb6-4592-94c9-5a95246f0b99 ovn-installed in OVS
Jan 31 02:55:08 np0005603609 ovn_controller[130359]: 2026-01-31T07:55:08Z|00249|binding|INFO|Setting lport 46923f1f-6cb6-4592-94c9-5a95246f0b99 up in Southbound
Jan 31 02:55:08 np0005603609 nova_compute[221550]: 2026-01-31 07:55:08.804 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:08 np0005603609 systemd-machined[190912]: New machine qemu-35-instance-0000004d.
Jan 31 02:55:08 np0005603609 NetworkManager[49064]: <info>  [1769846108.8092] device (tap46923f1f-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:55:08 np0005603609 NetworkManager[49064]: <info>  [1769846108.8101] device (tap46923f1f-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.812 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:00:82 10.100.0.13'], port_security=['fa:16:3e:28:00:82 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2d377db9-047f-4be1-a0cf-8189254def22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f473e048-f911-4a1f-850c-a5dd47d82771', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=46923f1f-6cb6-4592-94c9-5a95246f0b99) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.813 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 46923f1f-6cb6-4592-94c9-5a95246f0b99 in datapath 485494d9-5360-41c3-a10e-ef5098af0809 bound to our chassis#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.815 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809#033[00m
Jan 31 02:55:08 np0005603609 systemd[1]: Started Virtual Machine qemu-35-instance-0000004d.
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.823 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0d67988b-0d9d-4ef1-8a36-5eec76dc0a27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.824 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap485494d9-51 in ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.826 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap485494d9-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.826 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf371bb-7446-43e7-813f-4cae30a2521c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.826 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c5b05b-79d5-4f0f-9f05-9fd9cf49646b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.836 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[9373e0e3-7a66-4b9a-8dee-1b56e3c1ef10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.844 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[65bbd6a4-653b-4096-9dd8-29905b4795bc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.858 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[0f181a5c-d7ac-4533-8a16-7d9f815d4c42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:08 np0005603609 systemd-udevd[250477]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:55:08 np0005603609 NetworkManager[49064]: <info>  [1769846108.8636] manager: (tap485494d9-50): new Veth device (/org/freedesktop/NetworkManager/Devices/127)
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.863 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[489497db-9ae8-45fc-bdf7-1bf30f9de306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.883 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b301495f-a969-4981-97c2-1f361874c6bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.887 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d5798769-ed28-4a53-8efd-902cf868a9e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:08 np0005603609 NetworkManager[49064]: <info>  [1769846108.9000] device (tap485494d9-50): carrier: link connected
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.902 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[95f17991-7d7d-453b-8f3c-827747f556e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.911 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[becdb788-b73c-4ad1-932d-39ed656cdb1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634860, 'reachable_time': 37551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250508, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.919 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e2dfb469-60df-4f88-9517-83bbbefc4f50]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:4b05'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634860, 'tstamp': 634860}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250509, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.927 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c935ee5e-48ed-4915-876f-39f0a6cb661c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634860, 'reachable_time': 37551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250510, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.941 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8eea1ad9-6fdf-4cce-8d8b-3a3ca2b49332]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.973 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3a02ae48-daf6-4100-b038-991e3dda7404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.975 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.975 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.975 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:08 np0005603609 NetworkManager[49064]: <info>  [1769846108.9783] manager: (tap485494d9-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Jan 31 02:55:08 np0005603609 kernel: tap485494d9-50: entered promiscuous mode
Jan 31 02:55:08 np0005603609 nova_compute[221550]: 2026-01-31 07:55:08.983 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.990 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:08 np0005603609 ovn_controller[130359]: 2026-01-31T07:55:08Z|00250|binding|INFO|Releasing lport cbb039b2-15df-45bd-8800-81213ecc7011 from this chassis (sb_readonly=0)
Jan 31 02:55:08 np0005603609 nova_compute[221550]: 2026-01-31 07:55:08.992 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.994 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/485494d9-5360-41c3-a10e-ef5098af0809.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/485494d9-5360-41c3-a10e-ef5098af0809.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:55:08 np0005603609 nova_compute[221550]: 2026-01-31 07:55:08.995 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.995 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[539a6c2a-2789-4063-b6b4-09474fd92c7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.996 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-485494d9-5360-41c3-a10e-ef5098af0809
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/485494d9-5360-41c3-a10e-ef5098af0809.pid.haproxy
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 485494d9-5360-41c3-a10e-ef5098af0809
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:55:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:08.997 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'env', 'PROCESS_TAG=haproxy-485494d9-5360-41c3-a10e-ef5098af0809', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/485494d9-5360-41c3-a10e-ef5098af0809.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:55:09 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:55:09 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:55:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:09.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:09 np0005603609 podman[250556]: 2026-01-31 07:55:09.27055106 +0000 UTC m=+0.018236771 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:55:09 np0005603609 podman[250556]: 2026-01-31 07:55:09.554429514 +0000 UTC m=+0.302115205 container create e3afd40625e94d2ebca6f12b4df404ecbb94ea9b872e9f5a16c8dd8534ba0cf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:55:09 np0005603609 systemd[1]: Started libpod-conmon-e3afd40625e94d2ebca6f12b4df404ecbb94ea9b872e9f5a16c8dd8534ba0cf6.scope.
Jan 31 02:55:09 np0005603609 podman[250594]: 2026-01-31 07:55:09.640780096 +0000 UTC m=+0.054572197 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 02:55:09 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:55:09 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c93ad587cc72dd99ec328ec3962af54ce57e3ab9c96137d924b7ef4b11336013/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:55:09 np0005603609 nova_compute[221550]: 2026-01-31 07:55:09.699 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846109.698657, 2d377db9-047f-4be1-a0cf-8189254def22 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:55:09 np0005603609 nova_compute[221550]: 2026-01-31 07:55:09.699 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] VM Started (Lifecycle Event)#033[00m
Jan 31 02:55:09 np0005603609 podman[250556]: 2026-01-31 07:55:09.719410762 +0000 UTC m=+0.467096483 container init e3afd40625e94d2ebca6f12b4df404ecbb94ea9b872e9f5a16c8dd8534ba0cf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 02:55:09 np0005603609 podman[250556]: 2026-01-31 07:55:09.723428279 +0000 UTC m=+0.471113970 container start e3afd40625e94d2ebca6f12b4df404ecbb94ea9b872e9f5a16c8dd8534ba0cf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 02:55:09 np0005603609 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[250633]: [NOTICE]   (250646) : New worker (250648) forked
Jan 31 02:55:09 np0005603609 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[250633]: [NOTICE]   (250646) : Loading success.
Jan 31 02:55:09 np0005603609 podman[250591]: 2026-01-31 07:55:09.750249846 +0000 UTC m=+0.164552288 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 02:55:09 np0005603609 nova_compute[221550]: 2026-01-31 07:55:09.932 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:55:09 np0005603609 nova_compute[221550]: 2026-01-31 07:55:09.936 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846109.7002308, 2d377db9-047f-4be1-a0cf-8189254def22 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:55:09 np0005603609 nova_compute[221550]: 2026-01-31 07:55:09.936 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:55:09 np0005603609 nova_compute[221550]: 2026-01-31 07:55:09.974 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:55:09 np0005603609 nova_compute[221550]: 2026-01-31 07:55:09.979 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.008 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:55:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:10.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.683 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.684 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.684 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.721 221554 DEBUG nova.compute.manager [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received event network-vif-plugged-46923f1f-6cb6-4592-94c9-5a95246f0b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.722 221554 DEBUG oslo_concurrency.lockutils [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2d377db9-047f-4be1-a0cf-8189254def22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.722 221554 DEBUG oslo_concurrency.lockutils [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.722 221554 DEBUG oslo_concurrency.lockutils [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.722 221554 DEBUG nova.compute.manager [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Processing event network-vif-plugged-46923f1f-6cb6-4592-94c9-5a95246f0b99 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.722 221554 DEBUG nova.compute.manager [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received event network-vif-plugged-46923f1f-6cb6-4592-94c9-5a95246f0b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.723 221554 DEBUG oslo_concurrency.lockutils [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2d377db9-047f-4be1-a0cf-8189254def22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.723 221554 DEBUG oslo_concurrency.lockutils [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.723 221554 DEBUG oslo_concurrency.lockutils [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.723 221554 DEBUG nova.compute.manager [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] No waiting events found dispatching network-vif-plugged-46923f1f-6cb6-4592-94c9-5a95246f0b99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.723 221554 WARNING nova.compute.manager [req-2cb80b00-b386-462c-b41f-e1d0b5773299 req-c770b57d-c96f-4b61-a71e-4222c63719fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received unexpected event network-vif-plugged-46923f1f-6cb6-4592-94c9-5a95246f0b99 for instance with vm_state building and task_state spawning.
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.724 221554 DEBUG nova.compute.manager [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.726 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846110.7267559, 2d377db9-047f-4be1-a0cf-8189254def22 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.727 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] VM Resumed (Lifecycle Event)
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.728 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.731 221554 INFO nova.virt.libvirt.driver [-] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Instance spawned successfully.
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.731 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.771 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.774 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.810 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.829 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.829 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.829 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.830 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.830 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:55:10 np0005603609 nova_compute[221550]: 2026-01-31 07:55:10.831 221554 DEBUG nova.virt.libvirt.driver [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:55:11 np0005603609 nova_compute[221550]: 2026-01-31 07:55:11.016 221554 INFO nova.compute.manager [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Took 18.83 seconds to spawn the instance on the hypervisor.
Jan 31 02:55:11 np0005603609 nova_compute[221550]: 2026-01-31 07:55:11.016 221554 DEBUG nova.compute.manager [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:55:11 np0005603609 nova_compute[221550]: 2026-01-31 07:55:11.239 221554 INFO nova.compute.manager [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Took 20.39 seconds to build instance.
Jan 31 02:55:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:11.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:11 np0005603609 nova_compute[221550]: 2026-01-31 07:55:11.281 221554 DEBUG oslo_concurrency.lockutils [None req-73398a9f-ecac-42d8-a1e3-f5875eaec642 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:55:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:11 np0005603609 nova_compute[221550]: 2026-01-31 07:55:11.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:55:11 np0005603609 nova_compute[221550]: 2026-01-31 07:55:11.743 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:55:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:55:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:12.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:55:12 np0005603609 nova_compute[221550]: 2026-01-31 07:55:12.914 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:55:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:13.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:13 np0005603609 nova_compute[221550]: 2026-01-31 07:55:13.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:55:13 np0005603609 nova_compute[221550]: 2026-01-31 07:55:13.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 02:55:13 np0005603609 nova_compute[221550]: 2026-01-31 07:55:13.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:55:13 np0005603609 nova_compute[221550]: 2026-01-31 07:55:13.699 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:55:13 np0005603609 nova_compute[221550]: 2026-01-31 07:55:13.699 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:55:13 np0005603609 nova_compute[221550]: 2026-01-31 07:55:13.700 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:55:13 np0005603609 nova_compute[221550]: 2026-01-31 07:55:13.700 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 02:55:13 np0005603609 nova_compute[221550]: 2026-01-31 07:55:13.700 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:55:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:14.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:15.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:16.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:16 np0005603609 nova_compute[221550]: 2026-01-31 07:55:16.746 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:55:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 02:55:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:17.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 02:55:17 np0005603609 nova_compute[221550]: 2026-01-31 07:55:17.918 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:55:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:18.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:19.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:20 np0005603609 nova_compute[221550]: 2026-01-31 07:55:20.124 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:55:20 np0005603609 NetworkManager[49064]: <info>  [1769846120.1267] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Jan 31 02:55:20 np0005603609 NetworkManager[49064]: <info>  [1769846120.1274] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Jan 31 02:55:20 np0005603609 nova_compute[221550]: 2026-01-31 07:55:20.154 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:55:20 np0005603609 ovn_controller[130359]: 2026-01-31T07:55:20Z|00251|binding|INFO|Releasing lport cbb039b2-15df-45bd-8800-81213ecc7011 from this chassis (sb_readonly=0)
Jan 31 02:55:20 np0005603609 nova_compute[221550]: 2026-01-31 07:55:20.180 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:55:20 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 02:55:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:20.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:21.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:21 np0005603609 nova_compute[221550]: 2026-01-31 07:55:21.749 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:55:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:22.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:22 np0005603609 nova_compute[221550]: 2026-01-31 07:55:22.921 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:55:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:23.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:24 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 02:55:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:55:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:24.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:55:24 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 11.780411720s
Jan 31 02:55:24 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 11.780411720s
Jan 31 02:55:24 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.780605316s, txc = 0x55f2009dd200
Jan 31 02:55:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).paxos(paxos updating c 3264..3935) lease_timeout -- calling new election
Jan 31 02:55:24 np0005603609 ceph-mon[81667]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 31 02:55:24 np0005603609 ceph-mon[81667]: paxos.2).electionLogic(34) init, last seen epoch 34
Jan 31 02:55:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:55:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:55:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:55:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:55:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:25.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:55:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:55:25 np0005603609 nova_compute[221550]: 2026-01-31 07:55:25.504 221554 DEBUG nova.compute.manager [req-5b56640e-d836-4e7d-9a3b-9b4bd5181653 req-3a344564-e470-4f81-98f9-884052040852 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received event network-changed-46923f1f-6cb6-4592-94c9-5a95246f0b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:55:25 np0005603609 nova_compute[221550]: 2026-01-31 07:55:25.505 221554 DEBUG nova.compute.manager [req-5b56640e-d836-4e7d-9a3b-9b4bd5181653 req-3a344564-e470-4f81-98f9-884052040852 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Refreshing instance network info cache due to event network-changed-46923f1f-6cb6-4592-94c9-5a95246f0b99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 02:55:25 np0005603609 nova_compute[221550]: 2026-01-31 07:55:25.506 221554 DEBUG oslo_concurrency.lockutils [req-5b56640e-d836-4e7d-9a3b-9b4bd5181653 req-3a344564-e470-4f81-98f9-884052040852 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:55:25 np0005603609 nova_compute[221550]: 2026-01-31 07:55:25.506 221554 DEBUG oslo_concurrency.lockutils [req-5b56640e-d836-4e7d-9a3b-9b4bd5181653 req-3a344564-e470-4f81-98f9-884052040852 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:55:25 np0005603609 nova_compute[221550]: 2026-01-31 07:55:25.507 221554 DEBUG nova.network.neutron [req-5b56640e-d836-4e7d-9a3b-9b4bd5181653 req-3a344564-e470-4f81-98f9-884052040852 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Refreshing network info cache for port 46923f1f-6cb6-4592-94c9-5a95246f0b99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 02:55:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:55:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:55:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:55:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:55:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:26.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:26 np0005603609 nova_compute[221550]: 2026-01-31 07:55:26.751 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:55:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:55:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:55:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:27.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:55:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:55:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:55:27 np0005603609 nova_compute[221550]: 2026-01-31 07:55:27.922 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:28 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 02:55:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:55:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:55:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:28.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:55:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:55:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:55:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:55:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:29.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:55:29 np0005603609 ceph-mon[81667]: paxos.2).electionLogic(35) init, last seen epoch 35, mid-election, bumping
Jan 31 02:55:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:55:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:55:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:55:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:55:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:30.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:30 np0005603609 ceph-mon[81667]: mon.compute-1 calling monitor election
Jan 31 02:55:30 np0005603609 ceph-mon[81667]: mon.compute-2 calling monitor election
Jan 31 02:55:30 np0005603609 ceph-mon[81667]: mon.compute-0 calling monitor election
Jan 31 02:55:30 np0005603609 ceph-mon[81667]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 02:55:30 np0005603609 ceph-mon[81667]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Jan 31 02:55:30 np0005603609 ceph-mon[81667]: Cluster is now healthy
Jan 31 02:55:30 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 02:55:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:55:30 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2519778568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:55:31 np0005603609 nova_compute[221550]: 2026-01-31 07:55:31.011 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 17.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:31 np0005603609 ovn_controller[130359]: 2026-01-31T07:55:31Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:00:82 10.100.0.13
Jan 31 02:55:31 np0005603609 ovn_controller[130359]: 2026-01-31T07:55:31Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:00:82 10.100.0.13
Jan 31 02:55:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:31.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:31 np0005603609 nova_compute[221550]: 2026-01-31 07:55:31.585 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:55:31 np0005603609 nova_compute[221550]: 2026-01-31 07:55:31.585 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:55:31 np0005603609 nova_compute[221550]: 2026-01-31 07:55:31.717 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:55:31 np0005603609 nova_compute[221550]: 2026-01-31 07:55:31.718 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4472MB free_disk=20.90493392944336GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:55:31 np0005603609 nova_compute[221550]: 2026-01-31 07:55:31.719 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:31 np0005603609 nova_compute[221550]: 2026-01-31 07:55:31.719 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:31 np0005603609 nova_compute[221550]: 2026-01-31 07:55:31.795 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.044 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 2d377db9-047f-4be1-a0cf-8189254def22 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.182 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 3f1cc0eb-2641-4574-952f-f02264260ce6 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.182 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.183 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.226 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "3f1cc0eb-2641-4574-952f-f02264260ce6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.226 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.256 221554 DEBUG nova.compute.manager [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.274 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.401 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:55:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:32.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.650 221554 DEBUG nova.network.neutron [req-5b56640e-d836-4e7d-9a3b-9b4bd5181653 req-3a344564-e470-4f81-98f9-884052040852 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updated VIF entry in instance network info cache for port 46923f1f-6cb6-4592-94c9-5a95246f0b99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.651 221554 DEBUG nova.network.neutron [req-5b56640e-d836-4e7d-9a3b-9b4bd5181653 req-3a344564-e470-4f81-98f9-884052040852 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updating instance_info_cache with network_info: [{"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:55:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:55:32 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1141855199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.672 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.677 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.735 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.759 221554 DEBUG oslo_concurrency.lockutils [req-5b56640e-d836-4e7d-9a3b-9b4bd5181653 req-3a344564-e470-4f81-98f9-884052040852 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.889 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.889 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.891 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.902 221554 DEBUG nova.virt.hardware [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.903 221554 INFO nova.compute.claims [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:55:32 np0005603609 nova_compute[221550]: 2026-01-31 07:55:32.924 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:33 np0005603609 nova_compute[221550]: 2026-01-31 07:55:33.197 221554 DEBUG oslo_concurrency.processutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:33.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:55:33 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/407827562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:55:33 np0005603609 nova_compute[221550]: 2026-01-31 07:55:33.634 221554 DEBUG oslo_concurrency.processutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:33 np0005603609 nova_compute[221550]: 2026-01-31 07:55:33.638 221554 DEBUG nova.compute.provider_tree [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:55:33 np0005603609 nova_compute[221550]: 2026-01-31 07:55:33.679 221554 DEBUG nova.scheduler.client.report [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:55:33 np0005603609 nova_compute[221550]: 2026-01-31 07:55:33.770 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:33 np0005603609 nova_compute[221550]: 2026-01-31 07:55:33.770 221554 DEBUG nova.compute.manager [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:55:33 np0005603609 nova_compute[221550]: 2026-01-31 07:55:33.885 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:33 np0005603609 nova_compute[221550]: 2026-01-31 07:55:33.886 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:33 np0005603609 nova_compute[221550]: 2026-01-31 07:55:33.895 221554 DEBUG nova.compute.manager [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:55:33 np0005603609 nova_compute[221550]: 2026-01-31 07:55:33.895 221554 DEBUG nova.network.neutron [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:55:33 np0005603609 nova_compute[221550]: 2026-01-31 07:55:33.937 221554 INFO nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:55:34 np0005603609 nova_compute[221550]: 2026-01-31 07:55:34.012 221554 DEBUG nova.compute.manager [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:55:34 np0005603609 nova_compute[221550]: 2026-01-31 07:55:34.280 221554 DEBUG nova.compute.manager [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:55:34 np0005603609 nova_compute[221550]: 2026-01-31 07:55:34.281 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:55:34 np0005603609 nova_compute[221550]: 2026-01-31 07:55:34.282 221554 INFO nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Creating image(s)#033[00m
Jan 31 02:55:34 np0005603609 nova_compute[221550]: 2026-01-31 07:55:34.305 221554 DEBUG nova.storage.rbd_utils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 3f1cc0eb-2641-4574-952f-f02264260ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:55:34 np0005603609 nova_compute[221550]: 2026-01-31 07:55:34.332 221554 DEBUG nova.storage.rbd_utils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 3f1cc0eb-2641-4574-952f-f02264260ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:55:34 np0005603609 nova_compute[221550]: 2026-01-31 07:55:34.368 221554 DEBUG nova.storage.rbd_utils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 3f1cc0eb-2641-4574-952f-f02264260ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:55:34 np0005603609 nova_compute[221550]: 2026-01-31 07:55:34.373 221554 DEBUG oslo_concurrency.processutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:34 np0005603609 nova_compute[221550]: 2026-01-31 07:55:34.428 221554 DEBUG oslo_concurrency.processutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:55:34 np0005603609 nova_compute[221550]: 2026-01-31 07:55:34.429 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:55:34 np0005603609 nova_compute[221550]: 2026-01-31 07:55:34.430 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:55:34 np0005603609 nova_compute[221550]: 2026-01-31 07:55:34.430 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:55:34 np0005603609 nova_compute[221550]: 2026-01-31 07:55:34.465 221554 DEBUG nova.storage.rbd_utils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 3f1cc0eb-2641-4574-952f-f02264260ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:55:34 np0005603609 nova_compute[221550]: 2026-01-31 07:55:34.472 221554 DEBUG oslo_concurrency.processutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 3f1cc0eb-2641-4574-952f-f02264260ce6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:55:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:55:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:34.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:55:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:35 np0005603609 nova_compute[221550]: 2026-01-31 07:55:35.103 221554 DEBUG nova.policy [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1e1c5eef3d264666bc90735dd338d82a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a8cbd6cc22654dfab04487522a63426c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 02:55:35 np0005603609 nova_compute[221550]: 2026-01-31 07:55:35.242 221554 DEBUG oslo_concurrency.processutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 3f1cc0eb-2641-4574-952f-f02264260ce6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.770s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:55:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:35.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:35 np0005603609 nova_compute[221550]: 2026-01-31 07:55:35.300 221554 DEBUG nova.storage.rbd_utils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] resizing rbd image 3f1cc0eb-2641-4574-952f-f02264260ce6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 02:55:35 np0005603609 nova_compute[221550]: 2026-01-31 07:55:35.451 221554 DEBUG nova.objects.instance [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'migration_context' on Instance uuid 3f1cc0eb-2641-4574-952f-f02264260ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:55:35 np0005603609 nova_compute[221550]: 2026-01-31 07:55:35.555 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 02:55:35 np0005603609 nova_compute[221550]: 2026-01-31 07:55:35.556 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Ensure instance console log exists: /var/lib/nova/instances/3f1cc0eb-2641-4574-952f-f02264260ce6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 02:55:35 np0005603609 nova_compute[221550]: 2026-01-31 07:55:35.557 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:55:35 np0005603609 nova_compute[221550]: 2026-01-31 07:55:35.557 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:55:35 np0005603609 nova_compute[221550]: 2026-01-31 07:55:35.558 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:55:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:55:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:36.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:55:36 np0005603609 nova_compute[221550]: 2026-01-31 07:55:36.797 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:55:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:37.152 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 02:55:37 np0005603609 nova_compute[221550]: 2026-01-31 07:55:37.153 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:55:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:37.154 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 02:55:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:55:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:37.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:55:37 np0005603609 nova_compute[221550]: 2026-01-31 07:55:37.926 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:55:37 np0005603609 nova_compute[221550]: 2026-01-31 07:55:37.971 221554 DEBUG nova.network.neutron [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Successfully created port: f08f53c3-9326-4b07-bdcd-2c3a3a06da0d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 02:55:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:55:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:38.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:55:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:39.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:40 np0005603609 podman[250898]: 2026-01-31 07:55:40.158707097 +0000 UTC m=+0.043227213 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:55:40 np0005603609 podman[250897]: 2026-01-31 07:55:40.179716773 +0000 UTC m=+0.065937441 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:55:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:40.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:41.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:41 np0005603609 nova_compute[221550]: 2026-01-31 07:55:41.307 221554 DEBUG nova.network.neutron [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Successfully updated port: f08f53c3-9326-4b07-bdcd-2c3a3a06da0d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 02:55:41 np0005603609 nova_compute[221550]: 2026-01-31 07:55:41.366 221554 DEBUG nova.compute.manager [req-bf21ec5e-ab60-4388-9a09-c51738c4da0b req-0237d60b-2300-4311-9c47-160e0e02f140 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received event network-changed-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:55:41 np0005603609 nova_compute[221550]: 2026-01-31 07:55:41.367 221554 DEBUG nova.compute.manager [req-bf21ec5e-ab60-4388-9a09-c51738c4da0b req-0237d60b-2300-4311-9c47-160e0e02f140 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Refreshing instance network info cache due to event network-changed-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 02:55:41 np0005603609 nova_compute[221550]: 2026-01-31 07:55:41.367 221554 DEBUG oslo_concurrency.lockutils [req-bf21ec5e-ab60-4388-9a09-c51738c4da0b req-0237d60b-2300-4311-9c47-160e0e02f140 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:55:41 np0005603609 nova_compute[221550]: 2026-01-31 07:55:41.367 221554 DEBUG oslo_concurrency.lockutils [req-bf21ec5e-ab60-4388-9a09-c51738c4da0b req-0237d60b-2300-4311-9c47-160e0e02f140 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:55:41 np0005603609 nova_compute[221550]: 2026-01-31 07:55:41.367 221554 DEBUG nova.network.neutron [req-bf21ec5e-ab60-4388-9a09-c51738c4da0b req-0237d60b-2300-4311-9c47-160e0e02f140 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Refreshing network info cache for port f08f53c3-9326-4b07-bdcd-2c3a3a06da0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 02:55:41 np0005603609 nova_compute[221550]: 2026-01-31 07:55:41.401 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:55:41 np0005603609 nova_compute[221550]: 2026-01-31 07:55:41.799 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:55:42 np0005603609 nova_compute[221550]: 2026-01-31 07:55:42.018 221554 DEBUG nova.network.neutron [req-bf21ec5e-ab60-4388-9a09-c51738c4da0b req-0237d60b-2300-4311-9c47-160e0e02f140 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 02:55:42 np0005603609 nova_compute[221550]: 2026-01-31 07:55:42.511 221554 DEBUG nova.network.neutron [req-bf21ec5e-ab60-4388-9a09-c51738c4da0b req-0237d60b-2300-4311-9c47-160e0e02f140 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:55:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:55:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:42.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:55:42 np0005603609 nova_compute[221550]: 2026-01-31 07:55:42.620 221554 DEBUG oslo_concurrency.lockutils [req-bf21ec5e-ab60-4388-9a09-c51738c4da0b req-0237d60b-2300-4311-9c47-160e0e02f140 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:55:42 np0005603609 nova_compute[221550]: 2026-01-31 07:55:42.622 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquired lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:55:42 np0005603609 nova_compute[221550]: 2026-01-31 07:55:42.623 221554 DEBUG nova.network.neutron [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 02:55:42 np0005603609 nova_compute[221550]: 2026-01-31 07:55:42.835 221554 DEBUG nova.network.neutron [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 02:55:42 np0005603609 nova_compute[221550]: 2026-01-31 07:55:42.927 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:55:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:43.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.037 221554 DEBUG nova.network.neutron [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Updating instance_info_cache with network_info: [{"id": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "address": "fa:16:3e:26:be:39", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08f53c3-93", "ovs_interfaceid": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.127 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Releasing lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.127 221554 DEBUG nova.compute.manager [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Instance network_info: |[{"id": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "address": "fa:16:3e:26:be:39", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08f53c3-93", "ovs_interfaceid": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.132 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Start _get_guest_xml network_info=[{"id": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "address": "fa:16:3e:26:be:39", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08f53c3-93", "ovs_interfaceid": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.136 221554 WARNING nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.143 221554 DEBUG nova.virt.libvirt.host [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.144 221554 DEBUG nova.virt.libvirt.host [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.148 221554 DEBUG nova.virt.libvirt.host [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.149 221554 DEBUG nova.virt.libvirt.host [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.150 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.150 221554 DEBUG nova.virt.hardware [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.151 221554 DEBUG nova.virt.hardware [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.151 221554 DEBUG nova.virt.hardware [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.151 221554 DEBUG nova.virt.hardware [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.151 221554 DEBUG nova.virt.hardware [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.151 221554 DEBUG nova.virt.hardware [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.151 221554 DEBUG nova.virt.hardware [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.151 221554 DEBUG nova.virt.hardware [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.152 221554 DEBUG nova.virt.hardware [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.152 221554 DEBUG nova.virt.hardware [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.152 221554 DEBUG nova.virt.hardware [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.154 221554 DEBUG oslo_concurrency.processutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:55:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1164178979' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.555 221554 DEBUG oslo_concurrency.processutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.595 221554 DEBUG nova.storage.rbd_utils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 3f1cc0eb-2641-4574-952f-f02264260ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:55:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:55:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:44.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:55:44 np0005603609 nova_compute[221550]: 2026-01-31 07:55:44.603 221554 DEBUG oslo_concurrency.processutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:55:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2371626794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.059 221554 DEBUG oslo_concurrency.processutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.062 221554 DEBUG nova.virt.libvirt.vif [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:55:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2097858313',display_name='tempest-tempest.common.compute-instance-2097858313',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2097858313',id=80,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnxydFFT/bcqoM03GknAV6VVEv+uI9/M2thVO454uMQmK087L6GvpajnIifCsqOX+pET7jqX84pd3d5htElL3GbJr344TJNF3ti2SMNzp16z2rTGy4GU5WRv9em26Ul9Q==',key_name='tempest-keypair-1143543931',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-c1ud7qy9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:55:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=3f1cc0eb-2641-4574-952f-f02264260ce6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "address": "fa:16:3e:26:be:39", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08f53c3-93", "ovs_interfaceid": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.063 221554 DEBUG nova.network.os_vif_util [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "address": "fa:16:3e:26:be:39", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08f53c3-93", "ovs_interfaceid": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.064 221554 DEBUG nova.network.os_vif_util [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:be:39,bridge_name='br-int',has_traffic_filtering=True,id=f08f53c3-9326-4b07-bdcd-2c3a3a06da0d,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf08f53c3-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.067 221554 DEBUG nova.objects.instance [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f1cc0eb-2641-4574-952f-f02264260ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.089 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  <uuid>3f1cc0eb-2641-4574-952f-f02264260ce6</uuid>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  <name>instance-00000050</name>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <nova:name>tempest-tempest.common.compute-instance-2097858313</nova:name>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:55:44</nova:creationTime>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        <nova:port uuid="f08f53c3-9326-4b07-bdcd-2c3a3a06da0d">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <entry name="serial">3f1cc0eb-2641-4574-952f-f02264260ce6</entry>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <entry name="uuid">3f1cc0eb-2641-4574-952f-f02264260ce6</entry>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/3f1cc0eb-2641-4574-952f-f02264260ce6_disk">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/3f1cc0eb-2641-4574-952f-f02264260ce6_disk.config">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:26:be:39"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <target dev="tapf08f53c3-93"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/3f1cc0eb-2641-4574-952f-f02264260ce6/console.log" append="off"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:55:45 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:55:45 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:55:45 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:55:45 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.091 221554 DEBUG nova.compute.manager [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Preparing to wait for external event network-vif-plugged-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.092 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.092 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.093 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.094 221554 DEBUG nova.virt.libvirt.vif [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:55:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2097858313',display_name='tempest-tempest.common.compute-instance-2097858313',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2097858313',id=80,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnxydFFT/bcqoM03GknAV6VVEv+uI9/M2thVO454uMQmK087L6GvpajnIifCsqOX+pET7jqX84pd3d5htElL3GbJr344TJNF3ti2SMNzp16z2rTGy4GU5WRv9em26Ul9Q==',key_name='tempest-keypair-1143543931',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-c1ud7qy9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:55:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=3f1cc0eb-2641-4574-952f-f02264260ce6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "address": "fa:16:3e:26:be:39", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08f53c3-93", "ovs_interfaceid": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.095 221554 DEBUG nova.network.os_vif_util [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "address": "fa:16:3e:26:be:39", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08f53c3-93", "ovs_interfaceid": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.096 221554 DEBUG nova.network.os_vif_util [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:be:39,bridge_name='br-int',has_traffic_filtering=True,id=f08f53c3-9326-4b07-bdcd-2c3a3a06da0d,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf08f53c3-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.097 221554 DEBUG os_vif [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:be:39,bridge_name='br-int',has_traffic_filtering=True,id=f08f53c3-9326-4b07-bdcd-2c3a3a06da0d,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf08f53c3-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.098 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.099 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.099 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.103 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.103 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf08f53c3-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.104 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf08f53c3-93, col_values=(('external_ids', {'iface-id': 'f08f53c3-9326-4b07-bdcd-2c3a3a06da0d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:be:39', 'vm-uuid': '3f1cc0eb-2641-4574-952f-f02264260ce6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:45 np0005603609 NetworkManager[49064]: <info>  [1769846145.1065] manager: (tapf08f53c3-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.110 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.113 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.114 221554 INFO os_vif [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:be:39,bridge_name='br-int',has_traffic_filtering=True,id=f08f53c3-9326-4b07-bdcd-2c3a3a06da0d,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf08f53c3-93')#033[00m
Jan 31 02:55:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:45.155 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.271 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.272 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.272 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:26:be:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.273 221554 INFO nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Using config drive#033[00m
Jan 31 02:55:45 np0005603609 nova_compute[221550]: 2026-01-31 07:55:45.297 221554 DEBUG nova.storage.rbd_utils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 3f1cc0eb-2641-4574-952f-f02264260ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:55:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:45.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:46 np0005603609 nova_compute[221550]: 2026-01-31 07:55:46.308 221554 INFO nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Creating config drive at /var/lib/nova/instances/3f1cc0eb-2641-4574-952f-f02264260ce6/disk.config#033[00m
Jan 31 02:55:46 np0005603609 nova_compute[221550]: 2026-01-31 07:55:46.315 221554 DEBUG oslo_concurrency.processutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3f1cc0eb-2641-4574-952f-f02264260ce6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpb47x0izc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:46 np0005603609 nova_compute[221550]: 2026-01-31 07:55:46.435 221554 DEBUG oslo_concurrency.processutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3f1cc0eb-2641-4574-952f-f02264260ce6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpb47x0izc" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:46 np0005603609 nova_compute[221550]: 2026-01-31 07:55:46.461 221554 DEBUG nova.storage.rbd_utils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] rbd image 3f1cc0eb-2641-4574-952f-f02264260ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:55:46 np0005603609 nova_compute[221550]: 2026-01-31 07:55:46.464 221554 DEBUG oslo_concurrency.processutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3f1cc0eb-2641-4574-952f-f02264260ce6/disk.config 3f1cc0eb-2641-4574-952f-f02264260ce6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:46.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:46 np0005603609 nova_compute[221550]: 2026-01-31 07:55:46.806 221554 DEBUG oslo_concurrency.processutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3f1cc0eb-2641-4574-952f-f02264260ce6/disk.config 3f1cc0eb-2641-4574-952f-f02264260ce6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:46 np0005603609 nova_compute[221550]: 2026-01-31 07:55:46.808 221554 INFO nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Deleting local config drive /var/lib/nova/instances/3f1cc0eb-2641-4574-952f-f02264260ce6/disk.config because it was imported into RBD.#033[00m
Jan 31 02:55:46 np0005603609 kernel: tapf08f53c3-93: entered promiscuous mode
Jan 31 02:55:46 np0005603609 NetworkManager[49064]: <info>  [1769846146.8606] manager: (tapf08f53c3-93): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Jan 31 02:55:46 np0005603609 nova_compute[221550]: 2026-01-31 07:55:46.863 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:46 np0005603609 ovn_controller[130359]: 2026-01-31T07:55:46Z|00252|binding|INFO|Claiming lport f08f53c3-9326-4b07-bdcd-2c3a3a06da0d for this chassis.
Jan 31 02:55:46 np0005603609 ovn_controller[130359]: 2026-01-31T07:55:46Z|00253|binding|INFO|f08f53c3-9326-4b07-bdcd-2c3a3a06da0d: Claiming fa:16:3e:26:be:39 10.100.0.12
Jan 31 02:55:46 np0005603609 ovn_controller[130359]: 2026-01-31T07:55:46Z|00254|binding|INFO|Setting lport f08f53c3-9326-4b07-bdcd-2c3a3a06da0d ovn-installed in OVS
Jan 31 02:55:46 np0005603609 nova_compute[221550]: 2026-01-31 07:55:46.870 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:46 np0005603609 nova_compute[221550]: 2026-01-31 07:55:46.872 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:46 np0005603609 systemd-machined[190912]: New machine qemu-36-instance-00000050.
Jan 31 02:55:46 np0005603609 systemd-udevd[251076]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:55:46 np0005603609 systemd[1]: Started Virtual Machine qemu-36-instance-00000050.
Jan 31 02:55:46 np0005603609 NetworkManager[49064]: <info>  [1769846146.9122] device (tapf08f53c3-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:55:46 np0005603609 NetworkManager[49064]: <info>  [1769846146.9128] device (tapf08f53c3-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:55:47 np0005603609 ovn_controller[130359]: 2026-01-31T07:55:47Z|00255|binding|INFO|Releasing lport cbb039b2-15df-45bd-8800-81213ecc7011 from this chassis (sb_readonly=0)
Jan 31 02:55:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:47.210 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:be:39 10.100.0.12'], port_security=['fa:16:3e:26:be:39 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3f1cc0eb-2641-4574-952f-f02264260ce6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f473e048-f911-4a1f-850c-a5dd47d82771', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=f08f53c3-9326-4b07-bdcd-2c3a3a06da0d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:55:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:47.211 140058 INFO neutron.agent.ovn.metadata.agent [-] Port f08f53c3-9326-4b07-bdcd-2c3a3a06da0d in datapath 485494d9-5360-41c3-a10e-ef5098af0809 bound to our chassis#033[00m
Jan 31 02:55:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:47.213 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809#033[00m
Jan 31 02:55:47 np0005603609 ovn_controller[130359]: 2026-01-31T07:55:47Z|00256|binding|INFO|Setting lport f08f53c3-9326-4b07-bdcd-2c3a3a06da0d up in Southbound
Jan 31 02:55:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:47.231 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d812c16b-ee9f-446a-b589-aa4294d3b16a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:47 np0005603609 nova_compute[221550]: 2026-01-31 07:55:47.233 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:47.250 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[4e963b8c-bf56-4317-803d-75f0b6535bbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:47.253 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e00686-3c50-4eee-a826-02c060ff1cc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:47.271 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c40c0d1e-0342-4b95-9556-9a251c6c6955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:47.286 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2a98377e-1c72-43ae-98c2-bffdfcda27eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634860, 'reachable_time': 37551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251090, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:47.300 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[62777594-7b9d-4bb2-b0eb-b557c977e3b5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634865, 'tstamp': 634865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251091, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634867, 'tstamp': 634867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251091, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:47.302 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:47 np0005603609 nova_compute[221550]: 2026-01-31 07:55:47.304 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:47.305 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:47.305 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:55:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:47.306 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:55:47.306 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:55:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:55:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:47.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:55:47 np0005603609 nova_compute[221550]: 2026-01-31 07:55:47.533 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846147.5330908, 3f1cc0eb-2641-4574-952f-f02264260ce6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:55:47 np0005603609 nova_compute[221550]: 2026-01-31 07:55:47.534 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] VM Started (Lifecycle Event)#033[00m
Jan 31 02:55:47 np0005603609 nova_compute[221550]: 2026-01-31 07:55:47.591 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:55:47 np0005603609 nova_compute[221550]: 2026-01-31 07:55:47.595 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846147.5332506, 3f1cc0eb-2641-4574-952f-f02264260ce6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:55:47 np0005603609 nova_compute[221550]: 2026-01-31 07:55:47.595 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:55:47 np0005603609 nova_compute[221550]: 2026-01-31 07:55:47.669 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:55:47 np0005603609 nova_compute[221550]: 2026-01-31 07:55:47.673 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:55:47 np0005603609 nova_compute[221550]: 2026-01-31 07:55:47.929 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:48 np0005603609 nova_compute[221550]: 2026-01-31 07:55:48.158 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:55:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:55:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:48.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:55:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:49.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:50 np0005603609 nova_compute[221550]: 2026-01-31 07:55:50.107 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:55:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:50.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:55:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:55:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:51.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:55:51 np0005603609 nova_compute[221550]: 2026-01-31 07:55:51.621 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:52.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:52 np0005603609 ovn_controller[130359]: 2026-01-31T07:55:52Z|00257|binding|INFO|Releasing lport cbb039b2-15df-45bd-8800-81213ecc7011 from this chassis (sb_readonly=0)
Jan 31 02:55:52 np0005603609 nova_compute[221550]: 2026-01-31 07:55:52.830 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:52 np0005603609 nova_compute[221550]: 2026-01-31 07:55:52.930 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.116 221554 DEBUG nova.compute.manager [req-5e1064d3-cfc2-47f0-b1fc-bc517d8387cb req-26b34345-33e9-4c3c-82bf-65f4949272ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received event network-vif-plugged-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.116 221554 DEBUG oslo_concurrency.lockutils [req-5e1064d3-cfc2-47f0-b1fc-bc517d8387cb req-26b34345-33e9-4c3c-82bf-65f4949272ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.117 221554 DEBUG oslo_concurrency.lockutils [req-5e1064d3-cfc2-47f0-b1fc-bc517d8387cb req-26b34345-33e9-4c3c-82bf-65f4949272ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.117 221554 DEBUG oslo_concurrency.lockutils [req-5e1064d3-cfc2-47f0-b1fc-bc517d8387cb req-26b34345-33e9-4c3c-82bf-65f4949272ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.117 221554 DEBUG nova.compute.manager [req-5e1064d3-cfc2-47f0-b1fc-bc517d8387cb req-26b34345-33e9-4c3c-82bf-65f4949272ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Processing event network-vif-plugged-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.117 221554 DEBUG nova.compute.manager [req-5e1064d3-cfc2-47f0-b1fc-bc517d8387cb req-26b34345-33e9-4c3c-82bf-65f4949272ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received event network-vif-plugged-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.117 221554 DEBUG oslo_concurrency.lockutils [req-5e1064d3-cfc2-47f0-b1fc-bc517d8387cb req-26b34345-33e9-4c3c-82bf-65f4949272ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.118 221554 DEBUG oslo_concurrency.lockutils [req-5e1064d3-cfc2-47f0-b1fc-bc517d8387cb req-26b34345-33e9-4c3c-82bf-65f4949272ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.118 221554 DEBUG oslo_concurrency.lockutils [req-5e1064d3-cfc2-47f0-b1fc-bc517d8387cb req-26b34345-33e9-4c3c-82bf-65f4949272ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.118 221554 DEBUG nova.compute.manager [req-5e1064d3-cfc2-47f0-b1fc-bc517d8387cb req-26b34345-33e9-4c3c-82bf-65f4949272ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] No waiting events found dispatching network-vif-plugged-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.118 221554 WARNING nova.compute.manager [req-5e1064d3-cfc2-47f0-b1fc-bc517d8387cb req-26b34345-33e9-4c3c-82bf-65f4949272ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received unexpected event network-vif-plugged-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d for instance with vm_state building and task_state spawning.#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.119 221554 DEBUG nova.compute.manager [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.122 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846153.1222014, 3f1cc0eb-2641-4574-952f-f02264260ce6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.122 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.124 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.127 221554 INFO nova.virt.libvirt.driver [-] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Instance spawned successfully.#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.128 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:55:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:55:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:53.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.518 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.521 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.522 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.522 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.523 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.523 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.524 221554 DEBUG nova.virt.libvirt.driver [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.530 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:55:53 np0005603609 nova_compute[221550]: 2026-01-31 07:55:53.678 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:55:54 np0005603609 nova_compute[221550]: 2026-01-31 07:55:54.249 221554 INFO nova.compute.manager [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Took 19.97 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:55:54 np0005603609 nova_compute[221550]: 2026-01-31 07:55:54.250 221554 DEBUG nova.compute.manager [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:55:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:54.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:54 np0005603609 nova_compute[221550]: 2026-01-31 07:55:54.733 221554 INFO nova.compute.manager [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Took 22.37 seconds to build instance.#033[00m
Jan 31 02:55:54 np0005603609 nova_compute[221550]: 2026-01-31 07:55:54.951 221554 DEBUG oslo_concurrency.lockutils [None req-a36f60ae-055a-4c6b-8670-e9808612bace 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:55 np0005603609 nova_compute[221550]: 2026-01-31 07:55:55.111 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:55:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:55.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:55:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:55:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:56.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:55:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:55:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:57.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:55:57 np0005603609 nova_compute[221550]: 2026-01-31 07:55:57.932 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:58.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:55:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:59.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:00 np0005603609 nova_compute[221550]: 2026-01-31 07:56:00.113 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:56:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:00.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:56:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:01.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:01 np0005603609 nova_compute[221550]: 2026-01-31 07:56:01.871 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:56:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:02.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:56:02 np0005603609 nova_compute[221550]: 2026-01-31 07:56:02.934 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:56:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:03.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:56:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:56:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:04.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:56:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:05 np0005603609 nova_compute[221550]: 2026-01-31 07:56:05.115 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:05.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:05 np0005603609 nova_compute[221550]: 2026-01-31 07:56:05.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:05 np0005603609 nova_compute[221550]: 2026-01-31 07:56:05.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 02:56:05 np0005603609 nova_compute[221550]: 2026-01-31 07:56:05.780 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 02:56:05 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:05Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:be:39 10.100.0.12
Jan 31 02:56:05 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:05Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:be:39 10.100.0.12
Jan 31 02:56:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:56:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:06.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:56:06 np0005603609 nova_compute[221550]: 2026-01-31 07:56:06.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:06 np0005603609 nova_compute[221550]: 2026-01-31 07:56:06.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 02:56:06 np0005603609 nova_compute[221550]: 2026-01-31 07:56:06.989 221554 DEBUG nova.compute.manager [req-035d625b-f880-4b92-8ee8-b174631cf21c req-e24fd373-cb55-4c89-b39b-feb482b1a2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received event network-changed-46923f1f-6cb6-4592-94c9-5a95246f0b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:06 np0005603609 nova_compute[221550]: 2026-01-31 07:56:06.989 221554 DEBUG nova.compute.manager [req-035d625b-f880-4b92-8ee8-b174631cf21c req-e24fd373-cb55-4c89-b39b-feb482b1a2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Refreshing instance network info cache due to event network-changed-46923f1f-6cb6-4592-94c9-5a95246f0b99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:56:06 np0005603609 nova_compute[221550]: 2026-01-31 07:56:06.991 221554 DEBUG oslo_concurrency.lockutils [req-035d625b-f880-4b92-8ee8-b174631cf21c req-e24fd373-cb55-4c89-b39b-feb482b1a2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:06 np0005603609 nova_compute[221550]: 2026-01-31 07:56:06.992 221554 DEBUG oslo_concurrency.lockutils [req-035d625b-f880-4b92-8ee8-b174631cf21c req-e24fd373-cb55-4c89-b39b-feb482b1a2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:06 np0005603609 nova_compute[221550]: 2026-01-31 07:56:06.992 221554 DEBUG nova.network.neutron [req-035d625b-f880-4b92-8ee8-b174631cf21c req-e24fd373-cb55-4c89-b39b-feb482b1a2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Refreshing network info cache for port 46923f1f-6cb6-4592-94c9-5a95246f0b99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:56:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:07.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:07.491 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:07.491 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:07.492 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:07 np0005603609 nova_compute[221550]: 2026-01-31 07:56:07.954 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:08.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:08 np0005603609 nova_compute[221550]: 2026-01-31 07:56:08.998 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:56:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:09.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:56:09 np0005603609 nova_compute[221550]: 2026-01-31 07:56:09.421 221554 DEBUG nova.compute.manager [req-96566e8d-0e3f-4c37-b8b0-07aeccd93eb2 req-8329af61-2dc3-46a8-9038-3433ea123234 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received event network-changed-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:09 np0005603609 nova_compute[221550]: 2026-01-31 07:56:09.421 221554 DEBUG nova.compute.manager [req-96566e8d-0e3f-4c37-b8b0-07aeccd93eb2 req-8329af61-2dc3-46a8-9038-3433ea123234 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Refreshing instance network info cache due to event network-changed-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:56:09 np0005603609 nova_compute[221550]: 2026-01-31 07:56:09.422 221554 DEBUG oslo_concurrency.lockutils [req-96566e8d-0e3f-4c37-b8b0-07aeccd93eb2 req-8329af61-2dc3-46a8-9038-3433ea123234 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:09 np0005603609 nova_compute[221550]: 2026-01-31 07:56:09.423 221554 DEBUG oslo_concurrency.lockutils [req-96566e8d-0e3f-4c37-b8b0-07aeccd93eb2 req-8329af61-2dc3-46a8-9038-3433ea123234 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:09 np0005603609 nova_compute[221550]: 2026-01-31 07:56:09.423 221554 DEBUG nova.network.neutron [req-96566e8d-0e3f-4c37-b8b0-07aeccd93eb2 req-8329af61-2dc3-46a8-9038-3433ea123234 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Refreshing network info cache for port f08f53c3-9326-4b07-bdcd-2c3a3a06da0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:56:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:10 np0005603609 nova_compute[221550]: 2026-01-31 07:56:10.119 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:10 np0005603609 nova_compute[221550]: 2026-01-31 07:56:10.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:10 np0005603609 nova_compute[221550]: 2026-01-31 07:56:10.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:56:10 np0005603609 nova_compute[221550]: 2026-01-31 07:56:10.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:56:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:10.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:10 np0005603609 nova_compute[221550]: 2026-01-31 07:56:10.696 221554 DEBUG nova.network.neutron [req-035d625b-f880-4b92-8ee8-b174631cf21c req-e24fd373-cb55-4c89-b39b-feb482b1a2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updated VIF entry in instance network info cache for port 46923f1f-6cb6-4592-94c9-5a95246f0b99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:56:10 np0005603609 nova_compute[221550]: 2026-01-31 07:56:10.697 221554 DEBUG nova.network.neutron [req-035d625b-f880-4b92-8ee8-b174631cf21c req-e24fd373-cb55-4c89-b39b-feb482b1a2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updating instance_info_cache with network_info: [{"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:11 np0005603609 podman[251268]: 2026-01-31 07:56:11.222685305 +0000 UTC m=+0.089120430 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 02:56:11 np0005603609 nova_compute[221550]: 2026-01-31 07:56:11.242 221554 DEBUG oslo_concurrency.lockutils [req-035d625b-f880-4b92-8ee8-b174631cf21c req-e24fd373-cb55-4c89-b39b-feb482b1a2f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:11 np0005603609 podman[251267]: 2026-01-31 07:56:11.260100058 +0000 UTC m=+0.126486922 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 02:56:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:11.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:12 np0005603609 nova_compute[221550]: 2026-01-31 07:56:12.104 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:12 np0005603609 nova_compute[221550]: 2026-01-31 07:56:12.104 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:12 np0005603609 nova_compute[221550]: 2026-01-31 07:56:12.105 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:56:12 np0005603609 nova_compute[221550]: 2026-01-31 07:56:12.105 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2d377db9-047f-4be1-a0cf-8189254def22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:56:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:12.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:12 np0005603609 nova_compute[221550]: 2026-01-31 07:56:12.956 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:56:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:13.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:56:13 np0005603609 nova_compute[221550]: 2026-01-31 07:56:13.902 221554 DEBUG nova.network.neutron [req-96566e8d-0e3f-4c37-b8b0-07aeccd93eb2 req-8329af61-2dc3-46a8-9038-3433ea123234 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Updated VIF entry in instance network info cache for port f08f53c3-9326-4b07-bdcd-2c3a3a06da0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:56:13 np0005603609 nova_compute[221550]: 2026-01-31 07:56:13.902 221554 DEBUG nova.network.neutron [req-96566e8d-0e3f-4c37-b8b0-07aeccd93eb2 req-8329af61-2dc3-46a8-9038-3433ea123234 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Updating instance_info_cache with network_info: [{"id": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "address": "fa:16:3e:26:be:39", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08f53c3-93", "ovs_interfaceid": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:14 np0005603609 nova_compute[221550]: 2026-01-31 07:56:14.016 221554 DEBUG oslo_concurrency.lockutils [req-96566e8d-0e3f-4c37-b8b0-07aeccd93eb2 req-8329af61-2dc3-46a8-9038-3433ea123234 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:14.412 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:56:14 np0005603609 nova_compute[221550]: 2026-01-31 07:56:14.412 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:14.413 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:56:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:56:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:56:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:14.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:15 np0005603609 nova_compute[221550]: 2026-01-31 07:56:15.121 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:15.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:56:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:56:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:56:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:56:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:16.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:56:16 np0005603609 nova_compute[221550]: 2026-01-31 07:56:16.835 221554 DEBUG nova.compute.manager [req-5472e9c4-b3c9-440e-aca2-b3c48a6c7328 req-c56e2da1-ce41-4ba0-ae5a-cb3b1c60dc18 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received event network-changed-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:16 np0005603609 nova_compute[221550]: 2026-01-31 07:56:16.836 221554 DEBUG nova.compute.manager [req-5472e9c4-b3c9-440e-aca2-b3c48a6c7328 req-c56e2da1-ce41-4ba0-ae5a-cb3b1c60dc18 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Refreshing instance network info cache due to event network-changed-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:56:16 np0005603609 nova_compute[221550]: 2026-01-31 07:56:16.836 221554 DEBUG oslo_concurrency.lockutils [req-5472e9c4-b3c9-440e-aca2-b3c48a6c7328 req-c56e2da1-ce41-4ba0-ae5a-cb3b1c60dc18 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:16 np0005603609 nova_compute[221550]: 2026-01-31 07:56:16.837 221554 DEBUG oslo_concurrency.lockutils [req-5472e9c4-b3c9-440e-aca2-b3c48a6c7328 req-c56e2da1-ce41-4ba0-ae5a-cb3b1c60dc18 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:16 np0005603609 nova_compute[221550]: 2026-01-31 07:56:16.837 221554 DEBUG nova.network.neutron [req-5472e9c4-b3c9-440e-aca2-b3c48a6c7328 req-c56e2da1-ce41-4ba0-ae5a-cb3b1c60dc18 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Refreshing network info cache for port f08f53c3-9326-4b07-bdcd-2c3a3a06da0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:56:17 np0005603609 nova_compute[221550]: 2026-01-31 07:56:17.159 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updating instance_info_cache with network_info: [{"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:17.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:17 np0005603609 nova_compute[221550]: 2026-01-31 07:56:17.993 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.086 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.087 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.087 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.087 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.088 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.088 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.473 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Triggering sync for uuid 2d377db9-047f-4be1-a0cf-8189254def22 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.473 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Triggering sync for uuid 3f1cc0eb-2641-4574-952f-f02264260ce6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.474 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "2d377db9-047f-4be1-a0cf-8189254def22" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.474 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "2d377db9-047f-4be1-a0cf-8189254def22" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.474 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "3f1cc0eb-2641-4574-952f-f02264260ce6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.475 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.475 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.476 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.476 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:56:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:18.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.715 221554 DEBUG oslo_concurrency.lockutils [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "interface-2d377db9-047f-4be1-a0cf-8189254def22-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.716 221554 DEBUG oslo_concurrency.lockutils [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-2d377db9-047f-4be1-a0cf-8189254def22-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:18 np0005603609 nova_compute[221550]: 2026-01-31 07:56:18.716 221554 DEBUG nova.objects.instance [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'flavor' on Instance uuid 2d377db9-047f-4be1-a0cf-8189254def22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:56:19 np0005603609 nova_compute[221550]: 2026-01-31 07:56:19.097 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:19 np0005603609 nova_compute[221550]: 2026-01-31 07:56:19.098 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:19 np0005603609 nova_compute[221550]: 2026-01-31 07:56:19.098 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:19 np0005603609 nova_compute[221550]: 2026-01-31 07:56:19.099 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:56:19 np0005603609 nova_compute[221550]: 2026-01-31 07:56:19.099 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:19 np0005603609 nova_compute[221550]: 2026-01-31 07:56:19.126 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:19 np0005603609 nova_compute[221550]: 2026-01-31 07:56:19.127 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "2d377db9-047f-4be1-a0cf-8189254def22" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:56:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:19.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:56:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:19.414 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:56:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/51827031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:56:19 np0005603609 nova_compute[221550]: 2026-01-31 07:56:19.574 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:19 np0005603609 nova_compute[221550]: 2026-01-31 07:56:19.813 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:56:19 np0005603609 nova_compute[221550]: 2026-01-31 07:56:19.814 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:56:19 np0005603609 nova_compute[221550]: 2026-01-31 07:56:19.817 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:56:19 np0005603609 nova_compute[221550]: 2026-01-31 07:56:19.818 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:56:19 np0005603609 nova_compute[221550]: 2026-01-31 07:56:19.966 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:56:19 np0005603609 nova_compute[221550]: 2026-01-31 07:56:19.967 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4267MB free_disk=20.897254943847656GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:56:19 np0005603609 nova_compute[221550]: 2026-01-31 07:56:19.967 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:19 np0005603609 nova_compute[221550]: 2026-01-31 07:56:19.967 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:20 np0005603609 nova_compute[221550]: 2026-01-31 07:56:20.123 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:20.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:21 np0005603609 nova_compute[221550]: 2026-01-31 07:56:21.292 221554 DEBUG nova.compute.manager [req-3445b11a-5930-4227-9845-709657980169 req-8ad7e1e3-a9d5-4100-9121-3083d7a1af92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received event network-changed-46923f1f-6cb6-4592-94c9-5a95246f0b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:21 np0005603609 nova_compute[221550]: 2026-01-31 07:56:21.292 221554 DEBUG nova.compute.manager [req-3445b11a-5930-4227-9845-709657980169 req-8ad7e1e3-a9d5-4100-9121-3083d7a1af92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Refreshing instance network info cache due to event network-changed-46923f1f-6cb6-4592-94c9-5a95246f0b99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:56:21 np0005603609 nova_compute[221550]: 2026-01-31 07:56:21.293 221554 DEBUG oslo_concurrency.lockutils [req-3445b11a-5930-4227-9845-709657980169 req-8ad7e1e3-a9d5-4100-9121-3083d7a1af92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:21 np0005603609 nova_compute[221550]: 2026-01-31 07:56:21.293 221554 DEBUG oslo_concurrency.lockutils [req-3445b11a-5930-4227-9845-709657980169 req-8ad7e1e3-a9d5-4100-9121-3083d7a1af92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:21 np0005603609 nova_compute[221550]: 2026-01-31 07:56:21.293 221554 DEBUG nova.network.neutron [req-3445b11a-5930-4227-9845-709657980169 req-8ad7e1e3-a9d5-4100-9121-3083d7a1af92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Refreshing network info cache for port 46923f1f-6cb6-4592-94c9-5a95246f0b99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:56:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:56:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:21.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:56:21 np0005603609 nova_compute[221550]: 2026-01-31 07:56:21.519 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 2d377db9-047f-4be1-a0cf-8189254def22 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:56:21 np0005603609 nova_compute[221550]: 2026-01-31 07:56:21.520 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 3f1cc0eb-2641-4574-952f-f02264260ce6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:56:21 np0005603609 nova_compute[221550]: 2026-01-31 07:56:21.520 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:56:21 np0005603609 nova_compute[221550]: 2026-01-31 07:56:21.521 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:56:21 np0005603609 nova_compute[221550]: 2026-01-31 07:56:21.793 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:22 np0005603609 nova_compute[221550]: 2026-01-31 07:56:22.227 221554 DEBUG nova.objects.instance [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'pci_requests' on Instance uuid 2d377db9-047f-4be1-a0cf-8189254def22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:56:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:56:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1461398054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:56:22 np0005603609 nova_compute[221550]: 2026-01-31 07:56:22.294 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:22 np0005603609 nova_compute[221550]: 2026-01-31 07:56:22.297 221554 DEBUG nova.network.neutron [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:56:22 np0005603609 nova_compute[221550]: 2026-01-31 07:56:22.302 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:56:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:56:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:56:22 np0005603609 nova_compute[221550]: 2026-01-31 07:56:22.564 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:56:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:22.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:23 np0005603609 nova_compute[221550]: 2026-01-31 07:56:23.038 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:23 np0005603609 nova_compute[221550]: 2026-01-31 07:56:23.110 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:56:23 np0005603609 nova_compute[221550]: 2026-01-31 07:56:23.110 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:23 np0005603609 nova_compute[221550]: 2026-01-31 07:56:23.238 221554 DEBUG nova.policy [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1e1c5eef3d264666bc90735dd338d82a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a8cbd6cc22654dfab04487522a63426c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:56:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:56:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:23.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:56:23 np0005603609 nova_compute[221550]: 2026-01-31 07:56:23.367 221554 DEBUG nova.network.neutron [req-5472e9c4-b3c9-440e-aca2-b3c48a6c7328 req-c56e2da1-ce41-4ba0-ae5a-cb3b1c60dc18 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Updated VIF entry in instance network info cache for port f08f53c3-9326-4b07-bdcd-2c3a3a06da0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:56:23 np0005603609 nova_compute[221550]: 2026-01-31 07:56:23.368 221554 DEBUG nova.network.neutron [req-5472e9c4-b3c9-440e-aca2-b3c48a6c7328 req-c56e2da1-ce41-4ba0-ae5a-cb3b1c60dc18 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Updating instance_info_cache with network_info: [{"id": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "address": "fa:16:3e:26:be:39", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08f53c3-93", "ovs_interfaceid": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:23 np0005603609 nova_compute[221550]: 2026-01-31 07:56:23.887 221554 DEBUG oslo_concurrency.lockutils [req-5472e9c4-b3c9-440e-aca2-b3c48a6c7328 req-c56e2da1-ce41-4ba0-ae5a-cb3b1c60dc18 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:24.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:24 np0005603609 nova_compute[221550]: 2026-01-31 07:56:24.947 221554 DEBUG nova.network.neutron [req-3445b11a-5930-4227-9845-709657980169 req-8ad7e1e3-a9d5-4100-9121-3083d7a1af92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updated VIF entry in instance network info cache for port 46923f1f-6cb6-4592-94c9-5a95246f0b99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:56:24 np0005603609 nova_compute[221550]: 2026-01-31 07:56:24.948 221554 DEBUG nova.network.neutron [req-3445b11a-5930-4227-9845-709657980169 req-8ad7e1e3-a9d5-4100-9121-3083d7a1af92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updating instance_info_cache with network_info: [{"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:25 np0005603609 nova_compute[221550]: 2026-01-31 07:56:25.046 221554 DEBUG oslo_concurrency.lockutils [req-3445b11a-5930-4227-9845-709657980169 req-8ad7e1e3-a9d5-4100-9121-3083d7a1af92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:25 np0005603609 nova_compute[221550]: 2026-01-31 07:56:25.126 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:25 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:25Z|00258|binding|INFO|Releasing lport cbb039b2-15df-45bd-8800-81213ecc7011 from this chassis (sb_readonly=0)
Jan 31 02:56:25 np0005603609 nova_compute[221550]: 2026-01-31 07:56:25.364 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:56:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:25.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:56:25 np0005603609 nova_compute[221550]: 2026-01-31 07:56:25.992 221554 DEBUG nova.network.neutron [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Successfully updated port: 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:56:26 np0005603609 nova_compute[221550]: 2026-01-31 07:56:26.240 221554 DEBUG oslo_concurrency.lockutils [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:26 np0005603609 nova_compute[221550]: 2026-01-31 07:56:26.241 221554 DEBUG oslo_concurrency.lockutils [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquired lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:26 np0005603609 nova_compute[221550]: 2026-01-31 07:56:26.241 221554 DEBUG nova.network.neutron [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:56:26 np0005603609 nova_compute[221550]: 2026-01-31 07:56:26.367 221554 DEBUG nova.compute.manager [req-d43a28af-c551-4085-95c2-09c57205aacb req-a7abb8db-99c7-4d3d-bc8c-c9554516f1d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received event network-changed-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:26 np0005603609 nova_compute[221550]: 2026-01-31 07:56:26.368 221554 DEBUG nova.compute.manager [req-d43a28af-c551-4085-95c2-09c57205aacb req-a7abb8db-99c7-4d3d-bc8c-c9554516f1d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Refreshing instance network info cache due to event network-changed-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:56:26 np0005603609 nova_compute[221550]: 2026-01-31 07:56:26.368 221554 DEBUG oslo_concurrency.lockutils [req-d43a28af-c551-4085-95c2-09c57205aacb req-a7abb8db-99c7-4d3d-bc8c-c9554516f1d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:56:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:26.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:56:27 np0005603609 nova_compute[221550]: 2026-01-31 07:56:27.087 221554 WARNING nova.network.neutron [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] 485494d9-5360-41c3-a10e-ef5098af0809 already exists in list: networks containing: ['485494d9-5360-41c3-a10e-ef5098af0809']. ignoring it#033[00m
Jan 31 02:56:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:56:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:27.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:56:27 np0005603609 nova_compute[221550]: 2026-01-31 07:56:27.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:27 np0005603609 nova_compute[221550]: 2026-01-31 07:56:27.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:27 np0005603609 nova_compute[221550]: 2026-01-31 07:56:27.950 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:28 np0005603609 nova_compute[221550]: 2026-01-31 07:56:28.040 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:56:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:28.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:56:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:29.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:56:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 27K writes, 108K keys, 27K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.04 MB/s#012Cumulative WAL: 27K writes, 9438 syncs, 2.92 writes per sync, written: 0.10 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5740 writes, 21K keys, 5740 commit groups, 1.0 writes per commit group, ingest: 22.26 MB, 0.04 MB/s#012Interval WAL: 5741 writes, 2226 syncs, 2.58 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 02:56:30 np0005603609 nova_compute[221550]: 2026-01-31 07:56:30.128 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:30.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:56:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:31.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.136 221554 DEBUG nova.network.neutron [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updating instance_info_cache with network_info: [{"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.189 221554 DEBUG oslo_concurrency.lockutils [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Releasing lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.190 221554 DEBUG oslo_concurrency.lockutils [req-d43a28af-c551-4085-95c2-09c57205aacb req-a7abb8db-99c7-4d3d-bc8c-c9554516f1d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.190 221554 DEBUG nova.network.neutron [req-d43a28af-c551-4085-95c2-09c57205aacb req-a7abb8db-99c7-4d3d-bc8c-c9554516f1d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Refreshing network info cache for port 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.194 221554 DEBUG nova.virt.libvirt.vif [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:54:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-984890616',display_name='tempest-tempest.common.compute-instance-984890616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-984890616',id=77,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnxydFFT/bcqoM03GknAV6VVEv+uI9/M2thVO454uMQmK087L6GvpajnIifCsqOX+pET7jqX84pd3d5htElL3GbJr344TJNF3ti2SMNzp16z2rTGy4GU5WRv9em26Ul9Q==',key_name='tempest-keypair-1143543931',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-6f0xru8d',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:55:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=2d377db9-047f-4be1-a0cf-8189254def22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.194 221554 DEBUG nova.network.os_vif_util [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.195 221554 DEBUG nova.network.os_vif_util [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.195 221554 DEBUG os_vif [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.196 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.196 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.196 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.199 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.199 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f60b96f-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.199 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5f60b96f-c9, col_values=(('external_ids', {'iface-id': '5f60b96f-c91b-46d4-b7cf-18b1e71d5d00', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:16:62', 'vm-uuid': '2d377db9-047f-4be1-a0cf-8189254def22'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.254 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:32 np0005603609 NetworkManager[49064]: <info>  [1769846192.2559] manager: (tap5f60b96f-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.258 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.262 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.264 221554 INFO os_vif [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9')#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.265 221554 DEBUG nova.virt.libvirt.vif [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:54:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-984890616',display_name='tempest-tempest.common.compute-instance-984890616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-984890616',id=77,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnxydFFT/bcqoM03GknAV6VVEv+uI9/M2thVO454uMQmK087L6GvpajnIifCsqOX+pET7jqX84pd3d5htElL3GbJr344TJNF3ti2SMNzp16z2rTGy4GU5WRv9em26Ul9Q==',key_name='tempest-keypair-1143543931',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-6f0xru8d',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:55:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=2d377db9-047f-4be1-a0cf-8189254def22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.265 221554 DEBUG nova.network.os_vif_util [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.266 221554 DEBUG nova.network.os_vif_util [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.268 221554 DEBUG nova.virt.libvirt.guest [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] attach device xml: <interface type="ethernet">
Jan 31 02:56:32 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:f2:16:62"/>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:  <target dev="tap5f60b96f-c9"/>
Jan 31 02:56:32 np0005603609 nova_compute[221550]: </interface>
Jan 31 02:56:32 np0005603609 nova_compute[221550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 02:56:32 np0005603609 kernel: tap5f60b96f-c9: entered promiscuous mode
Jan 31 02:56:32 np0005603609 NetworkManager[49064]: <info>  [1769846192.2789] manager: (tap5f60b96f-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.280 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:32Z|00259|binding|INFO|Claiming lport 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 for this chassis.
Jan 31 02:56:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:32Z|00260|binding|INFO|5f60b96f-c91b-46d4-b7cf-18b1e71d5d00: Claiming fa:16:3e:f2:16:62 10.100.0.9
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.288 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:32Z|00261|binding|INFO|Setting lport 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 ovn-installed in OVS
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.291 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:32 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:32Z|00262|binding|INFO|Setting lport 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 up in Southbound
Jan 31 02:56:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:32.293 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:16:62 10.100.0.9'], port_security=['fa:16:3e:f2:16:62 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-670789153', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2d377db9-047f-4be1-a0cf-8189254def22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-670789153', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7e4e426-0b76-41c4-8dcb-ea007c31db76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:56:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:32.294 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 in datapath 485494d9-5360-41c3-a10e-ef5098af0809 bound to our chassis#033[00m
Jan 31 02:56:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:32.296 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809#033[00m
Jan 31 02:56:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:32.308 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[200a8ca1-a60a-4761-8b5d-90c7e33880e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:32 np0005603609 systemd-udevd[251418]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:56:32 np0005603609 NetworkManager[49064]: <info>  [1769846192.3195] device (tap5f60b96f-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:56:32 np0005603609 NetworkManager[49064]: <info>  [1769846192.3200] device (tap5f60b96f-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:56:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:32.330 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff64508-1d2c-42ec-ba5b-c9a2ff30aa66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:32.333 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d1481e59-3087-45ce-8ff1-819ed65f7c89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:32.354 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[6e137945-6d37-4c22-b40c-16f666044d91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:32.367 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1be6e5a0-1c4e-4cc1-aeb0-3fd807fed8d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634860, 'reachable_time': 37551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251425, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:32.377 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d7394a22-86f5-48a3-906b-d283ee751038]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634865, 'tstamp': 634865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251426, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634867, 'tstamp': 634867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251426, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:32.378 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.380 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.381 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:32.381 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:32.381 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:56:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:32.382 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:32.382 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.426 221554 DEBUG nova.virt.libvirt.driver [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.426 221554 DEBUG nova.virt.libvirt.driver [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.426 221554 DEBUG nova.virt.libvirt.driver [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:28:00:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.426 221554 DEBUG nova.virt.libvirt.driver [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:f2:16:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.461 221554 DEBUG nova.virt.libvirt.guest [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:56:32 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:  <nova:name>tempest-tempest.common.compute-instance-984890616</nova:name>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 07:56:32</nova:creationTime>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 02:56:32 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:    <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:    <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:    <nova:port uuid="46923f1f-6cb6-4592-94c9-5a95246f0b99">
Jan 31 02:56:32 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:    <nova:port uuid="5f60b96f-c91b-46d4-b7cf-18b1e71d5d00">
Jan 31 02:56:32 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:56:32 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 02:56:32 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 02:56:32 np0005603609 nova_compute[221550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.563 221554 DEBUG oslo_concurrency.lockutils [None req-390deb1e-4c6e-43d7-9cdb-b6e6ce9643b8 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-2d377db9-047f-4be1-a0cf-8189254def22-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 13.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:32.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.962 221554 DEBUG nova.compute.manager [req-389ebbc3-00ab-4294-9917-54612fd4a8c5 req-9b12b8f3-75bd-4774-aad8-a76e85da6953 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received event network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.963 221554 DEBUG oslo_concurrency.lockutils [req-389ebbc3-00ab-4294-9917-54612fd4a8c5 req-9b12b8f3-75bd-4774-aad8-a76e85da6953 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2d377db9-047f-4be1-a0cf-8189254def22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.963 221554 DEBUG oslo_concurrency.lockutils [req-389ebbc3-00ab-4294-9917-54612fd4a8c5 req-9b12b8f3-75bd-4774-aad8-a76e85da6953 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.963 221554 DEBUG oslo_concurrency.lockutils [req-389ebbc3-00ab-4294-9917-54612fd4a8c5 req-9b12b8f3-75bd-4774-aad8-a76e85da6953 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.963 221554 DEBUG nova.compute.manager [req-389ebbc3-00ab-4294-9917-54612fd4a8c5 req-9b12b8f3-75bd-4774-aad8-a76e85da6953 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] No waiting events found dispatching network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:56:32 np0005603609 nova_compute[221550]: 2026-01-31 07:56:32.964 221554 WARNING nova.compute.manager [req-389ebbc3-00ab-4294-9917-54612fd4a8c5 req-9b12b8f3-75bd-4774-aad8-a76e85da6953 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received unexpected event network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:56:33 np0005603609 nova_compute[221550]: 2026-01-31 07:56:33.041 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:33 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:33Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:16:62 10.100.0.9
Jan 31 02:56:33 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:33Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:16:62 10.100.0.9
Jan 31 02:56:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:33.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:56:34.526101) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194526169, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2403, "num_deletes": 251, "total_data_size": 5778777, "memory_usage": 5852312, "flush_reason": "Manual Compaction"}
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194583776, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 3776663, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39315, "largest_seqno": 41713, "table_properties": {"data_size": 3767048, "index_size": 6045, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20259, "raw_average_key_size": 20, "raw_value_size": 3747648, "raw_average_value_size": 3797, "num_data_blocks": 264, "num_entries": 987, "num_filter_entries": 987, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845964, "oldest_key_time": 1769845964, "file_creation_time": 1769846194, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 57717 microseconds, and 6785 cpu microseconds.
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:56:34.583820) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 3776663 bytes OK
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:56:34.583839) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:56:34.587971) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:56:34.587988) EVENT_LOG_v1 {"time_micros": 1769846194587982, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:56:34.588007) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 5768293, prev total WAL file size 5768293, number of live WAL files 2.
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:56:34.589036) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(3688KB)], [75(9752KB)]
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194589073, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 13763598, "oldest_snapshot_seqno": -1}
Jan 31 02:56:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:56:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:34.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 6767 keys, 11852201 bytes, temperature: kUnknown
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194761717, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 11852201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11804935, "index_size": 29234, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16965, "raw_key_size": 173691, "raw_average_key_size": 25, "raw_value_size": 11681687, "raw_average_value_size": 1726, "num_data_blocks": 1167, "num_entries": 6767, "num_filter_entries": 6767, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769846194, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:56:34.769685) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 11852201 bytes
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:56:34.787380) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 79.7 rd, 68.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.5 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7292, records dropped: 525 output_compression: NoCompression
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:56:34.787400) EVENT_LOG_v1 {"time_micros": 1769846194787391, "job": 46, "event": "compaction_finished", "compaction_time_micros": 172724, "compaction_time_cpu_micros": 26191, "output_level": 6, "num_output_files": 1, "total_output_size": 11852201, "num_input_records": 7292, "num_output_records": 6767, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194787852, "job": 46, "event": "table_file_deletion", "file_number": 77}
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194788852, "job": 46, "event": "table_file_deletion", "file_number": 75}
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:56:34.588945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:56:34.789020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:56:34.789026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:56:34.789028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:56:34.789029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:56:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:56:34.789031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:56:35 np0005603609 nova_compute[221550]: 2026-01-31 07:56:35.355 221554 DEBUG nova.compute.manager [req-357454c8-56d6-4d0f-a2de-e2f71b6fa580 req-a490af34-6cf1-4d5a-a094-0bd2fd464747 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received event network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:35 np0005603609 nova_compute[221550]: 2026-01-31 07:56:35.356 221554 DEBUG oslo_concurrency.lockutils [req-357454c8-56d6-4d0f-a2de-e2f71b6fa580 req-a490af34-6cf1-4d5a-a094-0bd2fd464747 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2d377db9-047f-4be1-a0cf-8189254def22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:35 np0005603609 nova_compute[221550]: 2026-01-31 07:56:35.356 221554 DEBUG oslo_concurrency.lockutils [req-357454c8-56d6-4d0f-a2de-e2f71b6fa580 req-a490af34-6cf1-4d5a-a094-0bd2fd464747 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:35 np0005603609 nova_compute[221550]: 2026-01-31 07:56:35.356 221554 DEBUG oslo_concurrency.lockutils [req-357454c8-56d6-4d0f-a2de-e2f71b6fa580 req-a490af34-6cf1-4d5a-a094-0bd2fd464747 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:35 np0005603609 nova_compute[221550]: 2026-01-31 07:56:35.357 221554 DEBUG nova.compute.manager [req-357454c8-56d6-4d0f-a2de-e2f71b6fa580 req-a490af34-6cf1-4d5a-a094-0bd2fd464747 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] No waiting events found dispatching network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:56:35 np0005603609 nova_compute[221550]: 2026-01-31 07:56:35.357 221554 WARNING nova.compute.manager [req-357454c8-56d6-4d0f-a2de-e2f71b6fa580 req-a490af34-6cf1-4d5a-a094-0bd2fd464747 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received unexpected event network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:56:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:35.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:36.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:36 np0005603609 nova_compute[221550]: 2026-01-31 07:56:36.809 221554 DEBUG oslo_concurrency.lockutils [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "interface-2d377db9-047f-4be1-a0cf-8189254def22-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:36 np0005603609 nova_compute[221550]: 2026-01-31 07:56:36.810 221554 DEBUG oslo_concurrency.lockutils [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-2d377db9-047f-4be1-a0cf-8189254def22-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:36 np0005603609 nova_compute[221550]: 2026-01-31 07:56:36.934 221554 DEBUG nova.objects.instance [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'flavor' on Instance uuid 2d377db9-047f-4be1-a0cf-8189254def22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.051 221554 DEBUG nova.virt.libvirt.vif [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:54:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-984890616',display_name='tempest-tempest.common.compute-instance-984890616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-984890616',id=77,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnxydFFT/bcqoM03GknAV6VVEv+uI9/M2thVO454uMQmK087L6GvpajnIifCsqOX+pET7jqX84pd3d5htElL3GbJr344TJNF3ti2SMNzp16z2rTGy4GU5WRv9em26Ul9Q==',key_name='tempest-keypair-1143543931',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-6f0xru8d',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:55:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=2d377db9-047f-4be1-a0cf-8189254def22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.052 221554 DEBUG nova.network.os_vif_util [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.052 221554 DEBUG nova.network.os_vif_util [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.056 221554 DEBUG nova.virt.libvirt.guest [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:16:62"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5f60b96f-c9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.058 221554 DEBUG nova.virt.libvirt.guest [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:16:62"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5f60b96f-c9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.060 221554 DEBUG nova.virt.libvirt.driver [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Attempting to detach device tap5f60b96f-c9 from instance 2d377db9-047f-4be1-a0cf-8189254def22 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.061 221554 DEBUG nova.virt.libvirt.guest [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] detach device xml: <interface type="ethernet">
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:f2:16:62"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <target dev="tap5f60b96f-c9"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]: </interface>
Jan 31 02:56:37 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.090 221554 DEBUG nova.virt.libvirt.guest [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:16:62"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5f60b96f-c9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.095 221554 DEBUG nova.virt.libvirt.guest [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f2:16:62"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5f60b96f-c9"/></interface>not found in domain: <domain type='kvm' id='35'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <name>instance-0000004d</name>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <uuid>2d377db9-047f-4be1-a0cf-8189254def22</uuid>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:name>tempest-tempest.common.compute-instance-984890616</nova:name>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 07:56:32</nova:creationTime>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:port uuid="46923f1f-6cb6-4592-94c9-5a95246f0b99">
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:port uuid="5f60b96f-c91b-46d4-b7cf-18b1e71d5d00">
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 02:56:37 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <memory unit='KiB'>131072</memory>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <vcpu placement='static'>1</vcpu>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <resource>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <partition>/machine</partition>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </resource>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <sysinfo type='smbios'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <entry name='manufacturer'>RDO</entry>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <entry name='serial'>2d377db9-047f-4be1-a0cf-8189254def22</entry>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <entry name='uuid'>2d377db9-047f-4be1-a0cf-8189254def22</entry>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <entry name='family'>Virtual Machine</entry>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <boot dev='hd'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <smbios mode='sysinfo'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <vmcoreinfo state='on'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <model fallback='forbid'>Nehalem</model>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <feature policy='require' name='x2apic'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <feature policy='require' name='hypervisor'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <feature policy='require' name='vme'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <clock offset='utc'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <timer name='hpet' present='no'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <on_poweroff>destroy</on_poweroff>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <on_reboot>restart</on_reboot>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <on_crash>destroy</on_crash>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <disk type='network' device='disk'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/2d377db9-047f-4be1-a0cf-8189254def22_disk' index='2'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target dev='vda' bus='virtio'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='virtio-disk0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <disk type='network' device='cdrom'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/2d377db9-047f-4be1-a0cf-8189254def22_disk.config' index='1'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target dev='sda' bus='sata'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <readonly/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='sata0-0-0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pcie.0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='1' port='0x10'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.1'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='2' port='0x11'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.2'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='3' port='0x12'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.3'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='4' port='0x13'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.4'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='5' port='0x14'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.5'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='6' port='0x15'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.6'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='7' port='0x16'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.7'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='8' port='0x17'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.8'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='9' port='0x18'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.9'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='10' port='0x19'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.10'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='11' port='0x1a'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.11'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='12' port='0x1b'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.12'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='13' port='0x1c'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.13'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='14' port='0x1d'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.14'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='15' port='0x1e'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.15'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='16' port='0x1f'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.16'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='17' port='0x20'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.17'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='18' port='0x21'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.18'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='19' port='0x22'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.19'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='20' port='0x23'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.20'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='21' port='0x24'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.21'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='22' port='0x25'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.22'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='23' port='0x26'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.23'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='24' port='0x27'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.24'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='25' port='0x28'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.25'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-pci-bridge'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.26'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='usb'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='sata' index='0'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='ide'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:28:00:82'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target dev='tap46923f1f-6c'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='net0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:f2:16:62'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target dev='tap5f60b96f-c9'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='net1'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <serial type='pty'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/2d377db9-047f-4be1-a0cf-8189254def22/console.log' append='off'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target type='isa-serial' port='0'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <model name='isa-serial'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      </target>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/2d377db9-047f-4be1-a0cf-8189254def22/console.log' append='off'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target type='serial' port='0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </console>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <input type='tablet' bus='usb'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='input0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='usb' bus='0' port='1'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <input type='mouse' bus='ps2'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='input1'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <input type='keyboard' bus='ps2'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='input2'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <listen type='address' address='::0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <audio id='1' type='none'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='video0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <watchdog model='itco' action='reset'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='watchdog0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </watchdog>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <memballoon model='virtio'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <stats period='10'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='balloon0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <rng model='virtio'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <backend model='random'>/dev/urandom</backend>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='rng0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <label>system_u:system_r:svirt_t:s0:c343,c836</label>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c343,c836</imagelabel>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <label>+107:+107</label>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <imagelabel>+107:+107</imagelabel>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 02:56:37 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:56:37 np0005603609 nova_compute[221550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.096 221554 INFO nova.virt.libvirt.driver [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully detached device tap5f60b96f-c9 from instance 2d377db9-047f-4be1-a0cf-8189254def22 from the persistent domain config.#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.096 221554 DEBUG nova.virt.libvirt.driver [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] (1/8): Attempting to detach device tap5f60b96f-c9 with device alias net1 from instance 2d377db9-047f-4be1-a0cf-8189254def22 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.097 221554 DEBUG nova.virt.libvirt.guest [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] detach device xml: <interface type="ethernet">
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:f2:16:62"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <target dev="tap5f60b96f-c9"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]: </interface>
Jan 31 02:56:37 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 02:56:37 np0005603609 kernel: tap5f60b96f-c9 (unregistering): left promiscuous mode
Jan 31 02:56:37 np0005603609 NetworkManager[49064]: <info>  [1769846197.2055] device (tap5f60b96f-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.211 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:37Z|00263|binding|INFO|Releasing lport 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 from this chassis (sb_readonly=0)
Jan 31 02:56:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:37Z|00264|binding|INFO|Setting lport 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 down in Southbound
Jan 31 02:56:37 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:37Z|00265|binding|INFO|Removing iface tap5f60b96f-c9 ovn-installed in OVS
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.214 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.216 221554 DEBUG nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Received event <DeviceRemovedEvent: 1769846197.2162373, 2d377db9-047f-4be1-a0cf-8189254def22 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.217 221554 DEBUG nova.virt.libvirt.driver [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Start waiting for the detach event from libvirt for device tap5f60b96f-c9 with device alias net1 for instance 2d377db9-047f-4be1-a0cf-8189254def22 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.217 221554 DEBUG nova.virt.libvirt.guest [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:16:62"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5f60b96f-c9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.220 221554 DEBUG nova.virt.libvirt.guest [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f2:16:62"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5f60b96f-c9"/></interface>not found in domain: <domain type='kvm' id='35'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <name>instance-0000004d</name>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <uuid>2d377db9-047f-4be1-a0cf-8189254def22</uuid>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:name>tempest-tempest.common.compute-instance-984890616</nova:name>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 07:56:32</nova:creationTime>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:port uuid="46923f1f-6cb6-4592-94c9-5a95246f0b99">
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:port uuid="5f60b96f-c91b-46d4-b7cf-18b1e71d5d00">
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 02:56:37 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <memory unit='KiB'>131072</memory>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <vcpu placement='static'>1</vcpu>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <resource>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <partition>/machine</partition>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </resource>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <sysinfo type='smbios'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <entry name='manufacturer'>RDO</entry>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <entry name='serial'>2d377db9-047f-4be1-a0cf-8189254def22</entry>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <entry name='uuid'>2d377db9-047f-4be1-a0cf-8189254def22</entry>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <entry name='family'>Virtual Machine</entry>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <boot dev='hd'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <smbios mode='sysinfo'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <vmcoreinfo state='on'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <model fallback='forbid'>Nehalem</model>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <feature policy='require' name='x2apic'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <feature policy='require' name='hypervisor'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <feature policy='require' name='vme'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <clock offset='utc'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <timer name='hpet' present='no'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <on_poweroff>destroy</on_poweroff>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <on_reboot>restart</on_reboot>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <on_crash>destroy</on_crash>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <disk type='network' device='disk'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/2d377db9-047f-4be1-a0cf-8189254def22_disk' index='2'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target dev='vda' bus='virtio'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='virtio-disk0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <disk type='network' device='cdrom'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/2d377db9-047f-4be1-a0cf-8189254def22_disk.config' index='1'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target dev='sda' bus='sata'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <readonly/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='sata0-0-0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pcie.0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='1' port='0x10'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.1'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='2' port='0x11'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.2'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='3' port='0x12'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.3'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='4' port='0x13'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.4'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='5' port='0x14'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.5'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='6' port='0x15'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.6'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='7' port='0x16'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.7'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='8' port='0x17'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.8'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='9' port='0x18'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.9'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='10' port='0x19'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.10'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='11' port='0x1a'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.11'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='12' port='0x1b'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.12'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='13' port='0x1c'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.13'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='14' port='0x1d'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.14'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='15' port='0x1e'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.15'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='16' port='0x1f'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.16'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='17' port='0x20'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.17'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='18' port='0x21'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.18'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='19' port='0x22'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.19'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='20' port='0x23'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.20'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='21' port='0x24'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.21'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='22' port='0x25'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.22'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='23' port='0x26'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.23'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='24' port='0x27'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.24'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target chassis='25' port='0x28'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.25'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model name='pcie-pci-bridge'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='pci.26'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='usb'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <controller type='sata' index='0'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='ide'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:28:00:82'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target dev='tap46923f1f-6c'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='net0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <serial type='pty'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/2d377db9-047f-4be1-a0cf-8189254def22/console.log' append='off'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target type='isa-serial' port='0'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:        <model name='isa-serial'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      </target>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/2d377db9-047f-4be1-a0cf-8189254def22/console.log' append='off'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <target type='serial' port='0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </console>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <input type='tablet' bus='usb'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='input0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='usb' bus='0' port='1'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <input type='mouse' bus='ps2'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='input1'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <input type='keyboard' bus='ps2'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='input2'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <listen type='address' address='::0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <audio id='1' type='none'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='video0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <watchdog model='itco' action='reset'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='watchdog0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </watchdog>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <memballoon model='virtio'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <stats period='10'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='balloon0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <rng model='virtio'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <backend model='random'>/dev/urandom</backend>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <alias name='rng0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <label>system_u:system_r:svirt_t:s0:c343,c836</label>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c343,c836</imagelabel>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <label>+107:+107</label>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <imagelabel>+107:+107</imagelabel>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 02:56:37 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:56:37 np0005603609 nova_compute[221550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.220 221554 INFO nova.virt.libvirt.driver [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully detached device tap5f60b96f-c9 from instance 2d377db9-047f-4be1-a0cf-8189254def22 from the live domain config.#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.221 221554 DEBUG nova.virt.libvirt.vif [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:54:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-984890616',display_name='tempest-tempest.common.compute-instance-984890616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-984890616',id=77,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnxydFFT/bcqoM03GknAV6VVEv+uI9/M2thVO454uMQmK087L6GvpajnIifCsqOX+pET7jqX84pd3d5htElL3GbJr344TJNF3ti2SMNzp16z2rTGy4GU5WRv9em26Ul9Q==',key_name='tempest-keypair-1143543931',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-6f0xru8d',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:55:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=2d377db9-047f-4be1-a0cf-8189254def22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.221 221554 DEBUG nova.network.os_vif_util [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.222 221554 DEBUG nova.network.os_vif_util [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.222 221554 DEBUG os_vif [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.224 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.224 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f60b96f-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.225 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.225 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.227 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.228 221554 INFO os_vif [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9')#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.229 221554 DEBUG nova.virt.libvirt.guest [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:name>tempest-tempest.common.compute-instance-984890616</nova:name>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 07:56:37</nova:creationTime>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    <nova:port uuid="46923f1f-6cb6-4592-94c9-5a95246f0b99">
Jan 31 02:56:37 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:56:37 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 02:56:37 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 02:56:37 np0005603609 nova_compute[221550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.265 221554 DEBUG nova.network.neutron [req-d43a28af-c551-4085-95c2-09c57205aacb req-a7abb8db-99c7-4d3d-bc8c-c9554516f1d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updated VIF entry in instance network info cache for port 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.265 221554 DEBUG nova.network.neutron [req-d43a28af-c551-4085-95c2-09c57205aacb req-a7abb8db-99c7-4d3d-bc8c-c9554516f1d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updating instance_info_cache with network_info: [{"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:37.341 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:16:62 10.100.0.9'], port_security=['fa:16:3e:f2:16:62 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-670789153', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2d377db9-047f-4be1-a0cf-8189254def22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-670789153', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7e4e426-0b76-41c4-8dcb-ea007c31db76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:56:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:37.342 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 in datapath 485494d9-5360-41c3-a10e-ef5098af0809 unbound from our chassis#033[00m
Jan 31 02:56:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:37.343 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809#033[00m
Jan 31 02:56:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:37.355 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d65de9-0dff-4793-afe3-111c652181f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.368 221554 DEBUG oslo_concurrency.lockutils [req-d43a28af-c551-4085-95c2-09c57205aacb req-a7abb8db-99c7-4d3d-bc8c-c9554516f1d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:37.376 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed4bd35-d27a-4aed-80fd-075598fd8a16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:37.378 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[34155c50-f778-4c56-b30e-4f12d41c59d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:37.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:37.401 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[90ac5165-76c8-489d-8a02-2d7b2854c3d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.415 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "694a032a-234d-4e0c-9aa6-c7f4050fc232" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.416 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "694a032a-234d-4e0c-9aa6-c7f4050fc232" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:37.416 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[42a62411-24e3-461b-865b-c5f63752181a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634860, 'reachable_time': 37551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251438, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:37.425 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[de66a56a-9f9c-429f-8086-097ef7db893b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634865, 'tstamp': 634865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251439, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634867, 'tstamp': 634867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251439, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:37.427 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.429 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.430 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:37.431 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:37.431 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:56:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:37.432 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:37.433 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.440 221554 DEBUG nova.compute.manager [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.628 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.629 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.645 221554 DEBUG nova.virt.hardware [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.645 221554 INFO nova.compute.claims [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:56:37 np0005603609 nova_compute[221550]: 2026-01-31 07:56:37.918 221554 DEBUG oslo_concurrency.processutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:38 np0005603609 nova_compute[221550]: 2026-01-31 07:56:38.042 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:38 np0005603609 nova_compute[221550]: 2026-01-31 07:56:38.047 221554 DEBUG nova.compute.manager [req-6ed1f6e9-fda0-4138-8b4f-fa2c3aca6869 req-76aba0c3-304b-4dc8-b56a-a57a642ba73b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received event network-vif-unplugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:38 np0005603609 nova_compute[221550]: 2026-01-31 07:56:38.047 221554 DEBUG oslo_concurrency.lockutils [req-6ed1f6e9-fda0-4138-8b4f-fa2c3aca6869 req-76aba0c3-304b-4dc8-b56a-a57a642ba73b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2d377db9-047f-4be1-a0cf-8189254def22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:38 np0005603609 nova_compute[221550]: 2026-01-31 07:56:38.047 221554 DEBUG oslo_concurrency.lockutils [req-6ed1f6e9-fda0-4138-8b4f-fa2c3aca6869 req-76aba0c3-304b-4dc8-b56a-a57a642ba73b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:38 np0005603609 nova_compute[221550]: 2026-01-31 07:56:38.048 221554 DEBUG oslo_concurrency.lockutils [req-6ed1f6e9-fda0-4138-8b4f-fa2c3aca6869 req-76aba0c3-304b-4dc8-b56a-a57a642ba73b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:38 np0005603609 nova_compute[221550]: 2026-01-31 07:56:38.048 221554 DEBUG nova.compute.manager [req-6ed1f6e9-fda0-4138-8b4f-fa2c3aca6869 req-76aba0c3-304b-4dc8-b56a-a57a642ba73b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] No waiting events found dispatching network-vif-unplugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:56:38 np0005603609 nova_compute[221550]: 2026-01-31 07:56:38.048 221554 WARNING nova.compute.manager [req-6ed1f6e9-fda0-4138-8b4f-fa2c3aca6869 req-76aba0c3-304b-4dc8-b56a-a57a642ba73b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received unexpected event network-vif-unplugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:56:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:56:38 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1342499023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:56:38 np0005603609 nova_compute[221550]: 2026-01-31 07:56:38.320 221554 DEBUG oslo_concurrency.processutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:38 np0005603609 nova_compute[221550]: 2026-01-31 07:56:38.325 221554 DEBUG nova.compute.provider_tree [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:56:38 np0005603609 nova_compute[221550]: 2026-01-31 07:56:38.610 221554 DEBUG nova.scheduler.client.report [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:56:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:38.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:39 np0005603609 nova_compute[221550]: 2026-01-31 07:56:39.271 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:39 np0005603609 nova_compute[221550]: 2026-01-31 07:56:39.272 221554 DEBUG nova.compute.manager [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:56:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:39.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:39 np0005603609 nova_compute[221550]: 2026-01-31 07:56:39.409 221554 DEBUG nova.compute.manager [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:56:39 np0005603609 nova_compute[221550]: 2026-01-31 07:56:39.410 221554 DEBUG nova.network.neutron [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:56:39 np0005603609 nova_compute[221550]: 2026-01-31 07:56:39.708 221554 INFO nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:56:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:39 np0005603609 nova_compute[221550]: 2026-01-31 07:56:39.894 221554 DEBUG nova.compute.manager [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:56:39 np0005603609 nova_compute[221550]: 2026-01-31 07:56:39.944 221554 DEBUG nova.policy [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f60419a58aea43b9a0b6db7d61d71246', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1cd91610847a480caeee0ae3cdabf066', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.261 221554 DEBUG nova.compute.manager [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.262 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.263 221554 INFO nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Creating image(s)#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.286 221554 DEBUG nova.storage.rbd_utils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 694a032a-234d-4e0c-9aa6-c7f4050fc232_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.312 221554 DEBUG nova.storage.rbd_utils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 694a032a-234d-4e0c-9aa6-c7f4050fc232_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.338 221554 DEBUG nova.storage.rbd_utils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 694a032a-234d-4e0c-9aa6-c7f4050fc232_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.341 221554 DEBUG oslo_concurrency.processutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.379 221554 DEBUG nova.compute.manager [req-298ed746-9829-4afa-84be-f2adedd04134 req-702d53e6-35b8-4297-b1a7-349abc4a6371 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received event network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.379 221554 DEBUG oslo_concurrency.lockutils [req-298ed746-9829-4afa-84be-f2adedd04134 req-702d53e6-35b8-4297-b1a7-349abc4a6371 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2d377db9-047f-4be1-a0cf-8189254def22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.379 221554 DEBUG oslo_concurrency.lockutils [req-298ed746-9829-4afa-84be-f2adedd04134 req-702d53e6-35b8-4297-b1a7-349abc4a6371 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.380 221554 DEBUG oslo_concurrency.lockutils [req-298ed746-9829-4afa-84be-f2adedd04134 req-702d53e6-35b8-4297-b1a7-349abc4a6371 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.380 221554 DEBUG nova.compute.manager [req-298ed746-9829-4afa-84be-f2adedd04134 req-702d53e6-35b8-4297-b1a7-349abc4a6371 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] No waiting events found dispatching network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.380 221554 WARNING nova.compute.manager [req-298ed746-9829-4afa-84be-f2adedd04134 req-702d53e6-35b8-4297-b1a7-349abc4a6371 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received unexpected event network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.390 221554 DEBUG oslo_concurrency.processutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.391 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.391 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.392 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.417 221554 DEBUG nova.storage.rbd_utils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 694a032a-234d-4e0c-9aa6-c7f4050fc232_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.421 221554 DEBUG oslo_concurrency.processutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 694a032a-234d-4e0c-9aa6-c7f4050fc232_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:40.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.731 221554 DEBUG oslo_concurrency.processutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 694a032a-234d-4e0c-9aa6-c7f4050fc232_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.793 221554 DEBUG nova.storage.rbd_utils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] resizing rbd image 694a032a-234d-4e0c-9aa6-c7f4050fc232_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:56:40 np0005603609 nova_compute[221550]: 2026-01-31 07:56:40.981 221554 DEBUG nova.objects.instance [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'migration_context' on Instance uuid 694a032a-234d-4e0c-9aa6-c7f4050fc232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:56:41 np0005603609 nova_compute[221550]: 2026-01-31 07:56:41.099 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:56:41 np0005603609 nova_compute[221550]: 2026-01-31 07:56:41.100 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Ensure instance console log exists: /var/lib/nova/instances/694a032a-234d-4e0c-9aa6-c7f4050fc232/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:56:41 np0005603609 nova_compute[221550]: 2026-01-31 07:56:41.100 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:41 np0005603609 nova_compute[221550]: 2026-01-31 07:56:41.101 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:41 np0005603609 nova_compute[221550]: 2026-01-31 07:56:41.101 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:41.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:41 np0005603609 nova_compute[221550]: 2026-01-31 07:56:41.525 221554 DEBUG oslo_concurrency.lockutils [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:41 np0005603609 nova_compute[221550]: 2026-01-31 07:56:41.525 221554 DEBUG oslo_concurrency.lockutils [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquired lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:41 np0005603609 nova_compute[221550]: 2026-01-31 07:56:41.525 221554 DEBUG nova.network.neutron [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:56:42 np0005603609 podman[251629]: 2026-01-31 07:56:42.182814256 +0000 UTC m=+0.062373765 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:56:42 np0005603609 podman[251628]: 2026-01-31 07:56:42.192809187 +0000 UTC m=+0.080584814 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 02:56:42 np0005603609 nova_compute[221550]: 2026-01-31 07:56:42.226 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:56:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:42.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:56:43 np0005603609 nova_compute[221550]: 2026-01-31 07:56:43.083 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:43 np0005603609 nova_compute[221550]: 2026-01-31 07:56:43.148 221554 DEBUG nova.network.neutron [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Successfully created port: 237f2b31-27f1-466b-b664-d3e725dc6535 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:56:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:43.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:56:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:44.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:56:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:56:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/351507074' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:56:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:56:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/351507074' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:56:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:45.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:45 np0005603609 nova_compute[221550]: 2026-01-31 07:56:45.513 221554 INFO nova.network.neutron [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Port 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 31 02:56:45 np0005603609 nova_compute[221550]: 2026-01-31 07:56:45.513 221554 DEBUG nova.network.neutron [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updating instance_info_cache with network_info: [{"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:45 np0005603609 nova_compute[221550]: 2026-01-31 07:56:45.741 221554 DEBUG oslo_concurrency.lockutils [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Releasing lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:45 np0005603609 nova_compute[221550]: 2026-01-31 07:56:45.748 221554 DEBUG nova.compute.manager [req-644c8e07-0779-4820-bcc9-af091b542ff4 req-3310082d-7bd5-4b9b-afb8-5cefd1859265 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received event network-changed-46923f1f-6cb6-4592-94c9-5a95246f0b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:45 np0005603609 nova_compute[221550]: 2026-01-31 07:56:45.748 221554 DEBUG nova.compute.manager [req-644c8e07-0779-4820-bcc9-af091b542ff4 req-3310082d-7bd5-4b9b-afb8-5cefd1859265 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Refreshing instance network info cache due to event network-changed-46923f1f-6cb6-4592-94c9-5a95246f0b99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:56:45 np0005603609 nova_compute[221550]: 2026-01-31 07:56:45.748 221554 DEBUG oslo_concurrency.lockutils [req-644c8e07-0779-4820-bcc9-af091b542ff4 req-3310082d-7bd5-4b9b-afb8-5cefd1859265 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:45 np0005603609 nova_compute[221550]: 2026-01-31 07:56:45.748 221554 DEBUG oslo_concurrency.lockutils [req-644c8e07-0779-4820-bcc9-af091b542ff4 req-3310082d-7bd5-4b9b-afb8-5cefd1859265 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:45 np0005603609 nova_compute[221550]: 2026-01-31 07:56:45.748 221554 DEBUG nova.network.neutron [req-644c8e07-0779-4820-bcc9-af091b542ff4 req-3310082d-7bd5-4b9b-afb8-5cefd1859265 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Refreshing network info cache for port 46923f1f-6cb6-4592-94c9-5a95246f0b99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:56:45 np0005603609 nova_compute[221550]: 2026-01-31 07:56:45.845 221554 DEBUG oslo_concurrency.lockutils [None req-78f9b200-b62f-4cd5-b963-004598716eab 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-2d377db9-047f-4be1-a0cf-8189254def22-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 9.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:46 np0005603609 nova_compute[221550]: 2026-01-31 07:56:46.281 221554 DEBUG oslo_concurrency.lockutils [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "interface-3f1cc0eb-2641-4574-952f-f02264260ce6-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:46 np0005603609 nova_compute[221550]: 2026-01-31 07:56:46.282 221554 DEBUG oslo_concurrency.lockutils [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-3f1cc0eb-2641-4574-952f-f02264260ce6-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:46 np0005603609 nova_compute[221550]: 2026-01-31 07:56:46.282 221554 DEBUG nova.objects.instance [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'flavor' on Instance uuid 3f1cc0eb-2641-4574-952f-f02264260ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:56:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:46.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:46 np0005603609 nova_compute[221550]: 2026-01-31 07:56:46.850 221554 DEBUG nova.network.neutron [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Successfully updated port: 237f2b31-27f1-466b-b664-d3e725dc6535 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:56:46 np0005603609 nova_compute[221550]: 2026-01-31 07:56:46.886 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "refresh_cache-694a032a-234d-4e0c-9aa6-c7f4050fc232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:46 np0005603609 nova_compute[221550]: 2026-01-31 07:56:46.886 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquired lock "refresh_cache-694a032a-234d-4e0c-9aa6-c7f4050fc232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:46 np0005603609 nova_compute[221550]: 2026-01-31 07:56:46.887 221554 DEBUG nova.network.neutron [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:56:47 np0005603609 nova_compute[221550]: 2026-01-31 07:56:47.228 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:47.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:47 np0005603609 nova_compute[221550]: 2026-01-31 07:56:47.457 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:47 np0005603609 nova_compute[221550]: 2026-01-31 07:56:47.635 221554 DEBUG nova.network.neutron [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:56:47 np0005603609 nova_compute[221550]: 2026-01-31 07:56:47.941 221554 DEBUG nova.compute.manager [req-8d750bf9-b991-4b04-bbaa-8fd94b1029fe req-5e8f66a5-d0b9-4884-aecf-ac162f4a9067 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received event network-changed-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:47 np0005603609 nova_compute[221550]: 2026-01-31 07:56:47.941 221554 DEBUG nova.compute.manager [req-8d750bf9-b991-4b04-bbaa-8fd94b1029fe req-5e8f66a5-d0b9-4884-aecf-ac162f4a9067 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Refreshing instance network info cache due to event network-changed-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:56:47 np0005603609 nova_compute[221550]: 2026-01-31 07:56:47.942 221554 DEBUG oslo_concurrency.lockutils [req-8d750bf9-b991-4b04-bbaa-8fd94b1029fe req-5e8f66a5-d0b9-4884-aecf-ac162f4a9067 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:47 np0005603609 nova_compute[221550]: 2026-01-31 07:56:47.942 221554 DEBUG oslo_concurrency.lockutils [req-8d750bf9-b991-4b04-bbaa-8fd94b1029fe req-5e8f66a5-d0b9-4884-aecf-ac162f4a9067 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:47 np0005603609 nova_compute[221550]: 2026-01-31 07:56:47.942 221554 DEBUG nova.network.neutron [req-8d750bf9-b991-4b04-bbaa-8fd94b1029fe req-5e8f66a5-d0b9-4884-aecf-ac162f4a9067 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Refreshing network info cache for port f08f53c3-9326-4b07-bdcd-2c3a3a06da0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:56:48 np0005603609 nova_compute[221550]: 2026-01-31 07:56:48.085 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:48.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:48 np0005603609 nova_compute[221550]: 2026-01-31 07:56:48.843 221554 DEBUG nova.objects.instance [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'pci_requests' on Instance uuid 3f1cc0eb-2641-4574-952f-f02264260ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:56:48 np0005603609 nova_compute[221550]: 2026-01-31 07:56:48.876 221554 DEBUG nova.network.neutron [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:56:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:49.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:49 np0005603609 nova_compute[221550]: 2026-01-31 07:56:49.904 221554 DEBUG nova.policy [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1e1c5eef3d264666bc90735dd338d82a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a8cbd6cc22654dfab04487522a63426c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.320 221554 DEBUG nova.network.neutron [req-644c8e07-0779-4820-bcc9-af091b542ff4 req-3310082d-7bd5-4b9b-afb8-5cefd1859265 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updated VIF entry in instance network info cache for port 46923f1f-6cb6-4592-94c9-5a95246f0b99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.321 221554 DEBUG nova.network.neutron [req-644c8e07-0779-4820-bcc9-af091b542ff4 req-3310082d-7bd5-4b9b-afb8-5cefd1859265 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updating instance_info_cache with network_info: [{"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.325 221554 DEBUG nova.network.neutron [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Updating instance_info_cache with network_info: [{"id": "237f2b31-27f1-466b-b664-d3e725dc6535", "address": "fa:16:3e:da:37:85", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap237f2b31-27", "ovs_interfaceid": "237f2b31-27f1-466b-b664-d3e725dc6535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.357 221554 DEBUG oslo_concurrency.lockutils [req-644c8e07-0779-4820-bcc9-af091b542ff4 req-3310082d-7bd5-4b9b-afb8-5cefd1859265 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2d377db9-047f-4be1-a0cf-8189254def22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.358 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Releasing lock "refresh_cache-694a032a-234d-4e0c-9aa6-c7f4050fc232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.358 221554 DEBUG nova.compute.manager [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Instance network_info: |[{"id": "237f2b31-27f1-466b-b664-d3e725dc6535", "address": "fa:16:3e:da:37:85", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap237f2b31-27", "ovs_interfaceid": "237f2b31-27f1-466b-b664-d3e725dc6535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.360 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Start _get_guest_xml network_info=[{"id": "237f2b31-27f1-466b-b664-d3e725dc6535", "address": "fa:16:3e:da:37:85", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap237f2b31-27", "ovs_interfaceid": "237f2b31-27f1-466b-b664-d3e725dc6535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.374 221554 WARNING nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.378 221554 DEBUG nova.virt.libvirt.host [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.379 221554 DEBUG nova.virt.libvirt.host [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.381 221554 DEBUG nova.virt.libvirt.host [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.382 221554 DEBUG nova.virt.libvirt.host [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.383 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.383 221554 DEBUG nova.virt.hardware [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e3bd1dad-95f3-4ed9-94b4-27245cd798b5',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.383 221554 DEBUG nova.virt.hardware [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.384 221554 DEBUG nova.virt.hardware [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.384 221554 DEBUG nova.virt.hardware [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.384 221554 DEBUG nova.virt.hardware [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.384 221554 DEBUG nova.virt.hardware [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.384 221554 DEBUG nova.virt.hardware [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.385 221554 DEBUG nova.virt.hardware [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.385 221554 DEBUG nova.virt.hardware [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.385 221554 DEBUG nova.virt.hardware [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.385 221554 DEBUG nova.virt.hardware [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.387 221554 DEBUG oslo_concurrency.processutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.704 221554 DEBUG nova.network.neutron [req-8d750bf9-b991-4b04-bbaa-8fd94b1029fe req-5e8f66a5-d0b9-4884-aecf-ac162f4a9067 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Updated VIF entry in instance network info cache for port f08f53c3-9326-4b07-bdcd-2c3a3a06da0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.705 221554 DEBUG nova.network.neutron [req-8d750bf9-b991-4b04-bbaa-8fd94b1029fe req-5e8f66a5-d0b9-4884-aecf-ac162f4a9067 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Updating instance_info_cache with network_info: [{"id": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "address": "fa:16:3e:26:be:39", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08f53c3-93", "ovs_interfaceid": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:50.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:56:50 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1084162188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.795 221554 DEBUG oslo_concurrency.processutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.833 221554 DEBUG nova.storage.rbd_utils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 694a032a-234d-4e0c-9aa6-c7f4050fc232_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.838 221554 DEBUG oslo_concurrency.processutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:50 np0005603609 nova_compute[221550]: 2026-01-31 07:56:50.885 221554 DEBUG oslo_concurrency.lockutils [req-8d750bf9-b991-4b04-bbaa-8fd94b1029fe req-5e8f66a5-d0b9-4884-aecf-ac162f4a9067 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:56:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1497168377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.288 221554 DEBUG oslo_concurrency.processutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.289 221554 DEBUG nova.virt.libvirt.vif [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:56:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1263019804',display_name='tempest-ListServerFiltersTestJSON-instance-1263019804',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1263019804',id=83,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1cd91610847a480caeee0ae3cdabf066',ramdisk_id='',reservation_id='r-y34izdak',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-334452958',owner_user_name='tempest-ListServerFiltersTestJSON-334452958-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:56:40Z,user_data=None,user_id='f60419a58aea43b9a0b6db7d61d71246',uuid=694a032a-234d-4e0c-9aa6-c7f4050fc232,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "237f2b31-27f1-466b-b664-d3e725dc6535", "address": "fa:16:3e:da:37:85", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap237f2b31-27", "ovs_interfaceid": "237f2b31-27f1-466b-b664-d3e725dc6535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.289 221554 DEBUG nova.network.os_vif_util [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converting VIF {"id": "237f2b31-27f1-466b-b664-d3e725dc6535", "address": "fa:16:3e:da:37:85", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap237f2b31-27", "ovs_interfaceid": "237f2b31-27f1-466b-b664-d3e725dc6535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.290 221554 DEBUG nova.network.os_vif_util [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:37:85,bridge_name='br-int',has_traffic_filtering=True,id=237f2b31-27f1-466b-b664-d3e725dc6535,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap237f2b31-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.291 221554 DEBUG nova.objects.instance [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'pci_devices' on Instance uuid 694a032a-234d-4e0c-9aa6-c7f4050fc232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.339 221554 DEBUG nova.compute.manager [req-022656f1-d403-469f-b5e6-caab6390d63c req-8595c498-88e2-4562-99de-3df1fc829e37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Received event network-changed-237f2b31-27f1-466b-b664-d3e725dc6535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.339 221554 DEBUG nova.compute.manager [req-022656f1-d403-469f-b5e6-caab6390d63c req-8595c498-88e2-4562-99de-3df1fc829e37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Refreshing instance network info cache due to event network-changed-237f2b31-27f1-466b-b664-d3e725dc6535. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.340 221554 DEBUG oslo_concurrency.lockutils [req-022656f1-d403-469f-b5e6-caab6390d63c req-8595c498-88e2-4562-99de-3df1fc829e37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-694a032a-234d-4e0c-9aa6-c7f4050fc232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.340 221554 DEBUG oslo_concurrency.lockutils [req-022656f1-d403-469f-b5e6-caab6390d63c req-8595c498-88e2-4562-99de-3df1fc829e37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-694a032a-234d-4e0c-9aa6-c7f4050fc232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.340 221554 DEBUG nova.network.neutron [req-022656f1-d403-469f-b5e6-caab6390d63c req-8595c498-88e2-4562-99de-3df1fc829e37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Refreshing network info cache for port 237f2b31-27f1-466b-b664-d3e725dc6535 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.414 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  <uuid>694a032a-234d-4e0c-9aa6-c7f4050fc232</uuid>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  <name>instance-00000053</name>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  <memory>196608</memory>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-1263019804</nova:name>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:56:50</nova:creationTime>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.micro">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        <nova:memory>192</nova:memory>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        <nova:user uuid="f60419a58aea43b9a0b6db7d61d71246">tempest-ListServerFiltersTestJSON-334452958-project-member</nova:user>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        <nova:project uuid="1cd91610847a480caeee0ae3cdabf066">tempest-ListServerFiltersTestJSON-334452958</nova:project>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        <nova:port uuid="237f2b31-27f1-466b-b664-d3e725dc6535">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <entry name="serial">694a032a-234d-4e0c-9aa6-c7f4050fc232</entry>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <entry name="uuid">694a032a-234d-4e0c-9aa6-c7f4050fc232</entry>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/694a032a-234d-4e0c-9aa6-c7f4050fc232_disk">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/694a032a-234d-4e0c-9aa6-c7f4050fc232_disk.config">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:da:37:85"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <target dev="tap237f2b31-27"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/694a032a-234d-4e0c-9aa6-c7f4050fc232/console.log" append="off"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:56:51 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:56:51 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:56:51 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:56:51 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.415 221554 DEBUG nova.compute.manager [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Preparing to wait for external event network-vif-plugged-237f2b31-27f1-466b-b664-d3e725dc6535 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.415 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.416 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.416 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.417 221554 DEBUG nova.virt.libvirt.vif [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:56:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1263019804',display_name='tempest-ListServerFiltersTestJSON-instance-1263019804',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1263019804',id=83,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1cd91610847a480caeee0ae3cdabf066',ramdisk_id='',reservation_id='r-y34izdak',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-334452958',owner_user_name='tempest-ListServerFiltersTestJSON-334452958-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:56:40Z,user_data=None,user_id='f60419a58aea43b9a0b6db7d61d71246',uuid=694a032a-234d-4e0c-9aa6-c7f4050fc232,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "237f2b31-27f1-466b-b664-d3e725dc6535", "address": "fa:16:3e:da:37:85", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap237f2b31-27", "ovs_interfaceid": "237f2b31-27f1-466b-b664-d3e725dc6535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.417 221554 DEBUG nova.network.os_vif_util [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converting VIF {"id": "237f2b31-27f1-466b-b664-d3e725dc6535", "address": "fa:16:3e:da:37:85", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap237f2b31-27", "ovs_interfaceid": "237f2b31-27f1-466b-b664-d3e725dc6535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.418 221554 DEBUG nova.network.os_vif_util [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:37:85,bridge_name='br-int',has_traffic_filtering=True,id=237f2b31-27f1-466b-b664-d3e725dc6535,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap237f2b31-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:56:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:51.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.418 221554 DEBUG os_vif [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:37:85,bridge_name='br-int',has_traffic_filtering=True,id=237f2b31-27f1-466b-b664-d3e725dc6535,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap237f2b31-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.419 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.419 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.419 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.422 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.422 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap237f2b31-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.423 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap237f2b31-27, col_values=(('external_ids', {'iface-id': '237f2b31-27f1-466b-b664-d3e725dc6535', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:37:85', 'vm-uuid': '694a032a-234d-4e0c-9aa6-c7f4050fc232'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.424 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:51 np0005603609 NetworkManager[49064]: <info>  [1769846211.4254] manager: (tap237f2b31-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.427 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.428 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:51 np0005603609 nova_compute[221550]: 2026-01-31 07:56:51.429 221554 INFO os_vif [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:37:85,bridge_name='br-int',has_traffic_filtering=True,id=237f2b31-27f1-466b-b664-d3e725dc6535,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap237f2b31-27')#033[00m
Jan 31 02:56:52 np0005603609 nova_compute[221550]: 2026-01-31 07:56:52.468 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:56:52 np0005603609 nova_compute[221550]: 2026-01-31 07:56:52.468 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:56:52 np0005603609 nova_compute[221550]: 2026-01-31 07:56:52.469 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] No VIF found with MAC fa:16:3e:da:37:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:56:52 np0005603609 nova_compute[221550]: 2026-01-31 07:56:52.469 221554 INFO nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Using config drive#033[00m
Jan 31 02:56:52 np0005603609 nova_compute[221550]: 2026-01-31 07:56:52.492 221554 DEBUG nova.storage.rbd_utils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 694a032a-234d-4e0c-9aa6-c7f4050fc232_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:52.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:53 np0005603609 nova_compute[221550]: 2026-01-31 07:56:53.088 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:53 np0005603609 nova_compute[221550]: 2026-01-31 07:56:53.194 221554 DEBUG nova.network.neutron [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Successfully updated port: 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:56:53 np0005603609 nova_compute[221550]: 2026-01-31 07:56:53.224 221554 DEBUG oslo_concurrency.lockutils [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:53 np0005603609 nova_compute[221550]: 2026-01-31 07:56:53.224 221554 DEBUG oslo_concurrency.lockutils [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquired lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:53 np0005603609 nova_compute[221550]: 2026-01-31 07:56:53.225 221554 DEBUG nova.network.neutron [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:56:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:53.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:53 np0005603609 nova_compute[221550]: 2026-01-31 07:56:53.892 221554 DEBUG nova.compute.manager [req-e016cffb-cb3f-4d40-bccb-ec40ee754a1d req-bf1ad417-9057-4306-a30f-02b92961afd7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received event network-changed-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:53 np0005603609 nova_compute[221550]: 2026-01-31 07:56:53.892 221554 DEBUG nova.compute.manager [req-e016cffb-cb3f-4d40-bccb-ec40ee754a1d req-bf1ad417-9057-4306-a30f-02b92961afd7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Refreshing instance network info cache due to event network-changed-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:56:53 np0005603609 nova_compute[221550]: 2026-01-31 07:56:53.892 221554 DEBUG oslo_concurrency.lockutils [req-e016cffb-cb3f-4d40-bccb-ec40ee754a1d req-bf1ad417-9057-4306-a30f-02b92961afd7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:54 np0005603609 nova_compute[221550]: 2026-01-31 07:56:54.222 221554 DEBUG nova.network.neutron [req-022656f1-d403-469f-b5e6-caab6390d63c req-8595c498-88e2-4562-99de-3df1fc829e37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Updated VIF entry in instance network info cache for port 237f2b31-27f1-466b-b664-d3e725dc6535. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:56:54 np0005603609 nova_compute[221550]: 2026-01-31 07:56:54.223 221554 DEBUG nova.network.neutron [req-022656f1-d403-469f-b5e6-caab6390d63c req-8595c498-88e2-4562-99de-3df1fc829e37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Updating instance_info_cache with network_info: [{"id": "237f2b31-27f1-466b-b664-d3e725dc6535", "address": "fa:16:3e:da:37:85", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap237f2b31-27", "ovs_interfaceid": "237f2b31-27f1-466b-b664-d3e725dc6535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:54 np0005603609 nova_compute[221550]: 2026-01-31 07:56:54.278 221554 WARNING nova.network.neutron [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] 485494d9-5360-41c3-a10e-ef5098af0809 already exists in list: networks containing: ['485494d9-5360-41c3-a10e-ef5098af0809']. ignoring it#033[00m
Jan 31 02:56:54 np0005603609 nova_compute[221550]: 2026-01-31 07:56:54.329 221554 INFO nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Creating config drive at /var/lib/nova/instances/694a032a-234d-4e0c-9aa6-c7f4050fc232/disk.config#033[00m
Jan 31 02:56:54 np0005603609 nova_compute[221550]: 2026-01-31 07:56:54.333 221554 DEBUG oslo_concurrency.processutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/694a032a-234d-4e0c-9aa6-c7f4050fc232/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpww4c_vyc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:54 np0005603609 nova_compute[221550]: 2026-01-31 07:56:54.461 221554 DEBUG oslo_concurrency.processutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/694a032a-234d-4e0c-9aa6-c7f4050fc232/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpww4c_vyc" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:54 np0005603609 nova_compute[221550]: 2026-01-31 07:56:54.495 221554 DEBUG nova.storage.rbd_utils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 694a032a-234d-4e0c-9aa6-c7f4050fc232_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:54 np0005603609 nova_compute[221550]: 2026-01-31 07:56:54.499 221554 DEBUG oslo_concurrency.processutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/694a032a-234d-4e0c-9aa6-c7f4050fc232/disk.config 694a032a-234d-4e0c-9aa6-c7f4050fc232_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:54 np0005603609 nova_compute[221550]: 2026-01-31 07:56:54.519 221554 DEBUG oslo_concurrency.lockutils [req-022656f1-d403-469f-b5e6-caab6390d63c req-8595c498-88e2-4562-99de-3df1fc829e37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-694a032a-234d-4e0c-9aa6-c7f4050fc232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:54 np0005603609 nova_compute[221550]: 2026-01-31 07:56:54.697 221554 DEBUG oslo_concurrency.processutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/694a032a-234d-4e0c-9aa6-c7f4050fc232/disk.config 694a032a-234d-4e0c-9aa6-c7f4050fc232_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:54 np0005603609 nova_compute[221550]: 2026-01-31 07:56:54.697 221554 INFO nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Deleting local config drive /var/lib/nova/instances/694a032a-234d-4e0c-9aa6-c7f4050fc232/disk.config because it was imported into RBD.#033[00m
Jan 31 02:56:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:56:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:54.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:56:54 np0005603609 kernel: tap237f2b31-27: entered promiscuous mode
Jan 31 02:56:54 np0005603609 NetworkManager[49064]: <info>  [1769846214.7415] manager: (tap237f2b31-27): new Tun device (/org/freedesktop/NetworkManager/Devices/136)
Jan 31 02:56:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:54 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:54Z|00266|binding|INFO|Claiming lport 237f2b31-27f1-466b-b664-d3e725dc6535 for this chassis.
Jan 31 02:56:54 np0005603609 nova_compute[221550]: 2026-01-31 07:56:54.773 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:54 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:54Z|00267|binding|INFO|237f2b31-27f1-466b-b664-d3e725dc6535: Claiming fa:16:3e:da:37:85 10.100.0.10
Jan 31 02:56:54 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:54Z|00268|binding|INFO|Setting lport 237f2b31-27f1-466b-b664-d3e725dc6535 ovn-installed in OVS
Jan 31 02:56:54 np0005603609 nova_compute[221550]: 2026-01-31 07:56:54.788 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:54 np0005603609 systemd-machined[190912]: New machine qemu-37-instance-00000053.
Jan 31 02:56:54 np0005603609 systemd[1]: Started Virtual Machine qemu-37-instance-00000053.
Jan 31 02:56:54 np0005603609 systemd-udevd[251808]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:56:54 np0005603609 NetworkManager[49064]: <info>  [1769846214.8262] device (tap237f2b31-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:56:54 np0005603609 NetworkManager[49064]: <info>  [1769846214.8268] device (tap237f2b31-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:56:54 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:54Z|00269|binding|INFO|Setting lport 237f2b31-27f1-466b-b664-d3e725dc6535 up in Southbound
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.863 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:37:85 10.100.0.10'], port_security=['fa:16:3e:da:37:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '694a032a-234d-4e0c-9aa6-c7f4050fc232', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1cd91610847a480caeee0ae3cdabf066', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f1695a46-1d81-4453-9c52-9917c020bc65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e38d659-b6a8-4d3d-8a23-b8299c5114da, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=237f2b31-27f1-466b-b664-d3e725dc6535) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.864 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 237f2b31-27f1-466b-b664-d3e725dc6535 in datapath ca1ed3b2-b27d-427e-a9bd-cc12393752eb bound to our chassis#033[00m
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.865 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ca1ed3b2-b27d-427e-a9bd-cc12393752eb#033[00m
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.873 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b143d1ca-a6f7-4ce0-89ed-fdebf73537e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.874 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapca1ed3b2-b1 in ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.875 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapca1ed3b2-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.875 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[614b865b-4e29-47ae-b782-431a25dff03a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.876 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9ece604c-6887-44d5-8436-64f4df24e21e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.886 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[de4e52d7-81b1-4b0d-954d-898c952bee6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.907 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4f7431-c702-4a55-a8a9-bb0c9c203715]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.925 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae9d6ef-1dc6-4378-b531-4b4822e46610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:54 np0005603609 NetworkManager[49064]: <info>  [1769846214.9317] manager: (tapca1ed3b2-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/137)
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.931 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[45939dac-1ae3-44c1-bfb3-a566e02cc8d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.953 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[073937f0-5b84-4b71-bc4e-6a3622679b3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.956 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[9b527f3c-c9fd-4092-8a67-39dfda4284f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:54 np0005603609 NetworkManager[49064]: <info>  [1769846214.9720] device (tapca1ed3b2-b0): carrier: link connected
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.976 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[03cd3828-b263-45d6-b611-59ac33369b80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.987 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ea698a41-7a7f-4c77-9295-4d6d26809ca8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca1ed3b2-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:70:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645467, 'reachable_time': 25692, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251875, 'error': None, 'target': 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:54.997 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6f19bb99-f237-4f5b-88ed-207eddbcfded]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:7011'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 645467, 'tstamp': 645467}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251878, 'error': None, 'target': 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:55.007 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b21312ce-7043-4af0-89a3-a5520df520d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca1ed3b2-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:70:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645467, 'reachable_time': 25692, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251879, 'error': None, 'target': 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:55.028 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[56a0c766-49a0-45a7-813b-89d4a2920b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:55.068 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b32a1191-1e6b-4356-89ab-0ef3bc7756e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:55.070 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca1ed3b2-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:55.070 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:55.070 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca1ed3b2-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.072 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:55 np0005603609 NetworkManager[49064]: <info>  [1769846215.0728] manager: (tapca1ed3b2-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Jan 31 02:56:55 np0005603609 kernel: tapca1ed3b2-b0: entered promiscuous mode
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.075 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:55.075 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapca1ed3b2-b0, col_values=(('external_ids', {'iface-id': 'd19b5f05-fa79-4835-8ef4-51f87493d59b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:55 np0005603609 ovn_controller[130359]: 2026-01-31T07:56:55Z|00270|binding|INFO|Releasing lport d19b5f05-fa79-4835-8ef4-51f87493d59b from this chassis (sb_readonly=0)
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.076 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.077 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:55.078 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ca1ed3b2-b27d-427e-a9bd-cc12393752eb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ca1ed3b2-b27d-427e-a9bd-cc12393752eb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:55.079 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7c6595bc-c652-4af3-9442-79f3ca7577b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:55.079 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-ca1ed3b2-b27d-427e-a9bd-cc12393752eb
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/ca1ed3b2-b27d-427e-a9bd-cc12393752eb.pid.haproxy
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID ca1ed3b2-b27d-427e-a9bd-cc12393752eb
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:56:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:56:55.080 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'env', 'PROCESS_TAG=haproxy-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ca1ed3b2-b27d-427e-a9bd-cc12393752eb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.081 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.092 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846215.0917177, 694a032a-234d-4e0c-9aa6-c7f4050fc232 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.093 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] VM Started (Lifecycle Event)#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.191 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.195 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846215.0918565, 694a032a-234d-4e0c-9aa6-c7f4050fc232 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.195 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.300 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.309 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.388 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:56:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:55.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:55 np0005603609 podman[251917]: 2026-01-31 07:56:55.435287022 +0000 UTC m=+0.061969895 container create db782b39a79e5380823f6a303ab7e2bdc2d6742be71bf8bb8cb9c9d63598d37b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:56:55 np0005603609 podman[251917]: 2026-01-31 07:56:55.392298395 +0000 UTC m=+0.018981298 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:56:55 np0005603609 systemd[1]: Started libpod-conmon-db782b39a79e5380823f6a303ab7e2bdc2d6742be71bf8bb8cb9c9d63598d37b.scope.
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.513 221554 DEBUG nova.compute.manager [req-4aab060d-bcae-4615-9a06-b7b238baec43 req-d1c89f6c-a083-4f49-ab18-a66ef99cc7d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Received event network-vif-plugged-237f2b31-27f1-466b-b664-d3e725dc6535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.514 221554 DEBUG oslo_concurrency.lockutils [req-4aab060d-bcae-4615-9a06-b7b238baec43 req-d1c89f6c-a083-4f49-ab18-a66ef99cc7d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.514 221554 DEBUG oslo_concurrency.lockutils [req-4aab060d-bcae-4615-9a06-b7b238baec43 req-d1c89f6c-a083-4f49-ab18-a66ef99cc7d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.514 221554 DEBUG oslo_concurrency.lockutils [req-4aab060d-bcae-4615-9a06-b7b238baec43 req-d1c89f6c-a083-4f49-ab18-a66ef99cc7d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.515 221554 DEBUG nova.compute.manager [req-4aab060d-bcae-4615-9a06-b7b238baec43 req-d1c89f6c-a083-4f49-ab18-a66ef99cc7d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Processing event network-vif-plugged-237f2b31-27f1-466b-b664-d3e725dc6535 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.515 221554 DEBUG nova.compute.manager [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:56:55 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.519 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846215.5192327, 694a032a-234d-4e0c-9aa6-c7f4050fc232 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.519 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.521 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:56:55 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f90f5d32489bdc40ce4152f7f53b7a70dafc7dfa2524b21b14731b3543922f87/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.524 221554 INFO nova.virt.libvirt.driver [-] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Instance spawned successfully.#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.524 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:56:55 np0005603609 podman[251917]: 2026-01-31 07:56:55.542487677 +0000 UTC m=+0.169170560 container init db782b39a79e5380823f6a303ab7e2bdc2d6742be71bf8bb8cb9c9d63598d37b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:56:55 np0005603609 podman[251917]: 2026-01-31 07:56:55.546651098 +0000 UTC m=+0.173333971 container start db782b39a79e5380823f6a303ab7e2bdc2d6742be71bf8bb8cb9c9d63598d37b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:56:55 np0005603609 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[251932]: [NOTICE]   (251936) : New worker (251939) forked
Jan 31 02:56:55 np0005603609 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[251932]: [NOTICE]   (251936) : Loading success.
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.610 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.616 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.617 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.617 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.617 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.618 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.618 221554 DEBUG nova.virt.libvirt.driver [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.622 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.692 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.759 221554 INFO nova.compute.manager [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Took 15.50 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.759 221554 DEBUG nova.compute.manager [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.912 221554 INFO nova.compute.manager [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Took 18.33 seconds to build instance.#033[00m
Jan 31 02:56:55 np0005603609 nova_compute[221550]: 2026-01-31 07:56:55.965 221554 DEBUG oslo_concurrency.lockutils [None req-c3b1d0bb-761d-4685-ac52-8c158257360d f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "694a032a-234d-4e0c-9aa6-c7f4050fc232" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:56 np0005603609 nova_compute[221550]: 2026-01-31 07:56:56.424 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:56.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:57.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:58 np0005603609 nova_compute[221550]: 2026-01-31 07:56:58.064 221554 DEBUG nova.compute.manager [req-289d204c-ddf7-4f0c-b6eb-9d1c22bf694f req-1a0cacbe-fdd4-4214-9dca-8b7b2cafca64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Received event network-vif-plugged-237f2b31-27f1-466b-b664-d3e725dc6535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:58 np0005603609 nova_compute[221550]: 2026-01-31 07:56:58.065 221554 DEBUG oslo_concurrency.lockutils [req-289d204c-ddf7-4f0c-b6eb-9d1c22bf694f req-1a0cacbe-fdd4-4214-9dca-8b7b2cafca64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:58 np0005603609 nova_compute[221550]: 2026-01-31 07:56:58.065 221554 DEBUG oslo_concurrency.lockutils [req-289d204c-ddf7-4f0c-b6eb-9d1c22bf694f req-1a0cacbe-fdd4-4214-9dca-8b7b2cafca64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:58 np0005603609 nova_compute[221550]: 2026-01-31 07:56:58.065 221554 DEBUG oslo_concurrency.lockutils [req-289d204c-ddf7-4f0c-b6eb-9d1c22bf694f req-1a0cacbe-fdd4-4214-9dca-8b7b2cafca64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:58 np0005603609 nova_compute[221550]: 2026-01-31 07:56:58.065 221554 DEBUG nova.compute.manager [req-289d204c-ddf7-4f0c-b6eb-9d1c22bf694f req-1a0cacbe-fdd4-4214-9dca-8b7b2cafca64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] No waiting events found dispatching network-vif-plugged-237f2b31-27f1-466b-b664-d3e725dc6535 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:56:58 np0005603609 nova_compute[221550]: 2026-01-31 07:56:58.066 221554 WARNING nova.compute.manager [req-289d204c-ddf7-4f0c-b6eb-9d1c22bf694f req-1a0cacbe-fdd4-4214-9dca-8b7b2cafca64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Received unexpected event network-vif-plugged-237f2b31-27f1-466b-b664-d3e725dc6535 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:56:58 np0005603609 nova_compute[221550]: 2026-01-31 07:56:58.066 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:58 np0005603609 nova_compute[221550]: 2026-01-31 07:56:58.091 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:58.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:56:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:59.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.209 221554 DEBUG nova.network.neutron [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Updating instance_info_cache with network_info: [{"id": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "address": "fa:16:3e:26:be:39", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08f53c3-93", "ovs_interfaceid": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.307 221554 DEBUG oslo_concurrency.lockutils [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Releasing lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.308 221554 DEBUG oslo_concurrency.lockutils [req-e016cffb-cb3f-4d40-bccb-ec40ee754a1d req-bf1ad417-9057-4306-a30f-02b92961afd7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.308 221554 DEBUG nova.network.neutron [req-e016cffb-cb3f-4d40-bccb-ec40ee754a1d req-bf1ad417-9057-4306-a30f-02b92961afd7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Refreshing network info cache for port 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.311 221554 DEBUG nova.virt.libvirt.vif [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:55:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2097858313',display_name='tempest-tempest.common.compute-instance-2097858313',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2097858313',id=80,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnxydFFT/bcqoM03GknAV6VVEv+uI9/M2thVO454uMQmK087L6GvpajnIifCsqOX+pET7jqX84pd3d5htElL3GbJr344TJNF3ti2SMNzp16z2rTGy4GU5WRv9em26Ul9Q==',key_name='tempest-keypair-1143543931',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-c1ud7qy9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:55:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=3f1cc0eb-2641-4574-952f-f02264260ce6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.312 221554 DEBUG nova.network.os_vif_util [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.312 221554 DEBUG nova.network.os_vif_util [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.313 221554 DEBUG os_vif [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.313 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.314 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.314 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.317 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.317 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f60b96f-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.318 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5f60b96f-c9, col_values=(('external_ids', {'iface-id': '5f60b96f-c91b-46d4-b7cf-18b1e71d5d00', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:16:62', 'vm-uuid': '3f1cc0eb-2641-4574-952f-f02264260ce6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.319 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:00 np0005603609 NetworkManager[49064]: <info>  [1769846220.3201] manager: (tap5f60b96f-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.324 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.326 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.327 221554 INFO os_vif [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9')#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.328 221554 DEBUG nova.virt.libvirt.vif [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:55:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2097858313',display_name='tempest-tempest.common.compute-instance-2097858313',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2097858313',id=80,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnxydFFT/bcqoM03GknAV6VVEv+uI9/M2thVO454uMQmK087L6GvpajnIifCsqOX+pET7jqX84pd3d5htElL3GbJr344TJNF3ti2SMNzp16z2rTGy4GU5WRv9em26Ul9Q==',key_name='tempest-keypair-1143543931',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-c1ud7qy9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:55:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=3f1cc0eb-2641-4574-952f-f02264260ce6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.328 221554 DEBUG nova.network.os_vif_util [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.329 221554 DEBUG nova.network.os_vif_util [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.331 221554 DEBUG nova.virt.libvirt.guest [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] attach device xml: <interface type="ethernet">
Jan 31 02:57:00 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:f2:16:62"/>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:  <target dev="tap5f60b96f-c9"/>
Jan 31 02:57:00 np0005603609 nova_compute[221550]: </interface>
Jan 31 02:57:00 np0005603609 nova_compute[221550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 02:57:00 np0005603609 kernel: tap5f60b96f-c9: entered promiscuous mode
Jan 31 02:57:00 np0005603609 NetworkManager[49064]: <info>  [1769846220.3465] manager: (tap5f60b96f-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/140)
Jan 31 02:57:00 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:00Z|00271|binding|INFO|Claiming lport 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 for this chassis.
Jan 31 02:57:00 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:00Z|00272|binding|INFO|5f60b96f-c91b-46d4-b7cf-18b1e71d5d00: Claiming fa:16:3e:f2:16:62 10.100.0.9
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.349 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:00 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:00Z|00273|binding|INFO|Setting lport 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 ovn-installed in OVS
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.359 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.361 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:00 np0005603609 systemd-udevd[251957]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:57:00 np0005603609 NetworkManager[49064]: <info>  [1769846220.3874] device (tap5f60b96f-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:57:00 np0005603609 NetworkManager[49064]: <info>  [1769846220.3881] device (tap5f60b96f-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:57:00 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:00Z|00274|binding|INFO|Setting lport 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 up in Southbound
Jan 31 02:57:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:00.525 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:16:62 10.100.0.9'], port_security=['fa:16:3e:f2:16:62 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-670789153', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '3f1cc0eb-2641-4574-952f-f02264260ce6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-670789153', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'c7e4e426-0b76-41c4-8dcb-ea007c31db76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:57:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:00.528 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 in datapath 485494d9-5360-41c3-a10e-ef5098af0809 bound to our chassis#033[00m
Jan 31 02:57:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:00.532 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809#033[00m
Jan 31 02:57:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:00.545 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8833b7-e59e-4280-873b-4d3b2cdfecbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:00.571 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[6b9b3a62-0ca9-4e83-ba42-f2204f4939dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:00.575 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a5453e19-3194-4c5d-942b-553a0d0d415b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:00.601 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[7769ded0-7436-4211-9334-e961d277678f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:00.616 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef4dc31-1fb6-449a-9da9-4d79bf9d6635]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634860, 'reachable_time': 34927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251965, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:00.627 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f3edc13d-a17e-40b8-a168-2649e7e78a99]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634865, 'tstamp': 634865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251966, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634867, 'tstamp': 634867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251966, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:00.628 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.630 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.631 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:00.631 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:00.632 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:00.632 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:00.632 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.692 221554 DEBUG nova.virt.libvirt.driver [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.693 221554 DEBUG nova.virt.libvirt.driver [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.693 221554 DEBUG nova.virt.libvirt.driver [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:26:be:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.694 221554 DEBUG nova.virt.libvirt.driver [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] No VIF found with MAC fa:16:3e:f2:16:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.741 221554 DEBUG nova.virt.libvirt.guest [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:57:00 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:  <nova:name>tempest-tempest.common.compute-instance-2097858313</nova:name>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 07:57:00</nova:creationTime>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 02:57:00 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:    <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:    <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:    <nova:port uuid="f08f53c3-9326-4b07-bdcd-2c3a3a06da0d">
Jan 31 02:57:00 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:    <nova:port uuid="5f60b96f-c91b-46d4-b7cf-18b1e71d5d00">
Jan 31 02:57:00 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:57:00 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 02:57:00 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 02:57:00 np0005603609 nova_compute[221550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 02:57:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:57:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:00.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:57:00 np0005603609 nova_compute[221550]: 2026-01-31 07:57:00.797 221554 DEBUG oslo_concurrency.lockutils [None req-5389f5b5-71e6-4c5b-a7f7-4d4e88e04b90 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-3f1cc0eb-2641-4574-952f-f02264260ce6-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 14.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:01.242 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:57:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:01.243 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:57:01 np0005603609 nova_compute[221550]: 2026-01-31 07:57:01.244 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:01.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:01 np0005603609 nova_compute[221550]: 2026-01-31 07:57:01.455 221554 DEBUG nova.compute.manager [req-c84d22c5-b3c1-4a64-ade3-ae7fbc3247ff req-6641ad07-972f-42ee-88a4-10a2ad5e52a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received event network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:01 np0005603609 nova_compute[221550]: 2026-01-31 07:57:01.455 221554 DEBUG oslo_concurrency.lockutils [req-c84d22c5-b3c1-4a64-ade3-ae7fbc3247ff req-6641ad07-972f-42ee-88a4-10a2ad5e52a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:01 np0005603609 nova_compute[221550]: 2026-01-31 07:57:01.456 221554 DEBUG oslo_concurrency.lockutils [req-c84d22c5-b3c1-4a64-ade3-ae7fbc3247ff req-6641ad07-972f-42ee-88a4-10a2ad5e52a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:01 np0005603609 nova_compute[221550]: 2026-01-31 07:57:01.456 221554 DEBUG oslo_concurrency.lockutils [req-c84d22c5-b3c1-4a64-ade3-ae7fbc3247ff req-6641ad07-972f-42ee-88a4-10a2ad5e52a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:01 np0005603609 nova_compute[221550]: 2026-01-31 07:57:01.456 221554 DEBUG nova.compute.manager [req-c84d22c5-b3c1-4a64-ade3-ae7fbc3247ff req-6641ad07-972f-42ee-88a4-10a2ad5e52a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] No waiting events found dispatching network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:57:01 np0005603609 nova_compute[221550]: 2026-01-31 07:57:01.457 221554 WARNING nova.compute.manager [req-c84d22c5-b3c1-4a64-ade3-ae7fbc3247ff req-6641ad07-972f-42ee-88a4-10a2ad5e52a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received unexpected event network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:57:02 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:02Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:16:62 10.100.0.9
Jan 31 02:57:02 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:02Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:16:62 10.100.0.9
Jan 31 02:57:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:57:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.3 total, 600.0 interval#012Cumulative writes: 8151 writes, 41K keys, 8151 commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s#012Cumulative WAL: 8151 writes, 8151 syncs, 1.00 writes per sync, written: 0.08 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1560 writes, 7434 keys, 1560 commit groups, 1.0 writes per commit group, ingest: 15.39 MB, 0.03 MB/s#012Interval WAL: 1560 writes, 1560 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     69.5      0.74              0.11        23    0.032       0      0       0.0       0.0#012  L6      1/0   11.30 MB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   3.9     85.5     71.1      2.80              0.43        22    0.127    121K    12K       0.0       0.0#012 Sum      1/0   11.30 MB   0.0      0.2     0.1      0.2       0.2      0.1       0.0   4.9     67.6     70.8      3.55              0.54        45    0.079    121K    12K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.8     73.9     75.6      0.83              0.11        10    0.083     34K   2631       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   0.0     85.5     71.1      2.80              0.43        22    0.127    121K    12K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     72.6      0.71              0.11        22    0.032       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.3 total, 600.0 interval#012Flush(GB): cumulative 0.050, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.25 GB write, 0.08 MB/s write, 0.23 GB read, 0.08 MB/s read, 3.5 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619ad8ed1f0#2 capacity: 304.00 MB usage: 27.33 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000267 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1614,26.38 MB,8.67874%) FilterBlock(45,344.05 KB,0.110521%) IndexBlock(45,621.00 KB,0.199489%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 02:57:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:02.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.095 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:03.245 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.342 221554 DEBUG oslo_concurrency.lockutils [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "interface-3f1cc0eb-2641-4574-952f-f02264260ce6-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.343 221554 DEBUG oslo_concurrency.lockutils [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-3f1cc0eb-2641-4574-952f-f02264260ce6-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:03.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.550 221554 DEBUG nova.objects.instance [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'flavor' on Instance uuid 3f1cc0eb-2641-4574-952f-f02264260ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.746 221554 DEBUG nova.virt.libvirt.vif [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:55:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2097858313',display_name='tempest-tempest.common.compute-instance-2097858313',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2097858313',id=80,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnxydFFT/bcqoM03GknAV6VVEv+uI9/M2thVO454uMQmK087L6GvpajnIifCsqOX+pET7jqX84pd3d5htElL3GbJr344TJNF3ti2SMNzp16z2rTGy4GU5WRv9em26Ul9Q==',key_name='tempest-keypair-1143543931',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-c1ud7qy9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:55:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=3f1cc0eb-2641-4574-952f-f02264260ce6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.747 221554 DEBUG nova.network.os_vif_util [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.748 221554 DEBUG nova.network.os_vif_util [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.750 221554 DEBUG nova.virt.libvirt.guest [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:16:62"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5f60b96f-c9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.753 221554 DEBUG nova.virt.libvirt.guest [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:16:62"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5f60b96f-c9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.754 221554 DEBUG nova.virt.libvirt.driver [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Attempting to detach device tap5f60b96f-c9 from instance 3f1cc0eb-2641-4574-952f-f02264260ce6 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.754 221554 DEBUG nova.virt.libvirt.guest [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] detach device xml: <interface type="ethernet">
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:f2:16:62"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <target dev="tap5f60b96f-c9"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]: </interface>
Jan 31 02:57:03 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.759 221554 DEBUG nova.virt.libvirt.guest [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:16:62"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5f60b96f-c9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.762 221554 DEBUG nova.virt.libvirt.guest [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f2:16:62"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5f60b96f-c9"/></interface>not found in domain: <domain type='kvm' id='36'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <name>instance-00000050</name>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <uuid>3f1cc0eb-2641-4574-952f-f02264260ce6</uuid>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:name>tempest-tempest.common.compute-instance-2097858313</nova:name>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 07:57:00</nova:creationTime>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:port uuid="f08f53c3-9326-4b07-bdcd-2c3a3a06da0d">
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:port uuid="5f60b96f-c91b-46d4-b7cf-18b1e71d5d00">
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 02:57:03 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <memory unit='KiB'>131072</memory>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <vcpu placement='static'>1</vcpu>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <resource>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <partition>/machine</partition>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </resource>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <sysinfo type='smbios'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <entry name='manufacturer'>RDO</entry>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <entry name='serial'>3f1cc0eb-2641-4574-952f-f02264260ce6</entry>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <entry name='uuid'>3f1cc0eb-2641-4574-952f-f02264260ce6</entry>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <entry name='family'>Virtual Machine</entry>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <boot dev='hd'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <smbios mode='sysinfo'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <vmcoreinfo state='on'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <model fallback='forbid'>Nehalem</model>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <feature policy='require' name='x2apic'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <feature policy='require' name='hypervisor'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <feature policy='require' name='vme'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <clock offset='utc'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <timer name='hpet' present='no'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <on_poweroff>destroy</on_poweroff>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <on_reboot>restart</on_reboot>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <on_crash>destroy</on_crash>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <disk type='network' device='disk'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/3f1cc0eb-2641-4574-952f-f02264260ce6_disk' index='2'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target dev='vda' bus='virtio'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='virtio-disk0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <disk type='network' device='cdrom'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/3f1cc0eb-2641-4574-952f-f02264260ce6_disk.config' index='1'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target dev='sda' bus='sata'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <readonly/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='sata0-0-0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pcie.0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='1' port='0x10'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='2' port='0x11'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.2'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='3' port='0x12'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.3'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='4' port='0x13'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.4'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='5' port='0x14'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.5'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='6' port='0x15'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.6'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='7' port='0x16'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.7'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='8' port='0x17'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.8'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='9' port='0x18'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.9'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='10' port='0x19'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.10'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='11' port='0x1a'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.11'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='12' port='0x1b'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.12'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='13' port='0x1c'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.13'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='14' port='0x1d'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.14'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='15' port='0x1e'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.15'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='16' port='0x1f'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.16'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='17' port='0x20'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.17'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='18' port='0x21'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.18'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='19' port='0x22'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.19'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='20' port='0x23'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.20'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='21' port='0x24'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.21'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='22' port='0x25'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.22'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='23' port='0x26'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.23'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='24' port='0x27'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.24'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='25' port='0x28'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.25'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-pci-bridge'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.26'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='usb'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='sata' index='0'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='ide'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:26:be:39'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target dev='tapf08f53c3-93'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='net0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:f2:16:62'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target dev='tap5f60b96f-c9'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='net1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <serial type='pty'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <source path='/dev/pts/1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/3f1cc0eb-2641-4574-952f-f02264260ce6/console.log' append='off'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target type='isa-serial' port='0'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <model name='isa-serial'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      </target>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <console type='pty' tty='/dev/pts/1'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <source path='/dev/pts/1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/3f1cc0eb-2641-4574-952f-f02264260ce6/console.log' append='off'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target type='serial' port='0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </console>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <input type='tablet' bus='usb'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='input0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='usb' bus='0' port='1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <input type='mouse' bus='ps2'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='input1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <input type='keyboard' bus='ps2'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='input2'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <listen type='address' address='::0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <audio id='1' type='none'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='video0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <watchdog model='itco' action='reset'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='watchdog0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </watchdog>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <memballoon model='virtio'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <stats period='10'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='balloon0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <rng model='virtio'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <backend model='random'>/dev/urandom</backend>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='rng0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <label>system_u:system_r:svirt_t:s0:c491,c818</label>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c491,c818</imagelabel>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <label>+107:+107</label>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <imagelabel>+107:+107</imagelabel>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 02:57:03 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:57:03 np0005603609 nova_compute[221550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.773 221554 INFO nova.virt.libvirt.driver [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully detached device tap5f60b96f-c9 from instance 3f1cc0eb-2641-4574-952f-f02264260ce6 from the persistent domain config.
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.773 221554 DEBUG nova.virt.libvirt.driver [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] (1/8): Attempting to detach device tap5f60b96f-c9 with device alias net1 from instance 3f1cc0eb-2641-4574-952f-f02264260ce6 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.773 221554 DEBUG nova.virt.libvirt.guest [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] detach device xml: <interface type="ethernet">
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:f2:16:62"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <target dev="tap5f60b96f-c9"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]: </interface>
Jan 31 02:57:03 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 02:57:03 np0005603609 kernel: tap5f60b96f-c9 (unregistering): left promiscuous mode
Jan 31 02:57:03 np0005603609 NetworkManager[49064]: <info>  [1769846223.8861] device (tap5f60b96f-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:57:03 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:03Z|00275|binding|INFO|Releasing lport 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 from this chassis (sb_readonly=0)
Jan 31 02:57:03 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:03Z|00276|binding|INFO|Setting lport 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 down in Southbound
Jan 31 02:57:03 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:03Z|00277|binding|INFO|Removing iface tap5f60b96f-c9 ovn-installed in OVS
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.891 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.897 221554 DEBUG nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Received event <DeviceRemovedEvent: 1769846223.896733, 3f1cc0eb-2641-4574-952f-f02264260ce6 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.899 221554 DEBUG nova.virt.libvirt.driver [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Start waiting for the detach event from libvirt for device tap5f60b96f-c9 with device alias net1 for instance 3f1cc0eb-2641-4574-952f-f02264260ce6 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.900 221554 DEBUG nova.virt.libvirt.guest [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:16:62"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5f60b96f-c9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.904 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.908 221554 DEBUG nova.virt.libvirt.guest [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f2:16:62"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5f60b96f-c9"/></interface> not found in domain: <domain type='kvm' id='36'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <name>instance-00000050</name>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <uuid>3f1cc0eb-2641-4574-952f-f02264260ce6</uuid>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:name>tempest-tempest.common.compute-instance-2097858313</nova:name>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 07:57:00</nova:creationTime>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:port uuid="f08f53c3-9326-4b07-bdcd-2c3a3a06da0d">
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:port uuid="5f60b96f-c91b-46d4-b7cf-18b1e71d5d00">
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 02:57:03 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <memory unit='KiB'>131072</memory>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <vcpu placement='static'>1</vcpu>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <resource>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <partition>/machine</partition>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </resource>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <sysinfo type='smbios'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <entry name='manufacturer'>RDO</entry>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <entry name='serial'>3f1cc0eb-2641-4574-952f-f02264260ce6</entry>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <entry name='uuid'>3f1cc0eb-2641-4574-952f-f02264260ce6</entry>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <entry name='family'>Virtual Machine</entry>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <boot dev='hd'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <smbios mode='sysinfo'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <vmcoreinfo state='on'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <model fallback='forbid'>Nehalem</model>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <feature policy='require' name='x2apic'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <feature policy='require' name='hypervisor'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <feature policy='require' name='vme'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <clock offset='utc'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <timer name='hpet' present='no'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <on_poweroff>destroy</on_poweroff>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <on_reboot>restart</on_reboot>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <on_crash>destroy</on_crash>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <disk type='network' device='disk'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/3f1cc0eb-2641-4574-952f-f02264260ce6_disk' index='2'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target dev='vda' bus='virtio'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='virtio-disk0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <disk type='network' device='cdrom'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/3f1cc0eb-2641-4574-952f-f02264260ce6_disk.config' index='1'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target dev='sda' bus='sata'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <readonly/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='sata0-0-0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pcie.0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='1' port='0x10'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='2' port='0x11'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.2'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='3' port='0x12'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.3'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='4' port='0x13'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.4'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='5' port='0x14'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.5'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='6' port='0x15'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.6'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='7' port='0x16'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.7'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='8' port='0x17'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.8'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='9' port='0x18'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.9'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='10' port='0x19'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.10'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='11' port='0x1a'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.11'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='12' port='0x1b'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.12'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='13' port='0x1c'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.13'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='14' port='0x1d'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.14'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='15' port='0x1e'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.15'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='16' port='0x1f'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.16'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='17' port='0x20'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.17'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='18' port='0x21'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.18'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='19' port='0x22'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.19'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='20' port='0x23'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.20'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='21' port='0x24'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.21'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='22' port='0x25'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.22'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='23' port='0x26'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.23'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='24' port='0x27'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.24'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target chassis='25' port='0x28'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.25'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model name='pcie-pci-bridge'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='pci.26'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='usb'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <controller type='sata' index='0'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='ide'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </controller>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:26:be:39'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target dev='tapf08f53c3-93'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='net0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <serial type='pty'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <source path='/dev/pts/1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/3f1cc0eb-2641-4574-952f-f02264260ce6/console.log' append='off'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target type='isa-serial' port='0'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:        <model name='isa-serial'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      </target>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <console type='pty' tty='/dev/pts/1'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <source path='/dev/pts/1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/3f1cc0eb-2641-4574-952f-f02264260ce6/console.log' append='off'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <target type='serial' port='0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </console>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <input type='tablet' bus='usb'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='input0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='usb' bus='0' port='1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <input type='mouse' bus='ps2'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='input1'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <input type='keyboard' bus='ps2'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='input2'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </input>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <listen type='address' address='::0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <audio id='1' type='none'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='video0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <watchdog model='itco' action='reset'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='watchdog0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </watchdog>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <memballoon model='virtio'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <stats period='10'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='balloon0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <rng model='virtio'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <backend model='random'>/dev/urandom</backend>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <alias name='rng0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <label>system_u:system_r:svirt_t:s0:c491,c818</label>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c491,c818</imagelabel>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <label>+107:+107</label>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <imagelabel>+107:+107</imagelabel>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 02:57:03 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:57:03 np0005603609 nova_compute[221550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.917 221554 INFO nova.virt.libvirt.driver [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully detached device tap5f60b96f-c9 from instance 3f1cc0eb-2641-4574-952f-f02264260ce6 from the live domain config.#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.918 221554 DEBUG nova.virt.libvirt.vif [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:55:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2097858313',display_name='tempest-tempest.common.compute-instance-2097858313',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2097858313',id=80,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnxydFFT/bcqoM03GknAV6VVEv+uI9/M2thVO454uMQmK087L6GvpajnIifCsqOX+pET7jqX84pd3d5htElL3GbJr344TJNF3ti2SMNzp16z2rTGy4GU5WRv9em26Ul9Q==',key_name='tempest-keypair-1143543931',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-c1ud7qy9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:55:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=3f1cc0eb-2641-4574-952f-f02264260ce6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.918 221554 DEBUG nova.network.os_vif_util [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.918 221554 DEBUG nova.network.os_vif_util [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.919 221554 DEBUG os_vif [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.920 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.920 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f60b96f-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.925 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.927 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.929 221554 INFO os_vif [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9')#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.929 221554 DEBUG nova.virt.libvirt.guest [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:name>tempest-tempest.common.compute-instance-2097858313</nova:name>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 07:57:03</nova:creationTime>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:user uuid="1e1c5eef3d264666bc90735dd338d82a">tempest-AttachInterfacesTestJSON-668371425-project-member</nova:user>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:project uuid="a8cbd6cc22654dfab04487522a63426c">tempest-AttachInterfacesTestJSON-668371425</nova:project>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    <nova:port uuid="f08f53c3-9326-4b07-bdcd-2c3a3a06da0d">
Jan 31 02:57:03 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 02:57:03 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 02:57:03 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 02:57:03 np0005603609 nova_compute[221550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 02:57:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:03.981 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:16:62 10.100.0.9'], port_security=['fa:16:3e:f2:16:62 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-670789153', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '3f1cc0eb-2641-4574-952f-f02264260ce6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-670789153', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'c7e4e426-0b76-41c4-8dcb-ea007c31db76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:57:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:03.983 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 in datapath 485494d9-5360-41c3-a10e-ef5098af0809 unbound from our chassis#033[00m
Jan 31 02:57:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:03.985 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.993 221554 DEBUG nova.compute.manager [req-3b3ad419-dcb7-455e-a2bc-d7b413ed7bed req-14aad1d1-7534-4832-82c2-36e15be829f6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received event network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.994 221554 DEBUG oslo_concurrency.lockutils [req-3b3ad419-dcb7-455e-a2bc-d7b413ed7bed req-14aad1d1-7534-4832-82c2-36e15be829f6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.994 221554 DEBUG oslo_concurrency.lockutils [req-3b3ad419-dcb7-455e-a2bc-d7b413ed7bed req-14aad1d1-7534-4832-82c2-36e15be829f6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.994 221554 DEBUG oslo_concurrency.lockutils [req-3b3ad419-dcb7-455e-a2bc-d7b413ed7bed req-14aad1d1-7534-4832-82c2-36e15be829f6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.995 221554 DEBUG nova.compute.manager [req-3b3ad419-dcb7-455e-a2bc-d7b413ed7bed req-14aad1d1-7534-4832-82c2-36e15be829f6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] No waiting events found dispatching network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.995 221554 WARNING nova.compute.manager [req-3b3ad419-dcb7-455e-a2bc-d7b413ed7bed req-14aad1d1-7534-4832-82c2-36e15be829f6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received unexpected event network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:57:03 np0005603609 nova_compute[221550]: 2026-01-31 07:57:03.999 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:04.012 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[29c30454-53c3-47c5-8636-f871b70a0415]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:04.047 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[ef39d788-501c-406c-ab8a-ad43bbef2bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:04.050 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[039277af-4fdc-42ea-96e2-cc5407cd8b07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:04.081 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[da070c13-d3c7-4b85-9235-e97d7d6ec80c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:04.105 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[58f3af4a-a115-44fa-8267-88800a6a836d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634860, 'reachable_time': 34927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251980, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:04.121 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[76ab27d6-ad72-4c1d-a43d-df0621ef3dd0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634865, 'tstamp': 634865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251981, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634867, 'tstamp': 634867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251981, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:04.122 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:04 np0005603609 nova_compute[221550]: 2026-01-31 07:57:04.124 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:04 np0005603609 nova_compute[221550]: 2026-01-31 07:57:04.125 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:04.125 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:04.126 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:04.126 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:04.126 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:04 np0005603609 nova_compute[221550]: 2026-01-31 07:57:04.354 221554 DEBUG nova.network.neutron [req-e016cffb-cb3f-4d40-bccb-ec40ee754a1d req-bf1ad417-9057-4306-a30f-02b92961afd7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Updated VIF entry in instance network info cache for port 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:57:04 np0005603609 nova_compute[221550]: 2026-01-31 07:57:04.354 221554 DEBUG nova.network.neutron [req-e016cffb-cb3f-4d40-bccb-ec40ee754a1d req-bf1ad417-9057-4306-a30f-02b92961afd7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Updating instance_info_cache with network_info: [{"id": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "address": "fa:16:3e:26:be:39", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08f53c3-93", "ovs_interfaceid": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:57:04 np0005603609 nova_compute[221550]: 2026-01-31 07:57:04.417 221554 DEBUG oslo_concurrency.lockutils [req-e016cffb-cb3f-4d40-bccb-ec40ee754a1d req-bf1ad417-9057-4306-a30f-02b92961afd7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:57:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:04.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:05.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:06 np0005603609 nova_compute[221550]: 2026-01-31 07:57:06.249 221554 DEBUG nova.compute.manager [req-b4f28c29-cf74-4eec-b2e2-44af113d5e52 req-9a06f5c7-b2c8-4b8f-8e69-789bb3f11c6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received event network-vif-unplugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:06 np0005603609 nova_compute[221550]: 2026-01-31 07:57:06.250 221554 DEBUG oslo_concurrency.lockutils [req-b4f28c29-cf74-4eec-b2e2-44af113d5e52 req-9a06f5c7-b2c8-4b8f-8e69-789bb3f11c6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:06 np0005603609 nova_compute[221550]: 2026-01-31 07:57:06.250 221554 DEBUG oslo_concurrency.lockutils [req-b4f28c29-cf74-4eec-b2e2-44af113d5e52 req-9a06f5c7-b2c8-4b8f-8e69-789bb3f11c6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:06 np0005603609 nova_compute[221550]: 2026-01-31 07:57:06.250 221554 DEBUG oslo_concurrency.lockutils [req-b4f28c29-cf74-4eec-b2e2-44af113d5e52 req-9a06f5c7-b2c8-4b8f-8e69-789bb3f11c6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:06 np0005603609 nova_compute[221550]: 2026-01-31 07:57:06.250 221554 DEBUG nova.compute.manager [req-b4f28c29-cf74-4eec-b2e2-44af113d5e52 req-9a06f5c7-b2c8-4b8f-8e69-789bb3f11c6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] No waiting events found dispatching network-vif-unplugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:57:06 np0005603609 nova_compute[221550]: 2026-01-31 07:57:06.250 221554 WARNING nova.compute.manager [req-b4f28c29-cf74-4eec-b2e2-44af113d5e52 req-9a06f5c7-b2c8-4b8f-8e69-789bb3f11c6c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received unexpected event network-vif-unplugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:57:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:57:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:06.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:57:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:57:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:07.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:57:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:07.492 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:07.492 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:07.493 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:08 np0005603609 nova_compute[221550]: 2026-01-31 07:57:08.097 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:08 np0005603609 nova_compute[221550]: 2026-01-31 07:57:08.534 221554 DEBUG nova.compute.manager [req-d37b8f86-6d50-4698-ab29-819672168862 req-e85aec99-8407-4233-9cf8-58564e1657ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received event network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:08 np0005603609 nova_compute[221550]: 2026-01-31 07:57:08.534 221554 DEBUG oslo_concurrency.lockutils [req-d37b8f86-6d50-4698-ab29-819672168862 req-e85aec99-8407-4233-9cf8-58564e1657ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:08 np0005603609 nova_compute[221550]: 2026-01-31 07:57:08.534 221554 DEBUG oslo_concurrency.lockutils [req-d37b8f86-6d50-4698-ab29-819672168862 req-e85aec99-8407-4233-9cf8-58564e1657ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:08 np0005603609 nova_compute[221550]: 2026-01-31 07:57:08.534 221554 DEBUG oslo_concurrency.lockutils [req-d37b8f86-6d50-4698-ab29-819672168862 req-e85aec99-8407-4233-9cf8-58564e1657ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:08 np0005603609 nova_compute[221550]: 2026-01-31 07:57:08.535 221554 DEBUG nova.compute.manager [req-d37b8f86-6d50-4698-ab29-819672168862 req-e85aec99-8407-4233-9cf8-58564e1657ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] No waiting events found dispatching network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:57:08 np0005603609 nova_compute[221550]: 2026-01-31 07:57:08.535 221554 WARNING nova.compute.manager [req-d37b8f86-6d50-4698-ab29-819672168862 req-e85aec99-8407-4233-9cf8-58564e1657ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received unexpected event network-vif-plugged-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:57:08 np0005603609 nova_compute[221550]: 2026-01-31 07:57:08.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:08.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:08 np0005603609 nova_compute[221550]: 2026-01-31 07:57:08.922 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:09 np0005603609 nova_compute[221550]: 2026-01-31 07:57:09.150 221554 DEBUG oslo_concurrency.lockutils [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:57:09 np0005603609 nova_compute[221550]: 2026-01-31 07:57:09.151 221554 DEBUG oslo_concurrency.lockutils [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquired lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:57:09 np0005603609 nova_compute[221550]: 2026-01-31 07:57:09.151 221554 DEBUG nova.network.neutron [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:57:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:57:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:09.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:57:09 np0005603609 nova_compute[221550]: 2026-01-31 07:57:09.740 221554 DEBUG oslo_concurrency.lockutils [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "3f1cc0eb-2641-4574-952f-f02264260ce6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:09 np0005603609 nova_compute[221550]: 2026-01-31 07:57:09.741 221554 DEBUG oslo_concurrency.lockutils [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:09 np0005603609 nova_compute[221550]: 2026-01-31 07:57:09.741 221554 DEBUG oslo_concurrency.lockutils [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:09 np0005603609 nova_compute[221550]: 2026-01-31 07:57:09.741 221554 DEBUG oslo_concurrency.lockutils [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:09 np0005603609 nova_compute[221550]: 2026-01-31 07:57:09.741 221554 DEBUG oslo_concurrency.lockutils [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:09 np0005603609 nova_compute[221550]: 2026-01-31 07:57:09.742 221554 INFO nova.compute.manager [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Terminating instance#033[00m
Jan 31 02:57:09 np0005603609 nova_compute[221550]: 2026-01-31 07:57:09.743 221554 DEBUG nova.compute.manager [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:57:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:09 np0005603609 kernel: tapf08f53c3-93 (unregistering): left promiscuous mode
Jan 31 02:57:09 np0005603609 NetworkManager[49064]: <info>  [1769846229.7940] device (tapf08f53c3-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:57:09 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:09Z|00278|binding|INFO|Releasing lport f08f53c3-9326-4b07-bdcd-2c3a3a06da0d from this chassis (sb_readonly=0)
Jan 31 02:57:09 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:09Z|00279|binding|INFO|Setting lport f08f53c3-9326-4b07-bdcd-2c3a3a06da0d down in Southbound
Jan 31 02:57:09 np0005603609 nova_compute[221550]: 2026-01-31 07:57:09.800 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:09 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:09Z|00280|binding|INFO|Removing iface tapf08f53c3-93 ovn-installed in OVS
Jan 31 02:57:09 np0005603609 nova_compute[221550]: 2026-01-31 07:57:09.802 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:09 np0005603609 nova_compute[221550]: 2026-01-31 07:57:09.806 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:09 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:09Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:37:85 10.100.0.10
Jan 31 02:57:09 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:09Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:37:85 10.100.0.10
Jan 31 02:57:09 np0005603609 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000050.scope: Deactivated successfully.
Jan 31 02:57:09 np0005603609 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000050.scope: Consumed 16.072s CPU time.
Jan 31 02:57:09 np0005603609 systemd-machined[190912]: Machine qemu-36-instance-00000050 terminated.
Jan 31 02:57:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:09.941 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:be:39 10.100.0.12'], port_security=['fa:16:3e:26:be:39 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3f1cc0eb-2641-4574-952f-f02264260ce6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f473e048-f911-4a1f-850c-a5dd47d82771', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=f08f53c3-9326-4b07-bdcd-2c3a3a06da0d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:57:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:09.942 140058 INFO neutron.agent.ovn.metadata.agent [-] Port f08f53c3-9326-4b07-bdcd-2c3a3a06da0d in datapath 485494d9-5360-41c3-a10e-ef5098af0809 unbound from our chassis#033[00m
Jan 31 02:57:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:09.944 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 485494d9-5360-41c3-a10e-ef5098af0809#033[00m
Jan 31 02:57:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:09.961 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa7fe36-1617-475a-9f5a-c107b852993b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:09 np0005603609 nova_compute[221550]: 2026-01-31 07:57:09.975 221554 INFO nova.virt.libvirt.driver [-] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Instance destroyed successfully.#033[00m
Jan 31 02:57:09 np0005603609 nova_compute[221550]: 2026-01-31 07:57:09.976 221554 DEBUG nova.objects.instance [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'resources' on Instance uuid 3f1cc0eb-2641-4574-952f-f02264260ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:09.988 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[78739591-cc47-4f6f-8955-bafdaa5b9b43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:09.991 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[aaba13d8-3245-408a-bb3d-7165bb16e56f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:10.014 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[5387c1d8-84da-4992-b64d-7f54ee25336b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:10.028 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fc664a49-9de5-4916-a796-cb97d6bb7d9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap485494d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:4b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634860, 'reachable_time': 34927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252005, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:10.045 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e19a22-6adf-4a29-84e0-01e727b49eba]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634865, 'tstamp': 634865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252006, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap485494d9-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634867, 'tstamp': 634867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252006, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:10.047 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.048 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.052 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:10.053 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap485494d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:10.053 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:10.054 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap485494d9-50, col_values=(('external_ids', {'iface-id': 'cbb039b2-15df-45bd-8800-81213ecc7011'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:10.055 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.241 221554 DEBUG nova.virt.libvirt.vif [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:55:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2097858313',display_name='tempest-tempest.common.compute-instance-2097858313',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2097858313',id=80,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnxydFFT/bcqoM03GknAV6VVEv+uI9/M2thVO454uMQmK087L6GvpajnIifCsqOX+pET7jqX84pd3d5htElL3GbJr344TJNF3ti2SMNzp16z2rTGy4GU5WRv9em26Ul9Q==',key_name='tempest-keypair-1143543931',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-c1ud7qy9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:55:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=3f1cc0eb-2641-4574-952f-f02264260ce6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "address": "fa:16:3e:26:be:39", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08f53c3-93", "ovs_interfaceid": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.242 221554 DEBUG nova.network.os_vif_util [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "address": "fa:16:3e:26:be:39", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08f53c3-93", "ovs_interfaceid": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.243 221554 DEBUG nova.network.os_vif_util [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:be:39,bridge_name='br-int',has_traffic_filtering=True,id=f08f53c3-9326-4b07-bdcd-2c3a3a06da0d,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf08f53c3-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.244 221554 DEBUG os_vif [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:be:39,bridge_name='br-int',has_traffic_filtering=True,id=f08f53c3-9326-4b07-bdcd-2c3a3a06da0d,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf08f53c3-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.246 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.246 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf08f53c3-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.247 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.250 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.252 221554 INFO os_vif [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:be:39,bridge_name='br-int',has_traffic_filtering=True,id=f08f53c3-9326-4b07-bdcd-2c3a3a06da0d,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf08f53c3-93')#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.253 221554 DEBUG nova.virt.libvirt.vif [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:55:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2097858313',display_name='tempest-tempest.common.compute-instance-2097858313',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2097858313',id=80,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnxydFFT/bcqoM03GknAV6VVEv+uI9/M2thVO454uMQmK087L6GvpajnIifCsqOX+pET7jqX84pd3d5htElL3GbJr344TJNF3ti2SMNzp16z2rTGy4GU5WRv9em26Ul9Q==',key_name='tempest-keypair-1143543931',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-c1ud7qy9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:55:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=3f1cc0eb-2641-4574-952f-f02264260ce6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.253 221554 DEBUG nova.network.os_vif_util [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "address": "fa:16:3e:f2:16:62", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f60b96f-c9", "ovs_interfaceid": "5f60b96f-c91b-46d4-b7cf-18b1e71d5d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.254 221554 DEBUG nova.network.os_vif_util [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.254 221554 DEBUG os_vif [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.255 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.256 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f60b96f-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.256 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.258 221554 INFO os_vif [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:16:62,bridge_name='br-int',has_traffic_filtering=True,id=5f60b96f-c91b-46d4-b7cf-18b1e71d5d00,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap5f60b96f-c9')#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.701 221554 INFO nova.virt.libvirt.driver [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Deleting instance files /var/lib/nova/instances/3f1cc0eb-2641-4574-952f-f02264260ce6_del#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.703 221554 INFO nova.virt.libvirt.driver [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Deletion of /var/lib/nova/instances/3f1cc0eb-2641-4574-952f-f02264260ce6_del complete#033[00m
Jan 31 02:57:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:10.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.962 221554 DEBUG nova.compute.manager [req-d8d91f98-359a-44c1-8dfd-9c8e52c31fac req-9d7d6c2c-47b3-4f23-8e18-d749d14240d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received event network-vif-unplugged-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.963 221554 DEBUG oslo_concurrency.lockutils [req-d8d91f98-359a-44c1-8dfd-9c8e52c31fac req-9d7d6c2c-47b3-4f23-8e18-d749d14240d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.963 221554 DEBUG oslo_concurrency.lockutils [req-d8d91f98-359a-44c1-8dfd-9c8e52c31fac req-9d7d6c2c-47b3-4f23-8e18-d749d14240d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.964 221554 DEBUG oslo_concurrency.lockutils [req-d8d91f98-359a-44c1-8dfd-9c8e52c31fac req-9d7d6c2c-47b3-4f23-8e18-d749d14240d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.964 221554 DEBUG nova.compute.manager [req-d8d91f98-359a-44c1-8dfd-9c8e52c31fac req-9d7d6c2c-47b3-4f23-8e18-d749d14240d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] No waiting events found dispatching network-vif-unplugged-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:57:10 np0005603609 nova_compute[221550]: 2026-01-31 07:57:10.965 221554 DEBUG nova.compute.manager [req-d8d91f98-359a-44c1-8dfd-9c8e52c31fac req-9d7d6c2c-47b3-4f23-8e18-d749d14240d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received event network-vif-unplugged-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:57:11 np0005603609 nova_compute[221550]: 2026-01-31 07:57:11.319 221554 INFO nova.compute.manager [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Took 1.58 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:57:11 np0005603609 nova_compute[221550]: 2026-01-31 07:57:11.320 221554 DEBUG oslo.service.loopingcall [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:57:11 np0005603609 nova_compute[221550]: 2026-01-31 07:57:11.320 221554 DEBUG nova.compute.manager [-] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:57:11 np0005603609 nova_compute[221550]: 2026-01-31 07:57:11.320 221554 DEBUG nova.network.neutron [-] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:57:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:11.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:11 np0005603609 nova_compute[221550]: 2026-01-31 07:57:11.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:11 np0005603609 nova_compute[221550]: 2026-01-31 07:57:11.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:57:11 np0005603609 nova_compute[221550]: 2026-01-31 07:57:11.828 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907#033[00m
Jan 31 02:57:11 np0005603609 nova_compute[221550]: 2026-01-31 07:57:11.828 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:57:11 np0005603609 nova_compute[221550]: 2026-01-31 07:57:11.829 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:12.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:13 np0005603609 nova_compute[221550]: 2026-01-31 07:57:13.000 221554 INFO nova.network.neutron [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Port 5f60b96f-c91b-46d4-b7cf-18b1e71d5d00 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 31 02:57:13 np0005603609 nova_compute[221550]: 2026-01-31 07:57:13.001 221554 DEBUG nova.network.neutron [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Updating instance_info_cache with network_info: [{"id": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "address": "fa:16:3e:26:be:39", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08f53c3-93", "ovs_interfaceid": "f08f53c3-9326-4b07-bdcd-2c3a3a06da0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:57:13 np0005603609 nova_compute[221550]: 2026-01-31 07:57:13.115 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:13 np0005603609 podman[252028]: 2026-01-31 07:57:13.175868395 +0000 UTC m=+0.050233982 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:57:13 np0005603609 podman[252027]: 2026-01-31 07:57:13.226298861 +0000 UTC m=+0.100667978 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 02:57:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:13.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:13 np0005603609 nova_compute[221550]: 2026-01-31 07:57:13.596 221554 DEBUG oslo_concurrency.lockutils [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Releasing lock "refresh_cache-3f1cc0eb-2641-4574-952f-f02264260ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:57:13 np0005603609 nova_compute[221550]: 2026-01-31 07:57:13.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:14 np0005603609 nova_compute[221550]: 2026-01-31 07:57:14.101 221554 DEBUG oslo_concurrency.lockutils [None req-a0ce1ed0-7752-4f2b-bf50-c87d2f7dcbf9 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "interface-3f1cc0eb-2641-4574-952f-f02264260ce6-5f60b96f-c91b-46d4-b7cf-18b1e71d5d00" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 10.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:14 np0005603609 nova_compute[221550]: 2026-01-31 07:57:14.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:14.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:15 np0005603609 nova_compute[221550]: 2026-01-31 07:57:15.287 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:15 np0005603609 nova_compute[221550]: 2026-01-31 07:57:15.437 221554 DEBUG nova.compute.manager [req-e2bbe513-514d-4eec-a7e2-ac8fc19b1e55 req-310bb17a-4a21-4114-8d8b-e6c97a41390e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received event network-vif-plugged-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:15 np0005603609 nova_compute[221550]: 2026-01-31 07:57:15.437 221554 DEBUG oslo_concurrency.lockutils [req-e2bbe513-514d-4eec-a7e2-ac8fc19b1e55 req-310bb17a-4a21-4114-8d8b-e6c97a41390e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:15 np0005603609 nova_compute[221550]: 2026-01-31 07:57:15.438 221554 DEBUG oslo_concurrency.lockutils [req-e2bbe513-514d-4eec-a7e2-ac8fc19b1e55 req-310bb17a-4a21-4114-8d8b-e6c97a41390e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:15 np0005603609 nova_compute[221550]: 2026-01-31 07:57:15.439 221554 DEBUG oslo_concurrency.lockutils [req-e2bbe513-514d-4eec-a7e2-ac8fc19b1e55 req-310bb17a-4a21-4114-8d8b-e6c97a41390e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:15 np0005603609 nova_compute[221550]: 2026-01-31 07:57:15.439 221554 DEBUG nova.compute.manager [req-e2bbe513-514d-4eec-a7e2-ac8fc19b1e55 req-310bb17a-4a21-4114-8d8b-e6c97a41390e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] No waiting events found dispatching network-vif-plugged-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:57:15 np0005603609 nova_compute[221550]: 2026-01-31 07:57:15.440 221554 WARNING nova.compute.manager [req-e2bbe513-514d-4eec-a7e2-ac8fc19b1e55 req-310bb17a-4a21-4114-8d8b-e6c97a41390e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received unexpected event network-vif-plugged-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:57:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:15.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:15 np0005603609 nova_compute[221550]: 2026-01-31 07:57:15.454 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:15 np0005603609 nova_compute[221550]: 2026-01-31 07:57:15.455 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:15 np0005603609 nova_compute[221550]: 2026-01-31 07:57:15.455 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:15 np0005603609 nova_compute[221550]: 2026-01-31 07:57:15.455 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:57:15 np0005603609 nova_compute[221550]: 2026-01-31 07:57:15.456 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:57:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/36840789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:57:15 np0005603609 nova_compute[221550]: 2026-01-31 07:57:15.885 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:57:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:16.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:57:17 np0005603609 nova_compute[221550]: 2026-01-31 07:57:17.429 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:57:17 np0005603609 nova_compute[221550]: 2026-01-31 07:57:17.430 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:57:17 np0005603609 nova_compute[221550]: 2026-01-31 07:57:17.434 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:57:17 np0005603609 nova_compute[221550]: 2026-01-31 07:57:17.434 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:57:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:17.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:17 np0005603609 nova_compute[221550]: 2026-01-31 07:57:17.571 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:57:17 np0005603609 nova_compute[221550]: 2026-01-31 07:57:17.572 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4219MB free_disk=20.78543472290039GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:57:17 np0005603609 nova_compute[221550]: 2026-01-31 07:57:17.572 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:17 np0005603609 nova_compute[221550]: 2026-01-31 07:57:17.572 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:18 np0005603609 nova_compute[221550]: 2026-01-31 07:57:18.105 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 2d377db9-047f-4be1-a0cf-8189254def22 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:57:18 np0005603609 nova_compute[221550]: 2026-01-31 07:57:18.106 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 3f1cc0eb-2641-4574-952f-f02264260ce6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:57:18 np0005603609 nova_compute[221550]: 2026-01-31 07:57:18.106 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 694a032a-234d-4e0c-9aa6-c7f4050fc232 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:57:18 np0005603609 nova_compute[221550]: 2026-01-31 07:57:18.106 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:57:18 np0005603609 nova_compute[221550]: 2026-01-31 07:57:18.106 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:57:18 np0005603609 nova_compute[221550]: 2026-01-31 07:57:18.116 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:18 np0005603609 nova_compute[221550]: 2026-01-31 07:57:18.125 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 02:57:18 np0005603609 nova_compute[221550]: 2026-01-31 07:57:18.298 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 02:57:18 np0005603609 nova_compute[221550]: 2026-01-31 07:57:18.299 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 02:57:18 np0005603609 nova_compute[221550]: 2026-01-31 07:57:18.317 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 02:57:18 np0005603609 nova_compute[221550]: 2026-01-31 07:57:18.347 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 02:57:18 np0005603609 nova_compute[221550]: 2026-01-31 07:57:18.547 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:18.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:57:18 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/664972945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:57:18 np0005603609 nova_compute[221550]: 2026-01-31 07:57:18.935 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:18 np0005603609 nova_compute[221550]: 2026-01-31 07:57:18.941 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:57:19 np0005603609 nova_compute[221550]: 2026-01-31 07:57:19.104 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:57:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:19.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:19 np0005603609 nova_compute[221550]: 2026-01-31 07:57:19.591 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:57:19 np0005603609 nova_compute[221550]: 2026-01-31 07:57:19.592 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:20 np0005603609 nova_compute[221550]: 2026-01-31 07:57:20.289 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:20 np0005603609 nova_compute[221550]: 2026-01-31 07:57:20.484 221554 DEBUG nova.network.neutron [-] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:57:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:20.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:21.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:21 np0005603609 nova_compute[221550]: 2026-01-31 07:57:21.594 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:21 np0005603609 nova_compute[221550]: 2026-01-31 07:57:21.595 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:21 np0005603609 nova_compute[221550]: 2026-01-31 07:57:21.595 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:21 np0005603609 nova_compute[221550]: 2026-01-31 07:57:21.595 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:57:22 np0005603609 nova_compute[221550]: 2026-01-31 07:57:22.363 221554 DEBUG nova.compute.manager [req-942eb9de-7a7a-46d7-9b32-ba0afb447ee3 req-4d658a6e-898f-4e19-b84b-dda633ca476e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Received event network-vif-deleted-f08f53c3-9326-4b07-bdcd-2c3a3a06da0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:22 np0005603609 nova_compute[221550]: 2026-01-31 07:57:22.364 221554 INFO nova.compute.manager [req-942eb9de-7a7a-46d7-9b32-ba0afb447ee3 req-4d658a6e-898f-4e19-b84b-dda633ca476e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Neutron deleted interface f08f53c3-9326-4b07-bdcd-2c3a3a06da0d; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 02:57:22 np0005603609 nova_compute[221550]: 2026-01-31 07:57:22.364 221554 DEBUG nova.network.neutron [req-942eb9de-7a7a-46d7-9b32-ba0afb447ee3 req-4d658a6e-898f-4e19-b84b-dda633ca476e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:57:22 np0005603609 nova_compute[221550]: 2026-01-31 07:57:22.404 221554 INFO nova.compute.manager [-] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Took 11.08 seconds to deallocate network for instance.#033[00m
Jan 31 02:57:22 np0005603609 nova_compute[221550]: 2026-01-31 07:57:22.430 221554 DEBUG nova.compute.manager [req-942eb9de-7a7a-46d7-9b32-ba0afb447ee3 req-4d658a6e-898f-4e19-b84b-dda633ca476e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Detach interface failed, port_id=f08f53c3-9326-4b07-bdcd-2c3a3a06da0d, reason: Instance 3f1cc0eb-2641-4574-952f-f02264260ce6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 02:57:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:22.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:22 np0005603609 nova_compute[221550]: 2026-01-31 07:57:22.899 221554 DEBUG oslo_concurrency.lockutils [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:22 np0005603609 nova_compute[221550]: 2026-01-31 07:57:22.900 221554 DEBUG oslo_concurrency.lockutils [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:23 np0005603609 nova_compute[221550]: 2026-01-31 07:57:23.042 221554 DEBUG oslo_concurrency.processutils [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:23 np0005603609 nova_compute[221550]: 2026-01-31 07:57:23.162 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:57:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:57:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:57:23 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3936542120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:57:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:23.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:23 np0005603609 nova_compute[221550]: 2026-01-31 07:57:23.454 221554 DEBUG oslo_concurrency.processutils [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:23 np0005603609 nova_compute[221550]: 2026-01-31 07:57:23.459 221554 DEBUG nova.compute.provider_tree [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:57:23 np0005603609 nova_compute[221550]: 2026-01-31 07:57:23.793 221554 DEBUG nova.scheduler.client.report [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:57:24 np0005603609 nova_compute[221550]: 2026-01-31 07:57:24.042 221554 DEBUG oslo_concurrency.lockutils [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:24 np0005603609 nova_compute[221550]: 2026-01-31 07:57:24.254 221554 INFO nova.scheduler.client.report [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Deleted allocations for instance 3f1cc0eb-2641-4574-952f-f02264260ce6#033[00m
Jan 31 02:57:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:57:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:24.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:24 np0005603609 nova_compute[221550]: 2026-01-31 07:57:24.912 221554 DEBUG oslo_concurrency.lockutils [None req-a564ea55-8401-4edd-a01a-72389c0bb544 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "3f1cc0eb-2641-4574-952f-f02264260ce6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 15.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:24 np0005603609 nova_compute[221550]: 2026-01-31 07:57:24.974 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846229.9731238, 3f1cc0eb-2641-4574-952f-f02264260ce6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:57:24 np0005603609 nova_compute[221550]: 2026-01-31 07:57:24.974 221554 INFO nova.compute.manager [-] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:57:25 np0005603609 nova_compute[221550]: 2026-01-31 07:57:25.140 221554 DEBUG nova.compute.manager [None req-a9a4f034-2ed4-4af7-a68a-38ed0c80b477 - - - - - -] [instance: 3f1cc0eb-2641-4574-952f-f02264260ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:57:25 np0005603609 nova_compute[221550]: 2026-01-31 07:57:25.335 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:25.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:26.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:27.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:27 np0005603609 nova_compute[221550]: 2026-01-31 07:57:27.874 221554 DEBUG oslo_concurrency.lockutils [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "2d377db9-047f-4be1-a0cf-8189254def22" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:27 np0005603609 nova_compute[221550]: 2026-01-31 07:57:27.874 221554 DEBUG oslo_concurrency.lockutils [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:27 np0005603609 nova_compute[221550]: 2026-01-31 07:57:27.875 221554 DEBUG oslo_concurrency.lockutils [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "2d377db9-047f-4be1-a0cf-8189254def22-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:27 np0005603609 nova_compute[221550]: 2026-01-31 07:57:27.875 221554 DEBUG oslo_concurrency.lockutils [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:27 np0005603609 nova_compute[221550]: 2026-01-31 07:57:27.875 221554 DEBUG oslo_concurrency.lockutils [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:27 np0005603609 nova_compute[221550]: 2026-01-31 07:57:27.876 221554 INFO nova.compute.manager [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Terminating instance#033[00m
Jan 31 02:57:27 np0005603609 nova_compute[221550]: 2026-01-31 07:57:27.877 221554 DEBUG nova.compute.manager [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:57:27 np0005603609 kernel: tap46923f1f-6c (unregistering): left promiscuous mode
Jan 31 02:57:27 np0005603609 NetworkManager[49064]: <info>  [1769846247.9304] device (tap46923f1f-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:57:27 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:27Z|00281|binding|INFO|Releasing lport 46923f1f-6cb6-4592-94c9-5a95246f0b99 from this chassis (sb_readonly=0)
Jan 31 02:57:27 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:27Z|00282|binding|INFO|Setting lport 46923f1f-6cb6-4592-94c9-5a95246f0b99 down in Southbound
Jan 31 02:57:27 np0005603609 ovn_controller[130359]: 2026-01-31T07:57:27Z|00283|binding|INFO|Removing iface tap46923f1f-6c ovn-installed in OVS
Jan 31 02:57:27 np0005603609 nova_compute[221550]: 2026-01-31 07:57:27.940 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:27 np0005603609 nova_compute[221550]: 2026-01-31 07:57:27.944 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:27 np0005603609 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Jan 31 02:57:27 np0005603609 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000004d.scope: Consumed 17.727s CPU time.
Jan 31 02:57:27 np0005603609 systemd-machined[190912]: Machine qemu-35-instance-0000004d terminated.
Jan 31 02:57:28 np0005603609 nova_compute[221550]: 2026-01-31 07:57:28.119 221554 INFO nova.virt.libvirt.driver [-] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Instance destroyed successfully.#033[00m
Jan 31 02:57:28 np0005603609 nova_compute[221550]: 2026-01-31 07:57:28.120 221554 DEBUG nova.objects.instance [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lazy-loading 'resources' on Instance uuid 2d377db9-047f-4be1-a0cf-8189254def22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:28 np0005603609 nova_compute[221550]: 2026-01-31 07:57:28.164 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:28.468 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:00:82 10.100.0.13'], port_security=['fa:16:3e:28:00:82 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2d377db9-047f-4be1-a0cf-8189254def22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-485494d9-5360-41c3-a10e-ef5098af0809', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8cbd6cc22654dfab04487522a63426c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f473e048-f911-4a1f-850c-a5dd47d82771', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44651228-c761-41fc-b495-d4156af21548, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=46923f1f-6cb6-4592-94c9-5a95246f0b99) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:57:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:28.470 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 46923f1f-6cb6-4592-94c9-5a95246f0b99 in datapath 485494d9-5360-41c3-a10e-ef5098af0809 unbound from our chassis#033[00m
Jan 31 02:57:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:28.472 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 485494d9-5360-41c3-a10e-ef5098af0809, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:57:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:28.474 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4d4aa0f8-a515-4b9b-a60f-ea324249753e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:28.474 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809 namespace which is not needed anymore#033[00m
Jan 31 02:57:28 np0005603609 nova_compute[221550]: 2026-01-31 07:57:28.573 221554 DEBUG nova.virt.libvirt.vif [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:54:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-984890616',display_name='tempest-tempest.common.compute-instance-984890616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-984890616',id=77,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnxydFFT/bcqoM03GknAV6VVEv+uI9/M2thVO454uMQmK087L6GvpajnIifCsqOX+pET7jqX84pd3d5htElL3GbJr344TJNF3ti2SMNzp16z2rTGy4GU5WRv9em26Ul9Q==',key_name='tempest-keypair-1143543931',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a8cbd6cc22654dfab04487522a63426c',ramdisk_id='',reservation_id='r-6f0xru8d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-668371425',owner_user_name='tempest-AttachInterfacesTestJSON-668371425-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:55:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1e1c5eef3d264666bc90735dd338d82a',uuid=2d377db9-047f-4be1-a0cf-8189254def22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:57:28 np0005603609 nova_compute[221550]: 2026-01-31 07:57:28.574 221554 DEBUG nova.network.os_vif_util [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converting VIF {"id": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "address": "fa:16:3e:28:00:82", "network": {"id": "485494d9-5360-41c3-a10e-ef5098af0809", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1876544247-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a8cbd6cc22654dfab04487522a63426c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46923f1f-6c", "ovs_interfaceid": "46923f1f-6cb6-4592-94c9-5a95246f0b99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:57:28 np0005603609 nova_compute[221550]: 2026-01-31 07:57:28.575 221554 DEBUG nova.network.os_vif_util [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:00:82,bridge_name='br-int',has_traffic_filtering=True,id=46923f1f-6cb6-4592-94c9-5a95246f0b99,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46923f1f-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:57:28 np0005603609 nova_compute[221550]: 2026-01-31 07:57:28.575 221554 DEBUG os_vif [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:00:82,bridge_name='br-int',has_traffic_filtering=True,id=46923f1f-6cb6-4592-94c9-5a95246f0b99,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46923f1f-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:57:28 np0005603609 nova_compute[221550]: 2026-01-31 07:57:28.577 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:28 np0005603609 nova_compute[221550]: 2026-01-31 07:57:28.577 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap46923f1f-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:28 np0005603609 nova_compute[221550]: 2026-01-31 07:57:28.578 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:28 np0005603609 nova_compute[221550]: 2026-01-31 07:57:28.579 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:28 np0005603609 nova_compute[221550]: 2026-01-31 07:57:28.582 221554 INFO os_vif [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:00:82,bridge_name='br-int',has_traffic_filtering=True,id=46923f1f-6cb6-4592-94c9-5a95246f0b99,network=Network(485494d9-5360-41c3-a10e-ef5098af0809),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46923f1f-6c')#033[00m
Jan 31 02:57:28 np0005603609 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[250633]: [NOTICE]   (250646) : haproxy version is 2.8.14-c23fe91
Jan 31 02:57:28 np0005603609 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[250633]: [NOTICE]   (250646) : path to executable is /usr/sbin/haproxy
Jan 31 02:57:28 np0005603609 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[250633]: [WARNING]  (250646) : Exiting Master process...
Jan 31 02:57:28 np0005603609 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[250633]: [ALERT]    (250646) : Current worker (250648) exited with code 143 (Terminated)
Jan 31 02:57:28 np0005603609 neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809[250633]: [WARNING]  (250646) : All workers exited. Exiting... (0)
Jan 31 02:57:28 np0005603609 systemd[1]: libpod-e3afd40625e94d2ebca6f12b4df404ecbb94ea9b872e9f5a16c8dd8534ba0cf6.scope: Deactivated successfully.
Jan 31 02:57:28 np0005603609 podman[252306]: 2026-01-31 07:57:28.597621774 +0000 UTC m=+0.050009917 container died e3afd40625e94d2ebca6f12b4df404ecbb94ea9b872e9f5a16c8dd8534ba0cf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 02:57:28 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e3afd40625e94d2ebca6f12b4df404ecbb94ea9b872e9f5a16c8dd8534ba0cf6-userdata-shm.mount: Deactivated successfully.
Jan 31 02:57:28 np0005603609 systemd[1]: var-lib-containers-storage-overlay-c93ad587cc72dd99ec328ec3962af54ce57e3ab9c96137d924b7ef4b11336013-merged.mount: Deactivated successfully.
Jan 31 02:57:28 np0005603609 podman[252306]: 2026-01-31 07:57:28.640805256 +0000 UTC m=+0.093193389 container cleanup e3afd40625e94d2ebca6f12b4df404ecbb94ea9b872e9f5a16c8dd8534ba0cf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 02:57:28 np0005603609 systemd[1]: libpod-conmon-e3afd40625e94d2ebca6f12b4df404ecbb94ea9b872e9f5a16c8dd8534ba0cf6.scope: Deactivated successfully.
Jan 31 02:57:28 np0005603609 podman[252353]: 2026-01-31 07:57:28.719201746 +0000 UTC m=+0.062466948 container remove e3afd40625e94d2ebca6f12b4df404ecbb94ea9b872e9f5a16c8dd8534ba0cf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:57:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:28.724 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4270f40f-fb70-41d4-82b1-6cfdd273ca86]: (4, ('Sat Jan 31 07:57:28 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809 (e3afd40625e94d2ebca6f12b4df404ecbb94ea9b872e9f5a16c8dd8534ba0cf6)\ne3afd40625e94d2ebca6f12b4df404ecbb94ea9b872e9f5a16c8dd8534ba0cf6\nSat Jan 31 07:57:28 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809 (e3afd40625e94d2ebca6f12b4df404ecbb94ea9b872e9f5a16c8dd8534ba0cf6)\ne3afd40625e94d2ebca6f12b4df404ecbb94ea9b872e9f5a16c8dd8534ba0cf6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:28.726 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[679a1070-d510-41e9-96e3-b2dd16d64e6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:28.727 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap485494d9-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:28 np0005603609 kernel: tap485494d9-50: left promiscuous mode
Jan 31 02:57:28 np0005603609 nova_compute[221550]: 2026-01-31 07:57:28.729 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:28.732 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[35d68299-2ac7-40dd-b219-079b2151f993]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:28 np0005603609 nova_compute[221550]: 2026-01-31 07:57:28.735 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:28.744 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d224f0fd-ac29-4181-88e4-622551d87f1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:28.745 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a3c703-4f93-49c2-aaf5-a19fd2f3ba99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:28.756 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[305d1be7-8447-4c88-a446-bdf6da9e0822]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634855, 'reachable_time': 24982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252368, 'error': None, 'target': 'ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:28 np0005603609 systemd[1]: run-netns-ovnmeta\x2d485494d9\x2d5360\x2d41c3\x2da10e\x2def5098af0809.mount: Deactivated successfully.
Jan 31 02:57:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:28.759 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-485494d9-5360-41c3-a10e-ef5098af0809 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:57:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:28.759 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[0cfe3ff0-a35c-4285-9f3a-9cfdfa42b1e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:28.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:29.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:29 np0005603609 nova_compute[221550]: 2026-01-31 07:57:29.718 221554 INFO nova.virt.libvirt.driver [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Deleting instance files /var/lib/nova/instances/2d377db9-047f-4be1-a0cf-8189254def22_del#033[00m
Jan 31 02:57:29 np0005603609 nova_compute[221550]: 2026-01-31 07:57:29.720 221554 INFO nova.virt.libvirt.driver [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Deletion of /var/lib/nova/instances/2d377db9-047f-4be1-a0cf-8189254def22_del complete#033[00m
Jan 31 02:57:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:57:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:57:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:57:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:30.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:57:31 np0005603609 nova_compute[221550]: 2026-01-31 07:57:31.132 221554 INFO nova.compute.manager [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Took 3.25 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:57:31 np0005603609 nova_compute[221550]: 2026-01-31 07:57:31.133 221554 DEBUG oslo.service.loopingcall [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:57:31 np0005603609 nova_compute[221550]: 2026-01-31 07:57:31.133 221554 DEBUG nova.compute.manager [-] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:57:31 np0005603609 nova_compute[221550]: 2026-01-31 07:57:31.134 221554 DEBUG nova.network.neutron [-] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:57:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:31.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:32 np0005603609 nova_compute[221550]: 2026-01-31 07:57:32.785 221554 DEBUG nova.compute.manager [req-c3c17bde-5891-4a01-b5bb-4dbda9ed8ca6 req-9de5d38d-d18a-4b46-a7cf-bc6fc5285f3e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received event network-vif-unplugged-46923f1f-6cb6-4592-94c9-5a95246f0b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:32 np0005603609 nova_compute[221550]: 2026-01-31 07:57:32.785 221554 DEBUG oslo_concurrency.lockutils [req-c3c17bde-5891-4a01-b5bb-4dbda9ed8ca6 req-9de5d38d-d18a-4b46-a7cf-bc6fc5285f3e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2d377db9-047f-4be1-a0cf-8189254def22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:32 np0005603609 nova_compute[221550]: 2026-01-31 07:57:32.786 221554 DEBUG oslo_concurrency.lockutils [req-c3c17bde-5891-4a01-b5bb-4dbda9ed8ca6 req-9de5d38d-d18a-4b46-a7cf-bc6fc5285f3e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:32 np0005603609 nova_compute[221550]: 2026-01-31 07:57:32.786 221554 DEBUG oslo_concurrency.lockutils [req-c3c17bde-5891-4a01-b5bb-4dbda9ed8ca6 req-9de5d38d-d18a-4b46-a7cf-bc6fc5285f3e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:32 np0005603609 nova_compute[221550]: 2026-01-31 07:57:32.786 221554 DEBUG nova.compute.manager [req-c3c17bde-5891-4a01-b5bb-4dbda9ed8ca6 req-9de5d38d-d18a-4b46-a7cf-bc6fc5285f3e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] No waiting events found dispatching network-vif-unplugged-46923f1f-6cb6-4592-94c9-5a95246f0b99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:57:32 np0005603609 nova_compute[221550]: 2026-01-31 07:57:32.786 221554 DEBUG nova.compute.manager [req-c3c17bde-5891-4a01-b5bb-4dbda9ed8ca6 req-9de5d38d-d18a-4b46-a7cf-bc6fc5285f3e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received event network-vif-unplugged-46923f1f-6cb6-4592-94c9-5a95246f0b99 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:57:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:32.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:33 np0005603609 nova_compute[221550]: 2026-01-31 07:57:33.200 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:57:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:33.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:57:33 np0005603609 nova_compute[221550]: 2026-01-31 07:57:33.578 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:57:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:34.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:57:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:35.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:57:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:36.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:57:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:57:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:37.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:57:38 np0005603609 nova_compute[221550]: 2026-01-31 07:57:38.201 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:38 np0005603609 nova_compute[221550]: 2026-01-31 07:57:38.580 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:57:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:38.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:57:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:39.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:57:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:40.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:57:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:41.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:57:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:42.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:57:43 np0005603609 nova_compute[221550]: 2026-01-31 07:57:43.117 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846248.1160758, 2d377db9-047f-4be1-a0cf-8189254def22 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:57:43 np0005603609 nova_compute[221550]: 2026-01-31 07:57:43.118 221554 INFO nova.compute.manager [-] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:57:43 np0005603609 nova_compute[221550]: 2026-01-31 07:57:43.204 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:57:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:43.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:57:43 np0005603609 nova_compute[221550]: 2026-01-31 07:57:43.631 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:44 np0005603609 podman[252422]: 2026-01-31 07:57:44.225447997 +0000 UTC m=+0.098906836 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:57:44 np0005603609 podman[252423]: 2026-01-31 07:57:44.239818763 +0000 UTC m=+0.106452348 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:57:44 np0005603609 nova_compute[221550]: 2026-01-31 07:57:44.355 221554 DEBUG nova.compute.manager [req-e211e7da-2bd0-4f06-85b7-4bee2b74b0a8 req-b41c4b33-8fa5-40dd-9a6a-62e9f0dcacef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received event network-vif-plugged-46923f1f-6cb6-4592-94c9-5a95246f0b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:44 np0005603609 nova_compute[221550]: 2026-01-31 07:57:44.355 221554 DEBUG oslo_concurrency.lockutils [req-e211e7da-2bd0-4f06-85b7-4bee2b74b0a8 req-b41c4b33-8fa5-40dd-9a6a-62e9f0dcacef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2d377db9-047f-4be1-a0cf-8189254def22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:44 np0005603609 nova_compute[221550]: 2026-01-31 07:57:44.356 221554 DEBUG oslo_concurrency.lockutils [req-e211e7da-2bd0-4f06-85b7-4bee2b74b0a8 req-b41c4b33-8fa5-40dd-9a6a-62e9f0dcacef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:44 np0005603609 nova_compute[221550]: 2026-01-31 07:57:44.356 221554 DEBUG oslo_concurrency.lockutils [req-e211e7da-2bd0-4f06-85b7-4bee2b74b0a8 req-b41c4b33-8fa5-40dd-9a6a-62e9f0dcacef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:44 np0005603609 nova_compute[221550]: 2026-01-31 07:57:44.356 221554 DEBUG nova.compute.manager [req-e211e7da-2bd0-4f06-85b7-4bee2b74b0a8 req-b41c4b33-8fa5-40dd-9a6a-62e9f0dcacef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] No waiting events found dispatching network-vif-plugged-46923f1f-6cb6-4592-94c9-5a95246f0b99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:57:44 np0005603609 nova_compute[221550]: 2026-01-31 07:57:44.356 221554 WARNING nova.compute.manager [req-e211e7da-2bd0-4f06-85b7-4bee2b74b0a8 req-b41c4b33-8fa5-40dd-9a6a-62e9f0dcacef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received unexpected event network-vif-plugged-46923f1f-6cb6-4592-94c9-5a95246f0b99 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:57:44 np0005603609 nova_compute[221550]: 2026-01-31 07:57:44.407 221554 DEBUG nova.compute.manager [None req-d83bc510-8f6a-4363-a8aa-7220e7f70dc5 - - - - - -] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:57:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:44.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:45.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:45.626 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:57:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:45.626 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:57:45 np0005603609 nova_compute[221550]: 2026-01-31 07:57:45.628 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:57:46.628 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:46.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:47.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:48 np0005603609 nova_compute[221550]: 2026-01-31 07:57:48.241 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:48 np0005603609 nova_compute[221550]: 2026-01-31 07:57:48.632 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:48.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:49.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:57:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:50.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:57:50 np0005603609 nova_compute[221550]: 2026-01-31 07:57:50.885 221554 DEBUG nova.compute.manager [req-3560669c-fe70-454a-b65f-72c6adfd86e5 req-8f9de9dd-f7cc-49dc-87c7-46b0545fa494 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Received event network-vif-deleted-46923f1f-6cb6-4592-94c9-5a95246f0b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:50 np0005603609 nova_compute[221550]: 2026-01-31 07:57:50.885 221554 INFO nova.compute.manager [req-3560669c-fe70-454a-b65f-72c6adfd86e5 req-8f9de9dd-f7cc-49dc-87c7-46b0545fa494 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Neutron deleted interface 46923f1f-6cb6-4592-94c9-5a95246f0b99; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 02:57:50 np0005603609 nova_compute[221550]: 2026-01-31 07:57:50.886 221554 DEBUG nova.network.neutron [req-3560669c-fe70-454a-b65f-72c6adfd86e5 req-8f9de9dd-f7cc-49dc-87c7-46b0545fa494 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:57:51 np0005603609 nova_compute[221550]: 2026-01-31 07:57:51.171 221554 DEBUG nova.network.neutron [-] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:57:51 np0005603609 nova_compute[221550]: 2026-01-31 07:57:51.485 221554 INFO nova.compute.manager [-] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Took 20.35 seconds to deallocate network for instance.#033[00m
Jan 31 02:57:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:51.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:51 np0005603609 nova_compute[221550]: 2026-01-31 07:57:51.492 221554 DEBUG nova.compute.manager [req-3560669c-fe70-454a-b65f-72c6adfd86e5 req-8f9de9dd-f7cc-49dc-87c7-46b0545fa494 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2d377db9-047f-4be1-a0cf-8189254def22] Detach interface failed, port_id=46923f1f-6cb6-4592-94c9-5a95246f0b99, reason: Instance 2d377db9-047f-4be1-a0cf-8189254def22 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 02:57:52 np0005603609 nova_compute[221550]: 2026-01-31 07:57:52.730 221554 DEBUG oslo_concurrency.lockutils [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:52 np0005603609 nova_compute[221550]: 2026-01-31 07:57:52.730 221554 DEBUG oslo_concurrency.lockutils [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:52 np0005603609 nova_compute[221550]: 2026-01-31 07:57:52.843 221554 DEBUG oslo_concurrency.processutils [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:52.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:53 np0005603609 nova_compute[221550]: 2026-01-31 07:57:53.244 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:57:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1737226846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:57:53 np0005603609 nova_compute[221550]: 2026-01-31 07:57:53.299 221554 DEBUG oslo_concurrency.processutils [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:53 np0005603609 nova_compute[221550]: 2026-01-31 07:57:53.305 221554 DEBUG nova.compute.provider_tree [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:57:53 np0005603609 nova_compute[221550]: 2026-01-31 07:57:53.454 221554 DEBUG nova.scheduler.client.report [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:57:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:57:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:53.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:57:53 np0005603609 nova_compute[221550]: 2026-01-31 07:57:53.634 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:53 np0005603609 nova_compute[221550]: 2026-01-31 07:57:53.642 221554 DEBUG oslo_concurrency.lockutils [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:53 np0005603609 nova_compute[221550]: 2026-01-31 07:57:53.850 221554 INFO nova.scheduler.client.report [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Deleted allocations for instance 2d377db9-047f-4be1-a0cf-8189254def22#033[00m
Jan 31 02:57:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:57:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:54.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:57:55 np0005603609 nova_compute[221550]: 2026-01-31 07:57:55.230 221554 DEBUG oslo_concurrency.lockutils [None req-a85a7d32-6a49-4158-b2d9-4ae81d16f440 1e1c5eef3d264666bc90735dd338d82a a8cbd6cc22654dfab04487522a63426c - - default default] Lock "2d377db9-047f-4be1-a0cf-8189254def22" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 27.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:55.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:57:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:56.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:57:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:57:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:57.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:57:58 np0005603609 nova_compute[221550]: 2026-01-31 07:57:58.246 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:58 np0005603609 nova_compute[221550]: 2026-01-31 07:57:58.679 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:57:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:58.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:57:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:57:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:57:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:59.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:57:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:00.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:01.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:02.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:03 np0005603609 nova_compute[221550]: 2026-01-31 07:58:03.248 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:03.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:03 np0005603609 nova_compute[221550]: 2026-01-31 07:58:03.682 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:58:04.505290) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284505356, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1126, "num_deletes": 250, "total_data_size": 2333327, "memory_usage": 2361512, "flush_reason": "Manual Compaction"}
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284516642, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 942484, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41719, "largest_seqno": 42839, "table_properties": {"data_size": 938542, "index_size": 1530, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10696, "raw_average_key_size": 20, "raw_value_size": 930034, "raw_average_value_size": 1812, "num_data_blocks": 68, "num_entries": 513, "num_filter_entries": 513, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846194, "oldest_key_time": 1769846194, "file_creation_time": 1769846284, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 11378 microseconds, and 2341 cpu microseconds.
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:58:04.516676) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 942484 bytes OK
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:58:04.516693) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:58:04.518184) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:58:04.518196) EVENT_LOG_v1 {"time_micros": 1769846284518192, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:58:04.518211) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2327907, prev total WAL file size 2327907, number of live WAL files 2.
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:58:04.518675) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323538' seq:72057594037927935, type:22 .. '6D6772737461740031353039' seq:0, type:0; will stop at (end)
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(920KB)], [78(11MB)]
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284518706, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 12794685, "oldest_snapshot_seqno": -1}
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 6806 keys, 9538376 bytes, temperature: kUnknown
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284631435, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 9538376, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9494556, "index_size": 25724, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 174739, "raw_average_key_size": 25, "raw_value_size": 9374371, "raw_average_value_size": 1377, "num_data_blocks": 1023, "num_entries": 6806, "num_filter_entries": 6806, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769846284, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:58:04.631792) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9538376 bytes
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:58:04.633635) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.4 rd, 84.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.3 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(23.7) write-amplify(10.1) OK, records in: 7280, records dropped: 474 output_compression: NoCompression
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:58:04.633668) EVENT_LOG_v1 {"time_micros": 1769846284633654, "job": 48, "event": "compaction_finished", "compaction_time_micros": 112838, "compaction_time_cpu_micros": 23215, "output_level": 6, "num_output_files": 1, "total_output_size": 9538376, "num_input_records": 7280, "num_output_records": 6806, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284634083, "job": 48, "event": "table_file_deletion", "file_number": 80}
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284635924, "job": 48, "event": "table_file_deletion", "file_number": 78}
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:58:04.518595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:58:04.636011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:58:04.636019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:58:04.636023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:58:04.636028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:58:04.636032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:58:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:58:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:04.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:58:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:05.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:05 np0005603609 nova_compute[221550]: 2026-01-31 07:58:05.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:06.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:07.493 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:07.493 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:07.494 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:58:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:07.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:58:08 np0005603609 nova_compute[221550]: 2026-01-31 07:58:08.249 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:08 np0005603609 nova_compute[221550]: 2026-01-31 07:58:08.684 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:08.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:09 np0005603609 ovn_controller[130359]: 2026-01-31T07:58:09Z|00284|binding|INFO|Releasing lport d19b5f05-fa79-4835-8ef4-51f87493d59b from this chassis (sb_readonly=0)
Jan 31 02:58:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:09.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:09 np0005603609 nova_compute[221550]: 2026-01-31 07:58:09.519 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:09 np0005603609 ovn_controller[130359]: 2026-01-31T07:58:09Z|00285|binding|INFO|Releasing lport d19b5f05-fa79-4835-8ef4-51f87493d59b from this chassis (sb_readonly=0)
Jan 31 02:58:09 np0005603609 nova_compute[221550]: 2026-01-31 07:58:09.561 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:09 np0005603609 nova_compute[221550]: 2026-01-31 07:58:09.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:10 np0005603609 nova_compute[221550]: 2026-01-31 07:58:10.831 221554 DEBUG oslo_concurrency.lockutils [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "694a032a-234d-4e0c-9aa6-c7f4050fc232" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:10 np0005603609 nova_compute[221550]: 2026-01-31 07:58:10.832 221554 DEBUG oslo_concurrency.lockutils [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "694a032a-234d-4e0c-9aa6-c7f4050fc232" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:10 np0005603609 nova_compute[221550]: 2026-01-31 07:58:10.832 221554 DEBUG oslo_concurrency.lockutils [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:10 np0005603609 nova_compute[221550]: 2026-01-31 07:58:10.833 221554 DEBUG oslo_concurrency.lockutils [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:10 np0005603609 nova_compute[221550]: 2026-01-31 07:58:10.833 221554 DEBUG oslo_concurrency.lockutils [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:10 np0005603609 nova_compute[221550]: 2026-01-31 07:58:10.835 221554 INFO nova.compute.manager [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Terminating instance#033[00m
Jan 31 02:58:10 np0005603609 nova_compute[221550]: 2026-01-31 07:58:10.837 221554 DEBUG nova.compute.manager [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:58:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:58:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:10.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:58:11 np0005603609 kernel: tap237f2b31-27 (unregistering): left promiscuous mode
Jan 31 02:58:11 np0005603609 NetworkManager[49064]: <info>  [1769846291.0542] device (tap237f2b31-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:58:11 np0005603609 ovn_controller[130359]: 2026-01-31T07:58:11Z|00286|binding|INFO|Releasing lport 237f2b31-27f1-466b-b664-d3e725dc6535 from this chassis (sb_readonly=0)
Jan 31 02:58:11 np0005603609 ovn_controller[130359]: 2026-01-31T07:58:11Z|00287|binding|INFO|Setting lport 237f2b31-27f1-466b-b664-d3e725dc6535 down in Southbound
Jan 31 02:58:11 np0005603609 ovn_controller[130359]: 2026-01-31T07:58:11Z|00288|binding|INFO|Removing iface tap237f2b31-27 ovn-installed in OVS
Jan 31 02:58:11 np0005603609 nova_compute[221550]: 2026-01-31 07:58:11.065 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:11 np0005603609 nova_compute[221550]: 2026-01-31 07:58:11.072 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:11 np0005603609 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000053.scope: Deactivated successfully.
Jan 31 02:58:11 np0005603609 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000053.scope: Consumed 15.404s CPU time.
Jan 31 02:58:11 np0005603609 systemd-machined[190912]: Machine qemu-37-instance-00000053 terminated.
Jan 31 02:58:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:11.224 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:37:85 10.100.0.10'], port_security=['fa:16:3e:da:37:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '694a032a-234d-4e0c-9aa6-c7f4050fc232', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1cd91610847a480caeee0ae3cdabf066', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f1695a46-1d81-4453-9c52-9917c020bc65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e38d659-b6a8-4d3d-8a23-b8299c5114da, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=237f2b31-27f1-466b-b664-d3e725dc6535) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:58:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:11.226 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 237f2b31-27f1-466b-b664-d3e725dc6535 in datapath ca1ed3b2-b27d-427e-a9bd-cc12393752eb unbound from our chassis#033[00m
Jan 31 02:58:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:11.228 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ca1ed3b2-b27d-427e-a9bd-cc12393752eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:58:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:11.229 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e477ede2-9e93-4ed8-bda6-bf5579aee075]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:11.229 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb namespace which is not needed anymore#033[00m
Jan 31 02:58:11 np0005603609 nova_compute[221550]: 2026-01-31 07:58:11.306 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:11 np0005603609 nova_compute[221550]: 2026-01-31 07:58:11.318 221554 INFO nova.virt.libvirt.driver [-] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Instance destroyed successfully.#033[00m
Jan 31 02:58:11 np0005603609 nova_compute[221550]: 2026-01-31 07:58:11.319 221554 DEBUG nova.objects.instance [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'resources' on Instance uuid 694a032a-234d-4e0c-9aa6-c7f4050fc232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:58:11 np0005603609 nova_compute[221550]: 2026-01-31 07:58:11.450 221554 DEBUG nova.virt.libvirt.vif [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:56:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1263019804',display_name='tempest-ListServerFiltersTestJSON-instance-1263019804',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1263019804',id=83,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:56:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1cd91610847a480caeee0ae3cdabf066',ramdisk_id='',reservation_id='r-y34izdak',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-334452958',owner_user_name='tempest-ListServerFiltersTestJSON-334452958-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:56:55Z,user_data=None,user_id='f60419a58aea43b9a0b6db7d61d71246',uuid=694a032a-234d-4e0c-9aa6-c7f4050fc232,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "237f2b31-27f1-466b-b664-d3e725dc6535", "address": "fa:16:3e:da:37:85", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap237f2b31-27", "ovs_interfaceid": "237f2b31-27f1-466b-b664-d3e725dc6535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:58:11 np0005603609 nova_compute[221550]: 2026-01-31 07:58:11.450 221554 DEBUG nova.network.os_vif_util [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converting VIF {"id": "237f2b31-27f1-466b-b664-d3e725dc6535", "address": "fa:16:3e:da:37:85", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap237f2b31-27", "ovs_interfaceid": "237f2b31-27f1-466b-b664-d3e725dc6535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:58:11 np0005603609 nova_compute[221550]: 2026-01-31 07:58:11.451 221554 DEBUG nova.network.os_vif_util [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:37:85,bridge_name='br-int',has_traffic_filtering=True,id=237f2b31-27f1-466b-b664-d3e725dc6535,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap237f2b31-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:58:11 np0005603609 nova_compute[221550]: 2026-01-31 07:58:11.452 221554 DEBUG os_vif [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:37:85,bridge_name='br-int',has_traffic_filtering=True,id=237f2b31-27f1-466b-b664-d3e725dc6535,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap237f2b31-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:58:11 np0005603609 nova_compute[221550]: 2026-01-31 07:58:11.453 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:11 np0005603609 nova_compute[221550]: 2026-01-31 07:58:11.453 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap237f2b31-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:58:11 np0005603609 nova_compute[221550]: 2026-01-31 07:58:11.454 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:11 np0005603609 nova_compute[221550]: 2026-01-31 07:58:11.456 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:11 np0005603609 nova_compute[221550]: 2026-01-31 07:58:11.458 221554 INFO os_vif [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:37:85,bridge_name='br-int',has_traffic_filtering=True,id=237f2b31-27f1-466b-b664-d3e725dc6535,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap237f2b31-27')#033[00m
Jan 31 02:58:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:11.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:11 np0005603609 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[251932]: [NOTICE]   (251936) : haproxy version is 2.8.14-c23fe91
Jan 31 02:58:11 np0005603609 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[251932]: [NOTICE]   (251936) : path to executable is /usr/sbin/haproxy
Jan 31 02:58:11 np0005603609 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[251932]: [WARNING]  (251936) : Exiting Master process...
Jan 31 02:58:11 np0005603609 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[251932]: [ALERT]    (251936) : Current worker (251939) exited with code 143 (Terminated)
Jan 31 02:58:11 np0005603609 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[251932]: [WARNING]  (251936) : All workers exited. Exiting... (0)
Jan 31 02:58:11 np0005603609 systemd[1]: libpod-db782b39a79e5380823f6a303ab7e2bdc2d6742be71bf8bb8cb9c9d63598d37b.scope: Deactivated successfully.
Jan 31 02:58:11 np0005603609 podman[252522]: 2026-01-31 07:58:11.657279175 +0000 UTC m=+0.360088626 container died db782b39a79e5380823f6a303ab7e2bdc2d6742be71bf8bb8cb9c9d63598d37b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:58:11 np0005603609 nova_compute[221550]: 2026-01-31 07:58:11.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:11 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-db782b39a79e5380823f6a303ab7e2bdc2d6742be71bf8bb8cb9c9d63598d37b-userdata-shm.mount: Deactivated successfully.
Jan 31 02:58:11 np0005603609 systemd[1]: var-lib-containers-storage-overlay-f90f5d32489bdc40ce4152f7f53b7a70dafc7dfa2524b21b14731b3543922f87-merged.mount: Deactivated successfully.
Jan 31 02:58:12 np0005603609 podman[252522]: 2026-01-31 07:58:12.427176469 +0000 UTC m=+1.129985950 container cleanup db782b39a79e5380823f6a303ab7e2bdc2d6742be71bf8bb8cb9c9d63598d37b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:58:12 np0005603609 systemd[1]: libpod-conmon-db782b39a79e5380823f6a303ab7e2bdc2d6742be71bf8bb8cb9c9d63598d37b.scope: Deactivated successfully.
Jan 31 02:58:12 np0005603609 nova_compute[221550]: 2026-01-31 07:58:12.651 221554 DEBUG nova.compute.manager [req-adac341a-fe7f-4ad4-b5b9-ec5a59a32a58 req-ab2b6417-e69e-4505-98b2-cc3a6d56debd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Received event network-vif-unplugged-237f2b31-27f1-466b-b664-d3e725dc6535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:58:12 np0005603609 nova_compute[221550]: 2026-01-31 07:58:12.652 221554 DEBUG oslo_concurrency.lockutils [req-adac341a-fe7f-4ad4-b5b9-ec5a59a32a58 req-ab2b6417-e69e-4505-98b2-cc3a6d56debd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:12 np0005603609 nova_compute[221550]: 2026-01-31 07:58:12.652 221554 DEBUG oslo_concurrency.lockutils [req-adac341a-fe7f-4ad4-b5b9-ec5a59a32a58 req-ab2b6417-e69e-4505-98b2-cc3a6d56debd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:12 np0005603609 nova_compute[221550]: 2026-01-31 07:58:12.653 221554 DEBUG oslo_concurrency.lockutils [req-adac341a-fe7f-4ad4-b5b9-ec5a59a32a58 req-ab2b6417-e69e-4505-98b2-cc3a6d56debd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:12 np0005603609 nova_compute[221550]: 2026-01-31 07:58:12.653 221554 DEBUG nova.compute.manager [req-adac341a-fe7f-4ad4-b5b9-ec5a59a32a58 req-ab2b6417-e69e-4505-98b2-cc3a6d56debd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] No waiting events found dispatching network-vif-unplugged-237f2b31-27f1-466b-b664-d3e725dc6535 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:58:12 np0005603609 nova_compute[221550]: 2026-01-31 07:58:12.653 221554 DEBUG nova.compute.manager [req-adac341a-fe7f-4ad4-b5b9-ec5a59a32a58 req-ab2b6417-e69e-4505-98b2-cc3a6d56debd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Received event network-vif-unplugged-237f2b31-27f1-466b-b664-d3e725dc6535 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:58:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:12.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:13 np0005603609 nova_compute[221550]: 2026-01-31 07:58:13.251 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:13 np0005603609 podman[252580]: 2026-01-31 07:58:13.309200014 +0000 UTC m=+0.855277125 container remove db782b39a79e5380823f6a303ab7e2bdc2d6742be71bf8bb8cb9c9d63598d37b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:58:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:13.313 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2e2b4af3-6797-4e64-b3ce-a2e42be342a6]: (4, ('Sat Jan 31 07:58:11 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb (db782b39a79e5380823f6a303ab7e2bdc2d6742be71bf8bb8cb9c9d63598d37b)\ndb782b39a79e5380823f6a303ab7e2bdc2d6742be71bf8bb8cb9c9d63598d37b\nSat Jan 31 07:58:12 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb (db782b39a79e5380823f6a303ab7e2bdc2d6742be71bf8bb8cb9c9d63598d37b)\ndb782b39a79e5380823f6a303ab7e2bdc2d6742be71bf8bb8cb9c9d63598d37b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:13.315 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a36ac2-d367-49e6-a571-87be17016d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:13.316 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca1ed3b2-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:58:13 np0005603609 nova_compute[221550]: 2026-01-31 07:58:13.318 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:13 np0005603609 kernel: tapca1ed3b2-b0: left promiscuous mode
Jan 31 02:58:13 np0005603609 nova_compute[221550]: 2026-01-31 07:58:13.323 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:13.325 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e2751eeb-7eb0-4e6e-8fae-e6e09a1108a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:13.340 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c96706f9-bc80-4f28-9528-cfa45a913c16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:13.342 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1394a6-a60b-4db5-8249-2bfc9b6705e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:13.359 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7af558c5-116d-4fb4-8c39-2ec1bad6f9f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645462, 'reachable_time': 21239, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252596, 'error': None, 'target': 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:13 np0005603609 systemd[1]: run-netns-ovnmeta\x2dca1ed3b2\x2db27d\x2d427e\x2da9bd\x2dcc12393752eb.mount: Deactivated successfully.
Jan 31 02:58:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:13.364 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:58:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:13.364 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[671b1317-b490-4609-858d-a54ceb224bb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:58:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:13.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:58:13 np0005603609 nova_compute[221550]: 2026-01-31 07:58:13.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:13 np0005603609 nova_compute[221550]: 2026-01-31 07:58:13.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:58:13 np0005603609 nova_compute[221550]: 2026-01-31 07:58:13.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:58:13 np0005603609 nova_compute[221550]: 2026-01-31 07:58:13.949 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 31 02:58:13 np0005603609 nova_compute[221550]: 2026-01-31 07:58:13.949 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:58:14 np0005603609 nova_compute[221550]: 2026-01-31 07:58:14.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:14 np0005603609 nova_compute[221550]: 2026-01-31 07:58:14.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:14 np0005603609 nova_compute[221550]: 2026-01-31 07:58:14.856 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:14 np0005603609 nova_compute[221550]: 2026-01-31 07:58:14.857 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:14 np0005603609 nova_compute[221550]: 2026-01-31 07:58:14.857 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:14 np0005603609 nova_compute[221550]: 2026-01-31 07:58:14.857 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:58:14 np0005603609 nova_compute[221550]: 2026-01-31 07:58:14.858 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:58:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:14.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:15 np0005603609 nova_compute[221550]: 2026-01-31 07:58:15.172 221554 DEBUG nova.compute.manager [req-a2e6d093-e968-42d4-bab4-59c4a7c71790 req-5c1f5f78-588e-4f31-837d-c152c5b84c31 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Received event network-vif-plugged-237f2b31-27f1-466b-b664-d3e725dc6535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:58:15 np0005603609 nova_compute[221550]: 2026-01-31 07:58:15.173 221554 DEBUG oslo_concurrency.lockutils [req-a2e6d093-e968-42d4-bab4-59c4a7c71790 req-5c1f5f78-588e-4f31-837d-c152c5b84c31 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:15 np0005603609 nova_compute[221550]: 2026-01-31 07:58:15.173 221554 DEBUG oslo_concurrency.lockutils [req-a2e6d093-e968-42d4-bab4-59c4a7c71790 req-5c1f5f78-588e-4f31-837d-c152c5b84c31 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:15 np0005603609 nova_compute[221550]: 2026-01-31 07:58:15.173 221554 DEBUG oslo_concurrency.lockutils [req-a2e6d093-e968-42d4-bab4-59c4a7c71790 req-5c1f5f78-588e-4f31-837d-c152c5b84c31 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "694a032a-234d-4e0c-9aa6-c7f4050fc232-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:15 np0005603609 nova_compute[221550]: 2026-01-31 07:58:15.173 221554 DEBUG nova.compute.manager [req-a2e6d093-e968-42d4-bab4-59c4a7c71790 req-5c1f5f78-588e-4f31-837d-c152c5b84c31 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] No waiting events found dispatching network-vif-plugged-237f2b31-27f1-466b-b664-d3e725dc6535 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:58:15 np0005603609 nova_compute[221550]: 2026-01-31 07:58:15.173 221554 WARNING nova.compute.manager [req-a2e6d093-e968-42d4-bab4-59c4a7c71790 req-5c1f5f78-588e-4f31-837d-c152c5b84c31 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Received unexpected event network-vif-plugged-237f2b31-27f1-466b-b664-d3e725dc6535 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:58:15 np0005603609 podman[252610]: 2026-01-31 07:58:15.182642541 +0000 UTC m=+0.055351381 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:58:15 np0005603609 podman[252609]: 2026-01-31 07:58:15.276193471 +0000 UTC m=+0.152015756 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 02:58:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:58:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1802042998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:58:15 np0005603609 nova_compute[221550]: 2026-01-31 07:58:15.485 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:58:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:15.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:15 np0005603609 nova_compute[221550]: 2026-01-31 07:58:15.939 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:58:15 np0005603609 nova_compute[221550]: 2026-01-31 07:58:15.939 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:58:16 np0005603609 nova_compute[221550]: 2026-01-31 07:58:16.136 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:58:16 np0005603609 nova_compute[221550]: 2026-01-31 07:58:16.138 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4647MB free_disk=20.830791473388672GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:58:16 np0005603609 nova_compute[221550]: 2026-01-31 07:58:16.138 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:16 np0005603609 nova_compute[221550]: 2026-01-31 07:58:16.138 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:16 np0005603609 nova_compute[221550]: 2026-01-31 07:58:16.455 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:16 np0005603609 nova_compute[221550]: 2026-01-31 07:58:16.589 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 694a032a-234d-4e0c-9aa6-c7f4050fc232 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:58:16 np0005603609 nova_compute[221550]: 2026-01-31 07:58:16.590 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:58:16 np0005603609 nova_compute[221550]: 2026-01-31 07:58:16.591 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=704MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:58:16 np0005603609 nova_compute[221550]: 2026-01-31 07:58:16.708 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:58:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:58:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:16.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:58:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:58:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/14013203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:58:17 np0005603609 nova_compute[221550]: 2026-01-31 07:58:17.146 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:58:17 np0005603609 nova_compute[221550]: 2026-01-31 07:58:17.151 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:58:17 np0005603609 nova_compute[221550]: 2026-01-31 07:58:17.248 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:58:17 np0005603609 nova_compute[221550]: 2026-01-31 07:58:17.453 221554 INFO nova.virt.libvirt.driver [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Deleting instance files /var/lib/nova/instances/694a032a-234d-4e0c-9aa6-c7f4050fc232_del#033[00m
Jan 31 02:58:17 np0005603609 nova_compute[221550]: 2026-01-31 07:58:17.454 221554 INFO nova.virt.libvirt.driver [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Deletion of /var/lib/nova/instances/694a032a-234d-4e0c-9aa6-c7f4050fc232_del complete#033[00m
Jan 31 02:58:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:58:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:17.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:58:17 np0005603609 nova_compute[221550]: 2026-01-31 07:58:17.882 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:58:17 np0005603609 nova_compute[221550]: 2026-01-31 07:58:17.882 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:18 np0005603609 nova_compute[221550]: 2026-01-31 07:58:18.045 221554 INFO nova.compute.manager [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Took 7.21 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:58:18 np0005603609 nova_compute[221550]: 2026-01-31 07:58:18.045 221554 DEBUG oslo.service.loopingcall [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:58:18 np0005603609 nova_compute[221550]: 2026-01-31 07:58:18.045 221554 DEBUG nova.compute.manager [-] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:58:18 np0005603609 nova_compute[221550]: 2026-01-31 07:58:18.046 221554 DEBUG nova.network.neutron [-] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:58:18 np0005603609 nova_compute[221550]: 2026-01-31 07:58:18.253 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:58:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:18.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:58:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:19.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:58:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:20.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:58:21 np0005603609 nova_compute[221550]: 2026-01-31 07:58:21.460 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:21.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:21 np0005603609 nova_compute[221550]: 2026-01-31 07:58:21.879 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:21 np0005603609 nova_compute[221550]: 2026-01-31 07:58:21.880 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:21 np0005603609 nova_compute[221550]: 2026-01-31 07:58:21.880 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:21 np0005603609 nova_compute[221550]: 2026-01-31 07:58:21.880 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:58:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:22.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:23 np0005603609 nova_compute[221550]: 2026-01-31 07:58:23.256 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:23.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:24.058 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:58:24 np0005603609 nova_compute[221550]: 2026-01-31 07:58:24.058 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:24.059 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:58:24 np0005603609 nova_compute[221550]: 2026-01-31 07:58:24.088 221554 DEBUG nova.network.neutron [-] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:58:24 np0005603609 nova_compute[221550]: 2026-01-31 07:58:24.155 221554 INFO nova.compute.manager [-] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Took 6.11 seconds to deallocate network for instance.#033[00m
Jan 31 02:58:24 np0005603609 nova_compute[221550]: 2026-01-31 07:58:24.290 221554 DEBUG oslo_concurrency.lockutils [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:24 np0005603609 nova_compute[221550]: 2026-01-31 07:58:24.291 221554 DEBUG oslo_concurrency.lockutils [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:24 np0005603609 nova_compute[221550]: 2026-01-31 07:58:24.383 221554 DEBUG oslo_concurrency.processutils [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:58:24 np0005603609 nova_compute[221550]: 2026-01-31 07:58:24.404 221554 DEBUG nova.compute.manager [req-1cbf1d20-89ee-4e98-bee9-30aafb263b62 req-351fc756-87f5-4869-94c1-ab16917b29c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Received event network-vif-deleted-237f2b31-27f1-466b-b664-d3e725dc6535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:58:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:58:24 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1324782983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:58:24 np0005603609 nova_compute[221550]: 2026-01-31 07:58:24.806 221554 DEBUG oslo_concurrency.processutils [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:58:24 np0005603609 nova_compute[221550]: 2026-01-31 07:58:24.812 221554 DEBUG nova.compute.provider_tree [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:58:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:24.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:25 np0005603609 nova_compute[221550]: 2026-01-31 07:58:25.028 221554 DEBUG nova.scheduler.client.report [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:58:25 np0005603609 nova_compute[221550]: 2026-01-31 07:58:25.055 221554 DEBUG oslo_concurrency.lockutils [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:25 np0005603609 nova_compute[221550]: 2026-01-31 07:58:25.178 221554 INFO nova.scheduler.client.report [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Deleted allocations for instance 694a032a-234d-4e0c-9aa6-c7f4050fc232#033[00m
Jan 31 02:58:25 np0005603609 nova_compute[221550]: 2026-01-31 07:58:25.318 221554 DEBUG oslo_concurrency.lockutils [None req-2566d154-2c85-4de4-a3d4-d2b6c3d7b5a7 f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "694a032a-234d-4e0c-9aa6-c7f4050fc232" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:25.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:26 np0005603609 nova_compute[221550]: 2026-01-31 07:58:26.316 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846291.3156943, 694a032a-234d-4e0c-9aa6-c7f4050fc232 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:58:26 np0005603609 nova_compute[221550]: 2026-01-31 07:58:26.317 221554 INFO nova.compute.manager [-] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:58:26 np0005603609 nova_compute[221550]: 2026-01-31 07:58:26.422 221554 DEBUG nova.compute.manager [None req-98d74595-14b7-4990-8659-62a7a1eed9e1 - - - - - -] [instance: 694a032a-234d-4e0c-9aa6-c7f4050fc232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:58:26 np0005603609 nova_compute[221550]: 2026-01-31 07:58:26.510 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:26.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:58:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:27.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:58:27 np0005603609 nova_compute[221550]: 2026-01-31 07:58:27.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:28 np0005603609 nova_compute[221550]: 2026-01-31 07:58:28.257 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:28.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:58:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:29.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:58:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:58:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:58:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:30.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:58:31.062 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:58:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:58:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:58:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:58:31 np0005603609 nova_compute[221550]: 2026-01-31 07:58:31.513 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:31.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:58:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:32.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:58:33 np0005603609 nova_compute[221550]: 2026-01-31 07:58:33.260 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:58:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:33.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:58:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:34.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:58:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:35.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:58:36 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:58:36 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:58:36 np0005603609 nova_compute[221550]: 2026-01-31 07:58:36.516 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:36.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:37.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:38 np0005603609 nova_compute[221550]: 2026-01-31 07:58:38.261 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:38.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:39.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:41.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:41 np0005603609 nova_compute[221550]: 2026-01-31 07:58:41.557 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:41.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:43.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:43 np0005603609 nova_compute[221550]: 2026-01-31 07:58:43.265 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:58:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:43.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:58:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:45.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:45.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:46 np0005603609 podman[252892]: 2026-01-31 07:58:46.154015603 +0000 UTC m=+0.043375248 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 02:58:46 np0005603609 podman[252891]: 2026-01-31 07:58:46.182905788 +0000 UTC m=+0.072111129 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 02:58:46 np0005603609 nova_compute[221550]: 2026-01-31 07:58:46.558 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:58:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:47.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:58:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:58:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:47.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:58:48 np0005603609 nova_compute[221550]: 2026-01-31 07:58:48.105 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:48 np0005603609 nova_compute[221550]: 2026-01-31 07:58:48.265 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:49.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:58:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:49.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:58:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:51.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:51 np0005603609 nova_compute[221550]: 2026-01-31 07:58:51.562 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 02:58:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:51.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 02:58:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:53.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:53 np0005603609 nova_compute[221550]: 2026-01-31 07:58:53.317 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:53.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:55.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:55.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:56 np0005603609 nova_compute[221550]: 2026-01-31 07:58:56.566 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:57.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:57.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:58 np0005603609 nova_compute[221550]: 2026-01-31 07:58:58.319 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:59.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:58:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:58:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:59.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:58:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:59:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:01.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:59:01 np0005603609 nova_compute[221550]: 2026-01-31 07:59:01.567 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:01.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:59:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:03.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:59:03 np0005603609 nova_compute[221550]: 2026-01-31 07:59:03.321 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:03.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:05.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:05.389 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:59:05 np0005603609 nova_compute[221550]: 2026-01-31 07:59:05.389 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:05.390 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:59:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:05.391 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:05.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:06 np0005603609 nova_compute[221550]: 2026-01-31 07:59:06.571 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:07.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:07.494 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:07.494 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:07.495 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:59:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:07.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:59:07 np0005603609 nova_compute[221550]: 2026-01-31 07:59:07.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:08 np0005603609 nova_compute[221550]: 2026-01-31 07:59:08.324 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:09.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:59:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:09.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:59:09 np0005603609 nova_compute[221550]: 2026-01-31 07:59:09.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:11 np0005603609 nova_compute[221550]: 2026-01-31 07:59:11.150 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Acquiring lock "65660f76-4c1d-4057-9be2-1ced558ca9e2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:11 np0005603609 nova_compute[221550]: 2026-01-31 07:59:11.151 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:11.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:11 np0005603609 nova_compute[221550]: 2026-01-31 07:59:11.290 221554 DEBUG nova.compute.manager [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:59:11 np0005603609 nova_compute[221550]: 2026-01-31 07:59:11.441 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:11 np0005603609 nova_compute[221550]: 2026-01-31 07:59:11.441 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:11 np0005603609 nova_compute[221550]: 2026-01-31 07:59:11.449 221554 DEBUG nova.virt.hardware [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:59:11 np0005603609 nova_compute[221550]: 2026-01-31 07:59:11.449 221554 INFO nova.compute.claims [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 02:59:11 np0005603609 nova_compute[221550]: 2026-01-31 07:59:11.573 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:59:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:11.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:59:11 np0005603609 nova_compute[221550]: 2026-01-31 07:59:11.675 221554 DEBUG oslo_concurrency.processutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:59:12 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/471546171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.068 221554 DEBUG oslo_concurrency.processutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.073 221554 DEBUG nova.compute.provider_tree [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.131 221554 DEBUG nova.scheduler.client.report [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.176 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.177 221554 DEBUG nova.compute.manager [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.249 221554 DEBUG nova.compute.manager [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.249 221554 DEBUG nova.network.neutron [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.281 221554 INFO nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.338 221554 DEBUG nova.compute.manager [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.456 221554 DEBUG nova.compute.manager [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.457 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.457 221554 INFO nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Creating image(s)#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.481 221554 DEBUG nova.storage.rbd_utils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] rbd image 65660f76-4c1d-4057-9be2-1ced558ca9e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.512 221554 DEBUG nova.storage.rbd_utils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] rbd image 65660f76-4c1d-4057-9be2-1ced558ca9e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.539 221554 DEBUG nova.storage.rbd_utils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] rbd image 65660f76-4c1d-4057-9be2-1ced558ca9e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.543 221554 DEBUG oslo_concurrency.processutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.596 221554 DEBUG oslo_concurrency.processutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.597 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.597 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.598 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.620 221554 DEBUG nova.storage.rbd_utils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] rbd image 65660f76-4c1d-4057-9be2-1ced558ca9e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.623 221554 DEBUG oslo_concurrency.processutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 65660f76-4c1d-4057-9be2-1ced558ca9e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.702 221554 DEBUG nova.policy [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb422aa2270c43fa9ecb0da10968f867', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '18503ca80bcc4b588e398b4f03f7908b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.867 221554 DEBUG oslo_concurrency.processutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 65660f76-4c1d-4057-9be2-1ced558ca9e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.244s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:12 np0005603609 nova_compute[221550]: 2026-01-31 07:59:12.935 221554 DEBUG nova.storage.rbd_utils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] resizing rbd image 65660f76-4c1d-4057-9be2-1ced558ca9e2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:59:13 np0005603609 nova_compute[221550]: 2026-01-31 07:59:13.039 221554 DEBUG nova.objects.instance [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lazy-loading 'migration_context' on Instance uuid 65660f76-4c1d-4057-9be2-1ced558ca9e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:13 np0005603609 nova_compute[221550]: 2026-01-31 07:59:13.071 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:59:13 np0005603609 nova_compute[221550]: 2026-01-31 07:59:13.071 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Ensure instance console log exists: /var/lib/nova/instances/65660f76-4c1d-4057-9be2-1ced558ca9e2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:59:13 np0005603609 nova_compute[221550]: 2026-01-31 07:59:13.072 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:13 np0005603609 nova_compute[221550]: 2026-01-31 07:59:13.072 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:13 np0005603609 nova_compute[221550]: 2026-01-31 07:59:13.072 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:59:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:13.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:59:13 np0005603609 nova_compute[221550]: 2026-01-31 07:59:13.371 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:59:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:13.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:59:13 np0005603609 nova_compute[221550]: 2026-01-31 07:59:13.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:13 np0005603609 nova_compute[221550]: 2026-01-31 07:59:13.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:59:13 np0005603609 nova_compute[221550]: 2026-01-31 07:59:13.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:59:13 np0005603609 nova_compute[221550]: 2026-01-31 07:59:13.695 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 02:59:13 np0005603609 nova_compute[221550]: 2026-01-31 07:59:13.695 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:59:13 np0005603609 nova_compute[221550]: 2026-01-31 07:59:13.697 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:14 np0005603609 nova_compute[221550]: 2026-01-31 07:59:14.242 221554 DEBUG nova.network.neutron [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Successfully created port: b11ec066-c722-43dc-8ff2-3aa8f300c14c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:59:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:59:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:15.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:59:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:59:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:15.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:59:15 np0005603609 nova_compute[221550]: 2026-01-31 07:59:15.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:15 np0005603609 nova_compute[221550]: 2026-01-31 07:59:15.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:16 np0005603609 nova_compute[221550]: 2026-01-31 07:59:16.577 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:17.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:17 np0005603609 podman[253126]: 2026-01-31 07:59:17.166675016 +0000 UTC m=+0.049654097 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 02:59:17 np0005603609 podman[253125]: 2026-01-31 07:59:17.215606264 +0000 UTC m=+0.101811813 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 02:59:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:59:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:17.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:59:18 np0005603609 nova_compute[221550]: 2026-01-31 07:59:18.374 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:19.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:19.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:19 np0005603609 nova_compute[221550]: 2026-01-31 07:59:19.626 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:19 np0005603609 nova_compute[221550]: 2026-01-31 07:59:19.626 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:19 np0005603609 nova_compute[221550]: 2026-01-31 07:59:19.626 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:19 np0005603609 nova_compute[221550]: 2026-01-31 07:59:19.626 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:59:19 np0005603609 nova_compute[221550]: 2026-01-31 07:59:19.626 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:19 np0005603609 nova_compute[221550]: 2026-01-31 07:59:19.911 221554 DEBUG nova.network.neutron [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Successfully updated port: b11ec066-c722-43dc-8ff2-3aa8f300c14c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:59:19 np0005603609 nova_compute[221550]: 2026-01-31 07:59:19.950 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Acquiring lock "refresh_cache-65660f76-4c1d-4057-9be2-1ced558ca9e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:59:19 np0005603609 nova_compute[221550]: 2026-01-31 07:59:19.950 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Acquired lock "refresh_cache-65660f76-4c1d-4057-9be2-1ced558ca9e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:59:19 np0005603609 nova_compute[221550]: 2026-01-31 07:59:19.951 221554 DEBUG nova.network.neutron [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:59:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:59:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3292487236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.028 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.123 221554 DEBUG nova.compute.manager [req-97b0abab-3ef9-4017-850c-ae793b47753d req-bf35a162-517c-4cfc-9093-5013a1a5feba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received event network-changed-b11ec066-c722-43dc-8ff2-3aa8f300c14c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.123 221554 DEBUG nova.compute.manager [req-97b0abab-3ef9-4017-850c-ae793b47753d req-bf35a162-517c-4cfc-9093-5013a1a5feba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Refreshing instance network info cache due to event network-changed-b11ec066-c722-43dc-8ff2-3aa8f300c14c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.123 221554 DEBUG oslo_concurrency.lockutils [req-97b0abab-3ef9-4017-850c-ae793b47753d req-bf35a162-517c-4cfc-9093-5013a1a5feba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-65660f76-4c1d-4057-9be2-1ced558ca9e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.155 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.156 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4655MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.156 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.156 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.290 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 65660f76-4c1d-4057-9be2-1ced558ca9e2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.291 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.291 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.316 221554 DEBUG nova.network.neutron [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.354 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:59:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1724540177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.816 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.823 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.883 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.991 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:59:20 np0005603609 nova_compute[221550]: 2026-01-31 07:59:20.992 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:59:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:21.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:59:21 np0005603609 nova_compute[221550]: 2026-01-31 07:59:21.582 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:21.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.799 221554 DEBUG nova.network.neutron [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Updating instance_info_cache with network_info: [{"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.859 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Releasing lock "refresh_cache-65660f76-4c1d-4057-9be2-1ced558ca9e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.860 221554 DEBUG nova.compute.manager [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Instance network_info: |[{"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.860 221554 DEBUG oslo_concurrency.lockutils [req-97b0abab-3ef9-4017-850c-ae793b47753d req-bf35a162-517c-4cfc-9093-5013a1a5feba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-65660f76-4c1d-4057-9be2-1ced558ca9e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.860 221554 DEBUG nova.network.neutron [req-97b0abab-3ef9-4017-850c-ae793b47753d req-bf35a162-517c-4cfc-9093-5013a1a5feba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Refreshing network info cache for port b11ec066-c722-43dc-8ff2-3aa8f300c14c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.864 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Start _get_guest_xml network_info=[{"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.869 221554 WARNING nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.879 221554 DEBUG nova.virt.libvirt.host [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.880 221554 DEBUG nova.virt.libvirt.host [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.888 221554 DEBUG nova.virt.libvirt.host [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.889 221554 DEBUG nova.virt.libvirt.host [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.890 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.890 221554 DEBUG nova.virt.hardware [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.891 221554 DEBUG nova.virt.hardware [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.891 221554 DEBUG nova.virt.hardware [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.891 221554 DEBUG nova.virt.hardware [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.892 221554 DEBUG nova.virt.hardware [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.892 221554 DEBUG nova.virt.hardware [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.892 221554 DEBUG nova.virt.hardware [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.892 221554 DEBUG nova.virt.hardware [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.893 221554 DEBUG nova.virt.hardware [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.893 221554 DEBUG nova.virt.hardware [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.893 221554 DEBUG nova.virt.hardware [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:59:22 np0005603609 nova_compute[221550]: 2026-01-31 07:59:22.896 221554 DEBUG oslo_concurrency.processutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:23.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:59:23 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4102117780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.299 221554 DEBUG oslo_concurrency.processutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.325 221554 DEBUG nova.storage.rbd_utils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] rbd image 65660f76-4c1d-4057-9be2-1ced558ca9e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.329 221554 DEBUG oslo_concurrency.processutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.375 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:23.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:59:23 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2095121666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.722 221554 DEBUG oslo_concurrency.processutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.724 221554 DEBUG nova.virt.libvirt.vif [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:59:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-783386843',display_name='tempest-InstanceActionsTestJSON-server-783386843',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-783386843',id=86,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18503ca80bcc4b588e398b4f03f7908b',ramdisk_id='',reservation_id='r-d0q3szj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-62779839',owner_user_name='tempest-InstanceActionsT
estJSON-62779839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:59:12Z,user_data=None,user_id='fb422aa2270c43fa9ecb0da10968f867',uuid=65660f76-4c1d-4057-9be2-1ced558ca9e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.724 221554 DEBUG nova.network.os_vif_util [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Converting VIF {"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.725 221554 DEBUG nova.network.os_vif_util [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:a7:01,bridge_name='br-int',has_traffic_filtering=True,id=b11ec066-c722-43dc-8ff2-3aa8f300c14c,network=Network(88012c28-2cc8-46ef-a3e3-1589170b67c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb11ec066-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.726 221554 DEBUG nova.objects.instance [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lazy-loading 'pci_devices' on Instance uuid 65660f76-4c1d-4057-9be2-1ced558ca9e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.753 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  <uuid>65660f76-4c1d-4057-9be2-1ced558ca9e2</uuid>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  <name>instance-00000056</name>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <nova:name>tempest-InstanceActionsTestJSON-server-783386843</nova:name>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:59:22</nova:creationTime>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        <nova:user uuid="fb422aa2270c43fa9ecb0da10968f867">tempest-InstanceActionsTestJSON-62779839-project-member</nova:user>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        <nova:project uuid="18503ca80bcc4b588e398b4f03f7908b">tempest-InstanceActionsTestJSON-62779839</nova:project>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        <nova:port uuid="b11ec066-c722-43dc-8ff2-3aa8f300c14c">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <entry name="serial">65660f76-4c1d-4057-9be2-1ced558ca9e2</entry>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <entry name="uuid">65660f76-4c1d-4057-9be2-1ced558ca9e2</entry>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/65660f76-4c1d-4057-9be2-1ced558ca9e2_disk">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/65660f76-4c1d-4057-9be2-1ced558ca9e2_disk.config">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:a7:a7:01"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <target dev="tapb11ec066-c7"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/65660f76-4c1d-4057-9be2-1ced558ca9e2/console.log" append="off"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:59:23 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:59:23 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:59:23 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:59:23 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.754 221554 DEBUG nova.compute.manager [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Preparing to wait for external event network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.754 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Acquiring lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.755 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.755 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.756 221554 DEBUG nova.virt.libvirt.vif [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:59:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-783386843',display_name='tempest-InstanceActionsTestJSON-server-783386843',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-783386843',id=86,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18503ca80bcc4b588e398b4f03f7908b',ramdisk_id='',reservation_id='r-d0q3szj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-62779839',owner_user_name='tempest-Instan
ceActionsTestJSON-62779839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:59:12Z,user_data=None,user_id='fb422aa2270c43fa9ecb0da10968f867',uuid=65660f76-4c1d-4057-9be2-1ced558ca9e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.756 221554 DEBUG nova.network.os_vif_util [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Converting VIF {"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.756 221554 DEBUG nova.network.os_vif_util [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:a7:01,bridge_name='br-int',has_traffic_filtering=True,id=b11ec066-c722-43dc-8ff2-3aa8f300c14c,network=Network(88012c28-2cc8-46ef-a3e3-1589170b67c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb11ec066-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.757 221554 DEBUG os_vif [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:a7:01,bridge_name='br-int',has_traffic_filtering=True,id=b11ec066-c722-43dc-8ff2-3aa8f300c14c,network=Network(88012c28-2cc8-46ef-a3e3-1589170b67c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb11ec066-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.757 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.757 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.758 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.760 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.760 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb11ec066-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.761 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb11ec066-c7, col_values=(('external_ids', {'iface-id': 'b11ec066-c722-43dc-8ff2-3aa8f300c14c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:a7:01', 'vm-uuid': '65660f76-4c1d-4057-9be2-1ced558ca9e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.791 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:23 np0005603609 NetworkManager[49064]: <info>  [1769846363.7927] manager: (tapb11ec066-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.794 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.797 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.798 221554 INFO os_vif [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:a7:01,bridge_name='br-int',has_traffic_filtering=True,id=b11ec066-c722-43dc-8ff2-3aa8f300c14c,network=Network(88012c28-2cc8-46ef-a3e3-1589170b67c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb11ec066-c7')#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.891 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.892 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.895 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] No VIF found with MAC fa:16:3e:a7:a7:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.896 221554 INFO nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Using config drive#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.920 221554 DEBUG nova.storage.rbd_utils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] rbd image 65660f76-4c1d-4057-9be2-1ced558ca9e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.992 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.993 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.993 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:23 np0005603609 nova_compute[221550]: 2026-01-31 07:59:23.993 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:59:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:25.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:25.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:26 np0005603609 nova_compute[221550]: 2026-01-31 07:59:26.611 221554 INFO nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Creating config drive at /var/lib/nova/instances/65660f76-4c1d-4057-9be2-1ced558ca9e2/disk.config#033[00m
Jan 31 02:59:26 np0005603609 nova_compute[221550]: 2026-01-31 07:59:26.617 221554 DEBUG oslo_concurrency.processutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/65660f76-4c1d-4057-9be2-1ced558ca9e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0pwn66m7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:26 np0005603609 nova_compute[221550]: 2026-01-31 07:59:26.744 221554 DEBUG oslo_concurrency.processutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/65660f76-4c1d-4057-9be2-1ced558ca9e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0pwn66m7" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:26 np0005603609 nova_compute[221550]: 2026-01-31 07:59:26.782 221554 DEBUG nova.storage.rbd_utils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] rbd image 65660f76-4c1d-4057-9be2-1ced558ca9e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:59:26 np0005603609 nova_compute[221550]: 2026-01-31 07:59:26.788 221554 DEBUG oslo_concurrency.processutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/65660f76-4c1d-4057-9be2-1ced558ca9e2/disk.config 65660f76-4c1d-4057-9be2-1ced558ca9e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.069 221554 DEBUG oslo_concurrency.processutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/65660f76-4c1d-4057-9be2-1ced558ca9e2/disk.config 65660f76-4c1d-4057-9be2-1ced558ca9e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.070 221554 INFO nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Deleting local config drive /var/lib/nova/instances/65660f76-4c1d-4057-9be2-1ced558ca9e2/disk.config because it was imported into RBD.#033[00m
Jan 31 02:59:27 np0005603609 kernel: tapb11ec066-c7: entered promiscuous mode
Jan 31 02:59:27 np0005603609 NetworkManager[49064]: <info>  [1769846367.1371] manager: (tapb11ec066-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Jan 31 02:59:27 np0005603609 ovn_controller[130359]: 2026-01-31T07:59:27Z|00289|binding|INFO|Claiming lport b11ec066-c722-43dc-8ff2-3aa8f300c14c for this chassis.
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.137 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:27 np0005603609 ovn_controller[130359]: 2026-01-31T07:59:27Z|00290|binding|INFO|b11ec066-c722-43dc-8ff2-3aa8f300c14c: Claiming fa:16:3e:a7:a7:01 10.100.0.3
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.143 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.146 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.150 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.167 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:a7:01 10.100.0.3'], port_security=['fa:16:3e:a7:a7:01 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '65660f76-4c1d-4057-9be2-1ced558ca9e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88012c28-2cc8-46ef-a3e3-1589170b67c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18503ca80bcc4b588e398b4f03f7908b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f478e482-3c6f-43fe-ac92-0fd53d2caff4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e491a564-bfa2-4605-adee-6a1b7cb1585c, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=b11ec066-c722-43dc-8ff2-3aa8f300c14c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.169 140058 INFO neutron.agent.ovn.metadata.agent [-] Port b11ec066-c722-43dc-8ff2-3aa8f300c14c in datapath 88012c28-2cc8-46ef-a3e3-1589170b67c0 bound to our chassis#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.170 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88012c28-2cc8-46ef-a3e3-1589170b67c0#033[00m
Jan 31 02:59:27 np0005603609 systemd-machined[190912]: New machine qemu-38-instance-00000056.
Jan 31 02:59:27 np0005603609 ovn_controller[130359]: 2026-01-31T07:59:27Z|00291|binding|INFO|Setting lport b11ec066-c722-43dc-8ff2-3aa8f300c14c ovn-installed in OVS
Jan 31 02:59:27 np0005603609 ovn_controller[130359]: 2026-01-31T07:59:27Z|00292|binding|INFO|Setting lport b11ec066-c722-43dc-8ff2-3aa8f300c14c up in Southbound
Jan 31 02:59:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:27.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.180 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:27 np0005603609 systemd[1]: Started Virtual Machine qemu-38-instance-00000056.
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.185 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[043913af-1ba6-49c2-aa81-7514bb585061]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.187 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap88012c28-21 in ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.190 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap88012c28-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.190 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[49eafb46-6228-4d7d-afd2-34582514e159]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.191 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd05ca8-42dd-42be-be01-a4497529a14f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:27 np0005603609 systemd-udevd[253353]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.201 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[27cb333e-7485-4d2c-91df-9fb8d97310de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:27 np0005603609 NetworkManager[49064]: <info>  [1769846367.2071] device (tapb11ec066-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:59:27 np0005603609 NetworkManager[49064]: <info>  [1769846367.2077] device (tapb11ec066-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.215 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ba197847-4f67-4c5a-8be2-96443e379c11]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.242 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[cd56d8d7-6904-4717-91af-71652d815d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.249 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5d046d15-9a4f-49c2-bd9b-7e7da2c29299]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:27 np0005603609 NetworkManager[49064]: <info>  [1769846367.2514] manager: (tap88012c28-20): new Veth device (/org/freedesktop/NetworkManager/Devices/143)
Jan 31 02:59:27 np0005603609 systemd-udevd[253360]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.282 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[775d1900-81d0-4731-998b-2bfc96c5fffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.286 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d784c4c1-46a4-43d9-8ff4-73d2e86beadf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:27 np0005603609 NetworkManager[49064]: <info>  [1769846367.3037] device (tap88012c28-20): carrier: link connected
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.309 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[0d8aa190-95b8-49d3-b8a7-e7f49a1deff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.322 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3ab470-505b-4edd-ab87-c854ecb37937]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88012c28-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:7f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660700, 'reachable_time': 26677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253385, 'error': None, 'target': 'ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.336 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e36f79b9-8e27-4258-810a-e9247d8deed9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe49:7fae'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 660700, 'tstamp': 660700}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253386, 'error': None, 'target': 'ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.351 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8f64ba4a-a75f-4bb9-95f5-0e7b80f560cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88012c28-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:7f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660700, 'reachable_time': 26677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253387, 'error': None, 'target': 'ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.373 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f05a5773-2340-468d-8823-0e3a6958f98b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.416 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bdcad97e-03fa-440b-9b33-6ebb38257dc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.418 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88012c28-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.418 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.418 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88012c28-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.420 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:27 np0005603609 kernel: tap88012c28-20: entered promiscuous mode
Jan 31 02:59:27 np0005603609 NetworkManager[49064]: <info>  [1769846367.4210] manager: (tap88012c28-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.422 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.424 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88012c28-20, col_values=(('external_ids', {'iface-id': '4b49eb8a-82a2-420f-8ab9-52025af81cc8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.425 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:27 np0005603609 ovn_controller[130359]: 2026-01-31T07:59:27Z|00293|binding|INFO|Releasing lport 4b49eb8a-82a2-420f-8ab9-52025af81cc8 from this chassis (sb_readonly=0)
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.426 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.428 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/88012c28-2cc8-46ef-a3e3-1589170b67c0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/88012c28-2cc8-46ef-a3e3-1589170b67c0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.429 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b99ff87d-55aa-4b3a-9beb-70f4ea510dac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.430 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-88012c28-2cc8-46ef-a3e3-1589170b67c0
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/88012c28-2cc8-46ef-a3e3-1589170b67c0.pid.haproxy
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 88012c28-2cc8-46ef-a3e3-1589170b67c0
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.431 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:27.431 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0', 'env', 'PROCESS_TAG=haproxy-88012c28-2cc8-46ef-a3e3-1589170b67c0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/88012c28-2cc8-46ef-a3e3-1589170b67c0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:59:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:59:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:27.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.643 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846367.6429102, 65660f76-4c1d-4057-9be2-1ced558ca9e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.644 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] VM Started (Lifecycle Event)
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.684 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.688 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846367.6440852, 65660f76-4c1d-4057-9be2-1ced558ca9e2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.689 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] VM Paused (Lifecycle Event)
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.720 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.723 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:59:27 np0005603609 nova_compute[221550]: 2026-01-31 07:59:27.756 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:59:27 np0005603609 podman[253461]: 2026-01-31 07:59:27.759793915 +0000 UTC m=+0.022926733 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:59:27 np0005603609 podman[253461]: 2026-01-31 07:59:27.896758542 +0000 UTC m=+0.159891340 container create ff4f7d9b7dcabb3f2fa0de9467380c2564646e36a4246fedc5ae8f874c4ad6dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 02:59:27 np0005603609 systemd[1]: Started libpod-conmon-ff4f7d9b7dcabb3f2fa0de9467380c2564646e36a4246fedc5ae8f874c4ad6dd.scope.
Jan 31 02:59:27 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:59:27 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc15d9cba15afbc6d331669bb67e16e99412d915b4a4e68cc27be07c3259483/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:59:28 np0005603609 podman[253461]: 2026-01-31 07:59:27.999887155 +0000 UTC m=+0.263019983 container init ff4f7d9b7dcabb3f2fa0de9467380c2564646e36a4246fedc5ae8f874c4ad6dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:59:28 np0005603609 podman[253461]: 2026-01-31 07:59:28.007434877 +0000 UTC m=+0.270567715 container start ff4f7d9b7dcabb3f2fa0de9467380c2564646e36a4246fedc5ae8f874c4ad6dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 02:59:28 np0005603609 neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0[253476]: [NOTICE]   (253480) : New worker (253482) forked
Jan 31 02:59:28 np0005603609 neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0[253476]: [NOTICE]   (253480) : Loading success.
Jan 31 02:59:28 np0005603609 nova_compute[221550]: 2026-01-31 07:59:28.377 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:59:28 np0005603609 nova_compute[221550]: 2026-01-31 07:59:28.792 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:59:28 np0005603609 nova_compute[221550]: 2026-01-31 07:59:28.857 221554 DEBUG nova.network.neutron [req-97b0abab-3ef9-4017-850c-ae793b47753d req-bf35a162-517c-4cfc-9093-5013a1a5feba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Updated VIF entry in instance network info cache for port b11ec066-c722-43dc-8ff2-3aa8f300c14c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 02:59:28 np0005603609 nova_compute[221550]: 2026-01-31 07:59:28.857 221554 DEBUG nova.network.neutron [req-97b0abab-3ef9-4017-850c-ae793b47753d req-bf35a162-517c-4cfc-9093-5013a1a5feba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Updating instance_info_cache with network_info: [{"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:59:28 np0005603609 nova_compute[221550]: 2026-01-31 07:59:28.930 221554 DEBUG oslo_concurrency.lockutils [req-97b0abab-3ef9-4017-850c-ae793b47753d req-bf35a162-517c-4cfc-9093-5013a1a5feba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-65660f76-4c1d-4057-9be2-1ced558ca9e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:59:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:29.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:29.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:30 np0005603609 nova_compute[221550]: 2026-01-31 07:59:30.650 221554 DEBUG nova.compute.manager [req-b436f2fc-9353-4cb9-b0b3-776dce73a282 req-42636c76-cf1e-4781-89cb-968346a73d9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received event network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:59:30 np0005603609 nova_compute[221550]: 2026-01-31 07:59:30.651 221554 DEBUG oslo_concurrency.lockutils [req-b436f2fc-9353-4cb9-b0b3-776dce73a282 req-42636c76-cf1e-4781-89cb-968346a73d9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:59:30 np0005603609 nova_compute[221550]: 2026-01-31 07:59:30.651 221554 DEBUG oslo_concurrency.lockutils [req-b436f2fc-9353-4cb9-b0b3-776dce73a282 req-42636c76-cf1e-4781-89cb-968346a73d9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:59:30 np0005603609 nova_compute[221550]: 2026-01-31 07:59:30.652 221554 DEBUG oslo_concurrency.lockutils [req-b436f2fc-9353-4cb9-b0b3-776dce73a282 req-42636c76-cf1e-4781-89cb-968346a73d9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:59:30 np0005603609 nova_compute[221550]: 2026-01-31 07:59:30.652 221554 DEBUG nova.compute.manager [req-b436f2fc-9353-4cb9-b0b3-776dce73a282 req-42636c76-cf1e-4781-89cb-968346a73d9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Processing event network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 02:59:30 np0005603609 nova_compute[221550]: 2026-01-31 07:59:30.653 221554 DEBUG nova.compute.manager [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:59:30 np0005603609 nova_compute[221550]: 2026-01-31 07:59:30.658 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846370.6583583, 65660f76-4c1d-4057-9be2-1ced558ca9e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:59:30 np0005603609 nova_compute[221550]: 2026-01-31 07:59:30.659 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] VM Resumed (Lifecycle Event)
Jan 31 02:59:30 np0005603609 nova_compute[221550]: 2026-01-31 07:59:30.661 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 02:59:30 np0005603609 nova_compute[221550]: 2026-01-31 07:59:30.664 221554 INFO nova.virt.libvirt.driver [-] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Instance spawned successfully.
Jan 31 02:59:30 np0005603609 nova_compute[221550]: 2026-01-31 07:59:30.664 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 02:59:31 np0005603609 nova_compute[221550]: 2026-01-31 07:59:31.021 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:59:31 np0005603609 nova_compute[221550]: 2026-01-31 07:59:31.025 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:59:31 np0005603609 nova_compute[221550]: 2026-01-31 07:59:31.026 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:59:31 np0005603609 nova_compute[221550]: 2026-01-31 07:59:31.026 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:59:31 np0005603609 nova_compute[221550]: 2026-01-31 07:59:31.027 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:59:31 np0005603609 nova_compute[221550]: 2026-01-31 07:59:31.027 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:59:31 np0005603609 nova_compute[221550]: 2026-01-31 07:59:31.028 221554 DEBUG nova.virt.libvirt.driver [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:59:31 np0005603609 nova_compute[221550]: 2026-01-31 07:59:31.033 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:59:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:31.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:31 np0005603609 nova_compute[221550]: 2026-01-31 07:59:31.408 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:59:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:31.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:31 np0005603609 nova_compute[221550]: 2026-01-31 07:59:31.646 221554 INFO nova.compute.manager [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Took 19.19 seconds to spawn the instance on the hypervisor.
Jan 31 02:59:31 np0005603609 nova_compute[221550]: 2026-01-31 07:59:31.647 221554 DEBUG nova.compute.manager [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:59:31 np0005603609 nova_compute[221550]: 2026-01-31 07:59:31.785 221554 INFO nova.compute.manager [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Took 20.40 seconds to build instance.
Jan 31 02:59:31 np0005603609 nova_compute[221550]: 2026-01-31 07:59:31.846 221554 DEBUG oslo_concurrency.lockutils [None req-7b0e65db-02c9-4fe0-a775-250309ab6637 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:59:32 np0005603609 nova_compute[221550]: 2026-01-31 07:59:32.797 221554 DEBUG nova.compute.manager [req-74fdf728-82a9-4711-8e32-9e145f10c2e9 req-01ae350e-693d-4132-b1d3-7e0011a1426b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received event network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:59:32 np0005603609 nova_compute[221550]: 2026-01-31 07:59:32.797 221554 DEBUG oslo_concurrency.lockutils [req-74fdf728-82a9-4711-8e32-9e145f10c2e9 req-01ae350e-693d-4132-b1d3-7e0011a1426b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:59:32 np0005603609 nova_compute[221550]: 2026-01-31 07:59:32.797 221554 DEBUG oslo_concurrency.lockutils [req-74fdf728-82a9-4711-8e32-9e145f10c2e9 req-01ae350e-693d-4132-b1d3-7e0011a1426b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:59:32 np0005603609 nova_compute[221550]: 2026-01-31 07:59:32.797 221554 DEBUG oslo_concurrency.lockutils [req-74fdf728-82a9-4711-8e32-9e145f10c2e9 req-01ae350e-693d-4132-b1d3-7e0011a1426b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:59:32 np0005603609 nova_compute[221550]: 2026-01-31 07:59:32.798 221554 DEBUG nova.compute.manager [req-74fdf728-82a9-4711-8e32-9e145f10c2e9 req-01ae350e-693d-4132-b1d3-7e0011a1426b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] No waiting events found dispatching network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:59:32 np0005603609 nova_compute[221550]: 2026-01-31 07:59:32.798 221554 WARNING nova.compute.manager [req-74fdf728-82a9-4711-8e32-9e145f10c2e9 req-01ae350e-693d-4132-b1d3-7e0011a1426b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received unexpected event network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c for instance with vm_state active and task_state None.
Jan 31 02:59:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 02:59:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:33.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 02:59:33 np0005603609 nova_compute[221550]: 2026-01-31 07:59:33.421 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:59:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:33.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:33 np0005603609 nova_compute[221550]: 2026-01-31 07:59:33.794 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:59:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:59:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:35.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:59:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:35.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:36 np0005603609 nova_compute[221550]: 2026-01-31 07:59:36.450 221554 DEBUG oslo_concurrency.lockutils [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Acquiring lock "65660f76-4c1d-4057-9be2-1ced558ca9e2" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:59:36 np0005603609 nova_compute[221550]: 2026-01-31 07:59:36.450 221554 DEBUG oslo_concurrency.lockutils [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:59:36 np0005603609 nova_compute[221550]: 2026-01-31 07:59:36.451 221554 INFO nova.compute.manager [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Rebooting instance
Jan 31 02:59:36 np0005603609 nova_compute[221550]: 2026-01-31 07:59:36.499 221554 DEBUG oslo_concurrency.lockutils [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Acquiring lock "refresh_cache-65660f76-4c1d-4057-9be2-1ced558ca9e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:59:36 np0005603609 nova_compute[221550]: 2026-01-31 07:59:36.499 221554 DEBUG oslo_concurrency.lockutils [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Acquired lock "refresh_cache-65660f76-4c1d-4057-9be2-1ced558ca9e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:59:36 np0005603609 nova_compute[221550]: 2026-01-31 07:59:36.500 221554 DEBUG nova.network.neutron [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 02:59:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:37.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:37.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:59:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:59:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:59:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:59:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:59:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:59:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:59:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:59:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:59:38 np0005603609 nova_compute[221550]: 2026-01-31 07:59:38.422 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:59:38 np0005603609 nova_compute[221550]: 2026-01-31 07:59:38.797 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:59:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:39.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:59:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:39.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:59:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:40 np0005603609 nova_compute[221550]: 2026-01-31 07:59:40.476 221554 DEBUG nova.network.neutron [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Updating instance_info_cache with network_info: [{"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:59:40 np0005603609 nova_compute[221550]: 2026-01-31 07:59:40.528 221554 DEBUG oslo_concurrency.lockutils [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Releasing lock "refresh_cache-65660f76-4c1d-4057-9be2-1ced558ca9e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:59:40 np0005603609 nova_compute[221550]: 2026-01-31 07:59:40.529 221554 DEBUG nova.compute.manager [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:59:40 np0005603609 kernel: tapb11ec066-c7 (unregistering): left promiscuous mode
Jan 31 02:59:40 np0005603609 NetworkManager[49064]: <info>  [1769846380.8258] device (tapb11ec066-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:59:40 np0005603609 nova_compute[221550]: 2026-01-31 07:59:40.827 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:40 np0005603609 ovn_controller[130359]: 2026-01-31T07:59:40Z|00294|binding|INFO|Releasing lport b11ec066-c722-43dc-8ff2-3aa8f300c14c from this chassis (sb_readonly=0)
Jan 31 02:59:40 np0005603609 ovn_controller[130359]: 2026-01-31T07:59:40Z|00295|binding|INFO|Setting lport b11ec066-c722-43dc-8ff2-3aa8f300c14c down in Southbound
Jan 31 02:59:40 np0005603609 nova_compute[221550]: 2026-01-31 07:59:40.833 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:40 np0005603609 ovn_controller[130359]: 2026-01-31T07:59:40Z|00296|binding|INFO|Removing iface tapb11ec066-c7 ovn-installed in OVS
Jan 31 02:59:40 np0005603609 nova_compute[221550]: 2026-01-31 07:59:40.834 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:40 np0005603609 nova_compute[221550]: 2026-01-31 07:59:40.838 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:40 np0005603609 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000056.scope: Deactivated successfully.
Jan 31 02:59:40 np0005603609 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000056.scope: Consumed 10.760s CPU time.
Jan 31 02:59:40 np0005603609 systemd-machined[190912]: Machine qemu-38-instance-00000056 terminated.
Jan 31 02:59:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:40.889 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:a7:01 10.100.0.3'], port_security=['fa:16:3e:a7:a7:01 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '65660f76-4c1d-4057-9be2-1ced558ca9e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88012c28-2cc8-46ef-a3e3-1589170b67c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18503ca80bcc4b588e398b4f03f7908b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f478e482-3c6f-43fe-ac92-0fd53d2caff4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e491a564-bfa2-4605-adee-6a1b7cb1585c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=b11ec066-c722-43dc-8ff2-3aa8f300c14c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:59:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:40.891 140058 INFO neutron.agent.ovn.metadata.agent [-] Port b11ec066-c722-43dc-8ff2-3aa8f300c14c in datapath 88012c28-2cc8-46ef-a3e3-1589170b67c0 unbound from our chassis#033[00m
Jan 31 02:59:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:40.892 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88012c28-2cc8-46ef-a3e3-1589170b67c0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:59:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:40.893 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e12357c6-159b-4da2-b446-2c72d813275c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:40.894 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0 namespace which is not needed anymore#033[00m
Jan 31 02:59:40 np0005603609 nova_compute[221550]: 2026-01-31 07:59:40.955 221554 INFO nova.virt.libvirt.driver [-] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Instance destroyed successfully.#033[00m
Jan 31 02:59:40 np0005603609 nova_compute[221550]: 2026-01-31 07:59:40.956 221554 DEBUG nova.objects.instance [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lazy-loading 'resources' on Instance uuid 65660f76-4c1d-4057-9be2-1ced558ca9e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.014 221554 DEBUG nova.virt.libvirt.vif [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:59:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-783386843',display_name='tempest-InstanceActionsTestJSON-server-783386843',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-783386843',id=86,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:59:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='18503ca80bcc4b588e398b4f03f7908b',ramdisk_id='',reservation_id='r-d0q3szj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-62779839',owner_user_name='tempest-InstanceActionsTestJSON-62779839-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:59:40Z,user_data=None,user_id='fb422aa2270c43fa9ecb0da10968f867',uuid=65660f76-4c1d-4057-9be2-1ced558ca9e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.014 221554 DEBUG nova.network.os_vif_util [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Converting VIF {"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.015 221554 DEBUG nova.network.os_vif_util [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a7:a7:01,bridge_name='br-int',has_traffic_filtering=True,id=b11ec066-c722-43dc-8ff2-3aa8f300c14c,network=Network(88012c28-2cc8-46ef-a3e3-1589170b67c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb11ec066-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.016 221554 DEBUG os_vif [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:a7:01,bridge_name='br-int',has_traffic_filtering=True,id=b11ec066-c722-43dc-8ff2-3aa8f300c14c,network=Network(88012c28-2cc8-46ef-a3e3-1589170b67c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb11ec066-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.017 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.017 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb11ec066-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.019 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.020 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.057 221554 INFO os_vif [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:a7:01,bridge_name='br-int',has_traffic_filtering=True,id=b11ec066-c722-43dc-8ff2-3aa8f300c14c,network=Network(88012c28-2cc8-46ef-a3e3-1589170b67c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb11ec066-c7')#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.063 221554 DEBUG nova.virt.libvirt.driver [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Start _get_guest_xml network_info=[{"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.066 221554 WARNING nova.virt.libvirt.driver [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.070 221554 DEBUG nova.virt.libvirt.host [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.071 221554 DEBUG nova.virt.libvirt.host [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.074 221554 DEBUG nova.virt.libvirt.host [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.074 221554 DEBUG nova.virt.libvirt.host [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.075 221554 DEBUG nova.virt.libvirt.driver [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.075 221554 DEBUG nova.virt.hardware [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.076 221554 DEBUG nova.virt.hardware [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.076 221554 DEBUG nova.virt.hardware [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.076 221554 DEBUG nova.virt.hardware [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.077 221554 DEBUG nova.virt.hardware [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.077 221554 DEBUG nova.virt.hardware [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.077 221554 DEBUG nova.virt.hardware [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.077 221554 DEBUG nova.virt.hardware [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.078 221554 DEBUG nova.virt.hardware [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.078 221554 DEBUG nova.virt.hardware [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.078 221554 DEBUG nova.virt.hardware [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.078 221554 DEBUG nova.objects.instance [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 65660f76-4c1d-4057-9be2-1ced558ca9e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:41 np0005603609 neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0[253476]: [NOTICE]   (253480) : haproxy version is 2.8.14-c23fe91
Jan 31 02:59:41 np0005603609 neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0[253476]: [NOTICE]   (253480) : path to executable is /usr/sbin/haproxy
Jan 31 02:59:41 np0005603609 neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0[253476]: [WARNING]  (253480) : Exiting Master process...
Jan 31 02:59:41 np0005603609 neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0[253476]: [ALERT]    (253480) : Current worker (253482) exited with code 143 (Terminated)
Jan 31 02:59:41 np0005603609 neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0[253476]: [WARNING]  (253480) : All workers exited. Exiting... (0)
Jan 31 02:59:41 np0005603609 systemd[1]: libpod-ff4f7d9b7dcabb3f2fa0de9467380c2564646e36a4246fedc5ae8f874c4ad6dd.scope: Deactivated successfully.
Jan 31 02:59:41 np0005603609 podman[253779]: 2026-01-31 07:59:41.097236481 +0000 UTC m=+0.119343383 container died ff4f7d9b7dcabb3f2fa0de9467380c2564646e36a4246fedc5ae8f874c4ad6dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.111 221554 DEBUG oslo_concurrency.processutils [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:41 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff4f7d9b7dcabb3f2fa0de9467380c2564646e36a4246fedc5ae8f874c4ad6dd-userdata-shm.mount: Deactivated successfully.
Jan 31 02:59:41 np0005603609 systemd[1]: var-lib-containers-storage-overlay-4dc15d9cba15afbc6d331669bb67e16e99412d915b4a4e68cc27be07c3259483-merged.mount: Deactivated successfully.
Jan 31 02:59:41 np0005603609 podman[253779]: 2026-01-31 07:59:41.140175325 +0000 UTC m=+0.162282237 container cleanup ff4f7d9b7dcabb3f2fa0de9467380c2564646e36a4246fedc5ae8f874c4ad6dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 02:59:41 np0005603609 systemd[1]: libpod-conmon-ff4f7d9b7dcabb3f2fa0de9467380c2564646e36a4246fedc5ae8f874c4ad6dd.scope: Deactivated successfully.
Jan 31 02:59:41 np0005603609 podman[253812]: 2026-01-31 07:59:41.19686955 +0000 UTC m=+0.040818184 container remove ff4f7d9b7dcabb3f2fa0de9467380c2564646e36a4246fedc5ae8f874c4ad6dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 02:59:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:41.201 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[080050d4-2e42-4420-836d-3c9296512dbe]: (4, ('Sat Jan 31 07:59:40 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0 (ff4f7d9b7dcabb3f2fa0de9467380c2564646e36a4246fedc5ae8f874c4ad6dd)\nff4f7d9b7dcabb3f2fa0de9467380c2564646e36a4246fedc5ae8f874c4ad6dd\nSat Jan 31 07:59:41 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0 (ff4f7d9b7dcabb3f2fa0de9467380c2564646e36a4246fedc5ae8f874c4ad6dd)\nff4f7d9b7dcabb3f2fa0de9467380c2564646e36a4246fedc5ae8f874c4ad6dd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:41.204 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b21051-b45d-4b8a-b9b8-daa8c3744cce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:41.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:41.206 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88012c28-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:41 np0005603609 kernel: tap88012c28-20: left promiscuous mode
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.209 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.212 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:41.218 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9a716c14-8c37-49d2-b5b0-ded18047480e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:41.235 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d0039d60-180e-4dde-9554-ec98976fc423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:41.238 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6fcea3-0042-4aa1-91c8-6be6f2d5f7e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:41.251 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d1187c-d9ef-4f7e-9f6d-14471992b173]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660693, 'reachable_time': 35520, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253845, 'error': None, 'target': 'ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:41.254 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:59:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:41.255 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[2011ee55-4237-4798-8440-aa3abd73392a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:41 np0005603609 systemd[1]: run-netns-ovnmeta\x2d88012c28\x2d2cc8\x2d46ef\x2da3e3\x2d1589170b67c0.mount: Deactivated successfully.
Jan 31 02:59:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:59:41 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2876323651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.549 221554 DEBUG oslo_concurrency.processutils [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.583 221554 DEBUG oslo_concurrency.processutils [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:41.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:59:41 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3586046035' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.967 221554 DEBUG oslo_concurrency.processutils [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.968 221554 DEBUG nova.virt.libvirt.vif [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:59:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-783386843',display_name='tempest-InstanceActionsTestJSON-server-783386843',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-783386843',id=86,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:59:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='18503ca80bcc4b588e398b4f03f7908b',ramdisk_id='',reservation_id='r-d0q3szj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-62779839',owner_user_name='tempest-InstanceActionsTestJSON-62779839-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:59:40Z,user_data=None,user_id='fb422aa2270c43fa9ecb0da10968f867',uuid=65660f76-4c1d-4057-9be2-1ced558ca9e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.969 221554 DEBUG nova.network.os_vif_util [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Converting VIF {"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.969 221554 DEBUG nova.network.os_vif_util [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a7:a7:01,bridge_name='br-int',has_traffic_filtering=True,id=b11ec066-c722-43dc-8ff2-3aa8f300c14c,network=Network(88012c28-2cc8-46ef-a3e3-1589170b67c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb11ec066-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:59:41 np0005603609 nova_compute[221550]: 2026-01-31 07:59:41.970 221554 DEBUG nova.objects.instance [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lazy-loading 'pci_devices' on Instance uuid 65660f76-4c1d-4057-9be2-1ced558ca9e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.097 221554 DEBUG nova.virt.libvirt.driver [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  <uuid>65660f76-4c1d-4057-9be2-1ced558ca9e2</uuid>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  <name>instance-00000056</name>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <nova:name>tempest-InstanceActionsTestJSON-server-783386843</nova:name>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 07:59:41</nova:creationTime>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        <nova:user uuid="fb422aa2270c43fa9ecb0da10968f867">tempest-InstanceActionsTestJSON-62779839-project-member</nova:user>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        <nova:project uuid="18503ca80bcc4b588e398b4f03f7908b">tempest-InstanceActionsTestJSON-62779839</nova:project>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        <nova:port uuid="b11ec066-c722-43dc-8ff2-3aa8f300c14c">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <system>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <entry name="serial">65660f76-4c1d-4057-9be2-1ced558ca9e2</entry>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <entry name="uuid">65660f76-4c1d-4057-9be2-1ced558ca9e2</entry>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    </system>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  <os>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  </os>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  <features>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  </features>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  </clock>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  <devices>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/65660f76-4c1d-4057-9be2-1ced558ca9e2_disk">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/65660f76-4c1d-4057-9be2-1ced558ca9e2_disk.config">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      </source>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      </auth>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    </disk>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:a7:a7:01"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <target dev="tapb11ec066-c7"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    </interface>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/65660f76-4c1d-4057-9be2-1ced558ca9e2/console.log" append="off"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    </serial>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <video>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    </video>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <input type="keyboard" bus="usb"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    </rng>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 02:59:42 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 02:59:42 np0005603609 nova_compute[221550]:  </devices>
Jan 31 02:59:42 np0005603609 nova_compute[221550]: </domain>
Jan 31 02:59:42 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.099 221554 DEBUG nova.virt.libvirt.driver [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.099 221554 DEBUG nova.virt.libvirt.driver [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.100 221554 DEBUG nova.virt.libvirt.vif [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:59:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-783386843',display_name='tempest-InstanceActionsTestJSON-server-783386843',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-783386843',id=86,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:59:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='18503ca80bcc4b588e398b4f03f7908b',ramdisk_id='',reservation_id='r-d0q3szj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-62779839',owner_user_name='tempest-InstanceActionsTestJSON-62779839-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:59:40Z,user_data=None,user_id='fb422aa2270c43fa9ecb0da10968f867',uuid=65660f76-4c1d-4057-9be2-1ced558ca9e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.101 221554 DEBUG nova.network.os_vif_util [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Converting VIF {"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.104 221554 DEBUG nova.network.os_vif_util [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a7:a7:01,bridge_name='br-int',has_traffic_filtering=True,id=b11ec066-c722-43dc-8ff2-3aa8f300c14c,network=Network(88012c28-2cc8-46ef-a3e3-1589170b67c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb11ec066-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.104 221554 DEBUG os_vif [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:a7:01,bridge_name='br-int',has_traffic_filtering=True,id=b11ec066-c722-43dc-8ff2-3aa8f300c14c,network=Network(88012c28-2cc8-46ef-a3e3-1589170b67c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb11ec066-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.105 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.105 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.106 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.108 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.109 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb11ec066-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.109 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb11ec066-c7, col_values=(('external_ids', {'iface-id': 'b11ec066-c722-43dc-8ff2-3aa8f300c14c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:a7:01', 'vm-uuid': '65660f76-4c1d-4057-9be2-1ced558ca9e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.110 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:42 np0005603609 NetworkManager[49064]: <info>  [1769846382.1119] manager: (tapb11ec066-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.113 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.115 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.116 221554 INFO os_vif [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:a7:01,bridge_name='br-int',has_traffic_filtering=True,id=b11ec066-c722-43dc-8ff2-3aa8f300c14c,network=Network(88012c28-2cc8-46ef-a3e3-1589170b67c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb11ec066-c7')#033[00m
Jan 31 02:59:42 np0005603609 kernel: tapb11ec066-c7: entered promiscuous mode
Jan 31 02:59:42 np0005603609 NetworkManager[49064]: <info>  [1769846382.1930] manager: (tapb11ec066-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/146)
Jan 31 02:59:42 np0005603609 systemd-udevd[253747]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.193 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:42 np0005603609 ovn_controller[130359]: 2026-01-31T07:59:42Z|00297|binding|INFO|Claiming lport b11ec066-c722-43dc-8ff2-3aa8f300c14c for this chassis.
Jan 31 02:59:42 np0005603609 ovn_controller[130359]: 2026-01-31T07:59:42Z|00298|binding|INFO|b11ec066-c722-43dc-8ff2-3aa8f300c14c: Claiming fa:16:3e:a7:a7:01 10.100.0.3
Jan 31 02:59:42 np0005603609 ovn_controller[130359]: 2026-01-31T07:59:42Z|00299|binding|INFO|Setting lport b11ec066-c722-43dc-8ff2-3aa8f300c14c ovn-installed in OVS
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.201 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.205 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:42 np0005603609 NetworkManager[49064]: <info>  [1769846382.2072] device (tapb11ec066-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:59:42 np0005603609 NetworkManager[49064]: <info>  [1769846382.2081] device (tapb11ec066-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.211 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:42 np0005603609 systemd-machined[190912]: New machine qemu-39-instance-00000056.
Jan 31 02:59:42 np0005603609 systemd[1]: Started Virtual Machine qemu-39-instance-00000056.
Jan 31 02:59:42 np0005603609 ovn_controller[130359]: 2026-01-31T07:59:42Z|00300|binding|INFO|Setting lport b11ec066-c722-43dc-8ff2-3aa8f300c14c up in Southbound
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.315 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:a7:01 10.100.0.3'], port_security=['fa:16:3e:a7:a7:01 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '65660f76-4c1d-4057-9be2-1ced558ca9e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88012c28-2cc8-46ef-a3e3-1589170b67c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18503ca80bcc4b588e398b4f03f7908b', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f478e482-3c6f-43fe-ac92-0fd53d2caff4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e491a564-bfa2-4605-adee-6a1b7cb1585c, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=b11ec066-c722-43dc-8ff2-3aa8f300c14c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.316 140058 INFO neutron.agent.ovn.metadata.agent [-] Port b11ec066-c722-43dc-8ff2-3aa8f300c14c in datapath 88012c28-2cc8-46ef-a3e3-1589170b67c0 bound to our chassis#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.318 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88012c28-2cc8-46ef-a3e3-1589170b67c0#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.324 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7249bf22-293c-4ebb-9424-1c96d0199389]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.325 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap88012c28-21 in ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.328 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap88012c28-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.328 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2673f3d5-4dbb-40eb-bff5-1e25e9bf59fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.329 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[17232586-73cb-4886-94ae-e9a05ebda960]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.339 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[284b9f12-79cd-4a11-a6ff-ab7c957b1ff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.348 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[54708c42-0c6a-42a7-84a1-9e5776429100]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.368 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b69d207a-03ae-49dc-9756-6d8529cee860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.373 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6aab6efe-d3d6-42fb-9eab-2f0e42f8e58d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:42 np0005603609 NetworkManager[49064]: <info>  [1769846382.3750] manager: (tap88012c28-20): new Veth device (/org/freedesktop/NetworkManager/Devices/147)
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.398 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7ab1fc-5a0f-4b57-b598-4e2832836088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.401 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[2bee3133-84fa-4c9b-a65d-f0e7c03f2f9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:42 np0005603609 NetworkManager[49064]: <info>  [1769846382.4189] device (tap88012c28-20): carrier: link connected
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.424 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[653cf43f-c163-4408-a280-57ee5abe12f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.437 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f6370662-8f96-4317-a3b5-48f427df688f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88012c28-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:7f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662211, 'reachable_time': 26828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253933, 'error': None, 'target': 'ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.449 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8549c7dd-dfec-47b1-82ec-e701fa920466]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe49:7fae'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662211, 'tstamp': 662211}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253934, 'error': None, 'target': 'ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.465 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[408bb1c0-385b-489b-930a-1c604455f37f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88012c28-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:7f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662211, 'reachable_time': 26828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253935, 'error': None, 'target': 'ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.487 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1e423a77-4541-43e1-8636-4fdcace5e707]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.540 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9b8da0-1bb6-4b36-8390-ff900d0176ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.542 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88012c28-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.542 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.543 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88012c28-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.545 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:42 np0005603609 NetworkManager[49064]: <info>  [1769846382.5460] manager: (tap88012c28-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Jan 31 02:59:42 np0005603609 kernel: tap88012c28-20: entered promiscuous mode
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.550 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.550 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88012c28-20, col_values=(('external_ids', {'iface-id': '4b49eb8a-82a2-420f-8ab9-52025af81cc8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.551 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.555 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.555 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/88012c28-2cc8-46ef-a3e3-1589170b67c0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/88012c28-2cc8-46ef-a3e3-1589170b67c0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:59:42 np0005603609 ovn_controller[130359]: 2026-01-31T07:59:42Z|00301|binding|INFO|Releasing lport 4b49eb8a-82a2-420f-8ab9-52025af81cc8 from this chassis (sb_readonly=0)
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.556 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9f271bb7-4eb0-4b21-818d-e1b8f796461a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.558 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-88012c28-2cc8-46ef-a3e3-1589170b67c0
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/88012c28-2cc8-46ef-a3e3-1589170b67c0.pid.haproxy
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 88012c28-2cc8-46ef-a3e3-1589170b67c0
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:59:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:42.559 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0', 'env', 'PROCESS_TAG=haproxy-88012c28-2cc8-46ef-a3e3-1589170b67c0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/88012c28-2cc8-46ef-a3e3-1589170b67c0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:59:42 np0005603609 nova_compute[221550]: 2026-01-31 07:59:42.565 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:42 np0005603609 podman[253968]: 2026-01-31 07:59:42.898503736 +0000 UTC m=+0.046772928 container create 674c229373fc7544498aca0be63bcb790587d41dd2d6778ff5fd00468386a2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:59:42 np0005603609 systemd[1]: Started libpod-conmon-674c229373fc7544498aca0be63bcb790587d41dd2d6778ff5fd00468386a2b7.scope.
Jan 31 02:59:42 np0005603609 systemd[1]: Started libcrun container.
Jan 31 02:59:42 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e6e80c52306d28031a5ea91605d8bc7b2ca3653d52aa24c4e84c668e0e9e9f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:59:42 np0005603609 podman[253968]: 2026-01-31 07:59:42.875240795 +0000 UTC m=+0.023510007 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:59:42 np0005603609 podman[253968]: 2026-01-31 07:59:42.982052186 +0000 UTC m=+0.130321408 container init 674c229373fc7544498aca0be63bcb790587d41dd2d6778ff5fd00468386a2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 31 02:59:42 np0005603609 podman[253968]: 2026-01-31 07:59:42.986377881 +0000 UTC m=+0.134647073 container start 674c229373fc7544498aca0be63bcb790587d41dd2d6778ff5fd00468386a2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 02:59:43 np0005603609 neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0[253984]: [NOTICE]   (253988) : New worker (253990) forked
Jan 31 02:59:43 np0005603609 neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0[253984]: [NOTICE]   (253988) : Loading success.
Jan 31 02:59:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:43.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.229 221554 DEBUG nova.compute.manager [req-2d701d0c-cbfa-4797-a41c-cf4d27904086 req-4ee7e764-b50d-4140-8f19-e326febc6d3b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received event network-vif-unplugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.229 221554 DEBUG oslo_concurrency.lockutils [req-2d701d0c-cbfa-4797-a41c-cf4d27904086 req-4ee7e764-b50d-4140-8f19-e326febc6d3b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.230 221554 DEBUG oslo_concurrency.lockutils [req-2d701d0c-cbfa-4797-a41c-cf4d27904086 req-4ee7e764-b50d-4140-8f19-e326febc6d3b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.230 221554 DEBUG oslo_concurrency.lockutils [req-2d701d0c-cbfa-4797-a41c-cf4d27904086 req-4ee7e764-b50d-4140-8f19-e326febc6d3b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.230 221554 DEBUG nova.compute.manager [req-2d701d0c-cbfa-4797-a41c-cf4d27904086 req-4ee7e764-b50d-4140-8f19-e326febc6d3b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] No waiting events found dispatching network-vif-unplugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.230 221554 WARNING nova.compute.manager [req-2d701d0c-cbfa-4797-a41c-cf4d27904086 req-4ee7e764-b50d-4140-8f19-e326febc6d3b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received unexpected event network-vif-unplugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c for instance with vm_state active and task_state reboot_started_hard.#033[00m
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.424 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.561 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 65660f76-4c1d-4057-9be2-1ced558ca9e2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.562 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846383.561142, 65660f76-4c1d-4057-9be2-1ced558ca9e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.562 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.564 221554 DEBUG nova.compute.manager [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.567 221554 INFO nova.virt.libvirt.driver [-] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Instance rebooted successfully.#033[00m
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.567 221554 DEBUG nova.compute.manager [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:59:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:43.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.671 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.675 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.944 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.945 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846383.5631003, 65660f76-4c1d-4057-9be2-1ced558ca9e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:59:43 np0005603609 nova_compute[221550]: 2026-01-31 07:59:43.945 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] VM Started (Lifecycle Event)#033[00m
Jan 31 02:59:44 np0005603609 nova_compute[221550]: 2026-01-31 07:59:44.010 221554 DEBUG oslo_concurrency.lockutils [None req-f2d1d40b-8b9b-4a06-aa31-b2daba2260af fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:44 np0005603609 nova_compute[221550]: 2026-01-31 07:59:44.035 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:59:44 np0005603609 nova_compute[221550]: 2026-01-31 07:59:44.040 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:59:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:45.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.564 221554 DEBUG nova.compute.manager [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received event network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.564 221554 DEBUG oslo_concurrency.lockutils [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.565 221554 DEBUG oslo_concurrency.lockutils [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.565 221554 DEBUG oslo_concurrency.lockutils [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.566 221554 DEBUG nova.compute.manager [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] No waiting events found dispatching network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.566 221554 WARNING nova.compute.manager [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received unexpected event network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c for instance with vm_state active and task_state None.#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.566 221554 DEBUG nova.compute.manager [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received event network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.566 221554 DEBUG oslo_concurrency.lockutils [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.567 221554 DEBUG oslo_concurrency.lockutils [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.567 221554 DEBUG oslo_concurrency.lockutils [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.567 221554 DEBUG nova.compute.manager [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] No waiting events found dispatching network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.567 221554 WARNING nova.compute.manager [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received unexpected event network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c for instance with vm_state active and task_state None.#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.568 221554 DEBUG nova.compute.manager [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received event network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.568 221554 DEBUG oslo_concurrency.lockutils [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.568 221554 DEBUG oslo_concurrency.lockutils [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.568 221554 DEBUG oslo_concurrency.lockutils [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.568 221554 DEBUG nova.compute.manager [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] No waiting events found dispatching network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:59:45 np0005603609 nova_compute[221550]: 2026-01-31 07:59:45.569 221554 WARNING nova.compute.manager [req-d2d876fd-2d60-4a6a-bbdd-da00fa32ec3e req-b4c66dbe-75a7-4bb5-9c78-7f2a7d9ed3c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received unexpected event network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c for instance with vm_state active and task_state None.#033[00m
Jan 31 02:59:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:59:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:45.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.111 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.200 221554 DEBUG oslo_concurrency.lockutils [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Acquiring lock "65660f76-4c1d-4057-9be2-1ced558ca9e2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.201 221554 DEBUG oslo_concurrency.lockutils [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.201 221554 DEBUG oslo_concurrency.lockutils [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Acquiring lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.202 221554 DEBUG oslo_concurrency.lockutils [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.202 221554 DEBUG oslo_concurrency.lockutils [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.203 221554 INFO nova.compute.manager [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Terminating instance#033[00m
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.205 221554 DEBUG nova.compute.manager [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:59:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:59:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:47.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:59:47 np0005603609 kernel: tapb11ec066-c7 (unregistering): left promiscuous mode
Jan 31 02:59:47 np0005603609 NetworkManager[49064]: <info>  [1769846387.3140] device (tapb11ec066-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:59:47 np0005603609 ovn_controller[130359]: 2026-01-31T07:59:47Z|00302|binding|INFO|Releasing lport b11ec066-c722-43dc-8ff2-3aa8f300c14c from this chassis (sb_readonly=0)
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.323 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:47 np0005603609 ovn_controller[130359]: 2026-01-31T07:59:47Z|00303|binding|INFO|Setting lport b11ec066-c722-43dc-8ff2-3aa8f300c14c down in Southbound
Jan 31 02:59:47 np0005603609 ovn_controller[130359]: 2026-01-31T07:59:47Z|00304|binding|INFO|Removing iface tapb11ec066-c7 ovn-installed in OVS
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.325 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.331 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:47 np0005603609 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000056.scope: Deactivated successfully.
Jan 31 02:59:47 np0005603609 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000056.scope: Consumed 5.160s CPU time.
Jan 31 02:59:47 np0005603609 systemd-machined[190912]: Machine qemu-39-instance-00000056 terminated.
Jan 31 02:59:47 np0005603609 podman[254045]: 2026-01-31 07:59:47.388636202 +0000 UTC m=+0.050573780 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 02:59:47 np0005603609 podman[254043]: 2026-01-31 07:59:47.41722779 +0000 UTC m=+0.079762832 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.423 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.426 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.432 221554 INFO nova.virt.libvirt.driver [-] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Instance destroyed successfully.#033[00m
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.433 221554 DEBUG nova.objects.instance [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lazy-loading 'resources' on Instance uuid 65660f76-4c1d-4057-9be2-1ced558ca9e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:47.569 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:a7:01 10.100.0.3'], port_security=['fa:16:3e:a7:a7:01 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '65660f76-4c1d-4057-9be2-1ced558ca9e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88012c28-2cc8-46ef-a3e3-1589170b67c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18503ca80bcc4b588e398b4f03f7908b', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f478e482-3c6f-43fe-ac92-0fd53d2caff4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e491a564-bfa2-4605-adee-6a1b7cb1585c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=b11ec066-c722-43dc-8ff2-3aa8f300c14c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:59:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:47.571 140058 INFO neutron.agent.ovn.metadata.agent [-] Port b11ec066-c722-43dc-8ff2-3aa8f300c14c in datapath 88012c28-2cc8-46ef-a3e3-1589170b67c0 unbound from our chassis#033[00m
Jan 31 02:59:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:47.574 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88012c28-2cc8-46ef-a3e3-1589170b67c0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:59:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:47.575 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1c9561-27b1-44eb-9baa-913db1463ae2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:47.576 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0 namespace which is not needed anymore#033[00m
Jan 31 02:59:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:59:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:47.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:59:47 np0005603609 neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0[253984]: [NOTICE]   (253988) : haproxy version is 2.8.14-c23fe91
Jan 31 02:59:47 np0005603609 neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0[253984]: [NOTICE]   (253988) : path to executable is /usr/sbin/haproxy
Jan 31 02:59:47 np0005603609 neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0[253984]: [WARNING]  (253988) : Exiting Master process...
Jan 31 02:59:47 np0005603609 neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0[253984]: [WARNING]  (253988) : Exiting Master process...
Jan 31 02:59:47 np0005603609 neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0[253984]: [ALERT]    (253988) : Current worker (253990) exited with code 143 (Terminated)
Jan 31 02:59:47 np0005603609 neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0[253984]: [WARNING]  (253988) : All workers exited. Exiting... (0)
Jan 31 02:59:47 np0005603609 systemd[1]: libpod-674c229373fc7544498aca0be63bcb790587d41dd2d6778ff5fd00468386a2b7.scope: Deactivated successfully.
Jan 31 02:59:47 np0005603609 podman[254119]: 2026-01-31 07:59:47.727592502 +0000 UTC m=+0.055438257 container died 674c229373fc7544498aca0be63bcb790587d41dd2d6778ff5fd00468386a2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:59:47 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-674c229373fc7544498aca0be63bcb790587d41dd2d6778ff5fd00468386a2b7-userdata-shm.mount: Deactivated successfully.
Jan 31 02:59:47 np0005603609 systemd[1]: var-lib-containers-storage-overlay-d5e6e80c52306d28031a5ea91605d8bc7b2ca3653d52aa24c4e84c668e0e9e9f-merged.mount: Deactivated successfully.
Jan 31 02:59:47 np0005603609 podman[254119]: 2026-01-31 07:59:47.851612897 +0000 UTC m=+0.179458592 container cleanup 674c229373fc7544498aca0be63bcb790587d41dd2d6778ff5fd00468386a2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:59:47 np0005603609 systemd[1]: libpod-conmon-674c229373fc7544498aca0be63bcb790587d41dd2d6778ff5fd00468386a2b7.scope: Deactivated successfully.
Jan 31 02:59:47 np0005603609 podman[254201]: 2026-01-31 07:59:47.964896364 +0000 UTC m=+0.089461735 container remove 674c229373fc7544498aca0be63bcb790587d41dd2d6778ff5fd00468386a2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 02:59:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:47.971 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[60aa7138-f03d-468a-97f6-67e9bc825cdb]: (4, ('Sat Jan 31 07:59:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0 (674c229373fc7544498aca0be63bcb790587d41dd2d6778ff5fd00468386a2b7)\n674c229373fc7544498aca0be63bcb790587d41dd2d6778ff5fd00468386a2b7\nSat Jan 31 07:59:47 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0 (674c229373fc7544498aca0be63bcb790587d41dd2d6778ff5fd00468386a2b7)\n674c229373fc7544498aca0be63bcb790587d41dd2d6778ff5fd00468386a2b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:47.973 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[199e54c3-e8aa-4ee7-bf7e-6390f3cfd4f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:47.974 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88012c28-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.976 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:47 np0005603609 kernel: tap88012c28-20: left promiscuous mode
Jan 31 02:59:47 np0005603609 nova_compute[221550]: 2026-01-31 07:59:47.984 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:47.987 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[20b9df29-8895-4bb0-8edd-f60ac7f1a148]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:48.004 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd9e182-0c20-44d8-8bdf-051619551275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:48.006 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4a5d5113-3cef-4aa6-8697-9cff45d47a8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:48.018 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[10a434fa-4d15-42fd-ae96-eb95897973ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662206, 'reachable_time': 36962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254220, 'error': None, 'target': 'ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:48.021 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-88012c28-2cc8-46ef-a3e3-1589170b67c0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:59:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:48.021 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[71e27917-19e7-4834-baaa-2e86a1d16088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:48 np0005603609 systemd[1]: run-netns-ovnmeta\x2d88012c28\x2d2cc8\x2d46ef\x2da3e3\x2d1589170b67c0.mount: Deactivated successfully.
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.149 221554 DEBUG nova.virt.libvirt.vif [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:59:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-783386843',display_name='tempest-InstanceActionsTestJSON-server-783386843',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-783386843',id=86,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:59:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='18503ca80bcc4b588e398b4f03f7908b',ramdisk_id='',reservation_id='r-d0q3szj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-62779839',owner_user_name='tempest-InstanceActionsTestJSON-62779839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:59:43Z,user_data=None,user_id='fb422aa2270c43fa9ecb0da10968f867',uuid=65660f76-4c1d-4057-9be2-1ced558ca9e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.150 221554 DEBUG nova.network.os_vif_util [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Converting VIF {"id": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "address": "fa:16:3e:a7:a7:01", "network": {"id": "88012c28-2cc8-46ef-a3e3-1589170b67c0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1906706305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18503ca80bcc4b588e398b4f03f7908b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb11ec066-c7", "ovs_interfaceid": "b11ec066-c722-43dc-8ff2-3aa8f300c14c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.151 221554 DEBUG nova.network.os_vif_util [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a7:a7:01,bridge_name='br-int',has_traffic_filtering=True,id=b11ec066-c722-43dc-8ff2-3aa8f300c14c,network=Network(88012c28-2cc8-46ef-a3e3-1589170b67c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb11ec066-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.151 221554 DEBUG os_vif [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:a7:01,bridge_name='br-int',has_traffic_filtering=True,id=b11ec066-c722-43dc-8ff2-3aa8f300c14c,network=Network(88012c28-2cc8-46ef-a3e3-1589170b67c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb11ec066-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.153 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.153 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb11ec066-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.154 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.156 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.158 221554 INFO os_vif [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:a7:01,bridge_name='br-int',has_traffic_filtering=True,id=b11ec066-c722-43dc-8ff2-3aa8f300c14c,network=Network(88012c28-2cc8-46ef-a3e3-1589170b67c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb11ec066-c7')#033[00m
Jan 31 02:59:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:59:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.426 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.593 221554 INFO nova.virt.libvirt.driver [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Deleting instance files /var/lib/nova/instances/65660f76-4c1d-4057-9be2-1ced558ca9e2_del#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.594 221554 INFO nova.virt.libvirt.driver [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Deletion of /var/lib/nova/instances/65660f76-4c1d-4057-9be2-1ced558ca9e2_del complete#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.729 221554 DEBUG nova.compute.manager [req-cccd504a-29e6-4868-92a4-fdd7e9b3bb73 req-b342db80-9705-4b7a-ae61-08bd4073bf76 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received event network-vif-unplugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.729 221554 DEBUG oslo_concurrency.lockutils [req-cccd504a-29e6-4868-92a4-fdd7e9b3bb73 req-b342db80-9705-4b7a-ae61-08bd4073bf76 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.730 221554 DEBUG oslo_concurrency.lockutils [req-cccd504a-29e6-4868-92a4-fdd7e9b3bb73 req-b342db80-9705-4b7a-ae61-08bd4073bf76 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.730 221554 DEBUG oslo_concurrency.lockutils [req-cccd504a-29e6-4868-92a4-fdd7e9b3bb73 req-b342db80-9705-4b7a-ae61-08bd4073bf76 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.730 221554 DEBUG nova.compute.manager [req-cccd504a-29e6-4868-92a4-fdd7e9b3bb73 req-b342db80-9705-4b7a-ae61-08bd4073bf76 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] No waiting events found dispatching network-vif-unplugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.731 221554 DEBUG nova.compute.manager [req-cccd504a-29e6-4868-92a4-fdd7e9b3bb73 req-b342db80-9705-4b7a-ae61-08bd4073bf76 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received event network-vif-unplugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.954 221554 INFO nova.compute.manager [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Took 1.75 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.956 221554 DEBUG oslo.service.loopingcall [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.956 221554 DEBUG nova.compute.manager [-] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:59:48 np0005603609 nova_compute[221550]: 2026-01-31 07:59:48.957 221554 DEBUG nova.network.neutron [-] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:59:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:49.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:49.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:51.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 02:59:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:51.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 02:59:52 np0005603609 nova_compute[221550]: 2026-01-31 07:59:52.780 221554 DEBUG nova.compute.manager [req-9a7a22e5-1be8-44f1-abbb-2c5e9c34e27f req-9a6c0e9e-aeac-4e05-8b07-3dfabdb277db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received event network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:59:52 np0005603609 nova_compute[221550]: 2026-01-31 07:59:52.781 221554 DEBUG oslo_concurrency.lockutils [req-9a7a22e5-1be8-44f1-abbb-2c5e9c34e27f req-9a6c0e9e-aeac-4e05-8b07-3dfabdb277db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:52 np0005603609 nova_compute[221550]: 2026-01-31 07:59:52.781 221554 DEBUG oslo_concurrency.lockutils [req-9a7a22e5-1be8-44f1-abbb-2c5e9c34e27f req-9a6c0e9e-aeac-4e05-8b07-3dfabdb277db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:52 np0005603609 nova_compute[221550]: 2026-01-31 07:59:52.781 221554 DEBUG oslo_concurrency.lockutils [req-9a7a22e5-1be8-44f1-abbb-2c5e9c34e27f req-9a6c0e9e-aeac-4e05-8b07-3dfabdb277db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:52 np0005603609 nova_compute[221550]: 2026-01-31 07:59:52.781 221554 DEBUG nova.compute.manager [req-9a7a22e5-1be8-44f1-abbb-2c5e9c34e27f req-9a6c0e9e-aeac-4e05-8b07-3dfabdb277db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] No waiting events found dispatching network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:59:52 np0005603609 nova_compute[221550]: 2026-01-31 07:59:52.781 221554 WARNING nova.compute.manager [req-9a7a22e5-1be8-44f1-abbb-2c5e9c34e27f req-9a6c0e9e-aeac-4e05-8b07-3dfabdb277db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received unexpected event network-vif-plugged-b11ec066-c722-43dc-8ff2-3aa8f300c14c for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:59:53 np0005603609 nova_compute[221550]: 2026-01-31 07:59:53.155 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:53.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:53 np0005603609 nova_compute[221550]: 2026-01-31 07:59:53.428 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:53.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:53 np0005603609 nova_compute[221550]: 2026-01-31 07:59:53.841 221554 DEBUG nova.network.neutron [-] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:59:54 np0005603609 nova_compute[221550]: 2026-01-31 07:59:54.103 221554 INFO nova.compute.manager [-] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Took 5.15 seconds to deallocate network for instance.#033[00m
Jan 31 02:59:54 np0005603609 nova_compute[221550]: 2026-01-31 07:59:54.287 221554 DEBUG oslo_concurrency.lockutils [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:54 np0005603609 nova_compute[221550]: 2026-01-31 07:59:54.288 221554 DEBUG oslo_concurrency.lockutils [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:54 np0005603609 nova_compute[221550]: 2026-01-31 07:59:54.343 221554 DEBUG oslo_concurrency.processutils [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:59:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/18686290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:59:54 np0005603609 nova_compute[221550]: 2026-01-31 07:59:54.798 221554 DEBUG oslo_concurrency.processutils [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:54 np0005603609 nova_compute[221550]: 2026-01-31 07:59:54.805 221554 DEBUG nova.compute.provider_tree [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:59:54 np0005603609 nova_compute[221550]: 2026-01-31 07:59:54.855 221554 DEBUG nova.scheduler.client.report [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:59:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:55 np0005603609 nova_compute[221550]: 2026-01-31 07:59:55.027 221554 DEBUG nova.compute.manager [req-2ab663a3-6eae-4d4f-8f68-f43cce53bf81 req-9652888a-8441-4168-9c73-90677cf2d60b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Received event network-vif-deleted-b11ec066-c722-43dc-8ff2-3aa8f300c14c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:59:55 np0005603609 nova_compute[221550]: 2026-01-31 07:59:55.221 221554 DEBUG oslo_concurrency.lockutils [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:55.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:55 np0005603609 nova_compute[221550]: 2026-01-31 07:59:55.436 221554 INFO nova.scheduler.client.report [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Deleted allocations for instance 65660f76-4c1d-4057-9be2-1ced558ca9e2#033[00m
Jan 31 02:59:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:55.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:55 np0005603609 nova_compute[221550]: 2026-01-31 07:59:55.793 221554 DEBUG oslo_concurrency.lockutils [None req-2f9b3f49-93bf-40c6-b121-a87f5dcd0a15 fb422aa2270c43fa9ecb0da10968f867 18503ca80bcc4b588e398b4f03f7908b - - default default] Lock "65660f76-4c1d-4057-9be2-1ced558ca9e2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:57.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:57.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:58 np0005603609 nova_compute[221550]: 2026-01-31 07:59:58.051 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:58.051 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:59:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 07:59:58.054 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:59:58 np0005603609 nova_compute[221550]: 2026-01-31 07:59:58.157 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:58 np0005603609 nova_compute[221550]: 2026-01-31 07:59:58.431 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:59.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 02:59:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:59.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:59 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Jan 31 02:59:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-07:59:59.868718) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:59:59 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Jan 31 02:59:59 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846399868774, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1398, "num_deletes": 256, "total_data_size": 3195884, "memory_usage": 3240048, "flush_reason": "Manual Compaction"}
Jan 31 02:59:59 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846400076613, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 2086952, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42845, "largest_seqno": 44237, "table_properties": {"data_size": 2080869, "index_size": 3350, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12959, "raw_average_key_size": 19, "raw_value_size": 2068660, "raw_average_value_size": 3172, "num_data_blocks": 147, "num_entries": 652, "num_filter_entries": 652, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846284, "oldest_key_time": 1769846284, "file_creation_time": 1769846399, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 207979 microseconds, and 5649 cpu microseconds.
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:00.076694) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 2086952 bytes OK
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:00.076727) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:00.158632) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:00.158664) EVENT_LOG_v1 {"time_micros": 1769846400158657, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:00.158683) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 3189334, prev total WAL file size 3205387, number of live WAL files 2.
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:00.160119) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323535' seq:72057594037927935, type:22 .. '6C6F676D0031353037' seq:0, type:0; will stop at (end)
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(2038KB)], [81(9314KB)]
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846400160187, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 11625328, "oldest_snapshot_seqno": -1}
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 6929 keys, 11452390 bytes, temperature: kUnknown
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846400592939, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 11452390, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11405642, "index_size": 28332, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 178303, "raw_average_key_size": 25, "raw_value_size": 11281128, "raw_average_value_size": 1628, "num_data_blocks": 1133, "num_entries": 6929, "num_filter_entries": 6929, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769846400, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:00.593180) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 11452390 bytes
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:00.807719) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 26.9 rd, 26.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.1 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(11.1) write-amplify(5.5) OK, records in: 7458, records dropped: 529 output_compression: NoCompression
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:00.807781) EVENT_LOG_v1 {"time_micros": 1769846400807757, "job": 50, "event": "compaction_finished", "compaction_time_micros": 432836, "compaction_time_cpu_micros": 41083, "output_level": 6, "num_output_files": 1, "total_output_size": 11452390, "num_input_records": 7458, "num_output_records": 6929, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846400808335, "job": 50, "event": "table_file_deletion", "file_number": 83}
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846400809836, "job": 50, "event": "table_file_deletion", "file_number": 81}
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:00.159642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:00.809935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:00.809940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:00.809942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:00.809944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:00:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:00.809945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:00:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:01.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:01.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:01 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 03:00:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:00:02.056 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:00:02 np0005603609 nova_compute[221550]: 2026-01-31 08:00:02.433 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846387.4314318, 65660f76-4c1d-4057-9be2-1ced558ca9e2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:00:02 np0005603609 nova_compute[221550]: 2026-01-31 08:00:02.433 221554 INFO nova.compute.manager [-] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:00:02 np0005603609 nova_compute[221550]: 2026-01-31 08:00:02.563 221554 DEBUG nova.compute.manager [None req-e90243cd-5276-47c4-bf76-36c7a78f2506 - - - - - -] [instance: 65660f76-4c1d-4057-9be2-1ced558ca9e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:00:03 np0005603609 nova_compute[221550]: 2026-01-31 08:00:03.159 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:00:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:03.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:00:03 np0005603609 nova_compute[221550]: 2026-01-31 08:00:03.434 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:03.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:05.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:05.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:07.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:00:07.495 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:00:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:00:07.496 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:00:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:00:07.496 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:00:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:07.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:08 np0005603609 nova_compute[221550]: 2026-01-31 08:00:08.160 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:08 np0005603609 nova_compute[221550]: 2026-01-31 08:00:08.471 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:08 np0005603609 nova_compute[221550]: 2026-01-31 08:00:08.473 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:08 np0005603609 nova_compute[221550]: 2026-01-31 08:00:08.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:09.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:09.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:09.976677) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846409976745, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 359, "num_deletes": 251, "total_data_size": 314920, "memory_usage": 322824, "flush_reason": "Manual Compaction"}
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846409980555, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 207473, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44242, "largest_seqno": 44596, "table_properties": {"data_size": 205294, "index_size": 343, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5437, "raw_average_key_size": 18, "raw_value_size": 201023, "raw_average_value_size": 686, "num_data_blocks": 15, "num_entries": 293, "num_filter_entries": 293, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846400, "oldest_key_time": 1769846400, "file_creation_time": 1769846409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 3915 microseconds, and 1210 cpu microseconds.
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:09.980603) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 207473 bytes OK
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:09.980618) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:09.982156) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:09.982169) EVENT_LOG_v1 {"time_micros": 1769846409982165, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:09.982185) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 312492, prev total WAL file size 312492, number of live WAL files 2.
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:09.982566) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(202KB)], [84(10MB)]
Jan 31 03:00:09 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846409982614, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 11659863, "oldest_snapshot_seqno": -1}
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 6712 keys, 9720040 bytes, temperature: kUnknown
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846410086175, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 9720040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9676197, "index_size": 25939, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 174486, "raw_average_key_size": 25, "raw_value_size": 9556998, "raw_average_value_size": 1423, "num_data_blocks": 1024, "num_entries": 6712, "num_filter_entries": 6712, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769846409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:10.086407) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9720040 bytes
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:10.087496) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.5 rd, 93.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.9 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(103.0) write-amplify(46.8) OK, records in: 7222, records dropped: 510 output_compression: NoCompression
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:10.087513) EVENT_LOG_v1 {"time_micros": 1769846410087505, "job": 52, "event": "compaction_finished", "compaction_time_micros": 103632, "compaction_time_cpu_micros": 16009, "output_level": 6, "num_output_files": 1, "total_output_size": 9720040, "num_input_records": 7222, "num_output_records": 6712, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846410087624, "job": 52, "event": "table_file_deletion", "file_number": 86}
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846410088374, "job": 52, "event": "table_file_deletion", "file_number": 84}
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:09.982458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:10.088472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:10.088480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:10.088483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:10.088487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:00:10.088490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:00:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:10 np0005603609 nova_compute[221550]: 2026-01-31 08:00:10.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:00:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:11.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:00:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:11.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:13 np0005603609 nova_compute[221550]: 2026-01-31 08:00:13.162 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:13.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:13 np0005603609 nova_compute[221550]: 2026-01-31 08:00:13.474 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:13 np0005603609 nova_compute[221550]: 2026-01-31 08:00:13.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:00:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:13.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:00:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:15.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:15 np0005603609 nova_compute[221550]: 2026-01-31 08:00:15.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:15 np0005603609 nova_compute[221550]: 2026-01-31 08:00:15.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:00:15 np0005603609 nova_compute[221550]: 2026-01-31 08:00:15.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:00:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:15.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:15 np0005603609 nova_compute[221550]: 2026-01-31 08:00:15.737 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:00:15 np0005603609 nova_compute[221550]: 2026-01-31 08:00:15.737 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:15 np0005603609 nova_compute[221550]: 2026-01-31 08:00:15.737 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:15 np0005603609 nova_compute[221550]: 2026-01-31 08:00:15.913 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:00:15 np0005603609 nova_compute[221550]: 2026-01-31 08:00:15.914 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:00:15 np0005603609 nova_compute[221550]: 2026-01-31 08:00:15.914 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:00:15 np0005603609 nova_compute[221550]: 2026-01-31 08:00:15.914 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:00:15 np0005603609 nova_compute[221550]: 2026-01-31 08:00:15.915 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:00:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3962088818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:00:16 np0005603609 nova_compute[221550]: 2026-01-31 08:00:16.355 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:16 np0005603609 nova_compute[221550]: 2026-01-31 08:00:16.487 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:00:16 np0005603609 nova_compute[221550]: 2026-01-31 08:00:16.488 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4661MB free_disk=20.96752166748047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:00:16 np0005603609 nova_compute[221550]: 2026-01-31 08:00:16.488 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:00:16 np0005603609 nova_compute[221550]: 2026-01-31 08:00:16.488 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:00:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:17.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:17.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:18 np0005603609 nova_compute[221550]: 2026-01-31 08:00:18.163 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:18 np0005603609 podman[254288]: 2026-01-31 08:00:18.174878211 +0000 UTC m=+0.054844732 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Jan 31 03:00:18 np0005603609 podman[254287]: 2026-01-31 08:00:18.194910163 +0000 UTC m=+0.074784372 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:00:18 np0005603609 nova_compute[221550]: 2026-01-31 08:00:18.477 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:18 np0005603609 nova_compute[221550]: 2026-01-31 08:00:18.914 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:00:18 np0005603609 nova_compute[221550]: 2026-01-31 08:00:18.915 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:00:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:19.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:19.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:20 np0005603609 nova_compute[221550]: 2026-01-31 08:00:20.417 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:00:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1411870379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:00:20 np0005603609 nova_compute[221550]: 2026-01-31 08:00:20.869 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:20 np0005603609 nova_compute[221550]: 2026-01-31 08:00:20.877 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:00:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:21.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:21 np0005603609 nova_compute[221550]: 2026-01-31 08:00:21.355 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:00:21 np0005603609 nova_compute[221550]: 2026-01-31 08:00:21.634 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:00:21 np0005603609 nova_compute[221550]: 2026-01-31 08:00:21.635 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:00:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:21.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:23 np0005603609 nova_compute[221550]: 2026-01-31 08:00:23.210 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:23.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:23 np0005603609 nova_compute[221550]: 2026-01-31 08:00:23.480 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:23.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:25.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:25 np0005603609 nova_compute[221550]: 2026-01-31 08:00:25.557 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:25 np0005603609 nova_compute[221550]: 2026-01-31 08:00:25.558 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:25 np0005603609 nova_compute[221550]: 2026-01-31 08:00:25.558 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:25 np0005603609 nova_compute[221550]: 2026-01-31 08:00:25.558 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:00:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:25.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:27.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:27.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:28 np0005603609 nova_compute[221550]: 2026-01-31 08:00:28.213 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:28 np0005603609 nova_compute[221550]: 2026-01-31 08:00:28.482 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:28 np0005603609 nova_compute[221550]: 2026-01-31 08:00:28.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:29.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:29.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:31.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:31.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:33 np0005603609 nova_compute[221550]: 2026-01-31 08:00:33.215 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:33.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:33 np0005603609 nova_compute[221550]: 2026-01-31 08:00:33.484 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:33.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:35.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:35.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:37.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:00:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:37.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:00:38 np0005603609 nova_compute[221550]: 2026-01-31 08:00:38.217 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:38 np0005603609 nova_compute[221550]: 2026-01-31 08:00:38.527 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:39.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:39 np0005603609 nova_compute[221550]: 2026-01-31 08:00:39.489 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:00:39 np0005603609 nova_compute[221550]: 2026-01-31 08:00:39.489 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:00:39 np0005603609 nova_compute[221550]: 2026-01-31 08:00:39.595 221554 DEBUG nova.compute.manager [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:00:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:39.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:41 np0005603609 nova_compute[221550]: 2026-01-31 08:00:41.097 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:00:41 np0005603609 nova_compute[221550]: 2026-01-31 08:00:41.098 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:00:41 np0005603609 nova_compute[221550]: 2026-01-31 08:00:41.104 221554 DEBUG nova.virt.hardware [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:00:41 np0005603609 nova_compute[221550]: 2026-01-31 08:00:41.105 221554 INFO nova.compute.claims [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:00:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:41.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:41 np0005603609 nova_compute[221550]: 2026-01-31 08:00:41.600 221554 DEBUG oslo_concurrency.processutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:41.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:00:42 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/937994319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:00:42 np0005603609 nova_compute[221550]: 2026-01-31 08:00:42.021 221554 DEBUG oslo_concurrency.processutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:42 np0005603609 nova_compute[221550]: 2026-01-31 08:00:42.026 221554 DEBUG nova.compute.provider_tree [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:00:42 np0005603609 nova_compute[221550]: 2026-01-31 08:00:42.284 221554 DEBUG nova.scheduler.client.report [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:00:42 np0005603609 nova_compute[221550]: 2026-01-31 08:00:42.760 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:00:42 np0005603609 nova_compute[221550]: 2026-01-31 08:00:42.761 221554 DEBUG nova.compute.manager [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:00:42 np0005603609 nova_compute[221550]: 2026-01-31 08:00:42.959 221554 DEBUG nova.compute.manager [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:00:42 np0005603609 nova_compute[221550]: 2026-01-31 08:00:42.960 221554 DEBUG nova.network.neutron [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.043 221554 INFO nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.115 221554 DEBUG nova.compute.manager [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.220 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:43.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.502 221554 DEBUG nova.compute.manager [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.503 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.503 221554 INFO nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Creating image(s)#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.531 221554 DEBUG nova.storage.rbd_utils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.561 221554 DEBUG nova.storage.rbd_utils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.592 221554 DEBUG nova.storage.rbd_utils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.596 221554 DEBUG oslo_concurrency.processutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.612 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.648 221554 DEBUG oslo_concurrency.processutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.649 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.649 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.650 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.673 221554 DEBUG nova.storage.rbd_utils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.677 221554 DEBUG oslo_concurrency.processutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.703 221554 DEBUG nova.policy [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '63e95edea0164ae2a9820dc10467335d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:00:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:43.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:43 np0005603609 nova_compute[221550]: 2026-01-31 08:00:43.964 221554 DEBUG oslo_concurrency.processutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:44 np0005603609 nova_compute[221550]: 2026-01-31 08:00:44.028 221554 DEBUG nova.storage.rbd_utils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] resizing rbd image 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:00:44 np0005603609 nova_compute[221550]: 2026-01-31 08:00:44.143 221554 DEBUG nova.objects.instance [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'migration_context' on Instance uuid 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:00:44 np0005603609 nova_compute[221550]: 2026-01-31 08:00:44.164 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:00:44 np0005603609 nova_compute[221550]: 2026-01-31 08:00:44.164 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Ensure instance console log exists: /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:00:44 np0005603609 nova_compute[221550]: 2026-01-31 08:00:44.165 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:00:44 np0005603609 nova_compute[221550]: 2026-01-31 08:00:44.165 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:00:44 np0005603609 nova_compute[221550]: 2026-01-31 08:00:44.165 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:00:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:45.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:45.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:45 np0005603609 nova_compute[221550]: 2026-01-31 08:00:45.991 221554 DEBUG nova.network.neutron [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Successfully created port: 0690c15a-98ed-4f55-839f-bd15b85cc81a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:00:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:47.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:47.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:48 np0005603609 nova_compute[221550]: 2026-01-31 08:00:48.222 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:48 np0005603609 nova_compute[221550]: 2026-01-31 08:00:48.532 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:49 np0005603609 podman[254677]: 2026-01-31 08:00:49.156704779 +0000 UTC m=+0.045006884 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 03:00:49 np0005603609 podman[254676]: 2026-01-31 08:00:49.188663889 +0000 UTC m=+0.074605337 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:00:49 np0005603609 nova_compute[221550]: 2026-01-31 08:00:49.191 221554 DEBUG nova.network.neutron [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Successfully updated port: 0690c15a-98ed-4f55-839f-bd15b85cc81a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:00:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:00:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:49.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:00:49 np0005603609 nova_compute[221550]: 2026-01-31 08:00:49.349 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "refresh_cache-8b3d7088-3fbb-4615-9d5a-92c65bac3be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:00:49 np0005603609 nova_compute[221550]: 2026-01-31 08:00:49.349 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquired lock "refresh_cache-8b3d7088-3fbb-4615-9d5a-92c65bac3be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:00:49 np0005603609 nova_compute[221550]: 2026-01-31 08:00:49.349 221554 DEBUG nova.network.neutron [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:00:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:00:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:00:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:00:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:00:49 np0005603609 nova_compute[221550]: 2026-01-31 08:00:49.592 221554 DEBUG nova.compute.manager [req-5d0288ee-955f-49f7-b826-a50c9f1b329a req-1d9fcb05-2674-406b-9c7a-9d7e11447acb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Received event network-changed-0690c15a-98ed-4f55-839f-bd15b85cc81a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:00:49 np0005603609 nova_compute[221550]: 2026-01-31 08:00:49.592 221554 DEBUG nova.compute.manager [req-5d0288ee-955f-49f7-b826-a50c9f1b329a req-1d9fcb05-2674-406b-9c7a-9d7e11447acb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Refreshing instance network info cache due to event network-changed-0690c15a-98ed-4f55-839f-bd15b85cc81a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:00:49 np0005603609 nova_compute[221550]: 2026-01-31 08:00:49.592 221554 DEBUG oslo_concurrency.lockutils [req-5d0288ee-955f-49f7-b826-a50c9f1b329a req-1d9fcb05-2674-406b-9c7a-9d7e11447acb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-8b3d7088-3fbb-4615-9d5a-92c65bac3be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:00:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:49.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:50 np0005603609 nova_compute[221550]: 2026-01-31 08:00:50.333 221554 DEBUG nova.network.neutron [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:00:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:51.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:51.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:53 np0005603609 nova_compute[221550]: 2026-01-31 08:00:53.257 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:53.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:53 np0005603609 nova_compute[221550]: 2026-01-31 08:00:53.535 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:00:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:53.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:00:54 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:00:54 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:00:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.333 221554 DEBUG nova.network.neutron [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Updating instance_info_cache with network_info: [{"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:00:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:55.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:55.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.774 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Releasing lock "refresh_cache-8b3d7088-3fbb-4615-9d5a-92c65bac3be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.774 221554 DEBUG nova.compute.manager [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Instance network_info: |[{"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.775 221554 DEBUG oslo_concurrency.lockutils [req-5d0288ee-955f-49f7-b826-a50c9f1b329a req-1d9fcb05-2674-406b-9c7a-9d7e11447acb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-8b3d7088-3fbb-4615-9d5a-92c65bac3be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.775 221554 DEBUG nova.network.neutron [req-5d0288ee-955f-49f7-b826-a50c9f1b329a req-1d9fcb05-2674-406b-9c7a-9d7e11447acb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Refreshing network info cache for port 0690c15a-98ed-4f55-839f-bd15b85cc81a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.780 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Start _get_guest_xml network_info=[{"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.785 221554 WARNING nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.791 221554 DEBUG nova.virt.libvirt.host [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.792 221554 DEBUG nova.virt.libvirt.host [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.797 221554 DEBUG nova.virt.libvirt.host [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.797 221554 DEBUG nova.virt.libvirt.host [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.798 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.798 221554 DEBUG nova.virt.hardware [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.799 221554 DEBUG nova.virt.hardware [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.799 221554 DEBUG nova.virt.hardware [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.799 221554 DEBUG nova.virt.hardware [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.800 221554 DEBUG nova.virt.hardware [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.800 221554 DEBUG nova.virt.hardware [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.800 221554 DEBUG nova.virt.hardware [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.800 221554 DEBUG nova.virt.hardware [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.801 221554 DEBUG nova.virt.hardware [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.801 221554 DEBUG nova.virt.hardware [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.801 221554 DEBUG nova.virt.hardware [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:00:55 np0005603609 nova_compute[221550]: 2026-01-31 08:00:55.804 221554 DEBUG oslo_concurrency.processutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:00:56 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/493614597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.263 221554 DEBUG oslo_concurrency.processutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.288 221554 DEBUG nova.storage.rbd_utils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.292 221554 DEBUG oslo_concurrency.processutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:00:56 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3673755692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.728 221554 DEBUG oslo_concurrency.processutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.730 221554 DEBUG nova.virt.libvirt.vif [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1870360892',display_name='tempest-ServerDiskConfigTestJSON-server-1870360892',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1870360892',id=89,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-3kh0tikj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:00:43Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=8b3d7088-3fbb-4615-9d5a-92c65bac3be2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.731 221554 DEBUG nova.network.os_vif_util [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.732 221554 DEBUG nova.network.os_vif_util [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:87:6f,bridge_name='br-int',has_traffic_filtering=True,id=0690c15a-98ed-4f55-839f-bd15b85cc81a,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690c15a-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.734 221554 DEBUG nova.objects.instance [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.778 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  <uuid>8b3d7088-3fbb-4615-9d5a-92c65bac3be2</uuid>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  <name>instance-00000059</name>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1870360892</nova:name>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:00:55</nova:creationTime>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        <nova:user uuid="63e95edea0164ae2a9820dc10467335d">tempest-ServerDiskConfigTestJSON-984925022-project-member</nova:user>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        <nova:project uuid="be74d11d2f5a4d9aae2dbe32c31ad9c3">tempest-ServerDiskConfigTestJSON-984925022</nova:project>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        <nova:port uuid="0690c15a-98ed-4f55-839f-bd15b85cc81a">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <entry name="serial">8b3d7088-3fbb-4615-9d5a-92c65bac3be2</entry>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <entry name="uuid">8b3d7088-3fbb-4615-9d5a-92c65bac3be2</entry>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk.config">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:a3:87:6f"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <target dev="tap0690c15a-98"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2/console.log" append="off"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:00:56 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:00:56 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:00:56 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:00:56 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.779 221554 DEBUG nova.compute.manager [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Preparing to wait for external event network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.779 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.779 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.780 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.780 221554 DEBUG nova.virt.libvirt.vif [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1870360892',display_name='tempest-ServerDiskConfigTestJSON-server-1870360892',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1870360892',id=89,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-3kh0tikj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:00:43Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=8b3d7088-3fbb-4615-9d5a-92c65bac3be2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.780 221554 DEBUG nova.network.os_vif_util [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.781 221554 DEBUG nova.network.os_vif_util [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:87:6f,bridge_name='br-int',has_traffic_filtering=True,id=0690c15a-98ed-4f55-839f-bd15b85cc81a,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690c15a-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.781 221554 DEBUG os_vif [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:87:6f,bridge_name='br-int',has_traffic_filtering=True,id=0690c15a-98ed-4f55-839f-bd15b85cc81a,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690c15a-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.782 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.782 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.782 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.785 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.785 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0690c15a-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.786 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0690c15a-98, col_values=(('external_ids', {'iface-id': '0690c15a-98ed-4f55-839f-bd15b85cc81a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:87:6f', 'vm-uuid': '8b3d7088-3fbb-4615-9d5a-92c65bac3be2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.787 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:56 np0005603609 NetworkManager[49064]: <info>  [1769846456.7881] manager: (tap0690c15a-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.792 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.793 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:56 np0005603609 nova_compute[221550]: 2026-01-31 08:00:56.794 221554 INFO os_vif [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:87:6f,bridge_name='br-int',has_traffic_filtering=True,id=0690c15a-98ed-4f55-839f-bd15b85cc81a,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690c15a-98')#033[00m
Jan 31 03:00:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:57.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:57 np0005603609 nova_compute[221550]: 2026-01-31 08:00:57.734 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:00:57 np0005603609 nova_compute[221550]: 2026-01-31 08:00:57.735 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:00:57 np0005603609 nova_compute[221550]: 2026-01-31 08:00:57.735 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No VIF found with MAC fa:16:3e:a3:87:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:00:57 np0005603609 nova_compute[221550]: 2026-01-31 08:00:57.736 221554 INFO nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Using config drive#033[00m
Jan 31 03:00:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:57.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:57 np0005603609 nova_compute[221550]: 2026-01-31 08:00:57.767 221554 DEBUG nova.storage.rbd_utils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:00:58 np0005603609 nova_compute[221550]: 2026-01-31 08:00:58.535 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:58 np0005603609 nova_compute[221550]: 2026-01-31 08:00:58.978 221554 INFO nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Creating config drive at /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2/disk.config#033[00m
Jan 31 03:00:58 np0005603609 nova_compute[221550]: 2026-01-31 08:00:58.985 221554 DEBUG oslo_concurrency.processutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_kucvt04 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:59 np0005603609 nova_compute[221550]: 2026-01-31 08:00:59.108 221554 DEBUG oslo_concurrency.processutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_kucvt04" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:59 np0005603609 nova_compute[221550]: 2026-01-31 08:00:59.137 221554 DEBUG nova.storage.rbd_utils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:00:59 np0005603609 nova_compute[221550]: 2026-01-31 08:00:59.140 221554 DEBUG oslo_concurrency.processutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2/disk.config 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:59.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:59 np0005603609 nova_compute[221550]: 2026-01-31 08:00:59.363 221554 DEBUG nova.network.neutron [req-5d0288ee-955f-49f7-b826-a50c9f1b329a req-1d9fcb05-2674-406b-9c7a-9d7e11447acb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Updated VIF entry in instance network info cache for port 0690c15a-98ed-4f55-839f-bd15b85cc81a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:00:59 np0005603609 nova_compute[221550]: 2026-01-31 08:00:59.364 221554 DEBUG nova.network.neutron [req-5d0288ee-955f-49f7-b826-a50c9f1b329a req-1d9fcb05-2674-406b-9c7a-9d7e11447acb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Updating instance_info_cache with network_info: [{"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:00:59 np0005603609 nova_compute[221550]: 2026-01-31 08:00:59.718 221554 DEBUG oslo_concurrency.processutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2/disk.config 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:59 np0005603609 nova_compute[221550]: 2026-01-31 08:00:59.719 221554 INFO nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Deleting local config drive /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2/disk.config because it was imported into RBD.#033[00m
Jan 31 03:00:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:00:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:59.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:59 np0005603609 kernel: tap0690c15a-98: entered promiscuous mode
Jan 31 03:00:59 np0005603609 NetworkManager[49064]: <info>  [1769846459.7573] manager: (tap0690c15a-98): new Tun device (/org/freedesktop/NetworkManager/Devices/150)
Jan 31 03:00:59 np0005603609 ovn_controller[130359]: 2026-01-31T08:00:59Z|00305|binding|INFO|Claiming lport 0690c15a-98ed-4f55-839f-bd15b85cc81a for this chassis.
Jan 31 03:00:59 np0005603609 ovn_controller[130359]: 2026-01-31T08:00:59Z|00306|binding|INFO|0690c15a-98ed-4f55-839f-bd15b85cc81a: Claiming fa:16:3e:a3:87:6f 10.100.0.9
Jan 31 03:00:59 np0005603609 nova_compute[221550]: 2026-01-31 08:00:59.758 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:59 np0005603609 systemd-udevd[254905]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:00:59 np0005603609 systemd-machined[190912]: New machine qemu-40-instance-00000059.
Jan 31 03:00:59 np0005603609 NetworkManager[49064]: <info>  [1769846459.7877] device (tap0690c15a-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:00:59 np0005603609 NetworkManager[49064]: <info>  [1769846459.7881] device (tap0690c15a-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:00:59 np0005603609 systemd[1]: Started Virtual Machine qemu-40-instance-00000059.
Jan 31 03:00:59 np0005603609 nova_compute[221550]: 2026-01-31 08:00:59.803 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:59 np0005603609 ovn_controller[130359]: 2026-01-31T08:00:59Z|00307|binding|INFO|Setting lport 0690c15a-98ed-4f55-839f-bd15b85cc81a ovn-installed in OVS
Jan 31 03:00:59 np0005603609 nova_compute[221550]: 2026-01-31 08:00:59.813 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:59 np0005603609 nova_compute[221550]: 2026-01-31 08:00:59.959 221554 DEBUG oslo_concurrency.lockutils [req-5d0288ee-955f-49f7-b826-a50c9f1b329a req-1d9fcb05-2674-406b-9c7a-9d7e11447acb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-8b3d7088-3fbb-4615-9d5a-92c65bac3be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:00:59 np0005603609 ovn_controller[130359]: 2026-01-31T08:00:59Z|00308|binding|INFO|Setting lport 0690c15a-98ed-4f55-839f-bd15b85cc81a up in Southbound
Jan 31 03:00:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:00:59.969 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:87:6f 10.100.0.9'], port_security=['fa:16:3e:a3:87:6f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8b3d7088-3fbb-4615-9d5a-92c65bac3be2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=0690c15a-98ed-4f55-839f-bd15b85cc81a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:00:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:00:59.970 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 0690c15a-98ed-4f55-839f-bd15b85cc81a in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 bound to our chassis#033[00m
Jan 31 03:00:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:00:59.971 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 121329c8-2359-4e9d-9f2b-4932f8740470#033[00m
Jan 31 03:00:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:00:59.986 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd76577-4fe7-4066-9388-e1145209f330]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:00:59.987 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap121329c8-21 in ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:00:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:00:59.990 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap121329c8-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:00:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:00:59.990 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[70ba6f84-8664-4a52-92b7-d4c1761bed27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:00:59.991 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[75659fd2-f32b-49f5-8973-08aca325f549]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.005 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[7517bc3e-8ba0-442c-8fd1-d68c06ab86af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.014 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[67d04fac-15a2-44ca-99b1-a0beb3f2f59f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.034 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb771ab-739c-470c-93fb-aba0bed936a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.038 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ca553fc5-e7b9-47e3-ac2e-5b017bcda3bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:00 np0005603609 NetworkManager[49064]: <info>  [1769846460.0392] manager: (tap121329c8-20): new Veth device (/org/freedesktop/NetworkManager/Devices/151)
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.062 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[887b70f5-b876-4104-9a1e-2a16a8090ae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.065 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f7fb685e-7396-4204-a919-c6e63011c57d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:00 np0005603609 NetworkManager[49064]: <info>  [1769846460.0791] device (tap121329c8-20): carrier: link connected
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.083 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[e4997b81-e18a-4080-a7d7-ead7f074bd73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.096 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2cdd8b0d-243c-484c-b665-3826ac6aafec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669977, 'reachable_time': 21718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254939, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.107 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fc128129-ac28-42ce-ac34-30d87f08cdeb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:a3c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669977, 'tstamp': 669977}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254940, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.118 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e69f907e-1152-4c08-9bd2-b338df5fb113]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669977, 'reachable_time': 21718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254941, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.140 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6303ba14-489f-4855-9e96-d21abb433872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.186 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c28a07-671e-4c11-a492-0039acafa7cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.188 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.189 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.189 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap121329c8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:00 np0005603609 nova_compute[221550]: 2026-01-31 08:01:00.192 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:00 np0005603609 kernel: tap121329c8-20: entered promiscuous mode
Jan 31 03:01:00 np0005603609 NetworkManager[49064]: <info>  [1769846460.1926] manager: (tap121329c8-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.195 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap121329c8-20, col_values=(('external_ids', {'iface-id': 'e59d8348-5c5c-4c82-ba21-91f3a512c65e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:00 np0005603609 ovn_controller[130359]: 2026-01-31T08:01:00Z|00309|binding|INFO|Releasing lport e59d8348-5c5c-4c82-ba21-91f3a512c65e from this chassis (sb_readonly=0)
Jan 31 03:01:00 np0005603609 nova_compute[221550]: 2026-01-31 08:01:00.196 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:00 np0005603609 nova_compute[221550]: 2026-01-31 08:01:00.203 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.204 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.205 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[36bbe1ff-12a3-431c-9fea-b1639fa1fbd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.206 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 03:01:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:00.208 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'env', 'PROCESS_TAG=haproxy-121329c8-2359-4e9d-9f2b-4932f8740470', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/121329c8-2359-4e9d-9f2b-4932f8740470.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 03:01:00 np0005603609 nova_compute[221550]: 2026-01-31 08:01:00.364 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846460.3638787, 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:01:00 np0005603609 nova_compute[221550]: 2026-01-31 08:01:00.364 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] VM Started (Lifecycle Event)
Jan 31 03:01:00 np0005603609 nova_compute[221550]: 2026-01-31 08:01:00.459 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:01:00 np0005603609 nova_compute[221550]: 2026-01-31 08:01:00.464 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846460.364658, 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:01:00 np0005603609 nova_compute[221550]: 2026-01-31 08:01:00.464 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] VM Paused (Lifecycle Event)
Jan 31 03:01:00 np0005603609 podman[255015]: 2026-01-31 08:01:00.47436487 +0000 UTC m=+0.020579826 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:01:00 np0005603609 nova_compute[221550]: 2026-01-31 08:01:00.586 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:01:00 np0005603609 nova_compute[221550]: 2026-01-31 08:01:00.590 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:01:00 np0005603609 podman[255015]: 2026-01-31 08:01:00.664018746 +0000 UTC m=+0.210233692 container create 69290a49b720a776a4ab5ccf14f04a6de17d471a167dcbf5517f0e3edd9b4772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:01:00 np0005603609 nova_compute[221550]: 2026-01-31 08:01:00.664 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:01:00 np0005603609 systemd[1]: Started libpod-conmon-69290a49b720a776a4ab5ccf14f04a6de17d471a167dcbf5517f0e3edd9b4772.scope.
Jan 31 03:01:00 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:01:00 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30cfda97ac77299373543ae704dc5e343581e76752103d5bf0b0a436b6beb8ce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:01:00 np0005603609 podman[255015]: 2026-01-31 08:01:00.769100286 +0000 UTC m=+0.315315242 container init 69290a49b720a776a4ab5ccf14f04a6de17d471a167dcbf5517f0e3edd9b4772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:01:00 np0005603609 podman[255015]: 2026-01-31 08:01:00.775423538 +0000 UTC m=+0.321638484 container start 69290a49b720a776a4ab5ccf14f04a6de17d471a167dcbf5517f0e3edd9b4772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 03:01:00 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[255030]: [NOTICE]   (255034) : New worker (255036) forked
Jan 31 03:01:00 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[255030]: [NOTICE]   (255034) : Loading success.
Jan 31 03:01:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:01.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.452 221554 DEBUG nova.compute.manager [req-42a17e1e-a497-4b62-9356-a60779e1790d req-5191ad96-7b83-4f17-9f2e-9c1cf9db169e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Received event network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.453 221554 DEBUG oslo_concurrency.lockutils [req-42a17e1e-a497-4b62-9356-a60779e1790d req-5191ad96-7b83-4f17-9f2e-9c1cf9db169e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.453 221554 DEBUG oslo_concurrency.lockutils [req-42a17e1e-a497-4b62-9356-a60779e1790d req-5191ad96-7b83-4f17-9f2e-9c1cf9db169e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.453 221554 DEBUG oslo_concurrency.lockutils [req-42a17e1e-a497-4b62-9356-a60779e1790d req-5191ad96-7b83-4f17-9f2e-9c1cf9db169e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.454 221554 DEBUG nova.compute.manager [req-42a17e1e-a497-4b62-9356-a60779e1790d req-5191ad96-7b83-4f17-9f2e-9c1cf9db169e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Processing event network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.454 221554 DEBUG nova.compute.manager [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.458 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846461.457855, 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.458 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] VM Resumed (Lifecycle Event)
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.459 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.463 221554 INFO nova.virt.libvirt.driver [-] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Instance spawned successfully.
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.464 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.528 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.534 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.538 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.539 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.540 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.540 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.541 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.542 221554 DEBUG nova.virt.libvirt.driver [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.701 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:01:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:01.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.789 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.935 221554 INFO nova.compute.manager [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Took 18.43 seconds to spawn the instance on the hypervisor.
Jan 31 03:01:01 np0005603609 nova_compute[221550]: 2026-01-31 08:01:01.936 221554 DEBUG nova.compute.manager [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:01:01 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:01:02 np0005603609 nova_compute[221550]: 2026-01-31 08:01:02.288 221554 INFO nova.compute.manager [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Took 21.68 seconds to build instance.
Jan 31 03:01:02 np0005603609 nova_compute[221550]: 2026-01-31 08:01:02.649 221554 DEBUG oslo_concurrency.lockutils [None req-3a2cf4af-d35d-4c20-9e13-a752ffd71c5c 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:01:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:01:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:03.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:01:03 np0005603609 nova_compute[221550]: 2026-01-31 08:01:03.399 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:01:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:03.399 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:01:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:03.400 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 03:01:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:03.401 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:01:03 np0005603609 nova_compute[221550]: 2026-01-31 08:01:03.539 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:01:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:03.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:04 np0005603609 nova_compute[221550]: 2026-01-31 08:01:04.088 221554 DEBUG nova.compute.manager [req-aa925116-ac37-4792-9559-9047be8981e5 req-11965faf-209b-4861-ba05-bf7ef4b4414e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Received event network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:01:04 np0005603609 nova_compute[221550]: 2026-01-31 08:01:04.088 221554 DEBUG oslo_concurrency.lockutils [req-aa925116-ac37-4792-9559-9047be8981e5 req-11965faf-209b-4861-ba05-bf7ef4b4414e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:01:04 np0005603609 nova_compute[221550]: 2026-01-31 08:01:04.089 221554 DEBUG oslo_concurrency.lockutils [req-aa925116-ac37-4792-9559-9047be8981e5 req-11965faf-209b-4861-ba05-bf7ef4b4414e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:01:04 np0005603609 nova_compute[221550]: 2026-01-31 08:01:04.089 221554 DEBUG oslo_concurrency.lockutils [req-aa925116-ac37-4792-9559-9047be8981e5 req-11965faf-209b-4861-ba05-bf7ef4b4414e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:01:04 np0005603609 nova_compute[221550]: 2026-01-31 08:01:04.089 221554 DEBUG nova.compute.manager [req-aa925116-ac37-4792-9559-9047be8981e5 req-11965faf-209b-4861-ba05-bf7ef4b4414e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] No waiting events found dispatching network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:01:04 np0005603609 nova_compute[221550]: 2026-01-31 08:01:04.090 221554 WARNING nova.compute.manager [req-aa925116-ac37-4792-9559-9047be8981e5 req-11965faf-209b-4861-ba05-bf7ef4b4414e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Received unexpected event network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a for instance with vm_state active and task_state None.
Jan 31 03:01:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:05.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:05.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:06 np0005603609 nova_compute[221550]: 2026-01-31 08:01:06.791 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:01:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:07.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:07.495 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:01:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:07.495 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:01:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:07.496 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:01:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:07.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:08 np0005603609 nova_compute[221550]: 2026-01-31 08:01:08.585 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:01:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:09.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:09 np0005603609 nova_compute[221550]: 2026-01-31 08:01:09.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:01:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:09.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:01:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:11.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:01:11 np0005603609 nova_compute[221550]: 2026-01-31 08:01:11.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:01:11 np0005603609 nova_compute[221550]: 2026-01-31 08:01:11.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 03:01:11 np0005603609 nova_compute[221550]: 2026-01-31 08:01:11.728 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 03:01:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:11.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:11 np0005603609 nova_compute[221550]: 2026-01-31 08:01:11.794 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:01:11 np0005603609 nova_compute[221550]: 2026-01-31 08:01:11.863 221554 INFO nova.compute.manager [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Rebuilding instance
Jan 31 03:01:12 np0005603609 nova_compute[221550]: 2026-01-31 08:01:12.390 221554 DEBUG nova.objects.instance [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:01:12 np0005603609 nova_compute[221550]: 2026-01-31 08:01:12.455 221554 DEBUG nova.compute.manager [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:01:12 np0005603609 nova_compute[221550]: 2026-01-31 08:01:12.728 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:01:12 np0005603609 nova_compute[221550]: 2026-01-31 08:01:12.745 221554 DEBUG nova.objects.instance [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:01:12 np0005603609 nova_compute[221550]: 2026-01-31 08:01:12.792 221554 DEBUG nova.objects.instance [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:01:12 np0005603609 nova_compute[221550]: 2026-01-31 08:01:12.825 221554 DEBUG nova.objects.instance [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'resources' on Instance uuid 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:01:12 np0005603609 nova_compute[221550]: 2026-01-31 08:01:12.937 221554 DEBUG nova.objects.instance [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'migration_context' on Instance uuid 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:01:12 np0005603609 nova_compute[221550]: 2026-01-31 08:01:12.977 221554 DEBUG nova.objects.instance [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 03:01:12 np0005603609 nova_compute[221550]: 2026-01-31 08:01:12.981 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 03:01:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:13.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:13 np0005603609 nova_compute[221550]: 2026-01-31 08:01:13.586 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:13.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:14 np0005603609 ovn_controller[130359]: 2026-01-31T08:01:14Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a3:87:6f 10.100.0.9
Jan 31 03:01:14 np0005603609 ovn_controller[130359]: 2026-01-31T08:01:14Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a3:87:6f 10.100.0.9
Jan 31 03:01:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:01:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:15.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:01:15 np0005603609 nova_compute[221550]: 2026-01-31 08:01:15.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:15 np0005603609 nova_compute[221550]: 2026-01-31 08:01:15.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:01:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:15.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:01:16 np0005603609 nova_compute[221550]: 2026-01-31 08:01:16.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:16 np0005603609 nova_compute[221550]: 2026-01-31 08:01:16.694 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:16 np0005603609 nova_compute[221550]: 2026-01-31 08:01:16.694 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:16 np0005603609 nova_compute[221550]: 2026-01-31 08:01:16.695 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:16 np0005603609 nova_compute[221550]: 2026-01-31 08:01:16.695 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:01:16 np0005603609 nova_compute[221550]: 2026-01-31 08:01:16.696 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:16 np0005603609 nova_compute[221550]: 2026-01-31 08:01:16.835 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:01:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1041038657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:01:17 np0005603609 nova_compute[221550]: 2026-01-31 08:01:17.172 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:01:17 np0005603609 nova_compute[221550]: 2026-01-31 08:01:17.292 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:01:17 np0005603609 nova_compute[221550]: 2026-01-31 08:01:17.293 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:01:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:17.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:17 np0005603609 nova_compute[221550]: 2026-01-31 08:01:17.449 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:01:17 np0005603609 nova_compute[221550]: 2026-01-31 08:01:17.451 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4472MB free_disk=20.933761596679688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:01:17 np0005603609 nova_compute[221550]: 2026-01-31 08:01:17.451 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:17 np0005603609 nova_compute[221550]: 2026-01-31 08:01:17.451 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:17 np0005603609 nova_compute[221550]: 2026-01-31 08:01:17.695 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:01:17 np0005603609 nova_compute[221550]: 2026-01-31 08:01:17.695 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:01:17 np0005603609 nova_compute[221550]: 2026-01-31 08:01:17.695 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:01:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:01:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:17.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:01:17 np0005603609 nova_compute[221550]: 2026-01-31 08:01:17.775 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:01:18 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/212975079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:01:18 np0005603609 nova_compute[221550]: 2026-01-31 08:01:18.215 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:01:18 np0005603609 nova_compute[221550]: 2026-01-31 08:01:18.222 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:01:18 np0005603609 nova_compute[221550]: 2026-01-31 08:01:18.272 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:01:18 np0005603609 nova_compute[221550]: 2026-01-31 08:01:18.371 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:01:18 np0005603609 nova_compute[221550]: 2026-01-31 08:01:18.371 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:18 np0005603609 nova_compute[221550]: 2026-01-31 08:01:18.589 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:19 np0005603609 nova_compute[221550]: 2026-01-31 08:01:19.372 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:19 np0005603609 nova_compute[221550]: 2026-01-31 08:01:19.372 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:01:19 np0005603609 nova_compute[221550]: 2026-01-31 08:01:19.373 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:01:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:19.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:19 np0005603609 nova_compute[221550]: 2026-01-31 08:01:19.429 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-8b3d7088-3fbb-4615-9d5a-92c65bac3be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:01:19 np0005603609 nova_compute[221550]: 2026-01-31 08:01:19.430 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-8b3d7088-3fbb-4615-9d5a-92c65bac3be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:01:19 np0005603609 nova_compute[221550]: 2026-01-31 08:01:19.430 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:01:19 np0005603609 nova_compute[221550]: 2026-01-31 08:01:19.430 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:01:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:01:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:19.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:01:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:20 np0005603609 podman[255103]: 2026-01-31 08:01:20.213175073 +0000 UTC m=+0.088932341 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 03:01:20 np0005603609 podman[255104]: 2026-01-31 08:01:20.21348361 +0000 UTC m=+0.083660454 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:01:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:21.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:01:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:21.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:01:21 np0005603609 nova_compute[221550]: 2026-01-31 08:01:21.838 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:23 np0005603609 nova_compute[221550]: 2026-01-31 08:01:23.034 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:01:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:23.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:23 np0005603609 nova_compute[221550]: 2026-01-31 08:01:23.591 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:01:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:23.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:01:24 np0005603609 nova_compute[221550]: 2026-01-31 08:01:24.022 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Updating instance_info_cache with network_info: [{"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:01:24 np0005603609 nova_compute[221550]: 2026-01-31 08:01:24.097 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-8b3d7088-3fbb-4615-9d5a-92c65bac3be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:01:24 np0005603609 nova_compute[221550]: 2026-01-31 08:01:24.098 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:01:24 np0005603609 nova_compute[221550]: 2026-01-31 08:01:24.098 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:24 np0005603609 nova_compute[221550]: 2026-01-31 08:01:24.098 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:24 np0005603609 nova_compute[221550]: 2026-01-31 08:01:24.098 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:01:24 np0005603609 nova_compute[221550]: 2026-01-31 08:01:24.098 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:24 np0005603609 nova_compute[221550]: 2026-01-31 08:01:24.099 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:01:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:25.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:25 np0005603609 nova_compute[221550]: 2026-01-31 08:01:25.420 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:01:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:25.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:01:26 np0005603609 kernel: tap0690c15a-98 (unregistering): left promiscuous mode
Jan 31 03:01:26 np0005603609 NetworkManager[49064]: <info>  [1769846486.0339] device (tap0690c15a-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:01:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:01:26Z|00310|binding|INFO|Releasing lport 0690c15a-98ed-4f55-839f-bd15b85cc81a from this chassis (sb_readonly=0)
Jan 31 03:01:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:01:26Z|00311|binding|INFO|Setting lport 0690c15a-98ed-4f55-839f-bd15b85cc81a down in Southbound
Jan 31 03:01:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:01:26Z|00312|binding|INFO|Removing iface tap0690c15a-98 ovn-installed in OVS
Jan 31 03:01:26 np0005603609 nova_compute[221550]: 2026-01-31 08:01:26.079 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:26 np0005603609 nova_compute[221550]: 2026-01-31 08:01:26.082 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:26 np0005603609 nova_compute[221550]: 2026-01-31 08:01:26.089 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:26 np0005603609 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000059.scope: Deactivated successfully.
Jan 31 03:01:26 np0005603609 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000059.scope: Consumed 13.414s CPU time.
Jan 31 03:01:26 np0005603609 systemd-machined[190912]: Machine qemu-40-instance-00000059 terminated.
Jan 31 03:01:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:26.120 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:87:6f 10.100.0.9'], port_security=['fa:16:3e:a3:87:6f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8b3d7088-3fbb-4615-9d5a-92c65bac3be2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=0690c15a-98ed-4f55-839f-bd15b85cc81a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:01:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:26.121 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 0690c15a-98ed-4f55-839f-bd15b85cc81a in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 unbound from our chassis#033[00m
Jan 31 03:01:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:26.123 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 121329c8-2359-4e9d-9f2b-4932f8740470, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:01:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:26.124 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d950e815-8130-48ba-bc66-763eabf7cab9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:26.125 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace which is not needed anymore#033[00m
Jan 31 03:01:26 np0005603609 virtqemud[221292]: cannot parse process status data
Jan 31 03:01:26 np0005603609 nova_compute[221550]: 2026-01-31 08:01:26.281 221554 INFO nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Instance shutdown successfully after 13 seconds.#033[00m
Jan 31 03:01:26 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[255030]: [NOTICE]   (255034) : haproxy version is 2.8.14-c23fe91
Jan 31 03:01:26 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[255030]: [NOTICE]   (255034) : path to executable is /usr/sbin/haproxy
Jan 31 03:01:26 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[255030]: [WARNING]  (255034) : Exiting Master process...
Jan 31 03:01:26 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[255030]: [ALERT]    (255034) : Current worker (255036) exited with code 143 (Terminated)
Jan 31 03:01:26 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[255030]: [WARNING]  (255034) : All workers exited. Exiting... (0)
Jan 31 03:01:26 np0005603609 systemd[1]: libpod-69290a49b720a776a4ab5ccf14f04a6de17d471a167dcbf5517f0e3edd9b4772.scope: Deactivated successfully.
Jan 31 03:01:26 np0005603609 podman[255172]: 2026-01-31 08:01:26.307383386 +0000 UTC m=+0.102580292 container died 69290a49b720a776a4ab5ccf14f04a6de17d471a167dcbf5517f0e3edd9b4772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 03:01:26 np0005603609 nova_compute[221550]: 2026-01-31 08:01:26.308 221554 INFO nova.virt.libvirt.driver [-] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Instance destroyed successfully.#033[00m
Jan 31 03:01:26 np0005603609 nova_compute[221550]: 2026-01-31 08:01:26.317 221554 INFO nova.virt.libvirt.driver [-] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Instance destroyed successfully.#033[00m
Jan 31 03:01:26 np0005603609 nova_compute[221550]: 2026-01-31 08:01:26.318 221554 DEBUG nova.virt.libvirt.vif [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1870360892',display_name='tempest-ServerDiskConfigTestJSON-server-1870360892',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1870360892',id=89,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-3kh0tikj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:01:10Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=8b3d7088-3fbb-4615-9d5a-92c65bac3be2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:01:26 np0005603609 nova_compute[221550]: 2026-01-31 08:01:26.319 221554 DEBUG nova.network.os_vif_util [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:01:26 np0005603609 nova_compute[221550]: 2026-01-31 08:01:26.320 221554 DEBUG nova.network.os_vif_util [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:87:6f,bridge_name='br-int',has_traffic_filtering=True,id=0690c15a-98ed-4f55-839f-bd15b85cc81a,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690c15a-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:01:26 np0005603609 nova_compute[221550]: 2026-01-31 08:01:26.321 221554 DEBUG os_vif [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:87:6f,bridge_name='br-int',has_traffic_filtering=True,id=0690c15a-98ed-4f55-839f-bd15b85cc81a,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690c15a-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:01:26 np0005603609 nova_compute[221550]: 2026-01-31 08:01:26.323 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:26 np0005603609 nova_compute[221550]: 2026-01-31 08:01:26.323 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0690c15a-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:26 np0005603609 nova_compute[221550]: 2026-01-31 08:01:26.325 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:26 np0005603609 nova_compute[221550]: 2026-01-31 08:01:26.329 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:01:26 np0005603609 nova_compute[221550]: 2026-01-31 08:01:26.332 221554 INFO os_vif [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:87:6f,bridge_name='br-int',has_traffic_filtering=True,id=0690c15a-98ed-4f55-839f-bd15b85cc81a,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690c15a-98')#033[00m
Jan 31 03:01:26 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-69290a49b720a776a4ab5ccf14f04a6de17d471a167dcbf5517f0e3edd9b4772-userdata-shm.mount: Deactivated successfully.
Jan 31 03:01:26 np0005603609 systemd[1]: var-lib-containers-storage-overlay-30cfda97ac77299373543ae704dc5e343581e76752103d5bf0b0a436b6beb8ce-merged.mount: Deactivated successfully.
Jan 31 03:01:26 np0005603609 podman[255172]: 2026-01-31 08:01:26.861490195 +0000 UTC m=+0.656687131 container cleanup 69290a49b720a776a4ab5ccf14f04a6de17d471a167dcbf5517f0e3edd9b4772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:01:26 np0005603609 systemd[1]: libpod-conmon-69290a49b720a776a4ab5ccf14f04a6de17d471a167dcbf5517f0e3edd9b4772.scope: Deactivated successfully.
Jan 31 03:01:27 np0005603609 podman[255232]: 2026-01-31 08:01:27.180367412 +0000 UTC m=+0.280237438 container remove 69290a49b720a776a4ab5ccf14f04a6de17d471a167dcbf5517f0e3edd9b4772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 03:01:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:27.185 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ffff1b55-742b-4507-a616-4832a443dfdc]: (4, ('Sat Jan 31 08:01:26 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (69290a49b720a776a4ab5ccf14f04a6de17d471a167dcbf5517f0e3edd9b4772)\n69290a49b720a776a4ab5ccf14f04a6de17d471a167dcbf5517f0e3edd9b4772\nSat Jan 31 08:01:26 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (69290a49b720a776a4ab5ccf14f04a6de17d471a167dcbf5517f0e3edd9b4772)\n69290a49b720a776a4ab5ccf14f04a6de17d471a167dcbf5517f0e3edd9b4772\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:27.187 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c4021355-e89a-4ae6-80c5-416124943603]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:27.188 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:27 np0005603609 nova_compute[221550]: 2026-01-31 08:01:27.190 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:27 np0005603609 kernel: tap121329c8-20: left promiscuous mode
Jan 31 03:01:27 np0005603609 nova_compute[221550]: 2026-01-31 08:01:27.196 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:27.200 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef844b2-d0f5-4e28-88c4-bf93b1ff4663]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:27.230 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[34f2ff70-a0ba-418e-9cc1-6132f92c1022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:27.232 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[093efca2-0757-400d-9a9d-865751b7ef5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:27.250 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7d10688c-6823-43b5-8845-8d70f25e6bae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669973, 'reachable_time': 29259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255247, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:27.252 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:01:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:27.252 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[672096ac-358d-4a3e-ae4b-4125635808f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:27 np0005603609 systemd[1]: run-netns-ovnmeta\x2d121329c8\x2d2359\x2d4e9d\x2d9f2b\x2d4932f8740470.mount: Deactivated successfully.
Jan 31 03:01:27 np0005603609 nova_compute[221550]: 2026-01-31 08:01:27.360 221554 DEBUG nova.compute.manager [req-c5ba7770-9257-4220-842f-0b24617e1451 req-b297972c-6c68-479a-83ec-3eb962058c82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Received event network-vif-unplugged-0690c15a-98ed-4f55-839f-bd15b85cc81a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:01:27 np0005603609 nova_compute[221550]: 2026-01-31 08:01:27.361 221554 DEBUG oslo_concurrency.lockutils [req-c5ba7770-9257-4220-842f-0b24617e1451 req-b297972c-6c68-479a-83ec-3eb962058c82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:27 np0005603609 nova_compute[221550]: 2026-01-31 08:01:27.361 221554 DEBUG oslo_concurrency.lockutils [req-c5ba7770-9257-4220-842f-0b24617e1451 req-b297972c-6c68-479a-83ec-3eb962058c82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:27 np0005603609 nova_compute[221550]: 2026-01-31 08:01:27.361 221554 DEBUG oslo_concurrency.lockutils [req-c5ba7770-9257-4220-842f-0b24617e1451 req-b297972c-6c68-479a-83ec-3eb962058c82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:27 np0005603609 nova_compute[221550]: 2026-01-31 08:01:27.361 221554 DEBUG nova.compute.manager [req-c5ba7770-9257-4220-842f-0b24617e1451 req-b297972c-6c68-479a-83ec-3eb962058c82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] No waiting events found dispatching network-vif-unplugged-0690c15a-98ed-4f55-839f-bd15b85cc81a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:01:27 np0005603609 nova_compute[221550]: 2026-01-31 08:01:27.361 221554 WARNING nova.compute.manager [req-c5ba7770-9257-4220-842f-0b24617e1451 req-b297972c-6c68-479a-83ec-3eb962058c82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Received unexpected event network-vif-unplugged-0690c15a-98ed-4f55-839f-bd15b85cc81a for instance with vm_state active and task_state rebuilding.#033[00m
Jan 31 03:01:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:27.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:01:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:27.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:01:27 np0005603609 nova_compute[221550]: 2026-01-31 08:01:27.961 221554 INFO nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Deleting instance files /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2_del#033[00m
Jan 31 03:01:27 np0005603609 nova_compute[221550]: 2026-01-31 08:01:27.962 221554 INFO nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Deletion of /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2_del complete#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.190 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.191 221554 INFO nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Creating image(s)#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.217 221554 DEBUG nova.storage.rbd_utils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.247 221554 DEBUG nova.storage.rbd_utils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.276 221554 DEBUG nova.storage.rbd_utils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.280 221554 DEBUG oslo_concurrency.processutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.337 221554 DEBUG oslo_concurrency.processutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.339 221554 DEBUG oslo_concurrency.lockutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.340 221554 DEBUG oslo_concurrency.lockutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.340 221554 DEBUG oslo_concurrency.lockutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.367 221554 DEBUG nova.storage.rbd_utils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.371 221554 DEBUG oslo_concurrency.processutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.594 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.642 221554 DEBUG oslo_concurrency.processutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.710 221554 DEBUG nova.storage.rbd_utils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] resizing rbd image 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.822 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.823 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Ensure instance console log exists: /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.824 221554 DEBUG oslo_concurrency.lockutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.824 221554 DEBUG oslo_concurrency.lockutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.824 221554 DEBUG oslo_concurrency.lockutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.826 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Start _get_guest_xml network_info=[{"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.829 221554 WARNING nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.836 221554 DEBUG nova.virt.libvirt.host [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.837 221554 DEBUG nova.virt.libvirt.host [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.841 221554 DEBUG nova.virt.libvirt.host [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.842 221554 DEBUG nova.virt.libvirt.host [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.843 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.843 221554 DEBUG nova.virt.hardware [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.843 221554 DEBUG nova.virt.hardware [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.843 221554 DEBUG nova.virt.hardware [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.844 221554 DEBUG nova.virt.hardware [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.844 221554 DEBUG nova.virt.hardware [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.844 221554 DEBUG nova.virt.hardware [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.844 221554 DEBUG nova.virt.hardware [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.845 221554 DEBUG nova.virt.hardware [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.845 221554 DEBUG nova.virt.hardware [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.845 221554 DEBUG nova.virt.hardware [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.845 221554 DEBUG nova.virt.hardware [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.846 221554 DEBUG nova.objects.instance [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:01:28 np0005603609 nova_compute[221550]: 2026-01-31 08:01:28.963 221554 DEBUG oslo_concurrency.processutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:01:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:29.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:01:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:01:29 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3439344745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.418 221554 DEBUG oslo_concurrency.processutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.444 221554 DEBUG nova.storage.rbd_utils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.448 221554 DEBUG oslo_concurrency.processutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.508 221554 DEBUG nova.compute.manager [req-864d8fa1-e0c8-4941-bb8d-5a971bec6af6 req-fd834aec-fc6e-4903-8714-1df10c4b7872 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Received event network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.509 221554 DEBUG oslo_concurrency.lockutils [req-864d8fa1-e0c8-4941-bb8d-5a971bec6af6 req-fd834aec-fc6e-4903-8714-1df10c4b7872 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.509 221554 DEBUG oslo_concurrency.lockutils [req-864d8fa1-e0c8-4941-bb8d-5a971bec6af6 req-fd834aec-fc6e-4903-8714-1df10c4b7872 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.510 221554 DEBUG oslo_concurrency.lockutils [req-864d8fa1-e0c8-4941-bb8d-5a971bec6af6 req-fd834aec-fc6e-4903-8714-1df10c4b7872 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.510 221554 DEBUG nova.compute.manager [req-864d8fa1-e0c8-4941-bb8d-5a971bec6af6 req-fd834aec-fc6e-4903-8714-1df10c4b7872 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] No waiting events found dispatching network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.510 221554 WARNING nova.compute.manager [req-864d8fa1-e0c8-4941-bb8d-5a971bec6af6 req-fd834aec-fc6e-4903-8714-1df10c4b7872 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Received unexpected event network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 31 03:01:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:01:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:29.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:01:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:01:29 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/824275673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.859 221554 DEBUG oslo_concurrency.processutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.860 221554 DEBUG nova.virt.libvirt.vif [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1870360892',display_name='tempest-ServerDiskConfigTestJSON-server-1870360892',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1870360892',id=89,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-3kh0tikj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempe
st-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:01:28Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=8b3d7088-3fbb-4615-9d5a-92c65bac3be2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.861 221554 DEBUG nova.network.os_vif_util [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.862 221554 DEBUG nova.network.os_vif_util [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:87:6f,bridge_name='br-int',has_traffic_filtering=True,id=0690c15a-98ed-4f55-839f-bd15b85cc81a,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690c15a-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.864 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  <uuid>8b3d7088-3fbb-4615-9d5a-92c65bac3be2</uuid>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  <name>instance-00000059</name>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1870360892</nova:name>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:01:28</nova:creationTime>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        <nova:user uuid="63e95edea0164ae2a9820dc10467335d">tempest-ServerDiskConfigTestJSON-984925022-project-member</nova:user>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        <nova:project uuid="be74d11d2f5a4d9aae2dbe32c31ad9c3">tempest-ServerDiskConfigTestJSON-984925022</nova:project>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="40cf2ff3-f7ff-4843-b4ab-b7dcc843006f"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        <nova:port uuid="0690c15a-98ed-4f55-839f-bd15b85cc81a">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <entry name="serial">8b3d7088-3fbb-4615-9d5a-92c65bac3be2</entry>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <entry name="uuid">8b3d7088-3fbb-4615-9d5a-92c65bac3be2</entry>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk.config">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:a3:87:6f"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <target dev="tap0690c15a-98"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2/console.log" append="off"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:01:29 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:01:29 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:01:29 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:01:29 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.865 221554 DEBUG nova.compute.manager [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Preparing to wait for external event network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.865 221554 DEBUG oslo_concurrency.lockutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.866 221554 DEBUG oslo_concurrency.lockutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.866 221554 DEBUG oslo_concurrency.lockutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.866 221554 DEBUG nova.virt.libvirt.vif [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1870360892',display_name='tempest-ServerDiskConfigTestJSON-server-1870360892',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1870360892',id=89,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-3kh0tikj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:01:28Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=8b3d7088-3fbb-4615-9d5a-92c65bac3be2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.866 221554 DEBUG nova.network.os_vif_util [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.867 221554 DEBUG nova.network.os_vif_util [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:87:6f,bridge_name='br-int',has_traffic_filtering=True,id=0690c15a-98ed-4f55-839f-bd15b85cc81a,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690c15a-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.867 221554 DEBUG os_vif [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:87:6f,bridge_name='br-int',has_traffic_filtering=True,id=0690c15a-98ed-4f55-839f-bd15b85cc81a,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690c15a-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.868 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.868 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.869 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.870 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.870 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0690c15a-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.871 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0690c15a-98, col_values=(('external_ids', {'iface-id': '0690c15a-98ed-4f55-839f-bd15b85cc81a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:87:6f', 'vm-uuid': '8b3d7088-3fbb-4615-9d5a-92c65bac3be2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.918 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:29 np0005603609 NetworkManager[49064]: <info>  [1769846489.9191] manager: (tap0690c15a-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.921 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.922 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:29 np0005603609 nova_compute[221550]: 2026-01-31 08:01:29.923 221554 INFO os_vif [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:87:6f,bridge_name='br-int',has_traffic_filtering=True,id=0690c15a-98ed-4f55-839f-bd15b85cc81a,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690c15a-98')#033[00m
Jan 31 03:01:30 np0005603609 nova_compute[221550]: 2026-01-31 08:01:30.017 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:01:30 np0005603609 nova_compute[221550]: 2026-01-31 08:01:30.017 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:01:30 np0005603609 nova_compute[221550]: 2026-01-31 08:01:30.018 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No VIF found with MAC fa:16:3e:a3:87:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:01:30 np0005603609 nova_compute[221550]: 2026-01-31 08:01:30.018 221554 INFO nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Using config drive#033[00m
Jan 31 03:01:30 np0005603609 nova_compute[221550]: 2026-01-31 08:01:30.041 221554 DEBUG nova.storage.rbd_utils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:01:30 np0005603609 nova_compute[221550]: 2026-01-31 08:01:30.075 221554 DEBUG nova.objects.instance [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:01:30 np0005603609 nova_compute[221550]: 2026-01-31 08:01:30.128 221554 DEBUG nova.objects.instance [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'keypairs' on Instance uuid 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:01:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:31 np0005603609 nova_compute[221550]: 2026-01-31 08:01:31.216 221554 INFO nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Creating config drive at /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2/disk.config#033[00m
Jan 31 03:01:31 np0005603609 nova_compute[221550]: 2026-01-31 08:01:31.222 221554 DEBUG oslo_concurrency.processutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1ienie4y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:31 np0005603609 nova_compute[221550]: 2026-01-31 08:01:31.346 221554 DEBUG oslo_concurrency.processutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1ienie4y" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:01:31 np0005603609 nova_compute[221550]: 2026-01-31 08:01:31.374 221554 DEBUG nova.storage.rbd_utils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:01:31 np0005603609 nova_compute[221550]: 2026-01-31 08:01:31.377 221554 DEBUG oslo_concurrency.processutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2/disk.config 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:31.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:31 np0005603609 nova_compute[221550]: 2026-01-31 08:01:31.554 221554 DEBUG oslo_concurrency.processutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2/disk.config 8b3d7088-3fbb-4615-9d5a-92c65bac3be2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:01:31 np0005603609 nova_compute[221550]: 2026-01-31 08:01:31.555 221554 INFO nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Deleting local config drive /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2/disk.config because it was imported into RBD.#033[00m
Jan 31 03:01:31 np0005603609 kernel: tap0690c15a-98: entered promiscuous mode
Jan 31 03:01:31 np0005603609 NetworkManager[49064]: <info>  [1769846491.6125] manager: (tap0690c15a-98): new Tun device (/org/freedesktop/NetworkManager/Devices/154)
Jan 31 03:01:31 np0005603609 ovn_controller[130359]: 2026-01-31T08:01:31Z|00313|binding|INFO|Claiming lport 0690c15a-98ed-4f55-839f-bd15b85cc81a for this chassis.
Jan 31 03:01:31 np0005603609 ovn_controller[130359]: 2026-01-31T08:01:31Z|00314|binding|INFO|0690c15a-98ed-4f55-839f-bd15b85cc81a: Claiming fa:16:3e:a3:87:6f 10.100.0.9
Jan 31 03:01:31 np0005603609 nova_compute[221550]: 2026-01-31 08:01:31.614 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:31 np0005603609 nova_compute[221550]: 2026-01-31 08:01:31.622 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:31 np0005603609 ovn_controller[130359]: 2026-01-31T08:01:31Z|00315|binding|INFO|Setting lport 0690c15a-98ed-4f55-839f-bd15b85cc81a ovn-installed in OVS
Jan 31 03:01:31 np0005603609 nova_compute[221550]: 2026-01-31 08:01:31.625 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:31 np0005603609 nova_compute[221550]: 2026-01-31 08:01:31.628 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:31 np0005603609 systemd-machined[190912]: New machine qemu-41-instance-00000059.
Jan 31 03:01:31 np0005603609 systemd[1]: Started Virtual Machine qemu-41-instance-00000059.
Jan 31 03:01:31 np0005603609 systemd-udevd[255552]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:01:31 np0005603609 NetworkManager[49064]: <info>  [1769846491.6758] device (tap0690c15a-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:01:31 np0005603609 NetworkManager[49064]: <info>  [1769846491.6774] device (tap0690c15a-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:01:31 np0005603609 ovn_controller[130359]: 2026-01-31T08:01:31Z|00316|binding|INFO|Setting lport 0690c15a-98ed-4f55-839f-bd15b85cc81a up in Southbound
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.701 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:87:6f 10.100.0.9'], port_security=['fa:16:3e:a3:87:6f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8b3d7088-3fbb-4615-9d5a-92c65bac3be2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=0690c15a-98ed-4f55-839f-bd15b85cc81a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.704 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 0690c15a-98ed-4f55-839f-bd15b85cc81a in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 bound to our chassis#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.707 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 121329c8-2359-4e9d-9f2b-4932f8740470#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.720 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ba69cd0b-495c-4745-87a1-51afbd272068]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.721 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap121329c8-21 in ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.724 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap121329c8-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.724 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3855bcd0-451e-4b62-b0eb-a56356197bbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.726 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b64963b2-0538-40f0-a25f-5ebf0bd2d16b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.737 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[494dcdef-d243-4db6-b8b6-5231e3213db6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.765 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ed700b30-4d58-4f54-a26b-d998b9ed0b1f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.788 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4fde2e-e49a-458f-a227-9dbfc1ace07b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:31.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.793 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d61c3e3e-7e71-446c-b9b0-7ce160c7fd38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:31 np0005603609 systemd-udevd[255558]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:01:31 np0005603609 NetworkManager[49064]: <info>  [1769846491.7956] manager: (tap121329c8-20): new Veth device (/org/freedesktop/NetworkManager/Devices/155)
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.820 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[e8fd487c-0c5d-4a0e-878c-f1f1942fed3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.823 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[ef440574-5a1f-4ce3-84e7-0e359026c564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:31 np0005603609 NetworkManager[49064]: <info>  [1769846491.8453] device (tap121329c8-20): carrier: link connected
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.853 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[1c43365b-3c89-4d77-af47-bdf33bba7b99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.868 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[022724fa-32e7-41cb-9868-13456691b318]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673154, 'reachable_time': 28320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255585, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.888 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f66d3da0-de0b-4a93-8ef0-251ad86d3226]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:a3c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673154, 'tstamp': 673154}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255586, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.904 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a701b855-2841-474d-b913-7b022e74d7fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673154, 'reachable_time': 28320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255602, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.939 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[544029d7-ade3-4e6b-b1b2-819ab936e6b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.988 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[98fcc376-d6b6-43d8-8346-9565289ff364]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.990 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.990 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.991 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap121329c8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:31 np0005603609 kernel: tap121329c8-20: entered promiscuous mode
Jan 31 03:01:31 np0005603609 NetworkManager[49064]: <info>  [1769846491.9937] manager: (tap121329c8-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Jan 31 03:01:31 np0005603609 nova_compute[221550]: 2026-01-31 08:01:31.993 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:31 np0005603609 nova_compute[221550]: 2026-01-31 08:01:31.996 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:31.997 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap121329c8-20, col_values=(('external_ids', {'iface-id': 'e59d8348-5c5c-4c82-ba21-91f3a512c65e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:31 np0005603609 nova_compute[221550]: 2026-01-31 08:01:31.998 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:31 np0005603609 ovn_controller[130359]: 2026-01-31T08:01:31Z|00317|binding|INFO|Releasing lport e59d8348-5c5c-4c82-ba21-91f3a512c65e from this chassis (sb_readonly=0)
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:32.001 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:32.003 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[478ae93c-86b1-4dcc-bca7-2528d55e9ed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:32.005 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.005 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:32.006 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'env', 'PROCESS_TAG=haproxy-121329c8-2359-4e9d-9f2b-4932f8740470', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/121329c8-2359-4e9d-9f2b-4932f8740470.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.064 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.064 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846492.063353, 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.065 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] VM Started (Lifecycle Event)#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.119 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.122 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846492.063845, 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.122 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.211 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.214 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:01:32 np0005603609 podman[255661]: 2026-01-31 08:01:32.319219495 +0000 UTC m=+0.016757235 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.614 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 03:01:32 np0005603609 podman[255661]: 2026-01-31 08:01:32.808794711 +0000 UTC m=+0.506332431 container create 4ee3dddc74002aea41783bddf7b1eb881333550820e0bc95b4085b5ef8824b74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 03:01:32 np0005603609 systemd[1]: Started libpod-conmon-4ee3dddc74002aea41783bddf7b1eb881333550820e0bc95b4085b5ef8824b74.scope.
Jan 31 03:01:32 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:01:32 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f7e4f8120eef188c7bec6e123cc390781a2a2c0ff227423f79a2b7e9d52d870/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:01:32 np0005603609 podman[255661]: 2026-01-31 08:01:32.94168313 +0000 UTC m=+0.639220870 container init 4ee3dddc74002aea41783bddf7b1eb881333550820e0bc95b4085b5ef8824b74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:01:32 np0005603609 podman[255661]: 2026-01-31 08:01:32.945419121 +0000 UTC m=+0.642956841 container start 4ee3dddc74002aea41783bddf7b1eb881333550820e0bc95b4085b5ef8824b74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.959 221554 DEBUG nova.compute.manager [req-d93ac0f5-63af-4fd3-9ced-b91b8f2824a8 req-c3cc9c8e-724e-46c0-a07a-ce9621559978 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Received event network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.959 221554 DEBUG oslo_concurrency.lockutils [req-d93ac0f5-63af-4fd3-9ced-b91b8f2824a8 req-c3cc9c8e-724e-46c0-a07a-ce9621559978 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.959 221554 DEBUG oslo_concurrency.lockutils [req-d93ac0f5-63af-4fd3-9ced-b91b8f2824a8 req-c3cc9c8e-724e-46c0-a07a-ce9621559978 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.960 221554 DEBUG oslo_concurrency.lockutils [req-d93ac0f5-63af-4fd3-9ced-b91b8f2824a8 req-c3cc9c8e-724e-46c0-a07a-ce9621559978 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.960 221554 DEBUG nova.compute.manager [req-d93ac0f5-63af-4fd3-9ced-b91b8f2824a8 req-c3cc9c8e-724e-46c0-a07a-ce9621559978 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Processing event network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.960 221554 DEBUG nova.compute.manager [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.964 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846492.9641695, 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:01:32 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[255676]: [NOTICE]   (255680) : New worker (255682) forked
Jan 31 03:01:32 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[255676]: [NOTICE]   (255680) : Loading success.
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.965 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.968 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.971 221554 INFO nova.virt.libvirt.driver [-] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Instance spawned successfully.#033[00m
Jan 31 03:01:32 np0005603609 nova_compute[221550]: 2026-01-31 08:01:32.971 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:01:33 np0005603609 nova_compute[221550]: 2026-01-31 08:01:33.208 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:01:33 np0005603609 nova_compute[221550]: 2026-01-31 08:01:33.214 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:01:33 np0005603609 nova_compute[221550]: 2026-01-31 08:01:33.216 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:01:33 np0005603609 nova_compute[221550]: 2026-01-31 08:01:33.217 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:01:33 np0005603609 nova_compute[221550]: 2026-01-31 08:01:33.217 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:01:33 np0005603609 nova_compute[221550]: 2026-01-31 08:01:33.218 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:01:33 np0005603609 nova_compute[221550]: 2026-01-31 08:01:33.218 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:01:33 np0005603609 nova_compute[221550]: 2026-01-31 08:01:33.219 221554 DEBUG nova.virt.libvirt.driver [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:01:33 np0005603609 nova_compute[221550]: 2026-01-31 08:01:33.391 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 03:01:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:33.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:33 np0005603609 nova_compute[221550]: 2026-01-31 08:01:33.538 221554 DEBUG nova.compute.manager [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:01:33 np0005603609 nova_compute[221550]: 2026-01-31 08:01:33.595 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:33 np0005603609 nova_compute[221550]: 2026-01-31 08:01:33.702 221554 DEBUG oslo_concurrency.lockutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:33 np0005603609 nova_compute[221550]: 2026-01-31 08:01:33.704 221554 DEBUG oslo_concurrency.lockutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:33 np0005603609 nova_compute[221550]: 2026-01-31 08:01:33.704 221554 DEBUG nova.objects.instance [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 03:01:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:33.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:34 np0005603609 nova_compute[221550]: 2026-01-31 08:01:34.897 221554 DEBUG oslo_concurrency.lockutils [None req-77d57500-1ab6-4033-85cf-618186014df9 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 1.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:34 np0005603609 nova_compute[221550]: 2026-01-31 08:01:34.918 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:01:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:35.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:01:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:35.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:37 np0005603609 nova_compute[221550]: 2026-01-31 08:01:37.062 221554 DEBUG nova.compute.manager [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Received event network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:01:37 np0005603609 nova_compute[221550]: 2026-01-31 08:01:37.062 221554 DEBUG oslo_concurrency.lockutils [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:37 np0005603609 nova_compute[221550]: 2026-01-31 08:01:37.062 221554 DEBUG oslo_concurrency.lockutils [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:37 np0005603609 nova_compute[221550]: 2026-01-31 08:01:37.062 221554 DEBUG oslo_concurrency.lockutils [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:37 np0005603609 nova_compute[221550]: 2026-01-31 08:01:37.063 221554 DEBUG nova.compute.manager [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] No waiting events found dispatching network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:01:37 np0005603609 nova_compute[221550]: 2026-01-31 08:01:37.063 221554 WARNING nova.compute.manager [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Received unexpected event network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a for instance with vm_state active and task_state None.#033[00m
Jan 31 03:01:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:37.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:37.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:38 np0005603609 nova_compute[221550]: 2026-01-31 08:01:38.597 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:39.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:39 np0005603609 nova_compute[221550]: 2026-01-31 08:01:39.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:39.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:39 np0005603609 nova_compute[221550]: 2026-01-31 08:01:39.961 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:41.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:41 np0005603609 nova_compute[221550]: 2026-01-31 08:01:41.651 221554 DEBUG oslo_concurrency.lockutils [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:41 np0005603609 nova_compute[221550]: 2026-01-31 08:01:41.652 221554 DEBUG oslo_concurrency.lockutils [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:41 np0005603609 nova_compute[221550]: 2026-01-31 08:01:41.652 221554 DEBUG oslo_concurrency.lockutils [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:41 np0005603609 nova_compute[221550]: 2026-01-31 08:01:41.652 221554 DEBUG oslo_concurrency.lockutils [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:41 np0005603609 nova_compute[221550]: 2026-01-31 08:01:41.652 221554 DEBUG oslo_concurrency.lockutils [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:41 np0005603609 nova_compute[221550]: 2026-01-31 08:01:41.653 221554 INFO nova.compute.manager [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Terminating instance#033[00m
Jan 31 03:01:41 np0005603609 nova_compute[221550]: 2026-01-31 08:01:41.654 221554 DEBUG nova.compute.manager [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:01:41 np0005603609 kernel: tap0690c15a-98 (unregistering): left promiscuous mode
Jan 31 03:01:41 np0005603609 NetworkManager[49064]: <info>  [1769846501.6929] device (tap0690c15a-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:01:41 np0005603609 nova_compute[221550]: 2026-01-31 08:01:41.693 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:41 np0005603609 nova_compute[221550]: 2026-01-31 08:01:41.699 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:41 np0005603609 ovn_controller[130359]: 2026-01-31T08:01:41Z|00318|binding|INFO|Releasing lport 0690c15a-98ed-4f55-839f-bd15b85cc81a from this chassis (sb_readonly=0)
Jan 31 03:01:41 np0005603609 ovn_controller[130359]: 2026-01-31T08:01:41Z|00319|binding|INFO|Setting lport 0690c15a-98ed-4f55-839f-bd15b85cc81a down in Southbound
Jan 31 03:01:41 np0005603609 ovn_controller[130359]: 2026-01-31T08:01:41Z|00320|binding|INFO|Removing iface tap0690c15a-98 ovn-installed in OVS
Jan 31 03:01:41 np0005603609 nova_compute[221550]: 2026-01-31 08:01:41.706 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:41.736 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:87:6f 10.100.0.9'], port_security=['fa:16:3e:a3:87:6f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8b3d7088-3fbb-4615-9d5a-92c65bac3be2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=0690c15a-98ed-4f55-839f-bd15b85cc81a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:01:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:41.738 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 0690c15a-98ed-4f55-839f-bd15b85cc81a in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 unbound from our chassis#033[00m
Jan 31 03:01:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:41.739 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 121329c8-2359-4e9d-9f2b-4932f8740470, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:01:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:41.740 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a19f1bdb-bc7d-4d74-844d-f0b6f1fedcba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:41.740 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace which is not needed anymore#033[00m
Jan 31 03:01:41 np0005603609 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000059.scope: Deactivated successfully.
Jan 31 03:01:41 np0005603609 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000059.scope: Consumed 9.347s CPU time.
Jan 31 03:01:41 np0005603609 systemd-machined[190912]: Machine qemu-41-instance-00000059 terminated.
Jan 31 03:01:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:41.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:41 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[255676]: [NOTICE]   (255680) : haproxy version is 2.8.14-c23fe91
Jan 31 03:01:41 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[255676]: [NOTICE]   (255680) : path to executable is /usr/sbin/haproxy
Jan 31 03:01:41 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[255676]: [WARNING]  (255680) : Exiting Master process...
Jan 31 03:01:41 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[255676]: [ALERT]    (255680) : Current worker (255682) exited with code 143 (Terminated)
Jan 31 03:01:41 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[255676]: [WARNING]  (255680) : All workers exited. Exiting... (0)
Jan 31 03:01:41 np0005603609 systemd[1]: libpod-4ee3dddc74002aea41783bddf7b1eb881333550820e0bc95b4085b5ef8824b74.scope: Deactivated successfully.
Jan 31 03:01:41 np0005603609 conmon[255676]: conmon 4ee3dddc74002aea4178 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ee3dddc74002aea41783bddf7b1eb881333550820e0bc95b4085b5ef8824b74.scope/container/memory.events
Jan 31 03:01:41 np0005603609 podman[255716]: 2026-01-31 08:01:41.840088791 +0000 UTC m=+0.040416413 container died 4ee3dddc74002aea41783bddf7b1eb881333550820e0bc95b4085b5ef8824b74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:01:41 np0005603609 nova_compute[221550]: 2026-01-31 08:01:41.871 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:41 np0005603609 nova_compute[221550]: 2026-01-31 08:01:41.873 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:41 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ee3dddc74002aea41783bddf7b1eb881333550820e0bc95b4085b5ef8824b74-userdata-shm.mount: Deactivated successfully.
Jan 31 03:01:41 np0005603609 systemd[1]: var-lib-containers-storage-overlay-8f7e4f8120eef188c7bec6e123cc390781a2a2c0ff227423f79a2b7e9d52d870-merged.mount: Deactivated successfully.
Jan 31 03:01:41 np0005603609 nova_compute[221550]: 2026-01-31 08:01:41.887 221554 INFO nova.virt.libvirt.driver [-] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Instance destroyed successfully.#033[00m
Jan 31 03:01:41 np0005603609 nova_compute[221550]: 2026-01-31 08:01:41.887 221554 DEBUG nova.objects.instance [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'resources' on Instance uuid 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:01:41 np0005603609 podman[255716]: 2026-01-31 08:01:41.894813408 +0000 UTC m=+0.095141030 container cleanup 4ee3dddc74002aea41783bddf7b1eb881333550820e0bc95b4085b5ef8824b74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:01:41 np0005603609 systemd[1]: libpod-conmon-4ee3dddc74002aea41783bddf7b1eb881333550820e0bc95b4085b5ef8824b74.scope: Deactivated successfully.
Jan 31 03:01:41 np0005603609 podman[255755]: 2026-01-31 08:01:41.993792181 +0000 UTC m=+0.079317819 container remove 4ee3dddc74002aea41783bddf7b1eb881333550820e0bc95b4085b5ef8824b74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:01:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:42.000 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[95603b4b-fb0a-44ef-a199-fa1900ba9cb6]: (4, ('Sat Jan 31 08:01:41 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (4ee3dddc74002aea41783bddf7b1eb881333550820e0bc95b4085b5ef8824b74)\n4ee3dddc74002aea41783bddf7b1eb881333550820e0bc95b4085b5ef8824b74\nSat Jan 31 08:01:41 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (4ee3dddc74002aea41783bddf7b1eb881333550820e0bc95b4085b5ef8824b74)\n4ee3dddc74002aea41783bddf7b1eb881333550820e0bc95b4085b5ef8824b74\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:42.005 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[49df8e69-705e-4aaf-b3c1-49c3161148d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:42.007 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:42 np0005603609 nova_compute[221550]: 2026-01-31 08:01:42.010 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:42 np0005603609 kernel: tap121329c8-20: left promiscuous mode
Jan 31 03:01:42 np0005603609 nova_compute[221550]: 2026-01-31 08:01:42.019 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:42 np0005603609 nova_compute[221550]: 2026-01-31 08:01:42.020 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:42.022 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4beb689b-1bdb-45d3-9bc2-d8cd2b2c84eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:42.037 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[eca9d260-b620-492e-bd3b-0b2976c1004f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:42.039 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[45506797-ec20-496a-b848-360be45c92b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:42.055 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f8748eea-437f-4b1a-a11d-de451a91087f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673148, 'reachable_time': 27886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255774, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:42 np0005603609 systemd[1]: run-netns-ovnmeta\x2d121329c8\x2d2359\x2d4e9d\x2d9f2b\x2d4932f8740470.mount: Deactivated successfully.
Jan 31 03:01:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:42.061 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:01:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:01:42.061 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[d58ad36e-3a28-4366-b557-6991e660255b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:43 np0005603609 nova_compute[221550]: 2026-01-31 08:01:43.021 221554 DEBUG nova.virt.libvirt.vif [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1870360892',display_name='tempest-ServerDiskConfigTestJSON-server-1870360892',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1870360892',id=89,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:33Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-3kh0tikj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:01:34Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=8b3d7088-3fbb-4615-9d5a-92c65bac3be2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:01:43 np0005603609 nova_compute[221550]: 2026-01-31 08:01:43.021 221554 DEBUG nova.network.os_vif_util [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "address": "fa:16:3e:a3:87:6f", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690c15a-98", "ovs_interfaceid": "0690c15a-98ed-4f55-839f-bd15b85cc81a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:01:43 np0005603609 nova_compute[221550]: 2026-01-31 08:01:43.022 221554 DEBUG nova.network.os_vif_util [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a3:87:6f,bridge_name='br-int',has_traffic_filtering=True,id=0690c15a-98ed-4f55-839f-bd15b85cc81a,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690c15a-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:01:43 np0005603609 nova_compute[221550]: 2026-01-31 08:01:43.022 221554 DEBUG os_vif [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:87:6f,bridge_name='br-int',has_traffic_filtering=True,id=0690c15a-98ed-4f55-839f-bd15b85cc81a,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690c15a-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:01:43 np0005603609 nova_compute[221550]: 2026-01-31 08:01:43.023 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:43 np0005603609 nova_compute[221550]: 2026-01-31 08:01:43.023 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0690c15a-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:43 np0005603609 nova_compute[221550]: 2026-01-31 08:01:43.025 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:43 np0005603609 nova_compute[221550]: 2026-01-31 08:01:43.026 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:43 np0005603609 nova_compute[221550]: 2026-01-31 08:01:43.029 221554 INFO os_vif [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:87:6f,bridge_name='br-int',has_traffic_filtering=True,id=0690c15a-98ed-4f55-839f-bd15b85cc81a,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690c15a-98')#033[00m
Jan 31 03:01:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:43.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:43 np0005603609 nova_compute[221550]: 2026-01-31 08:01:43.600 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:43 np0005603609 nova_compute[221550]: 2026-01-31 08:01:43.608 221554 INFO nova.virt.libvirt.driver [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Deleting instance files /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2_del#033[00m
Jan 31 03:01:43 np0005603609 nova_compute[221550]: 2026-01-31 08:01:43.609 221554 INFO nova.virt.libvirt.driver [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Deletion of /var/lib/nova/instances/8b3d7088-3fbb-4615-9d5a-92c65bac3be2_del complete#033[00m
Jan 31 03:01:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:43.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:44 np0005603609 nova_compute[221550]: 2026-01-31 08:01:44.316 221554 INFO nova.compute.manager [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Took 2.66 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:01:44 np0005603609 nova_compute[221550]: 2026-01-31 08:01:44.316 221554 DEBUG oslo.service.loopingcall [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:01:44 np0005603609 nova_compute[221550]: 2026-01-31 08:01:44.317 221554 DEBUG nova.compute.manager [-] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:01:44 np0005603609 nova_compute[221550]: 2026-01-31 08:01:44.317 221554 DEBUG nova.network.neutron [-] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:01:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:45.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:01:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:45.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:01:46 np0005603609 nova_compute[221550]: 2026-01-31 08:01:46.346 221554 DEBUG nova.compute.manager [req-562b6aad-ef4d-4572-b25d-7dd12138260b req-4162038c-7d4f-4eae-bb24-aa0b4a3be8f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Received event network-vif-unplugged-0690c15a-98ed-4f55-839f-bd15b85cc81a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:01:46 np0005603609 nova_compute[221550]: 2026-01-31 08:01:46.346 221554 DEBUG oslo_concurrency.lockutils [req-562b6aad-ef4d-4572-b25d-7dd12138260b req-4162038c-7d4f-4eae-bb24-aa0b4a3be8f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:46 np0005603609 nova_compute[221550]: 2026-01-31 08:01:46.347 221554 DEBUG oslo_concurrency.lockutils [req-562b6aad-ef4d-4572-b25d-7dd12138260b req-4162038c-7d4f-4eae-bb24-aa0b4a3be8f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:46 np0005603609 nova_compute[221550]: 2026-01-31 08:01:46.347 221554 DEBUG oslo_concurrency.lockutils [req-562b6aad-ef4d-4572-b25d-7dd12138260b req-4162038c-7d4f-4eae-bb24-aa0b4a3be8f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:46 np0005603609 nova_compute[221550]: 2026-01-31 08:01:46.347 221554 DEBUG nova.compute.manager [req-562b6aad-ef4d-4572-b25d-7dd12138260b req-4162038c-7d4f-4eae-bb24-aa0b4a3be8f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] No waiting events found dispatching network-vif-unplugged-0690c15a-98ed-4f55-839f-bd15b85cc81a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:01:46 np0005603609 nova_compute[221550]: 2026-01-31 08:01:46.347 221554 DEBUG nova.compute.manager [req-562b6aad-ef4d-4572-b25d-7dd12138260b req-4162038c-7d4f-4eae-bb24-aa0b4a3be8f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Received event network-vif-unplugged-0690c15a-98ed-4f55-839f-bd15b85cc81a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:01:47 np0005603609 nova_compute[221550]: 2026-01-31 08:01:47.374 221554 DEBUG nova.network.neutron [-] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:01:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:01:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:47.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:01:47 np0005603609 nova_compute[221550]: 2026-01-31 08:01:47.650 221554 INFO nova.compute.manager [-] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Took 3.33 seconds to deallocate network for instance.#033[00m
Jan 31 03:01:47 np0005603609 nova_compute[221550]: 2026-01-31 08:01:47.664 221554 DEBUG nova.compute.manager [req-364c15fc-4b27-4158-b19b-f80d8a78fdba req-24121375-5f94-4fab-aac7-70de04c7e3da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Received event network-vif-deleted-0690c15a-98ed-4f55-839f-bd15b85cc81a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:01:47 np0005603609 nova_compute[221550]: 2026-01-31 08:01:47.665 221554 INFO nova.compute.manager [req-364c15fc-4b27-4158-b19b-f80d8a78fdba req-24121375-5f94-4fab-aac7-70de04c7e3da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Neutron deleted interface 0690c15a-98ed-4f55-839f-bd15b85cc81a; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:01:47 np0005603609 nova_compute[221550]: 2026-01-31 08:01:47.665 221554 DEBUG nova.network.neutron [req-364c15fc-4b27-4158-b19b-f80d8a78fdba req-24121375-5f94-4fab-aac7-70de04c7e3da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:01:47 np0005603609 nova_compute[221550]: 2026-01-31 08:01:47.738 221554 DEBUG nova.compute.manager [req-364c15fc-4b27-4158-b19b-f80d8a78fdba req-24121375-5f94-4fab-aac7-70de04c7e3da 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Detach interface failed, port_id=0690c15a-98ed-4f55-839f-bd15b85cc81a, reason: Instance 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:01:47 np0005603609 nova_compute[221550]: 2026-01-31 08:01:47.783 221554 DEBUG oslo_concurrency.lockutils [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:47 np0005603609 nova_compute[221550]: 2026-01-31 08:01:47.784 221554 DEBUG oslo_concurrency.lockutils [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:47.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:48 np0005603609 nova_compute[221550]: 2026-01-31 08:01:48.025 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:48 np0005603609 nova_compute[221550]: 2026-01-31 08:01:48.136 221554 DEBUG oslo_concurrency.processutils [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:01:48 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1292270437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:01:48 np0005603609 nova_compute[221550]: 2026-01-31 08:01:48.556 221554 DEBUG oslo_concurrency.processutils [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:01:48 np0005603609 nova_compute[221550]: 2026-01-31 08:01:48.561 221554 DEBUG nova.compute.provider_tree [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:01:48 np0005603609 nova_compute[221550]: 2026-01-31 08:01:48.602 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:48 np0005603609 nova_compute[221550]: 2026-01-31 08:01:48.669 221554 DEBUG nova.compute.manager [req-f98c9835-6b63-4682-af9b-1f92dfa265a1 req-4dffd38d-6ce0-4edc-b384-c6fc05e2f12a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Received event network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:01:48 np0005603609 nova_compute[221550]: 2026-01-31 08:01:48.670 221554 DEBUG oslo_concurrency.lockutils [req-f98c9835-6b63-4682-af9b-1f92dfa265a1 req-4dffd38d-6ce0-4edc-b384-c6fc05e2f12a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:48 np0005603609 nova_compute[221550]: 2026-01-31 08:01:48.670 221554 DEBUG oslo_concurrency.lockutils [req-f98c9835-6b63-4682-af9b-1f92dfa265a1 req-4dffd38d-6ce0-4edc-b384-c6fc05e2f12a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:48 np0005603609 nova_compute[221550]: 2026-01-31 08:01:48.670 221554 DEBUG oslo_concurrency.lockutils [req-f98c9835-6b63-4682-af9b-1f92dfa265a1 req-4dffd38d-6ce0-4edc-b384-c6fc05e2f12a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:48 np0005603609 nova_compute[221550]: 2026-01-31 08:01:48.670 221554 DEBUG nova.compute.manager [req-f98c9835-6b63-4682-af9b-1f92dfa265a1 req-4dffd38d-6ce0-4edc-b384-c6fc05e2f12a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] No waiting events found dispatching network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:01:48 np0005603609 nova_compute[221550]: 2026-01-31 08:01:48.671 221554 WARNING nova.compute.manager [req-f98c9835-6b63-4682-af9b-1f92dfa265a1 req-4dffd38d-6ce0-4edc-b384-c6fc05e2f12a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Received unexpected event network-vif-plugged-0690c15a-98ed-4f55-839f-bd15b85cc81a for instance with vm_state deleted and task_state None.#033[00m
Jan 31 03:01:48 np0005603609 nova_compute[221550]: 2026-01-31 08:01:48.745 221554 DEBUG nova.scheduler.client.report [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:01:48 np0005603609 nova_compute[221550]: 2026-01-31 08:01:48.878 221554 DEBUG oslo_concurrency.lockutils [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:49 np0005603609 nova_compute[221550]: 2026-01-31 08:01:49.023 221554 INFO nova.scheduler.client.report [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Deleted allocations for instance 8b3d7088-3fbb-4615-9d5a-92c65bac3be2#033[00m
Jan 31 03:01:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:49.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:49 np0005603609 nova_compute[221550]: 2026-01-31 08:01:49.568 221554 DEBUG oslo_concurrency.lockutils [None req-39e6bcd9-884d-4f1b-972b-64ffe69d7138 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "8b3d7088-3fbb-4615-9d5a-92c65bac3be2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:49.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:51 np0005603609 podman[255817]: 2026-01-31 08:01:51.200613577 +0000 UTC m=+0.087550489 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:01:51 np0005603609 podman[255818]: 2026-01-31 08:01:51.205948016 +0000 UTC m=+0.084701761 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:01:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:01:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:51.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:01:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:01:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:51.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:01:53 np0005603609 nova_compute[221550]: 2026-01-31 08:01:53.027 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:01:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:53.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:01:53 np0005603609 nova_compute[221550]: 2026-01-31 08:01:53.603 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:53.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:01:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:55.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:01:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:55.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:01:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:01:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:01:56 np0005603609 nova_compute[221550]: 2026-01-31 08:01:56.884 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846501.8834786, 8b3d7088-3fbb-4615-9d5a-92c65bac3be2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:01:56 np0005603609 nova_compute[221550]: 2026-01-31 08:01:56.884 221554 INFO nova.compute.manager [-] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:01:56 np0005603609 nova_compute[221550]: 2026-01-31 08:01:56.920 221554 DEBUG nova.compute.manager [None req-24a68c97-13c1-491d-9fae-bbcbbef68fe7 - - - - - -] [instance: 8b3d7088-3fbb-4615-9d5a-92c65bac3be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:01:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:57.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:57.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:58 np0005603609 nova_compute[221550]: 2026-01-31 08:01:58.030 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:58 np0005603609 nova_compute[221550]: 2026-01-31 08:01:58.631 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:59.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:01:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:01:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:59.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:02:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:01.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:02:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:01.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:02:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:02:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:02:03 np0005603609 nova_compute[221550]: 2026-01-31 08:02:03.033 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:03.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:03 np0005603609 nova_compute[221550]: 2026-01-31 08:02:03.634 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:03.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:05.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:05.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:06.034 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:02:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:06.035 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:02:06 np0005603609 nova_compute[221550]: 2026-01-31 08:02:06.035 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:06.037 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:02:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:07.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:02:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:07.495 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:07.496 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:07.497 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:07.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:08 np0005603609 nova_compute[221550]: 2026-01-31 08:02:08.036 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:08 np0005603609 nova_compute[221550]: 2026-01-31 08:02:08.636 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:09.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:02:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:09.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:02:09 np0005603609 nova_compute[221550]: 2026-01-31 08:02:09.901 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:11.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:11.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:13 np0005603609 nova_compute[221550]: 2026-01-31 08:02:13.038 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:02:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:13.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:02:13 np0005603609 nova_compute[221550]: 2026-01-31 08:02:13.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:13 np0005603609 nova_compute[221550]: 2026-01-31 08:02:13.672 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:13.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:14 np0005603609 nova_compute[221550]: 2026-01-31 08:02:14.775 221554 DEBUG nova.compute.manager [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 31 03:02:15 np0005603609 nova_compute[221550]: 2026-01-31 08:02:15.027 221554 DEBUG oslo_concurrency.lockutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:15 np0005603609 nova_compute[221550]: 2026-01-31 08:02:15.028 221554 DEBUG oslo_concurrency.lockutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:15 np0005603609 nova_compute[221550]: 2026-01-31 08:02:15.220 221554 DEBUG nova.objects.instance [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'pci_requests' on Instance uuid e2e6f697-3383-4887-b25a-c89447e67fc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:15.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:15 np0005603609 nova_compute[221550]: 2026-01-31 08:02:15.682 221554 DEBUG nova.virt.hardware [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:02:15 np0005603609 nova_compute[221550]: 2026-01-31 08:02:15.683 221554 INFO nova.compute.claims [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:02:15 np0005603609 nova_compute[221550]: 2026-01-31 08:02:15.683 221554 DEBUG nova.objects.instance [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'resources' on Instance uuid e2e6f697-3383-4887-b25a-c89447e67fc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:15 np0005603609 nova_compute[221550]: 2026-01-31 08:02:15.760 221554 DEBUG nova.objects.instance [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e2e6f697-3383-4887-b25a-c89447e67fc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:15.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:15 np0005603609 nova_compute[221550]: 2026-01-31 08:02:15.970 221554 INFO nova.compute.resource_tracker [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Updating resource usage from migration 5efdbc6e-b65a-4054-9384-ce6814e3054a#033[00m
Jan 31 03:02:15 np0005603609 nova_compute[221550]: 2026-01-31 08:02:15.971 221554 DEBUG nova.compute.resource_tracker [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Starting to track incoming migration 5efdbc6e-b65a-4054-9384-ce6814e3054a with flavor e3bd1dad-95f3-4ed9-94b4-27245cd798b5 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 31 03:02:16 np0005603609 nova_compute[221550]: 2026-01-31 08:02:16.076 221554 DEBUG oslo_concurrency.processutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:02:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3785980968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:02:16 np0005603609 nova_compute[221550]: 2026-01-31 08:02:16.501 221554 DEBUG oslo_concurrency.processutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:16 np0005603609 nova_compute[221550]: 2026-01-31 08:02:16.509 221554 DEBUG nova.compute.provider_tree [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:02:16 np0005603609 nova_compute[221550]: 2026-01-31 08:02:16.625 221554 DEBUG nova.scheduler.client.report [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:02:16 np0005603609 nova_compute[221550]: 2026-01-31 08:02:16.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:16 np0005603609 nova_compute[221550]: 2026-01-31 08:02:16.816 221554 DEBUG oslo_concurrency.lockutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:16 np0005603609 nova_compute[221550]: 2026-01-31 08:02:16.816 221554 INFO nova.compute.manager [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Migrating#033[00m
Jan 31 03:02:16 np0005603609 nova_compute[221550]: 2026-01-31 08:02:16.921 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "49127a4f-80f7-4729-a1b3-ef852d68862a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:16 np0005603609 nova_compute[221550]: 2026-01-31 08:02:16.921 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:16 np0005603609 nova_compute[221550]: 2026-01-31 08:02:16.955 221554 DEBUG nova.compute.manager [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:02:17 np0005603609 nova_compute[221550]: 2026-01-31 08:02:17.054 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:17 np0005603609 nova_compute[221550]: 2026-01-31 08:02:17.055 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:17 np0005603609 nova_compute[221550]: 2026-01-31 08:02:17.063 221554 DEBUG nova.virt.hardware [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:02:17 np0005603609 nova_compute[221550]: 2026-01-31 08:02:17.063 221554 INFO nova.compute.claims [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:02:17 np0005603609 nova_compute[221550]: 2026-01-31 08:02:17.203 221554 DEBUG oslo_concurrency.processutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:17.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:02:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3615033175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:02:17 np0005603609 nova_compute[221550]: 2026-01-31 08:02:17.651 221554 DEBUG oslo_concurrency.processutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:17 np0005603609 nova_compute[221550]: 2026-01-31 08:02:17.655 221554 DEBUG nova.compute.provider_tree [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:02:17 np0005603609 nova_compute[221550]: 2026-01-31 08:02:17.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:17 np0005603609 nova_compute[221550]: 2026-01-31 08:02:17.811 221554 DEBUG nova.scheduler.client.report [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:02:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:02:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:17.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:02:17 np0005603609 nova_compute[221550]: 2026-01-31 08:02:17.854 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:17 np0005603609 nova_compute[221550]: 2026-01-31 08:02:17.855 221554 DEBUG nova.compute.manager [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:02:17 np0005603609 nova_compute[221550]: 2026-01-31 08:02:17.950 221554 DEBUG nova.compute.manager [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:02:17 np0005603609 nova_compute[221550]: 2026-01-31 08:02:17.951 221554 DEBUG nova.network.neutron [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:17.999 221554 INFO nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.027 221554 DEBUG nova.compute.manager [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.040 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.153 221554 DEBUG nova.compute.manager [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.155 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.156 221554 INFO nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Creating image(s)#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.192 221554 DEBUG nova.storage.rbd_utils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 49127a4f-80f7-4729-a1b3-ef852d68862a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.221 221554 DEBUG nova.storage.rbd_utils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 49127a4f-80f7-4729-a1b3-ef852d68862a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.252 221554 DEBUG nova.storage.rbd_utils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 49127a4f-80f7-4729-a1b3-ef852d68862a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.255 221554 DEBUG oslo_concurrency.processutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.318 221554 DEBUG oslo_concurrency.processutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.319 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.320 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.320 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.346 221554 DEBUG nova.storage.rbd_utils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 49127a4f-80f7-4729-a1b3-ef852d68862a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.349 221554 DEBUG oslo_concurrency.processutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 49127a4f-80f7-4729-a1b3-ef852d68862a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.404 221554 DEBUG nova.policy [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31043e345f6b48b585fb7b8ab7304764', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd352316ff6534075952e2d0c28061b09', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.646 221554 DEBUG oslo_concurrency.processutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 49127a4f-80f7-4729-a1b3-ef852d68862a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.682 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.684 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.685 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.685 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.726 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.727 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.727 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.732 221554 DEBUG nova.storage.rbd_utils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] resizing rbd image 49127a4f-80f7-4729-a1b3-ef852d68862a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.795 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.796 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.796 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.796 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.796 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.892 221554 DEBUG nova.objects.instance [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'migration_context' on Instance uuid 49127a4f-80f7-4729-a1b3-ef852d68862a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.923 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.923 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Ensure instance console log exists: /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.924 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.924 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:18 np0005603609 nova_compute[221550]: 2026-01-31 08:02:18.924 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:02:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1313823788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.192 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.320 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.321 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4625MB free_disk=20.86736297607422GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.321 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.322 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.387 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Migration for instance e2e6f697-3383-4887-b25a-c89447e67fc2 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.416 221554 INFO nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Updating resource usage from migration 5efdbc6e-b65a-4054-9384-ce6814e3054a#033[00m
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.417 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Starting to track incoming migration 5efdbc6e-b65a-4054-9384-ce6814e3054a with flavor e3bd1dad-95f3-4ed9-94b4-27245cd798b5 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 31 03:02:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:02:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:19.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.477 221554 WARNING nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance e2e6f697-3383-4887-b25a-c89447e67fc2 has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}.#033[00m
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.477 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 49127a4f-80f7-4729-a1b3-ef852d68862a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.477 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.478 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.514 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.541 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.542 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.568 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.613 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.795 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:19.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:19 np0005603609 nova_compute[221550]: 2026-01-31 08:02:19.858 221554 DEBUG nova.network.neutron [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Successfully created port: 6a146b62-d383-4976-b311-188a5e891208 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:02:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:02:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3581130653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:02:20 np0005603609 nova_compute[221550]: 2026-01-31 08:02:20.267 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:20 np0005603609 nova_compute[221550]: 2026-01-31 08:02:20.272 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:02:20 np0005603609 nova_compute[221550]: 2026-01-31 08:02:20.317 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:02:20 np0005603609 nova_compute[221550]: 2026-01-31 08:02:20.397 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:02:20 np0005603609 nova_compute[221550]: 2026-01-31 08:02:20.398 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:20 np0005603609 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 03:02:20 np0005603609 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 03:02:20 np0005603609 systemd-logind[823]: New session 57 of user nova.
Jan 31 03:02:20 np0005603609 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 03:02:20 np0005603609 systemd[1]: Starting User Manager for UID 42436...
Jan 31 03:02:20 np0005603609 systemd[256307]: Queued start job for default target Main User Target.
Jan 31 03:02:20 np0005603609 systemd[256307]: Created slice User Application Slice.
Jan 31 03:02:20 np0005603609 systemd[256307]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:02:20 np0005603609 systemd[256307]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 03:02:20 np0005603609 systemd[256307]: Reached target Paths.
Jan 31 03:02:20 np0005603609 systemd[256307]: Reached target Timers.
Jan 31 03:02:20 np0005603609 systemd[256307]: Starting D-Bus User Message Bus Socket...
Jan 31 03:02:20 np0005603609 systemd[256307]: Starting Create User's Volatile Files and Directories...
Jan 31 03:02:20 np0005603609 systemd[256307]: Listening on D-Bus User Message Bus Socket.
Jan 31 03:02:20 np0005603609 systemd[256307]: Reached target Sockets.
Jan 31 03:02:20 np0005603609 systemd[256307]: Finished Create User's Volatile Files and Directories.
Jan 31 03:02:20 np0005603609 systemd[256307]: Reached target Basic System.
Jan 31 03:02:20 np0005603609 systemd[256307]: Reached target Main User Target.
Jan 31 03:02:20 np0005603609 systemd[256307]: Startup finished in 130ms.
Jan 31 03:02:20 np0005603609 systemd[1]: Started User Manager for UID 42436.
Jan 31 03:02:20 np0005603609 systemd[1]: Started Session 57 of User nova.
Jan 31 03:02:20 np0005603609 systemd-logind[823]: Session 57 logged out. Waiting for processes to exit.
Jan 31 03:02:20 np0005603609 systemd[1]: session-57.scope: Deactivated successfully.
Jan 31 03:02:20 np0005603609 systemd-logind[823]: Removed session 57.
Jan 31 03:02:20 np0005603609 systemd-logind[823]: New session 59 of user nova.
Jan 31 03:02:20 np0005603609 systemd[1]: Started Session 59 of User nova.
Jan 31 03:02:20 np0005603609 systemd[1]: session-59.scope: Deactivated successfully.
Jan 31 03:02:20 np0005603609 systemd-logind[823]: Session 59 logged out. Waiting for processes to exit.
Jan 31 03:02:20 np0005603609 systemd-logind[823]: Removed session 59.
Jan 31 03:02:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:21.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:21 np0005603609 nova_compute[221550]: 2026-01-31 08:02:21.687 221554 DEBUG nova.network.neutron [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Successfully updated port: 6a146b62-d383-4976-b311-188a5e891208 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:02:21 np0005603609 nova_compute[221550]: 2026-01-31 08:02:21.725 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "refresh_cache-49127a4f-80f7-4729-a1b3-ef852d68862a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:02:21 np0005603609 nova_compute[221550]: 2026-01-31 08:02:21.725 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquired lock "refresh_cache-49127a4f-80f7-4729-a1b3-ef852d68862a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:02:21 np0005603609 nova_compute[221550]: 2026-01-31 08:02:21.726 221554 DEBUG nova.network.neutron [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:02:21 np0005603609 nova_compute[221550]: 2026-01-31 08:02:21.853 221554 DEBUG nova.compute.manager [req-eeefed75-3b67-4625-9092-ba41c6aa45c9 req-49e4b2e0-6523-427d-bcd7-7a1245f6d78d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Received event network-changed-6a146b62-d383-4976-b311-188a5e891208 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:02:21 np0005603609 nova_compute[221550]: 2026-01-31 08:02:21.854 221554 DEBUG nova.compute.manager [req-eeefed75-3b67-4625-9092-ba41c6aa45c9 req-49e4b2e0-6523-427d-bcd7-7a1245f6d78d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Refreshing instance network info cache due to event network-changed-6a146b62-d383-4976-b311-188a5e891208. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:02:21 np0005603609 nova_compute[221550]: 2026-01-31 08:02:21.854 221554 DEBUG oslo_concurrency.lockutils [req-eeefed75-3b67-4625-9092-ba41c6aa45c9 req-49e4b2e0-6523-427d-bcd7-7a1245f6d78d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-49127a4f-80f7-4729-a1b3-ef852d68862a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:02:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:21.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:22 np0005603609 nova_compute[221550]: 2026-01-31 08:02:22.093 221554 DEBUG nova.network.neutron [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:02:22 np0005603609 podman[256329]: 2026-01-31 08:02:22.216642829 +0000 UTC m=+0.092550549 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:02:22 np0005603609 podman[256330]: 2026-01-31 08:02:22.222445058 +0000 UTC m=+0.096891693 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 03:02:23 np0005603609 nova_compute[221550]: 2026-01-31 08:02:23.041 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:23 np0005603609 nova_compute[221550]: 2026-01-31 08:02:23.368 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:23 np0005603609 nova_compute[221550]: 2026-01-31 08:02:23.369 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:02:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:23.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:02:23 np0005603609 nova_compute[221550]: 2026-01-31 08:02:23.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:23 np0005603609 nova_compute[221550]: 2026-01-31 08:02:23.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:02:23 np0005603609 nova_compute[221550]: 2026-01-31 08:02:23.679 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:02:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:23.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:02:23 np0005603609 nova_compute[221550]: 2026-01-31 08:02:23.934 221554 DEBUG nova.network.neutron [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Updating instance_info_cache with network_info: [{"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:02:24 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:24Z|00321|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.131 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Releasing lock "refresh_cache-49127a4f-80f7-4729-a1b3-ef852d68862a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.131 221554 DEBUG nova.compute.manager [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Instance network_info: |[{"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.132 221554 DEBUG oslo_concurrency.lockutils [req-eeefed75-3b67-4625-9092-ba41c6aa45c9 req-49e4b2e0-6523-427d-bcd7-7a1245f6d78d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-49127a4f-80f7-4729-a1b3-ef852d68862a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.132 221554 DEBUG nova.network.neutron [req-eeefed75-3b67-4625-9092-ba41c6aa45c9 req-49e4b2e0-6523-427d-bcd7-7a1245f6d78d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Refreshing network info cache for port 6a146b62-d383-4976-b311-188a5e891208 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.137 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Start _get_guest_xml network_info=[{"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.142 221554 WARNING nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.149 221554 DEBUG nova.virt.libvirt.host [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.150 221554 DEBUG nova.virt.libvirt.host [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.154 221554 DEBUG nova.virt.libvirt.host [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.155 221554 DEBUG nova.virt.libvirt.host [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.157 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.157 221554 DEBUG nova.virt.hardware [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.158 221554 DEBUG nova.virt.hardware [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.158 221554 DEBUG nova.virt.hardware [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.159 221554 DEBUG nova.virt.hardware [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.159 221554 DEBUG nova.virt.hardware [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.160 221554 DEBUG nova.virt.hardware [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.160 221554 DEBUG nova.virt.hardware [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.160 221554 DEBUG nova.virt.hardware [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.161 221554 DEBUG nova.virt.hardware [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.161 221554 DEBUG nova.virt.hardware [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.162 221554 DEBUG nova.virt.hardware [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.167 221554 DEBUG oslo_concurrency.processutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.272 221554 DEBUG nova.compute.manager [req-b0a447e9-32c3-46a4-a42e-61a261f4fc5c req-dc586af0-734e-4a56-ad1a-b3d5cb091367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-vif-unplugged-647b42a2-81d3-49dc-9075-d31077bfffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.273 221554 DEBUG oslo_concurrency.lockutils [req-b0a447e9-32c3-46a4-a42e-61a261f4fc5c req-dc586af0-734e-4a56-ad1a-b3d5cb091367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.274 221554 DEBUG oslo_concurrency.lockutils [req-b0a447e9-32c3-46a4-a42e-61a261f4fc5c req-dc586af0-734e-4a56-ad1a-b3d5cb091367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.274 221554 DEBUG oslo_concurrency.lockutils [req-b0a447e9-32c3-46a4-a42e-61a261f4fc5c req-dc586af0-734e-4a56-ad1a-b3d5cb091367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.275 221554 DEBUG nova.compute.manager [req-b0a447e9-32c3-46a4-a42e-61a261f4fc5c req-dc586af0-734e-4a56-ad1a-b3d5cb091367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] No waiting events found dispatching network-vif-unplugged-647b42a2-81d3-49dc-9075-d31077bfffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.276 221554 WARNING nova.compute.manager [req-b0a447e9-32c3-46a4-a42e-61a261f4fc5c req-dc586af0-734e-4a56-ad1a-b3d5cb091367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received unexpected event network-vif-unplugged-647b42a2-81d3-49dc-9075-d31077bfffe8 for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 31 03:02:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:02:24 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1328614510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.635 221554 DEBUG oslo_concurrency.processutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.674 221554 DEBUG nova.storage.rbd_utils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 49127a4f-80f7-4729-a1b3-ef852d68862a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:24 np0005603609 nova_compute[221550]: 2026-01-31 08:02:24.679 221554 DEBUG oslo_concurrency.processutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.120 221554 INFO nova.network.neutron [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Updating port 647b42a2-81d3-49dc-9075-d31077bfffe8 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 31 03:02:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:02:25 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4180076573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.155 221554 DEBUG oslo_concurrency.processutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.157 221554 DEBUG nova.virt.libvirt.vif [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:02:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-944608155',display_name='tempest-tempest.common.compute-instance-944608155',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-944608155',id=94,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-honqqjjn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:02:18Z,user_data=None,user_id='31043e345f6b48b585fb7b8ab7304764',uuid=49127a4f-80f7-4729-a1b3-ef852d68862a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.158 221554 DEBUG nova.network.os_vif_util [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.159 221554 DEBUG nova.network.os_vif_util [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:56:31,bridge_name='br-int',has_traffic_filtering=True,id=6a146b62-d383-4976-b311-188a5e891208,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a146b62-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.162 221554 DEBUG nova.objects.instance [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49127a4f-80f7-4729-a1b3-ef852d68862a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.193 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  <uuid>49127a4f-80f7-4729-a1b3-ef852d68862a</uuid>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  <name>instance-0000005e</name>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <nova:name>tempest-tempest.common.compute-instance-944608155</nova:name>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:02:24</nova:creationTime>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        <nova:user uuid="31043e345f6b48b585fb7b8ab7304764">tempest-ServerActionsTestOtherA-527878807-project-member</nova:user>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        <nova:project uuid="d352316ff6534075952e2d0c28061b09">tempest-ServerActionsTestOtherA-527878807</nova:project>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        <nova:port uuid="6a146b62-d383-4976-b311-188a5e891208">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <entry name="serial">49127a4f-80f7-4729-a1b3-ef852d68862a</entry>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <entry name="uuid">49127a4f-80f7-4729-a1b3-ef852d68862a</entry>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/49127a4f-80f7-4729-a1b3-ef852d68862a_disk">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/49127a4f-80f7-4729-a1b3-ef852d68862a_disk.config">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:5f:56:31"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <target dev="tap6a146b62-d3"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a/console.log" append="off"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:02:25 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:02:25 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:02:25 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:02:25 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.195 221554 DEBUG nova.compute.manager [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Preparing to wait for external event network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.196 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.196 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.197 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.198 221554 DEBUG nova.virt.libvirt.vif [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:02:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-944608155',display_name='tempest-tempest.common.compute-instance-944608155',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-944608155',id=94,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-honqqjjn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:02:18Z,user_data=None,user_id='31043e345f6b48b585fb7b8ab7304764',uuid=49127a4f-80f7-4729-a1b3-ef852d68862a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.198 221554 DEBUG nova.network.os_vif_util [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.199 221554 DEBUG nova.network.os_vif_util [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:56:31,bridge_name='br-int',has_traffic_filtering=True,id=6a146b62-d383-4976-b311-188a5e891208,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a146b62-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.200 221554 DEBUG os_vif [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:56:31,bridge_name='br-int',has_traffic_filtering=True,id=6a146b62-d383-4976-b311-188a5e891208,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a146b62-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.201 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.202 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.202 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.206 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.207 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a146b62-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.208 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6a146b62-d3, col_values=(('external_ids', {'iface-id': '6a146b62-d383-4976-b311-188a5e891208', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:56:31', 'vm-uuid': '49127a4f-80f7-4729-a1b3-ef852d68862a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.211 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:25 np0005603609 NetworkManager[49064]: <info>  [1769846545.2121] manager: (tap6a146b62-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.213 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.217 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.218 221554 INFO os_vif [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:56:31,bridge_name='br-int',has_traffic_filtering=True,id=6a146b62-d383-4976-b311-188a5e891208,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a146b62-d3')#033[00m
Jan 31 03:02:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.281 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.282 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.282 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No VIF found with MAC fa:16:3e:5f:56:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.282 221554 INFO nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Using config drive#033[00m
Jan 31 03:02:25 np0005603609 nova_compute[221550]: 2026-01-31 08:02:25.310 221554 DEBUG nova.storage.rbd_utils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 49127a4f-80f7-4729-a1b3-ef852d68862a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:25.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:25.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.228 221554 INFO nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Creating config drive at /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a/disk.config#033[00m
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.233 221554 DEBUG oslo_concurrency.processutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpf5stadds execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.290 221554 DEBUG oslo_concurrency.lockutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.292 221554 DEBUG oslo_concurrency.lockutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquired lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.292 221554 DEBUG nova.network.neutron [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.357 221554 DEBUG oslo_concurrency.processutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpf5stadds" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.390 221554 DEBUG nova.storage.rbd_utils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 49127a4f-80f7-4729-a1b3-ef852d68862a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.395 221554 DEBUG oslo_concurrency.processutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a/disk.config 49127a4f-80f7-4729-a1b3-ef852d68862a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.565 221554 DEBUG oslo_concurrency.processutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a/disk.config 49127a4f-80f7-4729-a1b3-ef852d68862a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.566 221554 INFO nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Deleting local config drive /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a/disk.config because it was imported into RBD.#033[00m
Jan 31 03:02:26 np0005603609 kernel: tap6a146b62-d3: entered promiscuous mode
Jan 31 03:02:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:26Z|00322|binding|INFO|Claiming lport 6a146b62-d383-4976-b311-188a5e891208 for this chassis.
Jan 31 03:02:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:26Z|00323|binding|INFO|6a146b62-d383-4976-b311-188a5e891208: Claiming fa:16:3e:5f:56:31 10.100.0.7
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.622 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:26 np0005603609 NetworkManager[49064]: <info>  [1769846546.6242] manager: (tap6a146b62-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/158)
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.628 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.630 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.636 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:26 np0005603609 NetworkManager[49064]: <info>  [1769846546.6412] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/159)
Jan 31 03:02:26 np0005603609 NetworkManager[49064]: <info>  [1769846546.6419] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.640 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:26 np0005603609 systemd-udevd[256503]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:02:26 np0005603609 NetworkManager[49064]: <info>  [1769846546.6603] device (tap6a146b62-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:02:26 np0005603609 NetworkManager[49064]: <info>  [1769846546.6608] device (tap6a146b62-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:02:26 np0005603609 systemd-machined[190912]: New machine qemu-42-instance-0000005e.
Jan 31 03:02:26 np0005603609 systemd[1]: Started Virtual Machine qemu-42-instance-0000005e.
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.719 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.728 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.831 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:56:31 10.100.0.7'], port_security=['fa:16:3e:5f:56:31 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '49127a4f-80f7-4729-a1b3-ef852d68862a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd60e8396-567a-4b0f-955f-0777fc5fbc4d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=6a146b62-d383-4976-b311-188a5e891208) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.833 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 6a146b62-d383-4976-b311-188a5e891208 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b bound to our chassis#033[00m
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.836 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79cb2b81-3369-468a-8bf6-7e13d5df334b#033[00m
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.845 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e90972cf-7de3-49a1-9d7d-22fdca93bd3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.847 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79cb2b81-31 in ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.850 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79cb2b81-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.850 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ba6c80-fdd9-455a-9da8-0560a58e102b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:26Z|00324|binding|INFO|Setting lport 6a146b62-d383-4976-b311-188a5e891208 ovn-installed in OVS
Jan 31 03:02:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:26Z|00325|binding|INFO|Setting lport 6a146b62-d383-4976-b311-188a5e891208 up in Southbound
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.852 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a9552a1c-bacc-43e8-9e9b-be1340846287]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.854 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:26 np0005603609 nova_compute[221550]: 2026-01-31 08:02:26.860 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.870 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[beeb907f-65a9-4fbc-a335-8cd4777cb6df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.885 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[99b3712c-bbc5-4c71-b522-df0d7fb67fc1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.904 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7c2c62-4c7a-485a-9046-67c369b86a5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:26 np0005603609 NetworkManager[49064]: <info>  [1769846546.9133] manager: (tap79cb2b81-30): new Veth device (/org/freedesktop/NetworkManager/Devices/161)
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.912 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[aaae5084-7d28-493a-b963-fc86bf3c8792]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.939 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[e8ba4623-de0c-4ddb-b64a-c6cce11c35e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.942 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[19ec1fbe-a501-40fe-a76c-f402038e11a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:26 np0005603609 NetworkManager[49064]: <info>  [1769846546.9605] device (tap79cb2b81-30): carrier: link connected
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.967 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[4c6695ef-5012-40b6-b000-05551624c551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.983 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b3e7a1-6c55-4828-b03c-a2b1b2c3e543]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678666, 'reachable_time': 39222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256539, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:26.998 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[497eac70-3cf3-4aee-b828-f22bdbfe1e27]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:12e3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678666, 'tstamp': 678666}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256540, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:27.014 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e1774fda-c012-4f1f-972a-310dc135040d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678666, 'reachable_time': 39222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256541, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:27.045 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb071b8-076c-468a-a7b2-59d2bf5b1c01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:27.103 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[378f4f5d-604e-49e1-baf2-2120b2c2a5e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:27.105 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:27.105 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:27.106 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79cb2b81-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.108 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:27 np0005603609 NetworkManager[49064]: <info>  [1769846547.1097] manager: (tap79cb2b81-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Jan 31 03:02:27 np0005603609 kernel: tap79cb2b81-30: entered promiscuous mode
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:27.116 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79cb2b81-30, col_values=(('external_ids', {'iface-id': '9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.117 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:27 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:27Z|00326|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=0)
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:27.121 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.123 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:27.122 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e70753c6-ffab-4b45-bfc1-e8eac1d1b1b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:27.125 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 03:02:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:27.128 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'env', 'PROCESS_TAG=haproxy-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79cb2b81-3369-468a-8bf6-7e13d5df334b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.339 221554 DEBUG nova.compute.manager [req-44b854e0-98b7-4811-9a2b-00caa540bf72 req-8993468c-3fbe-4653-93ab-35898389b64c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-changed-647b42a2-81d3-49dc-9075-d31077bfffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.340 221554 DEBUG nova.compute.manager [req-44b854e0-98b7-4811-9a2b-00caa540bf72 req-8993468c-3fbe-4653-93ab-35898389b64c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Refreshing instance network info cache due to event network-changed-647b42a2-81d3-49dc-9075-d31077bfffe8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.340 221554 DEBUG oslo_concurrency.lockutils [req-44b854e0-98b7-4811-9a2b-00caa540bf72 req-8993468c-3fbe-4653-93ab-35898389b64c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.360 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846547.3592732, 49127a4f-80f7-4729-a1b3-ef852d68862a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.360 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] VM Started (Lifecycle Event)
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.386 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.390 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846547.3595507, 49127a4f-80f7-4729-a1b3-ef852d68862a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.390 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] VM Paused (Lifecycle Event)
Jan 31 03:02:27 np0005603609 podman[256615]: 2026-01-31 08:02:27.470529221 +0000 UTC m=+0.038780484 container create 987232caf5d74a8070287452700ae3369093d39805b1d225ee3ba47f3165dfa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 03:02:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.004000094s ======
Jan 31 03:02:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:27.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000094s
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.476 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.480 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:02:27 np0005603609 systemd[1]: Started libpod-conmon-987232caf5d74a8070287452700ae3369093d39805b1d225ee3ba47f3165dfa8.scope.
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.507 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:02:27 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:02:27 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69db5d2e6c5e3b0cb66272631634d7c4db29aec0abb03281d8ecced4f6465490/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:27 np0005603609 podman[256615]: 2026-01-31 08:02:27.449055444 +0000 UTC m=+0.017306727 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:02:27 np0005603609 podman[256615]: 2026-01-31 08:02:27.549739498 +0000 UTC m=+0.117990841 container init 987232caf5d74a8070287452700ae3369093d39805b1d225ee3ba47f3165dfa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:02:27 np0005603609 podman[256615]: 2026-01-31 08:02:27.556615463 +0000 UTC m=+0.124866766 container start 987232caf5d74a8070287452700ae3369093d39805b1d225ee3ba47f3165dfa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:02:27 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[256631]: [NOTICE]   (256635) : New worker (256637) forked
Jan 31 03:02:27 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[256631]: [NOTICE]   (256635) : Loading success.
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.629 221554 DEBUG nova.compute.manager [req-f6568a7c-fe3f-425b-847a-2f89831f18c3 req-7031cae9-2520-49c0-b740-0b5b11035950 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.630 221554 DEBUG oslo_concurrency.lockutils [req-f6568a7c-fe3f-425b-847a-2f89831f18c3 req-7031cae9-2520-49c0-b740-0b5b11035950 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.631 221554 DEBUG oslo_concurrency.lockutils [req-f6568a7c-fe3f-425b-847a-2f89831f18c3 req-7031cae9-2520-49c0-b740-0b5b11035950 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.631 221554 DEBUG oslo_concurrency.lockutils [req-f6568a7c-fe3f-425b-847a-2f89831f18c3 req-7031cae9-2520-49c0-b740-0b5b11035950 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.632 221554 DEBUG nova.compute.manager [req-f6568a7c-fe3f-425b-847a-2f89831f18c3 req-7031cae9-2520-49c0-b740-0b5b11035950 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] No waiting events found dispatching network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:02:27 np0005603609 nova_compute[221550]: 2026-01-31 08:02:27.632 221554 WARNING nova.compute.manager [req-f6568a7c-fe3f-425b-847a-2f89831f18c3 req-7031cae9-2520-49c0-b740-0b5b11035950 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received unexpected event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 for instance with vm_state active and task_state resize_migrated.
Jan 31 03:02:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:02:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:27.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:02:28 np0005603609 nova_compute[221550]: 2026-01-31 08:02:28.022 221554 DEBUG nova.network.neutron [req-eeefed75-3b67-4625-9092-ba41c6aa45c9 req-49e4b2e0-6523-427d-bcd7-7a1245f6d78d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Updated VIF entry in instance network info cache for port 6a146b62-d383-4976-b311-188a5e891208. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 03:02:28 np0005603609 nova_compute[221550]: 2026-01-31 08:02:28.023 221554 DEBUG nova.network.neutron [req-eeefed75-3b67-4625-9092-ba41c6aa45c9 req-49e4b2e0-6523-427d-bcd7-7a1245f6d78d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Updating instance_info_cache with network_info: [{"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:02:28 np0005603609 nova_compute[221550]: 2026-01-31 08:02:28.053 221554 DEBUG oslo_concurrency.lockutils [req-eeefed75-3b67-4625-9092-ba41c6aa45c9 req-49e4b2e0-6523-427d-bcd7-7a1245f6d78d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-49127a4f-80f7-4729-a1b3-ef852d68862a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:02:28 np0005603609 nova_compute[221550]: 2026-01-31 08:02:28.680 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:02:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:29.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:29 np0005603609 nova_compute[221550]: 2026-01-31 08:02:29.761 221554 DEBUG nova.network.neutron [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Updating instance_info_cache with network_info: [{"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:02:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:29.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.179 221554 DEBUG oslo_concurrency.lockutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Releasing lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.183 221554 DEBUG oslo_concurrency.lockutils [req-44b854e0-98b7-4811-9a2b-00caa540bf72 req-8993468c-3fbe-4653-93ab-35898389b64c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.183 221554 DEBUG nova.network.neutron [req-44b854e0-98b7-4811-9a2b-00caa540bf72 req-8993468c-3fbe-4653-93ab-35898389b64c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Refreshing network info cache for port 647b42a2-81d3-49dc-9075-d31077bfffe8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.211 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:02:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.247 221554 DEBUG nova.compute.manager [req-1e5dfb0b-edd6-4f92-8884-57bff9db3718 req-ca85ec6e-a518-43b6-aae9-3bf4f2402598 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Received event network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.247 221554 DEBUG oslo_concurrency.lockutils [req-1e5dfb0b-edd6-4f92-8884-57bff9db3718 req-ca85ec6e-a518-43b6-aae9-3bf4f2402598 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.247 221554 DEBUG oslo_concurrency.lockutils [req-1e5dfb0b-edd6-4f92-8884-57bff9db3718 req-ca85ec6e-a518-43b6-aae9-3bf4f2402598 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.248 221554 DEBUG oslo_concurrency.lockutils [req-1e5dfb0b-edd6-4f92-8884-57bff9db3718 req-ca85ec6e-a518-43b6-aae9-3bf4f2402598 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.248 221554 DEBUG nova.compute.manager [req-1e5dfb0b-edd6-4f92-8884-57bff9db3718 req-ca85ec6e-a518-43b6-aae9-3bf4f2402598 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Processing event network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.248 221554 DEBUG nova.compute.manager [req-1e5dfb0b-edd6-4f92-8884-57bff9db3718 req-ca85ec6e-a518-43b6-aae9-3bf4f2402598 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Received event network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.248 221554 DEBUG oslo_concurrency.lockutils [req-1e5dfb0b-edd6-4f92-8884-57bff9db3718 req-ca85ec6e-a518-43b6-aae9-3bf4f2402598 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.249 221554 DEBUG oslo_concurrency.lockutils [req-1e5dfb0b-edd6-4f92-8884-57bff9db3718 req-ca85ec6e-a518-43b6-aae9-3bf4f2402598 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.249 221554 DEBUG oslo_concurrency.lockutils [req-1e5dfb0b-edd6-4f92-8884-57bff9db3718 req-ca85ec6e-a518-43b6-aae9-3bf4f2402598 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.249 221554 DEBUG nova.compute.manager [req-1e5dfb0b-edd6-4f92-8884-57bff9db3718 req-ca85ec6e-a518-43b6-aae9-3bf4f2402598 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] No waiting events found dispatching network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.249 221554 WARNING nova.compute.manager [req-1e5dfb0b-edd6-4f92-8884-57bff9db3718 req-ca85ec6e-a518-43b6-aae9-3bf4f2402598 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Received unexpected event network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 for instance with vm_state building and task_state spawning.
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.250 221554 DEBUG nova.compute.manager [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.253 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846550.253128, 49127a4f-80f7-4729-a1b3-ef852d68862a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.253 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] VM Resumed (Lifecycle Event)
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.256 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.259 221554 INFO nova.virt.libvirt.driver [-] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Instance spawned successfully.
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.260 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.407 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.411 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.504 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.504 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.505 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.505 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.506 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.506 221554 DEBUG nova.virt.libvirt.driver [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.596 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.633 221554 DEBUG nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.635 221554 DEBUG nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.636 221554 INFO nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Creating image(s)
Jan 31 03:02:30 np0005603609 nova_compute[221550]: 2026-01-31 08:02:30.677 221554 DEBUG nova.storage.rbd_utils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] creating snapshot(nova-resize) on rbd image(e2e6f697-3383-4887-b25a-c89447e67fc2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 03:02:31 np0005603609 nova_compute[221550]: 2026-01-31 08:02:31.074 221554 INFO nova.compute.manager [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Took 12.92 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:02:31 np0005603609 nova_compute[221550]: 2026-01-31 08:02:31.075 221554 DEBUG nova.compute.manager [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:02:31 np0005603609 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 03:02:31 np0005603609 systemd[256307]: Activating special unit Exit the Session...
Jan 31 03:02:31 np0005603609 systemd[256307]: Stopped target Main User Target.
Jan 31 03:02:31 np0005603609 systemd[256307]: Stopped target Basic System.
Jan 31 03:02:31 np0005603609 systemd[256307]: Stopped target Paths.
Jan 31 03:02:31 np0005603609 systemd[256307]: Stopped target Sockets.
Jan 31 03:02:31 np0005603609 systemd[256307]: Stopped target Timers.
Jan 31 03:02:31 np0005603609 systemd[256307]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:02:31 np0005603609 systemd[256307]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 03:02:31 np0005603609 systemd[256307]: Closed D-Bus User Message Bus Socket.
Jan 31 03:02:31 np0005603609 systemd[256307]: Stopped Create User's Volatile Files and Directories.
Jan 31 03:02:31 np0005603609 systemd[256307]: Removed slice User Application Slice.
Jan 31 03:02:31 np0005603609 systemd[256307]: Reached target Shutdown.
Jan 31 03:02:31 np0005603609 systemd[256307]: Finished Exit the Session.
Jan 31 03:02:31 np0005603609 systemd[256307]: Reached target Exit the Session.
Jan 31 03:02:31 np0005603609 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 03:02:31 np0005603609 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 03:02:31 np0005603609 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 03:02:31 np0005603609 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 03:02:31 np0005603609 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 03:02:31 np0005603609 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 03:02:31 np0005603609 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 03:02:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e251 e251: 3 total, 3 up, 3 in
Jan 31 03:02:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:31.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:31 np0005603609 nova_compute[221550]: 2026-01-31 08:02:31.631 221554 DEBUG nova.objects.instance [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e2e6f697-3383-4887-b25a-c89447e67fc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:31.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:31 np0005603609 nova_compute[221550]: 2026-01-31 08:02:31.994 221554 INFO nova.compute.manager [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Took 14.96 seconds to build instance.#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.256 221554 DEBUG oslo_concurrency.lockutils [None req-65f3d71e-fb7b-42f4-ae51-85aeb1488e5c 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.292 221554 DEBUG nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.292 221554 DEBUG nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Ensure instance console log exists: /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.293 221554 DEBUG oslo_concurrency.lockutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.293 221554 DEBUG oslo_concurrency.lockutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.293 221554 DEBUG oslo_concurrency.lockutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.297 221554 DEBUG nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Start _get_guest_xml network_info=[{"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "vif_mac": "fa:16:3e:14:eb:48"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.301 221554 WARNING nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.336 221554 DEBUG nova.virt.libvirt.host [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.336 221554 DEBUG nova.virt.libvirt.host [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.475 221554 DEBUG nova.virt.libvirt.host [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.476 221554 DEBUG nova.virt.libvirt.host [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.477 221554 DEBUG nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.477 221554 DEBUG nova.virt.hardware [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e3bd1dad-95f3-4ed9-94b4-27245cd798b5',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.477 221554 DEBUG nova.virt.hardware [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.477 221554 DEBUG nova.virt.hardware [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.477 221554 DEBUG nova.virt.hardware [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.478 221554 DEBUG nova.virt.hardware [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.478 221554 DEBUG nova.virt.hardware [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.478 221554 DEBUG nova.virt.hardware [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.478 221554 DEBUG nova.virt.hardware [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.478 221554 DEBUG nova.virt.hardware [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.479 221554 DEBUG nova.virt.hardware [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.479 221554 DEBUG nova.virt.hardware [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.479 221554 DEBUG nova.objects.instance [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e2e6f697-3383-4887-b25a-c89447e67fc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.585 221554 DEBUG oslo_concurrency.processutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.736 221554 DEBUG nova.network.neutron [req-44b854e0-98b7-4811-9a2b-00caa540bf72 req-8993468c-3fbe-4653-93ab-35898389b64c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Updated VIF entry in instance network info cache for port 647b42a2-81d3-49dc-9075-d31077bfffe8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.737 221554 DEBUG nova.network.neutron [req-44b854e0-98b7-4811-9a2b-00caa540bf72 req-8993468c-3fbe-4653-93ab-35898389b64c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Updating instance_info_cache with network_info: [{"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:02:32 np0005603609 nova_compute[221550]: 2026-01-31 08:02:32.873 221554 DEBUG oslo_concurrency.lockutils [req-44b854e0-98b7-4811-9a2b-00caa540bf72 req-8993468c-3fbe-4653-93ab-35898389b64c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e2e6f697-3383-4887-b25a-c89447e67fc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:02:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:02:32 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3417555742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.013 221554 DEBUG oslo_concurrency.processutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.047 221554 DEBUG oslo_concurrency.processutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:33Z|00327|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=0)
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.172 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:02:33 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/520128495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:02:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:33.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.485 221554 DEBUG oslo_concurrency.processutils [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.487 221554 DEBUG nova.virt.libvirt.vif [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-810198552',display_name='tempest-ServerDiskConfigTestJSON-server-810198552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-810198552',id=93,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:02:02Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-1w400nzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:02:24Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=e2e6f697-3383-4887-b25a-c89447e67fc2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "vif_mac": "fa:16:3e:14:eb:48"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.487 221554 DEBUG nova.network.os_vif_util [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "vif_mac": "fa:16:3e:14:eb:48"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.488 221554 DEBUG nova.network.os_vif_util [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.493 221554 DEBUG nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  <uuid>e2e6f697-3383-4887-b25a-c89447e67fc2</uuid>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  <name>instance-0000005d</name>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  <memory>196608</memory>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-810198552</nova:name>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:02:32</nova:creationTime>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.micro">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        <nova:memory>192</nova:memory>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        <nova:user uuid="63e95edea0164ae2a9820dc10467335d">tempest-ServerDiskConfigTestJSON-984925022-project-member</nova:user>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        <nova:project uuid="be74d11d2f5a4d9aae2dbe32c31ad9c3">tempest-ServerDiskConfigTestJSON-984925022</nova:project>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        <nova:port uuid="647b42a2-81d3-49dc-9075-d31077bfffe8">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <entry name="serial">e2e6f697-3383-4887-b25a-c89447e67fc2</entry>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <entry name="uuid">e2e6f697-3383-4887-b25a-c89447e67fc2</entry>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/e2e6f697-3383-4887-b25a-c89447e67fc2_disk">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/e2e6f697-3383-4887-b25a-c89447e67fc2_disk.config">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:14:eb:48"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <target dev="tap647b42a2-81"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2/console.log" append="off"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:02:33 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:02:33 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:02:33 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:02:33 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.496 221554 DEBUG nova.virt.libvirt.vif [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-810198552',display_name='tempest-ServerDiskConfigTestJSON-server-810198552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-810198552',id=93,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:02:02Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-1w400nzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',i
mage_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:02:24Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=e2e6f697-3383-4887-b25a-c89447e67fc2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "vif_mac": "fa:16:3e:14:eb:48"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.497 221554 DEBUG nova.network.os_vif_util [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "vif_mac": "fa:16:3e:14:eb:48"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.498 221554 DEBUG nova.network.os_vif_util [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.498 221554 DEBUG os_vif [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.499 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.500 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.501 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.503 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.504 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap647b42a2-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.504 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap647b42a2-81, col_values=(('external_ids', {'iface-id': '647b42a2-81d3-49dc-9075-d31077bfffe8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:eb:48', 'vm-uuid': 'e2e6f697-3383-4887-b25a-c89447e67fc2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.505 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:33 np0005603609 NetworkManager[49064]: <info>  [1769846553.5061] manager: (tap647b42a2-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/163)
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.508 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.511 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.512 221554 INFO os_vif [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81')#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.623 221554 DEBUG nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.623 221554 DEBUG nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.624 221554 DEBUG nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No VIF found with MAC fa:16:3e:14:eb:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.625 221554 INFO nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Using config drive#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.682 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:33 np0005603609 kernel: tap647b42a2-81: entered promiscuous mode
Jan 31 03:02:33 np0005603609 NetworkManager[49064]: <info>  [1769846553.7130] manager: (tap647b42a2-81): new Tun device (/org/freedesktop/NetworkManager/Devices/164)
Jan 31 03:02:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:33Z|00328|binding|INFO|Claiming lport 647b42a2-81d3-49dc-9075-d31077bfffe8 for this chassis.
Jan 31 03:02:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:33Z|00329|binding|INFO|647b42a2-81d3-49dc-9075-d31077bfffe8: Claiming fa:16:3e:14:eb:48 10.100.0.3
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.719 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:33Z|00330|binding|INFO|Setting lport 647b42a2-81d3-49dc-9075-d31077bfffe8 ovn-installed in OVS
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.731 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:33 np0005603609 nova_compute[221550]: 2026-01-31 08:02:33.735 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:33 np0005603609 systemd-machined[190912]: New machine qemu-43-instance-0000005d.
Jan 31 03:02:33 np0005603609 systemd[1]: Started Virtual Machine qemu-43-instance-0000005d.
Jan 31 03:02:33 np0005603609 systemd-udevd[256817]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:02:33 np0005603609 NetworkManager[49064]: <info>  [1769846553.7751] device (tap647b42a2-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:02:33 np0005603609 NetworkManager[49064]: <info>  [1769846553.7757] device (tap647b42a2-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:02:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:33Z|00331|binding|INFO|Setting lport 647b42a2-81d3-49dc-9075-d31077bfffe8 up in Southbound
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.796 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:eb:48 10.100.0.3'], port_security=['fa:16:3e:14:eb:48 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e2e6f697-3383-4887-b25a-c89447e67fc2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=647b42a2-81d3-49dc-9075-d31077bfffe8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.797 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 647b42a2-81d3-49dc-9075-d31077bfffe8 in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 bound to our chassis#033[00m
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.798 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 121329c8-2359-4e9d-9f2b-4932f8740470#033[00m
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.813 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[945f680d-f999-48eb-b75b-01b0d428b140]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.814 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap121329c8-21 in ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.815 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap121329c8-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.815 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bc11e1c0-6ce7-4d9d-b42a-521f5d05c946]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.816 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[97f1babb-6333-474a-8db0-d8729875c225]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.827 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[af040c2a-0820-45a3-9ba5-de964eef61e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.846 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ea47f15c-b9ca-4664-889f-2cce30e4f7a1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.870 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[46d9aac0-b2d3-40b0-aae1-68ad2ac9492d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:33 np0005603609 NetworkManager[49064]: <info>  [1769846553.8755] manager: (tap121329c8-20): new Veth device (/org/freedesktop/NetworkManager/Devices/165)
Jan 31 03:02:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:33.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:33 np0005603609 systemd-udevd[256819]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.874 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f01f2205-5528-40e7-8aa7-cca9b0069b11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.908 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab9911e-3605-4901-b054-3f6a8c987e49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.911 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[81561d3e-d16d-452b-861a-cc9a44274c08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:33 np0005603609 NetworkManager[49064]: <info>  [1769846553.9295] device (tap121329c8-20): carrier: link connected
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.933 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[59e19bdd-0a5f-4491-a643-256d9f2dbc91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.946 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[65796c0c-c037-4845-b5cf-2060a91636ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679363, 'reachable_time': 25150, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256850, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.957 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1c765659-fa72-41af-9a78-89a7b842e210]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:a3c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679363, 'tstamp': 679363}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256851, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.970 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9091471c-cf8c-475d-a1db-2f24e3a23c83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679363, 'reachable_time': 25150, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256852, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:33.996 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[37d396ea-db78-4a78-8405-abda8af27588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:34.037 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e8783b58-a06d-4a7c-8e4b-5f6c2decfcdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:34.039 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:34.039 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:34.040 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap121329c8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.072 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:34 np0005603609 kernel: tap121329c8-20: entered promiscuous mode
Jan 31 03:02:34 np0005603609 NetworkManager[49064]: <info>  [1769846554.0784] manager: (tap121329c8-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.079 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:34.079 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap121329c8-20, col_values=(('external_ids', {'iface-id': 'e59d8348-5c5c-4c82-ba21-91f3a512c65e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:34 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:34Z|00332|binding|INFO|Releasing lport e59d8348-5c5c-4c82-ba21-91f3a512c65e from this chassis (sb_readonly=0)
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.081 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.085 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.086 221554 DEBUG oslo_concurrency.lockutils [None req-501030fb-bd4f-44df-a20b-97f740d46482 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "49127a4f-80f7-4729-a1b3-ef852d68862a" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.087 221554 DEBUG oslo_concurrency.lockutils [None req-501030fb-bd4f-44df-a20b-97f740d46482 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.087 221554 DEBUG nova.compute.manager [None req-501030fb-bd4f-44df-a20b-97f740d46482 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:34.085 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:34.085 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3774b591-2689-4cfd-817c-9ca30cec2734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:34.086 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:02:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:34.086 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'env', 'PROCESS_TAG=haproxy-121329c8-2359-4e9d-9f2b-4932f8740470', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/121329c8-2359-4e9d-9f2b-4932f8740470.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.092 221554 DEBUG nova.compute.manager [None req-501030fb-bd4f-44df-a20b-97f740d46482 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.093 221554 DEBUG nova.objects.instance [None req-501030fb-bd4f-44df-a20b-97f740d46482 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'flavor' on Instance uuid 49127a4f-80f7-4729-a1b3-ef852d68862a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.141 221554 DEBUG nova.virt.libvirt.driver [None req-501030fb-bd4f-44df-a20b-97f740d46482 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.227 221554 DEBUG nova.compute.manager [req-33133716-b477-4879-a77e-7000d1d9de36 req-7b9647c1-d198-4c61-a72f-ba64267c0d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.228 221554 DEBUG oslo_concurrency.lockutils [req-33133716-b477-4879-a77e-7000d1d9de36 req-7b9647c1-d198-4c61-a72f-ba64267c0d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.228 221554 DEBUG oslo_concurrency.lockutils [req-33133716-b477-4879-a77e-7000d1d9de36 req-7b9647c1-d198-4c61-a72f-ba64267c0d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.228 221554 DEBUG oslo_concurrency.lockutils [req-33133716-b477-4879-a77e-7000d1d9de36 req-7b9647c1-d198-4c61-a72f-ba64267c0d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.229 221554 DEBUG nova.compute.manager [req-33133716-b477-4879-a77e-7000d1d9de36 req-7b9647c1-d198-4c61-a72f-ba64267c0d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] No waiting events found dispatching network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.229 221554 WARNING nova.compute.manager [req-33133716-b477-4879-a77e-7000d1d9de36 req-7b9647c1-d198-4c61-a72f-ba64267c0d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received unexpected event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 for instance with vm_state active and task_state resize_finish.#033[00m
Jan 31 03:02:34 np0005603609 podman[256882]: 2026-01-31 08:02:34.457820044 +0000 UTC m=+0.044556314 container create b99c6aaebc04b4ca5e11b539fe306af5c216e6458857caf85538fe8e3c769977 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:02:34 np0005603609 systemd[1]: Started libpod-conmon-b99c6aaebc04b4ca5e11b539fe306af5c216e6458857caf85538fe8e3c769977.scope.
Jan 31 03:02:34 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:02:34 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54254a6d410423a6a09480894c73bd85b9ece7ed03bd7c859ddaeb4cf8cf9ede/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:34 np0005603609 podman[256882]: 2026-01-31 08:02:34.529192692 +0000 UTC m=+0.115928962 container init b99c6aaebc04b4ca5e11b539fe306af5c216e6458857caf85538fe8e3c769977 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 03:02:34 np0005603609 podman[256882]: 2026-01-31 08:02:34.435332722 +0000 UTC m=+0.022069022 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:02:34 np0005603609 podman[256882]: 2026-01-31 08:02:34.534854658 +0000 UTC m=+0.121590938 container start b99c6aaebc04b4ca5e11b539fe306af5c216e6458857caf85538fe8e3c769977 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:02:34 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[256896]: [NOTICE]   (256918) : New worker (256920) forked
Jan 31 03:02:34 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[256896]: [NOTICE]   (256918) : Loading success.
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.701 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846554.700799, e2e6f697-3383-4887-b25a-c89447e67fc2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.702 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.706 221554 DEBUG nova.compute.manager [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.710 221554 INFO nova.virt.libvirt.driver [-] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Instance running successfully.#033[00m
Jan 31 03:02:34 np0005603609 virtqemud[221292]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.713 221554 DEBUG nova.virt.libvirt.guest [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.714 221554 DEBUG nova.virt.libvirt.driver [None req-7cc103b1-faf8-4997-8363-cabc91a59524 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.756 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.767 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.884 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.885 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846554.7019804, e2e6f697-3383-4887-b25a-c89447e67fc2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.885 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] VM Started (Lifecycle Event)#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.918 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:02:34 np0005603609 nova_compute[221550]: 2026-01-31 08:02:34.921 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:02:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:02:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:35.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:02:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:35.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:36 np0005603609 nova_compute[221550]: 2026-01-31 08:02:36.396 221554 DEBUG nova.compute.manager [req-9d805d70-3b50-4f78-9891-2235ff256925 req-878791f6-e24a-4ad6-926b-181df508ae9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:02:36 np0005603609 nova_compute[221550]: 2026-01-31 08:02:36.396 221554 DEBUG oslo_concurrency.lockutils [req-9d805d70-3b50-4f78-9891-2235ff256925 req-878791f6-e24a-4ad6-926b-181df508ae9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:36 np0005603609 nova_compute[221550]: 2026-01-31 08:02:36.397 221554 DEBUG oslo_concurrency.lockutils [req-9d805d70-3b50-4f78-9891-2235ff256925 req-878791f6-e24a-4ad6-926b-181df508ae9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:36 np0005603609 nova_compute[221550]: 2026-01-31 08:02:36.397 221554 DEBUG oslo_concurrency.lockutils [req-9d805d70-3b50-4f78-9891-2235ff256925 req-878791f6-e24a-4ad6-926b-181df508ae9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:36 np0005603609 nova_compute[221550]: 2026-01-31 08:02:36.397 221554 DEBUG nova.compute.manager [req-9d805d70-3b50-4f78-9891-2235ff256925 req-878791f6-e24a-4ad6-926b-181df508ae9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] No waiting events found dispatching network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:02:36 np0005603609 nova_compute[221550]: 2026-01-31 08:02:36.397 221554 WARNING nova.compute.manager [req-9d805d70-3b50-4f78-9891-2235ff256925 req-878791f6-e24a-4ad6-926b-181df508ae9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received unexpected event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 for instance with vm_state resized and task_state None.#033[00m
Jan 31 03:02:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:37.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:37.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:38 np0005603609 nova_compute[221550]: 2026-01-31 08:02:38.507 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:38 np0005603609 nova_compute[221550]: 2026-01-31 08:02:38.726 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:02:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:39.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:02:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:39.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:02:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:41.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:02:41 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:41Z|00333|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=0)
Jan 31 03:02:41 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:41Z|00334|binding|INFO|Releasing lport e59d8348-5c5c-4c82-ba21-91f3a512c65e from this chassis (sb_readonly=0)
Jan 31 03:02:41 np0005603609 nova_compute[221550]: 2026-01-31 08:02:41.579 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:41.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:43.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:43 np0005603609 nova_compute[221550]: 2026-01-31 08:02:43.510 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:43 np0005603609 nova_compute[221550]: 2026-01-31 08:02:43.729 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:02:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:43.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:02:44 np0005603609 nova_compute[221550]: 2026-01-31 08:02:44.183 221554 DEBUG nova.virt.libvirt.driver [None req-501030fb-bd4f-44df-a20b-97f740d46482 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:02:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:02:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3863939065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:02:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:02:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3863939065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:02:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e252 e252: 3 total, 3 up, 3 in
Jan 31 03:02:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:45.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:45.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:46 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:46Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:56:31 10.100.0.7
Jan 31 03:02:46 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:46Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:56:31 10.100.0.7
Jan 31 03:02:47 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:47Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:14:eb:48 10.100.0.3
Jan 31 03:02:47 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:47Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:eb:48 10.100.0.3
Jan 31 03:02:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:02:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:47.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:02:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:02:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:47.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:02:48 np0005603609 nova_compute[221550]: 2026-01-31 08:02:48.513 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:48 np0005603609 nova_compute[221550]: 2026-01-31 08:02:48.730 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:49.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e253 e253: 3 total, 3 up, 3 in
Jan 31 03:02:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:49.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:50 np0005603609 kernel: tap6a146b62-d3 (unregistering): left promiscuous mode
Jan 31 03:02:50 np0005603609 NetworkManager[49064]: <info>  [1769846570.2814] device (tap6a146b62-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:02:50 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:50Z|00335|binding|INFO|Releasing lport 6a146b62-d383-4976-b311-188a5e891208 from this chassis (sb_readonly=0)
Jan 31 03:02:50 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:50Z|00336|binding|INFO|Setting lport 6a146b62-d383-4976-b311-188a5e891208 down in Southbound
Jan 31 03:02:50 np0005603609 nova_compute[221550]: 2026-01-31 08:02:50.284 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:50 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:50Z|00337|binding|INFO|Removing iface tap6a146b62-d3 ovn-installed in OVS
Jan 31 03:02:50 np0005603609 nova_compute[221550]: 2026-01-31 08:02:50.286 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:50 np0005603609 nova_compute[221550]: 2026-01-31 08:02:50.291 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:50.314 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:56:31 10.100.0.7'], port_security=['fa:16:3e:5f:56:31 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '49127a4f-80f7-4729-a1b3-ef852d68862a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd60e8396-567a-4b0f-955f-0777fc5fbc4d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=6a146b62-d383-4976-b311-188a5e891208) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:02:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:50.316 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 6a146b62-d383-4976-b311-188a5e891208 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b unbound from our chassis#033[00m
Jan 31 03:02:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:50.317 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79cb2b81-3369-468a-8bf6-7e13d5df334b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:02:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:50.318 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb922da-0ed8-4195-9c7f-69854bce97ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:50.318 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b namespace which is not needed anymore#033[00m
Jan 31 03:02:50 np0005603609 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Jan 31 03:02:50 np0005603609 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d0000005e.scope: Consumed 14.252s CPU time.
Jan 31 03:02:50 np0005603609 systemd-machined[190912]: Machine qemu-42-instance-0000005e terminated.
Jan 31 03:02:50 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[256631]: [NOTICE]   (256635) : haproxy version is 2.8.14-c23fe91
Jan 31 03:02:50 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[256631]: [NOTICE]   (256635) : path to executable is /usr/sbin/haproxy
Jan 31 03:02:50 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[256631]: [WARNING]  (256635) : Exiting Master process...
Jan 31 03:02:50 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[256631]: [ALERT]    (256635) : Current worker (256637) exited with code 143 (Terminated)
Jan 31 03:02:50 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[256631]: [WARNING]  (256635) : All workers exited. Exiting... (0)
Jan 31 03:02:50 np0005603609 systemd[1]: libpod-987232caf5d74a8070287452700ae3369093d39805b1d225ee3ba47f3165dfa8.scope: Deactivated successfully.
Jan 31 03:02:50 np0005603609 podman[256977]: 2026-01-31 08:02:50.431782842 +0000 UTC m=+0.048884848 container died 987232caf5d74a8070287452700ae3369093d39805b1d225ee3ba47f3165dfa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:02:50 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-987232caf5d74a8070287452700ae3369093d39805b1d225ee3ba47f3165dfa8-userdata-shm.mount: Deactivated successfully.
Jan 31 03:02:50 np0005603609 systemd[1]: var-lib-containers-storage-overlay-69db5d2e6c5e3b0cb66272631634d7c4db29aec0abb03281d8ecced4f6465490-merged.mount: Deactivated successfully.
Jan 31 03:02:50 np0005603609 podman[256977]: 2026-01-31 08:02:50.46622336 +0000 UTC m=+0.083325366 container cleanup 987232caf5d74a8070287452700ae3369093d39805b1d225ee3ba47f3165dfa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:02:50 np0005603609 systemd[1]: libpod-conmon-987232caf5d74a8070287452700ae3369093d39805b1d225ee3ba47f3165dfa8.scope: Deactivated successfully.
Jan 31 03:02:50 np0005603609 podman[257006]: 2026-01-31 08:02:50.536312158 +0000 UTC m=+0.048321374 container remove 987232caf5d74a8070287452700ae3369093d39805b1d225ee3ba47f3165dfa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:02:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:50.543 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b80fd549-19f5-4026-95f4-b3e55a3865b2]: (4, ('Sat Jan 31 08:02:50 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b (987232caf5d74a8070287452700ae3369093d39805b1d225ee3ba47f3165dfa8)\n987232caf5d74a8070287452700ae3369093d39805b1d225ee3ba47f3165dfa8\nSat Jan 31 08:02:50 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b (987232caf5d74a8070287452700ae3369093d39805b1d225ee3ba47f3165dfa8)\n987232caf5d74a8070287452700ae3369093d39805b1d225ee3ba47f3165dfa8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:50.545 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1f903955-96d8-4ffd-968d-a808e3f55be0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:50.546 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:50 np0005603609 nova_compute[221550]: 2026-01-31 08:02:50.547 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:50 np0005603609 kernel: tap79cb2b81-30: left promiscuous mode
Jan 31 03:02:50 np0005603609 nova_compute[221550]: 2026-01-31 08:02:50.558 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:50.561 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d21fae64-35e4-45f1-b310-ea68fced8b4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:50.578 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[826356c2-c4be-42ef-86a4-d172dc89896b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:50.580 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2c3376ff-e5e5-4b4d-ac6e-1e48b8b3ff2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:50.592 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3b42d8-0196-4021-b8da-84db5c97d287]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678660, 'reachable_time': 29850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257034, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:50.594 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:02:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:50.594 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2afd60-0560-4c81-8a84-dd6b30ad4f76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:50 np0005603609 systemd[1]: run-netns-ovnmeta\x2d79cb2b81\x2d3369\x2d468a\x2d8bf6\x2d7e13d5df334b.mount: Deactivated successfully.
Jan 31 03:02:51 np0005603609 nova_compute[221550]: 2026-01-31 08:02:51.210 221554 INFO nova.virt.libvirt.driver [None req-501030fb-bd4f-44df-a20b-97f740d46482 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Instance shutdown successfully after 17 seconds.#033[00m
Jan 31 03:02:51 np0005603609 nova_compute[221550]: 2026-01-31 08:02:51.215 221554 INFO nova.virt.libvirt.driver [-] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Instance destroyed successfully.#033[00m
Jan 31 03:02:51 np0005603609 nova_compute[221550]: 2026-01-31 08:02:51.215 221554 DEBUG nova.objects.instance [None req-501030fb-bd4f-44df-a20b-97f740d46482 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'numa_topology' on Instance uuid 49127a4f-80f7-4729-a1b3-ef852d68862a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:51 np0005603609 nova_compute[221550]: 2026-01-31 08:02:51.272 221554 DEBUG nova.compute.manager [None req-501030fb-bd4f-44df-a20b-97f740d46482 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:02:51 np0005603609 nova_compute[221550]: 2026-01-31 08:02:51.332 221554 DEBUG oslo_concurrency.lockutils [None req-501030fb-bd4f-44df-a20b-97f740d46482 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 17.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:51.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:51 np0005603609 nova_compute[221550]: 2026-01-31 08:02:51.686 221554 DEBUG nova.compute.manager [req-7e7b61be-f4c8-4ee2-899c-0183ae730c22 req-940f4dec-3322-486c-a64f-0fd051a183d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Received event network-vif-unplugged-6a146b62-d383-4976-b311-188a5e891208 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:02:51 np0005603609 nova_compute[221550]: 2026-01-31 08:02:51.686 221554 DEBUG oslo_concurrency.lockutils [req-7e7b61be-f4c8-4ee2-899c-0183ae730c22 req-940f4dec-3322-486c-a64f-0fd051a183d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:51 np0005603609 nova_compute[221550]: 2026-01-31 08:02:51.686 221554 DEBUG oslo_concurrency.lockutils [req-7e7b61be-f4c8-4ee2-899c-0183ae730c22 req-940f4dec-3322-486c-a64f-0fd051a183d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:51 np0005603609 nova_compute[221550]: 2026-01-31 08:02:51.686 221554 DEBUG oslo_concurrency.lockutils [req-7e7b61be-f4c8-4ee2-899c-0183ae730c22 req-940f4dec-3322-486c-a64f-0fd051a183d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:51 np0005603609 nova_compute[221550]: 2026-01-31 08:02:51.687 221554 DEBUG nova.compute.manager [req-7e7b61be-f4c8-4ee2-899c-0183ae730c22 req-940f4dec-3322-486c-a64f-0fd051a183d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] No waiting events found dispatching network-vif-unplugged-6a146b62-d383-4976-b311-188a5e891208 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:02:51 np0005603609 nova_compute[221550]: 2026-01-31 08:02:51.687 221554 WARNING nova.compute.manager [req-7e7b61be-f4c8-4ee2-899c-0183ae730c22 req-940f4dec-3322-486c-a64f-0fd051a183d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Received unexpected event network-vif-unplugged-6a146b62-d383-4976-b311-188a5e891208 for instance with vm_state stopped and task_state None.#033[00m
Jan 31 03:02:51 np0005603609 nova_compute[221550]: 2026-01-31 08:02:51.789 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:51.790 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:02:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:51.791 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:02:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:51.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:53 np0005603609 podman[257036]: 2026-01-31 08:02:53.171709602 +0000 UTC m=+0.054051891 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 31 03:02:53 np0005603609 podman[257035]: 2026-01-31 08:02:53.199709746 +0000 UTC m=+0.081624005 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 03:02:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:53.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:53 np0005603609 nova_compute[221550]: 2026-01-31 08:02:53.550 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:53 np0005603609 nova_compute[221550]: 2026-01-31 08:02:53.733 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:53.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:54 np0005603609 nova_compute[221550]: 2026-01-31 08:02:54.199 221554 DEBUG nova.compute.manager [req-24502f5a-b2d7-40a9-b184-77470ba7af72 req-c689afa3-856f-49d9-94a7-dc38cdb5f6a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Received event network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:02:54 np0005603609 nova_compute[221550]: 2026-01-31 08:02:54.200 221554 DEBUG oslo_concurrency.lockutils [req-24502f5a-b2d7-40a9-b184-77470ba7af72 req-c689afa3-856f-49d9-94a7-dc38cdb5f6a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:54 np0005603609 nova_compute[221550]: 2026-01-31 08:02:54.200 221554 DEBUG oslo_concurrency.lockutils [req-24502f5a-b2d7-40a9-b184-77470ba7af72 req-c689afa3-856f-49d9-94a7-dc38cdb5f6a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:54 np0005603609 nova_compute[221550]: 2026-01-31 08:02:54.200 221554 DEBUG oslo_concurrency.lockutils [req-24502f5a-b2d7-40a9-b184-77470ba7af72 req-c689afa3-856f-49d9-94a7-dc38cdb5f6a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:54 np0005603609 nova_compute[221550]: 2026-01-31 08:02:54.200 221554 DEBUG nova.compute.manager [req-24502f5a-b2d7-40a9-b184-77470ba7af72 req-c689afa3-856f-49d9-94a7-dc38cdb5f6a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] No waiting events found dispatching network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:02:54 np0005603609 nova_compute[221550]: 2026-01-31 08:02:54.200 221554 WARNING nova.compute.manager [req-24502f5a-b2d7-40a9-b184-77470ba7af72 req-c689afa3-856f-49d9-94a7-dc38cdb5f6a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Received unexpected event network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 for instance with vm_state stopped and task_state rebuilding.#033[00m
Jan 31 03:02:54 np0005603609 nova_compute[221550]: 2026-01-31 08:02:54.674 221554 INFO nova.compute.manager [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Rebuilding instance#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.030 221554 DEBUG nova.objects.instance [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 49127a4f-80f7-4729-a1b3-ef852d68862a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.071 221554 DEBUG nova.compute.manager [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.204 221554 DEBUG nova.objects.instance [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'pci_requests' on Instance uuid 49127a4f-80f7-4729-a1b3-ef852d68862a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.287 221554 DEBUG nova.objects.instance [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49127a4f-80f7-4729-a1b3-ef852d68862a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.356 221554 DEBUG nova.objects.instance [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'resources' on Instance uuid 49127a4f-80f7-4729-a1b3-ef852d68862a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.436 221554 DEBUG nova.objects.instance [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'migration_context' on Instance uuid 49127a4f-80f7-4729-a1b3-ef852d68862a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.483 221554 DEBUG nova.objects.instance [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.486 221554 INFO nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Instance already shutdown.#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.491 221554 INFO nova.virt.libvirt.driver [-] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Instance destroyed successfully.#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.497 221554 INFO nova.virt.libvirt.driver [-] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Instance destroyed successfully.#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.498 221554 DEBUG nova.virt.libvirt.vif [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:02:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-944608155',display_name='tempest-tempest.common.compute-instance-944608155',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-944608155',id=94,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:02:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-honqqjjn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:02:53Z,user_data=None,user_id='31043e345f6b48b585fb7b8ab7304764',uuid=49127a4f-80f7-4729-a1b3-ef852d68862a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.499 221554 DEBUG nova.network.os_vif_util [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.499 221554 DEBUG nova.network.os_vif_util [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:56:31,bridge_name='br-int',has_traffic_filtering=True,id=6a146b62-d383-4976-b311-188a5e891208,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a146b62-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.500 221554 DEBUG os_vif [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:56:31,bridge_name='br-int',has_traffic_filtering=True,id=6a146b62-d383-4976-b311-188a5e891208,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a146b62-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.502 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.502 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a146b62-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.504 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.505 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:55 np0005603609 nova_compute[221550]: 2026-01-31 08:02:55.508 221554 INFO os_vif [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:56:31,bridge_name='br-int',has_traffic_filtering=True,id=6a146b62-d383-4976-b311-188a5e891208,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a146b62-d3')#033[00m
Jan 31 03:02:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:55.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:55.792 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:55.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.015 221554 INFO nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Deleting instance files /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a_del#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.015 221554 INFO nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Deletion of /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a_del complete#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.280 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.281 221554 INFO nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Creating image(s)#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.313 221554 DEBUG nova.storage.rbd_utils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 49127a4f-80f7-4729-a1b3-ef852d68862a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.349 221554 DEBUG nova.storage.rbd_utils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 49127a4f-80f7-4729-a1b3-ef852d68862a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.379 221554 DEBUG nova.storage.rbd_utils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 49127a4f-80f7-4729-a1b3-ef852d68862a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.383 221554 DEBUG oslo_concurrency.processutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.431 221554 DEBUG oslo_concurrency.processutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.432 221554 DEBUG oslo_concurrency.lockutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.432 221554 DEBUG oslo_concurrency.lockutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.432 221554 DEBUG oslo_concurrency.lockutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.463 221554 DEBUG nova.storage.rbd_utils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 49127a4f-80f7-4729-a1b3-ef852d68862a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.468 221554 DEBUG oslo_concurrency.processutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 49127a4f-80f7-4729-a1b3-ef852d68862a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.743 221554 DEBUG oslo_concurrency.processutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 49127a4f-80f7-4729-a1b3-ef852d68862a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.835 221554 DEBUG nova.storage.rbd_utils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] resizing rbd image 49127a4f-80f7-4729-a1b3-ef852d68862a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.954 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.955 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Ensure instance console log exists: /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.956 221554 DEBUG oslo_concurrency.lockutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.956 221554 DEBUG oslo_concurrency.lockutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.956 221554 DEBUG oslo_concurrency.lockutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.958 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Start _get_guest_xml network_info=[{"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.963 221554 WARNING nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.974 221554 DEBUG nova.virt.libvirt.host [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.975 221554 DEBUG nova.virt.libvirt.host [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.978 221554 DEBUG nova.virt.libvirt.host [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.979 221554 DEBUG nova.virt.libvirt.host [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.980 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.981 221554 DEBUG nova.virt.hardware [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.981 221554 DEBUG nova.virt.hardware [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.981 221554 DEBUG nova.virt.hardware [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.982 221554 DEBUG nova.virt.hardware [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.982 221554 DEBUG nova.virt.hardware [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.982 221554 DEBUG nova.virt.hardware [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.983 221554 DEBUG nova.virt.hardware [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.983 221554 DEBUG nova.virt.hardware [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.983 221554 DEBUG nova.virt.hardware [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.984 221554 DEBUG nova.virt.hardware [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.984 221554 DEBUG nova.virt.hardware [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:02:56 np0005603609 nova_compute[221550]: 2026-01-31 08:02:56.984 221554 DEBUG nova.objects.instance [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 49127a4f-80f7-4729-a1b3-ef852d68862a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.037 221554 DEBUG oslo_concurrency.processutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.248 221554 DEBUG oslo_concurrency.lockutils [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.248 221554 DEBUG oslo_concurrency.lockutils [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.249 221554 DEBUG oslo_concurrency.lockutils [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.249 221554 DEBUG oslo_concurrency.lockutils [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.249 221554 DEBUG oslo_concurrency.lockutils [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.250 221554 INFO nova.compute.manager [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Terminating instance#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.251 221554 DEBUG nova.compute.manager [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:02:57 np0005603609 kernel: tap647b42a2-81 (unregistering): left promiscuous mode
Jan 31 03:02:57 np0005603609 NetworkManager[49064]: <info>  [1769846577.3021] device (tap647b42a2-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.313 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:57Z|00338|binding|INFO|Releasing lport 647b42a2-81d3-49dc-9075-d31077bfffe8 from this chassis (sb_readonly=0)
Jan 31 03:02:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:57Z|00339|binding|INFO|Setting lport 647b42a2-81d3-49dc-9075-d31077bfffe8 down in Southbound
Jan 31 03:02:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:02:57Z|00340|binding|INFO|Removing iface tap647b42a2-81 ovn-installed in OVS
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.318 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:57.324 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:eb:48 10.100.0.3'], port_security=['fa:16:3e:14:eb:48 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e2e6f697-3383-4887-b25a-c89447e67fc2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '8', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=647b42a2-81d3-49dc-9075-d31077bfffe8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:02:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:57.325 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 647b42a2-81d3-49dc-9075-d31077bfffe8 in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 unbound from our chassis#033[00m
Jan 31 03:02:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:57.326 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 121329c8-2359-4e9d-9f2b-4932f8740470, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:02:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:57.328 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8a4eebc5-002f-40e7-a8d5-d72486258dd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:57.328 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace which is not needed anymore#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.334 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:57 np0005603609 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Jan 31 03:02:57 np0005603609 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d0000005d.scope: Consumed 13.470s CPU time.
Jan 31 03:02:57 np0005603609 systemd-machined[190912]: Machine qemu-43-instance-0000005d terminated.
Jan 31 03:02:57 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[256896]: [NOTICE]   (256918) : haproxy version is 2.8.14-c23fe91
Jan 31 03:02:57 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[256896]: [NOTICE]   (256918) : path to executable is /usr/sbin/haproxy
Jan 31 03:02:57 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[256896]: [WARNING]  (256918) : Exiting Master process...
Jan 31 03:02:57 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[256896]: [ALERT]    (256918) : Current worker (256920) exited with code 143 (Terminated)
Jan 31 03:02:57 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[256896]: [WARNING]  (256918) : All workers exited. Exiting... (0)
Jan 31 03:02:57 np0005603609 systemd[1]: libpod-b99c6aaebc04b4ca5e11b539fe306af5c216e6458857caf85538fe8e3c769977.scope: Deactivated successfully.
Jan 31 03:02:57 np0005603609 podman[257310]: 2026-01-31 08:02:57.4475794 +0000 UTC m=+0.048419446 container died b99c6aaebc04b4ca5e11b539fe306af5c216e6458857caf85538fe8e3c769977 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 03:02:57 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b99c6aaebc04b4ca5e11b539fe306af5c216e6458857caf85538fe8e3c769977-userdata-shm.mount: Deactivated successfully.
Jan 31 03:02:57 np0005603609 systemd[1]: var-lib-containers-storage-overlay-54254a6d410423a6a09480894c73bd85b9ece7ed03bd7c859ddaeb4cf8cf9ede-merged.mount: Deactivated successfully.
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.487 221554 INFO nova.virt.libvirt.driver [-] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Instance destroyed successfully.#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.488 221554 DEBUG nova.objects.instance [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'resources' on Instance uuid e2e6f697-3383-4887-b25a-c89447e67fc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:57 np0005603609 podman[257310]: 2026-01-31 08:02:57.491307514 +0000 UTC m=+0.092147550 container cleanup b99c6aaebc04b4ca5e11b539fe306af5c216e6458857caf85538fe8e3c769977 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 03:02:57 np0005603609 systemd[1]: libpod-conmon-b99c6aaebc04b4ca5e11b539fe306af5c216e6458857caf85538fe8e3c769977.scope: Deactivated successfully.
Jan 31 03:02:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:57.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.532 221554 DEBUG nova.virt.libvirt.vif [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-810198552',display_name='tempest-ServerDiskConfigTestJSON-server-810198552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-810198552',id=93,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:02:34Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-1w400nzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:02:46Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=e2e6f697-3383-4887-b25a-c89447e67fc2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.533 221554 DEBUG nova.network.os_vif_util [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "647b42a2-81d3-49dc-9075-d31077bfffe8", "address": "fa:16:3e:14:eb:48", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b42a2-81", "ovs_interfaceid": "647b42a2-81d3-49dc-9075-d31077bfffe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.533 221554 DEBUG nova.network.os_vif_util [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.534 221554 DEBUG os_vif [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.535 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.536 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647b42a2-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.539 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.542 221554 INFO os_vif [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:eb:48,bridge_name='br-int',has_traffic_filtering=True,id=647b42a2-81d3-49dc-9075-d31077bfffe8,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b42a2-81')#033[00m
Jan 31 03:02:57 np0005603609 podman[257350]: 2026-01-31 08:02:57.551256176 +0000 UTC m=+0.044111572 container remove b99c6aaebc04b4ca5e11b539fe306af5c216e6458857caf85538fe8e3c769977 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 03:02:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:57.556 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[232c431a-b171-4887-ace0-c6cbcf9fbd5f]: (4, ('Sat Jan 31 08:02:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (b99c6aaebc04b4ca5e11b539fe306af5c216e6458857caf85538fe8e3c769977)\nb99c6aaebc04b4ca5e11b539fe306af5c216e6458857caf85538fe8e3c769977\nSat Jan 31 08:02:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (b99c6aaebc04b4ca5e11b539fe306af5c216e6458857caf85538fe8e3c769977)\nb99c6aaebc04b4ca5e11b539fe306af5c216e6458857caf85538fe8e3c769977\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:57.562 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[601b4f6c-0651-4682-b93f-a9c941b519ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:57.563 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:57 np0005603609 kernel: tap121329c8-20: left promiscuous mode
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.567 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.570 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:02:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:57.573 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[56ceac03-a34c-4e82-abb2-46afd423bcb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:57 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1288948321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:02:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:57.588 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[54cc58e8-791b-4ef1-8485-2e6e337eb480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:57.590 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c9beed71-aa32-4d9e-8884-d5af93f33abc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.596 221554 DEBUG oslo_concurrency.processutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:57.600 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[32311942-7305-4c6a-9cc3-404c9ddd0862]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679356, 'reachable_time': 40807, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257385, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:57 np0005603609 systemd[1]: run-netns-ovnmeta\x2d121329c8\x2d2359\x2d4e9d\x2d9f2b\x2d4932f8740470.mount: Deactivated successfully.
Jan 31 03:02:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:57.603 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:02:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:02:57.604 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba82ebd-14ba-449f-b395-c5487f10b144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.622 221554 DEBUG nova.storage.rbd_utils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 49127a4f-80f7-4729-a1b3-ef852d68862a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.626 221554 DEBUG oslo_concurrency.processutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.644 221554 DEBUG nova.compute.manager [req-4284fa14-d9bf-461f-9a6f-a9eb0b9f6ba0 req-172fb5de-97c3-4929-9498-8eff721a402e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-vif-unplugged-647b42a2-81d3-49dc-9075-d31077bfffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.644 221554 DEBUG oslo_concurrency.lockutils [req-4284fa14-d9bf-461f-9a6f-a9eb0b9f6ba0 req-172fb5de-97c3-4929-9498-8eff721a402e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.644 221554 DEBUG oslo_concurrency.lockutils [req-4284fa14-d9bf-461f-9a6f-a9eb0b9f6ba0 req-172fb5de-97c3-4929-9498-8eff721a402e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.644 221554 DEBUG oslo_concurrency.lockutils [req-4284fa14-d9bf-461f-9a6f-a9eb0b9f6ba0 req-172fb5de-97c3-4929-9498-8eff721a402e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.645 221554 DEBUG nova.compute.manager [req-4284fa14-d9bf-461f-9a6f-a9eb0b9f6ba0 req-172fb5de-97c3-4929-9498-8eff721a402e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] No waiting events found dispatching network-vif-unplugged-647b42a2-81d3-49dc-9075-d31077bfffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:02:57 np0005603609 nova_compute[221550]: 2026-01-31 08:02:57.645 221554 DEBUG nova.compute.manager [req-4284fa14-d9bf-461f-9a6f-a9eb0b9f6ba0 req-172fb5de-97c3-4929-9498-8eff721a402e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-vif-unplugged-647b42a2-81d3-49dc-9075-d31077bfffe8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:02:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:57.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.027 221554 INFO nova.virt.libvirt.driver [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Deleting instance files /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2_del#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.028 221554 INFO nova.virt.libvirt.driver [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Deletion of /var/lib/nova/instances/e2e6f697-3383-4887-b25a-c89447e67fc2_del complete#033[00m
Jan 31 03:02:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:02:58 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/729951909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.058 221554 DEBUG oslo_concurrency.processutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.060 221554 DEBUG nova.virt.libvirt.vif [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:02:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-944608155',display_name='tempest-tempest.common.compute-instance-944608155',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-944608155',id=94,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:02:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-honqqjjn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-S
erverActionsTestOtherA-527878807-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:02:56Z,user_data=None,user_id='31043e345f6b48b585fb7b8ab7304764',uuid=49127a4f-80f7-4729-a1b3-ef852d68862a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.061 221554 DEBUG nova.network.os_vif_util [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.063 221554 DEBUG nova.network.os_vif_util [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:56:31,bridge_name='br-int',has_traffic_filtering=True,id=6a146b62-d383-4976-b311-188a5e891208,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a146b62-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.067 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  <uuid>49127a4f-80f7-4729-a1b3-ef852d68862a</uuid>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  <name>instance-0000005e</name>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <nova:name>tempest-tempest.common.compute-instance-944608155</nova:name>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:02:56</nova:creationTime>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        <nova:user uuid="31043e345f6b48b585fb7b8ab7304764">tempest-ServerActionsTestOtherA-527878807-project-member</nova:user>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        <nova:project uuid="d352316ff6534075952e2d0c28061b09">tempest-ServerActionsTestOtherA-527878807</nova:project>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="40cf2ff3-f7ff-4843-b4ab-b7dcc843006f"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        <nova:port uuid="6a146b62-d383-4976-b311-188a5e891208">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <entry name="serial">49127a4f-80f7-4729-a1b3-ef852d68862a</entry>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <entry name="uuid">49127a4f-80f7-4729-a1b3-ef852d68862a</entry>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/49127a4f-80f7-4729-a1b3-ef852d68862a_disk">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/49127a4f-80f7-4729-a1b3-ef852d68862a_disk.config">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:5f:56:31"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <target dev="tap6a146b62-d3"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a/console.log" append="off"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:02:58 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:02:58 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:02:58 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:02:58 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.069 221554 DEBUG nova.compute.manager [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Preparing to wait for external event network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.069 221554 DEBUG oslo_concurrency.lockutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.070 221554 DEBUG oslo_concurrency.lockutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.070 221554 DEBUG oslo_concurrency.lockutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.072 221554 DEBUG nova.virt.libvirt.vif [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:02:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-944608155',display_name='tempest-tempest.common.compute-instance-944608155',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-944608155',id=94,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:02:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-honqqjjn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-S
erverActionsTestOtherA-527878807-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:02:56Z,user_data=None,user_id='31043e345f6b48b585fb7b8ab7304764',uuid=49127a4f-80f7-4729-a1b3-ef852d68862a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.072 221554 DEBUG nova.network.os_vif_util [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.073 221554 DEBUG nova.network.os_vif_util [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:56:31,bridge_name='br-int',has_traffic_filtering=True,id=6a146b62-d383-4976-b311-188a5e891208,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a146b62-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.074 221554 DEBUG os_vif [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:56:31,bridge_name='br-int',has_traffic_filtering=True,id=6a146b62-d383-4976-b311-188a5e891208,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a146b62-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.075 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.076 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.077 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.078 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.080 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.081 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a146b62-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.082 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6a146b62-d3, col_values=(('external_ids', {'iface-id': '6a146b62-d383-4976-b311-188a5e891208', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:56:31', 'vm-uuid': '49127a4f-80f7-4729-a1b3-ef852d68862a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.084 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:58 np0005603609 NetworkManager[49064]: <info>  [1769846578.0851] manager: (tap6a146b62-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.088 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.090 221554 INFO os_vif [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:56:31,bridge_name='br-int',has_traffic_filtering=True,id=6a146b62-d383-4976-b311-188a5e891208,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a146b62-d3')#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.179 221554 INFO nova.compute.manager [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.182 221554 DEBUG oslo.service.loopingcall [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.183 221554 DEBUG nova.compute.manager [-] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.183 221554 DEBUG nova.network.neutron [-] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.202 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.202 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.202 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No VIF found with MAC fa:16:3e:5f:56:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.203 221554 INFO nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Using config drive#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.228 221554 DEBUG nova.storage.rbd_utils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 49127a4f-80f7-4729-a1b3-ef852d68862a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.252 221554 DEBUG nova.objects.instance [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 49127a4f-80f7-4729-a1b3-ef852d68862a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.335 221554 DEBUG nova.objects.instance [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'keypairs' on Instance uuid 49127a4f-80f7-4729-a1b3-ef852d68862a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:58 np0005603609 nova_compute[221550]: 2026-01-31 08:02:58.734 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:02:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:59.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:02:59 np0005603609 nova_compute[221550]: 2026-01-31 08:02:59.789 221554 DEBUG nova.compute.manager [req-97da5934-9a35-465a-88e9-58d160bdaa81 req-4c0a283b-e167-413e-8240-2101e1657e45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:02:59 np0005603609 nova_compute[221550]: 2026-01-31 08:02:59.790 221554 DEBUG oslo_concurrency.lockutils [req-97da5934-9a35-465a-88e9-58d160bdaa81 req-4c0a283b-e167-413e-8240-2101e1657e45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:59 np0005603609 nova_compute[221550]: 2026-01-31 08:02:59.790 221554 DEBUG oslo_concurrency.lockutils [req-97da5934-9a35-465a-88e9-58d160bdaa81 req-4c0a283b-e167-413e-8240-2101e1657e45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:59 np0005603609 nova_compute[221550]: 2026-01-31 08:02:59.790 221554 DEBUG oslo_concurrency.lockutils [req-97da5934-9a35-465a-88e9-58d160bdaa81 req-4c0a283b-e167-413e-8240-2101e1657e45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:59 np0005603609 nova_compute[221550]: 2026-01-31 08:02:59.791 221554 DEBUG nova.compute.manager [req-97da5934-9a35-465a-88e9-58d160bdaa81 req-4c0a283b-e167-413e-8240-2101e1657e45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] No waiting events found dispatching network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:02:59 np0005603609 nova_compute[221550]: 2026-01-31 08:02:59.791 221554 WARNING nova.compute.manager [req-97da5934-9a35-465a-88e9-58d160bdaa81 req-4c0a283b-e167-413e-8240-2101e1657e45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received unexpected event network-vif-plugged-647b42a2-81d3-49dc-9075-d31077bfffe8 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:02:59 np0005603609 nova_compute[221550]: 2026-01-31 08:02:59.824 221554 INFO nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Creating config drive at /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a/disk.config#033[00m
Jan 31 03:02:59 np0005603609 nova_compute[221550]: 2026-01-31 08:02:59.827 221554 DEBUG oslo_concurrency.processutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpkrr5oa0j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:02:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:59.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:59 np0005603609 nova_compute[221550]: 2026-01-31 08:02:59.944 221554 DEBUG oslo_concurrency.processutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpkrr5oa0j" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:59 np0005603609 nova_compute[221550]: 2026-01-31 08:02:59.967 221554 DEBUG nova.storage.rbd_utils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 49127a4f-80f7-4729-a1b3-ef852d68862a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:59 np0005603609 nova_compute[221550]: 2026-01-31 08:02:59.970 221554 DEBUG oslo_concurrency.processutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a/disk.config 49127a4f-80f7-4729-a1b3-ef852d68862a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.219 221554 DEBUG oslo_concurrency.processutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a/disk.config 49127a4f-80f7-4729-a1b3-ef852d68862a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.219 221554 INFO nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Deleting local config drive /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a/disk.config because it was imported into RBD.#033[00m
Jan 31 03:03:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:00 np0005603609 kernel: tap6a146b62-d3: entered promiscuous mode
Jan 31 03:03:00 np0005603609 systemd-udevd[257288]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:03:00 np0005603609 NetworkManager[49064]: <info>  [1769846580.2564] manager: (tap6a146b62-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/168)
Jan 31 03:03:00 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:00Z|00341|binding|INFO|Claiming lport 6a146b62-d383-4976-b311-188a5e891208 for this chassis.
Jan 31 03:03:00 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:00Z|00342|binding|INFO|6a146b62-d383-4976-b311-188a5e891208: Claiming fa:16:3e:5f:56:31 10.100.0.7
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.256 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:00 np0005603609 NetworkManager[49064]: <info>  [1769846580.2640] device (tap6a146b62-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:03:00 np0005603609 NetworkManager[49064]: <info>  [1769846580.2644] device (tap6a146b62-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.264 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:00 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:00Z|00343|binding|INFO|Setting lport 6a146b62-d383-4976-b311-188a5e891208 ovn-installed in OVS
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.266 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.268 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:00 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:00Z|00344|binding|INFO|Setting lport 6a146b62-d383-4976-b311-188a5e891208 up in Southbound
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.270 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:56:31 10.100.0.7'], port_security=['fa:16:3e:5f:56:31 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '49127a4f-80f7-4729-a1b3-ef852d68862a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd60e8396-567a-4b0f-955f-0777fc5fbc4d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=6a146b62-d383-4976-b311-188a5e891208) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.271 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 6a146b62-d383-4976-b311-188a5e891208 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b bound to our chassis#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.272 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79cb2b81-3369-468a-8bf6-7e13d5df334b#033[00m
Jan 31 03:03:00 np0005603609 systemd-machined[190912]: New machine qemu-44-instance-0000005e.
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.282 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb6ea95-ae4c-4ba5-98ec-bcb6057ae68a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.282 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79cb2b81-31 in ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.283 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79cb2b81-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.283 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3303af8f-a3a9-48f7-9e23-58d5f92d7670]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.284 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6fb386-66a8-4562-b71f-bcc3f9062f36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.290 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd37492-8379-4a54-985c-ca8283eb50d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:00 np0005603609 systemd[1]: Started Virtual Machine qemu-44-instance-0000005e.
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.298 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bd47b9a8-39b4-427b-b200-d747a696d6e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.320 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[4072a9f5-70d1-4e88-80ca-2ae3765cd5b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:00 np0005603609 NetworkManager[49064]: <info>  [1769846580.3243] manager: (tap79cb2b81-30): new Veth device (/org/freedesktop/NetworkManager/Devices/169)
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.323 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f647f6f7-84e3-4219-9e13-1921a925e552]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:00 np0005603609 systemd-udevd[257514]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.345 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[52b18b8c-e1f0-4cc2-95d2-9e954e6de2f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.349 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c2191b06-bb99-4f8d-b434-3a33ec8f1a3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:00 np0005603609 NetworkManager[49064]: <info>  [1769846580.3670] device (tap79cb2b81-30): carrier: link connected
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.368 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8237a3-2bbf-4287-b689-6d0f246bc833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.382 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d8d49a-de64-45c9-87c4-0e3f4a43e911]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682006, 'reachable_time': 25642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257533, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.393 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4463f3-aa00-4598-b86e-0f3f783e463a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:12e3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 682006, 'tstamp': 682006}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257534, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.402 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc5b69d-4fcc-465d-b96b-7265ab2e3f99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682006, 'reachable_time': 25642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257535, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.419 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8736bff8-58bc-4ed7-9c87-ea598bf2a2c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.432 221554 DEBUG nova.network.neutron [-] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.455 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e30bc25f-4f91-47e5-b33e-6e46d4961804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.456 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.456 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.456 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79cb2b81-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.458 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:00 np0005603609 NetworkManager[49064]: <info>  [1769846580.4588] manager: (tap79cb2b81-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Jan 31 03:03:00 np0005603609 kernel: tap79cb2b81-30: entered promiscuous mode
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.461 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.463 221554 INFO nova.compute.manager [-] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Took 2.28 seconds to deallocate network for instance.#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.468 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79cb2b81-30, col_values=(('external_ids', {'iface-id': '9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.469 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:00 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:00Z|00345|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=0)
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.471 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.473 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.475 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[63b9f2e8-5946-4337-bd23-2bbb87228a7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.476 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:03:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:00.476 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'env', 'PROCESS_TAG=haproxy-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79cb2b81-3369-468a-8bf6-7e13d5df334b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.479 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.544 221554 DEBUG oslo_concurrency.lockutils [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.544 221554 DEBUG oslo_concurrency.lockutils [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.548 221554 DEBUG oslo_concurrency.lockutils [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.609 221554 INFO nova.scheduler.client.report [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Deleted allocations for instance e2e6f697-3383-4887-b25a-c89447e67fc2#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.725 221554 DEBUG oslo_concurrency.lockutils [None req-270fe3a1-0867-4498-b459-ca529c716864 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e2e6f697-3383-4887-b25a-c89447e67fc2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.737 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 49127a4f-80f7-4729-a1b3-ef852d68862a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.738 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846580.737608, 49127a4f-80f7-4729-a1b3-ef852d68862a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.738 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] VM Started (Lifecycle Event)#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.769 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.772 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846580.7381587, 49127a4f-80f7-4729-a1b3-ef852d68862a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.772 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.801 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.803 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:03:00 np0005603609 podman[257609]: 2026-01-31 08:03:00.826882684 +0000 UTC m=+0.052401103 container create d8dd6241be0e9bf19b20c6aa0302b4a1500af2120c68571227737ce598fb8275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:03:00 np0005603609 nova_compute[221550]: 2026-01-31 08:03:00.838 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 03:03:00 np0005603609 systemd[1]: Started libpod-conmon-d8dd6241be0e9bf19b20c6aa0302b4a1500af2120c68571227737ce598fb8275.scope.
Jan 31 03:03:00 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:03:00 np0005603609 podman[257609]: 2026-01-31 08:03:00.790369955 +0000 UTC m=+0.015888394 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:03:00 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9f838b64832704444665dadb70c7a3c54a8d325b18db9fe7550a2eb0337d89/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:03:00 np0005603609 podman[257609]: 2026-01-31 08:03:00.901998093 +0000 UTC m=+0.127516532 container init d8dd6241be0e9bf19b20c6aa0302b4a1500af2120c68571227737ce598fb8275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 03:03:00 np0005603609 podman[257609]: 2026-01-31 08:03:00.906034689 +0000 UTC m=+0.131553108 container start d8dd6241be0e9bf19b20c6aa0302b4a1500af2120c68571227737ce598fb8275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 03:03:00 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[257624]: [NOTICE]   (257628) : New worker (257630) forked
Jan 31 03:03:00 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[257624]: [NOTICE]   (257628) : Loading success.
Jan 31 03:03:01 np0005603609 nova_compute[221550]: 2026-01-31 08:03:01.054 221554 DEBUG nova.compute.manager [req-92bc0b53-749d-45d5-8176-5fa4bdf7cef9 req-837ac73f-00f1-46db-af7e-de6bfde17adf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Received event network-vif-deleted-647b42a2-81d3-49dc-9075-d31077bfffe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:01.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:03:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:01.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.007 221554 DEBUG nova.compute.manager [req-dbb1bf9d-f1c3-4a01-8af5-877d2aec7f3d req-78bcfd46-61de-4a26-af07-d8c7d7be5d6b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Received event network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.008 221554 DEBUG oslo_concurrency.lockutils [req-dbb1bf9d-f1c3-4a01-8af5-877d2aec7f3d req-78bcfd46-61de-4a26-af07-d8c7d7be5d6b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.009 221554 DEBUG oslo_concurrency.lockutils [req-dbb1bf9d-f1c3-4a01-8af5-877d2aec7f3d req-78bcfd46-61de-4a26-af07-d8c7d7be5d6b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.009 221554 DEBUG oslo_concurrency.lockutils [req-dbb1bf9d-f1c3-4a01-8af5-877d2aec7f3d req-78bcfd46-61de-4a26-af07-d8c7d7be5d6b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.010 221554 DEBUG nova.compute.manager [req-dbb1bf9d-f1c3-4a01-8af5-877d2aec7f3d req-78bcfd46-61de-4a26-af07-d8c7d7be5d6b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Processing event network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.010 221554 DEBUG nova.compute.manager [req-dbb1bf9d-f1c3-4a01-8af5-877d2aec7f3d req-78bcfd46-61de-4a26-af07-d8c7d7be5d6b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Received event network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.011 221554 DEBUG oslo_concurrency.lockutils [req-dbb1bf9d-f1c3-4a01-8af5-877d2aec7f3d req-78bcfd46-61de-4a26-af07-d8c7d7be5d6b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.011 221554 DEBUG oslo_concurrency.lockutils [req-dbb1bf9d-f1c3-4a01-8af5-877d2aec7f3d req-78bcfd46-61de-4a26-af07-d8c7d7be5d6b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.012 221554 DEBUG oslo_concurrency.lockutils [req-dbb1bf9d-f1c3-4a01-8af5-877d2aec7f3d req-78bcfd46-61de-4a26-af07-d8c7d7be5d6b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.012 221554 DEBUG nova.compute.manager [req-dbb1bf9d-f1c3-4a01-8af5-877d2aec7f3d req-78bcfd46-61de-4a26-af07-d8c7d7be5d6b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] No waiting events found dispatching network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.012 221554 WARNING nova.compute.manager [req-dbb1bf9d-f1c3-4a01-8af5-877d2aec7f3d req-78bcfd46-61de-4a26-af07-d8c7d7be5d6b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Received unexpected event network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 for instance with vm_state stopped and task_state rebuild_spawning.#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.014 221554 DEBUG nova.compute.manager [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.019 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846582.019589, 49127a4f-80f7-4729-a1b3-ef852d68862a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.020 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.027 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.032 221554 INFO nova.virt.libvirt.driver [-] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Instance spawned successfully.#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.033 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.058 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.063 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.064 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.064 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.065 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.066 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.067 221554 DEBUG nova.virt.libvirt.driver [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.073 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.109 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.191 221554 DEBUG nova.compute.manager [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.259 221554 INFO nova.compute.manager [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] bringing vm to original state: 'stopped'#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.335 221554 DEBUG oslo_concurrency.lockutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "49127a4f-80f7-4729-a1b3-ef852d68862a" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.335 221554 DEBUG oslo_concurrency.lockutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.336 221554 DEBUG nova.compute.manager [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.338 221554 DEBUG nova.compute.manager [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 31 03:03:02 np0005603609 kernel: tap6a146b62-d3 (unregistering): left promiscuous mode
Jan 31 03:03:02 np0005603609 NetworkManager[49064]: <info>  [1769846582.3754] device (tap6a146b62-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.381 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:02Z|00346|binding|INFO|Releasing lport 6a146b62-d383-4976-b311-188a5e891208 from this chassis (sb_readonly=0)
Jan 31 03:03:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:02Z|00347|binding|INFO|Setting lport 6a146b62-d383-4976-b311-188a5e891208 down in Southbound
Jan 31 03:03:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:02Z|00348|binding|INFO|Removing iface tap6a146b62-d3 ovn-installed in OVS
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.383 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:02.390 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:56:31 10.100.0.7'], port_security=['fa:16:3e:5f:56:31 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '49127a4f-80f7-4729-a1b3-ef852d68862a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd60e8396-567a-4b0f-955f-0777fc5fbc4d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=6a146b62-d383-4976-b311-188a5e891208) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:03:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:02.391 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 6a146b62-d383-4976-b311-188a5e891208 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b unbound from our chassis#033[00m
Jan 31 03:03:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:02.392 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79cb2b81-3369-468a-8bf6-7e13d5df334b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:03:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:02.392 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c0504df7-75e9-4015-b4d8-d778cde77f8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:02.393 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b namespace which is not needed anymore#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.393 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:02 np0005603609 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Jan 31 03:03:02 np0005603609 systemd-machined[190912]: Machine qemu-44-instance-0000005e terminated.
Jan 31 03:03:02 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[257624]: [NOTICE]   (257628) : haproxy version is 2.8.14-c23fe91
Jan 31 03:03:02 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[257624]: [NOTICE]   (257628) : path to executable is /usr/sbin/haproxy
Jan 31 03:03:02 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[257624]: [WARNING]  (257628) : Exiting Master process...
Jan 31 03:03:02 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[257624]: [WARNING]  (257628) : Exiting Master process...
Jan 31 03:03:02 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[257624]: [ALERT]    (257628) : Current worker (257630) exited with code 143 (Terminated)
Jan 31 03:03:02 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[257624]: [WARNING]  (257628) : All workers exited. Exiting... (0)
Jan 31 03:03:02 np0005603609 systemd[1]: libpod-d8dd6241be0e9bf19b20c6aa0302b4a1500af2120c68571227737ce598fb8275.scope: Deactivated successfully.
Jan 31 03:03:02 np0005603609 podman[257661]: 2026-01-31 08:03:02.498870516 +0000 UTC m=+0.041973672 container died d8dd6241be0e9bf19b20c6aa0302b4a1500af2120c68571227737ce598fb8275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:03:02 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8dd6241be0e9bf19b20c6aa0302b4a1500af2120c68571227737ce598fb8275-userdata-shm.mount: Deactivated successfully.
Jan 31 03:03:02 np0005603609 systemd[1]: var-lib-containers-storage-overlay-fe9f838b64832704444665dadb70c7a3c54a8d325b18db9fe7550a2eb0337d89-merged.mount: Deactivated successfully.
Jan 31 03:03:02 np0005603609 podman[257661]: 2026-01-31 08:03:02.53519234 +0000 UTC m=+0.078295496 container cleanup d8dd6241be0e9bf19b20c6aa0302b4a1500af2120c68571227737ce598fb8275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:03:02 np0005603609 systemd[1]: libpod-conmon-d8dd6241be0e9bf19b20c6aa0302b4a1500af2120c68571227737ce598fb8275.scope: Deactivated successfully.
Jan 31 03:03:02 np0005603609 NetworkManager[49064]: <info>  [1769846582.5533] manager: (tap6a146b62-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/171)
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.569 221554 INFO nova.virt.libvirt.driver [-] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Instance destroyed successfully.#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.570 221554 DEBUG nova.compute.manager [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:02 np0005603609 podman[257691]: 2026-01-31 08:03:02.58792579 +0000 UTC m=+0.036290315 container remove d8dd6241be0e9bf19b20c6aa0302b4a1500af2120c68571227737ce598fb8275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:03:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:02.592 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[29deef27-40c5-4291-8cd1-3cad3f11ea0d]: (4, ('Sat Jan 31 08:03:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b (d8dd6241be0e9bf19b20c6aa0302b4a1500af2120c68571227737ce598fb8275)\nd8dd6241be0e9bf19b20c6aa0302b4a1500af2120c68571227737ce598fb8275\nSat Jan 31 08:03:02 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b (d8dd6241be0e9bf19b20c6aa0302b4a1500af2120c68571227737ce598fb8275)\nd8dd6241be0e9bf19b20c6aa0302b4a1500af2120c68571227737ce598fb8275\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:02.594 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c7cfbb2b-c9ce-43c4-b538-fabba4b99922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:02.595 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.631 221554 DEBUG oslo_concurrency.lockutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.638 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:02 np0005603609 kernel: tap79cb2b81-30: left promiscuous mode
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.651 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:02.653 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e0870d47-f2a9-435c-b951-0b750626b475]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.654 221554 DEBUG oslo_concurrency.lockutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.655 221554 DEBUG oslo_concurrency.lockutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.655 221554 DEBUG nova.objects.instance [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 03:03:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:02.671 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1914e7ca-9502-43b9-affb-209958e2e9f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:02.673 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4db21e4a-4fd8-4a5a-820f-3089b73755fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:02.685 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[80faa3c1-1f53-4203-b4cb-db122a6c1dbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682001, 'reachable_time': 38496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257722, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:02.689 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:03:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:02.689 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[55f51a45-0acd-4adc-86ba-0291e50dc785]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:02 np0005603609 systemd[1]: run-netns-ovnmeta\x2d79cb2b81\x2d3369\x2d468a\x2d8bf6\x2d7e13d5df334b.mount: Deactivated successfully.
Jan 31 03:03:02 np0005603609 nova_compute[221550]: 2026-01-31 08:03:02.721 221554 DEBUG oslo_concurrency.lockutils [None req-8e70e3b7-89ec-40db-b9dc-b32a9e569ca1 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:03 np0005603609 nova_compute[221550]: 2026-01-31 08:03:03.084 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:03 np0005603609 podman[257895]: 2026-01-31 08:03:03.287398539 +0000 UTC m=+0.046715476 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:03:03 np0005603609 podman[257917]: 2026-01-31 08:03:03.428169768 +0000 UTC m=+0.050171189 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:03:03 np0005603609 podman[257895]: 2026-01-31 08:03:03.433111236 +0000 UTC m=+0.192428163 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:03:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:03.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:03 np0005603609 nova_compute[221550]: 2026-01-31 08:03:03.771 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:03.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:03 np0005603609 nova_compute[221550]: 2026-01-31 08:03:03.923 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:04 np0005603609 nova_compute[221550]: 2026-01-31 08:03:04.209 221554 DEBUG nova.compute.manager [req-9d98211c-533e-488f-8666-43a78795333e req-4920dab6-95f1-44ff-899a-6e25971f57e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Received event network-vif-unplugged-6a146b62-d383-4976-b311-188a5e891208 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:04 np0005603609 nova_compute[221550]: 2026-01-31 08:03:04.210 221554 DEBUG oslo_concurrency.lockutils [req-9d98211c-533e-488f-8666-43a78795333e req-4920dab6-95f1-44ff-899a-6e25971f57e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:04 np0005603609 nova_compute[221550]: 2026-01-31 08:03:04.210 221554 DEBUG oslo_concurrency.lockutils [req-9d98211c-533e-488f-8666-43a78795333e req-4920dab6-95f1-44ff-899a-6e25971f57e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:04 np0005603609 nova_compute[221550]: 2026-01-31 08:03:04.210 221554 DEBUG oslo_concurrency.lockutils [req-9d98211c-533e-488f-8666-43a78795333e req-4920dab6-95f1-44ff-899a-6e25971f57e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:04 np0005603609 nova_compute[221550]: 2026-01-31 08:03:04.210 221554 DEBUG nova.compute.manager [req-9d98211c-533e-488f-8666-43a78795333e req-4920dab6-95f1-44ff-899a-6e25971f57e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] No waiting events found dispatching network-vif-unplugged-6a146b62-d383-4976-b311-188a5e891208 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:03:04 np0005603609 nova_compute[221550]: 2026-01-31 08:03:04.211 221554 WARNING nova.compute.manager [req-9d98211c-533e-488f-8666-43a78795333e req-4920dab6-95f1-44ff-899a-6e25971f57e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Received unexpected event network-vif-unplugged-6a146b62-d383-4976-b311-188a5e891208 for instance with vm_state stopped and task_state None.#033[00m
Jan 31 03:03:04 np0005603609 nova_compute[221550]: 2026-01-31 08:03:04.211 221554 DEBUG nova.compute.manager [req-9d98211c-533e-488f-8666-43a78795333e req-4920dab6-95f1-44ff-899a-6e25971f57e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Received event network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:04 np0005603609 nova_compute[221550]: 2026-01-31 08:03:04.211 221554 DEBUG oslo_concurrency.lockutils [req-9d98211c-533e-488f-8666-43a78795333e req-4920dab6-95f1-44ff-899a-6e25971f57e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:04 np0005603609 nova_compute[221550]: 2026-01-31 08:03:04.211 221554 DEBUG oslo_concurrency.lockutils [req-9d98211c-533e-488f-8666-43a78795333e req-4920dab6-95f1-44ff-899a-6e25971f57e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:04 np0005603609 nova_compute[221550]: 2026-01-31 08:03:04.212 221554 DEBUG oslo_concurrency.lockutils [req-9d98211c-533e-488f-8666-43a78795333e req-4920dab6-95f1-44ff-899a-6e25971f57e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:04 np0005603609 nova_compute[221550]: 2026-01-31 08:03:04.212 221554 DEBUG nova.compute.manager [req-9d98211c-533e-488f-8666-43a78795333e req-4920dab6-95f1-44ff-899a-6e25971f57e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] No waiting events found dispatching network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:03:04 np0005603609 nova_compute[221550]: 2026-01-31 08:03:04.212 221554 WARNING nova.compute.manager [req-9d98211c-533e-488f-8666-43a78795333e req-4920dab6-95f1-44ff-899a-6e25971f57e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Received unexpected event network-vif-plugged-6a146b62-d383-4976-b311-188a5e891208 for instance with vm_state stopped and task_state None.#033[00m
Jan 31 03:03:04 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:03:04 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:03:04 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:03:04 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:03:04 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:03:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:03:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:05.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:03:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:05.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:07.497 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:07.497 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:07.497 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:07.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.738 221554 DEBUG oslo_concurrency.lockutils [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "49127a4f-80f7-4729-a1b3-ef852d68862a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.739 221554 DEBUG oslo_concurrency.lockutils [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.739 221554 DEBUG oslo_concurrency.lockutils [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.740 221554 DEBUG oslo_concurrency.lockutils [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.740 221554 DEBUG oslo_concurrency.lockutils [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.742 221554 INFO nova.compute.manager [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Terminating instance#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.744 221554 DEBUG nova.compute.manager [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.751 221554 INFO nova.virt.libvirt.driver [-] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Instance destroyed successfully.#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.751 221554 DEBUG nova.objects.instance [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'resources' on Instance uuid 49127a4f-80f7-4729-a1b3-ef852d68862a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.778 221554 DEBUG nova.virt.libvirt.vif [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:02:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-944608155',display_name='tempest-tempest.common.compute-instance-944608155',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-944608155',id=94,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:03:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-honqqjjn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:03:02Z,user_data=None,user_id='31043e345f6b48b585fb7b8ab7304764',uuid=49127a4f-80f7-4729-a1b3-ef852d68862a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.779 221554 DEBUG nova.network.os_vif_util [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "6a146b62-d383-4976-b311-188a5e891208", "address": "fa:16:3e:5f:56:31", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a146b62-d3", "ovs_interfaceid": "6a146b62-d383-4976-b311-188a5e891208", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.779 221554 DEBUG nova.network.os_vif_util [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:56:31,bridge_name='br-int',has_traffic_filtering=True,id=6a146b62-d383-4976-b311-188a5e891208,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a146b62-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.780 221554 DEBUG os_vif [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:56:31,bridge_name='br-int',has_traffic_filtering=True,id=6a146b62-d383-4976-b311-188a5e891208,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a146b62-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.781 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.781 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a146b62-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.783 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.784 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:07 np0005603609 nova_compute[221550]: 2026-01-31 08:03:07.786 221554 INFO os_vif [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:56:31,bridge_name='br-int',has_traffic_filtering=True,id=6a146b62-d383-4976-b311-188a5e891208,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a146b62-d3')#033[00m
Jan 31 03:03:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:07.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:08 np0005603609 nova_compute[221550]: 2026-01-31 08:03:08.172 221554 INFO nova.virt.libvirt.driver [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Deleting instance files /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a_del#033[00m
Jan 31 03:03:08 np0005603609 nova_compute[221550]: 2026-01-31 08:03:08.172 221554 INFO nova.virt.libvirt.driver [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Deletion of /var/lib/nova/instances/49127a4f-80f7-4729-a1b3-ef852d68862a_del complete#033[00m
Jan 31 03:03:08 np0005603609 nova_compute[221550]: 2026-01-31 08:03:08.237 221554 INFO nova.compute.manager [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Took 0.49 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:03:08 np0005603609 nova_compute[221550]: 2026-01-31 08:03:08.237 221554 DEBUG oslo.service.loopingcall [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:03:08 np0005603609 nova_compute[221550]: 2026-01-31 08:03:08.237 221554 DEBUG nova.compute.manager [-] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:03:08 np0005603609 nova_compute[221550]: 2026-01-31 08:03:08.238 221554 DEBUG nova.network.neutron [-] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:03:08 np0005603609 nova_compute[221550]: 2026-01-31 08:03:08.773 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:09.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:09 np0005603609 nova_compute[221550]: 2026-01-31 08:03:09.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:09 np0005603609 nova_compute[221550]: 2026-01-31 08:03:09.758 221554 DEBUG nova.network.neutron [-] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:03:09 np0005603609 nova_compute[221550]: 2026-01-31 08:03:09.783 221554 INFO nova.compute.manager [-] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Took 1.55 seconds to deallocate network for instance.#033[00m
Jan 31 03:03:09 np0005603609 nova_compute[221550]: 2026-01-31 08:03:09.831 221554 DEBUG oslo_concurrency.lockutils [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:09 np0005603609 nova_compute[221550]: 2026-01-31 08:03:09.831 221554 DEBUG oslo_concurrency.lockutils [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:09 np0005603609 nova_compute[221550]: 2026-01-31 08:03:09.909 221554 DEBUG oslo_concurrency.processutils [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:09.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:09 np0005603609 nova_compute[221550]: 2026-01-31 08:03:09.931 221554 DEBUG nova.compute.manager [req-1ebfabff-a4cb-4775-8af1-610bcfaa8d9e req-3e55497a-cc61-40d8-aba1-540fd99b1fac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Received event network-vif-deleted-6a146b62-d383-4976-b311-188a5e891208 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:03:10 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1024500865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:03:10 np0005603609 nova_compute[221550]: 2026-01-31 08:03:10.338 221554 DEBUG oslo_concurrency.processutils [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:10 np0005603609 nova_compute[221550]: 2026-01-31 08:03:10.342 221554 DEBUG nova.compute.provider_tree [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:03:10 np0005603609 nova_compute[221550]: 2026-01-31 08:03:10.362 221554 DEBUG nova.scheduler.client.report [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:03:10 np0005603609 nova_compute[221550]: 2026-01-31 08:03:10.404 221554 DEBUG oslo_concurrency.lockutils [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:10 np0005603609 nova_compute[221550]: 2026-01-31 08:03:10.512 221554 INFO nova.scheduler.client.report [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Deleted allocations for instance 49127a4f-80f7-4729-a1b3-ef852d68862a#033[00m
Jan 31 03:03:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:03:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:03:10 np0005603609 nova_compute[221550]: 2026-01-31 08:03:10.642 221554 DEBUG oslo_concurrency.lockutils [None req-313445b3-df11-4df0-9a07-10aa269f1d5b 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "49127a4f-80f7-4729-a1b3-ef852d68862a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:11 np0005603609 nova_compute[221550]: 2026-01-31 08:03:11.515 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:11.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:11.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:12 np0005603609 nova_compute[221550]: 2026-01-31 08:03:12.485 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846577.4816828, e2e6f697-3383-4887-b25a-c89447e67fc2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:03:12 np0005603609 nova_compute[221550]: 2026-01-31 08:03:12.485 221554 INFO nova.compute.manager [-] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:03:12 np0005603609 nova_compute[221550]: 2026-01-31 08:03:12.554 221554 DEBUG nova.compute.manager [None req-dc4565c1-3f50-4f71-b6ad-22620e21549c - - - - - -] [instance: e2e6f697-3383-4887-b25a-c89447e67fc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:12 np0005603609 nova_compute[221550]: 2026-01-31 08:03:12.783 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:13.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:13 np0005603609 nova_compute[221550]: 2026-01-31 08:03:13.776 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:13.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:03:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:15.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:03:15 np0005603609 nova_compute[221550]: 2026-01-31 08:03:15.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:15.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:16 np0005603609 nova_compute[221550]: 2026-01-31 08:03:16.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:17.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:17 np0005603609 nova_compute[221550]: 2026-01-31 08:03:17.569 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846582.5673945, 49127a4f-80f7-4729-a1b3-ef852d68862a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:03:17 np0005603609 nova_compute[221550]: 2026-01-31 08:03:17.569 221554 INFO nova.compute.manager [-] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:03:17 np0005603609 nova_compute[221550]: 2026-01-31 08:03:17.613 221554 DEBUG nova.compute.manager [None req-4db87171-a957-4ffc-aad0-50df2359e108 - - - - - -] [instance: 49127a4f-80f7-4729-a1b3-ef852d68862a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:17 np0005603609 nova_compute[221550]: 2026-01-31 08:03:17.786 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:03:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:17.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:03:18 np0005603609 nova_compute[221550]: 2026-01-31 08:03:18.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:18 np0005603609 nova_compute[221550]: 2026-01-31 08:03:18.694 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:18 np0005603609 nova_compute[221550]: 2026-01-31 08:03:18.694 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:18 np0005603609 nova_compute[221550]: 2026-01-31 08:03:18.694 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:18 np0005603609 nova_compute[221550]: 2026-01-31 08:03:18.695 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:03:18 np0005603609 nova_compute[221550]: 2026-01-31 08:03:18.695 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:18 np0005603609 nova_compute[221550]: 2026-01-31 08:03:18.778 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:03:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/449284864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.105 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.241 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.242 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4634MB free_disk=20.87613296508789GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.243 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.243 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.316 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.317 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.339 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.524 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:03:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:19.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:03:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:03:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1309241626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.754 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.759 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.796 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.840 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.840 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.844 221554 DEBUG nova.compute.manager [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 31 03:03:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:03:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:19.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.964 221554 DEBUG oslo_concurrency.lockutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:19 np0005603609 nova_compute[221550]: 2026-01-31 08:03:19.964 221554 DEBUG oslo_concurrency.lockutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.070 221554 DEBUG nova.objects.instance [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'pci_requests' on Instance uuid dd1cfd57-90d1-4f96-b16e-96d64815af69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.087 221554 DEBUG nova.virt.hardware [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.087 221554 INFO nova.compute.claims [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.088 221554 DEBUG nova.objects.instance [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'resources' on Instance uuid dd1cfd57-90d1-4f96-b16e-96d64815af69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.100 221554 DEBUG nova.objects.instance [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd1cfd57-90d1-4f96-b16e-96d64815af69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.147 221554 INFO nova.compute.resource_tracker [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Updating resource usage from migration 47022a27-72dc-45c4-962a-90a5e5996bee#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.148 221554 DEBUG nova.compute.resource_tracker [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Starting to track incoming migration 47022a27-72dc-45c4-962a-90a5e5996bee with flavor e3bd1dad-95f3-4ed9-94b4-27245cd798b5 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.228 221554 DEBUG oslo_concurrency.processutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:03:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/689929559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.637 221554 DEBUG oslo_concurrency.processutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.643 221554 DEBUG nova.compute.provider_tree [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.668 221554 DEBUG nova.scheduler.client.report [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.841 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.842 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.843 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.962 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.963 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.969 221554 DEBUG oslo_concurrency.lockutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:20 np0005603609 nova_compute[221550]: 2026-01-31 08:03:20.970 221554 INFO nova.compute.manager [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Migrating#033[00m
Jan 31 03:03:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:21.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:21 np0005603609 nova_compute[221550]: 2026-01-31 08:03:21.776 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:21.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:22 np0005603609 nova_compute[221550]: 2026-01-31 08:03:22.787 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:23.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:23 np0005603609 nova_compute[221550]: 2026-01-31 08:03:23.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:23 np0005603609 nova_compute[221550]: 2026-01-31 08:03:23.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:03:23 np0005603609 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 03:03:23 np0005603609 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 03:03:23 np0005603609 systemd-logind[823]: New session 60 of user nova.
Jan 31 03:03:23 np0005603609 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 03:03:23 np0005603609 systemd[1]: Starting User Manager for UID 42436...
Jan 31 03:03:23 np0005603609 podman[258315]: 2026-01-31 08:03:23.763879601 +0000 UTC m=+0.049686357 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 31 03:03:23 np0005603609 nova_compute[221550]: 2026-01-31 08:03:23.779 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:23 np0005603609 podman[258313]: 2026-01-31 08:03:23.812743778 +0000 UTC m=+0.098469813 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:03:23 np0005603609 systemd[258339]: Queued start job for default target Main User Target.
Jan 31 03:03:23 np0005603609 systemd[258339]: Created slice User Application Slice.
Jan 31 03:03:23 np0005603609 systemd[258339]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:03:23 np0005603609 systemd[258339]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 03:03:23 np0005603609 systemd[258339]: Reached target Paths.
Jan 31 03:03:23 np0005603609 systemd[258339]: Reached target Timers.
Jan 31 03:03:23 np0005603609 systemd[258339]: Starting D-Bus User Message Bus Socket...
Jan 31 03:03:23 np0005603609 systemd[258339]: Starting Create User's Volatile Files and Directories...
Jan 31 03:03:23 np0005603609 systemd[258339]: Finished Create User's Volatile Files and Directories.
Jan 31 03:03:23 np0005603609 systemd[258339]: Listening on D-Bus User Message Bus Socket.
Jan 31 03:03:23 np0005603609 systemd[258339]: Reached target Sockets.
Jan 31 03:03:23 np0005603609 systemd[258339]: Reached target Basic System.
Jan 31 03:03:23 np0005603609 systemd[258339]: Reached target Main User Target.
Jan 31 03:03:23 np0005603609 systemd[258339]: Startup finished in 127ms.
Jan 31 03:03:23 np0005603609 systemd[1]: Started User Manager for UID 42436.
Jan 31 03:03:23 np0005603609 systemd[1]: Started Session 60 of User nova.
Jan 31 03:03:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:23.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:23 np0005603609 systemd[1]: session-60.scope: Deactivated successfully.
Jan 31 03:03:23 np0005603609 systemd-logind[823]: Session 60 logged out. Waiting for processes to exit.
Jan 31 03:03:23 np0005603609 systemd-logind[823]: Removed session 60.
Jan 31 03:03:24 np0005603609 systemd-logind[823]: New session 62 of user nova.
Jan 31 03:03:24 np0005603609 systemd[1]: Started Session 62 of User nova.
Jan 31 03:03:24 np0005603609 systemd[1]: session-62.scope: Deactivated successfully.
Jan 31 03:03:24 np0005603609 systemd-logind[823]: Session 62 logged out. Waiting for processes to exit.
Jan 31 03:03:24 np0005603609 systemd-logind[823]: Removed session 62.
Jan 31 03:03:24 np0005603609 nova_compute[221550]: 2026-01-31 08:03:24.482 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "79977ab8-f5fd-4e71-b86e-34bf6e623879" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:24 np0005603609 nova_compute[221550]: 2026-01-31 08:03:24.483 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:24 np0005603609 nova_compute[221550]: 2026-01-31 08:03:24.517 221554 DEBUG nova.compute.manager [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:03:24 np0005603609 nova_compute[221550]: 2026-01-31 08:03:24.594 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:24 np0005603609 nova_compute[221550]: 2026-01-31 08:03:24.595 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:24 np0005603609 nova_compute[221550]: 2026-01-31 08:03:24.605 221554 DEBUG nova.virt.hardware [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:03:24 np0005603609 nova_compute[221550]: 2026-01-31 08:03:24.605 221554 INFO nova.compute.claims [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:03:24 np0005603609 nova_compute[221550]: 2026-01-31 08:03:24.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:24 np0005603609 nova_compute[221550]: 2026-01-31 08:03:24.757 221554 DEBUG oslo_concurrency.processutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:03:25 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2436905090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.167 221554 DEBUG oslo_concurrency.processutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.175 221554 DEBUG nova.compute.provider_tree [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:03:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.244 221554 DEBUG nova.scheduler.client.report [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.382 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.383 221554 DEBUG nova.compute.manager [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:03:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:03:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:25.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.605 221554 DEBUG nova.compute.manager [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.606 221554 DEBUG nova.network.neutron [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.702 221554 INFO nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.736 221554 DEBUG nova.compute.manager [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.897 221554 DEBUG nova.compute.manager [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.898 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.898 221554 INFO nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Creating image(s)#033[00m
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.922 221554 DEBUG nova.storage.rbd_utils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:03:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:25.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.947 221554 DEBUG nova.storage.rbd_utils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.970 221554 DEBUG nova.storage.rbd_utils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.972 221554 DEBUG oslo_concurrency.processutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:25 np0005603609 nova_compute[221550]: 2026-01-31 08:03:25.988 221554 DEBUG nova.policy [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31043e345f6b48b585fb7b8ab7304764', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd352316ff6534075952e2d0c28061b09', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:03:26 np0005603609 nova_compute[221550]: 2026-01-31 08:03:26.032 221554 DEBUG oslo_concurrency.processutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:26 np0005603609 nova_compute[221550]: 2026-01-31 08:03:26.033 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:26 np0005603609 nova_compute[221550]: 2026-01-31 08:03:26.034 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:26 np0005603609 nova_compute[221550]: 2026-01-31 08:03:26.034 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:26 np0005603609 nova_compute[221550]: 2026-01-31 08:03:26.059 221554 DEBUG nova.storage.rbd_utils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:03:26 np0005603609 nova_compute[221550]: 2026-01-31 08:03:26.063 221554 DEBUG oslo_concurrency.processutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:26 np0005603609 nova_compute[221550]: 2026-01-31 08:03:26.299 221554 DEBUG oslo_concurrency.processutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:26 np0005603609 nova_compute[221550]: 2026-01-31 08:03:26.371 221554 DEBUG nova.storage.rbd_utils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] resizing rbd image 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:03:26 np0005603609 nova_compute[221550]: 2026-01-31 08:03:26.479 221554 DEBUG nova.objects.instance [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'migration_context' on Instance uuid 79977ab8-f5fd-4e71-b86e-34bf6e623879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:03:26 np0005603609 nova_compute[221550]: 2026-01-31 08:03:26.498 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:03:26 np0005603609 nova_compute[221550]: 2026-01-31 08:03:26.498 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Ensure instance console log exists: /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:03:26 np0005603609 nova_compute[221550]: 2026-01-31 08:03:26.498 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:26 np0005603609 nova_compute[221550]: 2026-01-31 08:03:26.499 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:26 np0005603609 nova_compute[221550]: 2026-01-31 08:03:26.499 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:27.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:27 np0005603609 nova_compute[221550]: 2026-01-31 08:03:27.789 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:03:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:27.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:03:28 np0005603609 nova_compute[221550]: 2026-01-31 08:03:28.200 221554 DEBUG nova.network.neutron [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Successfully created port: 216db5c7-79b0-4533-b75f-56027324e484 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:03:28 np0005603609 nova_compute[221550]: 2026-01-31 08:03:28.783 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:03:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:29.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:03:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:03:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:29.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:03:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:30 np0005603609 nova_compute[221550]: 2026-01-31 08:03:30.466 221554 DEBUG nova.network.neutron [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Successfully updated port: 216db5c7-79b0-4533-b75f-56027324e484 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:03:30 np0005603609 nova_compute[221550]: 2026-01-31 08:03:30.489 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "refresh_cache-79977ab8-f5fd-4e71-b86e-34bf6e623879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:03:30 np0005603609 nova_compute[221550]: 2026-01-31 08:03:30.490 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquired lock "refresh_cache-79977ab8-f5fd-4e71-b86e-34bf6e623879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:03:30 np0005603609 nova_compute[221550]: 2026-01-31 08:03:30.491 221554 DEBUG nova.network.neutron [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:03:30 np0005603609 nova_compute[221550]: 2026-01-31 08:03:30.613 221554 DEBUG nova.compute.manager [req-4c7b363f-fe77-4205-825d-5aa0da64e907 req-6e33111b-93c1-4310-bacc-919dcfc20b7d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received event network-changed-216db5c7-79b0-4533-b75f-56027324e484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:30 np0005603609 nova_compute[221550]: 2026-01-31 08:03:30.614 221554 DEBUG nova.compute.manager [req-4c7b363f-fe77-4205-825d-5aa0da64e907 req-6e33111b-93c1-4310-bacc-919dcfc20b7d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Refreshing instance network info cache due to event network-changed-216db5c7-79b0-4533-b75f-56027324e484. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:03:30 np0005603609 nova_compute[221550]: 2026-01-31 08:03:30.614 221554 DEBUG oslo_concurrency.lockutils [req-4c7b363f-fe77-4205-825d-5aa0da64e907 req-6e33111b-93c1-4310-bacc-919dcfc20b7d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-79977ab8-f5fd-4e71-b86e-34bf6e623879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:03:30 np0005603609 nova_compute[221550]: 2026-01-31 08:03:30.708 221554 DEBUG nova.network.neutron [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:03:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:03:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:31.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:03:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:31.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.717 221554 DEBUG nova.network.neutron [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Updating instance_info_cache with network_info: [{"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.744 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Releasing lock "refresh_cache-79977ab8-f5fd-4e71-b86e-34bf6e623879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.745 221554 DEBUG nova.compute.manager [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Instance network_info: |[{"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.745 221554 DEBUG oslo_concurrency.lockutils [req-4c7b363f-fe77-4205-825d-5aa0da64e907 req-6e33111b-93c1-4310-bacc-919dcfc20b7d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-79977ab8-f5fd-4e71-b86e-34bf6e623879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.745 221554 DEBUG nova.network.neutron [req-4c7b363f-fe77-4205-825d-5aa0da64e907 req-6e33111b-93c1-4310-bacc-919dcfc20b7d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Refreshing network info cache for port 216db5c7-79b0-4533-b75f-56027324e484 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.748 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Start _get_guest_xml network_info=[{"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.751 221554 WARNING nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.756 221554 DEBUG nova.virt.libvirt.host [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.756 221554 DEBUG nova.virt.libvirt.host [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.759 221554 DEBUG nova.virt.libvirt.host [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.759 221554 DEBUG nova.virt.libvirt.host [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.760 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.760 221554 DEBUG nova.virt.hardware [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.761 221554 DEBUG nova.virt.hardware [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.761 221554 DEBUG nova.virt.hardware [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.761 221554 DEBUG nova.virt.hardware [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.761 221554 DEBUG nova.virt.hardware [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.761 221554 DEBUG nova.virt.hardware [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.762 221554 DEBUG nova.virt.hardware [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.762 221554 DEBUG nova.virt.hardware [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.762 221554 DEBUG nova.virt.hardware [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.762 221554 DEBUG nova.virt.hardware [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.762 221554 DEBUG nova.virt.hardware [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.764 221554 DEBUG oslo_concurrency.processutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:32 np0005603609 nova_compute[221550]: 2026-01-31 08:03:32.790 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:03:33 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/787839996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.150 221554 DEBUG oslo_concurrency.processutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.180 221554 DEBUG nova.storage.rbd_utils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.185 221554 DEBUG oslo_concurrency.processutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:33.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:03:33 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4228796052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.662 221554 DEBUG oslo_concurrency.processutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.664 221554 DEBUG nova.virt.libvirt.vif [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:03:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-894827809',display_name='tempest-tempest.common.compute-instance-894827809',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-894827809',id=96,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGSsmxs9wjtIdeLS6+TDVPehBXibW1pg/pUNrYaSqukTWGS7m/mx1RvtoAfp2fIHUhH+opIu32b284sDYd8t4TmsPDo7saBhh7IBr6K+Eh6cl1W1MtRphK/nkrUT/jTSkQ==',key_name='tempest-keypair-523704801',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-xyg4dkcc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:03:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='31043e345f6b48b585fb7b8ab7304764',uuid=79977ab8-f5fd-4e71-b86e-34bf6e623879,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.664 221554 DEBUG nova.network.os_vif_util [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.665 221554 DEBUG nova.network.os_vif_util [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ba:9c,bridge_name='br-int',has_traffic_filtering=True,id=216db5c7-79b0-4533-b75f-56027324e484,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap216db5c7-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.667 221554 DEBUG nova.objects.instance [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 79977ab8-f5fd-4e71-b86e-34bf6e623879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.717 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  <uuid>79977ab8-f5fd-4e71-b86e-34bf6e623879</uuid>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  <name>instance-00000060</name>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <nova:name>tempest-tempest.common.compute-instance-894827809</nova:name>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:03:32</nova:creationTime>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        <nova:user uuid="31043e345f6b48b585fb7b8ab7304764">tempest-ServerActionsTestOtherA-527878807-project-member</nova:user>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        <nova:project uuid="d352316ff6534075952e2d0c28061b09">tempest-ServerActionsTestOtherA-527878807</nova:project>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        <nova:port uuid="216db5c7-79b0-4533-b75f-56027324e484">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <entry name="serial">79977ab8-f5fd-4e71-b86e-34bf6e623879</entry>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <entry name="uuid">79977ab8-f5fd-4e71-b86e-34bf6e623879</entry>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/79977ab8-f5fd-4e71-b86e-34bf6e623879_disk">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/79977ab8-f5fd-4e71-b86e-34bf6e623879_disk.config">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:a5:ba:9c"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <target dev="tap216db5c7-79"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879/console.log" append="off"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:03:33 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:03:33 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:03:33 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:03:33 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.718 221554 DEBUG nova.compute.manager [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Preparing to wait for external event network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.719 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.719 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.719 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.720 221554 DEBUG nova.virt.libvirt.vif [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:03:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-894827809',display_name='tempest-tempest.common.compute-instance-894827809',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-894827809',id=96,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGSsmxs9wjtIdeLS6+TDVPehBXibW1pg/pUNrYaSqukTWGS7m/mx1RvtoAfp2fIHUhH+opIu32b284sDYd8t4TmsPDo7saBhh7IBr6K+Eh6cl1W1MtRphK/nkrUT/jTSkQ==',key_name='tempest-keypair-523704801',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-xyg4dkcc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:03:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='31043e345f6b48b585fb7b8ab7304764',uuid=79977ab8-f5fd-4e71-b86e-34bf6e623879,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.720 221554 DEBUG nova.network.os_vif_util [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.721 221554 DEBUG nova.network.os_vif_util [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ba:9c,bridge_name='br-int',has_traffic_filtering=True,id=216db5c7-79b0-4533-b75f-56027324e484,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap216db5c7-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.721 221554 DEBUG os_vif [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ba:9c,bridge_name='br-int',has_traffic_filtering=True,id=216db5c7-79b0-4533-b75f-56027324e484,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap216db5c7-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.722 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.722 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.723 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.726 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.726 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap216db5c7-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.727 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap216db5c7-79, col_values=(('external_ids', {'iface-id': '216db5c7-79b0-4533-b75f-56027324e484', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:ba:9c', 'vm-uuid': '79977ab8-f5fd-4e71-b86e-34bf6e623879'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.728 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:33 np0005603609 NetworkManager[49064]: <info>  [1769846613.7294] manager: (tap216db5c7-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.732 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.733 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.735 221554 INFO os_vif [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ba:9c,bridge_name='br-int',has_traffic_filtering=True,id=216db5c7-79b0-4533-b75f-56027324e484,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap216db5c7-79')#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.783 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.876 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.877 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.877 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No VIF found with MAC fa:16:3e:a5:ba:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.877 221554 INFO nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Using config drive#033[00m
Jan 31 03:03:33 np0005603609 nova_compute[221550]: 2026-01-31 08:03:33.906 221554 DEBUG nova.storage.rbd_utils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:03:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:33.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:34 np0005603609 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 03:03:34 np0005603609 systemd[258339]: Activating special unit Exit the Session...
Jan 31 03:03:34 np0005603609 systemd[258339]: Stopped target Main User Target.
Jan 31 03:03:34 np0005603609 systemd[258339]: Stopped target Basic System.
Jan 31 03:03:34 np0005603609 systemd[258339]: Stopped target Paths.
Jan 31 03:03:34 np0005603609 systemd[258339]: Stopped target Sockets.
Jan 31 03:03:34 np0005603609 systemd[258339]: Stopped target Timers.
Jan 31 03:03:34 np0005603609 systemd[258339]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:03:34 np0005603609 systemd[258339]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 03:03:34 np0005603609 systemd[258339]: Closed D-Bus User Message Bus Socket.
Jan 31 03:03:34 np0005603609 systemd[258339]: Stopped Create User's Volatile Files and Directories.
Jan 31 03:03:34 np0005603609 systemd[258339]: Removed slice User Application Slice.
Jan 31 03:03:34 np0005603609 systemd[258339]: Reached target Shutdown.
Jan 31 03:03:34 np0005603609 systemd[258339]: Finished Exit the Session.
Jan 31 03:03:34 np0005603609 systemd[258339]: Reached target Exit the Session.
Jan 31 03:03:34 np0005603609 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 03:03:34 np0005603609 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 03:03:34 np0005603609 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 03:03:34 np0005603609 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 03:03:34 np0005603609 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 03:03:34 np0005603609 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 03:03:34 np0005603609 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 03:03:34 np0005603609 nova_compute[221550]: 2026-01-31 08:03:34.664 221554 INFO nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Creating config drive at /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879/disk.config#033[00m
Jan 31 03:03:34 np0005603609 nova_compute[221550]: 2026-01-31 08:03:34.669 221554 DEBUG oslo_concurrency.processutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp53eg0ph5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:34 np0005603609 nova_compute[221550]: 2026-01-31 08:03:34.803 221554 DEBUG oslo_concurrency.processutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp53eg0ph5" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:34 np0005603609 nova_compute[221550]: 2026-01-31 08:03:34.832 221554 DEBUG nova.storage.rbd_utils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:03:34 np0005603609 nova_compute[221550]: 2026-01-31 08:03:34.838 221554 DEBUG oslo_concurrency.processutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879/disk.config 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:35 np0005603609 nova_compute[221550]: 2026-01-31 08:03:35.082 221554 DEBUG nova.network.neutron [req-4c7b363f-fe77-4205-825d-5aa0da64e907 req-6e33111b-93c1-4310-bacc-919dcfc20b7d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Updated VIF entry in instance network info cache for port 216db5c7-79b0-4533-b75f-56027324e484. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:03:35 np0005603609 nova_compute[221550]: 2026-01-31 08:03:35.083 221554 DEBUG nova.network.neutron [req-4c7b363f-fe77-4205-825d-5aa0da64e907 req-6e33111b-93c1-4310-bacc-919dcfc20b7d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Updating instance_info_cache with network_info: [{"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:03:35 np0005603609 nova_compute[221550]: 2026-01-31 08:03:35.120 221554 DEBUG oslo_concurrency.lockutils [req-4c7b363f-fe77-4205-825d-5aa0da64e907 req-6e33111b-93c1-4310-bacc-919dcfc20b7d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-79977ab8-f5fd-4e71-b86e-34bf6e623879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:03:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:35 np0005603609 nova_compute[221550]: 2026-01-31 08:03:35.318 221554 DEBUG oslo_concurrency.processutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879/disk.config 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:35 np0005603609 nova_compute[221550]: 2026-01-31 08:03:35.319 221554 INFO nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Deleting local config drive /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879/disk.config because it was imported into RBD.#033[00m
Jan 31 03:03:35 np0005603609 kernel: tap216db5c7-79: entered promiscuous mode
Jan 31 03:03:35 np0005603609 NetworkManager[49064]: <info>  [1769846615.3568] manager: (tap216db5c7-79): new Tun device (/org/freedesktop/NetworkManager/Devices/173)
Jan 31 03:03:35 np0005603609 nova_compute[221550]: 2026-01-31 08:03:35.357 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:35Z|00349|binding|INFO|Claiming lport 216db5c7-79b0-4533-b75f-56027324e484 for this chassis.
Jan 31 03:03:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:35Z|00350|binding|INFO|216db5c7-79b0-4533-b75f-56027324e484: Claiming fa:16:3e:a5:ba:9c 10.100.0.9
Jan 31 03:03:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:35Z|00351|binding|INFO|Setting lport 216db5c7-79b0-4533-b75f-56027324e484 ovn-installed in OVS
Jan 31 03:03:35 np0005603609 nova_compute[221550]: 2026-01-31 08:03:35.365 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:35 np0005603609 nova_compute[221550]: 2026-01-31 08:03:35.367 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:35 np0005603609 systemd-machined[190912]: New machine qemu-45-instance-00000060.
Jan 31 03:03:35 np0005603609 systemd-udevd[258702]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:03:35 np0005603609 systemd[1]: Started Virtual Machine qemu-45-instance-00000060.
Jan 31 03:03:35 np0005603609 NetworkManager[49064]: <info>  [1769846615.3965] device (tap216db5c7-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:03:35 np0005603609 NetworkManager[49064]: <info>  [1769846615.3976] device (tap216db5c7-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:03:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:35Z|00352|binding|INFO|Setting lport 216db5c7-79b0-4533-b75f-56027324e484 up in Southbound
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.541 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:ba:9c 10.100.0.9'], port_security=['fa:16:3e:a5:ba:9c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '79977ab8-f5fd-4e71-b86e-34bf6e623879', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2e80c7d1-3f05-4a2c-8ace-7be5723d956d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=216db5c7-79b0-4533-b75f-56027324e484) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.542 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 216db5c7-79b0-4533-b75f-56027324e484 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b bound to our chassis#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.544 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79cb2b81-3369-468a-8bf6-7e13d5df334b#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.551 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[576f15cb-784c-41dd-a578-950ce1936c59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.552 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79cb2b81-31 in ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.554 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79cb2b81-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.554 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[df739ab8-d3b2-4a33-bbcc-8108990ced02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.555 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4aca9696-5946-4afd-9baf-70d9b648ba66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.562 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[534b9ff1-7e03-4e17-bed7-42b766bc23d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:35.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.571 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[525037ba-2954-4365-9de8-c2dd118e64f1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.588 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d27334ea-8403-4e9f-a58f-345c50b38627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:35 np0005603609 systemd-udevd[258704]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:03:35 np0005603609 NetworkManager[49064]: <info>  [1769846615.5928] manager: (tap79cb2b81-30): new Veth device (/org/freedesktop/NetworkManager/Devices/174)
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.593 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3cafe81e-e523-4128-b578-7f91f288f1d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.622 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[41d59ac9-0705-4e00-875e-10d84dcc8060]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.624 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b63127ae-c8f8-4251-a68a-45c5aa2b3e8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:35 np0005603609 NetworkManager[49064]: <info>  [1769846615.6422] device (tap79cb2b81-30): carrier: link connected
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.648 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[7fded120-4104-48b8-891c-22339af85bc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.660 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f3f239-4a3b-4c16-b2d4-0aab0d947226]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685534, 'reachable_time': 24137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258735, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.672 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9b261336-e852-45be-8251-378f65ffd7f4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:12e3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685534, 'tstamp': 685534}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258736, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.684 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7f85c400-cb3d-4c44-9f40-770fffa96ba9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685534, 'reachable_time': 24137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258737, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.707 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[69be0a48-cd6d-4db4-8e5c-7f2314da517f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.740 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[52814e1f-0757-40ac-8915-6a899c00b3d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.741 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.741 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.742 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79cb2b81-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:35 np0005603609 nova_compute[221550]: 2026-01-31 08:03:35.744 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:35 np0005603609 NetworkManager[49064]: <info>  [1769846615.7446] manager: (tap79cb2b81-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Jan 31 03:03:35 np0005603609 kernel: tap79cb2b81-30: entered promiscuous mode
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.746 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79cb2b81-30, col_values=(('external_ids', {'iface-id': '9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:35 np0005603609 nova_compute[221550]: 2026-01-31 08:03:35.747 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:35Z|00353|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=0)
Jan 31 03:03:35 np0005603609 nova_compute[221550]: 2026-01-31 08:03:35.748 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.748 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:03:35 np0005603609 nova_compute[221550]: 2026-01-31 08:03:35.751 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.751 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f976fefe-09f8-46c5-8239-6a6745eff442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.753 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:03:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:35.754 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'env', 'PROCESS_TAG=haproxy-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79cb2b81-3369-468a-8bf6-7e13d5df334b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:03:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:35.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:36 np0005603609 podman[258805]: 2026-01-31 08:03:36.100455384 +0000 UTC m=+0.054415782 container create 91d54e58c78eb52d2b1534f64710b5f3961e313cee6fd008b7ebac444319f242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:03:36 np0005603609 systemd[1]: Started libpod-conmon-91d54e58c78eb52d2b1534f64710b5f3961e313cee6fd008b7ebac444319f242.scope.
Jan 31 03:03:36 np0005603609 nova_compute[221550]: 2026-01-31 08:03:36.145 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846616.1446443, 79977ab8-f5fd-4e71-b86e-34bf6e623879 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:03:36 np0005603609 nova_compute[221550]: 2026-01-31 08:03:36.146 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] VM Started (Lifecycle Event)#033[00m
Jan 31 03:03:36 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:03:36 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957c830655a4fcb1ba7414af3219f51b81324877776381b2e0799315c36fa993/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:03:36 np0005603609 podman[258805]: 2026-01-31 08:03:36.068936274 +0000 UTC m=+0.022896772 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:03:36 np0005603609 podman[258805]: 2026-01-31 08:03:36.170077799 +0000 UTC m=+0.124038227 container init 91d54e58c78eb52d2b1534f64710b5f3961e313cee6fd008b7ebac444319f242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:03:36 np0005603609 podman[258805]: 2026-01-31 08:03:36.174937806 +0000 UTC m=+0.128898204 container start 91d54e58c78eb52d2b1534f64710b5f3961e313cee6fd008b7ebac444319f242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 03:03:36 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[258827]: [NOTICE]   (258831) : New worker (258833) forked
Jan 31 03:03:36 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[258827]: [NOTICE]   (258831) : Loading success.
Jan 31 03:03:36 np0005603609 nova_compute[221550]: 2026-01-31 08:03:36.293 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:36 np0005603609 nova_compute[221550]: 2026-01-31 08:03:36.297 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846616.144869, 79977ab8-f5fd-4e71-b86e-34bf6e623879 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:03:36 np0005603609 nova_compute[221550]: 2026-01-31 08:03:36.298 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:03:36 np0005603609 nova_compute[221550]: 2026-01-31 08:03:36.392 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:36 np0005603609 nova_compute[221550]: 2026-01-31 08:03:36.394 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:03:36 np0005603609 nova_compute[221550]: 2026-01-31 08:03:36.429 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.286 221554 DEBUG nova.compute.manager [req-9f818b10-66d7-48e4-84a3-3fb847943f0b req-3bab1027-7ac3-4114-b513-b7e9a77a6228 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received event network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.286 221554 DEBUG oslo_concurrency.lockutils [req-9f818b10-66d7-48e4-84a3-3fb847943f0b req-3bab1027-7ac3-4114-b513-b7e9a77a6228 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.287 221554 DEBUG oslo_concurrency.lockutils [req-9f818b10-66d7-48e4-84a3-3fb847943f0b req-3bab1027-7ac3-4114-b513-b7e9a77a6228 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.287 221554 DEBUG oslo_concurrency.lockutils [req-9f818b10-66d7-48e4-84a3-3fb847943f0b req-3bab1027-7ac3-4114-b513-b7e9a77a6228 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.287 221554 DEBUG nova.compute.manager [req-9f818b10-66d7-48e4-84a3-3fb847943f0b req-3bab1027-7ac3-4114-b513-b7e9a77a6228 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Processing event network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.288 221554 DEBUG nova.compute.manager [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.290 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846617.2905524, 79977ab8-f5fd-4e71-b86e-34bf6e623879 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.291 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.293 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.295 221554 INFO nova.virt.libvirt.driver [-] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Instance spawned successfully.#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.295 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.315 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.318 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.326 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.327 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.327 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.328 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.328 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.328 221554 DEBUG nova.virt.libvirt.driver [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.360 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.423 221554 DEBUG nova.compute.manager [req-f54885ab-9648-4466-b6ff-a055bd9c9563 req-ee00b238-5252-41d1-b9e6-d7123c3e92a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-vif-unplugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.424 221554 DEBUG oslo_concurrency.lockutils [req-f54885ab-9648-4466-b6ff-a055bd9c9563 req-ee00b238-5252-41d1-b9e6-d7123c3e92a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.424 221554 DEBUG oslo_concurrency.lockutils [req-f54885ab-9648-4466-b6ff-a055bd9c9563 req-ee00b238-5252-41d1-b9e6-d7123c3e92a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.425 221554 DEBUG oslo_concurrency.lockutils [req-f54885ab-9648-4466-b6ff-a055bd9c9563 req-ee00b238-5252-41d1-b9e6-d7123c3e92a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.425 221554 DEBUG nova.compute.manager [req-f54885ab-9648-4466-b6ff-a055bd9c9563 req-ee00b238-5252-41d1-b9e6-d7123c3e92a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] No waiting events found dispatching network-vif-unplugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.425 221554 WARNING nova.compute.manager [req-f54885ab-9648-4466-b6ff-a055bd9c9563 req-ee00b238-5252-41d1-b9e6-d7123c3e92a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received unexpected event network-vif-unplugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.439 221554 INFO nova.compute.manager [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Took 11.54 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.439 221554 DEBUG nova.compute.manager [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:03:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:37.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.606 221554 INFO nova.compute.manager [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Took 13.03 seconds to build instance.#033[00m
Jan 31 03:03:37 np0005603609 nova_compute[221550]: 2026-01-31 08:03:37.645 221554 DEBUG oslo_concurrency.lockutils [None req-57e5aa74-c8ee-4056-b7ef-6b03b1399ca0 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:37.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:38 np0005603609 nova_compute[221550]: 2026-01-31 08:03:38.070 221554 INFO nova.network.neutron [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Updating port 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 31 03:03:38 np0005603609 nova_compute[221550]: 2026-01-31 08:03:38.729 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:38 np0005603609 nova_compute[221550]: 2026-01-31 08:03:38.784 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:39 np0005603609 nova_compute[221550]: 2026-01-31 08:03:39.438 221554 DEBUG nova.compute.manager [req-4a33d7b2-733a-4fd1-93e1-b831e06caaf5 req-df7e1345-b37c-4ef7-95f3-e4180a89d5eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received event network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:39 np0005603609 nova_compute[221550]: 2026-01-31 08:03:39.438 221554 DEBUG oslo_concurrency.lockutils [req-4a33d7b2-733a-4fd1-93e1-b831e06caaf5 req-df7e1345-b37c-4ef7-95f3-e4180a89d5eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:39 np0005603609 nova_compute[221550]: 2026-01-31 08:03:39.438 221554 DEBUG oslo_concurrency.lockutils [req-4a33d7b2-733a-4fd1-93e1-b831e06caaf5 req-df7e1345-b37c-4ef7-95f3-e4180a89d5eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:39 np0005603609 nova_compute[221550]: 2026-01-31 08:03:39.439 221554 DEBUG oslo_concurrency.lockutils [req-4a33d7b2-733a-4fd1-93e1-b831e06caaf5 req-df7e1345-b37c-4ef7-95f3-e4180a89d5eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:39 np0005603609 nova_compute[221550]: 2026-01-31 08:03:39.439 221554 DEBUG nova.compute.manager [req-4a33d7b2-733a-4fd1-93e1-b831e06caaf5 req-df7e1345-b37c-4ef7-95f3-e4180a89d5eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] No waiting events found dispatching network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:03:39 np0005603609 nova_compute[221550]: 2026-01-31 08:03:39.439 221554 WARNING nova.compute.manager [req-4a33d7b2-733a-4fd1-93e1-b831e06caaf5 req-df7e1345-b37c-4ef7-95f3-e4180a89d5eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received unexpected event network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:03:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:39.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:39 np0005603609 nova_compute[221550]: 2026-01-31 08:03:39.751 221554 DEBUG nova.compute.manager [req-9ab6613a-ef89-4d8a-9723-948c2f6c1dec req-f0b65800-25f1-4e88-aa9b-d39f8eef7dca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:39 np0005603609 nova_compute[221550]: 2026-01-31 08:03:39.752 221554 DEBUG oslo_concurrency.lockutils [req-9ab6613a-ef89-4d8a-9723-948c2f6c1dec req-f0b65800-25f1-4e88-aa9b-d39f8eef7dca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:39 np0005603609 nova_compute[221550]: 2026-01-31 08:03:39.752 221554 DEBUG oslo_concurrency.lockutils [req-9ab6613a-ef89-4d8a-9723-948c2f6c1dec req-f0b65800-25f1-4e88-aa9b-d39f8eef7dca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:39 np0005603609 nova_compute[221550]: 2026-01-31 08:03:39.753 221554 DEBUG oslo_concurrency.lockutils [req-9ab6613a-ef89-4d8a-9723-948c2f6c1dec req-f0b65800-25f1-4e88-aa9b-d39f8eef7dca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:39 np0005603609 nova_compute[221550]: 2026-01-31 08:03:39.753 221554 DEBUG nova.compute.manager [req-9ab6613a-ef89-4d8a-9723-948c2f6c1dec req-f0b65800-25f1-4e88-aa9b-d39f8eef7dca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] No waiting events found dispatching network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:03:39 np0005603609 nova_compute[221550]: 2026-01-31 08:03:39.753 221554 WARNING nova.compute.manager [req-9ab6613a-ef89-4d8a-9723-948c2f6c1dec req-f0b65800-25f1-4e88-aa9b-d39f8eef7dca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received unexpected event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 03:03:39 np0005603609 nova_compute[221550]: 2026-01-31 08:03:39.754 221554 DEBUG oslo_concurrency.lockutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:03:39 np0005603609 nova_compute[221550]: 2026-01-31 08:03:39.755 221554 DEBUG oslo_concurrency.lockutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquired lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:03:39 np0005603609 nova_compute[221550]: 2026-01-31 08:03:39.755 221554 DEBUG nova.network.neutron [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:03:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:39.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:03:40.513387) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620513487, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2400, "num_deletes": 252, "total_data_size": 5626513, "memory_usage": 5695400, "flush_reason": "Manual Compaction"}
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620553820, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 3688012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44601, "largest_seqno": 46996, "table_properties": {"data_size": 3678370, "index_size": 6072, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20449, "raw_average_key_size": 20, "raw_value_size": 3658904, "raw_average_value_size": 3684, "num_data_blocks": 264, "num_entries": 993, "num_filter_entries": 993, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846410, "oldest_key_time": 1769846410, "file_creation_time": 1769846620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 40454 microseconds, and 8746 cpu microseconds.
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:03:40.553858) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 3688012 bytes OK
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:03:40.553876) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:03:40.557845) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:03:40.557869) EVENT_LOG_v1 {"time_micros": 1769846620557861, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:03:40.557894) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 5616025, prev total WAL file size 5616025, number of live WAL files 2.
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:03:40.559089) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(3601KB)], [87(9492KB)]
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620559150, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 13408052, "oldest_snapshot_seqno": -1}
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 7182 keys, 11455503 bytes, temperature: kUnknown
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620755818, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 11455503, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11407091, "index_size": 29356, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 185163, "raw_average_key_size": 25, "raw_value_size": 11278269, "raw_average_value_size": 1570, "num_data_blocks": 1164, "num_entries": 7182, "num_filter_entries": 7182, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769846620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:03:40.759035) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11455503 bytes
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:03:40.761297) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.1 rd, 58.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.3 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7705, records dropped: 523 output_compression: NoCompression
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:03:40.761342) EVENT_LOG_v1 {"time_micros": 1769846620761322, "job": 54, "event": "compaction_finished", "compaction_time_micros": 196759, "compaction_time_cpu_micros": 18430, "output_level": 6, "num_output_files": 1, "total_output_size": 11455503, "num_input_records": 7705, "num_output_records": 7182, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620762240, "job": 54, "event": "table_file_deletion", "file_number": 89}
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620764184, "job": 54, "event": "table_file_deletion", "file_number": 87}
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:03:40.558990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:03:40.764295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:03:40.764301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:03:40.764303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:03:40.764305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:03:40 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:03:40.764308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:03:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:41.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:41 np0005603609 nova_compute[221550]: 2026-01-31 08:03:41.910 221554 DEBUG nova.compute.manager [req-bed801c3-afef-47c7-b65d-2084dd011570 req-8792e1d2-e61e-4927-8293-a838b17bbd2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received event network-changed-216db5c7-79b0-4533-b75f-56027324e484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:41 np0005603609 nova_compute[221550]: 2026-01-31 08:03:41.910 221554 DEBUG nova.compute.manager [req-bed801c3-afef-47c7-b65d-2084dd011570 req-8792e1d2-e61e-4927-8293-a838b17bbd2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Refreshing instance network info cache due to event network-changed-216db5c7-79b0-4533-b75f-56027324e484. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:03:41 np0005603609 nova_compute[221550]: 2026-01-31 08:03:41.910 221554 DEBUG oslo_concurrency.lockutils [req-bed801c3-afef-47c7-b65d-2084dd011570 req-8792e1d2-e61e-4927-8293-a838b17bbd2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-79977ab8-f5fd-4e71-b86e-34bf6e623879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:03:41 np0005603609 nova_compute[221550]: 2026-01-31 08:03:41.910 221554 DEBUG oslo_concurrency.lockutils [req-bed801c3-afef-47c7-b65d-2084dd011570 req-8792e1d2-e61e-4927-8293-a838b17bbd2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-79977ab8-f5fd-4e71-b86e-34bf6e623879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:03:41 np0005603609 nova_compute[221550]: 2026-01-31 08:03:41.911 221554 DEBUG nova.network.neutron [req-bed801c3-afef-47c7-b65d-2084dd011570 req-8792e1d2-e61e-4927-8293-a838b17bbd2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Refreshing network info cache for port 216db5c7-79b0-4533-b75f-56027324e484 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:03:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:41.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:42 np0005603609 nova_compute[221550]: 2026-01-31 08:03:42.005 221554 DEBUG nova.compute.manager [req-637b3e0a-c9de-4044-b07b-f76c27210071 req-c8dbea9f-42d3-4241-b95b-c2902fb9a9bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-changed-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:42 np0005603609 nova_compute[221550]: 2026-01-31 08:03:42.006 221554 DEBUG nova.compute.manager [req-637b3e0a-c9de-4044-b07b-f76c27210071 req-c8dbea9f-42d3-4241-b95b-c2902fb9a9bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Refreshing instance network info cache due to event network-changed-0681aec4-62fb-4ff7-9e0f-3038c32e48a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:03:42 np0005603609 nova_compute[221550]: 2026-01-31 08:03:42.007 221554 DEBUG oslo_concurrency.lockutils [req-637b3e0a-c9de-4044-b07b-f76c27210071 req-c8dbea9f-42d3-4241-b95b-c2902fb9a9bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:03:42 np0005603609 nova_compute[221550]: 2026-01-31 08:03:42.689 221554 DEBUG nova.network.neutron [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Updating instance_info_cache with network_info: [{"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:03:42 np0005603609 nova_compute[221550]: 2026-01-31 08:03:42.723 221554 DEBUG oslo_concurrency.lockutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Releasing lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:03:42 np0005603609 nova_compute[221550]: 2026-01-31 08:03:42.728 221554 DEBUG oslo_concurrency.lockutils [req-637b3e0a-c9de-4044-b07b-f76c27210071 req-c8dbea9f-42d3-4241-b95b-c2902fb9a9bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:03:42 np0005603609 nova_compute[221550]: 2026-01-31 08:03:42.729 221554 DEBUG nova.network.neutron [req-637b3e0a-c9de-4044-b07b-f76c27210071 req-c8dbea9f-42d3-4241-b95b-c2902fb9a9bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Refreshing network info cache for port 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:03:42 np0005603609 nova_compute[221550]: 2026-01-31 08:03:42.849 221554 DEBUG nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 31 03:03:42 np0005603609 nova_compute[221550]: 2026-01-31 08:03:42.851 221554 DEBUG nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:03:42 np0005603609 nova_compute[221550]: 2026-01-31 08:03:42.851 221554 INFO nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Creating image(s)#033[00m
Jan 31 03:03:42 np0005603609 nova_compute[221550]: 2026-01-31 08:03:42.887 221554 DEBUG nova.storage.rbd_utils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] creating snapshot(nova-resize) on rbd image(dd1cfd57-90d1-4f96-b16e-96d64815af69_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:03:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e254 e254: 3 total, 3 up, 3 in
Jan 31 03:03:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:43.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.626 221554 DEBUG nova.objects.instance [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid dd1cfd57-90d1-4f96-b16e-96d64815af69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.779 221554 DEBUG nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.779 221554 DEBUG nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Ensure instance console log exists: /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.780 221554 DEBUG oslo_concurrency.lockutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.780 221554 DEBUG oslo_concurrency.lockutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.781 221554 DEBUG oslo_concurrency.lockutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.784 221554 DEBUG nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Start _get_guest_xml network_info=[{"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "vif_mac": "fa:16:3e:7c:f2:ba"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.789 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.791 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.795 221554 WARNING nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.803 221554 DEBUG nova.virt.libvirt.host [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.805 221554 DEBUG nova.virt.libvirt.host [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.812 221554 DEBUG nova.virt.libvirt.host [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.813 221554 DEBUG nova.virt.libvirt.host [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.814 221554 DEBUG nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.814 221554 DEBUG nova.virt.hardware [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e3bd1dad-95f3-4ed9-94b4-27245cd798b5',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.815 221554 DEBUG nova.virt.hardware [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.815 221554 DEBUG nova.virt.hardware [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.816 221554 DEBUG nova.virt.hardware [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.816 221554 DEBUG nova.virt.hardware [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.816 221554 DEBUG nova.virt.hardware [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.817 221554 DEBUG nova.virt.hardware [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.817 221554 DEBUG nova.virt.hardware [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.817 221554 DEBUG nova.virt.hardware [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.818 221554 DEBUG nova.virt.hardware [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.818 221554 DEBUG nova.virt.hardware [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.819 221554 DEBUG nova.objects.instance [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid dd1cfd57-90d1-4f96-b16e-96d64815af69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:03:43 np0005603609 nova_compute[221550]: 2026-01-31 08:03:43.871 221554 DEBUG oslo_concurrency.processutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:03:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:43.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.206 221554 DEBUG nova.network.neutron [req-bed801c3-afef-47c7-b65d-2084dd011570 req-8792e1d2-e61e-4927-8293-a838b17bbd2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Updated VIF entry in instance network info cache for port 216db5c7-79b0-4533-b75f-56027324e484. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.207 221554 DEBUG nova.network.neutron [req-bed801c3-afef-47c7-b65d-2084dd011570 req-8792e1d2-e61e-4927-8293-a838b17bbd2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Updating instance_info_cache with network_info: [{"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.240 221554 DEBUG oslo_concurrency.lockutils [req-bed801c3-afef-47c7-b65d-2084dd011570 req-8792e1d2-e61e-4927-8293-a838b17bbd2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-79977ab8-f5fd-4e71-b86e-34bf6e623879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:03:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:03:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2913285531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.299 221554 DEBUG oslo_concurrency.processutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.327 221554 DEBUG oslo_concurrency.processutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:03:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1226616456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.784 221554 DEBUG oslo_concurrency.processutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.786 221554 DEBUG nova.virt.libvirt.vif [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:03:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1636985884',display_name='tempest-ServerDiskConfigTestJSON-server-1636985884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1636985884',id=95,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:03:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-chqeaza3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio'
,image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:03:37Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=dd1cfd57-90d1-4f96-b16e-96d64815af69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "vif_mac": "fa:16:3e:7c:f2:ba"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.786 221554 DEBUG nova.network.os_vif_util [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "vif_mac": "fa:16:3e:7c:f2:ba"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.787 221554 DEBUG nova.network.os_vif_util [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.789 221554 DEBUG nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  <uuid>dd1cfd57-90d1-4f96-b16e-96d64815af69</uuid>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  <name>instance-0000005f</name>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  <memory>196608</memory>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1636985884</nova:name>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:03:43</nova:creationTime>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.micro">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        <nova:memory>192</nova:memory>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        <nova:user uuid="63e95edea0164ae2a9820dc10467335d">tempest-ServerDiskConfigTestJSON-984925022-project-member</nova:user>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        <nova:project uuid="be74d11d2f5a4d9aae2dbe32c31ad9c3">tempest-ServerDiskConfigTestJSON-984925022</nova:project>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        <nova:port uuid="0681aec4-62fb-4ff7-9e0f-3038c32e48a2">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <entry name="serial">dd1cfd57-90d1-4f96-b16e-96d64815af69</entry>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <entry name="uuid">dd1cfd57-90d1-4f96-b16e-96d64815af69</entry>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/dd1cfd57-90d1-4f96-b16e-96d64815af69_disk">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/dd1cfd57-90d1-4f96-b16e-96d64815af69_disk.config">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:7c:f2:ba"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <target dev="tap0681aec4-62"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69/console.log" append="off"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:03:44 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:03:44 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:03:44 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:03:44 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.794 221554 DEBUG nova.virt.libvirt.vif [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:03:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1636985884',display_name='tempest-ServerDiskConfigTestJSON-server-1636985884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1636985884',id=95,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:03:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-chqeaza3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio'
,image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:03:37Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=dd1cfd57-90d1-4f96-b16e-96d64815af69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "vif_mac": "fa:16:3e:7c:f2:ba"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.794 221554 DEBUG nova.network.os_vif_util [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "vif_mac": "fa:16:3e:7c:f2:ba"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.795 221554 DEBUG nova.network.os_vif_util [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.795 221554 DEBUG os_vif [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.796 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.796 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.797 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.799 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.799 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0681aec4-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.799 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0681aec4-62, col_values=(('external_ids', {'iface-id': '0681aec4-62fb-4ff7-9e0f-3038c32e48a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:f2:ba', 'vm-uuid': 'dd1cfd57-90d1-4f96-b16e-96d64815af69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.801 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:44 np0005603609 NetworkManager[49064]: <info>  [1769846624.8019] manager: (tap0681aec4-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/176)
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.803 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.806 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.806 221554 INFO os_vif [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62')#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.881 221554 DEBUG nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.881 221554 DEBUG nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.881 221554 DEBUG nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No VIF found with MAC fa:16:3e:7c:f2:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.882 221554 INFO nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Using config drive#033[00m
Jan 31 03:03:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:03:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1766540124' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:03:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:03:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1766540124' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:03:44 np0005603609 kernel: tap0681aec4-62: entered promiscuous mode
Jan 31 03:03:44 np0005603609 NetworkManager[49064]: <info>  [1769846624.9411] manager: (tap0681aec4-62): new Tun device (/org/freedesktop/NetworkManager/Devices/177)
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.990 221554 DEBUG nova.network.neutron [req-637b3e0a-c9de-4044-b07b-f76c27210071 req-c8dbea9f-42d3-4241-b95b-c2902fb9a9bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Updated VIF entry in instance network info cache for port 0681aec4-62fb-4ff7-9e0f-3038c32e48a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:03:44 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:44Z|00354|binding|INFO|Claiming lport 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 for this chassis.
Jan 31 03:03:44 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:44Z|00355|binding|INFO|0681aec4-62fb-4ff7-9e0f-3038c32e48a2: Claiming fa:16:3e:7c:f2:ba 10.100.0.13
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.996 221554 DEBUG nova.network.neutron [req-637b3e0a-c9de-4044-b07b-f76c27210071 req-c8dbea9f-42d3-4241-b95b-c2902fb9a9bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Updating instance_info_cache with network_info: [{"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:03:44 np0005603609 nova_compute[221550]: 2026-01-31 08:03:44.997 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.001 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:f2:ba 10.100.0.13'], port_security=['fa:16:3e:7c:f2:ba 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dd1cfd57-90d1-4f96-b16e-96d64815af69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=0681aec4-62fb-4ff7-9e0f-3038c32e48a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.003 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 bound to our chassis#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.004 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 121329c8-2359-4e9d-9f2b-4932f8740470#033[00m
Jan 31 03:03:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:45Z|00356|binding|INFO|Setting lport 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 ovn-installed in OVS
Jan 31 03:03:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:45Z|00357|binding|INFO|Setting lport 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 up in Southbound
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.008 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.012 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.011 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c7971442-3576-4a2b-bd98-8c1c39d5cda7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.013 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap121329c8-21 in ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.015 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap121329c8-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.015 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[947f8da4-b8f0-4142-b519-d4c869304c81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.016 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[76f6c290-d24d-4b2c-a957-4089d2f9cb3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:45 np0005603609 systemd-udevd[259013]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.023 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[52fda11b-7805-47ad-8edf-96e54c4af10d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.026 221554 DEBUG oslo_concurrency.lockutils [req-637b3e0a-c9de-4044-b07b-f76c27210071 req-c8dbea9f-42d3-4241-b95b-c2902fb9a9bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:03:45 np0005603609 systemd-machined[190912]: New machine qemu-46-instance-0000005f.
Jan 31 03:03:45 np0005603609 NetworkManager[49064]: <info>  [1769846625.0312] device (tap0681aec4-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:03:45 np0005603609 NetworkManager[49064]: <info>  [1769846625.0322] device (tap0681aec4-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.034 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a1023707-7d66-4f1c-8235-761405e4b990]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:45 np0005603609 systemd[1]: Started Virtual Machine qemu-46-instance-0000005f.
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.054 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[6b03b183-67c9-4723-9249-eeb66a29dbe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:45 np0005603609 NetworkManager[49064]: <info>  [1769846625.0589] manager: (tap121329c8-20): new Veth device (/org/freedesktop/NetworkManager/Devices/178)
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.057 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[478d7821-f6a3-4996-acf4-8825878bf206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.086 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[933fe873-8381-455f-b7da-38729e3193d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.089 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[e3bd08a4-62a6-4e60-8449-fd7bcc77cfd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:45 np0005603609 NetworkManager[49064]: <info>  [1769846625.1041] device (tap121329c8-20): carrier: link connected
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.107 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a48630dd-ec00-4844-b7e1-8e1adf71ad72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.121 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6456ef82-cff6-4620-a7ba-cdcd5af13013]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686480, 'reachable_time': 27410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259044, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.136 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2d67777d-b478-41f9-bce9-bcf5670dc756]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:a3c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686480, 'tstamp': 686480}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259045, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.149 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[31849e33-c160-4758-880e-5f3654885e5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686480, 'reachable_time': 27410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259046, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.173 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1ffb4587-871e-4bd7-b346-934fc6395fa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.222 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b365cda3-70fc-427b-8821-e806950ea964]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.224 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.224 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.224 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap121329c8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.226 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:45 np0005603609 kernel: tap121329c8-20: entered promiscuous mode
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.228 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:45 np0005603609 NetworkManager[49064]: <info>  [1769846625.2296] manager: (tap121329c8-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.230 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap121329c8-20, col_values=(('external_ids', {'iface-id': 'e59d8348-5c5c-4c82-ba21-91f3a512c65e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.230 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:45Z|00358|binding|INFO|Releasing lport e59d8348-5c5c-4c82-ba21-91f3a512c65e from this chassis (sb_readonly=0)
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.237 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.238 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.239 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.240 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[048f0342-30f9-4b4a-aad1-8125e682719d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.240 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:03:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:45.241 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'env', 'PROCESS_TAG=haproxy-121329c8-2359-4e9d-9f2b-4932f8740470', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/121329c8-2359-4e9d-9f2b-4932f8740470.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:03:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.514 221554 DEBUG nova.compute.manager [req-d5b213c0-4955-477e-8c75-b0070ad32672 req-8af824d5-0ff0-47c5-8009-ffe3a35e7708 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.515 221554 DEBUG oslo_concurrency.lockutils [req-d5b213c0-4955-477e-8c75-b0070ad32672 req-8af824d5-0ff0-47c5-8009-ffe3a35e7708 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.516 221554 DEBUG oslo_concurrency.lockutils [req-d5b213c0-4955-477e-8c75-b0070ad32672 req-8af824d5-0ff0-47c5-8009-ffe3a35e7708 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.516 221554 DEBUG oslo_concurrency.lockutils [req-d5b213c0-4955-477e-8c75-b0070ad32672 req-8af824d5-0ff0-47c5-8009-ffe3a35e7708 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.516 221554 DEBUG nova.compute.manager [req-d5b213c0-4955-477e-8c75-b0070ad32672 req-8af824d5-0ff0-47c5-8009-ffe3a35e7708 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] No waiting events found dispatching network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.517 221554 WARNING nova.compute.manager [req-d5b213c0-4955-477e-8c75-b0070ad32672 req-8af824d5-0ff0-47c5-8009-ffe3a35e7708 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received unexpected event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 for instance with vm_state active and task_state resize_finish.#033[00m
Jan 31 03:03:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:45.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:45 np0005603609 podman[259103]: 2026-01-31 08:03:45.537388959 +0000 UTC m=+0.021040438 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:03:45 np0005603609 podman[259103]: 2026-01-31 08:03:45.827552774 +0000 UTC m=+0.311204213 container create b5089a5bf8cb5ff0e60165a380f4f6c5a8a2b8a9187c46375b6c7af2c0729557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.861 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846625.8615959, dd1cfd57-90d1-4f96-b16e-96d64815af69 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.863 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.865 221554 DEBUG nova.compute.manager [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.868 221554 INFO nova.virt.libvirt.driver [-] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Instance running successfully.#033[00m
Jan 31 03:03:45 np0005603609 virtqemud[221292]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.871 221554 DEBUG nova.virt.libvirt.guest [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.871 221554 DEBUG nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 31 03:03:45 np0005603609 systemd[1]: Started libpod-conmon-b5089a5bf8cb5ff0e60165a380f4f6c5a8a2b8a9187c46375b6c7af2c0729557.scope.
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.902 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:45 np0005603609 nova_compute[221550]: 2026-01-31 08:03:45.909 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:03:45 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:03:45 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79143938ff47ca70448c7aca128eea83a1cda9c756ada859461ead1e134596e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:03:45 np0005603609 podman[259103]: 2026-01-31 08:03:45.935899672 +0000 UTC m=+0.419551141 container init b5089a5bf8cb5ff0e60165a380f4f6c5a8a2b8a9187c46375b6c7af2c0729557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 03:03:45 np0005603609 podman[259103]: 2026-01-31 08:03:45.939936049 +0000 UTC m=+0.423587508 container start b5089a5bf8cb5ff0e60165a380f4f6c5a8a2b8a9187c46375b6c7af2c0729557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 03:03:45 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[259135]: [NOTICE]   (259139) : New worker (259141) forked
Jan 31 03:03:45 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[259135]: [NOTICE]   (259139) : Loading success.
Jan 31 03:03:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:03:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:45.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:03:46 np0005603609 nova_compute[221550]: 2026-01-31 08:03:46.035 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 31 03:03:46 np0005603609 nova_compute[221550]: 2026-01-31 08:03:46.036 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846625.8632917, dd1cfd57-90d1-4f96-b16e-96d64815af69 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:03:46 np0005603609 nova_compute[221550]: 2026-01-31 08:03:46.036 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] VM Started (Lifecycle Event)#033[00m
Jan 31 03:03:46 np0005603609 nova_compute[221550]: 2026-01-31 08:03:46.204 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:46 np0005603609 nova_compute[221550]: 2026-01-31 08:03:46.208 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:03:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:47.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:47 np0005603609 nova_compute[221550]: 2026-01-31 08:03:47.635 221554 DEBUG nova.compute.manager [req-c700cf15-f4f3-45b6-ae6d-04824f5d3d08 req-c3551996-15c0-437e-8f82-b06419d7cd92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:47 np0005603609 nova_compute[221550]: 2026-01-31 08:03:47.635 221554 DEBUG oslo_concurrency.lockutils [req-c700cf15-f4f3-45b6-ae6d-04824f5d3d08 req-c3551996-15c0-437e-8f82-b06419d7cd92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:47 np0005603609 nova_compute[221550]: 2026-01-31 08:03:47.636 221554 DEBUG oslo_concurrency.lockutils [req-c700cf15-f4f3-45b6-ae6d-04824f5d3d08 req-c3551996-15c0-437e-8f82-b06419d7cd92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:47 np0005603609 nova_compute[221550]: 2026-01-31 08:03:47.636 221554 DEBUG oslo_concurrency.lockutils [req-c700cf15-f4f3-45b6-ae6d-04824f5d3d08 req-c3551996-15c0-437e-8f82-b06419d7cd92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:47 np0005603609 nova_compute[221550]: 2026-01-31 08:03:47.636 221554 DEBUG nova.compute.manager [req-c700cf15-f4f3-45b6-ae6d-04824f5d3d08 req-c3551996-15c0-437e-8f82-b06419d7cd92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] No waiting events found dispatching network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:03:47 np0005603609 nova_compute[221550]: 2026-01-31 08:03:47.637 221554 WARNING nova.compute.manager [req-c700cf15-f4f3-45b6-ae6d-04824f5d3d08 req-c3551996-15c0-437e-8f82-b06419d7cd92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received unexpected event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 for instance with vm_state resized and task_state None.#033[00m
Jan 31 03:03:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:47.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:48 np0005603609 nova_compute[221550]: 2026-01-31 08:03:48.789 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:03:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:49.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:03:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:49Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:ba:9c 10.100.0.9
Jan 31 03:03:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:49Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:ba:9c 10.100.0.9
Jan 31 03:03:49 np0005603609 nova_compute[221550]: 2026-01-31 08:03:49.801 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:49.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:51.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:51.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:53.535 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:03:53 np0005603609 nova_compute[221550]: 2026-01-31 08:03:53.536 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:53.536 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:03:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:53.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:53 np0005603609 nova_compute[221550]: 2026-01-31 08:03:53.791 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:03:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:53.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:03:54 np0005603609 podman[259153]: 2026-01-31 08:03:54.16912384 +0000 UTC m=+0.052675560 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 03:03:54 np0005603609 podman[259152]: 2026-01-31 08:03:54.190617027 +0000 UTC m=+0.076005081 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:03:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e255 e255: 3 total, 3 up, 3 in
Jan 31 03:03:54 np0005603609 nova_compute[221550]: 2026-01-31 08:03:54.803 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:55.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:55.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.028 221554 DEBUG oslo_concurrency.lockutils [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "79977ab8-f5fd-4e71-b86e-34bf6e623879" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.029 221554 DEBUG oslo_concurrency.lockutils [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.047 221554 DEBUG nova.objects.instance [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'flavor' on Instance uuid 79977ab8-f5fd-4e71-b86e-34bf6e623879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.077 221554 DEBUG oslo_concurrency.lockutils [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.276 221554 DEBUG oslo_concurrency.lockutils [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "79977ab8-f5fd-4e71-b86e-34bf6e623879" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.277 221554 DEBUG oslo_concurrency.lockutils [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.277 221554 INFO nova.compute.manager [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Attaching volume 008cf290-7941-4c64-a361-689cb2af7bcc to /dev/vdb#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.464 221554 DEBUG os_brick.utils [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.465 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.472 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.473 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[9194e6e0-0b09-4d4e-8622-f3fc478a9e7b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.474 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.478 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.478 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbcd392-c256-41ca-86ff-5ab84868c03e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.480 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.483 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.484 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[e5950079-a89a-4f37-ae71-5f830cfec890]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.485 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[1eae2157-0172-4129-bdc9-e4c4aa50c7fb]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.485 221554 DEBUG oslo_concurrency.processutils [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.503 221554 DEBUG oslo_concurrency.processutils [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.505 221554 DEBUG os_brick.initiator.connectors.lightos [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.505 221554 DEBUG os_brick.initiator.connectors.lightos [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.506 221554 DEBUG os_brick.initiator.connectors.lightos [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.506 221554 DEBUG os_brick.utils [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] <== get_connector_properties: return (41ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:03:57 np0005603609 nova_compute[221550]: 2026-01-31 08:03:57.506 221554 DEBUG nova.virt.block_device [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Updating existing volume attachment record: dfdf7988-9c47-4683-adbf-be7e041137af _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:03:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:57.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:03:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:57.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:03:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:03:58 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1010431320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:03:58 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:58Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7c:f2:ba 10.100.0.13
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.240 221554 DEBUG nova.objects.instance [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'flavor' on Instance uuid 79977ab8-f5fd-4e71-b86e-34bf6e623879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.264 221554 DEBUG nova.virt.libvirt.driver [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Attempting to attach volume 008cf290-7941-4c64-a361-689cb2af7bcc with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.266 221554 DEBUG nova.virt.libvirt.guest [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 03:03:58 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:03:58 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-008cf290-7941-4c64-a361-689cb2af7bcc">
Jan 31 03:03:58 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:03:58 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:03:58 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:03:58 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:03:58 np0005603609 nova_compute[221550]:  <auth username="openstack">
Jan 31 03:03:58 np0005603609 nova_compute[221550]:    <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:03:58 np0005603609 nova_compute[221550]:  </auth>
Jan 31 03:03:58 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:03:58 np0005603609 nova_compute[221550]:  <serial>008cf290-7941-4c64-a361-689cb2af7bcc</serial>
Jan 31 03:03:58 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:03:58 np0005603609 nova_compute[221550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.378 221554 DEBUG nova.virt.libvirt.driver [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.379 221554 DEBUG nova.virt.libvirt.driver [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.379 221554 DEBUG nova.virt.libvirt.driver [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.379 221554 DEBUG nova.virt.libvirt.driver [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No VIF found with MAC fa:16:3e:a5:ba:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.676 221554 DEBUG oslo_concurrency.lockutils [None req-79295f7d-f737-4eff-99c1-94043c19fb31 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.692 221554 DEBUG oslo_concurrency.lockutils [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.692 221554 DEBUG oslo_concurrency.lockutils [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.693 221554 DEBUG oslo_concurrency.lockutils [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.693 221554 DEBUG oslo_concurrency.lockutils [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.694 221554 DEBUG oslo_concurrency.lockutils [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.696 221554 INFO nova.compute.manager [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Terminating instance#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.697 221554 DEBUG nova.compute.manager [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:03:58 np0005603609 kernel: tap0681aec4-62 (unregistering): left promiscuous mode
Jan 31 03:03:58 np0005603609 NetworkManager[49064]: <info>  [1769846638.7618] device (tap0681aec4-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:03:58 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:58Z|00359|binding|INFO|Releasing lport 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 from this chassis (sb_readonly=0)
Jan 31 03:03:58 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:58Z|00360|binding|INFO|Setting lport 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 down in Southbound
Jan 31 03:03:58 np0005603609 ovn_controller[130359]: 2026-01-31T08:03:58Z|00361|binding|INFO|Removing iface tap0681aec4-62 ovn-installed in OVS
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.771 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.779 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:58.782 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:f2:ba 10.100.0.13'], port_security=['fa:16:3e:7c:f2:ba 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dd1cfd57-90d1-4f96-b16e-96d64815af69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '8', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=0681aec4-62fb-4ff7-9e0f-3038c32e48a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:03:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:58.783 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 unbound from our chassis#033[00m
Jan 31 03:03:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:58.785 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 121329c8-2359-4e9d-9f2b-4932f8740470, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:03:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:58.785 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a6add790-acc8-4e8c-98ae-cbc2d2833629]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:58.786 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace which is not needed anymore#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.794 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:58 np0005603609 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Jan 31 03:03:58 np0005603609 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000005f.scope: Consumed 12.513s CPU time.
Jan 31 03:03:58 np0005603609 systemd-machined[190912]: Machine qemu-46-instance-0000005f terminated.
Jan 31 03:03:58 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[259135]: [NOTICE]   (259139) : haproxy version is 2.8.14-c23fe91
Jan 31 03:03:58 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[259135]: [NOTICE]   (259139) : path to executable is /usr/sbin/haproxy
Jan 31 03:03:58 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[259135]: [WARNING]  (259139) : Exiting Master process...
Jan 31 03:03:58 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[259135]: [ALERT]    (259139) : Current worker (259141) exited with code 143 (Terminated)
Jan 31 03:03:58 np0005603609 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[259135]: [WARNING]  (259139) : All workers exited. Exiting... (0)
Jan 31 03:03:58 np0005603609 systemd[1]: libpod-b5089a5bf8cb5ff0e60165a380f4f6c5a8a2b8a9187c46375b6c7af2c0729557.scope: Deactivated successfully.
Jan 31 03:03:58 np0005603609 conmon[259135]: conmon b5089a5bf8cb5ff0e601 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b5089a5bf8cb5ff0e60165a380f4f6c5a8a2b8a9187c46375b6c7af2c0729557.scope/container/memory.events
Jan 31 03:03:58 np0005603609 podman[259245]: 2026-01-31 08:03:58.902781168 +0000 UTC m=+0.047350861 container died b5089a5bf8cb5ff0e60165a380f4f6c5a8a2b8a9187c46375b6c7af2c0729557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.926 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.929 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:58 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5089a5bf8cb5ff0e60165a380f4f6c5a8a2b8a9187c46375b6c7af2c0729557-userdata-shm.mount: Deactivated successfully.
Jan 31 03:03:58 np0005603609 systemd[1]: var-lib-containers-storage-overlay-b79143938ff47ca70448c7aca128eea83a1cda9c756ada859461ead1e134596e-merged.mount: Deactivated successfully.
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.944 221554 INFO nova.virt.libvirt.driver [-] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Instance destroyed successfully.#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.944 221554 DEBUG nova.objects.instance [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'resources' on Instance uuid dd1cfd57-90d1-4f96-b16e-96d64815af69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:03:58 np0005603609 podman[259245]: 2026-01-31 08:03:58.953061698 +0000 UTC m=+0.097631341 container cleanup b5089a5bf8cb5ff0e60165a380f4f6c5a8a2b8a9187c46375b6c7af2c0729557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.966 221554 DEBUG nova.virt.libvirt.vif [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:03:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1636985884',display_name='tempest-ServerDiskConfigTestJSON-server-1636985884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1636985884',id=95,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:03:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-chqeaza3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:03:55Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=dd1cfd57-90d1-4f96-b16e-96d64815af69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.967 221554 DEBUG nova.network.os_vif_util [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:03:58 np0005603609 systemd[1]: libpod-conmon-b5089a5bf8cb5ff0e60165a380f4f6c5a8a2b8a9187c46375b6c7af2c0729557.scope: Deactivated successfully.
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.967 221554 DEBUG nova.network.os_vif_util [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.969 221554 DEBUG os_vif [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.971 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.971 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0681aec4-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.973 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.974 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:58 np0005603609 nova_compute[221550]: 2026-01-31 08:03:58.977 221554 INFO os_vif [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62')#033[00m
Jan 31 03:03:59 np0005603609 podman[259285]: 2026-01-31 08:03:59.019563629 +0000 UTC m=+0.047462293 container remove b5089a5bf8cb5ff0e60165a380f4f6c5a8a2b8a9187c46375b6c7af2c0729557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 03:03:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:59.025 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e470597f-79f8-4612-8326-73404427321d]: (4, ('Sat Jan 31 08:03:58 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (b5089a5bf8cb5ff0e60165a380f4f6c5a8a2b8a9187c46375b6c7af2c0729557)\nb5089a5bf8cb5ff0e60165a380f4f6c5a8a2b8a9187c46375b6c7af2c0729557\nSat Jan 31 08:03:58 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (b5089a5bf8cb5ff0e60165a380f4f6c5a8a2b8a9187c46375b6c7af2c0729557)\nb5089a5bf8cb5ff0e60165a380f4f6c5a8a2b8a9187c46375b6c7af2c0729557\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:59.027 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c412b44e-3233-4dce-9e03-ed33ceafc611]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:59.028 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:59 np0005603609 nova_compute[221550]: 2026-01-31 08:03:59.030 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:59 np0005603609 kernel: tap121329c8-20: left promiscuous mode
Jan 31 03:03:59 np0005603609 nova_compute[221550]: 2026-01-31 08:03:59.037 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:59.039 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf8835b-ae53-49dc-aebb-8953731927ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:59.054 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[00e15c91-566c-4ab0-b037-37d59fe20afe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:59.055 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[94fc476a-371f-462c-a6b8-718a56b3132f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:59.070 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[93701c3d-065c-4cfe-a635-d97a8847fd34]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686475, 'reachable_time': 38164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259318, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:59 np0005603609 systemd[1]: run-netns-ovnmeta\x2d121329c8\x2d2359\x2d4e9d\x2d9f2b\x2d4932f8740470.mount: Deactivated successfully.
Jan 31 03:03:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:59.074 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:03:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:03:59.074 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[234329b5-e60a-40f0-afa3-bf58dea1cb49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:03:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:59.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:03:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e256 e256: 3 total, 3 up, 3 in
Jan 31 03:03:59 np0005603609 nova_compute[221550]: 2026-01-31 08:03:59.808 221554 DEBUG nova.compute.manager [req-984c4c57-1171-4043-973b-e4200819a4b1 req-c48de785-4968-472b-87a0-c1af7db6fb89 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-vif-unplugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:59 np0005603609 nova_compute[221550]: 2026-01-31 08:03:59.809 221554 DEBUG oslo_concurrency.lockutils [req-984c4c57-1171-4043-973b-e4200819a4b1 req-c48de785-4968-472b-87a0-c1af7db6fb89 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:59 np0005603609 nova_compute[221550]: 2026-01-31 08:03:59.809 221554 DEBUG oslo_concurrency.lockutils [req-984c4c57-1171-4043-973b-e4200819a4b1 req-c48de785-4968-472b-87a0-c1af7db6fb89 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:59 np0005603609 nova_compute[221550]: 2026-01-31 08:03:59.809 221554 DEBUG oslo_concurrency.lockutils [req-984c4c57-1171-4043-973b-e4200819a4b1 req-c48de785-4968-472b-87a0-c1af7db6fb89 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:59 np0005603609 nova_compute[221550]: 2026-01-31 08:03:59.809 221554 DEBUG nova.compute.manager [req-984c4c57-1171-4043-973b-e4200819a4b1 req-c48de785-4968-472b-87a0-c1af7db6fb89 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] No waiting events found dispatching network-vif-unplugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:03:59 np0005603609 nova_compute[221550]: 2026-01-31 08:03:59.809 221554 DEBUG nova.compute.manager [req-984c4c57-1171-4043-973b-e4200819a4b1 req-c48de785-4968-472b-87a0-c1af7db6fb89 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-vif-unplugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:03:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:03:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:59.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:00 np0005603609 nova_compute[221550]: 2026-01-31 08:04:00.462 221554 INFO nova.virt.libvirt.driver [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Deleting instance files /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69_del#033[00m
Jan 31 03:04:00 np0005603609 nova_compute[221550]: 2026-01-31 08:04:00.463 221554 INFO nova.virt.libvirt.driver [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Deletion of /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69_del complete#033[00m
Jan 31 03:04:00 np0005603609 nova_compute[221550]: 2026-01-31 08:04:00.609 221554 INFO nova.compute.manager [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Took 1.91 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:04:00 np0005603609 nova_compute[221550]: 2026-01-31 08:04:00.609 221554 DEBUG oslo.service.loopingcall [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:04:00 np0005603609 nova_compute[221550]: 2026-01-31 08:04:00.610 221554 DEBUG nova.compute.manager [-] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:04:00 np0005603609 nova_compute[221550]: 2026-01-31 08:04:00.610 221554 DEBUG nova.network.neutron [-] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:04:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:01.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:01.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.202 221554 INFO nova.compute.manager [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Rebuilding instance#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.320 221554 DEBUG nova.network.neutron [-] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.455 221554 DEBUG nova.compute.manager [req-7e26bcb8-7bbd-49e1-b547-840e17182172 req-7b3e8c98-affd-4860-b2d3-ac1b6180d43b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-vif-deleted-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.456 221554 INFO nova.compute.manager [req-7e26bcb8-7bbd-49e1-b547-840e17182172 req-7b3e8c98-affd-4860-b2d3-ac1b6180d43b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Neutron deleted interface 0681aec4-62fb-4ff7-9e0f-3038c32e48a2; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.456 221554 DEBUG nova.network.neutron [req-7e26bcb8-7bbd-49e1-b547-840e17182172 req-7b3e8c98-affd-4860-b2d3-ac1b6180d43b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.491 221554 INFO nova.compute.manager [-] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Took 1.88 seconds to deallocate network for instance.#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.561 221554 DEBUG nova.compute.manager [req-7e26bcb8-7bbd-49e1-b547-840e17182172 req-7b3e8c98-affd-4860-b2d3-ac1b6180d43b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Detach interface failed, port_id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2, reason: Instance dd1cfd57-90d1-4f96-b16e-96d64815af69 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.587 221554 DEBUG nova.objects.instance [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 79977ab8-f5fd-4e71-b86e-34bf6e623879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.789 221554 DEBUG nova.compute.manager [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.890 221554 DEBUG oslo_concurrency.lockutils [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.891 221554 DEBUG oslo_concurrency.lockutils [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.898 221554 DEBUG oslo_concurrency.lockutils [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.903 221554 DEBUG nova.compute.manager [req-926d11ee-3ac6-486d-b135-dacfc8b675b5 req-053a205a-8650-43f7-9fd9-e5498f44356f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.904 221554 DEBUG oslo_concurrency.lockutils [req-926d11ee-3ac6-486d-b135-dacfc8b675b5 req-053a205a-8650-43f7-9fd9-e5498f44356f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.904 221554 DEBUG oslo_concurrency.lockutils [req-926d11ee-3ac6-486d-b135-dacfc8b675b5 req-053a205a-8650-43f7-9fd9-e5498f44356f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.904 221554 DEBUG oslo_concurrency.lockutils [req-926d11ee-3ac6-486d-b135-dacfc8b675b5 req-053a205a-8650-43f7-9fd9-e5498f44356f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.905 221554 DEBUG nova.compute.manager [req-926d11ee-3ac6-486d-b135-dacfc8b675b5 req-053a205a-8650-43f7-9fd9-e5498f44356f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] No waiting events found dispatching network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:04:02 np0005603609 nova_compute[221550]: 2026-01-31 08:04:02.905 221554 WARNING nova.compute.manager [req-926d11ee-3ac6-486d-b135-dacfc8b675b5 req-053a205a-8650-43f7-9fd9-e5498f44356f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received unexpected event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:04:03 np0005603609 nova_compute[221550]: 2026-01-31 08:04:03.031 221554 DEBUG nova.objects.instance [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'pci_requests' on Instance uuid 79977ab8-f5fd-4e71-b86e-34bf6e623879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:03 np0005603609 nova_compute[221550]: 2026-01-31 08:04:03.122 221554 INFO nova.scheduler.client.report [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Deleted allocations for instance dd1cfd57-90d1-4f96-b16e-96d64815af69#033[00m
Jan 31 03:04:03 np0005603609 nova_compute[221550]: 2026-01-31 08:04:03.194 221554 DEBUG nova.objects.instance [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 79977ab8-f5fd-4e71-b86e-34bf6e623879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:03 np0005603609 nova_compute[221550]: 2026-01-31 08:04:03.223 221554 DEBUG nova.objects.instance [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'resources' on Instance uuid 79977ab8-f5fd-4e71-b86e-34bf6e623879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:03.539 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:03.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:03 np0005603609 nova_compute[221550]: 2026-01-31 08:04:03.615 221554 DEBUG nova.objects.instance [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'migration_context' on Instance uuid 79977ab8-f5fd-4e71-b86e-34bf6e623879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:03 np0005603609 nova_compute[221550]: 2026-01-31 08:04:03.797 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:03 np0005603609 nova_compute[221550]: 2026-01-31 08:04:03.973 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:03.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:04 np0005603609 nova_compute[221550]: 2026-01-31 08:04:04.215 221554 DEBUG oslo_concurrency.lockutils [None req-1058904c-2361-4479-849f-c54aad40055e 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:04 np0005603609 nova_compute[221550]: 2026-01-31 08:04:04.218 221554 DEBUG nova.objects.instance [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 03:04:04 np0005603609 nova_compute[221550]: 2026-01-31 08:04:04.220 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:04:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:05.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:04:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:05.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:04:06 np0005603609 kernel: tap216db5c7-79 (unregistering): left promiscuous mode
Jan 31 03:04:06 np0005603609 NetworkManager[49064]: <info>  [1769846646.5599] device (tap216db5c7-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:04:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:06Z|00362|binding|INFO|Releasing lport 216db5c7-79b0-4533-b75f-56027324e484 from this chassis (sb_readonly=0)
Jan 31 03:04:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:06Z|00363|binding|INFO|Setting lport 216db5c7-79b0-4533-b75f-56027324e484 down in Southbound
Jan 31 03:04:06 np0005603609 nova_compute[221550]: 2026-01-31 08:04:06.566 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:06Z|00364|binding|INFO|Removing iface tap216db5c7-79 ovn-installed in OVS
Jan 31 03:04:06 np0005603609 nova_compute[221550]: 2026-01-31 08:04:06.576 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:06 np0005603609 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000060.scope: Deactivated successfully.
Jan 31 03:04:06 np0005603609 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000060.scope: Consumed 14.599s CPU time.
Jan 31 03:04:06 np0005603609 systemd-machined[190912]: Machine qemu-45-instance-00000060 terminated.
Jan 31 03:04:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:06.653 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:ba:9c 10.100.0.9'], port_security=['fa:16:3e:a5:ba:9c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '79977ab8-f5fd-4e71-b86e-34bf6e623879', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2e80c7d1-3f05-4a2c-8ace-7be5723d956d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=216db5c7-79b0-4533-b75f-56027324e484) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:04:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:06.655 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 216db5c7-79b0-4533-b75f-56027324e484 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b unbound from our chassis#033[00m
Jan 31 03:04:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:06.657 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79cb2b81-3369-468a-8bf6-7e13d5df334b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:04:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:06.658 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2e24c538-b49e-4833-95b2-064f5c4ef1c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:06.658 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b namespace which is not needed anymore#033[00m
Jan 31 03:04:06 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[258827]: [NOTICE]   (258831) : haproxy version is 2.8.14-c23fe91
Jan 31 03:04:06 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[258827]: [NOTICE]   (258831) : path to executable is /usr/sbin/haproxy
Jan 31 03:04:06 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[258827]: [WARNING]  (258831) : Exiting Master process...
Jan 31 03:04:06 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[258827]: [ALERT]    (258831) : Current worker (258833) exited with code 143 (Terminated)
Jan 31 03:04:06 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[258827]: [WARNING]  (258831) : All workers exited. Exiting... (0)
Jan 31 03:04:06 np0005603609 systemd[1]: libpod-91d54e58c78eb52d2b1534f64710b5f3961e313cee6fd008b7ebac444319f242.scope: Deactivated successfully.
Jan 31 03:04:06 np0005603609 podman[259345]: 2026-01-31 08:04:06.770415094 +0000 UTC m=+0.047230818 container died 91d54e58c78eb52d2b1534f64710b5f3961e313cee6fd008b7ebac444319f242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:04:06 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-91d54e58c78eb52d2b1534f64710b5f3961e313cee6fd008b7ebac444319f242-userdata-shm.mount: Deactivated successfully.
Jan 31 03:04:06 np0005603609 systemd[1]: var-lib-containers-storage-overlay-957c830655a4fcb1ba7414af3219f51b81324877776381b2e0799315c36fa993-merged.mount: Deactivated successfully.
Jan 31 03:04:06 np0005603609 podman[259345]: 2026-01-31 08:04:06.824057375 +0000 UTC m=+0.100873099 container cleanup 91d54e58c78eb52d2b1534f64710b5f3961e313cee6fd008b7ebac444319f242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 03:04:06 np0005603609 systemd[1]: libpod-conmon-91d54e58c78eb52d2b1534f64710b5f3961e313cee6fd008b7ebac444319f242.scope: Deactivated successfully.
Jan 31 03:04:06 np0005603609 podman[259387]: 2026-01-31 08:04:06.904734008 +0000 UTC m=+0.060088408 container remove 91d54e58c78eb52d2b1534f64710b5f3961e313cee6fd008b7ebac444319f242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:04:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:06.909 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c05f126c-d66d-4fef-8bf6-f1a5e3c53ea7]: (4, ('Sat Jan 31 08:04:06 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b (91d54e58c78eb52d2b1534f64710b5f3961e313cee6fd008b7ebac444319f242)\n91d54e58c78eb52d2b1534f64710b5f3961e313cee6fd008b7ebac444319f242\nSat Jan 31 08:04:06 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b (91d54e58c78eb52d2b1534f64710b5f3961e313cee6fd008b7ebac444319f242)\n91d54e58c78eb52d2b1534f64710b5f3961e313cee6fd008b7ebac444319f242\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:06.910 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[45d927da-e429-4fdd-8531-22cc128c716f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:06.911 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:06 np0005603609 nova_compute[221550]: 2026-01-31 08:04:06.913 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:06 np0005603609 kernel: tap79cb2b81-30: left promiscuous mode
Jan 31 03:04:06 np0005603609 nova_compute[221550]: 2026-01-31 08:04:06.920 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:06 np0005603609 nova_compute[221550]: 2026-01-31 08:04:06.921 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:06.923 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[702b0ac1-ce0e-41ed-9656-d07280075d50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:06.945 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d04adc43-c137-407c-a5f9-891842658d2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:06.947 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[befbdb0b-2446-4b50-82cd-576c2f5b3025]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:06.959 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[32d5e423-9fe6-4797-8119-bbec225cf62b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685528, 'reachable_time': 25640, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259406, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:06 np0005603609 systemd[1]: run-netns-ovnmeta\x2d79cb2b81\x2d3369\x2d468a\x2d8bf6\x2d7e13d5df334b.mount: Deactivated successfully.
Jan 31 03:04:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:06.961 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:04:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:06.961 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[40fbe869-4aae-45c4-8b1b-fe60752e928e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:07 np0005603609 nova_compute[221550]: 2026-01-31 08:04:07.233 221554 INFO nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:04:07 np0005603609 nova_compute[221550]: 2026-01-31 08:04:07.237 221554 INFO nova.virt.libvirt.driver [-] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Instance destroyed successfully.#033[00m
Jan 31 03:04:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:07.497 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:07.498 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:07.498 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:07 np0005603609 nova_compute[221550]: 2026-01-31 08:04:07.523 221554 INFO nova.compute.manager [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Detaching volume 008cf290-7941-4c64-a361-689cb2af7bcc#033[00m
Jan 31 03:04:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:07.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:07 np0005603609 nova_compute[221550]: 2026-01-31 08:04:07.708 221554 INFO nova.virt.block_device [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Attempting to driver detach volume 008cf290-7941-4c64-a361-689cb2af7bcc from mountpoint /dev/vdb#033[00m
Jan 31 03:04:07 np0005603609 nova_compute[221550]: 2026-01-31 08:04:07.714 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Attempting to detach device vdb from instance 79977ab8-f5fd-4e71-b86e-34bf6e623879 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:04:07 np0005603609 nova_compute[221550]: 2026-01-31 08:04:07.715 221554 DEBUG nova.virt.libvirt.guest [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:04:07 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:04:07 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-008cf290-7941-4c64-a361-689cb2af7bcc">
Jan 31 03:04:07 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:04:07 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:04:07 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:04:07 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:04:07 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:04:07 np0005603609 nova_compute[221550]:  <serial>008cf290-7941-4c64-a361-689cb2af7bcc</serial>
Jan 31 03:04:07 np0005603609 nova_compute[221550]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:04:07 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:04:07 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:04:07 np0005603609 nova_compute[221550]: 2026-01-31 08:04:07.728 221554 INFO nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully detached device vdb from instance 79977ab8-f5fd-4e71-b86e-34bf6e623879 from the persistent domain config.#033[00m
Jan 31 03:04:07 np0005603609 nova_compute[221550]: 2026-01-31 08:04:07.858 221554 DEBUG nova.compute.manager [req-8d42aad0-1024-400f-a71c-6e60dd7ac50e req-10ec8b5f-aac2-4320-af40-dc38b64798d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received event network-vif-unplugged-216db5c7-79b0-4533-b75f-56027324e484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:07 np0005603609 nova_compute[221550]: 2026-01-31 08:04:07.858 221554 DEBUG oslo_concurrency.lockutils [req-8d42aad0-1024-400f-a71c-6e60dd7ac50e req-10ec8b5f-aac2-4320-af40-dc38b64798d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:07 np0005603609 nova_compute[221550]: 2026-01-31 08:04:07.859 221554 DEBUG oslo_concurrency.lockutils [req-8d42aad0-1024-400f-a71c-6e60dd7ac50e req-10ec8b5f-aac2-4320-af40-dc38b64798d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:07 np0005603609 nova_compute[221550]: 2026-01-31 08:04:07.859 221554 DEBUG oslo_concurrency.lockutils [req-8d42aad0-1024-400f-a71c-6e60dd7ac50e req-10ec8b5f-aac2-4320-af40-dc38b64798d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:07 np0005603609 nova_compute[221550]: 2026-01-31 08:04:07.859 221554 DEBUG nova.compute.manager [req-8d42aad0-1024-400f-a71c-6e60dd7ac50e req-10ec8b5f-aac2-4320-af40-dc38b64798d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] No waiting events found dispatching network-vif-unplugged-216db5c7-79b0-4533-b75f-56027324e484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:04:07 np0005603609 nova_compute[221550]: 2026-01-31 08:04:07.859 221554 WARNING nova.compute.manager [req-8d42aad0-1024-400f-a71c-6e60dd7ac50e req-10ec8b5f-aac2-4320-af40-dc38b64798d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received unexpected event network-vif-unplugged-216db5c7-79b0-4533-b75f-56027324e484 for instance with vm_state active and task_state rebuilding.#033[00m
Jan 31 03:04:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:07.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:08 np0005603609 nova_compute[221550]: 2026-01-31 08:04:08.162 221554 INFO nova.virt.libvirt.driver [-] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Instance destroyed successfully.#033[00m
Jan 31 03:04:08 np0005603609 nova_compute[221550]: 2026-01-31 08:04:08.164 221554 DEBUG nova.virt.libvirt.vif [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:03:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-894827809',display_name='tempest-ServerActionsTestOtherA-server-1043045287',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-894827809',id=96,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGSsmxs9wjtIdeLS6+TDVPehBXibW1pg/pUNrYaSqukTWGS7m/mx1RvtoAfp2fIHUhH+opIu32b284sDYd8t4TmsPDo7saBhh7IBr6K+Eh6cl1W1MtRphK/nkrUT/jTSkQ==',key_name='tempest-keypair-523704801',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:03:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-xyg4dkcc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='31043e345f6b48b585fb7b8ab7304764',uuid=79977ab8-f5fd-4e71-b86e-34bf6e623879,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:04:08 np0005603609 nova_compute[221550]: 2026-01-31 08:04:08.165 221554 DEBUG nova.network.os_vif_util [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:04:08 np0005603609 nova_compute[221550]: 2026-01-31 08:04:08.167 221554 DEBUG nova.network.os_vif_util [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ba:9c,bridge_name='br-int',has_traffic_filtering=True,id=216db5c7-79b0-4533-b75f-56027324e484,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap216db5c7-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:04:08 np0005603609 nova_compute[221550]: 2026-01-31 08:04:08.168 221554 DEBUG os_vif [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ba:9c,bridge_name='br-int',has_traffic_filtering=True,id=216db5c7-79b0-4533-b75f-56027324e484,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap216db5c7-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:04:08 np0005603609 nova_compute[221550]: 2026-01-31 08:04:08.170 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:08 np0005603609 nova_compute[221550]: 2026-01-31 08:04:08.170 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap216db5c7-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:08 np0005603609 nova_compute[221550]: 2026-01-31 08:04:08.174 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:04:08 np0005603609 nova_compute[221550]: 2026-01-31 08:04:08.176 221554 INFO os_vif [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ba:9c,bridge_name='br-int',has_traffic_filtering=True,id=216db5c7-79b0-4533-b75f-56027324e484,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap216db5c7-79')#033[00m
Jan 31 03:04:08 np0005603609 nova_compute[221550]: 2026-01-31 08:04:08.799 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.131 221554 INFO nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Deleting instance files /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879_del#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.131 221554 INFO nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Deletion of /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879_del complete#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.320 221554 INFO nova.virt.block_device [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Booting with volume 008cf290-7941-4c64-a361-689cb2af7bcc at /dev/vdb#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.577 221554 DEBUG os_brick.utils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.578 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.586 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.586 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[138e7a98-e4b4-4b29-857d-c6bac2b74640]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.587 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.592 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.592 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[71b70a18-85aa-4607-a73f-82089c83b3bd]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.594 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.600 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.600 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[58b992e9-4c8d-413c-a887-bf799b03061b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.601 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[8e42dcd9-b2c8-4054-b220-00ea0ba105de]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.601 221554 DEBUG oslo_concurrency.processutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:09.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.618 221554 DEBUG oslo_concurrency.processutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.620 221554 DEBUG os_brick.initiator.connectors.lightos [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.621 221554 DEBUG os_brick.initiator.connectors.lightos [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.621 221554 DEBUG os_brick.initiator.connectors.lightos [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.621 221554 DEBUG os_brick.utils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] <== get_connector_properties: return (43ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:04:09 np0005603609 nova_compute[221550]: 2026-01-31 08:04:09.621 221554 DEBUG nova.virt.block_device [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Updating existing volume attachment record: 78c846b9-b07c-49d7-921b-b840c2849b01 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:04:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:04:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:09.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:04:10 np0005603609 nova_compute[221550]: 2026-01-31 08:04:10.044 221554 DEBUG nova.compute.manager [req-1c20115d-f9e8-4b2b-935a-28df3b0f47ad req-23183a72-5cb2-4049-9a2d-a6b523d8463f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received event network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:10 np0005603609 nova_compute[221550]: 2026-01-31 08:04:10.045 221554 DEBUG oslo_concurrency.lockutils [req-1c20115d-f9e8-4b2b-935a-28df3b0f47ad req-23183a72-5cb2-4049-9a2d-a6b523d8463f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:10 np0005603609 nova_compute[221550]: 2026-01-31 08:04:10.045 221554 DEBUG oslo_concurrency.lockutils [req-1c20115d-f9e8-4b2b-935a-28df3b0f47ad req-23183a72-5cb2-4049-9a2d-a6b523d8463f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:10 np0005603609 nova_compute[221550]: 2026-01-31 08:04:10.045 221554 DEBUG oslo_concurrency.lockutils [req-1c20115d-f9e8-4b2b-935a-28df3b0f47ad req-23183a72-5cb2-4049-9a2d-a6b523d8463f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:10 np0005603609 nova_compute[221550]: 2026-01-31 08:04:10.045 221554 DEBUG nova.compute.manager [req-1c20115d-f9e8-4b2b-935a-28df3b0f47ad req-23183a72-5cb2-4049-9a2d-a6b523d8463f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] No waiting events found dispatching network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:04:10 np0005603609 nova_compute[221550]: 2026-01-31 08:04:10.046 221554 WARNING nova.compute.manager [req-1c20115d-f9e8-4b2b-935a-28df3b0f47ad req-23183a72-5cb2-4049-9a2d-a6b523d8463f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received unexpected event network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 for instance with vm_state active and task_state rebuild_block_device_mapping.#033[00m
Jan 31 03:04:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:11.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:11 np0005603609 nova_compute[221550]: 2026-01-31 08:04:11.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:04:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:11.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:12 np0005603609 nova_compute[221550]: 2026-01-31 08:04:12.815 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:04:12 np0005603609 nova_compute[221550]: 2026-01-31 08:04:12.816 221554 INFO nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Creating image(s)#033[00m
Jan 31 03:04:12 np0005603609 nova_compute[221550]: 2026-01-31 08:04:12.841 221554 DEBUG nova.storage.rbd_utils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:04:12 np0005603609 nova_compute[221550]: 2026-01-31 08:04:12.864 221554 DEBUG nova.storage.rbd_utils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:04:12 np0005603609 nova_compute[221550]: 2026-01-31 08:04:12.897 221554 DEBUG nova.storage.rbd_utils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:04:12 np0005603609 nova_compute[221550]: 2026-01-31 08:04:12.902 221554 DEBUG oslo_concurrency.processutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:12 np0005603609 nova_compute[221550]: 2026-01-31 08:04:12.961 221554 DEBUG oslo_concurrency.processutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:12 np0005603609 nova_compute[221550]: 2026-01-31 08:04:12.962 221554 DEBUG oslo_concurrency.lockutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:12 np0005603609 nova_compute[221550]: 2026-01-31 08:04:12.962 221554 DEBUG oslo_concurrency.lockutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:12 np0005603609 nova_compute[221550]: 2026-01-31 08:04:12.962 221554 DEBUG oslo_concurrency.lockutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:13 np0005603609 nova_compute[221550]: 2026-01-31 08:04:13.007 221554 DEBUG nova.storage.rbd_utils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:04:13 np0005603609 nova_compute[221550]: 2026-01-31 08:04:13.010 221554 DEBUG oslo_concurrency.processutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:13 np0005603609 nova_compute[221550]: 2026-01-31 08:04:13.208 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:13.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:13 np0005603609 nova_compute[221550]: 2026-01-31 08:04:13.802 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:13 np0005603609 nova_compute[221550]: 2026-01-31 08:04:13.804 221554 DEBUG oslo_concurrency.processutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.795s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:13 np0005603609 nova_compute[221550]: 2026-01-31 08:04:13.881 221554 DEBUG nova.storage.rbd_utils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] resizing rbd image 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:04:13 np0005603609 nova_compute[221550]: 2026-01-31 08:04:13.945 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846638.9431772, dd1cfd57-90d1-4f96-b16e-96d64815af69 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:04:13 np0005603609 nova_compute[221550]: 2026-01-31 08:04:13.945 221554 INFO nova.compute.manager [-] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:04:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:04:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:04:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:13.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.250 221554 DEBUG nova.compute.manager [None req-8b1258dc-7919-4fdf-8cc0-b45ba48381ff - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.529 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.530 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Ensure instance console log exists: /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.530 221554 DEBUG oslo_concurrency.lockutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.530 221554 DEBUG oslo_concurrency.lockutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.531 221554 DEBUG oslo_concurrency.lockutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.534 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Start _get_guest_xml network_info=[{"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '78c846b9-b07c-49d7-921b-b840c2849b01', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-008cf290-7941-4c64-a361-689cb2af7bcc', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '008cf290-7941-4c64-a361-689cb2af7bcc', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '79977ab8-f5fd-4e71-b86e-34bf6e623879', 'attached_at': '', 'detached_at': '', 'volume_id': '008cf290-7941-4c64-a361-689cb2af7bcc', 'serial': '008cf290-7941-4c64-a361-689cb2af7bcc'}, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'guest_format': None, 'mount_device': '/dev/vdb', 'boot_index': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.539 221554 WARNING nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.547 221554 DEBUG nova.virt.libvirt.host [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.547 221554 DEBUG nova.virt.libvirt.host [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.551 221554 DEBUG nova.virt.libvirt.host [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.551 221554 DEBUG nova.virt.libvirt.host [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.552 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.553 221554 DEBUG nova.virt.hardware [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.553 221554 DEBUG nova.virt.hardware [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.553 221554 DEBUG nova.virt.hardware [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.554 221554 DEBUG nova.virt.hardware [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.554 221554 DEBUG nova.virt.hardware [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.554 221554 DEBUG nova.virt.hardware [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.555 221554 DEBUG nova.virt.hardware [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.556 221554 DEBUG nova.virt.hardware [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.556 221554 DEBUG nova.virt.hardware [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.556 221554 DEBUG nova.virt.hardware [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.557 221554 DEBUG nova.virt.hardware [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.557 221554 DEBUG nova.objects.instance [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 79977ab8-f5fd-4e71-b86e-34bf6e623879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:14 np0005603609 nova_compute[221550]: 2026-01-31 08:04:14.581 221554 DEBUG oslo_concurrency.processutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:04:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1632053411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:04:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:04:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:04:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:04:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:15.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:15 np0005603609 nova_compute[221550]: 2026-01-31 08:04:15.630 221554 DEBUG oslo_concurrency.processutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:15 np0005603609 nova_compute[221550]: 2026-01-31 08:04:15.665 221554 DEBUG nova.storage.rbd_utils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:04:15 np0005603609 nova_compute[221550]: 2026-01-31 08:04:15.669 221554 DEBUG oslo_concurrency.processutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:15 np0005603609 nova_compute[221550]: 2026-01-31 08:04:15.689 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:04:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:04:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:15.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:04:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:04:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1474416236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.148 221554 DEBUG oslo_concurrency.processutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.535 221554 DEBUG nova.virt.libvirt.vif [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:03:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-894827809',display_name='tempest-ServerActionsTestOtherA-server-1043045287',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-894827809',id=96,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGSsmxs9wjtIdeLS6+TDVPehBXibW1pg/pUNrYaSqukTWGS7m/mx1RvtoAfp2fIHUhH+opIu32b284sDYd8t4TmsPDo7saBhh7IBr6K+Eh6cl1W1MtRphK/nkrUT/jTSkQ==',key_name='tempest-keypair-523704801',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:03:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-xyg4dkcc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='31043e345f6b48b585fb7b8ab7304764',uuid=79977ab8-f5fd-4e71-b86e-34bf6e623879,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.535 221554 DEBUG nova.network.os_vif_util [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.536 221554 DEBUG nova.network.os_vif_util [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ba:9c,bridge_name='br-int',has_traffic_filtering=True,id=216db5c7-79b0-4533-b75f-56027324e484,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap216db5c7-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.538 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  <uuid>79977ab8-f5fd-4e71-b86e-34bf6e623879</uuid>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  <name>instance-00000060</name>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerActionsTestOtherA-server-1043045287</nova:name>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:04:14</nova:creationTime>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <nova:user uuid="31043e345f6b48b585fb7b8ab7304764">tempest-ServerActionsTestOtherA-527878807-project-member</nova:user>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <nova:project uuid="d352316ff6534075952e2d0c28061b09">tempest-ServerActionsTestOtherA-527878807</nova:project>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="40cf2ff3-f7ff-4843-b4ab-b7dcc843006f"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <nova:port uuid="216db5c7-79b0-4533-b75f-56027324e484">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <entry name="serial">79977ab8-f5fd-4e71-b86e-34bf6e623879</entry>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <entry name="uuid">79977ab8-f5fd-4e71-b86e-34bf6e623879</entry>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/79977ab8-f5fd-4e71-b86e-34bf6e623879_disk">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/79977ab8-f5fd-4e71-b86e-34bf6e623879_disk.config">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="volumes/volume-008cf290-7941-4c64-a361-689cb2af7bcc">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <target dev="vdb" bus="virtio"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <serial>008cf290-7941-4c64-a361-689cb2af7bcc</serial>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:a5:ba:9c"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <target dev="tap216db5c7-79"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879/console.log" append="off"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:04:16 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:04:16 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:04:16 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:04:16 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.538 221554 DEBUG nova.virt.libvirt.vif [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:03:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-894827809',display_name='tempest-ServerActionsTestOtherA-server-1043045287',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-894827809',id=96,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGSsmxs9wjtIdeLS6+TDVPehBXibW1pg/pUNrYaSqukTWGS7m/mx1RvtoAfp2fIHUhH+opIu32b284sDYd8t4TmsPDo7saBhh7IBr6K+Eh6cl1W1MtRphK/nkrUT/jTSkQ==',key_name='tempest-keypair-523704801',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:03:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-xyg4dkcc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='31043e345f6b48b585fb7b8ab7304764',uuid=79977ab8-f5fd-4e71-b86e-34bf6e623879,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.538 221554 DEBUG nova.network.os_vif_util [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.539 221554 DEBUG nova.network.os_vif_util [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ba:9c,bridge_name='br-int',has_traffic_filtering=True,id=216db5c7-79b0-4533-b75f-56027324e484,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap216db5c7-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.539 221554 DEBUG os_vif [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ba:9c,bridge_name='br-int',has_traffic_filtering=True,id=216db5c7-79b0-4533-b75f-56027324e484,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap216db5c7-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.539 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.540 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.540 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.542 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.543 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap216db5c7-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.543 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap216db5c7-79, col_values=(('external_ids', {'iface-id': '216db5c7-79b0-4533-b75f-56027324e484', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:ba:9c', 'vm-uuid': '79977ab8-f5fd-4e71-b86e-34bf6e623879'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:16 np0005603609 NetworkManager[49064]: <info>  [1769846656.5463] manager: (tap216db5c7-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/180)
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.545 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.550 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.551 221554 INFO os_vif [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ba:9c,bridge_name='br-int',has_traffic_filtering=True,id=216db5c7-79b0-4533-b75f-56027324e484,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap216db5c7-79')#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.687 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.688 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.688 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.688 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No VIF found with MAC fa:16:3e:a5:ba:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.688 221554 INFO nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Using config drive#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.711 221554 DEBUG nova.storage.rbd_utils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.781 221554 DEBUG nova.objects.instance [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 79977ab8-f5fd-4e71-b86e-34bf6e623879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:16 np0005603609 nova_compute[221550]: 2026-01-31 08:04:16.920 221554 DEBUG nova.objects.instance [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'keypairs' on Instance uuid 79977ab8-f5fd-4e71-b86e-34bf6e623879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:04:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:17.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:04:17 np0005603609 nova_compute[221550]: 2026-01-31 08:04:17.737 221554 INFO nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Creating config drive at /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879/disk.config#033[00m
Jan 31 03:04:17 np0005603609 nova_compute[221550]: 2026-01-31 08:04:17.741 221554 DEBUG oslo_concurrency.processutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0hkqhd7i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:17 np0005603609 nova_compute[221550]: 2026-01-31 08:04:17.864 221554 DEBUG oslo_concurrency.processutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0hkqhd7i" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:17 np0005603609 nova_compute[221550]: 2026-01-31 08:04:17.896 221554 DEBUG nova.storage.rbd_utils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:04:17 np0005603609 nova_compute[221550]: 2026-01-31 08:04:17.902 221554 DEBUG oslo_concurrency.processutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879/disk.config 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:04:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:18.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.062 221554 DEBUG oslo_concurrency.processutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879/disk.config 79977ab8-f5fd-4e71-b86e-34bf6e623879_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.063 221554 INFO nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Deleting local config drive /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879/disk.config because it was imported into RBD.#033[00m
Jan 31 03:04:18 np0005603609 kernel: tap216db5c7-79: entered promiscuous mode
Jan 31 03:04:18 np0005603609 NetworkManager[49064]: <info>  [1769846658.0988] manager: (tap216db5c7-79): new Tun device (/org/freedesktop/NetworkManager/Devices/181)
Jan 31 03:04:18 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:18Z|00365|binding|INFO|Claiming lport 216db5c7-79b0-4533-b75f-56027324e484 for this chassis.
Jan 31 03:04:18 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:18Z|00366|binding|INFO|216db5c7-79b0-4533-b75f-56027324e484: Claiming fa:16:3e:a5:ba:9c 10.100.0.9
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.100 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:18 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:18Z|00367|binding|INFO|Setting lport 216db5c7-79b0-4533-b75f-56027324e484 ovn-installed in OVS
Jan 31 03:04:18 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:18Z|00368|binding|INFO|Setting lport 216db5c7-79b0-4533-b75f-56027324e484 up in Southbound
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.110 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:ba:9c 10.100.0.9'], port_security=['fa:16:3e:a5:ba:9c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '79977ab8-f5fd-4e71-b86e-34bf6e623879', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2e80c7d1-3f05-4a2c-8ace-7be5723d956d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=216db5c7-79b0-4533-b75f-56027324e484) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.112 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 216db5c7-79b0-4533-b75f-56027324e484 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b bound to our chassis#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.111 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.113 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79cb2b81-3369-468a-8bf6-7e13d5df334b#033[00m
Jan 31 03:04:18 np0005603609 systemd-udevd[259869]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.122 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cd594766-70c6-4fd8-b783-0f410c705271]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.123 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79cb2b81-31 in ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:04:18 np0005603609 systemd-machined[190912]: New machine qemu-47-instance-00000060.
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.125 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79cb2b81-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.125 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[33466ffb-50cd-478b-a5f3-e4dc4b13cf2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.126 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9f327423-413a-493d-bf17-b14f8d5d2a43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:18 np0005603609 NetworkManager[49064]: <info>  [1769846658.1316] device (tap216db5c7-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:04:18 np0005603609 systemd[1]: Started Virtual Machine qemu-47-instance-00000060.
Jan 31 03:04:18 np0005603609 NetworkManager[49064]: <info>  [1769846658.1323] device (tap216db5c7-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.133 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[3bd362f1-7a37-43a6-9a2f-ea1c33f1415f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.144 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6011c513-a4f0-43bb-a646-df45d2e49bcc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.164 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[96d90882-f3ec-42a7-a511-1ce6ea395579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:18 np0005603609 NetworkManager[49064]: <info>  [1769846658.1705] manager: (tap79cb2b81-30): new Veth device (/org/freedesktop/NetworkManager/Devices/182)
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.169 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9473a4-2e4e-4949-a48e-6b91bfa26e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:18 np0005603609 systemd-udevd[259872]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.190 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[31951acc-11d4-470c-ad49-dfdfa6a47492]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.193 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[df974e57-2a31-4e6b-a6c0-a9d33bb06094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:18 np0005603609 NetworkManager[49064]: <info>  [1769846658.2085] device (tap79cb2b81-30): carrier: link connected
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.211 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[4297fe1d-8826-463d-83b1-640815f0e2c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.223 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[629e5f2a-d23a-4714-9791-b17b3c58b5e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689790, 'reachable_time': 39077, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259901, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.232 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[829a3658-73a9-40b2-829b-d163a3536d55]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:12e3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689790, 'tstamp': 689790}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259902, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.245 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7c186445-e012-4c77-8ac2-7559f77c0927]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689790, 'reachable_time': 39077, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259903, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.266 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e4c579-e931-4026-86d7-1edbfa0f2c6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.304 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d97b5054-160a-4244-996f-b646f8d42c6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.306 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.306 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.306 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79cb2b81-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.308 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:18 np0005603609 kernel: tap79cb2b81-30: entered promiscuous mode
Jan 31 03:04:18 np0005603609 NetworkManager[49064]: <info>  [1769846658.3094] manager: (tap79cb2b81-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.310 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.311 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79cb2b81-30, col_values=(('external_ids', {'iface-id': '9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.311 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:18 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:18Z|00369|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=0)
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.318 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.319 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.319 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.320 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e52fd02b-3351-44cb-be32-de7dfd61eb7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.320 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:04:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:18.321 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'env', 'PROCESS_TAG=haproxy-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79cb2b81-3369-468a-8bf6-7e13d5df334b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:04:18 np0005603609 podman[259935]: 2026-01-31 08:04:18.627919611 +0000 UTC m=+0.052257919 container create daed5af8eeeefaf80ab8cf843be9e351f958085f7bd7d8b09d514626be13c22d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:04:18 np0005603609 systemd[1]: Started libpod-conmon-daed5af8eeeefaf80ab8cf843be9e351f958085f7bd7d8b09d514626be13c22d.scope.
Jan 31 03:04:18 np0005603609 podman[259935]: 2026-01-31 08:04:18.592817776 +0000 UTC m=+0.017156094 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:04:18 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:04:18 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1296cb12e8f7049e4e41e6fe1b6f9ca4beb136c5b9f69ccbf6d17e8c93572a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:04:18 np0005603609 podman[259935]: 2026-01-31 08:04:18.726415663 +0000 UTC m=+0.150753991 container init daed5af8eeeefaf80ab8cf843be9e351f958085f7bd7d8b09d514626be13c22d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:04:18 np0005603609 podman[259935]: 2026-01-31 08:04:18.73337012 +0000 UTC m=+0.157708428 container start daed5af8eeeefaf80ab8cf843be9e351f958085f7bd7d8b09d514626be13c22d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.792 221554 DEBUG nova.compute.manager [req-4cfa3b74-38cd-45da-b070-1b7b16ae077b req-c7165f26-8d96-4699-ba09-5e12a553f6d3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received event network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.793 221554 DEBUG oslo_concurrency.lockutils [req-4cfa3b74-38cd-45da-b070-1b7b16ae077b req-c7165f26-8d96-4699-ba09-5e12a553f6d3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.793 221554 DEBUG oslo_concurrency.lockutils [req-4cfa3b74-38cd-45da-b070-1b7b16ae077b req-c7165f26-8d96-4699-ba09-5e12a553f6d3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.793 221554 DEBUG oslo_concurrency.lockutils [req-4cfa3b74-38cd-45da-b070-1b7b16ae077b req-c7165f26-8d96-4699-ba09-5e12a553f6d3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.793 221554 DEBUG nova.compute.manager [req-4cfa3b74-38cd-45da-b070-1b7b16ae077b req-c7165f26-8d96-4699-ba09-5e12a553f6d3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] No waiting events found dispatching network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:04:18 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[259968]: [NOTICE]   (259986) : New worker (259992) forked
Jan 31 03:04:18 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[259968]: [NOTICE]   (259986) : Loading success.
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.793 221554 WARNING nova.compute.manager [req-4cfa3b74-38cd-45da-b070-1b7b16ae077b req-c7165f26-8d96-4699-ba09-5e12a553f6d3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received unexpected event network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.842 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.959 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 79977ab8-f5fd-4e71-b86e-34bf6e623879 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.959 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846658.9587493, 79977ab8-f5fd-4e71-b86e-34bf6e623879 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.959 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.961 221554 DEBUG nova.compute.manager [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.962 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.964 221554 INFO nova.virt.libvirt.driver [-] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Instance spawned successfully.#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.964 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.988 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.991 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.992 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.992 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.992 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.993 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.993 221554 DEBUG nova.virt.libvirt.driver [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:04:18 np0005603609 nova_compute[221550]: 2026-01-31 08:04:18.997 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:04:19 np0005603609 nova_compute[221550]: 2026-01-31 08:04:19.051 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 03:04:19 np0005603609 nova_compute[221550]: 2026-01-31 08:04:19.051 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846658.9600282, 79977ab8-f5fd-4e71-b86e-34bf6e623879 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:04:19 np0005603609 nova_compute[221550]: 2026-01-31 08:04:19.051 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] VM Started (Lifecycle Event)#033[00m
Jan 31 03:04:19 np0005603609 nova_compute[221550]: 2026-01-31 08:04:19.081 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:04:19 np0005603609 nova_compute[221550]: 2026-01-31 08:04:19.084 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:04:19 np0005603609 nova_compute[221550]: 2026-01-31 08:04:19.096 221554 DEBUG nova.compute.manager [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:04:19 np0005603609 nova_compute[221550]: 2026-01-31 08:04:19.133 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 03:04:19 np0005603609 nova_compute[221550]: 2026-01-31 08:04:19.171 221554 DEBUG oslo_concurrency.lockutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:04:19 np0005603609 nova_compute[221550]: 2026-01-31 08:04:19.171 221554 DEBUG oslo_concurrency.lockutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:04:19 np0005603609 nova_compute[221550]: 2026-01-31 08:04:19.171 221554 DEBUG nova.objects.instance [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 03:04:19 np0005603609 nova_compute[221550]: 2026-01-31 08:04:19.255 221554 DEBUG oslo_concurrency.lockutils [None req-190fe4a7-77df-4017-8c65-fb6fe68e41da 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:04:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:04:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2681368728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:04:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:19.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:20.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:20 np0005603609 nova_compute[221550]: 2026-01-31 08:04:20.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:04:20 np0005603609 nova_compute[221550]: 2026-01-31 08:04:20.936 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:04:20 np0005603609 nova_compute[221550]: 2026-01-31 08:04:20.936 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:04:20 np0005603609 nova_compute[221550]: 2026-01-31 08:04:20.936 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:04:20 np0005603609 nova_compute[221550]: 2026-01-31 08:04:20.937 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 03:04:20 np0005603609 nova_compute[221550]: 2026-01-31 08:04:20.937 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:04:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:04:21 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3994239849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:04:21 np0005603609 nova_compute[221550]: 2026-01-31 08:04:21.344 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:04:21 np0005603609 nova_compute[221550]: 2026-01-31 08:04:21.485 221554 DEBUG nova.compute.manager [req-144c900d-2663-4add-9437-a7bd6c7578db req-78d4dd3a-282c-4601-b024-2e61e1db7641 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received event network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:04:21 np0005603609 nova_compute[221550]: 2026-01-31 08:04:21.486 221554 DEBUG oslo_concurrency.lockutils [req-144c900d-2663-4add-9437-a7bd6c7578db req-78d4dd3a-282c-4601-b024-2e61e1db7641 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:04:21 np0005603609 nova_compute[221550]: 2026-01-31 08:04:21.486 221554 DEBUG oslo_concurrency.lockutils [req-144c900d-2663-4add-9437-a7bd6c7578db req-78d4dd3a-282c-4601-b024-2e61e1db7641 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:04:21 np0005603609 nova_compute[221550]: 2026-01-31 08:04:21.486 221554 DEBUG oslo_concurrency.lockutils [req-144c900d-2663-4add-9437-a7bd6c7578db req-78d4dd3a-282c-4601-b024-2e61e1db7641 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:04:21 np0005603609 nova_compute[221550]: 2026-01-31 08:04:21.486 221554 DEBUG nova.compute.manager [req-144c900d-2663-4add-9437-a7bd6c7578db req-78d4dd3a-282c-4601-b024-2e61e1db7641 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] No waiting events found dispatching network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:04:21 np0005603609 nova_compute[221550]: 2026-01-31 08:04:21.487 221554 WARNING nova.compute.manager [req-144c900d-2663-4add-9437-a7bd6c7578db req-78d4dd3a-282c-4601-b024-2e61e1db7641 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received unexpected event network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 for instance with vm_state active and task_state None.
Jan 31 03:04:21 np0005603609 nova_compute[221550]: 2026-01-31 08:04:21.548 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:21 np0005603609 nova_compute[221550]: 2026-01-31 08:04:21.595 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:04:21 np0005603609 nova_compute[221550]: 2026-01-31 08:04:21.595 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:04:21 np0005603609 nova_compute[221550]: 2026-01-31 08:04:21.595 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:04:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:04:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:21.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:04:21 np0005603609 nova_compute[221550]: 2026-01-31 08:04:21.710 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:04:21 np0005603609 nova_compute[221550]: 2026-01-31 08:04:21.711 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4414MB free_disk=20.834457397460938GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 03:04:21 np0005603609 nova_compute[221550]: 2026-01-31 08:04:21.711 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:04:21 np0005603609 nova_compute[221550]: 2026-01-31 08:04:21.711 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:04:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:22.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:04:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:04:22 np0005603609 nova_compute[221550]: 2026-01-31 08:04:22.273 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 79977ab8-f5fd-4e71-b86e-34bf6e623879 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 03:04:22 np0005603609 nova_compute[221550]: 2026-01-31 08:04:22.273 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 03:04:22 np0005603609 nova_compute[221550]: 2026-01-31 08:04:22.274 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 03:04:22 np0005603609 nova_compute[221550]: 2026-01-31 08:04:22.319 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:04:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:04:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1877866369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:04:22 np0005603609 nova_compute[221550]: 2026-01-31 08:04:22.758 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:04:22 np0005603609 nova_compute[221550]: 2026-01-31 08:04:22.763 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:04:22 np0005603609 nova_compute[221550]: 2026-01-31 08:04:22.853 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:04:22 np0005603609 nova_compute[221550]: 2026-01-31 08:04:22.881 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 03:04:22 np0005603609 nova_compute[221550]: 2026-01-31 08:04:22.882 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:04:23 np0005603609 nova_compute[221550]: 2026-01-31 08:04:23.175 221554 DEBUG oslo_concurrency.lockutils [None req-d528a53d-ff06-417d-8e7f-10703cd4e62a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "79977ab8-f5fd-4e71-b86e-34bf6e623879" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:04:23 np0005603609 nova_compute[221550]: 2026-01-31 08:04:23.176 221554 DEBUG oslo_concurrency.lockutils [None req-d528a53d-ff06-417d-8e7f-10703cd4e62a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:04:23 np0005603609 nova_compute[221550]: 2026-01-31 08:04:23.198 221554 INFO nova.compute.manager [None req-d528a53d-ff06-417d-8e7f-10703cd4e62a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Detaching volume 008cf290-7941-4c64-a361-689cb2af7bcc
Jan 31 03:04:23 np0005603609 nova_compute[221550]: 2026-01-31 08:04:23.357 221554 INFO nova.virt.block_device [None req-d528a53d-ff06-417d-8e7f-10703cd4e62a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Attempting to driver detach volume 008cf290-7941-4c64-a361-689cb2af7bcc from mountpoint /dev/vdb
Jan 31 03:04:23 np0005603609 nova_compute[221550]: 2026-01-31 08:04:23.364 221554 DEBUG nova.virt.libvirt.driver [None req-d528a53d-ff06-417d-8e7f-10703cd4e62a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Attempting to detach device vdb from instance 79977ab8-f5fd-4e71-b86e-34bf6e623879 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 03:04:23 np0005603609 nova_compute[221550]: 2026-01-31 08:04:23.364 221554 DEBUG nova.virt.libvirt.guest [None req-d528a53d-ff06-417d-8e7f-10703cd4e62a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:04:23 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:04:23 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-008cf290-7941-4c64-a361-689cb2af7bcc">
Jan 31 03:04:23 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:04:23 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:04:23 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:04:23 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:04:23 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:04:23 np0005603609 nova_compute[221550]:  <serial>008cf290-7941-4c64-a361-689cb2af7bcc</serial>
Jan 31 03:04:23 np0005603609 nova_compute[221550]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 31 03:04:23 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:04:23 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 03:04:23 np0005603609 nova_compute[221550]: 2026-01-31 08:04:23.370 221554 INFO nova.virt.libvirt.driver [None req-d528a53d-ff06-417d-8e7f-10703cd4e62a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully detached device vdb from instance 79977ab8-f5fd-4e71-b86e-34bf6e623879 from the persistent domain config.
Jan 31 03:04:23 np0005603609 nova_compute[221550]: 2026-01-31 08:04:23.371 221554 DEBUG nova.virt.libvirt.driver [None req-d528a53d-ff06-417d-8e7f-10703cd4e62a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 79977ab8-f5fd-4e71-b86e-34bf6e623879 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 03:04:23 np0005603609 nova_compute[221550]: 2026-01-31 08:04:23.371 221554 DEBUG nova.virt.libvirt.guest [None req-d528a53d-ff06-417d-8e7f-10703cd4e62a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:04:23 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:04:23 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-008cf290-7941-4c64-a361-689cb2af7bcc">
Jan 31 03:04:23 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:04:23 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:04:23 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:04:23 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:04:23 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:04:23 np0005603609 nova_compute[221550]:  <serial>008cf290-7941-4c64-a361-689cb2af7bcc</serial>
Jan 31 03:04:23 np0005603609 nova_compute[221550]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 31 03:04:23 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:04:23 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 03:04:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:23.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:23 np0005603609 nova_compute[221550]: 2026-01-31 08:04:23.844 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:23 np0005603609 nova_compute[221550]: 2026-01-31 08:04:23.883 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:04:23 np0005603609 nova_compute[221550]: 2026-01-31 08:04:23.883 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:04:23 np0005603609 nova_compute[221550]: 2026-01-31 08:04:23.884 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 03:04:23 np0005603609 nova_compute[221550]: 2026-01-31 08:04:23.884 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 03:04:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:24.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:24 np0005603609 nova_compute[221550]: 2026-01-31 08:04:24.156 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-79977ab8-f5fd-4e71-b86e-34bf6e623879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:04:24 np0005603609 nova_compute[221550]: 2026-01-31 08:04:24.157 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-79977ab8-f5fd-4e71-b86e-34bf6e623879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:04:24 np0005603609 nova_compute[221550]: 2026-01-31 08:04:24.157 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 03:04:24 np0005603609 nova_compute[221550]: 2026-01-31 08:04:24.157 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 79977ab8-f5fd-4e71-b86e-34bf6e623879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:04:25 np0005603609 podman[260121]: 2026-01-31 08:04:25.22668546 +0000 UTC m=+0.092599100 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 31 03:04:25 np0005603609 podman[260120]: 2026-01-31 08:04:25.236843586 +0000 UTC m=+0.113763061 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 03:04:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:04:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:25.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:04:25 np0005603609 nova_compute[221550]: 2026-01-31 08:04:25.645 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Updating instance_info_cache with network_info: [{"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:04:25 np0005603609 nova_compute[221550]: 2026-01-31 08:04:25.918 221554 DEBUG nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Received event <DeviceRemovedEvent: 1769846665.918103, 79977ab8-f5fd-4e71-b86e-34bf6e623879 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 03:04:25 np0005603609 nova_compute[221550]: 2026-01-31 08:04:25.920 221554 DEBUG nova.virt.libvirt.driver [None req-d528a53d-ff06-417d-8e7f-10703cd4e62a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 79977ab8-f5fd-4e71-b86e-34bf6e623879 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 03:04:25 np0005603609 nova_compute[221550]: 2026-01-31 08:04:25.922 221554 INFO nova.virt.libvirt.driver [None req-d528a53d-ff06-417d-8e7f-10703cd4e62a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully detached device vdb from instance 79977ab8-f5fd-4e71-b86e-34bf6e623879 from the live domain config.#033[00m
Jan 31 03:04:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:04:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:26.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:04:26 np0005603609 nova_compute[221550]: 2026-01-31 08:04:26.565 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:26 np0005603609 nova_compute[221550]: 2026-01-31 08:04:26.911 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-79977ab8-f5fd-4e71-b86e-34bf6e623879" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:04:26 np0005603609 nova_compute[221550]: 2026-01-31 08:04:26.911 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:04:26 np0005603609 nova_compute[221550]: 2026-01-31 08:04:26.911 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:04:26 np0005603609 nova_compute[221550]: 2026-01-31 08:04:26.912 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:04:26 np0005603609 nova_compute[221550]: 2026-01-31 08:04:26.912 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:04:26 np0005603609 nova_compute[221550]: 2026-01-31 08:04:26.912 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:04:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:27.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:28.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:28 np0005603609 nova_compute[221550]: 2026-01-31 08:04:28.845 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:04:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:29.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:04:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:30.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:31 np0005603609 nova_compute[221550]: 2026-01-31 08:04:31.567 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:31.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:31 np0005603609 nova_compute[221550]: 2026-01-31 08:04:31.933 221554 DEBUG nova.objects.instance [None req-d528a53d-ff06-417d-8e7f-10703cd4e62a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'flavor' on Instance uuid 79977ab8-f5fd-4e71-b86e-34bf6e623879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:32.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:32 np0005603609 nova_compute[221550]: 2026-01-31 08:04:32.175 221554 DEBUG oslo_concurrency.lockutils [None req-d528a53d-ff06-417d-8e7f-10703cd4e62a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 8.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:32.681 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:04:32 np0005603609 nova_compute[221550]: 2026-01-31 08:04:32.681 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:32.682 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:04:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:32Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:ba:9c 10.100.0.9
Jan 31 03:04:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:32Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:ba:9c 10.100.0.9
Jan 31 03:04:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:33.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.658 221554 DEBUG oslo_concurrency.lockutils [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "79977ab8-f5fd-4e71-b86e-34bf6e623879" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.658 221554 DEBUG oslo_concurrency.lockutils [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.659 221554 DEBUG oslo_concurrency.lockutils [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.660 221554 DEBUG oslo_concurrency.lockutils [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.660 221554 DEBUG oslo_concurrency.lockutils [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.662 221554 INFO nova.compute.manager [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Terminating instance#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.664 221554 DEBUG nova.compute.manager [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:04:33 np0005603609 kernel: tap216db5c7-79 (unregistering): left promiscuous mode
Jan 31 03:04:33 np0005603609 NetworkManager[49064]: <info>  [1769846673.7197] device (tap216db5c7-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:04:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:33Z|00370|binding|INFO|Releasing lport 216db5c7-79b0-4533-b75f-56027324e484 from this chassis (sb_readonly=0)
Jan 31 03:04:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:33Z|00371|binding|INFO|Setting lport 216db5c7-79b0-4533-b75f-56027324e484 down in Southbound
Jan 31 03:04:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:33Z|00372|binding|INFO|Removing iface tap216db5c7-79 ovn-installed in OVS
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.728 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.738 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:33.740 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:ba:9c 10.100.0.9'], port_security=['fa:16:3e:a5:ba:9c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '79977ab8-f5fd-4e71-b86e-34bf6e623879', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2e80c7d1-3f05-4a2c-8ace-7be5723d956d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=216db5c7-79b0-4533-b75f-56027324e484) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:04:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:33.742 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 216db5c7-79b0-4533-b75f-56027324e484 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b unbound from our chassis#033[00m
Jan 31 03:04:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:33.744 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79cb2b81-3369-468a-8bf6-7e13d5df334b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:04:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:33.745 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f101e5-0f99-40d1-a49e-be0ef5f50693]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:33.746 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b namespace which is not needed anymore#033[00m
Jan 31 03:04:33 np0005603609 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000060.scope: Deactivated successfully.
Jan 31 03:04:33 np0005603609 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000060.scope: Consumed 13.205s CPU time.
Jan 31 03:04:33 np0005603609 systemd-machined[190912]: Machine qemu-47-instance-00000060 terminated.
Jan 31 03:04:33 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[259968]: [NOTICE]   (259986) : haproxy version is 2.8.14-c23fe91
Jan 31 03:04:33 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[259968]: [NOTICE]   (259986) : path to executable is /usr/sbin/haproxy
Jan 31 03:04:33 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[259968]: [WARNING]  (259986) : Exiting Master process...
Jan 31 03:04:33 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[259968]: [ALERT]    (259986) : Current worker (259992) exited with code 143 (Terminated)
Jan 31 03:04:33 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[259968]: [WARNING]  (259986) : All workers exited. Exiting... (0)
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.847 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:33 np0005603609 systemd[1]: libpod-daed5af8eeeefaf80ab8cf843be9e351f958085f7bd7d8b09d514626be13c22d.scope: Deactivated successfully.
Jan 31 03:04:33 np0005603609 podman[260192]: 2026-01-31 08:04:33.857322246 +0000 UTC m=+0.037920185 container died daed5af8eeeefaf80ab8cf843be9e351f958085f7bd7d8b09d514626be13c22d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:04:33 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-daed5af8eeeefaf80ab8cf843be9e351f958085f7bd7d8b09d514626be13c22d-userdata-shm.mount: Deactivated successfully.
Jan 31 03:04:33 np0005603609 systemd[1]: var-lib-containers-storage-overlay-cc1296cb12e8f7049e4e41e6fe1b6f9ca4beb136c5b9f69ccbf6d17e8c93572a-merged.mount: Deactivated successfully.
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.889 221554 INFO nova.virt.libvirt.driver [-] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Instance destroyed successfully.#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.889 221554 DEBUG nova.objects.instance [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'resources' on Instance uuid 79977ab8-f5fd-4e71-b86e-34bf6e623879 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:33 np0005603609 podman[260192]: 2026-01-31 08:04:33.891242452 +0000 UTC m=+0.071840401 container cleanup daed5af8eeeefaf80ab8cf843be9e351f958085f7bd7d8b09d514626be13c22d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 03:04:33 np0005603609 systemd[1]: libpod-conmon-daed5af8eeeefaf80ab8cf843be9e351f958085f7bd7d8b09d514626be13c22d.scope: Deactivated successfully.
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.903 221554 DEBUG nova.virt.libvirt.vif [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:03:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-894827809',display_name='tempest-ServerActionsTestOtherA-server-1043045287',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-894827809',id=96,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGSsmxs9wjtIdeLS6+TDVPehBXibW1pg/pUNrYaSqukTWGS7m/mx1RvtoAfp2fIHUhH+opIu32b284sDYd8t4TmsPDo7saBhh7IBr6K+Eh6cl1W1MtRphK/nkrUT/jTSkQ==',key_name='tempest-keypair-523704801',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:04:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-xyg4dkcc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:04:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='31043e345f6b48b585fb7b8ab7304764',uuid=79977ab8-f5fd-4e71-b86e-34bf6e623879,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.904 221554 DEBUG nova.network.os_vif_util [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "216db5c7-79b0-4533-b75f-56027324e484", "address": "fa:16:3e:a5:ba:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap216db5c7-79", "ovs_interfaceid": "216db5c7-79b0-4533-b75f-56027324e484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.905 221554 DEBUG nova.network.os_vif_util [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ba:9c,bridge_name='br-int',has_traffic_filtering=True,id=216db5c7-79b0-4533-b75f-56027324e484,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap216db5c7-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.905 221554 DEBUG os_vif [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ba:9c,bridge_name='br-int',has_traffic_filtering=True,id=216db5c7-79b0-4533-b75f-56027324e484,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap216db5c7-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.907 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.907 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap216db5c7-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.909 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.912 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.914 221554 INFO os_vif [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ba:9c,bridge_name='br-int',has_traffic_filtering=True,id=216db5c7-79b0-4533-b75f-56027324e484,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap216db5c7-79')#033[00m
Jan 31 03:04:33 np0005603609 podman[260234]: 2026-01-31 08:04:33.94515442 +0000 UTC m=+0.040185418 container remove daed5af8eeeefaf80ab8cf843be9e351f958085f7bd7d8b09d514626be13c22d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:04:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:33.948 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c4283f-ed9e-44a3-8b4e-ef41349028b3]: (4, ('Sat Jan 31 08:04:33 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b (daed5af8eeeefaf80ab8cf843be9e351f958085f7bd7d8b09d514626be13c22d)\ndaed5af8eeeefaf80ab8cf843be9e351f958085f7bd7d8b09d514626be13c22d\nSat Jan 31 08:04:33 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b (daed5af8eeeefaf80ab8cf843be9e351f958085f7bd7d8b09d514626be13c22d)\ndaed5af8eeeefaf80ab8cf843be9e351f958085f7bd7d8b09d514626be13c22d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:33.950 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba55784-f2e4-46aa-bee7-89796f260702]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:33.951 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:33 np0005603609 kernel: tap79cb2b81-30: left promiscuous mode
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.953 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:33 np0005603609 nova_compute[221550]: 2026-01-31 08:04:33.957 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:33.958 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b7cfc17a-467c-41eb-8b42-18a306c4b95e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:33.978 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a5333c48-ba70-483c-a300-cc49c4cbcc5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:33.980 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[356f5aaf-47e1-4ba5-bc24-718b519d8096]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:33.989 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[300dd3e0-bef5-40cf-9994-0f7d0eabdbb2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689785, 'reachable_time': 24320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260267, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:33 np0005603609 systemd[1]: run-netns-ovnmeta\x2d79cb2b81\x2d3369\x2d468a\x2d8bf6\x2d7e13d5df334b.mount: Deactivated successfully.
Jan 31 03:04:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:33.992 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:04:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:33.992 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[76e8071a-e9ab-468f-bbc7-14164b8d8c4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:34.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:34 np0005603609 nova_compute[221550]: 2026-01-31 08:04:34.375 221554 DEBUG nova.compute.manager [req-d7a04ddc-4cd2-438a-85c2-71b2dccc0706 req-27bcefdc-715a-4658-b2db-cae5273e83d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received event network-vif-unplugged-216db5c7-79b0-4533-b75f-56027324e484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:34 np0005603609 nova_compute[221550]: 2026-01-31 08:04:34.375 221554 DEBUG oslo_concurrency.lockutils [req-d7a04ddc-4cd2-438a-85c2-71b2dccc0706 req-27bcefdc-715a-4658-b2db-cae5273e83d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:34 np0005603609 nova_compute[221550]: 2026-01-31 08:04:34.376 221554 DEBUG oslo_concurrency.lockutils [req-d7a04ddc-4cd2-438a-85c2-71b2dccc0706 req-27bcefdc-715a-4658-b2db-cae5273e83d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:34 np0005603609 nova_compute[221550]: 2026-01-31 08:04:34.376 221554 DEBUG oslo_concurrency.lockutils [req-d7a04ddc-4cd2-438a-85c2-71b2dccc0706 req-27bcefdc-715a-4658-b2db-cae5273e83d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:34 np0005603609 nova_compute[221550]: 2026-01-31 08:04:34.376 221554 DEBUG nova.compute.manager [req-d7a04ddc-4cd2-438a-85c2-71b2dccc0706 req-27bcefdc-715a-4658-b2db-cae5273e83d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] No waiting events found dispatching network-vif-unplugged-216db5c7-79b0-4533-b75f-56027324e484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:04:34 np0005603609 nova_compute[221550]: 2026-01-31 08:04:34.376 221554 DEBUG nova.compute.manager [req-d7a04ddc-4cd2-438a-85c2-71b2dccc0706 req-27bcefdc-715a-4658-b2db-cae5273e83d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received event network-vif-unplugged-216db5c7-79b0-4533-b75f-56027324e484 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:04:34 np0005603609 nova_compute[221550]: 2026-01-31 08:04:34.833 221554 INFO nova.virt.libvirt.driver [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Deleting instance files /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879_del#033[00m
Jan 31 03:04:34 np0005603609 nova_compute[221550]: 2026-01-31 08:04:34.834 221554 INFO nova.virt.libvirt.driver [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Deletion of /var/lib/nova/instances/79977ab8-f5fd-4e71-b86e-34bf6e623879_del complete#033[00m
Jan 31 03:04:34 np0005603609 nova_compute[221550]: 2026-01-31 08:04:34.911 221554 INFO nova.compute.manager [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Took 1.25 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:04:34 np0005603609 nova_compute[221550]: 2026-01-31 08:04:34.911 221554 DEBUG oslo.service.loopingcall [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:04:34 np0005603609 nova_compute[221550]: 2026-01-31 08:04:34.912 221554 DEBUG nova.compute.manager [-] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:04:34 np0005603609 nova_compute[221550]: 2026-01-31 08:04:34.912 221554 DEBUG nova.network.neutron [-] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:04:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:35.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:35 np0005603609 nova_compute[221550]: 2026-01-31 08:04:35.683 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:04:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:04:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:36.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.326 221554 DEBUG nova.compute.manager [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.409 221554 DEBUG oslo_concurrency.lockutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.410 221554 DEBUG oslo_concurrency.lockutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.434 221554 DEBUG nova.objects.instance [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'pci_requests' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.452 221554 DEBUG nova.virt.hardware [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.452 221554 INFO nova.compute.claims [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.453 221554 DEBUG nova.objects.instance [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'resources' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.467 221554 DEBUG nova.objects.instance [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'pci_devices' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.479 221554 DEBUG nova.compute.manager [req-5e4e0e97-af08-4610-9baa-c14872e6c529 req-aa306c58-f466-4875-8af3-782627891f7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received event network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.479 221554 DEBUG oslo_concurrency.lockutils [req-5e4e0e97-af08-4610-9baa-c14872e6c529 req-aa306c58-f466-4875-8af3-782627891f7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.479 221554 DEBUG oslo_concurrency.lockutils [req-5e4e0e97-af08-4610-9baa-c14872e6c529 req-aa306c58-f466-4875-8af3-782627891f7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.479 221554 DEBUG oslo_concurrency.lockutils [req-5e4e0e97-af08-4610-9baa-c14872e6c529 req-aa306c58-f466-4875-8af3-782627891f7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.480 221554 DEBUG nova.compute.manager [req-5e4e0e97-af08-4610-9baa-c14872e6c529 req-aa306c58-f466-4875-8af3-782627891f7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] No waiting events found dispatching network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.480 221554 WARNING nova.compute.manager [req-5e4e0e97-af08-4610-9baa-c14872e6c529 req-aa306c58-f466-4875-8af3-782627891f7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received unexpected event network-vif-plugged-216db5c7-79b0-4533-b75f-56027324e484 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.525 221554 INFO nova.compute.resource_tracker [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updating resource usage from migration 7aa2cc78-3afd-485b-8162-2af7bfbfb142#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.526 221554 DEBUG nova.compute.resource_tracker [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Starting to track incoming migration 7aa2cc78-3afd-485b-8162-2af7bfbfb142 with flavor e3bd1dad-95f3-4ed9-94b4-27245cd798b5 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.591 221554 DEBUG oslo_concurrency.processutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.966 221554 DEBUG nova.network.neutron [-] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:04:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:04:36 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/29766981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.982 221554 DEBUG oslo_concurrency.processutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.985 221554 INFO nova.compute.manager [-] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Took 2.07 seconds to deallocate network for instance.#033[00m
Jan 31 03:04:36 np0005603609 nova_compute[221550]: 2026-01-31 08:04:36.991 221554 DEBUG nova.compute.provider_tree [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:04:37 np0005603609 nova_compute[221550]: 2026-01-31 08:04:37.008 221554 DEBUG nova.scheduler.client.report [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:04:37 np0005603609 nova_compute[221550]: 2026-01-31 08:04:37.038 221554 DEBUG oslo_concurrency.lockutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:37 np0005603609 nova_compute[221550]: 2026-01-31 08:04:37.039 221554 INFO nova.compute.manager [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Migrating#033[00m
Jan 31 03:04:37 np0005603609 nova_compute[221550]: 2026-01-31 08:04:37.051 221554 DEBUG oslo_concurrency.lockutils [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:37 np0005603609 nova_compute[221550]: 2026-01-31 08:04:37.052 221554 DEBUG oslo_concurrency.lockutils [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:37 np0005603609 nova_compute[221550]: 2026-01-31 08:04:37.120 221554 DEBUG oslo_concurrency.processutils [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:04:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3004535060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:04:37 np0005603609 nova_compute[221550]: 2026-01-31 08:04:37.545 221554 DEBUG oslo_concurrency.processutils [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:37 np0005603609 nova_compute[221550]: 2026-01-31 08:04:37.549 221554 DEBUG nova.compute.provider_tree [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:04:37 np0005603609 nova_compute[221550]: 2026-01-31 08:04:37.567 221554 DEBUG nova.scheduler.client.report [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:04:37 np0005603609 nova_compute[221550]: 2026-01-31 08:04:37.589 221554 DEBUG oslo_concurrency.lockutils [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:37 np0005603609 nova_compute[221550]: 2026-01-31 08:04:37.613 221554 INFO nova.scheduler.client.report [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Deleted allocations for instance 79977ab8-f5fd-4e71-b86e-34bf6e623879#033[00m
Jan 31 03:04:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:37.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:37 np0005603609 nova_compute[221550]: 2026-01-31 08:04:37.675 221554 DEBUG oslo_concurrency.lockutils [None req-59574875-15b1-48a7-9973-f0dbdf3338ea 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "79977ab8-f5fd-4e71-b86e-34bf6e623879" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:37 np0005603609 nova_compute[221550]: 2026-01-31 08:04:37.736 221554 DEBUG nova.compute.manager [req-a9a50272-c12e-4d6e-b4f0-ec235cf88005 req-a3ae72ac-6c74-475e-8e74-3c56cfd87468 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Received event network-vif-deleted-216db5c7-79b0-4533-b75f-56027324e484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:38.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:38 np0005603609 nova_compute[221550]: 2026-01-31 08:04:38.849 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:38 np0005603609 nova_compute[221550]: 2026-01-31 08:04:38.909 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:39.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:40.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:40 np0005603609 systemd-logind[823]: New session 63 of user nova.
Jan 31 03:04:40 np0005603609 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 03:04:40 np0005603609 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 03:04:40 np0005603609 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 03:04:40 np0005603609 systemd[1]: Starting User Manager for UID 42436...
Jan 31 03:04:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:40 np0005603609 systemd[260317]: Queued start job for default target Main User Target.
Jan 31 03:04:40 np0005603609 systemd[260317]: Created slice User Application Slice.
Jan 31 03:04:40 np0005603609 systemd[260317]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:04:40 np0005603609 systemd[260317]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 03:04:40 np0005603609 systemd[260317]: Reached target Paths.
Jan 31 03:04:40 np0005603609 systemd[260317]: Reached target Timers.
Jan 31 03:04:40 np0005603609 systemd[260317]: Starting D-Bus User Message Bus Socket...
Jan 31 03:04:40 np0005603609 systemd[260317]: Starting Create User's Volatile Files and Directories...
Jan 31 03:04:40 np0005603609 systemd[260317]: Finished Create User's Volatile Files and Directories.
Jan 31 03:04:40 np0005603609 systemd[260317]: Listening on D-Bus User Message Bus Socket.
Jan 31 03:04:40 np0005603609 systemd[260317]: Reached target Sockets.
Jan 31 03:04:40 np0005603609 systemd[260317]: Reached target Basic System.
Jan 31 03:04:40 np0005603609 systemd[260317]: Reached target Main User Target.
Jan 31 03:04:40 np0005603609 systemd[260317]: Startup finished in 144ms.
Jan 31 03:04:40 np0005603609 systemd[1]: Started User Manager for UID 42436.
Jan 31 03:04:40 np0005603609 systemd[1]: Started Session 63 of User nova.
Jan 31 03:04:40 np0005603609 systemd[1]: session-63.scope: Deactivated successfully.
Jan 31 03:04:40 np0005603609 systemd-logind[823]: Session 63 logged out. Waiting for processes to exit.
Jan 31 03:04:40 np0005603609 systemd-logind[823]: Removed session 63.
Jan 31 03:04:40 np0005603609 systemd-logind[823]: New session 65 of user nova.
Jan 31 03:04:40 np0005603609 systemd[1]: Started Session 65 of User nova.
Jan 31 03:04:40 np0005603609 systemd[1]: session-65.scope: Deactivated successfully.
Jan 31 03:04:40 np0005603609 systemd-logind[823]: Session 65 logged out. Waiting for processes to exit.
Jan 31 03:04:40 np0005603609 systemd-logind[823]: Removed session 65.
Jan 31 03:04:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:04:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:41.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:04:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:41.685 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:42.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:42 np0005603609 nova_compute[221550]: 2026-01-31 08:04:42.661 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:42 np0005603609 nova_compute[221550]: 2026-01-31 08:04:42.743 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:43 np0005603609 nova_compute[221550]: 2026-01-31 08:04:43.557 221554 DEBUG nova.compute.manager [req-5314fe09-96d5-4451-b9f4-327272e1f5c6 req-f95906dc-1f02-448e-aba2-dd850bf5504d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:43 np0005603609 nova_compute[221550]: 2026-01-31 08:04:43.558 221554 DEBUG oslo_concurrency.lockutils [req-5314fe09-96d5-4451-b9f4-327272e1f5c6 req-f95906dc-1f02-448e-aba2-dd850bf5504d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:43 np0005603609 nova_compute[221550]: 2026-01-31 08:04:43.558 221554 DEBUG oslo_concurrency.lockutils [req-5314fe09-96d5-4451-b9f4-327272e1f5c6 req-f95906dc-1f02-448e-aba2-dd850bf5504d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:43 np0005603609 nova_compute[221550]: 2026-01-31 08:04:43.558 221554 DEBUG oslo_concurrency.lockutils [req-5314fe09-96d5-4451-b9f4-327272e1f5c6 req-f95906dc-1f02-448e-aba2-dd850bf5504d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:43 np0005603609 nova_compute[221550]: 2026-01-31 08:04:43.559 221554 DEBUG nova.compute.manager [req-5314fe09-96d5-4451-b9f4-327272e1f5c6 req-f95906dc-1f02-448e-aba2-dd850bf5504d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:04:43 np0005603609 nova_compute[221550]: 2026-01-31 08:04:43.559 221554 WARNING nova.compute.manager [req-5314fe09-96d5-4451-b9f4-327272e1f5c6 req-f95906dc-1f02-448e-aba2-dd850bf5504d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 31 03:04:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:43.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:43 np0005603609 nova_compute[221550]: 2026-01-31 08:04:43.851 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:43 np0005603609 nova_compute[221550]: 2026-01-31 08:04:43.910 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:44.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:44 np0005603609 nova_compute[221550]: 2026-01-31 08:04:44.689 221554 INFO nova.network.neutron [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updating port 86cf2bf6-2f28-4435-b081-a3945070ed2d with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 31 03:04:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:04:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:45.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:04:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:04:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:46.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:04:46 np0005603609 nova_compute[221550]: 2026-01-31 08:04:46.044 221554 DEBUG nova.compute.manager [req-8d588c46-f597-4052-be82-e8fbdfa1bb33 req-231f0324-fec8-465b-b109-75066f061da9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:46 np0005603609 nova_compute[221550]: 2026-01-31 08:04:46.044 221554 DEBUG oslo_concurrency.lockutils [req-8d588c46-f597-4052-be82-e8fbdfa1bb33 req-231f0324-fec8-465b-b109-75066f061da9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:46 np0005603609 nova_compute[221550]: 2026-01-31 08:04:46.044 221554 DEBUG oslo_concurrency.lockutils [req-8d588c46-f597-4052-be82-e8fbdfa1bb33 req-231f0324-fec8-465b-b109-75066f061da9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:46 np0005603609 nova_compute[221550]: 2026-01-31 08:04:46.045 221554 DEBUG oslo_concurrency.lockutils [req-8d588c46-f597-4052-be82-e8fbdfa1bb33 req-231f0324-fec8-465b-b109-75066f061da9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:46 np0005603609 nova_compute[221550]: 2026-01-31 08:04:46.045 221554 DEBUG nova.compute.manager [req-8d588c46-f597-4052-be82-e8fbdfa1bb33 req-231f0324-fec8-465b-b109-75066f061da9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:04:46 np0005603609 nova_compute[221550]: 2026-01-31 08:04:46.045 221554 WARNING nova.compute.manager [req-8d588c46-f597-4052-be82-e8fbdfa1bb33 req-231f0324-fec8-465b-b109-75066f061da9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 03:04:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:47.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:47 np0005603609 nova_compute[221550]: 2026-01-31 08:04:47.708 221554 DEBUG oslo_concurrency.lockutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:04:47 np0005603609 nova_compute[221550]: 2026-01-31 08:04:47.708 221554 DEBUG oslo_concurrency.lockutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquired lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:04:47 np0005603609 nova_compute[221550]: 2026-01-31 08:04:47.709 221554 DEBUG nova.network.neutron [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:04:47 np0005603609 nova_compute[221550]: 2026-01-31 08:04:47.920 221554 DEBUG nova.compute.manager [req-7804bbd1-6c24-46f7-b8a7-e9970773bc11 req-1801f5db-579c-493d-acb1-edcc31483f39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-changed-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:47 np0005603609 nova_compute[221550]: 2026-01-31 08:04:47.920 221554 DEBUG nova.compute.manager [req-7804bbd1-6c24-46f7-b8a7-e9970773bc11 req-1801f5db-579c-493d-acb1-edcc31483f39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Refreshing instance network info cache due to event network-changed-86cf2bf6-2f28-4435-b081-a3945070ed2d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:04:47 np0005603609 nova_compute[221550]: 2026-01-31 08:04:47.921 221554 DEBUG oslo_concurrency.lockutils [req-7804bbd1-6c24-46f7-b8a7-e9970773bc11 req-1801f5db-579c-493d-acb1-edcc31483f39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:04:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:48.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:48 np0005603609 nova_compute[221550]: 2026-01-31 08:04:48.853 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:48 np0005603609 nova_compute[221550]: 2026-01-31 08:04:48.888 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846673.8869174, 79977ab8-f5fd-4e71-b86e-34bf6e623879 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:04:48 np0005603609 nova_compute[221550]: 2026-01-31 08:04:48.889 221554 INFO nova.compute.manager [-] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:04:48 np0005603609 nova_compute[221550]: 2026-01-31 08:04:48.912 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:48 np0005603609 nova_compute[221550]: 2026-01-31 08:04:48.939 221554 DEBUG nova.compute.manager [None req-ddb0b16a-2b2e-4af9-8306-9fa605729542 - - - - - -] [instance: 79977ab8-f5fd-4e71-b86e-34bf6e623879] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:04:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:49.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:04:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:50.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.059 221554 DEBUG nova.network.neutron [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updating instance_info_cache with network_info: [{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.111 221554 DEBUG oslo_concurrency.lockutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Releasing lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.116 221554 DEBUG oslo_concurrency.lockutils [req-7804bbd1-6c24-46f7-b8a7-e9970773bc11 req-1801f5db-579c-493d-acb1-edcc31483f39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.116 221554 DEBUG nova.network.neutron [req-7804bbd1-6c24-46f7-b8a7-e9970773bc11 req-1801f5db-579c-493d-acb1-edcc31483f39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Refreshing network info cache for port 86cf2bf6-2f28-4435-b081-a3945070ed2d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:04:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.265 221554 DEBUG nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.266 221554 DEBUG nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.268 221554 INFO nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Creating image(s)#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.342 221554 DEBUG nova.storage.rbd_utils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] creating snapshot(nova-resize) on rbd image(dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:04:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e257 e257: 3 total, 3 up, 3 in
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.740 221554 DEBUG nova.objects.instance [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'trusted_certs' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:50 np0005603609 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 03:04:50 np0005603609 systemd[260317]: Activating special unit Exit the Session...
Jan 31 03:04:50 np0005603609 systemd[260317]: Stopped target Main User Target.
Jan 31 03:04:50 np0005603609 systemd[260317]: Stopped target Basic System.
Jan 31 03:04:50 np0005603609 systemd[260317]: Stopped target Paths.
Jan 31 03:04:50 np0005603609 systemd[260317]: Stopped target Sockets.
Jan 31 03:04:50 np0005603609 systemd[260317]: Stopped target Timers.
Jan 31 03:04:50 np0005603609 systemd[260317]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:04:50 np0005603609 systemd[260317]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 03:04:50 np0005603609 systemd[260317]: Closed D-Bus User Message Bus Socket.
Jan 31 03:04:50 np0005603609 systemd[260317]: Stopped Create User's Volatile Files and Directories.
Jan 31 03:04:50 np0005603609 systemd[260317]: Removed slice User Application Slice.
Jan 31 03:04:50 np0005603609 systemd[260317]: Reached target Shutdown.
Jan 31 03:04:50 np0005603609 systemd[260317]: Finished Exit the Session.
Jan 31 03:04:50 np0005603609 systemd[260317]: Reached target Exit the Session.
Jan 31 03:04:50 np0005603609 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 03:04:50 np0005603609 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 03:04:50 np0005603609 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 03:04:50 np0005603609 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 03:04:50 np0005603609 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 03:04:50 np0005603609 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 03:04:50 np0005603609 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.950 221554 DEBUG nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.951 221554 DEBUG nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Ensure instance console log exists: /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.951 221554 DEBUG oslo_concurrency.lockutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.952 221554 DEBUG oslo_concurrency.lockutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.952 221554 DEBUG oslo_concurrency.lockutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.956 221554 DEBUG nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Start _get_guest_xml network_info=[{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1076068063-network", "vif_mac": "fa:16:3e:01:98:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.963 221554 WARNING nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.973 221554 DEBUG nova.virt.libvirt.host [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.974 221554 DEBUG nova.virt.libvirt.host [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.978 221554 DEBUG nova.virt.libvirt.host [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.978 221554 DEBUG nova.virt.libvirt.host [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.979 221554 DEBUG nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.979 221554 DEBUG nova.virt.hardware [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e3bd1dad-95f3-4ed9-94b4-27245cd798b5',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.980 221554 DEBUG nova.virt.hardware [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.980 221554 DEBUG nova.virt.hardware [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.980 221554 DEBUG nova.virt.hardware [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.980 221554 DEBUG nova.virt.hardware [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.980 221554 DEBUG nova.virt.hardware [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.981 221554 DEBUG nova.virt.hardware [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.981 221554 DEBUG nova.virt.hardware [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.981 221554 DEBUG nova.virt.hardware [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.981 221554 DEBUG nova.virt.hardware [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.981 221554 DEBUG nova.virt.hardware [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:04:50 np0005603609 nova_compute[221550]: 2026-01-31 08:04:50.982 221554 DEBUG nova.objects.instance [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'vcpu_model' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.008 221554 DEBUG oslo_concurrency.processutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:04:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/91292284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.430 221554 DEBUG oslo_concurrency.processutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.464 221554 DEBUG oslo_concurrency.processutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:51.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:04:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3432603807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.863 221554 DEBUG oslo_concurrency.processutils [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.864 221554 DEBUG nova.virt.libvirt.vif [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1163372726',display_name='tempest-ServerActionsTestJSON-server-1163372726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1163372726',id=91,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-30t53h1p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=dfba7f29-bde8-4327-a7b3-1c4fd44e045a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1076068063-network", "vif_mac": "fa:16:3e:01:98:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.864 221554 DEBUG nova.network.os_vif_util [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1076068063-network", "vif_mac": "fa:16:3e:01:98:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.865 221554 DEBUG nova.network.os_vif_util [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.868 221554 DEBUG nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  <uuid>dfba7f29-bde8-4327-a7b3-1c4fd44e045a</uuid>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  <name>instance-0000005b</name>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  <memory>196608</memory>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerActionsTestJSON-server-1163372726</nova:name>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:04:50</nova:creationTime>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.micro">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        <nova:memory>192</nova:memory>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        <nova:user uuid="d9ed446fb2cf4fc0a4e619c6c766fddc">tempest-ServerActionsTestJSON-1391450973-project-member</nova:user>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        <nova:project uuid="1fcec9ca13964c7191134db4420ab049">tempest-ServerActionsTestJSON-1391450973</nova:project>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        <nova:port uuid="86cf2bf6-2f28-4435-b081-a3945070ed2d">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <entry name="serial">dfba7f29-bde8-4327-a7b3-1c4fd44e045a</entry>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <entry name="uuid">dfba7f29-bde8-4327-a7b3-1c4fd44e045a</entry>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/dfba7f29-bde8-4327-a7b3-1c4fd44e045a_disk.config">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:01:98:96"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <target dev="tap86cf2bf6-2f"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a/console.log" append="off"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:04:51 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:04:51 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:04:51 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:04:51 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.869 221554 DEBUG nova.virt.libvirt.vif [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1163372726',display_name='tempest-ServerActionsTestJSON-server-1163372726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1163372726',id=91,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-30t53h1p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=dfba7f29-bde8-4327-a7b3-1c4fd44e045a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1076068063-network", "vif_mac": "fa:16:3e:01:98:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.869 221554 DEBUG nova.network.os_vif_util [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1076068063-network", "vif_mac": "fa:16:3e:01:98:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.869 221554 DEBUG nova.network.os_vif_util [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.870 221554 DEBUG os_vif [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.870 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.871 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.871 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.873 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.873 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86cf2bf6-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.873 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86cf2bf6-2f, col_values=(('external_ids', {'iface-id': '86cf2bf6-2f28-4435-b081-a3945070ed2d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:98:96', 'vm-uuid': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.875 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:51 np0005603609 NetworkManager[49064]: <info>  [1769846691.8758] manager: (tap86cf2bf6-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.877 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.879 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:51 np0005603609 nova_compute[221550]: 2026-01-31 08:04:51.880 221554 INFO os_vif [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f')#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.014 221554 DEBUG nova.network.neutron [req-7804bbd1-6c24-46f7-b8a7-e9970773bc11 req-1801f5db-579c-493d-acb1-edcc31483f39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updated VIF entry in instance network info cache for port 86cf2bf6-2f28-4435-b081-a3945070ed2d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.015 221554 DEBUG nova.network.neutron [req-7804bbd1-6c24-46f7-b8a7-e9970773bc11 req-1801f5db-579c-493d-acb1-edcc31483f39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updating instance_info_cache with network_info: [{"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:04:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:52.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.215 221554 DEBUG nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.216 221554 DEBUG nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.216 221554 DEBUG nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No VIF found with MAC fa:16:3e:01:98:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.217 221554 INFO nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Using config drive#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.257 221554 DEBUG oslo_concurrency.lockutils [req-7804bbd1-6c24-46f7-b8a7-e9970773bc11 req-1801f5db-579c-493d-acb1-edcc31483f39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-dfba7f29-bde8-4327-a7b3-1c4fd44e045a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:04:52 np0005603609 kernel: tap86cf2bf6-2f: entered promiscuous mode
Jan 31 03:04:52 np0005603609 NetworkManager[49064]: <info>  [1769846692.2896] manager: (tap86cf2bf6-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/185)
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.290 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:52Z|00373|binding|INFO|Claiming lport 86cf2bf6-2f28-4435-b081-a3945070ed2d for this chassis.
Jan 31 03:04:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:52Z|00374|binding|INFO|86cf2bf6-2f28-4435-b081-a3945070ed2d: Claiming fa:16:3e:01:98:96 10.100.0.13
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.293 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.298 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.300 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.302 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:52 np0005603609 NetworkManager[49064]: <info>  [1769846692.3032] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Jan 31 03:04:52 np0005603609 NetworkManager[49064]: <info>  [1769846692.3040] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Jan 31 03:04:52 np0005603609 systemd-udevd[260508]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:04:52 np0005603609 systemd-machined[190912]: New machine qemu-48-instance-0000005b.
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.319 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:98:96 10.100.0.13'], port_security=['fa:16:3e:01:98:96 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '14', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=86cf2bf6-2f28-4435-b081-a3945070ed2d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.321 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 86cf2bf6-2f28-4435-b081-a3945070ed2d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 bound to our chassis#033[00m
Jan 31 03:04:52 np0005603609 NetworkManager[49064]: <info>  [1769846692.3226] device (tap86cf2bf6-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.323 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc2535f-0f8f-4713-a35c-9805048a29a8#033[00m
Jan 31 03:04:52 np0005603609 NetworkManager[49064]: <info>  [1769846692.3232] device (tap86cf2bf6-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.330 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[79eef6d0-0f67-4835-8421-a4349b750d1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.330 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5cc2535f-01 in ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.332 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5cc2535f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.332 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ad89ce70-6e52-4a30-ab64-3816194dd1a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.332 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[37db7522-92fc-4257-ab0c-74dcd6b9116b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.340 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[af4a5e21-09ba-4255-b0b6-2d82eb34cb1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:52 np0005603609 systemd[1]: Started Virtual Machine qemu-48-instance-0000005b.
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.347 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c44fcd1c-a7c0-4ec5-bacf-59f9af4e5e05]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.366 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.369 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f245d1-00d9-427a-9a1e-41992e96f0c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.373 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fa5be3e0-2e49-4fa1-a250-5e78607a9cd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:52 np0005603609 NetworkManager[49064]: <info>  [1769846692.3747] manager: (tap5cc2535f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/188)
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.392 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.393 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[31baa5c5-9a9d-4297-a808-84ad25dfcaec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:52Z|00375|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d ovn-installed in OVS
Jan 31 03:04:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:52Z|00376|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d up in Southbound
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.397 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.399 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f4baf36c-f286-48ba-8cda-edf511de91ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:52 np0005603609 NetworkManager[49064]: <info>  [1769846692.4129] device (tap5cc2535f-00): carrier: link connected
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.416 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0f110a-05d2-4bfd-8fec-b14c728b406a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.427 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b260b4b3-405e-4603-9ea3-e60147c4b825]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693211, 'reachable_time': 37339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260545, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.437 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[94d3b16c-61d2-4eff-a210-d796eaadd2b8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:76f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693211, 'tstamp': 693211}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260546, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.447 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[984aa14b-57ae-4149-a547-e70525eaa6be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693211, 'reachable_time': 37339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260547, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.462 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[012bd426-ba3f-4a68-b951-b1ba57fc3943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.490 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0c11ad04-7634-4648-bd25-84bd54597bd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.491 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.492 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.492 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc2535f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:52 np0005603609 NetworkManager[49064]: <info>  [1769846692.4947] manager: (tap5cc2535f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Jan 31 03:04:52 np0005603609 kernel: tap5cc2535f-00: entered promiscuous mode
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.494 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.498 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc2535f-00, col_values=(('external_ids', {'iface-id': 'ab077a7e-cc79-4948-8987-2cd87d88deff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:04:52Z|00377|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.499 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.500 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.502 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.505 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.505 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[89a15187-1062-44c6-b617-7629b747f795]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.507 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:04:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:04:52.507 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'env', 'PROCESS_TAG=haproxy-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5cc2535f-0f8f-4713-a35c-9805048a29a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.674 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846692.674435, dfba7f29-bde8-4327-a7b3-1c4fd44e045a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.675 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.676 221554 DEBUG nova.compute.manager [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.679 221554 INFO nova.virt.libvirt.driver [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance running successfully.#033[00m
Jan 31 03:04:52 np0005603609 virtqemud[221292]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.680 221554 DEBUG nova.virt.libvirt.guest [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.681 221554 DEBUG nova.virt.libvirt.driver [None req-40a0be29-e6a5-4a1f-9c0c-2c5158635191 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.745 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.747 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.842 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.842 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846692.6759977, dfba7f29-bde8-4327-a7b3-1c4fd44e045a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.842 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] VM Started (Lifecycle Event)#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.847 221554 DEBUG nova.compute.manager [req-ed7f284a-5722-44e9-b532-5334cce88790 req-24a331d1-5bee-48aa-ae8f-bc1c9ab11f4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.847 221554 DEBUG oslo_concurrency.lockutils [req-ed7f284a-5722-44e9-b532-5334cce88790 req-24a331d1-5bee-48aa-ae8f-bc1c9ab11f4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.847 221554 DEBUG oslo_concurrency.lockutils [req-ed7f284a-5722-44e9-b532-5334cce88790 req-24a331d1-5bee-48aa-ae8f-bc1c9ab11f4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.847 221554 DEBUG oslo_concurrency.lockutils [req-ed7f284a-5722-44e9-b532-5334cce88790 req-24a331d1-5bee-48aa-ae8f-bc1c9ab11f4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.848 221554 DEBUG nova.compute.manager [req-ed7f284a-5722-44e9-b532-5334cce88790 req-24a331d1-5bee-48aa-ae8f-bc1c9ab11f4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.848 221554 WARNING nova.compute.manager [req-ed7f284a-5722-44e9-b532-5334cce88790 req-24a331d1-5bee-48aa-ae8f-bc1c9ab11f4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state resize_finish.#033[00m
Jan 31 03:04:52 np0005603609 podman[260618]: 2026-01-31 08:04:52.78164509 +0000 UTC m=+0.017483823 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.888 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:04:52 np0005603609 nova_compute[221550]: 2026-01-31 08:04:52.897 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:04:52 np0005603609 podman[260618]: 2026-01-31 08:04:52.947595094 +0000 UTC m=+0.183433797 container create 6ceaa90c3d4f54cc78560ff340d1f7b83a75370fa45180b25d9bfee13affdf50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:04:53 np0005603609 systemd[1]: Started libpod-conmon-6ceaa90c3d4f54cc78560ff340d1f7b83a75370fa45180b25d9bfee13affdf50.scope.
Jan 31 03:04:53 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:04:53 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e4bca395b452b9ac4452326838dfb7ca86d1f44ec94c59a190d39524f8f701/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:04:53 np0005603609 podman[260618]: 2026-01-31 08:04:53.051468905 +0000 UTC m=+0.287307628 container init 6ceaa90c3d4f54cc78560ff340d1f7b83a75370fa45180b25d9bfee13affdf50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:04:53 np0005603609 podman[260618]: 2026-01-31 08:04:53.056471975 +0000 UTC m=+0.292310688 container start 6ceaa90c3d4f54cc78560ff340d1f7b83a75370fa45180b25d9bfee13affdf50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:04:53 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[260634]: [NOTICE]   (260638) : New worker (260640) forked
Jan 31 03:04:53 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[260634]: [NOTICE]   (260638) : Loading success.
Jan 31 03:04:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:04:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:53.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:04:53 np0005603609 nova_compute[221550]: 2026-01-31 08:04:53.857 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:04:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:54.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:04:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:55.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:55 np0005603609 nova_compute[221550]: 2026-01-31 08:04:55.857 221554 DEBUG nova.compute.manager [req-b7fdb447-cd42-4414-b585-f32d37abcddb req-147ebd4a-9ed8-44bf-b63c-df33606e1ae3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:55 np0005603609 nova_compute[221550]: 2026-01-31 08:04:55.858 221554 DEBUG oslo_concurrency.lockutils [req-b7fdb447-cd42-4414-b585-f32d37abcddb req-147ebd4a-9ed8-44bf-b63c-df33606e1ae3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:55 np0005603609 nova_compute[221550]: 2026-01-31 08:04:55.858 221554 DEBUG oslo_concurrency.lockutils [req-b7fdb447-cd42-4414-b585-f32d37abcddb req-147ebd4a-9ed8-44bf-b63c-df33606e1ae3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:55 np0005603609 nova_compute[221550]: 2026-01-31 08:04:55.858 221554 DEBUG oslo_concurrency.lockutils [req-b7fdb447-cd42-4414-b585-f32d37abcddb req-147ebd4a-9ed8-44bf-b63c-df33606e1ae3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:55 np0005603609 nova_compute[221550]: 2026-01-31 08:04:55.858 221554 DEBUG nova.compute.manager [req-b7fdb447-cd42-4414-b585-f32d37abcddb req-147ebd4a-9ed8-44bf-b63c-df33606e1ae3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:04:55 np0005603609 nova_compute[221550]: 2026-01-31 08:04:55.858 221554 WARNING nova.compute.manager [req-b7fdb447-cd42-4414-b585-f32d37abcddb req-147ebd4a-9ed8-44bf-b63c-df33606e1ae3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state resized and task_state None.#033[00m
Jan 31 03:04:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:56.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:56 np0005603609 podman[260650]: 2026-01-31 08:04:56.143023971 +0000 UTC m=+0.063422598 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 03:04:56 np0005603609 podman[260649]: 2026-01-31 08:04:56.162789897 +0000 UTC m=+0.084147687 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 03:04:56 np0005603609 nova_compute[221550]: 2026-01-31 08:04:56.566 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:56 np0005603609 nova_compute[221550]: 2026-01-31 08:04:56.875 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:57.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:58.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:58 np0005603609 nova_compute[221550]: 2026-01-31 08:04:58.858 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:04:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:59.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:00.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:01.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:01 np0005603609 nova_compute[221550]: 2026-01-31 08:05:01.879 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:02.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e258 e258: 3 total, 3 up, 3 in
Jan 31 03:05:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:03.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:03 np0005603609 nova_compute[221550]: 2026-01-31 08:05:03.861 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:04.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:04 np0005603609 nova_compute[221550]: 2026-01-31 08:05:04.419 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:05 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:05Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:98:96 10.100.0.13
Jan 31 03:05:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:05.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:06.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.693 221554 DEBUG oslo_concurrency.lockutils [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.694 221554 DEBUG oslo_concurrency.lockutils [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.694 221554 DEBUG oslo_concurrency.lockutils [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.694 221554 DEBUG oslo_concurrency.lockutils [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.695 221554 DEBUG oslo_concurrency.lockutils [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.696 221554 INFO nova.compute.manager [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Terminating instance#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.698 221554 DEBUG nova.compute.manager [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:05:06 np0005603609 kernel: tap86cf2bf6-2f (unregistering): left promiscuous mode
Jan 31 03:05:06 np0005603609 NetworkManager[49064]: <info>  [1769846706.7805] device (tap86cf2bf6-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:05:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:06Z|00378|binding|INFO|Releasing lport 86cf2bf6-2f28-4435-b081-a3945070ed2d from this chassis (sb_readonly=0)
Jan 31 03:05:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:06Z|00379|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d down in Southbound
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.788 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:06Z|00380|binding|INFO|Removing iface tap86cf2bf6-2f ovn-installed in OVS
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.790 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:06.797 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:98:96 10.100.0.13'], port_security=['fa:16:3e:01:98:96 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '16', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=86cf2bf6-2f28-4435-b081-a3945070ed2d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:05:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:06.799 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 86cf2bf6-2f28-4435-b081-a3945070ed2d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis#033[00m
Jan 31 03:05:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:06.801 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cc2535f-0f8f-4713-a35c-9805048a29a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:05:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:06.802 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[01c225a3-0c0a-4478-8229-41249dc5770f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:06.803 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace which is not needed anymore#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.803 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:06 np0005603609 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Jan 31 03:05:06 np0005603609 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000005b.scope: Consumed 12.782s CPU time.
Jan 31 03:05:06 np0005603609 systemd-machined[190912]: Machine qemu-48-instance-0000005b terminated.
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.881 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:06 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[260634]: [NOTICE]   (260638) : haproxy version is 2.8.14-c23fe91
Jan 31 03:05:06 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[260634]: [NOTICE]   (260638) : path to executable is /usr/sbin/haproxy
Jan 31 03:05:06 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[260634]: [WARNING]  (260638) : Exiting Master process...
Jan 31 03:05:06 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[260634]: [WARNING]  (260638) : Exiting Master process...
Jan 31 03:05:06 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[260634]: [ALERT]    (260638) : Current worker (260640) exited with code 143 (Terminated)
Jan 31 03:05:06 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[260634]: [WARNING]  (260638) : All workers exited. Exiting... (0)
Jan 31 03:05:06 np0005603609 systemd[1]: libpod-6ceaa90c3d4f54cc78560ff340d1f7b83a75370fa45180b25d9bfee13affdf50.scope: Deactivated successfully.
Jan 31 03:05:06 np0005603609 kernel: tap86cf2bf6-2f: entered promiscuous mode
Jan 31 03:05:06 np0005603609 kernel: tap86cf2bf6-2f (unregistering): left promiscuous mode
Jan 31 03:05:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:06Z|00381|binding|INFO|Claiming lport 86cf2bf6-2f28-4435-b081-a3945070ed2d for this chassis.
Jan 31 03:05:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:06Z|00382|binding|INFO|86cf2bf6-2f28-4435-b081-a3945070ed2d: Claiming fa:16:3e:01:98:96 10.100.0.13
Jan 31 03:05:06 np0005603609 podman[260715]: 2026-01-31 08:05:06.920670543 +0000 UTC m=+0.047552196 container died 6ceaa90c3d4f54cc78560ff340d1f7b83a75370fa45180b25d9bfee13affdf50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.919 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:06.923 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:98:96 10.100.0.13'], port_security=['fa:16:3e:01:98:96 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '16', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=86cf2bf6-2f28-4435-b081-a3945070ed2d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:05:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:06Z|00383|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d ovn-installed in OVS
Jan 31 03:05:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:06Z|00384|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d up in Southbound
Jan 31 03:05:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:06Z|00385|binding|INFO|Releasing lport 86cf2bf6-2f28-4435-b081-a3945070ed2d from this chassis (sb_readonly=1)
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.931 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:06Z|00386|if_status|INFO|Not setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d down as sb is readonly
Jan 31 03:05:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:06Z|00387|binding|INFO|Removing iface tap86cf2bf6-2f ovn-installed in OVS
Jan 31 03:05:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:06Z|00388|binding|INFO|Releasing lport 86cf2bf6-2f28-4435-b081-a3945070ed2d from this chassis (sb_readonly=0)
Jan 31 03:05:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:06Z|00389|binding|INFO|Setting lport 86cf2bf6-2f28-4435-b081-a3945070ed2d down in Southbound
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.937 221554 INFO nova.virt.libvirt.driver [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Instance destroyed successfully.#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.938 221554 DEBUG nova.objects.instance [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'resources' on Instance uuid dfba7f29-bde8-4327-a7b3-1c4fd44e045a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.939 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:06.943 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:98:96 10.100.0.13'], port_security=['fa:16:3e:01:98:96 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dfba7f29-bde8-4327-a7b3-1c4fd44e045a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '16', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=86cf2bf6-2f28-4435-b081-a3945070ed2d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.959 221554 DEBUG nova.virt.libvirt.vif [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1163372726',display_name='tempest-ServerActionsTestJSON-server-1163372726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1163372726',id=91,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:04:52Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-30t53h1p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:05:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=dfba7f29-bde8-4327-a7b3-1c4fd44e045a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.959 221554 DEBUG nova.network.os_vif_util [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "address": "fa:16:3e:01:98:96", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86cf2bf6-2f", "ovs_interfaceid": "86cf2bf6-2f28-4435-b081-a3945070ed2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.960 221554 DEBUG nova.network.os_vif_util [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.960 221554 DEBUG os_vif [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.962 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.962 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86cf2bf6-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.964 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.965 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:06 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ceaa90c3d4f54cc78560ff340d1f7b83a75370fa45180b25d9bfee13affdf50-userdata-shm.mount: Deactivated successfully.
Jan 31 03:05:06 np0005603609 nova_compute[221550]: 2026-01-31 08:05:06.968 221554 INFO os_vif [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:98:96,bridge_name='br-int',has_traffic_filtering=True,id=86cf2bf6-2f28-4435-b081-a3945070ed2d,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86cf2bf6-2f')#033[00m
Jan 31 03:05:06 np0005603609 systemd[1]: var-lib-containers-storage-overlay-d9e4bca395b452b9ac4452326838dfb7ca86d1f44ec94c59a190d39524f8f701-merged.mount: Deactivated successfully.
Jan 31 03:05:06 np0005603609 podman[260715]: 2026-01-31 08:05:06.988385173 +0000 UTC m=+0.115266806 container cleanup 6ceaa90c3d4f54cc78560ff340d1f7b83a75370fa45180b25d9bfee13affdf50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:05:06 np0005603609 systemd[1]: libpod-conmon-6ceaa90c3d4f54cc78560ff340d1f7b83a75370fa45180b25d9bfee13affdf50.scope: Deactivated successfully.
Jan 31 03:05:07 np0005603609 podman[260760]: 2026-01-31 08:05:07.049228718 +0000 UTC m=+0.047671398 container remove 6ceaa90c3d4f54cc78560ff340d1f7b83a75370fa45180b25d9bfee13affdf50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.052 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0744c8d6-3d8f-4aa5-968f-5028b3f62b45]: (4, ('Sat Jan 31 08:05:06 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (6ceaa90c3d4f54cc78560ff340d1f7b83a75370fa45180b25d9bfee13affdf50)\n6ceaa90c3d4f54cc78560ff340d1f7b83a75370fa45180b25d9bfee13affdf50\nSat Jan 31 08:05:06 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (6ceaa90c3d4f54cc78560ff340d1f7b83a75370fa45180b25d9bfee13affdf50)\n6ceaa90c3d4f54cc78560ff340d1f7b83a75370fa45180b25d9bfee13affdf50\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.053 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[510b7641-5409-4a67-83c1-9dd5e55cb5fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.054 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:07 np0005603609 kernel: tap5cc2535f-00: left promiscuous mode
Jan 31 03:05:07 np0005603609 nova_compute[221550]: 2026-01-31 08:05:07.056 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:07 np0005603609 nova_compute[221550]: 2026-01-31 08:05:07.061 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.063 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4fd06427-ba1a-4005-a4f8-940a98abb45e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.077 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cba5302e-f5bd-461c-857c-f48f7e862ec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.078 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a7f72a-43d1-4c50-8f55-1c6180b1c995]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.090 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[541a9fdf-94ca-4d5e-a187-e16222c12489]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693206, 'reachable_time': 21881, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260779, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.092 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.092 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[939e58d7-2dc5-4a80-be35-cae94eabd2b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:07 np0005603609 systemd[1]: run-netns-ovnmeta\x2d5cc2535f\x2d0f8f\x2d4713\x2da35c\x2d9805048a29a8.mount: Deactivated successfully.
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.094 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 86cf2bf6-2f28-4435-b081-a3945070ed2d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis#033[00m
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.095 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cc2535f-0f8f-4713-a35c-9805048a29a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.096 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[22e8017f-ca5f-486f-9267-c127cc31ebca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.096 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 86cf2bf6-2f28-4435-b081-a3945070ed2d in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis#033[00m
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.098 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cc2535f-0f8f-4713-a35c-9805048a29a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.098 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f54d45f8-a860-45c6-9578-e47a049f0793]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.498 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.498 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:07.498 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:07.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:07 np0005603609 nova_compute[221550]: 2026-01-31 08:05:07.744 221554 DEBUG nova.compute.manager [req-524f242c-2e24-40f7-8997-d6c877441fa9 req-65105d4b-06e8-48a3-9439-5a004d71d1d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:07 np0005603609 nova_compute[221550]: 2026-01-31 08:05:07.746 221554 DEBUG oslo_concurrency.lockutils [req-524f242c-2e24-40f7-8997-d6c877441fa9 req-65105d4b-06e8-48a3-9439-5a004d71d1d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:07 np0005603609 nova_compute[221550]: 2026-01-31 08:05:07.746 221554 DEBUG oslo_concurrency.lockutils [req-524f242c-2e24-40f7-8997-d6c877441fa9 req-65105d4b-06e8-48a3-9439-5a004d71d1d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:07 np0005603609 nova_compute[221550]: 2026-01-31 08:05:07.747 221554 DEBUG oslo_concurrency.lockutils [req-524f242c-2e24-40f7-8997-d6c877441fa9 req-65105d4b-06e8-48a3-9439-5a004d71d1d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:07 np0005603609 nova_compute[221550]: 2026-01-31 08:05:07.748 221554 DEBUG nova.compute.manager [req-524f242c-2e24-40f7-8997-d6c877441fa9 req-65105d4b-06e8-48a3-9439-5a004d71d1d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:05:07 np0005603609 nova_compute[221550]: 2026-01-31 08:05:07.748 221554 DEBUG nova.compute.manager [req-524f242c-2e24-40f7-8997-d6c877441fa9 req-65105d4b-06e8-48a3-9439-5a004d71d1d2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:05:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:08.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:08 np0005603609 nova_compute[221550]: 2026-01-31 08:05:08.197 221554 INFO nova.virt.libvirt.driver [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Deleting instance files /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a_del#033[00m
Jan 31 03:05:08 np0005603609 nova_compute[221550]: 2026-01-31 08:05:08.198 221554 INFO nova.virt.libvirt.driver [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Deletion of /var/lib/nova/instances/dfba7f29-bde8-4327-a7b3-1c4fd44e045a_del complete#033[00m
Jan 31 03:05:08 np0005603609 nova_compute[221550]: 2026-01-31 08:05:08.277 221554 INFO nova.compute.manager [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Took 1.58 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:05:08 np0005603609 nova_compute[221550]: 2026-01-31 08:05:08.278 221554 DEBUG oslo.service.loopingcall [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:05:08 np0005603609 nova_compute[221550]: 2026-01-31 08:05:08.278 221554 DEBUG nova.compute.manager [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:05:08 np0005603609 nova_compute[221550]: 2026-01-31 08:05:08.278 221554 DEBUG nova.network.neutron [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:05:08 np0005603609 nova_compute[221550]: 2026-01-31 08:05:08.862 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:09.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e259 e259: 3 total, 3 up, 3 in
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.031 221554 DEBUG nova.compute.manager [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.032 221554 DEBUG oslo_concurrency.lockutils [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.032 221554 DEBUG oslo_concurrency.lockutils [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.032 221554 DEBUG oslo_concurrency.lockutils [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.033 221554 DEBUG nova.compute.manager [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.033 221554 WARNING nova.compute.manager [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.033 221554 DEBUG nova.compute.manager [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.033 221554 DEBUG oslo_concurrency.lockutils [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.033 221554 DEBUG oslo_concurrency.lockutils [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.034 221554 DEBUG oslo_concurrency.lockutils [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.034 221554 DEBUG nova.compute.manager [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.034 221554 WARNING nova.compute.manager [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.034 221554 DEBUG nova.compute.manager [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.034 221554 DEBUG oslo_concurrency.lockutils [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.035 221554 DEBUG oslo_concurrency.lockutils [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.035 221554 DEBUG oslo_concurrency.lockutils [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.035 221554 DEBUG nova.compute.manager [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.035 221554 WARNING nova.compute.manager [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.035 221554 DEBUG nova.compute.manager [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.035 221554 DEBUG oslo_concurrency.lockutils [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.036 221554 DEBUG oslo_concurrency.lockutils [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.036 221554 DEBUG oslo_concurrency.lockutils [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.036 221554 DEBUG nova.compute.manager [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.036 221554 DEBUG nova.compute.manager [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-unplugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.036 221554 DEBUG nova.compute.manager [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.036 221554 DEBUG oslo_concurrency.lockutils [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.037 221554 DEBUG oslo_concurrency.lockutils [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.037 221554 DEBUG oslo_concurrency.lockutils [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.037 221554 DEBUG nova.compute.manager [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] No waiting events found dispatching network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.037 221554 WARNING nova.compute.manager [req-ec25709a-1e41-437a-a10b-6a8764918293 req-49d63655-33a8-4dd7-8724-561336c8ef77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received unexpected event network-vif-plugged-86cf2bf6-2f28-4435-b081-a3945070ed2d for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:05:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:10.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.413 221554 DEBUG nova.network.neutron [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.451 221554 INFO nova.compute.manager [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Took 2.17 seconds to deallocate network for instance.#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.512 221554 DEBUG oslo_concurrency.lockutils [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.513 221554 DEBUG oslo_concurrency.lockutils [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.520 221554 DEBUG oslo_concurrency.lockutils [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.549 221554 DEBUG nova.compute.manager [req-03a3d55c-8e39-441c-a6a7-a4cf59f0eb8d req-304667be-a93a-4bc5-bdf3-f7b4c2e413ad 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Received event network-vif-deleted-86cf2bf6-2f28-4435-b081-a3945070ed2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.560 221554 INFO nova.scheduler.client.report [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Deleted allocations for instance dfba7f29-bde8-4327-a7b3-1c4fd44e045a#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.644 221554 DEBUG oslo_concurrency.lockutils [None req-e2f6b710-78f9-45d5-b20c-3414423509fb d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "dfba7f29-bde8-4327-a7b3-1c4fd44e045a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.657 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Acquiring lock "cea5640a-d322-4483-90fd-a2f73af32daf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.657 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lock "cea5640a-d322-4483-90fd-a2f73af32daf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.670 221554 DEBUG nova.compute.manager [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.736 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.737 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.745 221554 DEBUG nova.virt.hardware [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.745 221554 INFO nova.compute.claims [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:05:10 np0005603609 nova_compute[221550]: 2026-01-31 08:05:10.915 221554 DEBUG oslo_concurrency.processutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:05:11 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3053920949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.348 221554 DEBUG oslo_concurrency.processutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.354 221554 DEBUG nova.compute.provider_tree [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.372 221554 DEBUG nova.scheduler.client.report [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.393 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.394 221554 DEBUG nova.compute.manager [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.476 221554 DEBUG nova.compute.manager [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.477 221554 DEBUG nova.network.neutron [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.498 221554 INFO nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.523 221554 DEBUG nova.compute.manager [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.626 221554 DEBUG nova.compute.manager [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.628 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.629 221554 INFO nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Creating image(s)#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.669 221554 DEBUG nova.storage.rbd_utils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] rbd image cea5640a-d322-4483-90fd-a2f73af32daf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.699 221554 DEBUG nova.storage.rbd_utils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] rbd image cea5640a-d322-4483-90fd-a2f73af32daf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:05:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:05:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:11.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.728 221554 DEBUG nova.storage.rbd_utils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] rbd image cea5640a-d322-4483-90fd-a2f73af32daf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.732 221554 DEBUG oslo_concurrency.processutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.749 221554 DEBUG nova.policy [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a4721a8ec2c640e9974410c4fff33dfc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '37e2e9f699f541eda70b24ee42388ea2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.778 221554 DEBUG oslo_concurrency.processutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.779 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.780 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.780 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.810 221554 DEBUG nova.storage.rbd_utils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] rbd image cea5640a-d322-4483-90fd-a2f73af32daf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.813 221554 DEBUG oslo_concurrency.processutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 cea5640a-d322-4483-90fd-a2f73af32daf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:11 np0005603609 nova_compute[221550]: 2026-01-31 08:05:11.966 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:12.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:12 np0005603609 nova_compute[221550]: 2026-01-31 08:05:12.284 221554 DEBUG oslo_concurrency.processutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 cea5640a-d322-4483-90fd-a2f73af32daf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:12 np0005603609 nova_compute[221550]: 2026-01-31 08:05:12.374 221554 DEBUG nova.storage.rbd_utils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] resizing rbd image cea5640a-d322-4483-90fd-a2f73af32daf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:05:12 np0005603609 nova_compute[221550]: 2026-01-31 08:05:12.497 221554 DEBUG nova.objects.instance [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lazy-loading 'migration_context' on Instance uuid cea5640a-d322-4483-90fd-a2f73af32daf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:05:12 np0005603609 nova_compute[221550]: 2026-01-31 08:05:12.515 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:05:12 np0005603609 nova_compute[221550]: 2026-01-31 08:05:12.516 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Ensure instance console log exists: /var/lib/nova/instances/cea5640a-d322-4483-90fd-a2f73af32daf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:05:12 np0005603609 nova_compute[221550]: 2026-01-31 08:05:12.516 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:12 np0005603609 nova_compute[221550]: 2026-01-31 08:05:12.517 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:12 np0005603609 nova_compute[221550]: 2026-01-31 08:05:12.517 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:13 np0005603609 nova_compute[221550]: 2026-01-31 08:05:13.654 221554 DEBUG nova.network.neutron [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Successfully created port: 8202c002-6374-4377-a0ab-a026c0592b0d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:05:13 np0005603609 nova_compute[221550]: 2026-01-31 08:05:13.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:13.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:13 np0005603609 nova_compute[221550]: 2026-01-31 08:05:13.895 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:14.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:14 np0005603609 nova_compute[221550]: 2026-01-31 08:05:14.915 221554 DEBUG nova.network.neutron [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Successfully updated port: 8202c002-6374-4377-a0ab-a026c0592b0d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:05:14 np0005603609 nova_compute[221550]: 2026-01-31 08:05:14.946 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Acquiring lock "refresh_cache-cea5640a-d322-4483-90fd-a2f73af32daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:05:14 np0005603609 nova_compute[221550]: 2026-01-31 08:05:14.946 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Acquired lock "refresh_cache-cea5640a-d322-4483-90fd-a2f73af32daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:05:14 np0005603609 nova_compute[221550]: 2026-01-31 08:05:14.947 221554 DEBUG nova.network.neutron [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:05:15 np0005603609 nova_compute[221550]: 2026-01-31 08:05:15.045 221554 DEBUG nova.compute.manager [req-ef039532-66e5-4ac5-8edc-9f65c58170b1 req-3cf800a4-aebf-41cc-a464-2991991d5a78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Received event network-changed-8202c002-6374-4377-a0ab-a026c0592b0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:15 np0005603609 nova_compute[221550]: 2026-01-31 08:05:15.046 221554 DEBUG nova.compute.manager [req-ef039532-66e5-4ac5-8edc-9f65c58170b1 req-3cf800a4-aebf-41cc-a464-2991991d5a78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Refreshing instance network info cache due to event network-changed-8202c002-6374-4377-a0ab-a026c0592b0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:05:15 np0005603609 nova_compute[221550]: 2026-01-31 08:05:15.046 221554 DEBUG oslo_concurrency.lockutils [req-ef039532-66e5-4ac5-8edc-9f65c58170b1 req-3cf800a4-aebf-41cc-a464-2991991d5a78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-cea5640a-d322-4483-90fd-a2f73af32daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:05:15 np0005603609 nova_compute[221550]: 2026-01-31 08:05:15.188 221554 DEBUG nova.network.neutron [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:05:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:15.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:16.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.098 221554 DEBUG nova.network.neutron [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Updating instance_info_cache with network_info: [{"id": "8202c002-6374-4377-a0ab-a026c0592b0d", "address": "fa:16:3e:c1:cf:c8", "network": {"id": "846d3bec-6823-4905-89ce-0986aecfd46f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-931263165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37e2e9f699f541eda70b24ee42388ea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8202c002-63", "ovs_interfaceid": "8202c002-6374-4377-a0ab-a026c0592b0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.141 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Releasing lock "refresh_cache-cea5640a-d322-4483-90fd-a2f73af32daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.141 221554 DEBUG nova.compute.manager [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Instance network_info: |[{"id": "8202c002-6374-4377-a0ab-a026c0592b0d", "address": "fa:16:3e:c1:cf:c8", "network": {"id": "846d3bec-6823-4905-89ce-0986aecfd46f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-931263165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37e2e9f699f541eda70b24ee42388ea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8202c002-63", "ovs_interfaceid": "8202c002-6374-4377-a0ab-a026c0592b0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.142 221554 DEBUG oslo_concurrency.lockutils [req-ef039532-66e5-4ac5-8edc-9f65c58170b1 req-3cf800a4-aebf-41cc-a464-2991991d5a78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-cea5640a-d322-4483-90fd-a2f73af32daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.142 221554 DEBUG nova.network.neutron [req-ef039532-66e5-4ac5-8edc-9f65c58170b1 req-3cf800a4-aebf-41cc-a464-2991991d5a78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Refreshing network info cache for port 8202c002-6374-4377-a0ab-a026c0592b0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.144 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Start _get_guest_xml network_info=[{"id": "8202c002-6374-4377-a0ab-a026c0592b0d", "address": "fa:16:3e:c1:cf:c8", "network": {"id": "846d3bec-6823-4905-89ce-0986aecfd46f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-931263165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37e2e9f699f541eda70b24ee42388ea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8202c002-63", "ovs_interfaceid": "8202c002-6374-4377-a0ab-a026c0592b0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.149 221554 WARNING nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.154 221554 DEBUG nova.virt.libvirt.host [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.155 221554 DEBUG nova.virt.libvirt.host [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.159 221554 DEBUG nova.virt.libvirt.host [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.160 221554 DEBUG nova.virt.libvirt.host [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.161 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.161 221554 DEBUG nova.virt.hardware [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.162 221554 DEBUG nova.virt.hardware [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.162 221554 DEBUG nova.virt.hardware [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.162 221554 DEBUG nova.virt.hardware [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.162 221554 DEBUG nova.virt.hardware [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.163 221554 DEBUG nova.virt.hardware [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.163 221554 DEBUG nova.virt.hardware [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.163 221554 DEBUG nova.virt.hardware [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.163 221554 DEBUG nova.virt.hardware [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.164 221554 DEBUG nova.virt.hardware [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.164 221554 DEBUG nova.virt.hardware [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.167 221554 DEBUG oslo_concurrency.processutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:16 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Jan 31 03:05:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:05:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3115382932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.624 221554 DEBUG oslo_concurrency.processutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.650 221554 DEBUG nova.storage.rbd_utils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] rbd image cea5640a-d322-4483-90fd-a2f73af32daf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.653 221554 DEBUG oslo_concurrency.processutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.668 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:16 np0005603609 nova_compute[221550]: 2026-01-31 08:05:16.971 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:05:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3374096316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.064 221554 DEBUG oslo_concurrency.processutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.065 221554 DEBUG nova.virt.libvirt.vif [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:05:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1107983605',display_name='tempest-InstanceActionsNegativeTestJSON-server-1107983605',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1107983605',id=101,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='37e2e9f699f541eda70b24ee42388ea2',ramdisk_id='',reservation_id='r-l5tjt0ob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1072045255',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1072045255-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:05:11Z,user_data=None,user_id='a4721a8ec2c640e9974410c4fff33dfc',uuid=cea5640a-d322-4483-90fd-a2f73af32daf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8202c002-6374-4377-a0ab-a026c0592b0d", "address": "fa:16:3e:c1:cf:c8", "network": {"id": "846d3bec-6823-4905-89ce-0986aecfd46f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-931263165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37e2e9f699f541eda70b24ee42388ea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8202c002-63", "ovs_interfaceid": "8202c002-6374-4377-a0ab-a026c0592b0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.066 221554 DEBUG nova.network.os_vif_util [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Converting VIF {"id": "8202c002-6374-4377-a0ab-a026c0592b0d", "address": "fa:16:3e:c1:cf:c8", "network": {"id": "846d3bec-6823-4905-89ce-0986aecfd46f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-931263165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37e2e9f699f541eda70b24ee42388ea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8202c002-63", "ovs_interfaceid": "8202c002-6374-4377-a0ab-a026c0592b0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.067 221554 DEBUG nova.network.os_vif_util [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:c8,bridge_name='br-int',has_traffic_filtering=True,id=8202c002-6374-4377-a0ab-a026c0592b0d,network=Network(846d3bec-6823-4905-89ce-0986aecfd46f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8202c002-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.068 221554 DEBUG nova.objects.instance [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lazy-loading 'pci_devices' on Instance uuid cea5640a-d322-4483-90fd-a2f73af32daf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.103 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  <uuid>cea5640a-d322-4483-90fd-a2f73af32daf</uuid>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  <name>instance-00000065</name>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1107983605</nova:name>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:05:16</nova:creationTime>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        <nova:user uuid="a4721a8ec2c640e9974410c4fff33dfc">tempest-InstanceActionsNegativeTestJSON-1072045255-project-member</nova:user>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        <nova:project uuid="37e2e9f699f541eda70b24ee42388ea2">tempest-InstanceActionsNegativeTestJSON-1072045255</nova:project>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        <nova:port uuid="8202c002-6374-4377-a0ab-a026c0592b0d">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <entry name="serial">cea5640a-d322-4483-90fd-a2f73af32daf</entry>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <entry name="uuid">cea5640a-d322-4483-90fd-a2f73af32daf</entry>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/cea5640a-d322-4483-90fd-a2f73af32daf_disk">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/cea5640a-d322-4483-90fd-a2f73af32daf_disk.config">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:c1:cf:c8"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <target dev="tap8202c002-63"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/cea5640a-d322-4483-90fd-a2f73af32daf/console.log" append="off"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:05:17 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:05:17 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:05:17 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:05:17 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.104 221554 DEBUG nova.compute.manager [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Preparing to wait for external event network-vif-plugged-8202c002-6374-4377-a0ab-a026c0592b0d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.104 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Acquiring lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.105 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.105 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.106 221554 DEBUG nova.virt.libvirt.vif [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:05:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1107983605',display_name='tempest-InstanceActionsNegativeTestJSON-server-1107983605',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1107983605',id=101,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='37e2e9f699f541eda70b24ee42388ea2',ramdisk_id='',reservation_id='r-l5tjt0ob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1072045255',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1072045255-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:05:11Z,user_data=None,user_id='a4721a8ec2c640e9974410c4fff33dfc',uuid=cea5640a-d322-4483-90fd-a2f73af32daf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8202c002-6374-4377-a0ab-a026c0592b0d", "address": "fa:16:3e:c1:cf:c8", "network": {"id": "846d3bec-6823-4905-89ce-0986aecfd46f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-931263165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37e2e9f699f541eda70b24ee42388ea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8202c002-63", "ovs_interfaceid": "8202c002-6374-4377-a0ab-a026c0592b0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.106 221554 DEBUG nova.network.os_vif_util [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Converting VIF {"id": "8202c002-6374-4377-a0ab-a026c0592b0d", "address": "fa:16:3e:c1:cf:c8", "network": {"id": "846d3bec-6823-4905-89ce-0986aecfd46f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-931263165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37e2e9f699f541eda70b24ee42388ea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8202c002-63", "ovs_interfaceid": "8202c002-6374-4377-a0ab-a026c0592b0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.107 221554 DEBUG nova.network.os_vif_util [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:c8,bridge_name='br-int',has_traffic_filtering=True,id=8202c002-6374-4377-a0ab-a026c0592b0d,network=Network(846d3bec-6823-4905-89ce-0986aecfd46f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8202c002-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.107 221554 DEBUG os_vif [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:c8,bridge_name='br-int',has_traffic_filtering=True,id=8202c002-6374-4377-a0ab-a026c0592b0d,network=Network(846d3bec-6823-4905-89ce-0986aecfd46f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8202c002-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.108 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.108 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.109 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.112 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.112 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8202c002-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.113 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8202c002-63, col_values=(('external_ids', {'iface-id': '8202c002-6374-4377-a0ab-a026c0592b0d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:cf:c8', 'vm-uuid': 'cea5640a-d322-4483-90fd-a2f73af32daf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.114 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:17 np0005603609 NetworkManager[49064]: <info>  [1769846717.1153] manager: (tap8202c002-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.117 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.118 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.119 221554 INFO os_vif [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:c8,bridge_name='br-int',has_traffic_filtering=True,id=8202c002-6374-4377-a0ab-a026c0592b0d,network=Network(846d3bec-6823-4905-89ce-0986aecfd46f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8202c002-63')#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.224 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.225 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.225 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] No VIF found with MAC fa:16:3e:c1:cf:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.226 221554 INFO nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Using config drive#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.314 221554 DEBUG nova.storage.rbd_utils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] rbd image cea5640a-d322-4483-90fd-a2f73af32daf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:05:17 np0005603609 nova_compute[221550]: 2026-01-31 08:05:17.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:17.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:18.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:18 np0005603609 nova_compute[221550]: 2026-01-31 08:05:18.190 221554 INFO nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Creating config drive at /var/lib/nova/instances/cea5640a-d322-4483-90fd-a2f73af32daf/disk.config#033[00m
Jan 31 03:05:18 np0005603609 nova_compute[221550]: 2026-01-31 08:05:18.195 221554 DEBUG oslo_concurrency.processutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cea5640a-d322-4483-90fd-a2f73af32daf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp2dd7_5vf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:18 np0005603609 nova_compute[221550]: 2026-01-31 08:05:18.318 221554 DEBUG oslo_concurrency.processutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cea5640a-d322-4483-90fd-a2f73af32daf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp2dd7_5vf" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:18 np0005603609 nova_compute[221550]: 2026-01-31 08:05:18.344 221554 DEBUG nova.storage.rbd_utils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] rbd image cea5640a-d322-4483-90fd-a2f73af32daf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:05:18 np0005603609 nova_compute[221550]: 2026-01-31 08:05:18.347 221554 DEBUG oslo_concurrency.processutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cea5640a-d322-4483-90fd-a2f73af32daf/disk.config cea5640a-d322-4483-90fd-a2f73af32daf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:18 np0005603609 nova_compute[221550]: 2026-01-31 08:05:18.581 221554 DEBUG nova.network.neutron [req-ef039532-66e5-4ac5-8edc-9f65c58170b1 req-3cf800a4-aebf-41cc-a464-2991991d5a78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Updated VIF entry in instance network info cache for port 8202c002-6374-4377-a0ab-a026c0592b0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:05:18 np0005603609 nova_compute[221550]: 2026-01-31 08:05:18.582 221554 DEBUG nova.network.neutron [req-ef039532-66e5-4ac5-8edc-9f65c58170b1 req-3cf800a4-aebf-41cc-a464-2991991d5a78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Updating instance_info_cache with network_info: [{"id": "8202c002-6374-4377-a0ab-a026c0592b0d", "address": "fa:16:3e:c1:cf:c8", "network": {"id": "846d3bec-6823-4905-89ce-0986aecfd46f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-931263165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37e2e9f699f541eda70b24ee42388ea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8202c002-63", "ovs_interfaceid": "8202c002-6374-4377-a0ab-a026c0592b0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:05:18 np0005603609 nova_compute[221550]: 2026-01-31 08:05:18.600 221554 DEBUG oslo_concurrency.lockutils [req-ef039532-66e5-4ac5-8edc-9f65c58170b1 req-3cf800a4-aebf-41cc-a464-2991991d5a78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-cea5640a-d322-4483-90fd-a2f73af32daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:05:18 np0005603609 nova_compute[221550]: 2026-01-31 08:05:18.932 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:19 np0005603609 nova_compute[221550]: 2026-01-31 08:05:19.565 221554 DEBUG oslo_concurrency.processutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cea5640a-d322-4483-90fd-a2f73af32daf/disk.config cea5640a-d322-4483-90fd-a2f73af32daf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:19 np0005603609 nova_compute[221550]: 2026-01-31 08:05:19.565 221554 INFO nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Deleting local config drive /var/lib/nova/instances/cea5640a-d322-4483-90fd-a2f73af32daf/disk.config because it was imported into RBD.#033[00m
Jan 31 03:05:19 np0005603609 kernel: tap8202c002-63: entered promiscuous mode
Jan 31 03:05:19 np0005603609 NetworkManager[49064]: <info>  [1769846719.6044] manager: (tap8202c002-63): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Jan 31 03:05:19 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:19Z|00390|binding|INFO|Claiming lport 8202c002-6374-4377-a0ab-a026c0592b0d for this chassis.
Jan 31 03:05:19 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:19Z|00391|binding|INFO|8202c002-6374-4377-a0ab-a026c0592b0d: Claiming fa:16:3e:c1:cf:c8 10.100.0.7
Jan 31 03:05:19 np0005603609 nova_compute[221550]: 2026-01-31 08:05:19.606 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:19 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:19Z|00392|binding|INFO|Setting lport 8202c002-6374-4377-a0ab-a026c0592b0d ovn-installed in OVS
Jan 31 03:05:19 np0005603609 nova_compute[221550]: 2026-01-31 08:05:19.613 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:19 np0005603609 nova_compute[221550]: 2026-01-31 08:05:19.615 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:19 np0005603609 systemd-udevd[261102]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:05:19 np0005603609 NetworkManager[49064]: <info>  [1769846719.6351] device (tap8202c002-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:05:19 np0005603609 NetworkManager[49064]: <info>  [1769846719.6357] device (tap8202c002-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:05:19 np0005603609 systemd-machined[190912]: New machine qemu-49-instance-00000065.
Jan 31 03:05:19 np0005603609 systemd[1]: Started Virtual Machine qemu-49-instance-00000065.
Jan 31 03:05:19 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:19Z|00393|binding|INFO|Setting lport 8202c002-6374-4377-a0ab-a026c0592b0d up in Southbound
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.648 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:cf:c8 10.100.0.7'], port_security=['fa:16:3e:c1:cf:c8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'cea5640a-d322-4483-90fd-a2f73af32daf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-846d3bec-6823-4905-89ce-0986aecfd46f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37e2e9f699f541eda70b24ee42388ea2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '392966ae-7704-4146-b003-43055b15a227', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=397767b2-6259-488e-982e-ff5b99c57c32, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=8202c002-6374-4377-a0ab-a026c0592b0d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.649 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 8202c002-6374-4377-a0ab-a026c0592b0d in datapath 846d3bec-6823-4905-89ce-0986aecfd46f bound to our chassis#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.650 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 846d3bec-6823-4905-89ce-0986aecfd46f#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.658 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[30e45339-8034-4fba-952a-c576a7bdf230]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.659 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap846d3bec-61 in ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.660 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap846d3bec-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.660 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7d7a38ac-ed6b-48f7-9b94-ea839090fcd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.661 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[adafa731-743f-4659-89ec-57866457b6db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.669 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[ad00a7da-c5ce-4891-a140-4eb703441177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.677 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2286eab7-c578-446f-88e0-403df2b3b615]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.692 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[55ab763c-3ee6-44b7-912d-6aa5d8ab5758]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:19 np0005603609 NetworkManager[49064]: <info>  [1769846719.6980] manager: (tap846d3bec-60): new Veth device (/org/freedesktop/NetworkManager/Devices/192)
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.696 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8a346a0c-90e5-4627-98ae-ce3cda505814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:19 np0005603609 systemd-udevd[261107]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.717 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[078d8657-91ef-4131-be5a-e1982b5a1bf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.719 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a58463e1-d060-46d7-82ae-892b1ff3c87f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:05:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:19.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:05:19 np0005603609 NetworkManager[49064]: <info>  [1769846719.7336] device (tap846d3bec-60): carrier: link connected
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.736 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[350d7503-3300-44fe-accb-4dd8747437a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.745 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce78f3a-d587-4d50-92c2-d875c1ff8b57]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap846d3bec-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:0f:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695943, 'reachable_time': 20630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261138, 'error': None, 'target': 'ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.755 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a57139c7-4873-44bf-bc4f-45351415e625]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:f0d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695943, 'tstamp': 695943}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261139, 'error': None, 'target': 'ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.762 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e7bf56bf-85b0-467c-9342-d08a33dd3350]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap846d3bec-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:0f:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695943, 'reachable_time': 20630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261140, 'error': None, 'target': 'ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.776 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fec1e574-e32e-4d76-8833-dd22cea5ceb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.809 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6a9e48b5-43b3-413b-9b4a-58fc9a7f58dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.810 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap846d3bec-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.810 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.811 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap846d3bec-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:19 np0005603609 nova_compute[221550]: 2026-01-31 08:05:19.812 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:19 np0005603609 NetworkManager[49064]: <info>  [1769846719.8133] manager: (tap846d3bec-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Jan 31 03:05:19 np0005603609 kernel: tap846d3bec-60: entered promiscuous mode
Jan 31 03:05:19 np0005603609 nova_compute[221550]: 2026-01-31 08:05:19.815 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.816 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap846d3bec-60, col_values=(('external_ids', {'iface-id': '181e1158-d274-41e2-a07e-61edb6e47de8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:19 np0005603609 nova_compute[221550]: 2026-01-31 08:05:19.817 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:19 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:19Z|00394|binding|INFO|Releasing lport 181e1158-d274-41e2-a07e-61edb6e47de8 from this chassis (sb_readonly=0)
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.819 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/846d3bec-6823-4905-89ce-0986aecfd46f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/846d3bec-6823-4905-89ce-0986aecfd46f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.820 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[17448347-3aa0-4ee9-b107-51a6a601b0ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.820 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-846d3bec-6823-4905-89ce-0986aecfd46f
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/846d3bec-6823-4905-89ce-0986aecfd46f.pid.haproxy
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 846d3bec-6823-4905-89ce-0986aecfd46f
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:05:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:19.821 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f', 'env', 'PROCESS_TAG=haproxy-846d3bec-6823-4905-89ce-0986aecfd46f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/846d3bec-6823-4905-89ce-0986aecfd46f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:05:19 np0005603609 nova_compute[221550]: 2026-01-31 08:05:19.823 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:19 np0005603609 nova_compute[221550]: 2026-01-31 08:05:19.959 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846719.9587233, cea5640a-d322-4483-90fd-a2f73af32daf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:05:19 np0005603609 nova_compute[221550]: 2026-01-31 08:05:19.959 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] VM Started (Lifecycle Event)#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.012 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.016 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846719.9588737, cea5640a-d322-4483-90fd-a2f73af32daf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.017 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.042 221554 DEBUG nova.compute.manager [req-afc62300-220f-4eae-8855-021cd67a0d84 req-ab098ae4-6bed-4323-9354-790937a843ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Received event network-vif-plugged-8202c002-6374-4377-a0ab-a026c0592b0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.043 221554 DEBUG oslo_concurrency.lockutils [req-afc62300-220f-4eae-8855-021cd67a0d84 req-ab098ae4-6bed-4323-9354-790937a843ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.043 221554 DEBUG oslo_concurrency.lockutils [req-afc62300-220f-4eae-8855-021cd67a0d84 req-ab098ae4-6bed-4323-9354-790937a843ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.043 221554 DEBUG oslo_concurrency.lockutils [req-afc62300-220f-4eae-8855-021cd67a0d84 req-ab098ae4-6bed-4323-9354-790937a843ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.044 221554 DEBUG nova.compute.manager [req-afc62300-220f-4eae-8855-021cd67a0d84 req-ab098ae4-6bed-4323-9354-790937a843ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Processing event network-vif-plugged-8202c002-6374-4377-a0ab-a026c0592b0d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.045 221554 DEBUG nova.compute.manager [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.045 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.049 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.054 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846720.048817, cea5640a-d322-4483-90fd-a2f73af32daf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.054 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.058 221554 INFO nova.virt.libvirt.driver [-] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Instance spawned successfully.#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.058 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:05:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:05:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:20.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.088 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.090 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.101 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.101 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.102 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.102 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.103 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.103 221554 DEBUG nova.virt.libvirt.driver [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.143 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:05:20 np0005603609 podman[261213]: 2026-01-31 08:05:20.094500842 +0000 UTC m=+0.018885966 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.197 221554 INFO nova.compute.manager [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Took 8.57 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.198 221554 DEBUG nova.compute.manager [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:20 np0005603609 podman[261213]: 2026-01-31 08:05:20.211612741 +0000 UTC m=+0.135997835 container create 4357ad6384f8ef555f8ca18c1367adece5ee76b2af55aedf7d88bfe6ae7ecbaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:05:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:20.214 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:05:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.261 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:20 np0005603609 systemd[1]: Started libpod-conmon-4357ad6384f8ef555f8ca18c1367adece5ee76b2af55aedf7d88bfe6ae7ecbaf.scope.
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.278 221554 INFO nova.compute.manager [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Took 9.56 seconds to build instance.#033[00m
Jan 31 03:05:20 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:05:20 np0005603609 nova_compute[221550]: 2026-01-31 08:05:20.309 221554 DEBUG oslo_concurrency.lockutils [None req-2115df2b-8e5c-4f13-a761-2d4f5d92d5dc a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lock "cea5640a-d322-4483-90fd-a2f73af32daf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:20 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a316109c3f63b6851a145d7d8796d239665bee374bfe5af626cc3a2ca593057/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:05:20 np0005603609 podman[261213]: 2026-01-31 08:05:20.323944816 +0000 UTC m=+0.248329930 container init 4357ad6384f8ef555f8ca18c1367adece5ee76b2af55aedf7d88bfe6ae7ecbaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:05:20 np0005603609 podman[261213]: 2026-01-31 08:05:20.328436704 +0000 UTC m=+0.252821798 container start 4357ad6384f8ef555f8ca18c1367adece5ee76b2af55aedf7d88bfe6ae7ecbaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:05:20 np0005603609 neutron-haproxy-ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f[261228]: [NOTICE]   (261232) : New worker (261234) forked
Jan 31 03:05:20 np0005603609 neutron-haproxy-ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f[261228]: [NOTICE]   (261232) : Loading success.
Jan 31 03:05:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:20.379 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:05:21 np0005603609 nova_compute[221550]: 2026-01-31 08:05:21.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:21 np0005603609 nova_compute[221550]: 2026-01-31 08:05:21.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:21.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:21 np0005603609 nova_compute[221550]: 2026-01-31 08:05:21.729 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:21 np0005603609 nova_compute[221550]: 2026-01-31 08:05:21.729 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:21 np0005603609 nova_compute[221550]: 2026-01-31 08:05:21.729 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:21 np0005603609 nova_compute[221550]: 2026-01-31 08:05:21.729 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:05:21 np0005603609 nova_compute[221550]: 2026-01-31 08:05:21.730 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:21 np0005603609 nova_compute[221550]: 2026-01-31 08:05:21.932 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846706.930571, dfba7f29-bde8-4327-a7b3-1c4fd44e045a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:05:21 np0005603609 nova_compute[221550]: 2026-01-31 08:05:21.934 221554 INFO nova.compute.manager [-] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.023 221554 DEBUG oslo_concurrency.lockutils [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Acquiring lock "cea5640a-d322-4483-90fd-a2f73af32daf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.024 221554 DEBUG oslo_concurrency.lockutils [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lock "cea5640a-d322-4483-90fd-a2f73af32daf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.024 221554 DEBUG oslo_concurrency.lockutils [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Acquiring lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.025 221554 DEBUG oslo_concurrency.lockutils [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.025 221554 DEBUG oslo_concurrency.lockutils [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.026 221554 INFO nova.compute.manager [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Terminating instance#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.027 221554 DEBUG nova.compute.manager [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:05:22 np0005603609 kernel: tap8202c002-63 (unregistering): left promiscuous mode
Jan 31 03:05:22 np0005603609 NetworkManager[49064]: <info>  [1769846722.0817] device (tap8202c002-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.082 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:22.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:22 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:22Z|00395|binding|INFO|Releasing lport 8202c002-6374-4377-a0ab-a026c0592b0d from this chassis (sb_readonly=0)
Jan 31 03:05:22 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:22Z|00396|binding|INFO|Setting lport 8202c002-6374-4377-a0ab-a026c0592b0d down in Southbound
Jan 31 03:05:22 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:22Z|00397|binding|INFO|Removing iface tap8202c002-63 ovn-installed in OVS
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.088 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.094 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:22.101 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:cf:c8 10.100.0.7'], port_security=['fa:16:3e:c1:cf:c8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'cea5640a-d322-4483-90fd-a2f73af32daf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-846d3bec-6823-4905-89ce-0986aecfd46f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37e2e9f699f541eda70b24ee42388ea2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '392966ae-7704-4146-b003-43055b15a227', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=397767b2-6259-488e-982e-ff5b99c57c32, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=8202c002-6374-4377-a0ab-a026c0592b0d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:05:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:22.103 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 8202c002-6374-4377-a0ab-a026c0592b0d in datapath 846d3bec-6823-4905-89ce-0986aecfd46f unbound from our chassis#033[00m
Jan 31 03:05:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:22.104 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 846d3bec-6823-4905-89ce-0986aecfd46f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:05:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:22.105 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3cec407c-e5c6-4edf-85ad-0f6a46776010]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:22.106 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f namespace which is not needed anymore#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.109 221554 DEBUG nova.compute.manager [None req-73543e57-a6c7-452f-ae9e-d68d69e06041 - - - - - -] [instance: dfba7f29-bde8-4327-a7b3-1c4fd44e045a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.114 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:22 np0005603609 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000065.scope: Deactivated successfully.
Jan 31 03:05:22 np0005603609 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000065.scope: Consumed 2.178s CPU time.
Jan 31 03:05:22 np0005603609 systemd-machined[190912]: Machine qemu-49-instance-00000065 terminated.
Jan 31 03:05:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:05:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/18868591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.166 221554 DEBUG nova.compute.manager [req-52cec0e5-f664-49ef-8c40-fbc65dc21232 req-f298587c-8bca-44fc-a65d-135769f93985 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Received event network-vif-plugged-8202c002-6374-4377-a0ab-a026c0592b0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.167 221554 DEBUG oslo_concurrency.lockutils [req-52cec0e5-f664-49ef-8c40-fbc65dc21232 req-f298587c-8bca-44fc-a65d-135769f93985 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.167 221554 DEBUG oslo_concurrency.lockutils [req-52cec0e5-f664-49ef-8c40-fbc65dc21232 req-f298587c-8bca-44fc-a65d-135769f93985 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.167 221554 DEBUG oslo_concurrency.lockutils [req-52cec0e5-f664-49ef-8c40-fbc65dc21232 req-f298587c-8bca-44fc-a65d-135769f93985 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.168 221554 DEBUG nova.compute.manager [req-52cec0e5-f664-49ef-8c40-fbc65dc21232 req-f298587c-8bca-44fc-a65d-135769f93985 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] No waiting events found dispatching network-vif-plugged-8202c002-6374-4377-a0ab-a026c0592b0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.168 221554 WARNING nova.compute.manager [req-52cec0e5-f664-49ef-8c40-fbc65dc21232 req-f298587c-8bca-44fc-a65d-135769f93985 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Received unexpected event network-vif-plugged-8202c002-6374-4377-a0ab-a026c0592b0d for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.173 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:22 np0005603609 neutron-haproxy-ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f[261228]: [NOTICE]   (261232) : haproxy version is 2.8.14-c23fe91
Jan 31 03:05:22 np0005603609 neutron-haproxy-ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f[261228]: [NOTICE]   (261232) : path to executable is /usr/sbin/haproxy
Jan 31 03:05:22 np0005603609 neutron-haproxy-ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f[261228]: [ALERT]    (261232) : Current worker (261234) exited with code 143 (Terminated)
Jan 31 03:05:22 np0005603609 neutron-haproxy-ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f[261228]: [WARNING]  (261232) : All workers exited. Exiting... (0)
Jan 31 03:05:22 np0005603609 systemd[1]: libpod-4357ad6384f8ef555f8ca18c1367adece5ee76b2af55aedf7d88bfe6ae7ecbaf.scope: Deactivated successfully.
Jan 31 03:05:22 np0005603609 podman[261419]: 2026-01-31 08:05:22.219456268 +0000 UTC m=+0.043039357 container died 4357ad6384f8ef555f8ca18c1367adece5ee76b2af55aedf7d88bfe6ae7ecbaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:05:22 np0005603609 NetworkManager[49064]: <info>  [1769846722.2424] manager: (tap8202c002-63): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Jan 31 03:05:22 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4357ad6384f8ef555f8ca18c1367adece5ee76b2af55aedf7d88bfe6ae7ecbaf-userdata-shm.mount: Deactivated successfully.
Jan 31 03:05:22 np0005603609 systemd[1]: var-lib-containers-storage-overlay-6a316109c3f63b6851a145d7d8796d239665bee374bfe5af626cc3a2ca593057-merged.mount: Deactivated successfully.
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.266 221554 INFO nova.virt.libvirt.driver [-] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Instance destroyed successfully.#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.267 221554 DEBUG nova.objects.instance [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lazy-loading 'resources' on Instance uuid cea5640a-d322-4483-90fd-a2f73af32daf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:05:22 np0005603609 podman[261419]: 2026-01-31 08:05:22.300571261 +0000 UTC m=+0.124154340 container cleanup 4357ad6384f8ef555f8ca18c1367adece5ee76b2af55aedf7d88bfe6ae7ecbaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:05:22 np0005603609 systemd[1]: libpod-conmon-4357ad6384f8ef555f8ca18c1367adece5ee76b2af55aedf7d88bfe6ae7ecbaf.scope: Deactivated successfully.
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.332 221554 DEBUG nova.virt.libvirt.vif [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1107983605',display_name='tempest-InstanceActionsNegativeTestJSON-server-1107983605',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1107983605',id=101,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:05:20Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='37e2e9f699f541eda70b24ee42388ea2',ramdisk_id='',reservation_id='r-l5tjt0ob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1072045255',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1072045255-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:05:20Z,user_data=None,user_id='a4721a8ec2c640e9974410c4fff33dfc',uuid=cea5640a-d322-4483-90fd-a2f73af32daf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8202c002-6374-4377-a0ab-a026c0592b0d", "address": "fa:16:3e:c1:cf:c8", "network": {"id": "846d3bec-6823-4905-89ce-0986aecfd46f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-931263165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37e2e9f699f541eda70b24ee42388ea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8202c002-63", "ovs_interfaceid": "8202c002-6374-4377-a0ab-a026c0592b0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.332 221554 DEBUG nova.network.os_vif_util [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Converting VIF {"id": "8202c002-6374-4377-a0ab-a026c0592b0d", "address": "fa:16:3e:c1:cf:c8", "network": {"id": "846d3bec-6823-4905-89ce-0986aecfd46f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-931263165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37e2e9f699f541eda70b24ee42388ea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8202c002-63", "ovs_interfaceid": "8202c002-6374-4377-a0ab-a026c0592b0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.333 221554 DEBUG nova.network.os_vif_util [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:c8,bridge_name='br-int',has_traffic_filtering=True,id=8202c002-6374-4377-a0ab-a026c0592b0d,network=Network(846d3bec-6823-4905-89ce-0986aecfd46f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8202c002-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.333 221554 DEBUG os_vif [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:c8,bridge_name='br-int',has_traffic_filtering=True,id=8202c002-6374-4377-a0ab-a026c0592b0d,network=Network(846d3bec-6823-4905-89ce-0986aecfd46f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8202c002-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.335 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.335 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8202c002-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.336 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.339 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.341 221554 INFO os_vif [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:c8,bridge_name='br-int',has_traffic_filtering=True,id=8202c002-6374-4377-a0ab-a026c0592b0d,network=Network(846d3bec-6823-4905-89ce-0986aecfd46f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8202c002-63')#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.384 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.385 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:05:22 np0005603609 podman[261461]: 2026-01-31 08:05:22.4355319 +0000 UTC m=+0.116529006 container remove 4357ad6384f8ef555f8ca18c1367adece5ee76b2af55aedf7d88bfe6ae7ecbaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:05:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:22.439 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb42b51-6c0b-4da9-b035-f2a99776dcda]: (4, ('Sat Jan 31 08:05:22 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f (4357ad6384f8ef555f8ca18c1367adece5ee76b2af55aedf7d88bfe6ae7ecbaf)\n4357ad6384f8ef555f8ca18c1367adece5ee76b2af55aedf7d88bfe6ae7ecbaf\nSat Jan 31 08:05:22 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f (4357ad6384f8ef555f8ca18c1367adece5ee76b2af55aedf7d88bfe6ae7ecbaf)\n4357ad6384f8ef555f8ca18c1367adece5ee76b2af55aedf7d88bfe6ae7ecbaf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:22.441 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[feafb7f0-8c84-44c7-9b93-e39cc50496f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:22.442 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap846d3bec-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.444 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:22 np0005603609 kernel: tap846d3bec-60: left promiscuous mode
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.446 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:22.449 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b266ee9d-533c-4822-9a99-f6cddd8f3397]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.452 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:22.461 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1563bd-4f7e-4f0c-aaf4-6c82560265d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:22.463 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ceb2132c-43f2-4f38-8528-9cd6fcca70da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:22.472 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c69af6d4-781d-439f-b3cf-4f8917fad608]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695939, 'reachable_time': 39181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261494, 'error': None, 'target': 'ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:22 np0005603609 systemd[1]: run-netns-ovnmeta\x2d846d3bec\x2d6823\x2d4905\x2d89ce\x2d0986aecfd46f.mount: Deactivated successfully.
Jan 31 03:05:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:22.476 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-846d3bec-6823-4905-89ce-0986aecfd46f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:05:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:22.476 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[16659377-195f-4bbd-ab4e-d1d4df466698]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.539 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.540 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4547MB free_disk=20.85565948486328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.541 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.541 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.653 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance cea5640a-d322-4483-90fd-a2f73af32daf actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.653 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.654 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:05:22 np0005603609 nova_compute[221550]: 2026-01-31 08:05:22.732 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:05:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:05:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:05:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:05:23 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1249913127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:05:23 np0005603609 nova_compute[221550]: 2026-01-31 08:05:23.153 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:23 np0005603609 nova_compute[221550]: 2026-01-31 08:05:23.158 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:05:23 np0005603609 nova_compute[221550]: 2026-01-31 08:05:23.178 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:05:23 np0005603609 nova_compute[221550]: 2026-01-31 08:05:23.209 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:05:23 np0005603609 nova_compute[221550]: 2026-01-31 08:05:23.210 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:23.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:23 np0005603609 nova_compute[221550]: 2026-01-31 08:05:23.970 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:24.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:24 np0005603609 nova_compute[221550]: 2026-01-31 08:05:24.330 221554 DEBUG nova.compute.manager [req-fd3f7358-0ca9-41f2-abfc-252f5b32428e req-05047f6f-5041-466e-9804-1f4e048327eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Received event network-vif-unplugged-8202c002-6374-4377-a0ab-a026c0592b0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:24 np0005603609 nova_compute[221550]: 2026-01-31 08:05:24.330 221554 DEBUG oslo_concurrency.lockutils [req-fd3f7358-0ca9-41f2-abfc-252f5b32428e req-05047f6f-5041-466e-9804-1f4e048327eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:24 np0005603609 nova_compute[221550]: 2026-01-31 08:05:24.330 221554 DEBUG oslo_concurrency.lockutils [req-fd3f7358-0ca9-41f2-abfc-252f5b32428e req-05047f6f-5041-466e-9804-1f4e048327eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:24 np0005603609 nova_compute[221550]: 2026-01-31 08:05:24.331 221554 DEBUG oslo_concurrency.lockutils [req-fd3f7358-0ca9-41f2-abfc-252f5b32428e req-05047f6f-5041-466e-9804-1f4e048327eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:24 np0005603609 nova_compute[221550]: 2026-01-31 08:05:24.331 221554 DEBUG nova.compute.manager [req-fd3f7358-0ca9-41f2-abfc-252f5b32428e req-05047f6f-5041-466e-9804-1f4e048327eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] No waiting events found dispatching network-vif-unplugged-8202c002-6374-4377-a0ab-a026c0592b0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:05:24 np0005603609 nova_compute[221550]: 2026-01-31 08:05:24.331 221554 DEBUG nova.compute.manager [req-fd3f7358-0ca9-41f2-abfc-252f5b32428e req-05047f6f-5041-466e-9804-1f4e048327eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Received event network-vif-unplugged-8202c002-6374-4377-a0ab-a026c0592b0d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:05:24 np0005603609 nova_compute[221550]: 2026-01-31 08:05:24.331 221554 DEBUG nova.compute.manager [req-fd3f7358-0ca9-41f2-abfc-252f5b32428e req-05047f6f-5041-466e-9804-1f4e048327eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Received event network-vif-plugged-8202c002-6374-4377-a0ab-a026c0592b0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:24 np0005603609 nova_compute[221550]: 2026-01-31 08:05:24.331 221554 DEBUG oslo_concurrency.lockutils [req-fd3f7358-0ca9-41f2-abfc-252f5b32428e req-05047f6f-5041-466e-9804-1f4e048327eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:24 np0005603609 nova_compute[221550]: 2026-01-31 08:05:24.332 221554 DEBUG oslo_concurrency.lockutils [req-fd3f7358-0ca9-41f2-abfc-252f5b32428e req-05047f6f-5041-466e-9804-1f4e048327eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:24 np0005603609 nova_compute[221550]: 2026-01-31 08:05:24.332 221554 DEBUG oslo_concurrency.lockutils [req-fd3f7358-0ca9-41f2-abfc-252f5b32428e req-05047f6f-5041-466e-9804-1f4e048327eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cea5640a-d322-4483-90fd-a2f73af32daf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:24 np0005603609 nova_compute[221550]: 2026-01-31 08:05:24.332 221554 DEBUG nova.compute.manager [req-fd3f7358-0ca9-41f2-abfc-252f5b32428e req-05047f6f-5041-466e-9804-1f4e048327eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] No waiting events found dispatching network-vif-plugged-8202c002-6374-4377-a0ab-a026c0592b0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:05:24 np0005603609 nova_compute[221550]: 2026-01-31 08:05:24.332 221554 WARNING nova.compute.manager [req-fd3f7358-0ca9-41f2-abfc-252f5b32428e req-05047f6f-5041-466e-9804-1f4e048327eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Received unexpected event network-vif-plugged-8202c002-6374-4377-a0ab-a026c0592b0d for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:05:25 np0005603609 nova_compute[221550]: 2026-01-31 08:05:25.207 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:25 np0005603609 nova_compute[221550]: 2026-01-31 08:05:25.207 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:25 np0005603609 nova_compute[221550]: 2026-01-31 08:05:25.207 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:05:25 np0005603609 nova_compute[221550]: 2026-01-31 08:05:25.207 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:05:25 np0005603609 nova_compute[221550]: 2026-01-31 08:05:25.231 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 31 03:05:25 np0005603609 nova_compute[221550]: 2026-01-31 08:05:25.232 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:05:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:25.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.003000070s ======
Jan 31 03:05:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:26.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000070s
Jan 31 03:05:26 np0005603609 nova_compute[221550]: 2026-01-31 08:05:26.260 221554 INFO nova.virt.libvirt.driver [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Deleting instance files /var/lib/nova/instances/cea5640a-d322-4483-90fd-a2f73af32daf_del#033[00m
Jan 31 03:05:26 np0005603609 nova_compute[221550]: 2026-01-31 08:05:26.261 221554 INFO nova.virt.libvirt.driver [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Deletion of /var/lib/nova/instances/cea5640a-d322-4483-90fd-a2f73af32daf_del complete#033[00m
Jan 31 03:05:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:26.381 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:26 np0005603609 nova_compute[221550]: 2026-01-31 08:05:26.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:26 np0005603609 nova_compute[221550]: 2026-01-31 08:05:26.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:26 np0005603609 nova_compute[221550]: 2026-01-31 08:05:26.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:05:27 np0005603609 podman[261519]: 2026-01-31 08:05:27.158849439 +0000 UTC m=+0.043466548 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:05:27 np0005603609 podman[261518]: 2026-01-31 08:05:27.180738736 +0000 UTC m=+0.066890451 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260127, 
org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:05:27 np0005603609 nova_compute[221550]: 2026-01-31 08:05:27.337 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:05:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:27.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:05:27 np0005603609 nova_compute[221550]: 2026-01-31 08:05:27.777 221554 INFO nova.compute.manager [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Took 5.75 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:05:27 np0005603609 nova_compute[221550]: 2026-01-31 08:05:27.777 221554 DEBUG oslo.service.loopingcall [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:05:27 np0005603609 nova_compute[221550]: 2026-01-31 08:05:27.777 221554 DEBUG nova.compute.manager [-] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:05:27 np0005603609 nova_compute[221550]: 2026-01-31 08:05:27.778 221554 DEBUG nova.network.neutron [-] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:05:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:05:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:28.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:05:28 np0005603609 nova_compute[221550]: 2026-01-31 08:05:28.889 221554 DEBUG nova.network.neutron [-] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:05:28 np0005603609 nova_compute[221550]: 2026-01-31 08:05:28.920 221554 INFO nova.compute.manager [-] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Took 1.14 seconds to deallocate network for instance.#033[00m
Jan 31 03:05:29 np0005603609 nova_compute[221550]: 2026-01-31 08:05:29.020 221554 DEBUG oslo_concurrency.lockutils [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:29 np0005603609 nova_compute[221550]: 2026-01-31 08:05:29.020 221554 DEBUG oslo_concurrency.lockutils [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:29 np0005603609 nova_compute[221550]: 2026-01-31 08:05:29.022 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:29 np0005603609 nova_compute[221550]: 2026-01-31 08:05:29.024 221554 DEBUG nova.compute.manager [req-a70e8734-534c-458e-bfbb-470f237f2546 req-d361189c-fade-4d37-b052-64acb4220c23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Received event network-vif-deleted-8202c002-6374-4377-a0ab-a026c0592b0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:29 np0005603609 nova_compute[221550]: 2026-01-31 08:05:29.071 221554 DEBUG oslo_concurrency.processutils [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:05:29 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1934745028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:05:29 np0005603609 nova_compute[221550]: 2026-01-31 08:05:29.493 221554 DEBUG oslo_concurrency.processutils [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:29 np0005603609 nova_compute[221550]: 2026-01-31 08:05:29.497 221554 DEBUG nova.compute.provider_tree [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:05:29 np0005603609 nova_compute[221550]: 2026-01-31 08:05:29.534 221554 DEBUG nova.scheduler.client.report [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:05:29 np0005603609 nova_compute[221550]: 2026-01-31 08:05:29.560 221554 DEBUG oslo_concurrency.lockutils [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:29 np0005603609 nova_compute[221550]: 2026-01-31 08:05:29.582 221554 INFO nova.scheduler.client.report [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Deleted allocations for instance cea5640a-d322-4483-90fd-a2f73af32daf#033[00m
Jan 31 03:05:29 np0005603609 nova_compute[221550]: 2026-01-31 08:05:29.705 221554 DEBUG oslo_concurrency.lockutils [None req-7510eaa9-dede-4d20-ac96-49a063b21408 a4721a8ec2c640e9974410c4fff33dfc 37e2e9f699f541eda70b24ee42388ea2 - - default default] Lock "cea5640a-d322-4483-90fd-a2f73af32daf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:29.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:30.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:05:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:31.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:05:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:32.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:32 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:05:32 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:05:32 np0005603609 nova_compute[221550]: 2026-01-31 08:05:32.340 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:33 np0005603609 nova_compute[221550]: 2026-01-31 08:05:33.344 221554 DEBUG oslo_concurrency.lockutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:33 np0005603609 nova_compute[221550]: 2026-01-31 08:05:33.345 221554 DEBUG oslo_concurrency.lockutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:33 np0005603609 nova_compute[221550]: 2026-01-31 08:05:33.382 221554 DEBUG nova.compute.manager [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:05:33 np0005603609 nova_compute[221550]: 2026-01-31 08:05:33.529 221554 DEBUG oslo_concurrency.lockutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:33 np0005603609 nova_compute[221550]: 2026-01-31 08:05:33.530 221554 DEBUG oslo_concurrency.lockutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:33 np0005603609 nova_compute[221550]: 2026-01-31 08:05:33.542 221554 DEBUG nova.virt.hardware [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:05:33 np0005603609 nova_compute[221550]: 2026-01-31 08:05:33.543 221554 INFO nova.compute.claims [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:05:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:05:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:33.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:05:34 np0005603609 nova_compute[221550]: 2026-01-31 08:05:34.051 221554 DEBUG oslo_concurrency.processutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:34 np0005603609 nova_compute[221550]: 2026-01-31 08:05:34.066 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:34.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:05:34 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1535432295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:05:34 np0005603609 nova_compute[221550]: 2026-01-31 08:05:34.452 221554 DEBUG oslo_concurrency.processutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:34 np0005603609 nova_compute[221550]: 2026-01-31 08:05:34.457 221554 DEBUG nova.compute.provider_tree [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:05:34 np0005603609 nova_compute[221550]: 2026-01-31 08:05:34.485 221554 DEBUG nova.scheduler.client.report [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:05:34 np0005603609 nova_compute[221550]: 2026-01-31 08:05:34.520 221554 DEBUG oslo_concurrency.lockutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:34 np0005603609 nova_compute[221550]: 2026-01-31 08:05:34.521 221554 DEBUG nova.compute.manager [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:05:34 np0005603609 nova_compute[221550]: 2026-01-31 08:05:34.578 221554 DEBUG nova.compute.manager [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:05:34 np0005603609 nova_compute[221550]: 2026-01-31 08:05:34.579 221554 DEBUG nova.network.neutron [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:05:34 np0005603609 nova_compute[221550]: 2026-01-31 08:05:34.761 221554 DEBUG nova.policy [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31043e345f6b48b585fb7b8ab7304764', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd352316ff6534075952e2d0c28061b09', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:05:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:35.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:36.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.116 221554 INFO nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.179 221554 DEBUG nova.compute.manager [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.237 221554 INFO nova.virt.block_device [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Booting with volume 0eed4460-dbe8-45e7-8d1d-2c1a6334d70e at /dev/vda#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.395 221554 DEBUG os_brick.utils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.396 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.405 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.406 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0bd15b-902d-44cf-a21f-00779cf92420]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.407 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.413 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.414 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[8f4fdf7c-1119-4642-b839-7b19ee164607]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.415 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.421 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.422 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[876fa0a5-3bbb-44c5-867d-eb22a5743863]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.423 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[4138106a-8b27-428f-8750-956717b53342]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.423 221554 DEBUG oslo_concurrency.processutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.440 221554 DEBUG oslo_concurrency.processutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "nvme version" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.442 221554 DEBUG os_brick.initiator.connectors.lightos [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.442 221554 DEBUG os_brick.initiator.connectors.lightos [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.443 221554 DEBUG os_brick.initiator.connectors.lightos [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.443 221554 DEBUG os_brick.utils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] <== get_connector_properties: return (47ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.443 221554 DEBUG nova.virt.block_device [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updating existing volume attachment record: 2277f03c-62a2-425b-9c8f-4f58661ba4de _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.683 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:36 np0005603609 nova_compute[221550]: 2026-01-31 08:05:36.759 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:37 np0005603609 nova_compute[221550]: 2026-01-31 08:05:37.159 221554 DEBUG nova.network.neutron [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Successfully created port: 67a82948-b72f-49d6-b07c-3058947bd453 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:05:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:05:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1753826075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:05:37 np0005603609 nova_compute[221550]: 2026-01-31 08:05:37.261 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846722.2595768, cea5640a-d322-4483-90fd-a2f73af32daf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:05:37 np0005603609 nova_compute[221550]: 2026-01-31 08:05:37.261 221554 INFO nova.compute.manager [-] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:05:37 np0005603609 nova_compute[221550]: 2026-01-31 08:05:37.291 221554 DEBUG nova.compute.manager [None req-21f2206f-04cc-4ed6-bc29-b5b7f348402f - - - - - -] [instance: cea5640a-d322-4483-90fd-a2f73af32daf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:37 np0005603609 nova_compute[221550]: 2026-01-31 08:05:37.341 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:37 np0005603609 nova_compute[221550]: 2026-01-31 08:05:37.656 221554 DEBUG nova.compute.manager [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:05:37 np0005603609 nova_compute[221550]: 2026-01-31 08:05:37.659 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:05:37 np0005603609 nova_compute[221550]: 2026-01-31 08:05:37.659 221554 INFO nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Creating image(s)#033[00m
Jan 31 03:05:37 np0005603609 nova_compute[221550]: 2026-01-31 08:05:37.660 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 03:05:37 np0005603609 nova_compute[221550]: 2026-01-31 08:05:37.661 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Ensure instance console log exists: /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:05:37 np0005603609 nova_compute[221550]: 2026-01-31 08:05:37.661 221554 DEBUG oslo_concurrency.lockutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:37 np0005603609 nova_compute[221550]: 2026-01-31 08:05:37.662 221554 DEBUG oslo_concurrency.lockutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:37 np0005603609 nova_compute[221550]: 2026-01-31 08:05:37.663 221554 DEBUG oslo_concurrency.lockutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:37.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:05:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:38.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:05:39 np0005603609 nova_compute[221550]: 2026-01-31 08:05:39.052 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:39.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:40.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.660 221554 DEBUG nova.network.neutron [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Successfully updated port: 67a82948-b72f-49d6-b07c-3058947bd453 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.662 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.662 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.662 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.663 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.663 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.663 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.691 221554 DEBUG oslo_concurrency.lockutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.691 221554 DEBUG oslo_concurrency.lockutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquired lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.691 221554 DEBUG nova.network.neutron [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.725 221554 DEBUG nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.726 221554 DEBUG nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.726 221554 DEBUG nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.727 221554 WARNING nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.727 221554 WARNING nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.727 221554 INFO nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Removable base files: /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.727 221554 INFO nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.727 221554 INFO nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.728 221554 DEBUG nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.728 221554 DEBUG nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.728 221554 DEBUG nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Jan 31 03:05:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:41.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.832 221554 DEBUG nova.compute.manager [req-3381081f-4fa2-45e1-b60d-3c399b66266e req-9f0ab846-9698-4c66-87d9-87e7085a74db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received event network-changed-67a82948-b72f-49d6-b07c-3058947bd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.832 221554 DEBUG nova.compute.manager [req-3381081f-4fa2-45e1-b60d-3c399b66266e req-9f0ab846-9698-4c66-87d9-87e7085a74db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Refreshing instance network info cache due to event network-changed-67a82948-b72f-49d6-b07c-3058947bd453. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:05:41 np0005603609 nova_compute[221550]: 2026-01-31 08:05:41.833 221554 DEBUG oslo_concurrency.lockutils [req-3381081f-4fa2-45e1-b60d-3c399b66266e req-9f0ab846-9698-4c66-87d9-87e7085a74db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.018 221554 DEBUG nova.network.neutron [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:05:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:42.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.343 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.892 221554 DEBUG nova.network.neutron [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updating instance_info_cache with network_info: [{"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.928 221554 DEBUG oslo_concurrency.lockutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Releasing lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.929 221554 DEBUG nova.compute.manager [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Instance network_info: |[{"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.929 221554 DEBUG oslo_concurrency.lockutils [req-3381081f-4fa2-45e1-b60d-3c399b66266e req-9f0ab846-9698-4c66-87d9-87e7085a74db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.930 221554 DEBUG nova.network.neutron [req-3381081f-4fa2-45e1-b60d-3c399b66266e req-9f0ab846-9698-4c66-87d9-87e7085a74db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Refreshing network info cache for port 67a82948-b72f-49d6-b07c-3058947bd453 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.933 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Start _get_guest_xml network_info=[{"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '2277f03c-62a2-425b-9c8f-4f58661ba4de', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0eed4460-dbe8-45e7-8d1d-2c1a6334d70e', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0eed4460-dbe8-45e7-8d1d-2c1a6334d70e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '0f94fbbc-a8e1-4d6e-838f-925bcbdf538e', 'attached_at': '', 'detached_at': '', 'volume_id': '0eed4460-dbe8-45e7-8d1d-2c1a6334d70e', 'serial': '0eed4460-dbe8-45e7-8d1d-2c1a6334d70e'}, 'delete_on_termination': True, 'device_type': 'disk', 'disk_bus': 'virtio', 'guest_format': None, 'mount_device': '/dev/vda', 'boot_index': 0, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.942 221554 WARNING nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.949 221554 DEBUG nova.virt.libvirt.host [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.950 221554 DEBUG nova.virt.libvirt.host [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.958 221554 DEBUG nova.virt.libvirt.host [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.959 221554 DEBUG nova.virt.libvirt.host [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.960 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.961 221554 DEBUG nova.virt.hardware [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.961 221554 DEBUG nova.virt.hardware [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.961 221554 DEBUG nova.virt.hardware [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.962 221554 DEBUG nova.virt.hardware [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.962 221554 DEBUG nova.virt.hardware [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.962 221554 DEBUG nova.virt.hardware [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.962 221554 DEBUG nova.virt.hardware [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.963 221554 DEBUG nova.virt.hardware [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.963 221554 DEBUG nova.virt.hardware [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.963 221554 DEBUG nova.virt.hardware [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.963 221554 DEBUG nova.virt.hardware [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.992 221554 DEBUG nova.storage.rbd_utils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:05:42 np0005603609 nova_compute[221550]: 2026-01-31 08:05:42.996 221554 DEBUG oslo_concurrency.processutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:05:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1434333554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.396 221554 DEBUG oslo_concurrency.processutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.455 221554 DEBUG nova.virt.libvirt.vif [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:05:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-372861281',display_name='tempest-ServerActionsTestOtherA-server-372861281',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-372861281',id=103,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDZ2ulplX2H2AL5NuU34s6GJNVqGMriUIj2eQg4OgerjQ8NWhsk6znxGcALW3k4Z9H1uedU1AeWtQAxMMtaMSBGS2G2VQQrwipi4fvjn/GJrPshiFNiDq6ym/pNUZzm75g==',key_name='tempest-keypair-1727230566',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-sctr4iai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:05:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='31043e345f6b48b585fb7b8ab7304764',uuid=0f94fbbc-a8e1-4d6e-838f-925bcbdf538e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.456 221554 DEBUG nova.network.os_vif_util [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.456 221554 DEBUG nova.network.os_vif_util [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.457 221554 DEBUG nova.objects.instance [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.487 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  <uuid>0f94fbbc-a8e1-4d6e-838f-925bcbdf538e</uuid>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  <name>instance-00000067</name>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerActionsTestOtherA-server-372861281</nova:name>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:05:42</nova:creationTime>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        <nova:user uuid="31043e345f6b48b585fb7b8ab7304764">tempest-ServerActionsTestOtherA-527878807-project-member</nova:user>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        <nova:project uuid="d352316ff6534075952e2d0c28061b09">tempest-ServerActionsTestOtherA-527878807</nova:project>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        <nova:port uuid="67a82948-b72f-49d6-b07c-3058947bd453">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <entry name="serial">0f94fbbc-a8e1-4d6e-838f-925bcbdf538e</entry>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <entry name="uuid">0f94fbbc-a8e1-4d6e-838f-925bcbdf538e</entry>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e_disk.config">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="volumes/volume-0eed4460-dbe8-45e7-8d1d-2c1a6334d70e">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <serial>0eed4460-dbe8-45e7-8d1d-2c1a6334d70e</serial>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:48:8d:f4"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <target dev="tap67a82948-b7"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e/console.log" append="off"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:05:43 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:05:43 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:05:43 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:05:43 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.488 221554 DEBUG nova.compute.manager [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Preparing to wait for external event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.488 221554 DEBUG oslo_concurrency.lockutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.488 221554 DEBUG oslo_concurrency.lockutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.488 221554 DEBUG oslo_concurrency.lockutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.489 221554 DEBUG nova.virt.libvirt.vif [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:05:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-372861281',display_name='tempest-ServerActionsTestOtherA-server-372861281',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-372861281',id=103,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDZ2ulplX2H2AL5NuU34s6GJNVqGMriUIj2eQg4OgerjQ8NWhsk6znxGcALW3k4Z9H1uedU1AeWtQAxMMtaMSBGS2G2VQQrwipi4fvjn/GJrPshiFNiDq6ym/pNUZzm75g==',key_name='tempest-keypair-1727230566',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-sctr4iai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:05:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='31043e345f6b48b585fb7b8ab7304764',uuid=0f94fbbc-a8e1-4d6e-838f-925bcbdf538e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.489 221554 DEBUG nova.network.os_vif_util [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.490 221554 DEBUG nova.network.os_vif_util [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.490 221554 DEBUG os_vif [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.491 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.491 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.491 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.493 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.494 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67a82948-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.494 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap67a82948-b7, col_values=(('external_ids', {'iface-id': '67a82948-b72f-49d6-b07c-3058947bd453', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:8d:f4', 'vm-uuid': '0f94fbbc-a8e1-4d6e-838f-925bcbdf538e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.496 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:43 np0005603609 NetworkManager[49064]: <info>  [1769846743.4968] manager: (tap67a82948-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.498 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.500 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.501 221554 INFO os_vif [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7')#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.572 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.573 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.574 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No VIF found with MAC fa:16:3e:48:8d:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.575 221554 INFO nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Using config drive#033[00m
Jan 31 03:05:43 np0005603609 nova_compute[221550]: 2026-01-31 08:05:43.602 221554 DEBUG nova.storage.rbd_utils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:05:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:05:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:43.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.054 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.091 221554 INFO nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Creating config drive at /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e/disk.config#033[00m
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.100 221554 DEBUG oslo_concurrency.processutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpilyrhf12 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:05:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:44.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.244 221554 DEBUG oslo_concurrency.processutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpilyrhf12" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.268 221554 DEBUG nova.storage.rbd_utils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.271 221554 DEBUG oslo_concurrency.processutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e/disk.config 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.560 221554 DEBUG oslo_concurrency.processutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e/disk.config 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.561 221554 INFO nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Deleting local config drive /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e/disk.config because it was imported into RBD.#033[00m
Jan 31 03:05:44 np0005603609 kernel: tap67a82948-b7: entered promiscuous mode
Jan 31 03:05:44 np0005603609 NetworkManager[49064]: <info>  [1769846744.6164] manager: (tap67a82948-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Jan 31 03:05:44 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:44Z|00398|binding|INFO|Claiming lport 67a82948-b72f-49d6-b07c-3058947bd453 for this chassis.
Jan 31 03:05:44 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:44Z|00399|binding|INFO|67a82948-b72f-49d6-b07c-3058947bd453: Claiming fa:16:3e:48:8d:f4 10.100.0.6
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.619 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.629 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.633 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:44 np0005603609 systemd-udevd[261777]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.644 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:44 np0005603609 NetworkManager[49064]: <info>  [1769846744.6498] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Jan 31 03:05:44 np0005603609 NetworkManager[49064]: <info>  [1769846744.6504] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.649 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:8d:f4 10.100.0.6'], port_security=['fa:16:3e:48:8d:f4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0f94fbbc-a8e1-4d6e-838f-925bcbdf538e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd8897503-0019-487c-aacb-6eb623a53e29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=67a82948-b72f-49d6-b07c-3058947bd453) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.650 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 67a82948-b72f-49d6-b07c-3058947bd453 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b bound to our chassis#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.652 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79cb2b81-3369-468a-8bf6-7e13d5df334b#033[00m
Jan 31 03:05:44 np0005603609 systemd-machined[190912]: New machine qemu-50-instance-00000067.
Jan 31 03:05:44 np0005603609 NetworkManager[49064]: <info>  [1769846744.6570] device (tap67a82948-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:05:44 np0005603609 NetworkManager[49064]: <info>  [1769846744.6578] device (tap67a82948-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:05:44 np0005603609 systemd[1]: Started Virtual Machine qemu-50-instance-00000067.
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.661 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d009f402-353a-430d-9161-921e1044a2ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.661 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79cb2b81-31 in ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.664 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79cb2b81-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.664 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b7990bd8-1f29-46b3-893a-169853f3de48]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.665 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3baad74f-f04d-4c40-a86a-3076a8f030b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.674 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[29850a79-9938-4dec-b543-1e4823900b48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.695 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[15b0d09f-25dd-4d9f-b8fb-09263c618128]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.715 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[cf64ac58-983c-45db-a07a-47ca4118cae9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.718 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:44 np0005603609 NetworkManager[49064]: <info>  [1769846744.7223] manager: (tap79cb2b81-30): new Veth device (/org/freedesktop/NetworkManager/Devices/199)
Jan 31 03:05:44 np0005603609 systemd-udevd[261781]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.723 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c394ac52-6f10-4e42-9510-d9347cf2bd3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.747 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:44 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:44Z|00400|binding|INFO|Setting lport 67a82948-b72f-49d6-b07c-3058947bd453 ovn-installed in OVS
Jan 31 03:05:44 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:44Z|00401|binding|INFO|Setting lport 67a82948-b72f-49d6-b07c-3058947bd453 up in Southbound
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.750 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.755 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[e06abb8c-dd54-4a7c-bcb8-6279076fd22b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.757 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[870e7c99-f9cd-4c43-9aa3-35a35a899ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:44 np0005603609 NetworkManager[49064]: <info>  [1769846744.7774] device (tap79cb2b81-30): carrier: link connected
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.781 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6a211a-77dd-4794-b9f4-312e56988b25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.796 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[692be531-4e37-4534-ae65-ccc37534bde0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 124], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698447, 'reachable_time': 43787, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261811, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.812 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2e07c13d-108e-43e0-adc5-f2ab67e90a68]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:12e3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698447, 'tstamp': 698447}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261812, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.826 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0e1f79-eb81-42bc-a41b-c257cbf13194]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 124], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698447, 'reachable_time': 43787, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261813, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.852 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6e1359bf-06cd-4499-8fc4-15f09443566b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.900 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6404c01f-7a99-41b9-a8f2-ac8813a79719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.903 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.903 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.903 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79cb2b81-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.905 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:44 np0005603609 kernel: tap79cb2b81-30: entered promiscuous mode
Jan 31 03:05:44 np0005603609 NetworkManager[49064]: <info>  [1769846744.9059] manager: (tap79cb2b81-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.909 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79cb2b81-30, col_values=(('external_ids', {'iface-id': '9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.910 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:44 np0005603609 ovn_controller[130359]: 2026-01-31T08:05:44Z|00402|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=0)
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.913 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:05:44 np0005603609 nova_compute[221550]: 2026-01-31 08:05:44.916 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.915 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c468c042-925f-451c-baaa-f2deec1dbdc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.918 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:05:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:05:44.919 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'env', 'PROCESS_TAG=haproxy-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79cb2b81-3369-468a-8bf6-7e13d5df334b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:05:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:05:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3064647811' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:05:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:05:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3064647811' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:05:45 np0005603609 podman[261845]: 2026-01-31 08:05:45.268084401 +0000 UTC m=+0.076326708 container create c24b1804c400a9b409be80d7f488a4a00278b938e279a88a514e7ecf76dacfe4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 03:05:45 np0005603609 nova_compute[221550]: 2026-01-31 08:05:45.272 221554 DEBUG nova.network.neutron [req-3381081f-4fa2-45e1-b60d-3c399b66266e req-9f0ab846-9698-4c66-87d9-87e7085a74db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updated VIF entry in instance network info cache for port 67a82948-b72f-49d6-b07c-3058947bd453. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:05:45 np0005603609 nova_compute[221550]: 2026-01-31 08:05:45.273 221554 DEBUG nova.network.neutron [req-3381081f-4fa2-45e1-b60d-3c399b66266e req-9f0ab846-9698-4c66-87d9-87e7085a74db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updating instance_info_cache with network_info: [{"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:05:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:45 np0005603609 nova_compute[221550]: 2026-01-31 08:05:45.290 221554 DEBUG oslo_concurrency.lockutils [req-3381081f-4fa2-45e1-b60d-3c399b66266e req-9f0ab846-9698-4c66-87d9-87e7085a74db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:05:45 np0005603609 systemd[1]: Started libpod-conmon-c24b1804c400a9b409be80d7f488a4a00278b938e279a88a514e7ecf76dacfe4.scope.
Jan 31 03:05:45 np0005603609 podman[261845]: 2026-01-31 08:05:45.211710725 +0000 UTC m=+0.019953052 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:05:45 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:05:45 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c4b85a08ff67184d32949cfe4c84e39d902b71e2ba2323caab223dfe45af74/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:05:45 np0005603609 podman[261845]: 2026-01-31 08:05:45.332685186 +0000 UTC m=+0.140927513 container init c24b1804c400a9b409be80d7f488a4a00278b938e279a88a514e7ecf76dacfe4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:05:45 np0005603609 podman[261845]: 2026-01-31 08:05:45.337660227 +0000 UTC m=+0.145902534 container start c24b1804c400a9b409be80d7f488a4a00278b938e279a88a514e7ecf76dacfe4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 03:05:45 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[261885]: [NOTICE]   (261900) : New worker (261907) forked
Jan 31 03:05:45 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[261885]: [NOTICE]   (261900) : Loading success.
Jan 31 03:05:45 np0005603609 nova_compute[221550]: 2026-01-31 08:05:45.435 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846745.4339333, 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:05:45 np0005603609 nova_compute[221550]: 2026-01-31 08:05:45.435 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] VM Started (Lifecycle Event)#033[00m
Jan 31 03:05:45 np0005603609 nova_compute[221550]: 2026-01-31 08:05:45.454 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:45 np0005603609 nova_compute[221550]: 2026-01-31 08:05:45.460 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846745.434754, 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:05:45 np0005603609 nova_compute[221550]: 2026-01-31 08:05:45.460 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:05:45 np0005603609 nova_compute[221550]: 2026-01-31 08:05:45.496 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:45 np0005603609 nova_compute[221550]: 2026-01-31 08:05:45.500 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:05:45 np0005603609 nova_compute[221550]: 2026-01-31 08:05:45.525 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:05:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:45.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:46.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.743 221554 DEBUG nova.compute.manager [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.743 221554 DEBUG oslo_concurrency.lockutils [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.743 221554 DEBUG oslo_concurrency.lockutils [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.743 221554 DEBUG oslo_concurrency.lockutils [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.744 221554 DEBUG nova.compute.manager [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Processing event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.744 221554 DEBUG nova.compute.manager [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.744 221554 DEBUG oslo_concurrency.lockutils [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.744 221554 DEBUG oslo_concurrency.lockutils [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.744 221554 DEBUG oslo_concurrency.lockutils [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.744 221554 DEBUG nova.compute.manager [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] No waiting events found dispatching network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.744 221554 WARNING nova.compute.manager [req-609eafe4-b3fb-4fdb-a0b8-db5a3e4b62df req-0a5ec68c-f179-4582-8e4e-3a2bff1f7293 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received unexpected event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 for instance with vm_state building and task_state spawning.#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.745 221554 DEBUG nova.compute.manager [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.747 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846746.74752, 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.748 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.749 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.752 221554 INFO nova.virt.libvirt.driver [-] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Instance spawned successfully.#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.752 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.784 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.791 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.791 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.791 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.792 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.792 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.792 221554 DEBUG nova.virt.libvirt.driver [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.795 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.828 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.880 221554 INFO nova.compute.manager [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Took 9.22 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.880 221554 DEBUG nova.compute.manager [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:46 np0005603609 nova_compute[221550]: 2026-01-31 08:05:46.960 221554 INFO nova.compute.manager [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Took 13.49 seconds to build instance.#033[00m
Jan 31 03:05:47 np0005603609 nova_compute[221550]: 2026-01-31 08:05:47.010 221554 DEBUG oslo_concurrency.lockutils [None req-17235c69-da09-4253-8f48-b5db7b1aac8a 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:05:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:47.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:05:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:05:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:48.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:05:48 np0005603609 nova_compute[221550]: 2026-01-31 08:05:48.497 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:49 np0005603609 nova_compute[221550]: 2026-01-31 08:05:49.042 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:49 np0005603609 nova_compute[221550]: 2026-01-31 08:05:49.055 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:49.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:05:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:50.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:05:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:05:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:51.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:05:51 np0005603609 nova_compute[221550]: 2026-01-31 08:05:51.879 221554 DEBUG nova.compute.manager [req-17bdc151-e221-4ed9-807f-64323ab841d0 req-88b30bb9-e409-4d3c-9ba2-51c8006d0b2a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received event network-changed-67a82948-b72f-49d6-b07c-3058947bd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:51 np0005603609 nova_compute[221550]: 2026-01-31 08:05:51.879 221554 DEBUG nova.compute.manager [req-17bdc151-e221-4ed9-807f-64323ab841d0 req-88b30bb9-e409-4d3c-9ba2-51c8006d0b2a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Refreshing instance network info cache due to event network-changed-67a82948-b72f-49d6-b07c-3058947bd453. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:05:51 np0005603609 nova_compute[221550]: 2026-01-31 08:05:51.880 221554 DEBUG oslo_concurrency.lockutils [req-17bdc151-e221-4ed9-807f-64323ab841d0 req-88b30bb9-e409-4d3c-9ba2-51c8006d0b2a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:05:51 np0005603609 nova_compute[221550]: 2026-01-31 08:05:51.880 221554 DEBUG oslo_concurrency.lockutils [req-17bdc151-e221-4ed9-807f-64323ab841d0 req-88b30bb9-e409-4d3c-9ba2-51c8006d0b2a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:05:51 np0005603609 nova_compute[221550]: 2026-01-31 08:05:51.880 221554 DEBUG nova.network.neutron [req-17bdc151-e221-4ed9-807f-64323ab841d0 req-88b30bb9-e409-4d3c-9ba2-51c8006d0b2a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Refreshing network info cache for port 67a82948-b72f-49d6-b07c-3058947bd453 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:05:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:05:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:52.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:05:53 np0005603609 nova_compute[221550]: 2026-01-31 08:05:53.289 221554 DEBUG nova.network.neutron [req-17bdc151-e221-4ed9-807f-64323ab841d0 req-88b30bb9-e409-4d3c-9ba2-51c8006d0b2a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updated VIF entry in instance network info cache for port 67a82948-b72f-49d6-b07c-3058947bd453. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:05:53 np0005603609 nova_compute[221550]: 2026-01-31 08:05:53.289 221554 DEBUG nova.network.neutron [req-17bdc151-e221-4ed9-807f-64323ab841d0 req-88b30bb9-e409-4d3c-9ba2-51c8006d0b2a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updating instance_info_cache with network_info: [{"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:05:53 np0005603609 nova_compute[221550]: 2026-01-31 08:05:53.311 221554 DEBUG oslo_concurrency.lockutils [req-17bdc151-e221-4ed9-807f-64323ab841d0 req-88b30bb9-e409-4d3c-9ba2-51c8006d0b2a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:05:53 np0005603609 nova_compute[221550]: 2026-01-31 08:05:53.500 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:05:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:53.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:05:53 np0005603609 nova_compute[221550]: 2026-01-31 08:05:53.938 221554 DEBUG oslo_concurrency.lockutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:05:53 np0005603609 nova_compute[221550]: 2026-01-31 08:05:53.938 221554 DEBUG oslo_concurrency.lockutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquired lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:05:53 np0005603609 nova_compute[221550]: 2026-01-31 08:05:53.939 221554 DEBUG nova.network.neutron [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:05:54 np0005603609 nova_compute[221550]: 2026-01-31 08:05:54.057 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:54 np0005603609 nova_compute[221550]: 2026-01-31 08:05:54.111 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:54.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:55.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:05:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:56.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:05:57 np0005603609 nova_compute[221550]: 2026-01-31 08:05:57.543 221554 DEBUG nova.network.neutron [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updating instance_info_cache with network_info: [{"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:05:57 np0005603609 nova_compute[221550]: 2026-01-31 08:05:57.563 221554 DEBUG oslo_concurrency.lockutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Releasing lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:05:57 np0005603609 nova_compute[221550]: 2026-01-31 08:05:57.661 221554 DEBUG nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Jan 31 03:05:57 np0005603609 nova_compute[221550]: 2026-01-31 08:05:57.662 221554 DEBUG nova.virt.libvirt.volume.remotefs [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Creating file /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e/c41c2f1f68f24f1cb97bdf79252399de.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Jan 31 03:05:57 np0005603609 nova_compute[221550]: 2026-01-31 08:05:57.662 221554 DEBUG oslo_concurrency.processutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e/c41c2f1f68f24f1cb97bdf79252399de.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:57.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:58 np0005603609 nova_compute[221550]: 2026-01-31 08:05:58.109 221554 DEBUG oslo_concurrency.processutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e/c41c2f1f68f24f1cb97bdf79252399de.tmp" returned: 1 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:58 np0005603609 nova_compute[221550]: 2026-01-31 08:05:58.110 221554 DEBUG oslo_concurrency.processutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e/c41c2f1f68f24f1cb97bdf79252399de.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 03:05:58 np0005603609 nova_compute[221550]: 2026-01-31 08:05:58.110 221554 DEBUG nova.virt.libvirt.volume.remotefs [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Creating directory /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Jan 31 03:05:58 np0005603609 nova_compute[221550]: 2026-01-31 08:05:58.111 221554 DEBUG oslo_concurrency.processutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:58.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:58 np0005603609 podman[261921]: 2026-01-31 08:05:58.172731419 +0000 UTC m=+0.055325033 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:05:58 np0005603609 podman[261920]: 2026-01-31 08:05:58.198380086 +0000 UTC m=+0.080207391 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, 
org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:05:58 np0005603609 nova_compute[221550]: 2026-01-31 08:05:58.304 221554 DEBUG oslo_concurrency.processutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:58 np0005603609 nova_compute[221550]: 2026-01-31 08:05:58.308 221554 DEBUG nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:05:58 np0005603609 nova_compute[221550]: 2026-01-31 08:05:58.501 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:59 np0005603609 nova_compute[221550]: 2026-01-31 08:05:59.059 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:05:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:59.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:00.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:01 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:01Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:8d:f4 10.100.0.6
Jan 31 03:06:01 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:01Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:8d:f4 10.100.0.6
Jan 31 03:06:01 np0005603609 nova_compute[221550]: 2026-01-31 08:06:01.618 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "892101ee-ea21-41fd-a15d-e0b300965e47" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:01 np0005603609 nova_compute[221550]: 2026-01-31 08:06:01.619 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "892101ee-ea21-41fd-a15d-e0b300965e47" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:01 np0005603609 nova_compute[221550]: 2026-01-31 08:06:01.746 221554 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:06:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:01.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:01 np0005603609 nova_compute[221550]: 2026-01-31 08:06:01.930 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:01.930 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:06:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:01.931 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:06:01 np0005603609 nova_compute[221550]: 2026-01-31 08:06:01.994 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:01 np0005603609 nova_compute[221550]: 2026-01-31 08:06:01.995 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:02 np0005603609 nova_compute[221550]: 2026-01-31 08:06:02.001 221554 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:06:02 np0005603609 nova_compute[221550]: 2026-01-31 08:06:02.001 221554 INFO nova.compute.claims [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:06:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:02.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:02 np0005603609 nova_compute[221550]: 2026-01-31 08:06:02.146 221554 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:06:02 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1697033539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:06:02 np0005603609 nova_compute[221550]: 2026-01-31 08:06:02.576 221554 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:02 np0005603609 nova_compute[221550]: 2026-01-31 08:06:02.582 221554 DEBUG nova.compute.provider_tree [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:06:02 np0005603609 nova_compute[221550]: 2026-01-31 08:06:02.691 221554 DEBUG nova.scheduler.client.report [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:06:02 np0005603609 nova_compute[221550]: 2026-01-31 08:06:02.815 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:02 np0005603609 nova_compute[221550]: 2026-01-31 08:06:02.816 221554 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:06:02 np0005603609 nova_compute[221550]: 2026-01-31 08:06:02.905 221554 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:06:02 np0005603609 nova_compute[221550]: 2026-01-31 08:06:02.906 221554 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:06:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:02.932 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.035 221554 INFO nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.094 221554 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.325 221554 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.326 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.326 221554 INFO nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Creating image(s)#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.350 221554 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image 892101ee-ea21-41fd-a15d-e0b300965e47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.377 221554 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image 892101ee-ea21-41fd-a15d-e0b300965e47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.411 221554 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image 892101ee-ea21-41fd-a15d-e0b300965e47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.415 221554 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.472 221554 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.473 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.473 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.474 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.504 221554 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image 892101ee-ea21-41fd-a15d-e0b300965e47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.508 221554 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 892101ee-ea21-41fd-a15d-e0b300965e47_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.525 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.636 221554 DEBUG nova.policy [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '972b6e928f014e5394261f9c8655f1de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '43b462f5b43d48b4a33a13b069618e4c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:06:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:03.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.869 221554 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 892101ee-ea21-41fd-a15d-e0b300965e47_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:03 np0005603609 nova_compute[221550]: 2026-01-31 08:06:03.938 221554 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] resizing rbd image 892101ee-ea21-41fd-a15d-e0b300965e47_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:06:04 np0005603609 nova_compute[221550]: 2026-01-31 08:06:04.096 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:04 np0005603609 nova_compute[221550]: 2026-01-31 08:06:04.102 221554 DEBUG nova.objects.instance [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lazy-loading 'migration_context' on Instance uuid 892101ee-ea21-41fd-a15d-e0b300965e47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:06:04 np0005603609 nova_compute[221550]: 2026-01-31 08:06:04.125 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:06:04 np0005603609 nova_compute[221550]: 2026-01-31 08:06:04.126 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Ensure instance console log exists: /var/lib/nova/instances/892101ee-ea21-41fd-a15d-e0b300965e47/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:06:04 np0005603609 nova_compute[221550]: 2026-01-31 08:06:04.126 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:04 np0005603609 nova_compute[221550]: 2026-01-31 08:06:04.126 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:04 np0005603609 nova_compute[221550]: 2026-01-31 08:06:04.127 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:04.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000046s ======
Jan 31 03:06:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:05.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000046s
Jan 31 03:06:05 np0005603609 nova_compute[221550]: 2026-01-31 08:06:05.905 221554 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Successfully created port: 7f5461cb-1537-4f9b-a71a-47cb1167841a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:06:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:06.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:07.499 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:07.500 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:07.500 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:07.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:06:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:08.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:06:08 np0005603609 nova_compute[221550]: 2026-01-31 08:06:08.458 221554 DEBUG nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:06:08 np0005603609 nova_compute[221550]: 2026-01-31 08:06:08.529 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:08 np0005603609 nova_compute[221550]: 2026-01-31 08:06:08.535 221554 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Successfully updated port: 7f5461cb-1537-4f9b-a71a-47cb1167841a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:06:08 np0005603609 nova_compute[221550]: 2026-01-31 08:06:08.566 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "refresh_cache-892101ee-ea21-41fd-a15d-e0b300965e47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:06:08 np0005603609 nova_compute[221550]: 2026-01-31 08:06:08.566 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquired lock "refresh_cache-892101ee-ea21-41fd-a15d-e0b300965e47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:06:08 np0005603609 nova_compute[221550]: 2026-01-31 08:06:08.567 221554 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:06:08 np0005603609 nova_compute[221550]: 2026-01-31 08:06:08.954 221554 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:06:09 np0005603609 nova_compute[221550]: 2026-01-31 08:06:09.063 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:06:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3616744025' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:06:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:06:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3616744025' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:06:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:09.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:10.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:10 np0005603609 kernel: tap67a82948-b7 (unregistering): left promiscuous mode
Jan 31 03:06:10 np0005603609 NetworkManager[49064]: <info>  [1769846770.7862] device (tap67a82948-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:06:10 np0005603609 nova_compute[221550]: 2026-01-31 08:06:10.785 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:10 np0005603609 nova_compute[221550]: 2026-01-31 08:06:10.792 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:10 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:10Z|00403|binding|INFO|Releasing lport 67a82948-b72f-49d6-b07c-3058947bd453 from this chassis (sb_readonly=0)
Jan 31 03:06:10 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:10Z|00404|binding|INFO|Setting lport 67a82948-b72f-49d6-b07c-3058947bd453 down in Southbound
Jan 31 03:06:10 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:10Z|00405|binding|INFO|Removing iface tap67a82948-b7 ovn-installed in OVS
Jan 31 03:06:10 np0005603609 nova_compute[221550]: 2026-01-31 08:06:10.795 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:10 np0005603609 nova_compute[221550]: 2026-01-31 08:06:10.801 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:10.831 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:8d:f4 10.100.0.6'], port_security=['fa:16:3e:48:8d:f4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0f94fbbc-a8e1-4d6e-838f-925bcbdf538e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8897503-0019-487c-aacb-6eb623a53e29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=67a82948-b72f-49d6-b07c-3058947bd453) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:06:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:10.832 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 67a82948-b72f-49d6-b07c-3058947bd453 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b unbound from our chassis#033[00m
Jan 31 03:06:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:10.833 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79cb2b81-3369-468a-8bf6-7e13d5df334b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:06:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:10.835 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7dfbee6d-2273-4790-80e7-2819c40398b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:10 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:10.835 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b namespace which is not needed anymore#033[00m
Jan 31 03:06:10 np0005603609 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000067.scope: Deactivated successfully.
Jan 31 03:06:10 np0005603609 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000067.scope: Consumed 13.406s CPU time.
Jan 31 03:06:10 np0005603609 systemd-machined[190912]: Machine qemu-50-instance-00000067 terminated.
Jan 31 03:06:10 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[261885]: [NOTICE]   (261900) : haproxy version is 2.8.14-c23fe91
Jan 31 03:06:10 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[261885]: [NOTICE]   (261900) : path to executable is /usr/sbin/haproxy
Jan 31 03:06:10 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[261885]: [WARNING]  (261900) : Exiting Master process...
Jan 31 03:06:10 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[261885]: [WARNING]  (261900) : Exiting Master process...
Jan 31 03:06:10 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[261885]: [ALERT]    (261900) : Current worker (261907) exited with code 143 (Terminated)
Jan 31 03:06:10 np0005603609 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[261885]: [WARNING]  (261900) : All workers exited. Exiting... (0)
Jan 31 03:06:10 np0005603609 systemd[1]: libpod-c24b1804c400a9b409be80d7f488a4a00278b938e279a88a514e7ecf76dacfe4.scope: Deactivated successfully.
Jan 31 03:06:10 np0005603609 podman[262174]: 2026-01-31 08:06:10.943222587 +0000 UTC m=+0.044573494 container died c24b1804c400a9b409be80d7f488a4a00278b938e279a88a514e7ecf76dacfe4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:06:10 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c24b1804c400a9b409be80d7f488a4a00278b938e279a88a514e7ecf76dacfe4-userdata-shm.mount: Deactivated successfully.
Jan 31 03:06:10 np0005603609 systemd[1]: var-lib-containers-storage-overlay-a0c4b85a08ff67184d32949cfe4c84e39d902b71e2ba2323caab223dfe45af74-merged.mount: Deactivated successfully.
Jan 31 03:06:10 np0005603609 podman[262174]: 2026-01-31 08:06:10.993334203 +0000 UTC m=+0.094685130 container cleanup c24b1804c400a9b409be80d7f488a4a00278b938e279a88a514e7ecf76dacfe4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:11 np0005603609 systemd[1]: libpod-conmon-c24b1804c400a9b409be80d7f488a4a00278b938e279a88a514e7ecf76dacfe4.scope: Deactivated successfully.
Jan 31 03:06:11 np0005603609 podman[262202]: 2026-01-31 08:06:11.054484836 +0000 UTC m=+0.044587425 container remove c24b1804c400a9b409be80d7f488a4a00278b938e279a88a514e7ecf76dacfe4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:06:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:11.062 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7b5908-da37-483b-b3a1-50d74d02491c]: (4, ('Sat Jan 31 08:06:10 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b (c24b1804c400a9b409be80d7f488a4a00278b938e279a88a514e7ecf76dacfe4)\nc24b1804c400a9b409be80d7f488a4a00278b938e279a88a514e7ecf76dacfe4\nSat Jan 31 08:06:10 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b (c24b1804c400a9b409be80d7f488a4a00278b938e279a88a514e7ecf76dacfe4)\nc24b1804c400a9b409be80d7f488a4a00278b938e279a88a514e7ecf76dacfe4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:11.065 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a27e5696-47f7-4735-9f6f-b9bd7742aa89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:11.066 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.067 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:11 np0005603609 kernel: tap79cb2b81-30: left promiscuous mode
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.071 221554 DEBUG nova.compute.manager [req-7f5e29ce-c9c7-44ce-895b-e826258865c3 req-9ad92939-e5b3-400e-8dc3-1e4508c4253a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Received event network-changed-7f5461cb-1537-4f9b-a71a-47cb1167841a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.072 221554 DEBUG nova.compute.manager [req-7f5e29ce-c9c7-44ce-895b-e826258865c3 req-9ad92939-e5b3-400e-8dc3-1e4508c4253a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Refreshing instance network info cache due to event network-changed-7f5461cb-1537-4f9b-a71a-47cb1167841a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.072 221554 DEBUG oslo_concurrency.lockutils [req-7f5e29ce-c9c7-44ce-895b-e826258865c3 req-9ad92939-e5b3-400e-8dc3-1e4508c4253a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-892101ee-ea21-41fd-a15d-e0b300965e47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.077 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:11.080 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c51d21a2-a656-4d4c-ac28-871a5af3582a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:11.106 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b73f6375-8550-47be-bcad-05fbfcb6d65d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:11.108 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8e675b-77ea-4d69-8881-6bdd3b8894fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:11.121 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ff297377-c3ae-41bc-8877-679b5a2ee174]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698441, 'reachable_time': 44616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262233, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:11.123 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:06:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:11.124 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f493e2-1a49-4165-bda4-6c2c42afdced]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:11 np0005603609 systemd[1]: run-netns-ovnmeta\x2d79cb2b81\x2d3369\x2d468a\x2d8bf6\x2d7e13d5df334b.mount: Deactivated successfully.
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.344 221554 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Updating instance_info_cache with network_info: [{"id": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "address": "fa:16:3e:9a:b1:ea", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5461cb-15", "ovs_interfaceid": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.470 221554 INFO nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Instance shutdown successfully after 13 seconds.#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.476 221554 INFO nova.virt.libvirt.driver [-] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Instance destroyed successfully.#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.476 221554 DEBUG nova.virt.libvirt.vif [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-372861281',display_name='tempest-ServerActionsTestOtherA-server-372861281',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-372861281',id=103,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDZ2ulplX2H2AL5NuU34s6GJNVqGMriUIj2eQg4OgerjQ8NWhsk6znxGcALW3k4Z9H1uedU1AeWtQAxMMtaMSBGS2G2VQQrwipi4fvjn/GJrPshiFNiDq6ym/pNUZzm75g==',key_name='tempest-keypair-1727230566',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:05:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-sctr4iai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:05:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='31043e345f6b48b585fb7b8ab7304764',uuid=0f94fbbc-a8e1-4d6e-838f-925bcbdf538e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1343233929-network", "vif_mac": "fa:16:3e:48:8d:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.477 221554 DEBUG nova.network.os_vif_util [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1343233929-network", "vif_mac": "fa:16:3e:48:8d:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.477 221554 DEBUG nova.network.os_vif_util [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.478 221554 DEBUG os_vif [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.479 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.479 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67a82948-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.481 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.482 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.485 221554 INFO os_vif [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7')#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.489 221554 DEBUG nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:06:11 np0005603609 nova_compute[221550]: 2026-01-31 08:06:11.490 221554 DEBUG nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:06:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:11.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.093 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Releasing lock "refresh_cache-892101ee-ea21-41fd-a15d-e0b300965e47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.093 221554 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Instance network_info: |[{"id": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "address": "fa:16:3e:9a:b1:ea", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5461cb-15", "ovs_interfaceid": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.093 221554 DEBUG oslo_concurrency.lockutils [req-7f5e29ce-c9c7-44ce-895b-e826258865c3 req-9ad92939-e5b3-400e-8dc3-1e4508c4253a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-892101ee-ea21-41fd-a15d-e0b300965e47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.093 221554 DEBUG nova.network.neutron [req-7f5e29ce-c9c7-44ce-895b-e826258865c3 req-9ad92939-e5b3-400e-8dc3-1e4508c4253a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Refreshing network info cache for port 7f5461cb-1537-4f9b-a71a-47cb1167841a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.096 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Start _get_guest_xml network_info=[{"id": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "address": "fa:16:3e:9a:b1:ea", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5461cb-15", "ovs_interfaceid": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.100 221554 WARNING nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.108 221554 DEBUG nova.virt.libvirt.host [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.108 221554 DEBUG nova.virt.libvirt.host [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.112 221554 DEBUG nova.virt.libvirt.host [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.112 221554 DEBUG nova.virt.libvirt.host [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.114 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.114 221554 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.114 221554 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.115 221554 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.115 221554 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.115 221554 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.115 221554 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.116 221554 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.116 221554 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.116 221554 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.116 221554 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.117 221554 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.120 221554 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:12.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.468 221554 DEBUG nova.compute.manager [req-2034425e-f398-4d3d-b548-24e5af0f69f8 req-eaecf80f-38a6-4906-906e-c4fb0b501344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received event network-vif-unplugged-67a82948-b72f-49d6-b07c-3058947bd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.469 221554 DEBUG oslo_concurrency.lockutils [req-2034425e-f398-4d3d-b548-24e5af0f69f8 req-eaecf80f-38a6-4906-906e-c4fb0b501344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.470 221554 DEBUG oslo_concurrency.lockutils [req-2034425e-f398-4d3d-b548-24e5af0f69f8 req-eaecf80f-38a6-4906-906e-c4fb0b501344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.470 221554 DEBUG oslo_concurrency.lockutils [req-2034425e-f398-4d3d-b548-24e5af0f69f8 req-eaecf80f-38a6-4906-906e-c4fb0b501344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.470 221554 DEBUG nova.compute.manager [req-2034425e-f398-4d3d-b548-24e5af0f69f8 req-eaecf80f-38a6-4906-906e-c4fb0b501344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] No waiting events found dispatching network-vif-unplugged-67a82948-b72f-49d6-b07c-3058947bd453 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.470 221554 WARNING nova.compute.manager [req-2034425e-f398-4d3d-b548-24e5af0f69f8 req-eaecf80f-38a6-4906-906e-c4fb0b501344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received unexpected event network-vif-unplugged-67a82948-b72f-49d6-b07c-3058947bd453 for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 31 03:06:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:06:12 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/219294127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.564 221554 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.587 221554 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image 892101ee-ea21-41fd-a15d-e0b300965e47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:12 np0005603609 nova_compute[221550]: 2026-01-31 08:06:12.590 221554 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:06:13 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2044255212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:06:13 np0005603609 nova_compute[221550]: 2026-01-31 08:06:13.035 221554 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:13 np0005603609 nova_compute[221550]: 2026-01-31 08:06:13.037 221554 DEBUG nova.virt.libvirt.vif [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-19898720',display_name='tempest-ListServersNegativeTestJSON-server-19898720-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-19898720-3',id=106,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43b462f5b43d48b4a33a13b069618e4c',ramdisk_id='',reservation_id='r-c7zkmfmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1511652820',owner_user_name='tempest-ListServersNegativeTestJSON-1511652820-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:06:03Z,user_data=None,user_id='972b6e928f014e5394261f9c8655f1de',uuid=892101ee-ea21-41fd-a15d-e0b300965e47,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "address": "fa:16:3e:9a:b1:ea", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5461cb-15", "ovs_interfaceid": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:06:13 np0005603609 nova_compute[221550]: 2026-01-31 08:06:13.038 221554 DEBUG nova.network.os_vif_util [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converting VIF {"id": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "address": "fa:16:3e:9a:b1:ea", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5461cb-15", "ovs_interfaceid": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:06:13 np0005603609 nova_compute[221550]: 2026-01-31 08:06:13.039 221554 DEBUG nova.network.os_vif_util [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:b1:ea,bridge_name='br-int',has_traffic_filtering=True,id=7f5461cb-1537-4f9b-a71a-47cb1167841a,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5461cb-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:06:13 np0005603609 nova_compute[221550]: 2026-01-31 08:06:13.043 221554 DEBUG nova.objects.instance [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lazy-loading 'pci_devices' on Instance uuid 892101ee-ea21-41fd-a15d-e0b300965e47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:06:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:06:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:13.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:06:14 np0005603609 nova_compute[221550]: 2026-01-31 08:06:14.065 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:14.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:14 np0005603609 nova_compute[221550]: 2026-01-31 08:06:14.725 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.735 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  <uuid>892101ee-ea21-41fd-a15d-e0b300965e47</uuid>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  <name>instance-0000006a</name>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <nova:name>tempest-ListServersNegativeTestJSON-server-19898720-3</nova:name>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:06:12</nova:creationTime>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        <nova:user uuid="972b6e928f014e5394261f9c8655f1de">tempest-ListServersNegativeTestJSON-1511652820-project-member</nova:user>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        <nova:project uuid="43b462f5b43d48b4a33a13b069618e4c">tempest-ListServersNegativeTestJSON-1511652820</nova:project>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        <nova:port uuid="7f5461cb-1537-4f9b-a71a-47cb1167841a">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <entry name="serial">892101ee-ea21-41fd-a15d-e0b300965e47</entry>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <entry name="uuid">892101ee-ea21-41fd-a15d-e0b300965e47</entry>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/892101ee-ea21-41fd-a15d-e0b300965e47_disk">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/892101ee-ea21-41fd-a15d-e0b300965e47_disk.config">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:9a:b1:ea"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <target dev="tap7f5461cb-15"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/892101ee-ea21-41fd-a15d-e0b300965e47/console.log" append="off"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:06:15 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:06:15 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:06:15 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:06:15 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.736 221554 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Preparing to wait for external event network-vif-plugged-7f5461cb-1537-4f9b-a71a-47cb1167841a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.736 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.737 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.737 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.738 221554 DEBUG nova.virt.libvirt.vif [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-19898720',display_name='tempest-ListServersNegativeTestJSON-server-19898720-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-19898720-3',id=106,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43b462f5b43d48b4a33a13b069618e4c',ramdisk_id='',reservation_id='r-c7zkmfmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1511652820',owner_user_name='tempest-ListServersNegativeTestJSON-1511652820-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:06:03Z,user_data=None,user_id='972b6e928f014e5394261f9c8655f1de',uuid=892101ee-ea21-41fd-a15d-e0b300965e47,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "address": "fa:16:3e:9a:b1:ea", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5461cb-15", "ovs_interfaceid": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.738 221554 DEBUG nova.network.os_vif_util [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converting VIF {"id": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "address": "fa:16:3e:9a:b1:ea", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5461cb-15", "ovs_interfaceid": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.739 221554 DEBUG nova.network.os_vif_util [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:b1:ea,bridge_name='br-int',has_traffic_filtering=True,id=7f5461cb-1537-4f9b-a71a-47cb1167841a,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5461cb-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.739 221554 DEBUG os_vif [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:b1:ea,bridge_name='br-int',has_traffic_filtering=True,id=7f5461cb-1537-4f9b-a71a-47cb1167841a,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5461cb-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.743 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.744 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.744 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.748 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.749 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f5461cb-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.749 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f5461cb-15, col_values=(('external_ids', {'iface-id': '7f5461cb-1537-4f9b-a71a-47cb1167841a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:b1:ea', 'vm-uuid': '892101ee-ea21-41fd-a15d-e0b300965e47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.751 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:15 np0005603609 NetworkManager[49064]: <info>  [1769846775.7519] manager: (tap7f5461cb-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.754 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.755 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.756 221554 INFO os_vif [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:b1:ea,bridge_name='br-int',has_traffic_filtering=True,id=7f5461cb-1537-4f9b-a71a-47cb1167841a,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5461cb-15')#033[00m
Jan 31 03:06:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:15.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.851 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.851 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.851 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] No VIF found with MAC fa:16:3e:9a:b1:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.851 221554 INFO nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Using config drive#033[00m
Jan 31 03:06:15 np0005603609 nova_compute[221550]: 2026-01-31 08:06:15.870 221554 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image 892101ee-ea21-41fd-a15d-e0b300965e47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:16.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:16 np0005603609 nova_compute[221550]: 2026-01-31 08:06:16.312 221554 DEBUG neutronclient.v2_0.client [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 67a82948-b72f-49d6-b07c-3058947bd453 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 31 03:06:16 np0005603609 nova_compute[221550]: 2026-01-31 08:06:16.394 221554 INFO nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Creating config drive at /var/lib/nova/instances/892101ee-ea21-41fd-a15d-e0b300965e47/disk.config#033[00m
Jan 31 03:06:16 np0005603609 nova_compute[221550]: 2026-01-31 08:06:16.397 221554 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/892101ee-ea21-41fd-a15d-e0b300965e47/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpkyo5gpqm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:16 np0005603609 nova_compute[221550]: 2026-01-31 08:06:16.449 221554 DEBUG oslo_concurrency.lockutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:16 np0005603609 nova_compute[221550]: 2026-01-31 08:06:16.449 221554 DEBUG oslo_concurrency.lockutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:16 np0005603609 nova_compute[221550]: 2026-01-31 08:06:16.450 221554 DEBUG oslo_concurrency.lockutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:16 np0005603609 nova_compute[221550]: 2026-01-31 08:06:16.522 221554 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/892101ee-ea21-41fd-a15d-e0b300965e47/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpkyo5gpqm" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:16 np0005603609 nova_compute[221550]: 2026-01-31 08:06:16.670 221554 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image 892101ee-ea21-41fd-a15d-e0b300965e47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:16 np0005603609 nova_compute[221550]: 2026-01-31 08:06:16.674 221554 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/892101ee-ea21-41fd-a15d-e0b300965e47/disk.config 892101ee-ea21-41fd-a15d-e0b300965e47_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.045 221554 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/892101ee-ea21-41fd-a15d-e0b300965e47/disk.config 892101ee-ea21-41fd-a15d-e0b300965e47_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.046 221554 INFO nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Deleting local config drive /var/lib/nova/instances/892101ee-ea21-41fd-a15d-e0b300965e47/disk.config because it was imported into RBD.#033[00m
Jan 31 03:06:17 np0005603609 kernel: tap7f5461cb-15: entered promiscuous mode
Jan 31 03:06:17 np0005603609 NetworkManager[49064]: <info>  [1769846777.0941] manager: (tap7f5461cb-15): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Jan 31 03:06:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:17Z|00406|binding|INFO|Claiming lport 7f5461cb-1537-4f9b-a71a-47cb1167841a for this chassis.
Jan 31 03:06:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:17Z|00407|binding|INFO|7f5461cb-1537-4f9b-a71a-47cb1167841a: Claiming fa:16:3e:9a:b1:ea 10.100.0.4
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.095 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:17Z|00408|binding|INFO|Setting lport 7f5461cb-1537-4f9b-a71a-47cb1167841a ovn-installed in OVS
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.114 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:17 np0005603609 systemd-udevd[262370]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:06:17 np0005603609 systemd-machined[190912]: New machine qemu-51-instance-0000006a.
Jan 31 03:06:17 np0005603609 NetworkManager[49064]: <info>  [1769846777.1456] device (tap7f5461cb-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:06:17 np0005603609 systemd[1]: Started Virtual Machine qemu-51-instance-0000006a.
Jan 31 03:06:17 np0005603609 NetworkManager[49064]: <info>  [1769846777.1472] device (tap7f5461cb-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:06:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:17Z|00409|binding|INFO|Setting lport 7f5461cb-1537-4f9b-a71a-47cb1167841a up in Southbound
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.577 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:b1:ea 10.100.0.4'], port_security=['fa:16:3e:9a:b1:ea 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '892101ee-ea21-41fd-a15d-e0b300965e47', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43b462f5b43d48b4a33a13b069618e4c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '001ae016-61eb-444d-a215-8f70012d923a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aaf32b66-0fe8-4826-8186-77a88483534c, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=7f5461cb-1537-4f9b-a71a-47cb1167841a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.578 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 7f5461cb-1537-4f9b-a71a-47cb1167841a in datapath f3091b0d-0fc9-4172-b2af-6d9c678c6569 bound to our chassis#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.580 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3091b0d-0fc9-4172-b2af-6d9c678c6569#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.588 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d078c81e-cdde-4d8d-a91a-22e5a374a896]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.589 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3091b0d-01 in ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.591 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3091b0d-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.591 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b3365ea6-e310-4673-b0bb-fa6c4d79d7fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.592 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ee05bc24-3c55-4b1d-8b64-d54e068e3a6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.599 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[10fb93e1-57b5-4ab0-ab0b-26178ba990e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.608 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c1799741-dd01-40a6-a215-c2bb0a8539a8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.631 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[1b4243f1-d435-4315-b01c-a46b140d862f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603609 systemd-udevd[262372]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:06:17 np0005603609 NetworkManager[49064]: <info>  [1769846777.6363] manager: (tapf3091b0d-00): new Veth device (/org/freedesktop/NetworkManager/Devices/203)
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.636 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f4478681-008e-41b0-9f5a-95b87e9b1bd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.652 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846777.651816, 892101ee-ea21-41fd-a15d-e0b300965e47 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.652 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] VM Started (Lifecycle Event)#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.661 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[1316e7a7-b256-4e9d-ab52-5e077307594e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.664 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[688a85ef-aec2-4746-8820-20b52db53304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603609 NetworkManager[49064]: <info>  [1769846777.6800] device (tapf3091b0d-00): carrier: link connected
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.684 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[fe16c415-24c3-46c5-b52b-f38fce9aa693]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.698 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[214c04b5-d100-403b-a02c-2cf8f21362ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3091b0d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701738, 'reachable_time': 16843, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262445, 'error': None, 'target': 'ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.708 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[43ed15f3-4599-406f-8683-fa01660daca0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:a99e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701738, 'tstamp': 701738}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262446, 'error': None, 'target': 'ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.721 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0488c693-e5f6-4847-9252-02e3b2e96e30]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3091b0d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701738, 'reachable_time': 16843, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262447, 'error': None, 'target': 'ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.741 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[635f3d3d-93b9-4993-b2e2-9a9f1de5585e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.761 221554 DEBUG nova.compute.manager [req-99543007-a0fd-4406-9827-86c0598cafbb req-34513819-1852-4544-9801-8321bb6bac43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.761 221554 DEBUG oslo_concurrency.lockutils [req-99543007-a0fd-4406-9827-86c0598cafbb req-34513819-1852-4544-9801-8321bb6bac43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.762 221554 DEBUG oslo_concurrency.lockutils [req-99543007-a0fd-4406-9827-86c0598cafbb req-34513819-1852-4544-9801-8321bb6bac43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.762 221554 DEBUG oslo_concurrency.lockutils [req-99543007-a0fd-4406-9827-86c0598cafbb req-34513819-1852-4544-9801-8321bb6bac43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.762 221554 DEBUG nova.compute.manager [req-99543007-a0fd-4406-9827-86c0598cafbb req-34513819-1852-4544-9801-8321bb6bac43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] No waiting events found dispatching network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.762 221554 WARNING nova.compute.manager [req-99543007-a0fd-4406-9827-86c0598cafbb req-34513819-1852-4544-9801-8321bb6bac43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received unexpected event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.779 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8e5527e9-1ce3-45da-abd2-6fdc65ca0f20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.780 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3091b0d-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.780 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.780 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3091b0d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.782 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:17 np0005603609 kernel: tapf3091b0d-00: entered promiscuous mode
Jan 31 03:06:17 np0005603609 NetworkManager[49064]: <info>  [1769846777.7834] manager: (tapf3091b0d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.783 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.785 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3091b0d-00, col_values=(('external_ids', {'iface-id': '67f0642a-d8ea-421d-83b2-8b692f5bc044'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.786 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:17Z|00410|binding|INFO|Releasing lport 67f0642a-d8ea-421d-83b2-8b692f5bc044 from this chassis (sb_readonly=0)
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.787 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3091b0d-0fc9-4172-b2af-6d9c678c6569.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3091b0d-0fc9-4172-b2af-6d9c678c6569.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.787 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.787 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fe38132a-197e-4be0-a735-6c7f428d1aea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.788 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-f3091b0d-0fc9-4172-b2af-6d9c678c6569
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/f3091b0d-0fc9-4172-b2af-6d9c678c6569.pid.haproxy
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.788 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID f3091b0d-0fc9-4172-b2af-6d9c678c6569
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:06:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:17.788 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'env', 'PROCESS_TAG=haproxy-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3091b0d-0fc9-4172-b2af-6d9c678c6569.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.790 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.791 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846777.6519277, 892101ee-ea21-41fd-a15d-e0b300965e47 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.792 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:06:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:17.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.832 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.835 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:06:17 np0005603609 nova_compute[221550]: 2026-01-31 08:06:17.882 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:06:18 np0005603609 podman[262479]: 2026-01-31 08:06:18.082933509 +0000 UTC m=+0.045687741 container create 43add2f5e1d575685d633acb05a2a06391e8a2aa47e31005378dbf660b532e12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:18 np0005603609 systemd[1]: Started libpod-conmon-43add2f5e1d575685d633acb05a2a06391e8a2aa47e31005378dbf660b532e12.scope.
Jan 31 03:06:18 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:06:18 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0de7d7129f94eacc69e17041bc26ed811098890a39a1a13569d38acaca5885/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:18 np0005603609 podman[262479]: 2026-01-31 08:06:18.149575624 +0000 UTC m=+0.112329876 container init 43add2f5e1d575685d633acb05a2a06391e8a2aa47e31005378dbf660b532e12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:06:18 np0005603609 podman[262479]: 2026-01-31 08:06:18.056555144 +0000 UTC m=+0.019309396 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:06:18 np0005603609 podman[262479]: 2026-01-31 08:06:18.15442908 +0000 UTC m=+0.117183312 container start 43add2f5e1d575685d633acb05a2a06391e8a2aa47e31005378dbf660b532e12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:06:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:18.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:18 np0005603609 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[262494]: [NOTICE]   (262498) : New worker (262500) forked
Jan 31 03:06:18 np0005603609 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[262494]: [NOTICE]   (262498) : Loading success.
Jan 31 03:06:18 np0005603609 nova_compute[221550]: 2026-01-31 08:06:18.211 221554 DEBUG nova.network.neutron [req-7f5e29ce-c9c7-44ce-895b-e826258865c3 req-9ad92939-e5b3-400e-8dc3-1e4508c4253a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Updated VIF entry in instance network info cache for port 7f5461cb-1537-4f9b-a71a-47cb1167841a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:06:18 np0005603609 nova_compute[221550]: 2026-01-31 08:06:18.212 221554 DEBUG nova.network.neutron [req-7f5e29ce-c9c7-44ce-895b-e826258865c3 req-9ad92939-e5b3-400e-8dc3-1e4508c4253a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Updating instance_info_cache with network_info: [{"id": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "address": "fa:16:3e:9a:b1:ea", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5461cb-15", "ovs_interfaceid": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:06:18 np0005603609 nova_compute[221550]: 2026-01-31 08:06:18.247 221554 DEBUG oslo_concurrency.lockutils [req-7f5e29ce-c9c7-44ce-895b-e826258865c3 req-9ad92939-e5b3-400e-8dc3-1e4508c4253a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-892101ee-ea21-41fd-a15d-e0b300965e47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:06:18 np0005603609 nova_compute[221550]: 2026-01-31 08:06:18.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.067 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:19.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.956 221554 DEBUG nova.compute.manager [req-fffe0256-e539-406b-96fd-de3d40e43b2e req-41f4865a-165f-46b9-8d78-bb5a01f71237 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Received event network-vif-plugged-7f5461cb-1537-4f9b-a71a-47cb1167841a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.957 221554 DEBUG oslo_concurrency.lockutils [req-fffe0256-e539-406b-96fd-de3d40e43b2e req-41f4865a-165f-46b9-8d78-bb5a01f71237 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.957 221554 DEBUG oslo_concurrency.lockutils [req-fffe0256-e539-406b-96fd-de3d40e43b2e req-41f4865a-165f-46b9-8d78-bb5a01f71237 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.957 221554 DEBUG oslo_concurrency.lockutils [req-fffe0256-e539-406b-96fd-de3d40e43b2e req-41f4865a-165f-46b9-8d78-bb5a01f71237 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.958 221554 DEBUG nova.compute.manager [req-fffe0256-e539-406b-96fd-de3d40e43b2e req-41f4865a-165f-46b9-8d78-bb5a01f71237 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Processing event network-vif-plugged-7f5461cb-1537-4f9b-a71a-47cb1167841a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.958 221554 DEBUG nova.compute.manager [req-fffe0256-e539-406b-96fd-de3d40e43b2e req-41f4865a-165f-46b9-8d78-bb5a01f71237 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Received event network-vif-plugged-7f5461cb-1537-4f9b-a71a-47cb1167841a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.958 221554 DEBUG oslo_concurrency.lockutils [req-fffe0256-e539-406b-96fd-de3d40e43b2e req-41f4865a-165f-46b9-8d78-bb5a01f71237 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.958 221554 DEBUG oslo_concurrency.lockutils [req-fffe0256-e539-406b-96fd-de3d40e43b2e req-41f4865a-165f-46b9-8d78-bb5a01f71237 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.958 221554 DEBUG oslo_concurrency.lockutils [req-fffe0256-e539-406b-96fd-de3d40e43b2e req-41f4865a-165f-46b9-8d78-bb5a01f71237 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.958 221554 DEBUG nova.compute.manager [req-fffe0256-e539-406b-96fd-de3d40e43b2e req-41f4865a-165f-46b9-8d78-bb5a01f71237 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] No waiting events found dispatching network-vif-plugged-7f5461cb-1537-4f9b-a71a-47cb1167841a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.959 221554 WARNING nova.compute.manager [req-fffe0256-e539-406b-96fd-de3d40e43b2e req-41f4865a-165f-46b9-8d78-bb5a01f71237 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Received unexpected event network-vif-plugged-7f5461cb-1537-4f9b-a71a-47cb1167841a for instance with vm_state building and task_state spawning.#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.959 221554 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.962 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846779.962578, 892101ee-ea21-41fd-a15d-e0b300965e47 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.963 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.967 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.970 221554 INFO nova.virt.libvirt.driver [-] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Instance spawned successfully.#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.971 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:06:19 np0005603609 nova_compute[221550]: 2026-01-31 08:06:19.997 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:20 np0005603609 nova_compute[221550]: 2026-01-31 08:06:20.002 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:06:20 np0005603609 nova_compute[221550]: 2026-01-31 08:06:20.005 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:20 np0005603609 nova_compute[221550]: 2026-01-31 08:06:20.005 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:20 np0005603609 nova_compute[221550]: 2026-01-31 08:06:20.005 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:20 np0005603609 nova_compute[221550]: 2026-01-31 08:06:20.006 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:20 np0005603609 nova_compute[221550]: 2026-01-31 08:06:20.006 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:20 np0005603609 nova_compute[221550]: 2026-01-31 08:06:20.006 221554 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:20 np0005603609 nova_compute[221550]: 2026-01-31 08:06:20.045 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:06:20 np0005603609 nova_compute[221550]: 2026-01-31 08:06:20.085 221554 INFO nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Took 16.76 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:06:20 np0005603609 nova_compute[221550]: 2026-01-31 08:06:20.086 221554 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:20.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:20 np0005603609 nova_compute[221550]: 2026-01-31 08:06:20.176 221554 INFO nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Took 18.22 seconds to build instance.#033[00m
Jan 31 03:06:20 np0005603609 nova_compute[221550]: 2026-01-31 08:06:20.206 221554 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "892101ee-ea21-41fd-a15d-e0b300965e47" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:20 np0005603609 nova_compute[221550]: 2026-01-31 08:06:20.751 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:21 np0005603609 nova_compute[221550]: 2026-01-31 08:06:21.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:21 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:21Z|00411|binding|INFO|Releasing lport 67f0642a-d8ea-421d-83b2-8b692f5bc044 from this chassis (sb_readonly=0)
Jan 31 03:06:21 np0005603609 nova_compute[221550]: 2026-01-31 08:06:21.722 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:21 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:21Z|00412|binding|INFO|Releasing lport 67f0642a-d8ea-421d-83b2-8b692f5bc044 from this chassis (sb_readonly=0)
Jan 31 03:06:21 np0005603609 nova_compute[221550]: 2026-01-31 08:06:21.789 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:21.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:22.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:22 np0005603609 nova_compute[221550]: 2026-01-31 08:06:22.188 221554 DEBUG nova.compute.manager [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received event network-changed-67a82948-b72f-49d6-b07c-3058947bd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:22 np0005603609 nova_compute[221550]: 2026-01-31 08:06:22.189 221554 DEBUG nova.compute.manager [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Refreshing instance network info cache due to event network-changed-67a82948-b72f-49d6-b07c-3058947bd453. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:06:22 np0005603609 nova_compute[221550]: 2026-01-31 08:06:22.189 221554 DEBUG oslo_concurrency.lockutils [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:06:22 np0005603609 nova_compute[221550]: 2026-01-31 08:06:22.189 221554 DEBUG oslo_concurrency.lockutils [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:06:22 np0005603609 nova_compute[221550]: 2026-01-31 08:06:22.189 221554 DEBUG nova.network.neutron [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Refreshing network info cache for port 67a82948-b72f-49d6-b07c-3058947bd453 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:06:23 np0005603609 nova_compute[221550]: 2026-01-31 08:06:23.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:23 np0005603609 nova_compute[221550]: 2026-01-31 08:06:23.697 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:23 np0005603609 nova_compute[221550]: 2026-01-31 08:06:23.697 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:23 np0005603609 nova_compute[221550]: 2026-01-31 08:06:23.697 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:23 np0005603609 nova_compute[221550]: 2026-01-31 08:06:23.697 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:06:23 np0005603609 nova_compute[221550]: 2026-01-31 08:06:23.698 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:06:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:23.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.070 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/46344885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.101 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:24.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.182 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.183 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.186 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.186 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.299 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.300 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4404MB free_disk=20.788639068603516GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.300 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.301 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.391 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Migration for instance 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.471 221554 INFO nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updating resource usage from migration 4a86c12b-502d-451d-8b81-01c67cc03b3a#033[00m
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.472 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Starting to track outgoing migration 4a86c12b-502d-451d-8b81-01c67cc03b3a with flavor fea01737-128b-41fa-a695-aaaa6e96e4b2 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.527 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Migration 4a86c12b-502d-451d-8b81-01c67cc03b3a is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.527 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 892101ee-ea21-41fd-a15d-e0b300965e47 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.527 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.528 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e260 e260: 3 total, 3 up, 3 in
Jan 31 03:06:24 np0005603609 nova_compute[221550]: 2026-01-31 08:06:24.805 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:24.821700) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784821768, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 2011, "num_deletes": 252, "total_data_size": 4535257, "memory_usage": 4615544, "flush_reason": "Manual Compaction"}
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784830176, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 1805591, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47002, "largest_seqno": 49007, "table_properties": {"data_size": 1799383, "index_size": 3154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16854, "raw_average_key_size": 21, "raw_value_size": 1785465, "raw_average_value_size": 2254, "num_data_blocks": 141, "num_entries": 792, "num_filter_entries": 792, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846621, "oldest_key_time": 1769846621, "file_creation_time": 1769846784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 8505 microseconds, and 3432 cpu microseconds.
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:24.830213) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 1805591 bytes OK
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:24.830229) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:24.831312) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:24.831324) EVENT_LOG_v1 {"time_micros": 1769846784831320, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:24.831339) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 4526256, prev total WAL file size 4526256, number of live WAL files 2.
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:24.832163) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353038' seq:72057594037927935, type:22 .. '6D6772737461740031373539' seq:0, type:0; will stop at (end)
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(1763KB)], [90(10MB)]
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784832333, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 13261094, "oldest_snapshot_seqno": -1}
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 7530 keys, 10585634 bytes, temperature: kUnknown
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784951269, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 10585634, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10537278, "index_size": 28404, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18885, "raw_key_size": 192860, "raw_average_key_size": 25, "raw_value_size": 10404848, "raw_average_value_size": 1381, "num_data_blocks": 1128, "num_entries": 7530, "num_filter_entries": 7530, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769846784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:24.951520) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10585634 bytes
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:24.952827) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.4 rd, 89.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.9 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(13.2) write-amplify(5.9) OK, records in: 7974, records dropped: 444 output_compression: NoCompression
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:24.952861) EVENT_LOG_v1 {"time_micros": 1769846784952835, "job": 56, "event": "compaction_finished", "compaction_time_micros": 118990, "compaction_time_cpu_micros": 20288, "output_level": 6, "num_output_files": 1, "total_output_size": 10585634, "num_input_records": 7974, "num_output_records": 7530, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784953096, "job": 56, "event": "table_file_deletion", "file_number": 92}
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784954047, "job": 56, "event": "table_file_deletion", "file_number": 90}
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:24.831996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:24.954091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:24.954097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:24.954099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:24.954101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:24.954103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:06:24 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2562189325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:06:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:06:25 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/207896373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:06:25 np0005603609 nova_compute[221550]: 2026-01-31 08:06:25.210 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:25 np0005603609 nova_compute[221550]: 2026-01-31 08:06:25.215 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:06:25 np0005603609 nova_compute[221550]: 2026-01-31 08:06:25.255 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:06:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:25 np0005603609 nova_compute[221550]: 2026-01-31 08:06:25.357 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:06:25 np0005603609 nova_compute[221550]: 2026-01-31 08:06:25.358 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:25 np0005603609 nova_compute[221550]: 2026-01-31 08:06:25.358 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:25 np0005603609 nova_compute[221550]: 2026-01-31 08:06:25.358 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:06:25 np0005603609 nova_compute[221550]: 2026-01-31 08:06:25.528 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:06:25 np0005603609 nova_compute[221550]: 2026-01-31 08:06:25.529 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:25 np0005603609 nova_compute[221550]: 2026-01-31 08:06:25.529 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:06:25 np0005603609 nova_compute[221550]: 2026-01-31 08:06:25.753 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:25 np0005603609 nova_compute[221550]: 2026-01-31 08:06:25.784 221554 DEBUG nova.network.neutron [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updated VIF entry in instance network info cache for port 67a82948-b72f-49d6-b07c-3058947bd453. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:06:25 np0005603609 nova_compute[221550]: 2026-01-31 08:06:25.784 221554 DEBUG nova.network.neutron [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updating instance_info_cache with network_info: [{"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:06:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:06:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:25.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:06:26 np0005603609 nova_compute[221550]: 2026-01-31 08:06:26.017 221554 DEBUG oslo_concurrency.lockutils [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:06:26 np0005603609 nova_compute[221550]: 2026-01-31 08:06:26.024 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846771.0237982, 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:06:26 np0005603609 nova_compute[221550]: 2026-01-31 08:06:26.025 221554 INFO nova.compute.manager [-] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:06:26 np0005603609 nova_compute[221550]: 2026-01-31 08:06:26.052 221554 DEBUG nova.compute.manager [None req-99c255c7-427e-45f8-9c51-d5c2f48a0c96 - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:26 np0005603609 nova_compute[221550]: 2026-01-31 08:06:26.054 221554 DEBUG nova.compute.manager [None req-99c255c7-427e-45f8-9c51-d5c2f48a0c96 - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:06:26 np0005603609 nova_compute[221550]: 2026-01-31 08:06:26.078 221554 INFO nova.compute.manager [None req-99c255c7-427e-45f8-9c51-d5c2f48a0c96 - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Jan 31 03:06:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:26.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:27 np0005603609 nova_compute[221550]: 2026-01-31 08:06:27.665 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:27 np0005603609 nova_compute[221550]: 2026-01-31 08:06:27.666 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:27 np0005603609 nova_compute[221550]: 2026-01-31 08:06:27.666 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:06:27 np0005603609 nova_compute[221550]: 2026-01-31 08:06:27.666 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:06:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:27.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:28 np0005603609 nova_compute[221550]: 2026-01-31 08:06:28.027 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-892101ee-ea21-41fd-a15d-e0b300965e47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:06:28 np0005603609 nova_compute[221550]: 2026-01-31 08:06:28.027 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-892101ee-ea21-41fd-a15d-e0b300965e47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:06:28 np0005603609 nova_compute[221550]: 2026-01-31 08:06:28.028 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:06:28 np0005603609 nova_compute[221550]: 2026-01-31 08:06:28.028 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 892101ee-ea21-41fd-a15d-e0b300965e47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:06:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:28.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:28 np0005603609 nova_compute[221550]: 2026-01-31 08:06:28.781 221554 DEBUG nova.compute.manager [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:28 np0005603609 nova_compute[221550]: 2026-01-31 08:06:28.782 221554 DEBUG oslo_concurrency.lockutils [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:28 np0005603609 nova_compute[221550]: 2026-01-31 08:06:28.782 221554 DEBUG oslo_concurrency.lockutils [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:28 np0005603609 nova_compute[221550]: 2026-01-31 08:06:28.782 221554 DEBUG oslo_concurrency.lockutils [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:28 np0005603609 nova_compute[221550]: 2026-01-31 08:06:28.782 221554 DEBUG nova.compute.manager [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] No waiting events found dispatching network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:06:28 np0005603609 nova_compute[221550]: 2026-01-31 08:06:28.783 221554 WARNING nova.compute.manager [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received unexpected event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 for instance with vm_state resized and task_state None.#033[00m
Jan 31 03:06:28 np0005603609 nova_compute[221550]: 2026-01-31 08:06:28.783 221554 DEBUG nova.compute.manager [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:28 np0005603609 nova_compute[221550]: 2026-01-31 08:06:28.783 221554 DEBUG oslo_concurrency.lockutils [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:28 np0005603609 nova_compute[221550]: 2026-01-31 08:06:28.783 221554 DEBUG oslo_concurrency.lockutils [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:28 np0005603609 nova_compute[221550]: 2026-01-31 08:06:28.783 221554 DEBUG oslo_concurrency.lockutils [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:28 np0005603609 nova_compute[221550]: 2026-01-31 08:06:28.784 221554 DEBUG nova.compute.manager [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] No waiting events found dispatching network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:06:28 np0005603609 nova_compute[221550]: 2026-01-31 08:06:28.784 221554 WARNING nova.compute.manager [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received unexpected event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 for instance with vm_state resized and task_state None.#033[00m
Jan 31 03:06:29 np0005603609 nova_compute[221550]: 2026-01-31 08:06:29.072 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:29 np0005603609 podman[262557]: 2026-01-31 08:06:29.197288237 +0000 UTC m=+0.073962262 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 03:06:29 np0005603609 podman[262556]: 2026-01-31 08:06:29.203146568 +0000 UTC m=+0.079626428 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:06:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:29.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 32K writes, 129K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.04 MB/s#012Cumulative WAL: 32K writes, 11K syncs, 2.85 writes per sync, written: 0.12 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5316 writes, 20K keys, 5316 commit groups, 1.0 writes per commit group, ingest: 21.16 MB, 0.04 MB/s#012Interval WAL: 5315 writes, 2092 syncs, 2.54 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:06:30 np0005603609 nova_compute[221550]: 2026-01-31 08:06:30.085 221554 DEBUG oslo_concurrency.lockutils [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:30 np0005603609 nova_compute[221550]: 2026-01-31 08:06:30.085 221554 DEBUG oslo_concurrency.lockutils [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:30 np0005603609 nova_compute[221550]: 2026-01-31 08:06:30.086 221554 DEBUG nova.compute.manager [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Going to confirm migration 15 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Jan 31 03:06:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:30.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:30 np0005603609 nova_compute[221550]: 2026-01-31 08:06:30.755 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:30 np0005603609 nova_compute[221550]: 2026-01-31 08:06:30.894 221554 DEBUG neutronclient.v2_0.client [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 67a82948-b72f-49d6-b07c-3058947bd453 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 31 03:06:30 np0005603609 nova_compute[221550]: 2026-01-31 08:06:30.895 221554 DEBUG oslo_concurrency.lockutils [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:06:30 np0005603609 nova_compute[221550]: 2026-01-31 08:06:30.895 221554 DEBUG oslo_concurrency.lockutils [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquired lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:06:30 np0005603609 nova_compute[221550]: 2026-01-31 08:06:30.895 221554 DEBUG nova.network.neutron [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:06:30 np0005603609 nova_compute[221550]: 2026-01-31 08:06:30.896 221554 DEBUG nova.objects.instance [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'info_cache' on Instance uuid 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:06:30 np0005603609 nova_compute[221550]: 2026-01-31 08:06:30.935 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Updating instance_info_cache with network_info: [{"id": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "address": "fa:16:3e:9a:b1:ea", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5461cb-15", "ovs_interfaceid": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:06:31 np0005603609 nova_compute[221550]: 2026-01-31 08:06:31.025 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-892101ee-ea21-41fd-a15d-e0b300965e47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:06:31 np0005603609 nova_compute[221550]: 2026-01-31 08:06:31.025 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:06:31 np0005603609 nova_compute[221550]: 2026-01-31 08:06:31.026 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:31 np0005603609 nova_compute[221550]: 2026-01-31 08:06:31.026 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:31 np0005603609 nova_compute[221550]: 2026-01-31 08:06:31.102 221554 WARNING nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 2 instances on the hypervisor.#033[00m
Jan 31 03:06:31 np0005603609 nova_compute[221550]: 2026-01-31 08:06:31.103 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Triggering sync for uuid 892101ee-ea21-41fd-a15d-e0b300965e47 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:06:31 np0005603609 nova_compute[221550]: 2026-01-31 08:06:31.103 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "892101ee-ea21-41fd-a15d-e0b300965e47" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:31 np0005603609 nova_compute[221550]: 2026-01-31 08:06:31.103 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "892101ee-ea21-41fd-a15d-e0b300965e47" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:31 np0005603609 nova_compute[221550]: 2026-01-31 08:06:31.104 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:31 np0005603609 nova_compute[221550]: 2026-01-31 08:06:31.104 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:06:31 np0005603609 nova_compute[221550]: 2026-01-31 08:06:31.282 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "892101ee-ea21-41fd-a15d-e0b300965e47" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:31.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:06:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:32.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:06:33 np0005603609 nova_compute[221550]: 2026-01-31 08:06:33.631 221554 DEBUG nova.network.neutron [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updating instance_info_cache with network_info: [{"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:06:33 np0005603609 nova_compute[221550]: 2026-01-31 08:06:33.670 221554 DEBUG oslo_concurrency.lockutils [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Releasing lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:06:33 np0005603609 nova_compute[221550]: 2026-01-31 08:06:33.671 221554 DEBUG nova.objects.instance [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'migration_context' on Instance uuid 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:06:33 np0005603609 nova_compute[221550]: 2026-01-31 08:06:33.721 221554 DEBUG nova.storage.rbd_utils [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:33 np0005603609 nova_compute[221550]: 2026-01-31 08:06:33.728 221554 DEBUG nova.virt.libvirt.vif [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-372861281',display_name='tempest-ServerActionsTestOtherA-server-372861281',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-372861281',id=103,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDZ2ulplX2H2AL5NuU34s6GJNVqGMriUIj2eQg4OgerjQ8NWhsk6znxGcALW3k4Z9H1uedU1AeWtQAxMMtaMSBGS2G2VQQrwipi4fvjn/GJrPshiFNiDq6ym/pNUZzm75g==',key_name='tempest-keypair-1727230566',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:06:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-sctr4iai',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:06:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='31043e345f6b48b585fb7b8ab7304764',uuid=0f94fbbc-a8e1-4d6e-838f-925bcbdf538e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:06:33 np0005603609 nova_compute[221550]: 2026-01-31 08:06:33.729 221554 DEBUG nova.network.os_vif_util [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:06:33 np0005603609 nova_compute[221550]: 2026-01-31 08:06:33.729 221554 DEBUG nova.network.os_vif_util [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:06:33 np0005603609 nova_compute[221550]: 2026-01-31 08:06:33.730 221554 DEBUG os_vif [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:06:33 np0005603609 nova_compute[221550]: 2026-01-31 08:06:33.731 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:33 np0005603609 nova_compute[221550]: 2026-01-31 08:06:33.731 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67a82948-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:33 np0005603609 nova_compute[221550]: 2026-01-31 08:06:33.732 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:06:33 np0005603609 nova_compute[221550]: 2026-01-31 08:06:33.733 221554 INFO os_vif [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7')#033[00m
Jan 31 03:06:33 np0005603609 nova_compute[221550]: 2026-01-31 08:06:33.734 221554 DEBUG oslo_concurrency.lockutils [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:33 np0005603609 nova_compute[221550]: 2026-01-31 08:06:33.734 221554 DEBUG oslo_concurrency.lockutils [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:33.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:33 np0005603609 nova_compute[221550]: 2026-01-31 08:06:33.881 221554 DEBUG oslo_concurrency.processutils [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:34 np0005603609 nova_compute[221550]: 2026-01-31 08:06:34.073 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:34.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3270420668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:06:34 np0005603609 nova_compute[221550]: 2026-01-31 08:06:34.424 221554 DEBUG oslo_concurrency.processutils [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:34 np0005603609 nova_compute[221550]: 2026-01-31 08:06:34.429 221554 DEBUG nova.compute.provider_tree [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:06:34 np0005603609 nova_compute[221550]: 2026-01-31 08:06:34.447 221554 DEBUG nova.scheduler.client.report [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:06:34 np0005603609 nova_compute[221550]: 2026-01-31 08:06:34.497 221554 DEBUG oslo_concurrency.lockutils [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:34 np0005603609 nova_compute[221550]: 2026-01-31 08:06:34.657 221554 INFO nova.scheduler.client.report [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Deleted allocation for migration 4a86c12b-502d-451d-8b81-01c67cc03b3a#033[00m
Jan 31 03:06:34 np0005603609 nova_compute[221550]: 2026-01-31 08:06:34.729 221554 DEBUG oslo_concurrency.lockutils [None req-f4fc111d-d08a-42b8-a843-d3003c79d147 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:34.776924) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794777003, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 409, "num_deletes": 256, "total_data_size": 428069, "memory_usage": 437176, "flush_reason": "Manual Compaction"}
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794785604, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 282570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49012, "largest_seqno": 49416, "table_properties": {"data_size": 280152, "index_size": 518, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6039, "raw_average_key_size": 18, "raw_value_size": 275223, "raw_average_value_size": 846, "num_data_blocks": 21, "num_entries": 325, "num_filter_entries": 325, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846785, "oldest_key_time": 1769846785, "file_creation_time": 1769846794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 8703 microseconds, and 1226 cpu microseconds.
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:34.785638) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 282570 bytes OK
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:34.785651) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:34.788166) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:34.788180) EVENT_LOG_v1 {"time_micros": 1769846794788175, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:34.788196) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 425382, prev total WAL file size 425382, number of live WAL files 2.
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:34.788616) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353036' seq:72057594037927935, type:22 .. '6C6F676D0031373538' seq:0, type:0; will stop at (end)
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(275KB)], [93(10MB)]
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794788677, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 10868204, "oldest_snapshot_seqno": -1}
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 7330 keys, 10720505 bytes, temperature: kUnknown
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794907667, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 10720505, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10672890, "index_size": 28175, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18373, "raw_key_size": 189687, "raw_average_key_size": 25, "raw_value_size": 10543295, "raw_average_value_size": 1438, "num_data_blocks": 1116, "num_entries": 7330, "num_filter_entries": 7330, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769846794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:34.907881) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10720505 bytes
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:34.917898) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 91.4 rd, 90.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.1 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(76.4) write-amplify(37.9) OK, records in: 7855, records dropped: 525 output_compression: NoCompression
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:34.917924) EVENT_LOG_v1 {"time_micros": 1769846794917913, "job": 58, "event": "compaction_finished", "compaction_time_micros": 118925, "compaction_time_cpu_micros": 17063, "output_level": 6, "num_output_files": 1, "total_output_size": 10720505, "num_input_records": 7855, "num_output_records": 7330, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794918102, "job": 58, "event": "table_file_deletion", "file_number": 95}
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794919003, "job": 58, "event": "table_file_deletion", "file_number": 93}
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:34.788434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:34.919050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:34.919055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:34.919057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:34.919058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:34 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:34.919060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:35Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9a:b1:ea 10.100.0.4
Jan 31 03:06:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:35Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:b1:ea 10.100.0.4
Jan 31 03:06:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.303 221554 DEBUG oslo_concurrency.lockutils [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "892101ee-ea21-41fd-a15d-e0b300965e47" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.303 221554 DEBUG oslo_concurrency.lockutils [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "892101ee-ea21-41fd-a15d-e0b300965e47" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.304 221554 DEBUG oslo_concurrency.lockutils [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.304 221554 DEBUG oslo_concurrency.lockutils [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.304 221554 DEBUG oslo_concurrency.lockutils [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.305 221554 INFO nova.compute.manager [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Terminating instance#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.306 221554 DEBUG nova.compute.manager [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:06:35 np0005603609 kernel: tap7f5461cb-15 (unregistering): left promiscuous mode
Jan 31 03:06:35 np0005603609 NetworkManager[49064]: <info>  [1769846795.3798] device (tap7f5461cb-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.386 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:35Z|00413|binding|INFO|Releasing lport 7f5461cb-1537-4f9b-a71a-47cb1167841a from this chassis (sb_readonly=0)
Jan 31 03:06:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:35Z|00414|binding|INFO|Setting lport 7f5461cb-1537-4f9b-a71a-47cb1167841a down in Southbound
Jan 31 03:06:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:06:35Z|00415|binding|INFO|Removing iface tap7f5461cb-15 ovn-installed in OVS
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.388 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:35.393 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:b1:ea 10.100.0.4'], port_security=['fa:16:3e:9a:b1:ea 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '892101ee-ea21-41fd-a15d-e0b300965e47', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43b462f5b43d48b4a33a13b069618e4c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '001ae016-61eb-444d-a215-8f70012d923a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aaf32b66-0fe8-4826-8186-77a88483534c, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=7f5461cb-1537-4f9b-a71a-47cb1167841a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:06:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:35.395 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 7f5461cb-1537-4f9b-a71a-47cb1167841a in datapath f3091b0d-0fc9-4172-b2af-6d9c678c6569 unbound from our chassis#033[00m
Jan 31 03:06:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:35.396 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3091b0d-0fc9-4172-b2af-6d9c678c6569, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:06:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:35.398 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[40b258cb-7ae5-46e6-ad11-dce3dd26470d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:35.398 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569 namespace which is not needed anymore#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.399 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:35 np0005603609 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Jan 31 03:06:35 np0005603609 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000006a.scope: Consumed 12.744s CPU time.
Jan 31 03:06:35 np0005603609 systemd-machined[190912]: Machine qemu-51-instance-0000006a terminated.
Jan 31 03:06:35 np0005603609 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[262494]: [NOTICE]   (262498) : haproxy version is 2.8.14-c23fe91
Jan 31 03:06:35 np0005603609 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[262494]: [NOTICE]   (262498) : path to executable is /usr/sbin/haproxy
Jan 31 03:06:35 np0005603609 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[262494]: [WARNING]  (262498) : Exiting Master process...
Jan 31 03:06:35 np0005603609 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[262494]: [ALERT]    (262498) : Current worker (262500) exited with code 143 (Terminated)
Jan 31 03:06:35 np0005603609 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[262494]: [WARNING]  (262498) : All workers exited. Exiting... (0)
Jan 31 03:06:35 np0005603609 systemd[1]: libpod-43add2f5e1d575685d633acb05a2a06391e8a2aa47e31005378dbf660b532e12.scope: Deactivated successfully.
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.520 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.525 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:35 np0005603609 podman[262795]: 2026-01-31 08:06:35.525826721 +0000 UTC m=+0.047001763 container died 43add2f5e1d575685d633acb05a2a06391e8a2aa47e31005378dbf660b532e12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.539 221554 INFO nova.virt.libvirt.driver [-] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Instance destroyed successfully.#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.540 221554 DEBUG nova.objects.instance [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lazy-loading 'resources' on Instance uuid 892101ee-ea21-41fd-a15d-e0b300965e47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.557 221554 DEBUG nova.virt.libvirt.vif [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-19898720',display_name='tempest-ListServersNegativeTestJSON-server-19898720-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-19898720-3',id=106,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2026-01-31T08:06:20Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='43b462f5b43d48b4a33a13b069618e4c',ramdisk_id='',reservation_id='r-c7zkmfmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mode
l='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1511652820',owner_user_name='tempest-ListServersNegativeTestJSON-1511652820-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:06:20Z,user_data=None,user_id='972b6e928f014e5394261f9c8655f1de',uuid=892101ee-ea21-41fd-a15d-e0b300965e47,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "address": "fa:16:3e:9a:b1:ea", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5461cb-15", "ovs_interfaceid": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.558 221554 DEBUG nova.network.os_vif_util [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converting VIF {"id": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "address": "fa:16:3e:9a:b1:ea", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5461cb-15", "ovs_interfaceid": "7f5461cb-1537-4f9b-a71a-47cb1167841a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.559 221554 DEBUG nova.network.os_vif_util [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:b1:ea,bridge_name='br-int',has_traffic_filtering=True,id=7f5461cb-1537-4f9b-a71a-47cb1167841a,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5461cb-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.561 221554 DEBUG os_vif [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:b1:ea,bridge_name='br-int',has_traffic_filtering=True,id=7f5461cb-1537-4f9b-a71a-47cb1167841a,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5461cb-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.562 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.563 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f5461cb-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:35 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43add2f5e1d575685d633acb05a2a06391e8a2aa47e31005378dbf660b532e12-userdata-shm.mount: Deactivated successfully.
Jan 31 03:06:35 np0005603609 systemd[1]: var-lib-containers-storage-overlay-0b0de7d7129f94eacc69e17041bc26ed811098890a39a1a13569d38acaca5885-merged.mount: Deactivated successfully.
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.566 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.569 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.571 221554 INFO os_vif [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:b1:ea,bridge_name='br-int',has_traffic_filtering=True,id=7f5461cb-1537-4f9b-a71a-47cb1167841a,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5461cb-15')#033[00m
Jan 31 03:06:35 np0005603609 podman[262795]: 2026-01-31 08:06:35.588879359 +0000 UTC m=+0.110054401 container cleanup 43add2f5e1d575685d633acb05a2a06391e8a2aa47e31005378dbf660b532e12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:35 np0005603609 systemd[1]: libpod-conmon-43add2f5e1d575685d633acb05a2a06391e8a2aa47e31005378dbf660b532e12.scope: Deactivated successfully.
Jan 31 03:06:35 np0005603609 podman[262850]: 2026-01-31 08:06:35.657170833 +0000 UTC m=+0.045110258 container remove 43add2f5e1d575685d633acb05a2a06391e8a2aa47e31005378dbf660b532e12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 03:06:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:35.661 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f9c99e5d-6188-445b-be43-64313587c73b]: (4, ('Sat Jan 31 08:06:35 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569 (43add2f5e1d575685d633acb05a2a06391e8a2aa47e31005378dbf660b532e12)\n43add2f5e1d575685d633acb05a2a06391e8a2aa47e31005378dbf660b532e12\nSat Jan 31 08:06:35 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569 (43add2f5e1d575685d633acb05a2a06391e8a2aa47e31005378dbf660b532e12)\n43add2f5e1d575685d633acb05a2a06391e8a2aa47e31005378dbf660b532e12\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:35.662 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[279efe44-1e30-4b19-9e08-10beefe1c772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:35.663 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3091b0d-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.665 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:35 np0005603609 kernel: tapf3091b0d-00: left promiscuous mode
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.667 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:35.670 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[87f41ddc-0034-4ed7-8d1a-14e19c70cdbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:35 np0005603609 nova_compute[221550]: 2026-01-31 08:06:35.672 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:35.684 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[32c7fa30-7589-47cf-9c10-a12bab6dfa0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:35.686 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[69eab3fe-c374-4f1c-850f-274a3852a8b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:35.695 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3f858189-6289-415d-ac5c-cfddd1275be1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701732, 'reachable_time': 17390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262867, 'error': None, 'target': 'ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:35.697 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:06:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:06:35.697 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[90661fd5-3824-4fc9-be31-5f4acf5ee054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:35 np0005603609 systemd[1]: run-netns-ovnmeta\x2df3091b0d\x2d0fc9\x2d4172\x2db2af\x2d6d9c678c6569.mount: Deactivated successfully.
Jan 31 03:06:35 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:06:35 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:06:35 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:06:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:35.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:06:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:36.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:06:36 np0005603609 nova_compute[221550]: 2026-01-31 08:06:36.442 221554 INFO nova.virt.libvirt.driver [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Deleting instance files /var/lib/nova/instances/892101ee-ea21-41fd-a15d-e0b300965e47_del#033[00m
Jan 31 03:06:36 np0005603609 nova_compute[221550]: 2026-01-31 08:06:36.444 221554 INFO nova.virt.libvirt.driver [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Deletion of /var/lib/nova/instances/892101ee-ea21-41fd-a15d-e0b300965e47_del complete#033[00m
Jan 31 03:06:36 np0005603609 nova_compute[221550]: 2026-01-31 08:06:36.517 221554 INFO nova.compute.manager [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Took 1.21 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:06:36 np0005603609 nova_compute[221550]: 2026-01-31 08:06:36.518 221554 DEBUG oslo.service.loopingcall [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:06:36 np0005603609 nova_compute[221550]: 2026-01-31 08:06:36.518 221554 DEBUG nova.compute.manager [-] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:06:36 np0005603609 nova_compute[221550]: 2026-01-31 08:06:36.518 221554 DEBUG nova.network.neutron [-] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.285 221554 DEBUG nova.network.neutron [-] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.340 221554 INFO nova.compute.manager [-] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Took 0.82 seconds to deallocate network for instance.#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.364 221554 DEBUG nova.compute.manager [req-ae99ef88-a2dd-41bb-9de6-3c748272b69e req-6cab62fd-08af-47ca-a279-5de69be66c09 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Received event network-vif-deleted-7f5461cb-1537-4f9b-a71a-47cb1167841a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.391 221554 DEBUG oslo_concurrency.lockutils [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.391 221554 DEBUG oslo_concurrency.lockutils [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.452 221554 DEBUG oslo_concurrency.processutils [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.784 221554 DEBUG nova.compute.manager [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Received event network-vif-unplugged-7f5461cb-1537-4f9b-a71a-47cb1167841a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.785 221554 DEBUG oslo_concurrency.lockutils [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.785 221554 DEBUG oslo_concurrency.lockutils [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.786 221554 DEBUG oslo_concurrency.lockutils [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.786 221554 DEBUG nova.compute.manager [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] No waiting events found dispatching network-vif-unplugged-7f5461cb-1537-4f9b-a71a-47cb1167841a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.787 221554 WARNING nova.compute.manager [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Received unexpected event network-vif-unplugged-7f5461cb-1537-4f9b-a71a-47cb1167841a for instance with vm_state deleted and task_state None.#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.787 221554 DEBUG nova.compute.manager [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Received event network-vif-plugged-7f5461cb-1537-4f9b-a71a-47cb1167841a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.787 221554 DEBUG oslo_concurrency.lockutils [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.788 221554 DEBUG oslo_concurrency.lockutils [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.788 221554 DEBUG oslo_concurrency.lockutils [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "892101ee-ea21-41fd-a15d-e0b300965e47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.789 221554 DEBUG nova.compute.manager [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] No waiting events found dispatching network-vif-plugged-7f5461cb-1537-4f9b-a71a-47cb1167841a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.789 221554 WARNING nova.compute.manager [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Received unexpected event network-vif-plugged-7f5461cb-1537-4f9b-a71a-47cb1167841a for instance with vm_state deleted and task_state None.#033[00m
Jan 31 03:06:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:06:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2586767570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:06:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:37.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.848 221554 DEBUG oslo_concurrency.processutils [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.853 221554 DEBUG nova.compute.provider_tree [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:06:37 np0005603609 nova_compute[221550]: 2026-01-31 08:06:37.890 221554 DEBUG nova.scheduler.client.report [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:06:38 np0005603609 nova_compute[221550]: 2026-01-31 08:06:38.072 221554 DEBUG oslo_concurrency.lockutils [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:38 np0005603609 nova_compute[221550]: 2026-01-31 08:06:38.117 221554 INFO nova.scheduler.client.report [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Deleted allocations for instance 892101ee-ea21-41fd-a15d-e0b300965e47#033[00m
Jan 31 03:06:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:06:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:38.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:06:38 np0005603609 nova_compute[221550]: 2026-01-31 08:06:38.275 221554 DEBUG oslo_concurrency.lockutils [None req-004c7a84-de22-4dd1-a011-0eb90ce14ff9 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "892101ee-ea21-41fd-a15d-e0b300965e47" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:39 np0005603609 nova_compute[221550]: 2026-01-31 08:06:39.075 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:39.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:40.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:40 np0005603609 nova_compute[221550]: 2026-01-31 08:06:40.566 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:41 np0005603609 nova_compute[221550]: 2026-01-31 08:06:41.093 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:41.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:42.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:43 np0005603609 nova_compute[221550]: 2026-01-31 08:06:43.455 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e261 e261: 3 total, 3 up, 3 in
Jan 31 03:06:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:43.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:44 np0005603609 nova_compute[221550]: 2026-01-31 08:06:44.077 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:44.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:06:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1300736106' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:06:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:06:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1300736106' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:06:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:45 np0005603609 nova_compute[221550]: 2026-01-31 08:06:45.568 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:06:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:45.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:06:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:46.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:47 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:06:47 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:06:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:47.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:48.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:49 np0005603609 nova_compute[221550]: 2026-01-31 08:06:49.079 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:49.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:50.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:50 np0005603609 nova_compute[221550]: 2026-01-31 08:06:50.536 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846795.5343344, 892101ee-ea21-41fd-a15d-e0b300965e47 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:06:50 np0005603609 nova_compute[221550]: 2026-01-31 08:06:50.536 221554 INFO nova.compute.manager [-] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:06:50 np0005603609 nova_compute[221550]: 2026-01-31 08:06:50.570 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:50 np0005603609 nova_compute[221550]: 2026-01-31 08:06:50.655 221554 DEBUG nova.compute.manager [None req-feb40b63-e191-4fd0-8062-4d15a3cee8cf - - - - - -] [instance: 892101ee-ea21-41fd-a15d-e0b300965e47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:50 np0005603609 nova_compute[221550]: 2026-01-31 08:06:50.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:51.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:52.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:06:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:53.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:06:54 np0005603609 nova_compute[221550]: 2026-01-31 08:06:54.080 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:54.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e262 e262: 3 total, 3 up, 3 in
Jan 31 03:06:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:55 np0005603609 nova_compute[221550]: 2026-01-31 08:06:55.573 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:55.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:56.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:57.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:06:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:58.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:06:59 np0005603609 nova_compute[221550]: 2026-01-31 08:06:59.082 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:59.693832) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819693870, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 537, "num_deletes": 252, "total_data_size": 793236, "memory_usage": 803472, "flush_reason": "Manual Compaction"}
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819724818, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 523258, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49421, "largest_seqno": 49953, "table_properties": {"data_size": 520325, "index_size": 905, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7142, "raw_average_key_size": 19, "raw_value_size": 514368, "raw_average_value_size": 1413, "num_data_blocks": 39, "num_entries": 364, "num_filter_entries": 364, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846794, "oldest_key_time": 1769846794, "file_creation_time": 1769846819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 31048 microseconds, and 1834 cpu microseconds.
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:59.724879) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 523258 bytes OK
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:59.724902) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:59.736424) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:59.736470) EVENT_LOG_v1 {"time_micros": 1769846819736460, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:59.736496) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 790077, prev total WAL file size 790077, number of live WAL files 2.
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:59.737000) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(510KB)], [96(10MB)]
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819737044, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 11243763, "oldest_snapshot_seqno": -1}
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 7174 keys, 9276881 bytes, temperature: kUnknown
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819826002, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 9276881, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9231398, "index_size": 26427, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 187189, "raw_average_key_size": 26, "raw_value_size": 9105723, "raw_average_value_size": 1269, "num_data_blocks": 1034, "num_entries": 7174, "num_filter_entries": 7174, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769846819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:59.826232) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9276881 bytes
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:59.829195) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.3 rd, 104.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.2 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(39.2) write-amplify(17.7) OK, records in: 7694, records dropped: 520 output_compression: NoCompression
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:59.829213) EVENT_LOG_v1 {"time_micros": 1769846819829205, "job": 60, "event": "compaction_finished", "compaction_time_micros": 89026, "compaction_time_cpu_micros": 15576, "output_level": 6, "num_output_files": 1, "total_output_size": 9276881, "num_input_records": 7694, "num_output_records": 7174, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819829374, "job": 60, "event": "table_file_deletion", "file_number": 98}
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819830470, "job": 60, "event": "table_file_deletion", "file_number": 96}
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:59.736893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:59.830509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:59.830513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:59.830515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:59.830517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:59 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:06:59.830519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:06:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:06:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:59.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:00 np0005603609 podman[262941]: 2026-01-31 08:07:00.200558844 +0000 UTC m=+0.077063797 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 03:07:00 np0005603609 podman[262942]: 2026-01-31 08:07:00.204946759 +0000 UTC m=+0.079530706 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 03:07:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:00.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:00 np0005603609 nova_compute[221550]: 2026-01-31 08:07:00.574 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:01.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:02.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:07:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.3 total, 600.0 interval#012Cumulative writes: 9733 writes, 49K keys, 9733 commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s#012Cumulative WAL: 9733 writes, 9733 syncs, 1.00 writes per sync, written: 0.10 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1582 writes, 8005 keys, 1582 commit groups, 1.0 writes per commit group, ingest: 15.78 MB, 0.03 MB/s#012Interval WAL: 1582 writes, 1582 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     57.5      1.05              0.13        30    0.035       0      0       0.0       0.0#012  L6      1/0    8.85 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.4     80.6     67.6      3.98              0.58        29    0.137    175K    16K       0.0       0.0#012 Sum      1/0    8.85 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.4     63.7     65.5      5.03              0.71        59    0.085    175K    16K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.6     54.5     52.8      1.48              0.18        14    0.106     53K   3525       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0     80.6     67.6      3.98              0.58        29    0.137    175K    16K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     59.3      1.02              0.13        29    0.035       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.3 total, 600.0 interval#012Flush(GB): cumulative 0.059, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.32 GB write, 0.09 MB/s write, 0.31 GB read, 0.09 MB/s read, 5.0 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619ad8ed1f0#2 capacity: 304.00 MB usage: 35.79 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000262 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2088,34.48 MB,11.3424%) FilterBlock(59,494.86 KB,0.158967%) IndexBlock(59,844.75 KB,0.271366%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 03:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:07:03 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2669144292' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:07:03 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2669144292' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:07:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:07:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:03.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:07:04 np0005603609 nova_compute[221550]: 2026-01-31 08:07:04.083 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:07:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:04.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:07:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:05 np0005603609 nova_compute[221550]: 2026-01-31 08:07:05.348 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:05.349 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:07:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:05.350 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:07:05 np0005603609 nova_compute[221550]: 2026-01-31 08:07:05.575 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:07:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:05.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:07:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:07:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:06.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:07:07 np0005603609 nova_compute[221550]: 2026-01-31 08:07:07.341 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:07 np0005603609 nova_compute[221550]: 2026-01-31 08:07:07.342 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:07.352 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:07 np0005603609 nova_compute[221550]: 2026-01-31 08:07:07.453 221554 DEBUG nova.compute.manager [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:07:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:07.500 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:07.501 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:07.501 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:07 np0005603609 nova_compute[221550]: 2026-01-31 08:07:07.686 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:07 np0005603609 nova_compute[221550]: 2026-01-31 08:07:07.687 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:07 np0005603609 nova_compute[221550]: 2026-01-31 08:07:07.693 221554 DEBUG nova.virt.hardware [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:07:07 np0005603609 nova_compute[221550]: 2026-01-31 08:07:07.693 221554 INFO nova.compute.claims [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:07:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:07.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:08 np0005603609 nova_compute[221550]: 2026-01-31 08:07:08.070 221554 DEBUG oslo_concurrency.processutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:07:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:08.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:07:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:07:08 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1370603499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:07:08 np0005603609 nova_compute[221550]: 2026-01-31 08:07:08.556 221554 DEBUG oslo_concurrency.processutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:08 np0005603609 nova_compute[221550]: 2026-01-31 08:07:08.563 221554 DEBUG nova.compute.provider_tree [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:07:08 np0005603609 nova_compute[221550]: 2026-01-31 08:07:08.647 221554 DEBUG nova.scheduler.client.report [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:07:08 np0005603609 nova_compute[221550]: 2026-01-31 08:07:08.797 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:08 np0005603609 nova_compute[221550]: 2026-01-31 08:07:08.798 221554 DEBUG nova.compute.manager [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:07:08 np0005603609 nova_compute[221550]: 2026-01-31 08:07:08.946 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:08 np0005603609 nova_compute[221550]: 2026-01-31 08:07:08.947 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:08 np0005603609 nova_compute[221550]: 2026-01-31 08:07:08.958 221554 DEBUG nova.compute.manager [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:07:08 np0005603609 nova_compute[221550]: 2026-01-31 08:07:08.959 221554 DEBUG nova.network.neutron [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.084 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.094 221554 DEBUG nova.compute.manager [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.227 221554 INFO nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.508 221554 DEBUG nova.compute.manager [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.542 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.543 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.552 221554 DEBUG nova.virt.hardware [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.553 221554 INFO nova.compute.claims [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.608 221554 DEBUG nova.policy [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd9ed446fb2cf4fc0a4e619c6c766fddc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1fcec9ca13964c7191134db4420ab049', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.773 221554 DEBUG nova.compute.manager [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.774 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.774 221554 INFO nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Creating image(s)#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.794 221554 DEBUG nova.storage.rbd_utils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 2552b865-8435-46e6-823f-87ef00de854b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.820 221554 DEBUG nova.storage.rbd_utils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 2552b865-8435-46e6-823f-87ef00de854b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.843 221554 DEBUG nova.storage.rbd_utils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 2552b865-8435-46e6-823f-87ef00de854b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.846 221554 DEBUG oslo_concurrency.processutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.867 221554 DEBUG oslo_concurrency.processutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.902 221554 DEBUG oslo_concurrency.processutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.903 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.904 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.904 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:07:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:09.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.926 221554 DEBUG nova.storage.rbd_utils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 2552b865-8435-46e6-823f-87ef00de854b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:09 np0005603609 nova_compute[221550]: 2026-01-31 08:07:09.929 221554 DEBUG oslo_concurrency.processutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 2552b865-8435-46e6-823f-87ef00de854b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:10.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.249 221554 DEBUG oslo_concurrency.processutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 2552b865-8435-46e6-823f-87ef00de854b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:07:10 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/325742308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:07:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.288 221554 DEBUG oslo_concurrency.processutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.332 221554 DEBUG nova.storage.rbd_utils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] resizing rbd image 2552b865-8435-46e6-823f-87ef00de854b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.373 221554 DEBUG nova.compute.provider_tree [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.449 221554 DEBUG nova.scheduler.client.report [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.461 221554 DEBUG nova.objects.instance [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'migration_context' on Instance uuid 2552b865-8435-46e6-823f-87ef00de854b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.524 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.525 221554 DEBUG nova.compute.manager [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.543 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.544 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Ensure instance console log exists: /var/lib/nova/instances/2552b865-8435-46e6-823f-87ef00de854b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.544 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.545 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.545 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.576 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.628 221554 DEBUG nova.compute.manager [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.629 221554 DEBUG nova.network.neutron [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.689 221554 INFO nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.804 221554 DEBUG nova.compute.manager [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:07:10 np0005603609 nova_compute[221550]: 2026-01-31 08:07:10.909 221554 DEBUG nova.network.neutron [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Successfully created port: 9256ad7f-5794-4914-b5dd-12cc282f6172 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.021 221554 DEBUG nova.policy [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '18aee9d81d404f77ac81cde538f140d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.086 221554 DEBUG nova.compute.manager [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.088 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.088 221554 INFO nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Creating image(s)#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.120 221554 DEBUG nova.storage.rbd_utils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.154 221554 DEBUG nova.storage.rbd_utils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.183 221554 DEBUG nova.storage.rbd_utils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.187 221554 DEBUG oslo_concurrency.processutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.253 221554 DEBUG oslo_concurrency.processutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.253 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.254 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.254 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.277 221554 DEBUG nova.storage.rbd_utils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.281 221554 DEBUG oslo_concurrency.processutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.676 221554 DEBUG oslo_concurrency.processutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.754 221554 DEBUG nova.storage.rbd_utils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] resizing rbd image 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:07:11 np0005603609 nova_compute[221550]: 2026-01-31 08:07:11.884 221554 DEBUG nova.objects.instance [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'migration_context' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:11.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:12 np0005603609 nova_compute[221550]: 2026-01-31 08:07:12.038 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:07:12 np0005603609 nova_compute[221550]: 2026-01-31 08:07:12.039 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Ensure instance console log exists: /var/lib/nova/instances/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:07:12 np0005603609 nova_compute[221550]: 2026-01-31 08:07:12.040 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:12 np0005603609 nova_compute[221550]: 2026-01-31 08:07:12.040 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:12 np0005603609 nova_compute[221550]: 2026-01-31 08:07:12.041 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:12.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:13.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:14 np0005603609 nova_compute[221550]: 2026-01-31 08:07:14.085 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:14.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:15 np0005603609 nova_compute[221550]: 2026-01-31 08:07:15.579 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:15 np0005603609 nova_compute[221550]: 2026-01-31 08:07:15.691 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:15.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:16 np0005603609 nova_compute[221550]: 2026-01-31 08:07:16.001 221554 DEBUG nova.network.neutron [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Successfully created port: f99a05c5-fb6e-4cb4-a735-e492526b8a2c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:07:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:07:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:16.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:07:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:07:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3047707787' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:07:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:07:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3047707787' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:07:16 np0005603609 nova_compute[221550]: 2026-01-31 08:07:16.326 221554 DEBUG nova.network.neutron [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Successfully updated port: 9256ad7f-5794-4914-b5dd-12cc282f6172 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:07:16 np0005603609 nova_compute[221550]: 2026-01-31 08:07:16.483 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:07:16 np0005603609 nova_compute[221550]: 2026-01-31 08:07:16.484 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquired lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:07:16 np0005603609 nova_compute[221550]: 2026-01-31 08:07:16.484 221554 DEBUG nova.network.neutron [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:07:16 np0005603609 nova_compute[221550]: 2026-01-31 08:07:16.600 221554 DEBUG nova.compute.manager [req-4839eca6-fd28-42ca-bc22-6434224d1ce8 req-a618a0d4-aa34-41aa-96cd-260c5dd50ee9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received event network-changed-9256ad7f-5794-4914-b5dd-12cc282f6172 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:16 np0005603609 nova_compute[221550]: 2026-01-31 08:07:16.601 221554 DEBUG nova.compute.manager [req-4839eca6-fd28-42ca-bc22-6434224d1ce8 req-a618a0d4-aa34-41aa-96cd-260c5dd50ee9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Refreshing instance network info cache due to event network-changed-9256ad7f-5794-4914-b5dd-12cc282f6172. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:07:16 np0005603609 nova_compute[221550]: 2026-01-31 08:07:16.601 221554 DEBUG oslo_concurrency.lockutils [req-4839eca6-fd28-42ca-bc22-6434224d1ce8 req-a618a0d4-aa34-41aa-96cd-260c5dd50ee9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:07:16 np0005603609 nova_compute[221550]: 2026-01-31 08:07:16.828 221554 DEBUG nova.network.neutron [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:07:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:07:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:17.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:07:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:18.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:18 np0005603609 nova_compute[221550]: 2026-01-31 08:07:18.789 221554 DEBUG nova.network.neutron [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Successfully updated port: f99a05c5-fb6e-4cb4-a735-e492526b8a2c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:07:19 np0005603609 nova_compute[221550]: 2026-01-31 08:07:19.087 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:19 np0005603609 nova_compute[221550]: 2026-01-31 08:07:19.388 221554 DEBUG nova.compute.manager [req-c9dc25e5-e5cd-42a4-8df2-250e7a6f1a1f req-40f1dbd0-9430-4506-8427-bc14b5cfc572 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Received event network-changed-f99a05c5-fb6e-4cb4-a735-e492526b8a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:19 np0005603609 nova_compute[221550]: 2026-01-31 08:07:19.388 221554 DEBUG nova.compute.manager [req-c9dc25e5-e5cd-42a4-8df2-250e7a6f1a1f req-40f1dbd0-9430-4506-8427-bc14b5cfc572 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Refreshing instance network info cache due to event network-changed-f99a05c5-fb6e-4cb4-a735-e492526b8a2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:07:19 np0005603609 nova_compute[221550]: 2026-01-31 08:07:19.389 221554 DEBUG oslo_concurrency.lockutils [req-c9dc25e5-e5cd-42a4-8df2-250e7a6f1a1f req-40f1dbd0-9430-4506-8427-bc14b5cfc572 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:07:19 np0005603609 nova_compute[221550]: 2026-01-31 08:07:19.389 221554 DEBUG oslo_concurrency.lockutils [req-c9dc25e5-e5cd-42a4-8df2-250e7a6f1a1f req-40f1dbd0-9430-4506-8427-bc14b5cfc572 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:07:19 np0005603609 nova_compute[221550]: 2026-01-31 08:07:19.389 221554 DEBUG nova.network.neutron [req-c9dc25e5-e5cd-42a4-8df2-250e7a6f1a1f req-40f1dbd0-9430-4506-8427-bc14b5cfc572 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Refreshing network info cache for port f99a05c5-fb6e-4cb4-a735-e492526b8a2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:07:19 np0005603609 nova_compute[221550]: 2026-01-31 08:07:19.407 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:07:19 np0005603609 nova_compute[221550]: 2026-01-31 08:07:19.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:19 np0005603609 nova_compute[221550]: 2026-01-31 08:07:19.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:19 np0005603609 nova_compute[221550]: 2026-01-31 08:07:19.727 221554 DEBUG nova.network.neutron [req-c9dc25e5-e5cd-42a4-8df2-250e7a6f1a1f req-40f1dbd0-9430-4506-8427-bc14b5cfc572 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:07:19 np0005603609 nova_compute[221550]: 2026-01-31 08:07:19.734 221554 DEBUG nova.network.neutron [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Updating instance_info_cache with network_info: [{"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:07:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:19.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.120 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Releasing lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.121 221554 DEBUG nova.compute.manager [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Instance network_info: |[{"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.121 221554 DEBUG oslo_concurrency.lockutils [req-4839eca6-fd28-42ca-bc22-6434224d1ce8 req-a618a0d4-aa34-41aa-96cd-260c5dd50ee9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.121 221554 DEBUG nova.network.neutron [req-4839eca6-fd28-42ca-bc22-6434224d1ce8 req-a618a0d4-aa34-41aa-96cd-260c5dd50ee9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Refreshing network info cache for port 9256ad7f-5794-4914-b5dd-12cc282f6172 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.124 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Start _get_guest_xml network_info=[{"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.129 221554 WARNING nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.133 221554 DEBUG nova.virt.libvirt.host [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.134 221554 DEBUG nova.virt.libvirt.host [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.138 221554 DEBUG nova.virt.libvirt.host [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.138 221554 DEBUG nova.virt.libvirt.host [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.139 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.139 221554 DEBUG nova.virt.hardware [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.140 221554 DEBUG nova.virt.hardware [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.140 221554 DEBUG nova.virt.hardware [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.140 221554 DEBUG nova.virt.hardware [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.140 221554 DEBUG nova.virt.hardware [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.140 221554 DEBUG nova.virt.hardware [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.141 221554 DEBUG nova.virt.hardware [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.141 221554 DEBUG nova.virt.hardware [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.141 221554 DEBUG nova.virt.hardware [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.141 221554 DEBUG nova.virt.hardware [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.141 221554 DEBUG nova.virt.hardware [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.144 221554 DEBUG oslo_concurrency.processutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:20.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.581 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:07:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3349957308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.837 221554 DEBUG nova.network.neutron [req-c9dc25e5-e5cd-42a4-8df2-250e7a6f1a1f req-40f1dbd0-9430-4506-8427-bc14b5cfc572 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.844 221554 DEBUG oslo_concurrency.processutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.701s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.872 221554 DEBUG nova.storage.rbd_utils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 2552b865-8435-46e6-823f-87ef00de854b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.877 221554 DEBUG oslo_concurrency.processutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.909 221554 DEBUG oslo_concurrency.lockutils [req-c9dc25e5-e5cd-42a4-8df2-250e7a6f1a1f req-40f1dbd0-9430-4506-8427-bc14b5cfc572 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.910 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquired lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:07:20 np0005603609 nova_compute[221550]: 2026-01-31 08:07:20.911 221554 DEBUG nova.network.neutron [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:07:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:07:21 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2887923547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.329 221554 DEBUG oslo_concurrency.processutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.331 221554 DEBUG nova.virt.libvirt.vif [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:07:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1274879352',display_name='tempest-ServerActionsTestJSON-server-1274879352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1274879352',id=107,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-7zccd5vw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:07:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=2552b865-8435-46e6-823f-87ef00de854b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.332 221554 DEBUG nova.network.os_vif_util [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.334 221554 DEBUG nova.network.os_vif_util [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.335 221554 DEBUG nova.objects.instance [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2552b865-8435-46e6-823f-87ef00de854b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.367 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  <uuid>2552b865-8435-46e6-823f-87ef00de854b</uuid>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  <name>instance-0000006b</name>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerActionsTestJSON-server-1274879352</nova:name>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:07:20</nova:creationTime>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        <nova:user uuid="d9ed446fb2cf4fc0a4e619c6c766fddc">tempest-ServerActionsTestJSON-1391450973-project-member</nova:user>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        <nova:project uuid="1fcec9ca13964c7191134db4420ab049">tempest-ServerActionsTestJSON-1391450973</nova:project>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        <nova:port uuid="9256ad7f-5794-4914-b5dd-12cc282f6172">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <entry name="serial">2552b865-8435-46e6-823f-87ef00de854b</entry>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <entry name="uuid">2552b865-8435-46e6-823f-87ef00de854b</entry>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/2552b865-8435-46e6-823f-87ef00de854b_disk">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/2552b865-8435-46e6-823f-87ef00de854b_disk.config">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:a7:92:b0"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <target dev="tap9256ad7f-57"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/2552b865-8435-46e6-823f-87ef00de854b/console.log" append="off"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:07:21 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:07:21 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:07:21 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:07:21 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.368 221554 DEBUG nova.compute.manager [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Preparing to wait for external event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.368 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.368 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.368 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.369 221554 DEBUG nova.virt.libvirt.vif [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:07:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1274879352',display_name='tempest-ServerActionsTestJSON-server-1274879352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1274879352',id=107,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-7zccd5vw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:07:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=2552b865-8435-46e6-823f-87ef00de854b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.369 221554 DEBUG nova.network.os_vif_util [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.370 221554 DEBUG nova.network.os_vif_util [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.370 221554 DEBUG os_vif [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.371 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.371 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.372 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.376 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.376 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9256ad7f-57, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.377 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9256ad7f-57, col_values=(('external_ids', {'iface-id': '9256ad7f-5794-4914-b5dd-12cc282f6172', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:92:b0', 'vm-uuid': '2552b865-8435-46e6-823f-87ef00de854b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.378 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:21 np0005603609 NetworkManager[49064]: <info>  [1769846841.3799] manager: (tap9256ad7f-57): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.380 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.384 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.385 221554 INFO os_vif [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57')#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.440 221554 DEBUG nova.network.neutron [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.838 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.839 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.839 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] No VIF found with MAC fa:16:3e:a7:92:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.839 221554 INFO nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Using config drive#033[00m
Jan 31 03:07:21 np0005603609 nova_compute[221550]: 2026-01-31 08:07:21.864 221554 DEBUG nova.storage.rbd_utils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 2552b865-8435-46e6-823f-87ef00de854b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:21.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:22.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:22 np0005603609 nova_compute[221550]: 2026-01-31 08:07:22.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:22 np0005603609 nova_compute[221550]: 2026-01-31 08:07:22.673 221554 DEBUG nova.network.neutron [req-4839eca6-fd28-42ca-bc22-6434224d1ce8 req-a618a0d4-aa34-41aa-96cd-260c5dd50ee9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Updated VIF entry in instance network info cache for port 9256ad7f-5794-4914-b5dd-12cc282f6172. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:07:22 np0005603609 nova_compute[221550]: 2026-01-31 08:07:22.673 221554 DEBUG nova.network.neutron [req-4839eca6-fd28-42ca-bc22-6434224d1ce8 req-a618a0d4-aa34-41aa-96cd-260c5dd50ee9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Updating instance_info_cache with network_info: [{"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:07:22 np0005603609 nova_compute[221550]: 2026-01-31 08:07:22.897 221554 DEBUG oslo_concurrency.lockutils [req-4839eca6-fd28-42ca-bc22-6434224d1ce8 req-a618a0d4-aa34-41aa-96cd-260c5dd50ee9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:07:23 np0005603609 nova_compute[221550]: 2026-01-31 08:07:23.040 221554 INFO nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Creating config drive at /var/lib/nova/instances/2552b865-8435-46e6-823f-87ef00de854b/disk.config#033[00m
Jan 31 03:07:23 np0005603609 nova_compute[221550]: 2026-01-31 08:07:23.044 221554 DEBUG oslo_concurrency.processutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2552b865-8435-46e6-823f-87ef00de854b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfx6mucdr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:23 np0005603609 nova_compute[221550]: 2026-01-31 08:07:23.168 221554 DEBUG oslo_concurrency.processutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2552b865-8435-46e6-823f-87ef00de854b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfx6mucdr" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:23 np0005603609 nova_compute[221550]: 2026-01-31 08:07:23.201 221554 DEBUG nova.storage.rbd_utils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] rbd image 2552b865-8435-46e6-823f-87ef00de854b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:23 np0005603609 nova_compute[221550]: 2026-01-31 08:07:23.205 221554 DEBUG oslo_concurrency.processutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2552b865-8435-46e6-823f-87ef00de854b/disk.config 2552b865-8435-46e6-823f-87ef00de854b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:07:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:23.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:07:23 np0005603609 nova_compute[221550]: 2026-01-31 08:07:23.960 221554 DEBUG nova.network.neutron [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Updating instance_info_cache with network_info: [{"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.027 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Releasing lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.028 221554 DEBUG nova.compute.manager [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Instance network_info: |[{"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.031 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Start _get_guest_xml network_info=[{"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.036 221554 WARNING nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.041 221554 DEBUG nova.virt.libvirt.host [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.042 221554 DEBUG nova.virt.libvirt.host [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.048 221554 DEBUG nova.virt.libvirt.host [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.049 221554 DEBUG nova.virt.libvirt.host [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.050 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.050 221554 DEBUG nova.virt.hardware [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.051 221554 DEBUG nova.virt.hardware [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.051 221554 DEBUG nova.virt.hardware [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.051 221554 DEBUG nova.virt.hardware [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.051 221554 DEBUG nova.virt.hardware [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.051 221554 DEBUG nova.virt.hardware [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.052 221554 DEBUG nova.virt.hardware [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.052 221554 DEBUG nova.virt.hardware [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.052 221554 DEBUG nova.virt.hardware [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.052 221554 DEBUG nova.virt.hardware [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.053 221554 DEBUG nova.virt.hardware [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.056 221554 DEBUG oslo_concurrency.processutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.090 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:24.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.487 221554 DEBUG oslo_concurrency.processutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2552b865-8435-46e6-823f-87ef00de854b/disk.config 2552b865-8435-46e6-823f-87ef00de854b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.487 221554 INFO nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Deleting local config drive /var/lib/nova/instances/2552b865-8435-46e6-823f-87ef00de854b/disk.config because it was imported into RBD.#033[00m
Jan 31 03:07:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:07:24 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/853840470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:07:24 np0005603609 kernel: tap9256ad7f-57: entered promiscuous mode
Jan 31 03:07:24 np0005603609 NetworkManager[49064]: <info>  [1769846844.5313] manager: (tap9256ad7f-57): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.531 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:24 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:24Z|00416|binding|INFO|Claiming lport 9256ad7f-5794-4914-b5dd-12cc282f6172 for this chassis.
Jan 31 03:07:24 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:24Z|00417|binding|INFO|9256ad7f-5794-4914-b5dd-12cc282f6172: Claiming fa:16:3e:a7:92:b0 10.100.0.9
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.534 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:24 np0005603609 systemd-udevd[263520]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.564 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.563 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:92:b0 10.100.0.9'], port_security=['fa:16:3e:a7:92:b0 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2552b865-8435-46e6-823f-87ef00de854b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=9256ad7f-5794-4914-b5dd-12cc282f6172) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.566 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 9256ad7f-5794-4914-b5dd-12cc282f6172 in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 bound to our chassis#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.568 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc2535f-0f8f-4713-a35c-9805048a29a8#033[00m
Jan 31 03:07:24 np0005603609 NetworkManager[49064]: <info>  [1769846844.5699] device (tap9256ad7f-57): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:07:24 np0005603609 NetworkManager[49064]: <info>  [1769846844.5708] device (tap9256ad7f-57): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:07:24 np0005603609 systemd-machined[190912]: New machine qemu-52-instance-0000006b.
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.578 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9c05f1c3-61b4-46ae-9653-bb116883f3f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.579 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5cc2535f-01 in ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.580 221554 DEBUG oslo_concurrency.processutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:24 np0005603609 systemd[1]: Started Virtual Machine qemu-52-instance-0000006b.
Jan 31 03:07:24 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:24Z|00418|binding|INFO|Setting lport 9256ad7f-5794-4914-b5dd-12cc282f6172 ovn-installed in OVS
Jan 31 03:07:24 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:24Z|00419|binding|INFO|Setting lport 9256ad7f-5794-4914-b5dd-12cc282f6172 up in Southbound
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.585 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5cc2535f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.585 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5dd22f-58a0-4e01-9cde-094cd97224a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.586 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[99a39f7e-6367-4583-b7ac-2e68e6a83b66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.594 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad853fa-11a5-4d2e-a100-5d8c3ca877af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.606 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[55fa866a-7fef-4aa0-b231-3fbd7cbc43b6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.623 221554 DEBUG nova.storage.rbd_utils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.632 221554 DEBUG oslo_concurrency.processutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.632 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[86fbdcb0-f677-47da-aa7f-c4ef34953293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.637 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3cc7f9af-0df4-4687-9193-d97bd6917cbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:24 np0005603609 NetworkManager[49064]: <info>  [1769846844.6379] manager: (tap5cc2535f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/207)
Jan 31 03:07:24 np0005603609 systemd-udevd[263525]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.652 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.660 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ade6ad-a641-45f5-aa94-4f354761ba83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.663 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[1a8f23cd-d494-4931-8c8e-fb28d77cc93f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:24 np0005603609 NetworkManager[49064]: <info>  [1769846844.6772] device (tap5cc2535f-00): carrier: link connected
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.680 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[5c314ce7-2991-4c6e-bed9-4c2c8664905c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.694 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb12518-3973-4c8b-b007-670505e92792]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708437, 'reachable_time': 34936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263575, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.705 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ba0967ad-c8e6-423d-9b83-afa0f313e988]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:76f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 708437, 'tstamp': 708437}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263576, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.715 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f291cc49-96ce-4e94-a38b-37ba76d20382]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708437, 'reachable_time': 34936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263577, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.735 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9eba05d0-1b6f-4683-b70e-9d7b4a5c4fe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.781 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[662590de-16f1-4f26-a9a2-341c4811065e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.782 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.783 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.783 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc2535f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:24 np0005603609 NetworkManager[49064]: <info>  [1769846844.7853] manager: (tap5cc2535f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Jan 31 03:07:24 np0005603609 kernel: tap5cc2535f-00: entered promiscuous mode
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.787 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc2535f-00, col_values=(('external_ids', {'iface-id': 'ab077a7e-cc79-4948-8987-2cd87d88deff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.787 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:24 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:24Z|00420|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.795 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.797 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.798 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1587a4f0-725a-4fbb-94d8-a9e6197b7c5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.799 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:07:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:24.799 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'env', 'PROCESS_TAG=haproxy-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5cc2535f-0f8f-4713-a35c-9805048a29a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.962 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846844.9617941, 2552b865-8435-46e6-823f-87ef00de854b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:07:24 np0005603609 nova_compute[221550]: 2026-01-31 08:07:24.962 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] VM Started (Lifecycle Event)#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.007 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.012 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846844.9620256, 2552b865-8435-46e6-823f-87ef00de854b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.012 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:07:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:07:25 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1519004852' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.048 221554 DEBUG oslo_concurrency.processutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.049 221554 DEBUG nova.virt.libvirt.vif [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:07:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-487797123',display_name='tempest-ServerActionsTestOtherB-server-487797123',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-487797123',id=108,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-opoua6f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:07:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=1d463d8d-cbf7-43ec-8042-83d2c9aa5c69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.050 221554 DEBUG nova.network.os_vif_util [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.051 221554 DEBUG nova.network.os_vif_util [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.052 221554 DEBUG nova.objects.instance [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.092 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.095 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  <uuid>1d463d8d-cbf7-43ec-8042-83d2c9aa5c69</uuid>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  <name>instance-0000006c</name>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerActionsTestOtherB-server-487797123</nova:name>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:07:24</nova:creationTime>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        <nova:user uuid="18aee9d81d404f77ac81cde538f140d8">tempest-ServerActionsTestOtherB-2012907318-project-member</nova:user>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        <nova:project uuid="c3ddadeb950a490db5c99da98a32c9ec">tempest-ServerActionsTestOtherB-2012907318</nova:project>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        <nova:port uuid="f99a05c5-fb6e-4cb4-a735-e492526b8a2c">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <entry name="serial">1d463d8d-cbf7-43ec-8042-83d2c9aa5c69</entry>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <entry name="uuid">1d463d8d-cbf7-43ec-8042-83d2c9aa5c69</entry>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk.config">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:e8:a2:84"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <target dev="tapf99a05c5-fb"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69/console.log" append="off"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:07:25 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:07:25 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:07:25 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:07:25 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.095 221554 DEBUG nova.compute.manager [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Preparing to wait for external event network-vif-plugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.095 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.096 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.096 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.096 221554 DEBUG nova.virt.libvirt.vif [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:07:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-487797123',display_name='tempest-ServerActionsTestOtherB-server-487797123',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-487797123',id=108,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-opoua6f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:07:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=1d463d8d-cbf7-43ec-8042-83d2c9aa5c69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.097 221554 DEBUG nova.network.os_vif_util [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.098 221554 DEBUG nova.network.os_vif_util [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.098 221554 DEBUG os_vif [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.099 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.099 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.100 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.103 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.103 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf99a05c5-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.103 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf99a05c5-fb, col_values=(('external_ids', {'iface-id': 'f99a05c5-fb6e-4cb4-a735-e492526b8a2c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:a2:84', 'vm-uuid': '1d463d8d-cbf7-43ec-8042-83d2c9aa5c69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.105 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:25 np0005603609 NetworkManager[49064]: <info>  [1769846845.1067] manager: (tapf99a05c5-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.107 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.108 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.112 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.113 221554 INFO os_vif [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb')#033[00m
Jan 31 03:07:25 np0005603609 podman[263672]: 2026-01-31 08:07:25.151020034 +0000 UTC m=+0.061130373 container create 7c958ab883c3c7adb3bc4ce9e950ce6d6863bf7af897622eb351df48ab189bbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 03:07:25 np0005603609 systemd[1]: Started libpod-conmon-7c958ab883c3c7adb3bc4ce9e950ce6d6863bf7af897622eb351df48ab189bbe.scope.
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.191 221554 DEBUG nova.compute.manager [req-634a9396-a9bb-4407-8cec-cdb8232c03ec req-9d7a3229-9938-4ef2-b64e-27484e65306a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.192 221554 DEBUG oslo_concurrency.lockutils [req-634a9396-a9bb-4407-8cec-cdb8232c03ec req-9d7a3229-9938-4ef2-b64e-27484e65306a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.192 221554 DEBUG oslo_concurrency.lockutils [req-634a9396-a9bb-4407-8cec-cdb8232c03ec req-9d7a3229-9938-4ef2-b64e-27484e65306a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.192 221554 DEBUG oslo_concurrency.lockutils [req-634a9396-a9bb-4407-8cec-cdb8232c03ec req-9d7a3229-9938-4ef2-b64e-27484e65306a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.192 221554 DEBUG nova.compute.manager [req-634a9396-a9bb-4407-8cec-cdb8232c03ec req-9d7a3229-9938-4ef2-b64e-27484e65306a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Processing event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.193 221554 DEBUG nova.compute.manager [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.196 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.199 221554 INFO nova.virt.libvirt.driver [-] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Instance spawned successfully.#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.199 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:07:25 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:07:25 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/926227c7e776d303e9e57c024eb5c0225453c79d397cab32d33c65bae531a3e2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:25 np0005603609 podman[263672]: 2026-01-31 08:07:25.110349065 +0000 UTC m=+0.020459394 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:07:25 np0005603609 podman[263672]: 2026-01-31 08:07:25.217557005 +0000 UTC m=+0.127667324 container init 7c958ab883c3c7adb3bc4ce9e950ce6d6863bf7af897622eb351df48ab189bbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 03:07:25 np0005603609 podman[263672]: 2026-01-31 08:07:25.221767006 +0000 UTC m=+0.131877305 container start 7c958ab883c3c7adb3bc4ce9e950ce6d6863bf7af897622eb351df48ab189bbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:07:25 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[263688]: [NOTICE]   (263692) : New worker (263694) forked
Jan 31 03:07:25 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[263688]: [NOTICE]   (263692) : Loading success.
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.259 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.260 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846845.1959057, 2552b865-8435-46e6-823f-87ef00de854b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.260 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:07:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.302 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.302 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.303 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.303 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.303 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.304 221554 DEBUG nova.virt.libvirt.driver [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.325 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.326 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.326 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No VIF found with MAC fa:16:3e:e8:a2:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.326 221554 INFO nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Using config drive#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.348 221554 DEBUG nova.storage.rbd_utils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.388 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.392 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.461 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.515 221554 INFO nova.compute.manager [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Took 15.74 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.515 221554 DEBUG nova.compute.manager [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.658 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.661 221554 INFO nova.compute.manager [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Took 18.07 seconds to build instance.#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.724 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:07:25 np0005603609 nova_compute[221550]: 2026-01-31 08:07:25.806 221554 DEBUG oslo_concurrency.lockutils [None req-9a7cad3f-d760-4332-936a-68bb0073bd2f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:25.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:26 np0005603609 nova_compute[221550]: 2026-01-31 08:07:26.183 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:07:26 np0005603609 nova_compute[221550]: 2026-01-31 08:07:26.183 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:07:26 np0005603609 nova_compute[221550]: 2026-01-31 08:07:26.184 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:07:26 np0005603609 nova_compute[221550]: 2026-01-31 08:07:26.184 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2552b865-8435-46e6-823f-87ef00de854b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:07:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:26.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:07:26 np0005603609 nova_compute[221550]: 2026-01-31 08:07:26.300 221554 INFO nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Creating config drive at /var/lib/nova/instances/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69/disk.config#033[00m
Jan 31 03:07:26 np0005603609 nova_compute[221550]: 2026-01-31 08:07:26.304 221554 DEBUG oslo_concurrency.processutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpenj136v2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:26 np0005603609 nova_compute[221550]: 2026-01-31 08:07:26.426 221554 DEBUG oslo_concurrency.processutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpenj136v2" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:26 np0005603609 nova_compute[221550]: 2026-01-31 08:07:26.450 221554 DEBUG nova.storage.rbd_utils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:26 np0005603609 nova_compute[221550]: 2026-01-31 08:07:26.456 221554 DEBUG oslo_concurrency.processutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69/disk.config 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:26 np0005603609 nova_compute[221550]: 2026-01-31 08:07:26.644 221554 DEBUG oslo_concurrency.processutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69/disk.config 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:26 np0005603609 nova_compute[221550]: 2026-01-31 08:07:26.646 221554 INFO nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Deleting local config drive /var/lib/nova/instances/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69/disk.config because it was imported into RBD.#033[00m
Jan 31 03:07:26 np0005603609 kernel: tapf99a05c5-fb: entered promiscuous mode
Jan 31 03:07:26 np0005603609 NetworkManager[49064]: <info>  [1769846846.7176] manager: (tapf99a05c5-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Jan 31 03:07:26 np0005603609 systemd-udevd[263572]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:07:26 np0005603609 NetworkManager[49064]: <info>  [1769846846.7537] device (tapf99a05c5-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:07:26 np0005603609 NetworkManager[49064]: <info>  [1769846846.7541] device (tapf99a05c5-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:07:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:26Z|00421|binding|INFO|Claiming lport f99a05c5-fb6e-4cb4-a735-e492526b8a2c for this chassis.
Jan 31 03:07:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:26Z|00422|binding|INFO|f99a05c5-fb6e-4cb4-a735-e492526b8a2c: Claiming fa:16:3e:e8:a2:84 10.100.0.5
Jan 31 03:07:26 np0005603609 nova_compute[221550]: 2026-01-31 08:07:26.795 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:26 np0005603609 systemd-machined[190912]: New machine qemu-53-instance-0000006c.
Jan 31 03:07:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:26Z|00423|binding|INFO|Setting lport f99a05c5-fb6e-4cb4-a735-e492526b8a2c ovn-installed in OVS
Jan 31 03:07:26 np0005603609 nova_compute[221550]: 2026-01-31 08:07:26.815 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:26 np0005603609 systemd[1]: Started Virtual Machine qemu-53-instance-0000006c.
Jan 31 03:07:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:26Z|00424|binding|INFO|Setting lport f99a05c5-fb6e-4cb4-a735-e492526b8a2c up in Southbound
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.832 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:a2:84 10.100.0.5'], port_security=['fa:16:3e:e8:a2:84 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1d463d8d-cbf7-43ec-8042-83d2c9aa5c69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5b02cc0a-856b-4d31-80e9-eccd1c696448', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=f99a05c5-fb6e-4cb4-a735-e492526b8a2c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.833 140058 INFO neutron.agent.ovn.metadata.agent [-] Port f99a05c5-fb6e-4cb4-a735-e492526b8a2c in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 bound to our chassis#033[00m
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.835 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e8014d6b-23e1-41ef-b5e2-3d770d302e72#033[00m
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.843 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2311a0fd-592b-4e8f-8dc4-dcbc42d5aa42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.844 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape8014d6b-21 in ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.845 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape8014d6b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.845 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[479c3654-8e3d-4604-9939-0a873e24edb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.846 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9e7cc9-0644-43c7-a7e0-54d70cc395b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.853 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[fa89ff7a-b5f4-4738-9253-3b6e316e421f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.875 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3a3e93-8b67-4900-b8db-ddf19acf8902]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.898 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[65ed4665-f8b8-4446-a806-ae6b12650992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:26 np0005603609 NetworkManager[49064]: <info>  [1769846846.9041] manager: (tape8014d6b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/211)
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.903 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d658c416-559f-4730-b49c-cc7f784ce568]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.927 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[35b15e0b-3093-43cc-9a4e-ba7e5e2eff86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.930 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[bc551c0d-70f9-4334-9e3d-dfe38478b01e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:26 np0005603609 NetworkManager[49064]: <info>  [1769846846.9482] device (tape8014d6b-20): carrier: link connected
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.950 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[55176c8f-f531-4fc6-920e-8b07c7f43d93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.964 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[472f5571-d5be-457e-847d-8a10571346f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708664, 'reachable_time': 44076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263791, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.976 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7b2a9e-0a84-4b8a-aaf8-21d4663007f1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:c1c3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 708664, 'tstamp': 708664}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263792, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:26.988 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[52d0c070-27e9-4b83-9ada-4aff42282bdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708664, 'reachable_time': 44076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263793, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:27.011 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8c5a202f-e2fd-464f-9d05-d12fac5bab17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:27.050 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8511d736-f4a8-41b3-849e-0f5d5220b618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:27.051 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:27.051 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:27.052 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8014d6b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:27 np0005603609 NetworkManager[49064]: <info>  [1769846847.0541] manager: (tape8014d6b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.053 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:27 np0005603609 kernel: tape8014d6b-20: entered promiscuous mode
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.057 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:27.057 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape8014d6b-20, col_values=(('external_ids', {'iface-id': '4bb3ff19-f70b-4c8d-a829-66ff18233b61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.058 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:27 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:27Z|00425|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:27.059 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:27.060 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f407f9-4b05-40ed-aaca-f64a5c66b866]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:27.061 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:07:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:27.061 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'env', 'PROCESS_TAG=haproxy-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e8014d6b-23e1-41ef-b5e2-3d770d302e72.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.064 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.208 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846847.2083485, 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.209 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] VM Started (Lifecycle Event)#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.278 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.282 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846847.2084444, 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.282 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.341 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.345 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.394 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.429 221554 DEBUG nova.compute.manager [req-805dfe98-f1ad-40c9-9f6f-659b2b827174 req-21cf0402-a0c8-4c09-b71d-3b54d7f87485 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Received event network-vif-plugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.429 221554 DEBUG oslo_concurrency.lockutils [req-805dfe98-f1ad-40c9-9f6f-659b2b827174 req-21cf0402-a0c8-4c09-b71d-3b54d7f87485 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.429 221554 DEBUG oslo_concurrency.lockutils [req-805dfe98-f1ad-40c9-9f6f-659b2b827174 req-21cf0402-a0c8-4c09-b71d-3b54d7f87485 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.429 221554 DEBUG oslo_concurrency.lockutils [req-805dfe98-f1ad-40c9-9f6f-659b2b827174 req-21cf0402-a0c8-4c09-b71d-3b54d7f87485 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.430 221554 DEBUG nova.compute.manager [req-805dfe98-f1ad-40c9-9f6f-659b2b827174 req-21cf0402-a0c8-4c09-b71d-3b54d7f87485 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Processing event network-vif-plugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.430 221554 DEBUG nova.compute.manager [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.433 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846847.4330382, 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.433 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.435 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:07:27 np0005603609 podman[263868]: 2026-01-31 08:07:27.340379061 +0000 UTC m=+0.018756333 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.437 221554 INFO nova.virt.libvirt.driver [-] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Instance spawned successfully.#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.437 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.562 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.568 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.569 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.569 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.570 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:27 np0005603609 podman[263868]: 2026-01-31 08:07:27.570686704 +0000 UTC m=+0.249063986 container create e39c16e4b9888c8bd1953feb7dc8a876448306197c27a6ca3f8bd32a636b3db3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.570 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.571 221554 DEBUG nova.virt.libvirt.driver [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.583 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:07:27 np0005603609 systemd[1]: Started libpod-conmon-e39c16e4b9888c8bd1953feb7dc8a876448306197c27a6ca3f8bd32a636b3db3.scope.
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.672 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:07:27 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:07:27 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d88ea037eb73fe9e43b4b60f3ca5f4e8c0b6936753eb22241fd71df0dac6dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.741 221554 DEBUG nova.compute.manager [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.742 221554 DEBUG oslo_concurrency.lockutils [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.742 221554 DEBUG oslo_concurrency.lockutils [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.742 221554 DEBUG oslo_concurrency.lockutils [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.742 221554 DEBUG nova.compute.manager [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] No waiting events found dispatching network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.742 221554 WARNING nova.compute.manager [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received unexpected event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:07:27 np0005603609 podman[263868]: 2026-01-31 08:07:27.743849374 +0000 UTC m=+0.422226656 container init e39c16e4b9888c8bd1953feb7dc8a876448306197c27a6ca3f8bd32a636b3db3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:07:27 np0005603609 podman[263868]: 2026-01-31 08:07:27.749603162 +0000 UTC m=+0.427980414 container start e39c16e4b9888c8bd1953feb7dc8a876448306197c27a6ca3f8bd32a636b3db3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:27 np0005603609 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[263883]: [NOTICE]   (263887) : New worker (263889) forked
Jan 31 03:07:27 np0005603609 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[263883]: [NOTICE]   (263887) : Loading success.
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.852 221554 INFO nova.compute.manager [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Took 16.77 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:07:27 np0005603609 nova_compute[221550]: 2026-01-31 08:07:27.853 221554 DEBUG nova.compute.manager [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:27.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:28 np0005603609 nova_compute[221550]: 2026-01-31 08:07:28.108 221554 INFO nova.compute.manager [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Took 18.59 seconds to build instance.#033[00m
Jan 31 03:07:28 np0005603609 nova_compute[221550]: 2026-01-31 08:07:28.212 221554 DEBUG oslo_concurrency.lockutils [None req-8030eac8-3b8f-464a-bfd4-bced7231e75a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.265s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:07:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:28.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:07:28 np0005603609 nova_compute[221550]: 2026-01-31 08:07:28.999 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:29 np0005603609 NetworkManager[49064]: <info>  [1769846849.0136] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Jan 31 03:07:29 np0005603609 NetworkManager[49064]: <info>  [1769846849.0150] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Jan 31 03:07:29 np0005603609 nova_compute[221550]: 2026-01-31 08:07:29.038 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:29 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:29Z|00426|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 03:07:29 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:29Z|00427|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 03:07:29 np0005603609 nova_compute[221550]: 2026-01-31 08:07:29.058 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:29 np0005603609 nova_compute[221550]: 2026-01-31 08:07:29.091 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:29 np0005603609 nova_compute[221550]: 2026-01-31 08:07:29.679 221554 DEBUG nova.compute.manager [req-d1b61fac-876b-4b05-9820-b8d1c179dbaa req-22f73cf5-c054-4be2-88c3-21e8b7b4b395 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received event network-changed-9256ad7f-5794-4914-b5dd-12cc282f6172 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:29 np0005603609 nova_compute[221550]: 2026-01-31 08:07:29.679 221554 DEBUG nova.compute.manager [req-d1b61fac-876b-4b05-9820-b8d1c179dbaa req-22f73cf5-c054-4be2-88c3-21e8b7b4b395 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Refreshing instance network info cache due to event network-changed-9256ad7f-5794-4914-b5dd-12cc282f6172. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:07:29 np0005603609 nova_compute[221550]: 2026-01-31 08:07:29.679 221554 DEBUG oslo_concurrency.lockutils [req-d1b61fac-876b-4b05-9820-b8d1c179dbaa req-22f73cf5-c054-4be2-88c3-21e8b7b4b395 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:07:29 np0005603609 nova_compute[221550]: 2026-01-31 08:07:29.868 221554 DEBUG nova.compute.manager [req-29a3f0d8-c01c-4c40-bc95-fd7090e90560 req-2122fffa-6bd6-4594-a3a4-dd2fe8d4b44a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Received event network-vif-plugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:29 np0005603609 nova_compute[221550]: 2026-01-31 08:07:29.869 221554 DEBUG oslo_concurrency.lockutils [req-29a3f0d8-c01c-4c40-bc95-fd7090e90560 req-2122fffa-6bd6-4594-a3a4-dd2fe8d4b44a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:29 np0005603609 nova_compute[221550]: 2026-01-31 08:07:29.869 221554 DEBUG oslo_concurrency.lockutils [req-29a3f0d8-c01c-4c40-bc95-fd7090e90560 req-2122fffa-6bd6-4594-a3a4-dd2fe8d4b44a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:29 np0005603609 nova_compute[221550]: 2026-01-31 08:07:29.869 221554 DEBUG oslo_concurrency.lockutils [req-29a3f0d8-c01c-4c40-bc95-fd7090e90560 req-2122fffa-6bd6-4594-a3a4-dd2fe8d4b44a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:29 np0005603609 nova_compute[221550]: 2026-01-31 08:07:29.869 221554 DEBUG nova.compute.manager [req-29a3f0d8-c01c-4c40-bc95-fd7090e90560 req-2122fffa-6bd6-4594-a3a4-dd2fe8d4b44a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] No waiting events found dispatching network-vif-plugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:29 np0005603609 nova_compute[221550]: 2026-01-31 08:07:29.870 221554 WARNING nova.compute.manager [req-29a3f0d8-c01c-4c40-bc95-fd7090e90560 req-2122fffa-6bd6-4594-a3a4-dd2fe8d4b44a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Received unexpected event network-vif-plugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c for instance with vm_state active and task_state None.#033[00m
Jan 31 03:07:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:29.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:30 np0005603609 nova_compute[221550]: 2026-01-31 08:07:30.106 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:30.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:30 np0005603609 nova_compute[221550]: 2026-01-31 08:07:30.437 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Updating instance_info_cache with network_info: [{"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:07:30 np0005603609 nova_compute[221550]: 2026-01-31 08:07:30.636 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:07:30 np0005603609 nova_compute[221550]: 2026-01-31 08:07:30.637 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:07:30 np0005603609 nova_compute[221550]: 2026-01-31 08:07:30.637 221554 DEBUG oslo_concurrency.lockutils [req-d1b61fac-876b-4b05-9820-b8d1c179dbaa req-22f73cf5-c054-4be2-88c3-21e8b7b4b395 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:07:30 np0005603609 nova_compute[221550]: 2026-01-31 08:07:30.637 221554 DEBUG nova.network.neutron [req-d1b61fac-876b-4b05-9820-b8d1c179dbaa req-22f73cf5-c054-4be2-88c3-21e8b7b4b395 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Refreshing network info cache for port 9256ad7f-5794-4914-b5dd-12cc282f6172 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:07:30 np0005603609 nova_compute[221550]: 2026-01-31 08:07:30.638 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:30 np0005603609 nova_compute[221550]: 2026-01-31 08:07:30.639 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:30 np0005603609 nova_compute[221550]: 2026-01-31 08:07:30.639 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:07:30 np0005603609 nova_compute[221550]: 2026-01-31 08:07:30.639 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:30 np0005603609 nova_compute[221550]: 2026-01-31 08:07:30.802 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:30 np0005603609 nova_compute[221550]: 2026-01-31 08:07:30.803 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:30 np0005603609 nova_compute[221550]: 2026-01-31 08:07:30.803 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:30 np0005603609 nova_compute[221550]: 2026-01-31 08:07:30.803 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:07:30 np0005603609 nova_compute[221550]: 2026-01-31 08:07:30.803 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:31 np0005603609 podman[263920]: 2026-01-31 08:07:31.163722323 +0000 UTC m=+0.049147084 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Jan 31 03:07:31 np0005603609 podman[263919]: 2026-01-31 08:07:31.178814717 +0000 UTC m=+0.065825756 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, config_id=ovn_controller)
Jan 31 03:07:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:07:31 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1252239058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.253 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.398 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.399 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.401 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.401 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.555 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.556 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4230MB free_disk=20.904937744140625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.557 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.557 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.776 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 2552b865-8435-46e6-823f-87ef00de854b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.777 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.777 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.777 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.809 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.868 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.868 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.911 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:07:31 np0005603609 nova_compute[221550]: 2026-01-31 08:07:31.949 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:07:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:31.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:32 np0005603609 nova_compute[221550]: 2026-01-31 08:07:32.069 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:07:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:32.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:07:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:07:32 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1848110150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:07:32 np0005603609 nova_compute[221550]: 2026-01-31 08:07:32.553 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:32 np0005603609 nova_compute[221550]: 2026-01-31 08:07:32.559 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:07:32 np0005603609 nova_compute[221550]: 2026-01-31 08:07:32.638 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:07:32 np0005603609 nova_compute[221550]: 2026-01-31 08:07:32.726 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:07:32 np0005603609 nova_compute[221550]: 2026-01-31 08:07:32.727 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:33 np0005603609 nova_compute[221550]: 2026-01-31 08:07:33.108 221554 DEBUG nova.network.neutron [req-d1b61fac-876b-4b05-9820-b8d1c179dbaa req-22f73cf5-c054-4be2-88c3-21e8b7b4b395 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Updated VIF entry in instance network info cache for port 9256ad7f-5794-4914-b5dd-12cc282f6172. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:07:33 np0005603609 nova_compute[221550]: 2026-01-31 08:07:33.109 221554 DEBUG nova.network.neutron [req-d1b61fac-876b-4b05-9820-b8d1c179dbaa req-22f73cf5-c054-4be2-88c3-21e8b7b4b395 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Updating instance_info_cache with network_info: [{"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:07:33 np0005603609 nova_compute[221550]: 2026-01-31 08:07:33.192 221554 DEBUG oslo_concurrency.lockutils [req-d1b61fac-876b-4b05-9820-b8d1c179dbaa req-22f73cf5-c054-4be2-88c3-21e8b7b4b395 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:07:33 np0005603609 nova_compute[221550]: 2026-01-31 08:07:33.722 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:33.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:34 np0005603609 nova_compute[221550]: 2026-01-31 08:07:34.093 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:34.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:34 np0005603609 nova_compute[221550]: 2026-01-31 08:07:34.361 221554 DEBUG nova.compute.manager [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Received event network-changed-f99a05c5-fb6e-4cb4-a735-e492526b8a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:34 np0005603609 nova_compute[221550]: 2026-01-31 08:07:34.362 221554 DEBUG nova.compute.manager [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Refreshing instance network info cache due to event network-changed-f99a05c5-fb6e-4cb4-a735-e492526b8a2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:07:34 np0005603609 nova_compute[221550]: 2026-01-31 08:07:34.362 221554 DEBUG oslo_concurrency.lockutils [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:07:34 np0005603609 nova_compute[221550]: 2026-01-31 08:07:34.362 221554 DEBUG oslo_concurrency.lockutils [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:07:34 np0005603609 nova_compute[221550]: 2026-01-31 08:07:34.362 221554 DEBUG nova.network.neutron [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Refreshing network info cache for port f99a05c5-fb6e-4cb4-a735-e492526b8a2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:07:35 np0005603609 nova_compute[221550]: 2026-01-31 08:07:35.107 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:35.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:36.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:36 np0005603609 ceph-mgr[82033]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 03:07:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:07:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:37.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:07:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:07:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:38.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:07:38 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:38Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:92:b0 10.100.0.9
Jan 31 03:07:38 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:38Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:92:b0 10.100.0.9
Jan 31 03:07:38 np0005603609 nova_compute[221550]: 2026-01-31 08:07:38.727 221554 DEBUG nova.network.neutron [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Updated VIF entry in instance network info cache for port f99a05c5-fb6e-4cb4-a735-e492526b8a2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:07:38 np0005603609 nova_compute[221550]: 2026-01-31 08:07:38.728 221554 DEBUG nova.network.neutron [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Updating instance_info_cache with network_info: [{"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:07:38 np0005603609 nova_compute[221550]: 2026-01-31 08:07:38.816 221554 DEBUG oslo_concurrency.lockutils [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:07:39 np0005603609 nova_compute[221550]: 2026-01-31 08:07:39.095 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:39 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:39Z|00428|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 03:07:39 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:39Z|00429|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 03:07:39 np0005603609 nova_compute[221550]: 2026-01-31 08:07:39.862 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:39.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:40 np0005603609 nova_compute[221550]: 2026-01-31 08:07:40.109 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:07:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:40.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:07:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:41 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:41Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:a2:84 10.100.0.5
Jan 31 03:07:41 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:41Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:a2:84 10.100.0.5
Jan 31 03:07:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:41.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:42.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:42 np0005603609 nova_compute[221550]: 2026-01-31 08:07:42.739 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:42.740 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:07:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:42.741 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:07:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:42.742 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:07:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:43.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:07:44 np0005603609 nova_compute[221550]: 2026-01-31 08:07:44.098 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:44.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:44 np0005603609 nova_compute[221550]: 2026-01-31 08:07:44.815 221554 DEBUG oslo_concurrency.lockutils [None req-356b63bf-680a-4f4c-85a7-b751c613b67c d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:44 np0005603609 nova_compute[221550]: 2026-01-31 08:07:44.815 221554 DEBUG oslo_concurrency.lockutils [None req-356b63bf-680a-4f4c-85a7-b751c613b67c d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:44 np0005603609 nova_compute[221550]: 2026-01-31 08:07:44.815 221554 DEBUG nova.compute.manager [None req-356b63bf-680a-4f4c-85a7-b751c613b67c d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:44 np0005603609 nova_compute[221550]: 2026-01-31 08:07:44.819 221554 DEBUG nova.compute.manager [None req-356b63bf-680a-4f4c-85a7-b751c613b67c d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 31 03:07:44 np0005603609 nova_compute[221550]: 2026-01-31 08:07:44.820 221554 DEBUG nova.objects.instance [None req-356b63bf-680a-4f4c-85a7-b751c613b67c d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'flavor' on Instance uuid 2552b865-8435-46e6-823f-87ef00de854b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:44 np0005603609 nova_compute[221550]: 2026-01-31 08:07:44.878 221554 DEBUG nova.virt.libvirt.driver [None req-356b63bf-680a-4f4c-85a7-b751c613b67c d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:07:45 np0005603609 nova_compute[221550]: 2026-01-31 08:07:45.111 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:45.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:46.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:47 np0005603609 kernel: tap9256ad7f-57 (unregistering): left promiscuous mode
Jan 31 03:07:47 np0005603609 NetworkManager[49064]: <info>  [1769846867.3823] device (tap9256ad7f-57): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:07:47 np0005603609 nova_compute[221550]: 2026-01-31 08:07:47.388 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:47 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:47Z|00430|binding|INFO|Releasing lport 9256ad7f-5794-4914-b5dd-12cc282f6172 from this chassis (sb_readonly=0)
Jan 31 03:07:47 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:47Z|00431|binding|INFO|Setting lport 9256ad7f-5794-4914-b5dd-12cc282f6172 down in Southbound
Jan 31 03:07:47 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:47Z|00432|binding|INFO|Removing iface tap9256ad7f-57 ovn-installed in OVS
Jan 31 03:07:47 np0005603609 nova_compute[221550]: 2026-01-31 08:07:47.390 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:47 np0005603609 nova_compute[221550]: 2026-01-31 08:07:47.394 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:47 np0005603609 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Jan 31 03:07:47 np0005603609 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000006b.scope: Consumed 12.939s CPU time.
Jan 31 03:07:47 np0005603609 systemd-machined[190912]: Machine qemu-52-instance-0000006b terminated.
Jan 31 03:07:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:47.536 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:92:b0 10.100.0.9'], port_security=['fa:16:3e:a7:92:b0 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2552b865-8435-46e6-823f-87ef00de854b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=9256ad7f-5794-4914-b5dd-12cc282f6172) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:07:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:47.537 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 9256ad7f-5794-4914-b5dd-12cc282f6172 in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis#033[00m
Jan 31 03:07:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:47.538 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cc2535f-0f8f-4713-a35c-9805048a29a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:07:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:47.539 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[07f64fc4-7ca5-40fe-aea2-d361aa447b5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:47.540 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace which is not needed anymore#033[00m
Jan 31 03:07:47 np0005603609 nova_compute[221550]: 2026-01-31 08:07:47.603 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:47 np0005603609 nova_compute[221550]: 2026-01-31 08:07:47.607 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:47 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[263688]: [NOTICE]   (263692) : haproxy version is 2.8.14-c23fe91
Jan 31 03:07:47 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[263688]: [NOTICE]   (263692) : path to executable is /usr/sbin/haproxy
Jan 31 03:07:47 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[263688]: [WARNING]  (263692) : Exiting Master process...
Jan 31 03:07:47 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[263688]: [ALERT]    (263692) : Current worker (263694) exited with code 143 (Terminated)
Jan 31 03:07:47 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[263688]: [WARNING]  (263692) : All workers exited. Exiting... (0)
Jan 31 03:07:47 np0005603609 systemd[1]: libpod-7c958ab883c3c7adb3bc4ce9e950ce6d6863bf7af897622eb351df48ab189bbe.scope: Deactivated successfully.
Jan 31 03:07:47 np0005603609 podman[264134]: 2026-01-31 08:07:47.659136652 +0000 UTC m=+0.040340231 container died 7c958ab883c3c7adb3bc4ce9e950ce6d6863bf7af897622eb351df48ab189bbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:07:47 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c958ab883c3c7adb3bc4ce9e950ce6d6863bf7af897622eb351df48ab189bbe-userdata-shm.mount: Deactivated successfully.
Jan 31 03:07:47 np0005603609 systemd[1]: var-lib-containers-storage-overlay-926227c7e776d303e9e57c024eb5c0225453c79d397cab32d33c65bae531a3e2-merged.mount: Deactivated successfully.
Jan 31 03:07:47 np0005603609 podman[264134]: 2026-01-31 08:07:47.693330733 +0000 UTC m=+0.074534312 container cleanup 7c958ab883c3c7adb3bc4ce9e950ce6d6863bf7af897622eb351df48ab189bbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:47 np0005603609 systemd[1]: libpod-conmon-7c958ab883c3c7adb3bc4ce9e950ce6d6863bf7af897622eb351df48ab189bbe.scope: Deactivated successfully.
Jan 31 03:07:47 np0005603609 podman[264182]: 2026-01-31 08:07:47.740854405 +0000 UTC m=+0.034071159 container remove 7c958ab883c3c7adb3bc4ce9e950ce6d6863bf7af897622eb351df48ab189bbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:07:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:47.744 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[52979eab-dc80-4471-b642-7b8e40ef3a1c]: (4, ('Sat Jan 31 08:07:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (7c958ab883c3c7adb3bc4ce9e950ce6d6863bf7af897622eb351df48ab189bbe)\n7c958ab883c3c7adb3bc4ce9e950ce6d6863bf7af897622eb351df48ab189bbe\nSat Jan 31 08:07:47 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (7c958ab883c3c7adb3bc4ce9e950ce6d6863bf7af897622eb351df48ab189bbe)\n7c958ab883c3c7adb3bc4ce9e950ce6d6863bf7af897622eb351df48ab189bbe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:47.746 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f568816e-d98b-430c-b97c-ccf85fa1f8c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:47.747 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:47 np0005603609 nova_compute[221550]: 2026-01-31 08:07:47.748 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:47 np0005603609 kernel: tap5cc2535f-00: left promiscuous mode
Jan 31 03:07:47 np0005603609 nova_compute[221550]: 2026-01-31 08:07:47.756 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:47.758 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5217a7-404a-4580-ba1b-b4f06b6de21a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:47.771 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[352c7aa5-5981-4567-9aba-5ef1e9a00f66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:47.772 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ea606e-1284-4daa-bb82-c8913f4454a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:47.782 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2592ea23-d2e5-48da-820b-a9d845c831f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708432, 'reachable_time': 42348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264201, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:47 np0005603609 systemd[1]: run-netns-ovnmeta\x2d5cc2535f\x2d0f8f\x2d4713\x2da35c\x2d9805048a29a8.mount: Deactivated successfully.
Jan 31 03:07:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:47.786 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:07:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:47.786 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[d56d88b1-7cbd-4f15-8e7e-8a81320aab51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:47 np0005603609 nova_compute[221550]: 2026-01-31 08:07:47.897 221554 INFO nova.virt.libvirt.driver [None req-356b63bf-680a-4f4c-85a7-b751c613b67c d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:07:47 np0005603609 nova_compute[221550]: 2026-01-31 08:07:47.906 221554 INFO nova.virt.libvirt.driver [-] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Instance destroyed successfully.#033[00m
Jan 31 03:07:47 np0005603609 nova_compute[221550]: 2026-01-31 08:07:47.907 221554 DEBUG nova.objects.instance [None req-356b63bf-680a-4f4c-85a7-b751c613b67c d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2552b865-8435-46e6-823f-87ef00de854b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:47.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:47 np0005603609 nova_compute[221550]: 2026-01-31 08:07:47.977 221554 DEBUG nova.compute.manager [None req-356b63bf-680a-4f4c-85a7-b751c613b67c d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:07:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:48.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:07:48 np0005603609 nova_compute[221550]: 2026-01-31 08:07:48.574 221554 DEBUG oslo_concurrency.lockutils [None req-356b63bf-680a-4f4c-85a7-b751c613b67c d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:07:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:07:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:07:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:07:48 np0005603609 podman[264461]: 2026-01-31 08:07:48.738096511 +0000 UTC m=+0.028562227 container create 19621941b3bc428d59ac82b9afcace56fd082be753c2a380566a754ccdcbd703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:48 np0005603609 systemd[1]: Started libpod-conmon-19621941b3bc428d59ac82b9afcace56fd082be753c2a380566a754ccdcbd703.scope.
Jan 31 03:07:48 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:07:48 np0005603609 podman[264461]: 2026-01-31 08:07:48.790577133 +0000 UTC m=+0.081042879 container init 19621941b3bc428d59ac82b9afcace56fd082be753c2a380566a754ccdcbd703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 03:07:48 np0005603609 podman[264461]: 2026-01-31 08:07:48.797044328 +0000 UTC m=+0.087510044 container start 19621941b3bc428d59ac82b9afcace56fd082be753c2a380566a754ccdcbd703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 03:07:48 np0005603609 podman[264461]: 2026-01-31 08:07:48.801135936 +0000 UTC m=+0.091601672 container attach 19621941b3bc428d59ac82b9afcace56fd082be753c2a380566a754ccdcbd703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 03:07:48 np0005603609 gallant_boyd[264477]: 167 167
Jan 31 03:07:48 np0005603609 systemd[1]: libpod-19621941b3bc428d59ac82b9afcace56fd082be753c2a380566a754ccdcbd703.scope: Deactivated successfully.
Jan 31 03:07:48 np0005603609 podman[264461]: 2026-01-31 08:07:48.801875874 +0000 UTC m=+0.092341620 container died 19621941b3bc428d59ac82b9afcace56fd082be753c2a380566a754ccdcbd703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:48 np0005603609 systemd[1]: var-lib-containers-storage-overlay-6b5c16a230ccada2a8c10ceeda84e82c1735cc1f5ef95cf5a6468750a45b45b8-merged.mount: Deactivated successfully.
Jan 31 03:07:48 np0005603609 podman[264461]: 2026-01-31 08:07:48.725492538 +0000 UTC m=+0.015958274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:07:48 np0005603609 podman[264461]: 2026-01-31 08:07:48.833740059 +0000 UTC m=+0.124205785 container remove 19621941b3bc428d59ac82b9afcace56fd082be753c2a380566a754ccdcbd703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 03:07:48 np0005603609 systemd[1]: libpod-conmon-19621941b3bc428d59ac82b9afcace56fd082be753c2a380566a754ccdcbd703.scope: Deactivated successfully.
Jan 31 03:07:48 np0005603609 podman[264501]: 2026-01-31 08:07:48.970686131 +0000 UTC m=+0.045234148 container create b28d20d6d094f25bad61c3d838ca6cc7a756b662bbcffb85b6ad83b49f5b43b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shannon, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 03:07:49 np0005603609 systemd[1]: Started libpod-conmon-b28d20d6d094f25bad61c3d838ca6cc7a756b662bbcffb85b6ad83b49f5b43b4.scope.
Jan 31 03:07:49 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:07:49 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa54bd4225c20554e72193048b02340e7269cee10f7ac7cea252886827d68557/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:49 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa54bd4225c20554e72193048b02340e7269cee10f7ac7cea252886827d68557/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:49 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa54bd4225c20554e72193048b02340e7269cee10f7ac7cea252886827d68557/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:49 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa54bd4225c20554e72193048b02340e7269cee10f7ac7cea252886827d68557/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:49 np0005603609 podman[264501]: 2026-01-31 08:07:49.03179009 +0000 UTC m=+0.106338127 container init b28d20d6d094f25bad61c3d838ca6cc7a756b662bbcffb85b6ad83b49f5b43b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shannon, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:07:49 np0005603609 podman[264501]: 2026-01-31 08:07:49.037029876 +0000 UTC m=+0.111577893 container start b28d20d6d094f25bad61c3d838ca6cc7a756b662bbcffb85b6ad83b49f5b43b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 31 03:07:49 np0005603609 podman[264501]: 2026-01-31 08:07:49.044861123 +0000 UTC m=+0.119409150 container attach b28d20d6d094f25bad61c3d838ca6cc7a756b662bbcffb85b6ad83b49f5b43b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shannon, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:07:49 np0005603609 podman[264501]: 2026-01-31 08:07:48.953335484 +0000 UTC m=+0.027883541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:07:49 np0005603609 nova_compute[221550]: 2026-01-31 08:07:49.100 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:49 np0005603609 nova_compute[221550]: 2026-01-31 08:07:49.164 221554 DEBUG nova.compute.manager [req-3d99b577-3d32-45a8-a19b-aa30e6540527 req-445e4215-ac1c-4a1a-90e8-834b405bd48f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received event network-vif-unplugged-9256ad7f-5794-4914-b5dd-12cc282f6172 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:49 np0005603609 nova_compute[221550]: 2026-01-31 08:07:49.165 221554 DEBUG oslo_concurrency.lockutils [req-3d99b577-3d32-45a8-a19b-aa30e6540527 req-445e4215-ac1c-4a1a-90e8-834b405bd48f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:49 np0005603609 nova_compute[221550]: 2026-01-31 08:07:49.165 221554 DEBUG oslo_concurrency.lockutils [req-3d99b577-3d32-45a8-a19b-aa30e6540527 req-445e4215-ac1c-4a1a-90e8-834b405bd48f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:49 np0005603609 nova_compute[221550]: 2026-01-31 08:07:49.166 221554 DEBUG oslo_concurrency.lockutils [req-3d99b577-3d32-45a8-a19b-aa30e6540527 req-445e4215-ac1c-4a1a-90e8-834b405bd48f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:49 np0005603609 nova_compute[221550]: 2026-01-31 08:07:49.166 221554 DEBUG nova.compute.manager [req-3d99b577-3d32-45a8-a19b-aa30e6540527 req-445e4215-ac1c-4a1a-90e8-834b405bd48f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] No waiting events found dispatching network-vif-unplugged-9256ad7f-5794-4914-b5dd-12cc282f6172 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:49 np0005603609 nova_compute[221550]: 2026-01-31 08:07:49.166 221554 WARNING nova.compute.manager [req-3d99b577-3d32-45a8-a19b-aa30e6540527 req-445e4215-ac1c-4a1a-90e8-834b405bd48f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received unexpected event network-vif-unplugged-9256ad7f-5794-4914-b5dd-12cc282f6172 for instance with vm_state stopped and task_state None.#033[00m
Jan 31 03:07:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:49.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]: [
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:    {
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:        "available": false,
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:        "ceph_device": false,
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:        "lsm_data": {},
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:        "lvs": [],
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:        "path": "/dev/sr0",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:        "rejected_reasons": [
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "Has a FileSystem",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "Insufficient space (<5GB)"
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:        ],
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:        "sys_api": {
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "actuators": null,
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "device_nodes": "sr0",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "devname": "sr0",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "human_readable_size": "482.00 KB",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "id_bus": "ata",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "model": "QEMU DVD-ROM",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "nr_requests": "2",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "parent": "/dev/sr0",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "partitions": {},
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "path": "/dev/sr0",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "removable": "1",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "rev": "2.5+",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "ro": "0",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "rotational": "1",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "sas_address": "",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "sas_device_handle": "",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "scheduler_mode": "mq-deadline",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "sectors": 0,
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "sectorsize": "2048",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "size": 493568.0,
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "support_discard": "2048",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "type": "disk",
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:            "vendor": "QEMU"
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:        }
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]:    }
Jan 31 03:07:50 np0005603609 goofy_shannon[264517]: ]
Jan 31 03:07:50 np0005603609 systemd[1]: libpod-b28d20d6d094f25bad61c3d838ca6cc7a756b662bbcffb85b6ad83b49f5b43b4.scope: Deactivated successfully.
Jan 31 03:07:50 np0005603609 systemd[1]: libpod-b28d20d6d094f25bad61c3d838ca6cc7a756b662bbcffb85b6ad83b49f5b43b4.scope: Consumed 1.040s CPU time.
Jan 31 03:07:50 np0005603609 nova_compute[221550]: 2026-01-31 08:07:50.113 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:50 np0005603609 podman[265739]: 2026-01-31 08:07:50.126276543 +0000 UTC m=+0.026158499 container died b28d20d6d094f25bad61c3d838ca6cc7a756b662bbcffb85b6ad83b49f5b43b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 03:07:50 np0005603609 systemd[1]: var-lib-containers-storage-overlay-fa54bd4225c20554e72193048b02340e7269cee10f7ac7cea252886827d68557-merged.mount: Deactivated successfully.
Jan 31 03:07:50 np0005603609 podman[265739]: 2026-01-31 08:07:50.18399766 +0000 UTC m=+0.083879596 container remove b28d20d6d094f25bad61c3d838ca6cc7a756b662bbcffb85b6ad83b49f5b43b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:07:50 np0005603609 systemd[1]: libpod-conmon-b28d20d6d094f25bad61c3d838ca6cc7a756b662bbcffb85b6ad83b49f5b43b4.scope: Deactivated successfully.
Jan 31 03:07:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:07:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:50.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:07:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:07:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:07:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:07:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:07:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:07:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:07:50 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:07:51 np0005603609 nova_compute[221550]: 2026-01-31 08:07:51.299 221554 DEBUG nova.objects.instance [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'flavor' on Instance uuid 2552b865-8435-46e6-823f-87ef00de854b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:51 np0005603609 nova_compute[221550]: 2026-01-31 08:07:51.346 221554 DEBUG oslo_concurrency.lockutils [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:07:51 np0005603609 nova_compute[221550]: 2026-01-31 08:07:51.347 221554 DEBUG oslo_concurrency.lockutils [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquired lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:07:51 np0005603609 nova_compute[221550]: 2026-01-31 08:07:51.347 221554 DEBUG nova.network.neutron [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:07:51 np0005603609 nova_compute[221550]: 2026-01-31 08:07:51.348 221554 DEBUG nova.objects.instance [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'info_cache' on Instance uuid 2552b865-8435-46e6-823f-87ef00de854b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:51 np0005603609 nova_compute[221550]: 2026-01-31 08:07:51.370 221554 DEBUG nova.compute.manager [req-72549737-d687-4e3b-a2d7-56ee5ed62f25 req-f813b6e2-04ca-4f1e-951c-c98f26fbba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:51 np0005603609 nova_compute[221550]: 2026-01-31 08:07:51.371 221554 DEBUG oslo_concurrency.lockutils [req-72549737-d687-4e3b-a2d7-56ee5ed62f25 req-f813b6e2-04ca-4f1e-951c-c98f26fbba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:51 np0005603609 nova_compute[221550]: 2026-01-31 08:07:51.371 221554 DEBUG oslo_concurrency.lockutils [req-72549737-d687-4e3b-a2d7-56ee5ed62f25 req-f813b6e2-04ca-4f1e-951c-c98f26fbba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:51 np0005603609 nova_compute[221550]: 2026-01-31 08:07:51.372 221554 DEBUG oslo_concurrency.lockutils [req-72549737-d687-4e3b-a2d7-56ee5ed62f25 req-f813b6e2-04ca-4f1e-951c-c98f26fbba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:51 np0005603609 nova_compute[221550]: 2026-01-31 08:07:51.372 221554 DEBUG nova.compute.manager [req-72549737-d687-4e3b-a2d7-56ee5ed62f25 req-f813b6e2-04ca-4f1e-951c-c98f26fbba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] No waiting events found dispatching network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:51 np0005603609 nova_compute[221550]: 2026-01-31 08:07:51.373 221554 WARNING nova.compute.manager [req-72549737-d687-4e3b-a2d7-56ee5ed62f25 req-f813b6e2-04ca-4f1e-951c-c98f26fbba72 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received unexpected event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 for instance with vm_state stopped and task_state powering-on.#033[00m
Jan 31 03:07:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:51.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:07:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:52.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.366 221554 DEBUG nova.network.neutron [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Updating instance_info_cache with network_info: [{"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.436 221554 DEBUG oslo_concurrency.lockutils [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Releasing lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.445 221554 DEBUG nova.compute.manager [None req-002ec949-5f2d-4d0a-be9e-7cf24afa7ee8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.494 221554 INFO nova.virt.libvirt.driver [-] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Instance destroyed successfully.#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.494 221554 DEBUG nova.objects.instance [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2552b865-8435-46e6-823f-87ef00de854b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.506 221554 INFO nova.compute.manager [None req-002ec949-5f2d-4d0a-be9e-7cf24afa7ee8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] instance snapshotting#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.506 221554 DEBUG nova.objects.instance [None req-002ec949-5f2d-4d0a-be9e-7cf24afa7ee8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'flavor' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.530 221554 DEBUG nova.objects.instance [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'resources' on Instance uuid 2552b865-8435-46e6-823f-87ef00de854b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.607 221554 DEBUG nova.virt.libvirt.vif [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1274879352',display_name='tempest-ServerActionsTestJSON-server-1274879352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1274879352',id=107,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:07:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-7zccd5vw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:07:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=2552b865-8435-46e6-823f-87ef00de854b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.608 221554 DEBUG nova.network.os_vif_util [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.609 221554 DEBUG nova.network.os_vif_util [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.609 221554 DEBUG os_vif [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.612 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.613 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9256ad7f-57, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.615 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.621 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.623 221554 INFO os_vif [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57')#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.630 221554 DEBUG nova.virt.libvirt.driver [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Start _get_guest_xml network_info=[{"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.634 221554 WARNING nova.virt.libvirt.driver [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.639 221554 DEBUG nova.virt.libvirt.host [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.641 221554 DEBUG nova.virt.libvirt.host [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.644 221554 DEBUG nova.virt.libvirt.host [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.644 221554 DEBUG nova.virt.libvirt.host [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.645 221554 DEBUG nova.virt.libvirt.driver [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.645 221554 DEBUG nova.virt.hardware [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.646 221554 DEBUG nova.virt.hardware [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.646 221554 DEBUG nova.virt.hardware [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.646 221554 DEBUG nova.virt.hardware [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.646 221554 DEBUG nova.virt.hardware [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.646 221554 DEBUG nova.virt.hardware [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.647 221554 DEBUG nova.virt.hardware [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.647 221554 DEBUG nova.virt.hardware [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.647 221554 DEBUG nova.virt.hardware [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.647 221554 DEBUG nova.virt.hardware [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.648 221554 DEBUG nova.virt.hardware [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.648 221554 DEBUG nova.objects.instance [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2552b865-8435-46e6-823f-87ef00de854b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:53 np0005603609 nova_compute[221550]: 2026-01-31 08:07:53.696 221554 DEBUG oslo_concurrency.processutils [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:53.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.096 221554 INFO nova.virt.libvirt.driver [None req-002ec949-5f2d-4d0a-be9e-7cf24afa7ee8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Beginning live snapshot process#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.103 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:07:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/755635357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.146 221554 DEBUG oslo_concurrency.processutils [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.187 221554 DEBUG oslo_concurrency.processutils [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:07:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:54.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:07:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:07:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3162634110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.670 221554 DEBUG oslo_concurrency.processutils [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.671 221554 DEBUG nova.virt.libvirt.vif [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1274879352',display_name='tempest-ServerActionsTestJSON-server-1274879352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1274879352',id=107,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:07:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-7zccd5vw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:07:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=2552b865-8435-46e6-823f-87ef00de854b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.671 221554 DEBUG nova.network.os_vif_util [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.672 221554 DEBUG nova.network.os_vif_util [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.673 221554 DEBUG nova.objects.instance [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2552b865-8435-46e6-823f-87ef00de854b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.966 221554 DEBUG nova.virt.libvirt.driver [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  <uuid>2552b865-8435-46e6-823f-87ef00de854b</uuid>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  <name>instance-0000006b</name>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerActionsTestJSON-server-1274879352</nova:name>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:07:53</nova:creationTime>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        <nova:user uuid="d9ed446fb2cf4fc0a4e619c6c766fddc">tempest-ServerActionsTestJSON-1391450973-project-member</nova:user>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        <nova:project uuid="1fcec9ca13964c7191134db4420ab049">tempest-ServerActionsTestJSON-1391450973</nova:project>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        <nova:port uuid="9256ad7f-5794-4914-b5dd-12cc282f6172">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <entry name="serial">2552b865-8435-46e6-823f-87ef00de854b</entry>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <entry name="uuid">2552b865-8435-46e6-823f-87ef00de854b</entry>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/2552b865-8435-46e6-823f-87ef00de854b_disk">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/2552b865-8435-46e6-823f-87ef00de854b_disk.config">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:a7:92:b0"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <target dev="tap9256ad7f-57"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/2552b865-8435-46e6-823f-87ef00de854b/console.log" append="off"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <input type="keyboard" bus="usb"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:07:54 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:07:54 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:07:54 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:07:54 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.967 221554 DEBUG nova.virt.libvirt.driver [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.967 221554 DEBUG nova.virt.libvirt.driver [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.968 221554 DEBUG nova.virt.libvirt.vif [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1274879352',display_name='tempest-ServerActionsTestJSON-server-1274879352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1274879352',id=107,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:07:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-7zccd5vw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:07:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=2552b865-8435-46e6-823f-87ef00de854b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.968 221554 DEBUG nova.network.os_vif_util [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.968 221554 DEBUG nova.network.os_vif_util [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.969 221554 DEBUG os_vif [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.969 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.970 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.970 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.972 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.972 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9256ad7f-57, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.973 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9256ad7f-57, col_values=(('external_ids', {'iface-id': '9256ad7f-5794-4914-b5dd-12cc282f6172', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:92:b0', 'vm-uuid': '2552b865-8435-46e6-823f-87ef00de854b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:54 np0005603609 NetworkManager[49064]: <info>  [1769846874.9753] manager: (tap9256ad7f-57): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.981 221554 DEBUG nova.virt.libvirt.imagebackend [None req-002ec949-5f2d-4d0a-be9e-7cf24afa7ee8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.984 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.986 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:07:54 np0005603609 nova_compute[221550]: 2026-01-31 08:07:54.987 221554 INFO os_vif [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57')#033[00m
Jan 31 03:07:55 np0005603609 kernel: tap9256ad7f-57: entered promiscuous mode
Jan 31 03:07:55 np0005603609 NetworkManager[49064]: <info>  [1769846875.0536] manager: (tap9256ad7f-57): new Tun device (/org/freedesktop/NetworkManager/Devices/216)
Jan 31 03:07:55 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:55Z|00433|binding|INFO|Claiming lport 9256ad7f-5794-4914-b5dd-12cc282f6172 for this chassis.
Jan 31 03:07:55 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:55Z|00434|binding|INFO|9256ad7f-5794-4914-b5dd-12cc282f6172: Claiming fa:16:3e:a7:92:b0 10.100.0.9
Jan 31 03:07:55 np0005603609 nova_compute[221550]: 2026-01-31 08:07:55.055 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:55 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:55Z|00435|binding|INFO|Setting lport 9256ad7f-5794-4914-b5dd-12cc282f6172 ovn-installed in OVS
Jan 31 03:07:55 np0005603609 nova_compute[221550]: 2026-01-31 08:07:55.060 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:55 np0005603609 systemd-udevd[265863]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:07:55 np0005603609 systemd-machined[190912]: New machine qemu-54-instance-0000006b.
Jan 31 03:07:55 np0005603609 NetworkManager[49064]: <info>  [1769846875.0827] device (tap9256ad7f-57): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:07:55 np0005603609 NetworkManager[49064]: <info>  [1769846875.0833] device (tap9256ad7f-57): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:07:55 np0005603609 systemd[1]: Started Virtual Machine qemu-54-instance-0000006b.
Jan 31 03:07:55 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:55Z|00436|binding|INFO|Setting lport 9256ad7f-5794-4914-b5dd-12cc282f6172 up in Southbound
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.100 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:92:b0 10.100.0.9'], port_security=['fa:16:3e:a7:92:b0 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2552b865-8435-46e6-823f-87ef00de854b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=9256ad7f-5794-4914-b5dd-12cc282f6172) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.101 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 9256ad7f-5794-4914-b5dd-12cc282f6172 in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 bound to our chassis#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.102 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc2535f-0f8f-4713-a35c-9805048a29a8#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.109 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f945313c-ca13-406f-bd7c-fb3dcdcb5444]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.110 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5cc2535f-01 in ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.111 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5cc2535f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.112 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d49d2651-f67e-4599-972c-20de62b2da0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.113 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[dfe6b7f4-96ac-429a-bdee-76ad9a7d9040]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.122 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcad55a-0fb6-4a46-b051-dfcb5a904020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.129 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[324f8b59-1e1a-40b9-8676-863c8555ea13]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.146 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b05ce3-dc53-4bf8-b3a0-37a598bcb950]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603609 systemd-udevd[265865]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:07:55 np0005603609 NetworkManager[49064]: <info>  [1769846875.1528] manager: (tap5cc2535f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/217)
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.151 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3eaaae4c-5bdb-4e5c-8332-8b17e0104f58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.178 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f54b69f4-f898-4803-b301-5b254441dec0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.180 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[048ca903-0d6d-403a-9ec8-cbc984b9febd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603609 NetworkManager[49064]: <info>  [1769846875.1946] device (tap5cc2535f-00): carrier: link connected
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.199 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2f9c2c-ab11-44a8-b88d-e1303773d5ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.216 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[419cd768-05b4-469a-80c8-da8e5607ec89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 135], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711489, 'reachable_time': 38989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265896, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.230 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a3164970-3469-448c-9092-86902f254249]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:76f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711489, 'tstamp': 711489}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265897, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.241 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a37120d1-eb7e-4a09-bc92-b12d543a6aa3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 135], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711489, 'reachable_time': 38989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265898, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.264 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[63dcd6b5-3247-4a43-8161-18fa0d7ce6cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.299 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3e583494-05c7-4d58-bde9-319ede8e18f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.300 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.301 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.302 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc2535f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:55 np0005603609 nova_compute[221550]: 2026-01-31 08:07:55.304 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:55 np0005603609 kernel: tap5cc2535f-00: entered promiscuous mode
Jan 31 03:07:55 np0005603609 NetworkManager[49064]: <info>  [1769846875.3049] manager: (tap5cc2535f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Jan 31 03:07:55 np0005603609 nova_compute[221550]: 2026-01-31 08:07:55.306 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.311 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc2535f-00, col_values=(('external_ids', {'iface-id': 'ab077a7e-cc79-4948-8987-2cd87d88deff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:55 np0005603609 nova_compute[221550]: 2026-01-31 08:07:55.312 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:55 np0005603609 ovn_controller[130359]: 2026-01-31T08:07:55Z|00437|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 03:07:55 np0005603609 nova_compute[221550]: 2026-01-31 08:07:55.313 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.315 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:07:55 np0005603609 nova_compute[221550]: 2026-01-31 08:07:55.316 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.316 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[94fd6530-724a-4c45-9b22-217bbe8c8784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.317 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:07:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:07:55.318 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'env', 'PROCESS_TAG=haproxy-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5cc2535f-0f8f-4713-a35c-9805048a29a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:07:55 np0005603609 nova_compute[221550]: 2026-01-31 08:07:55.528 221554 DEBUG nova.storage.rbd_utils [None req-002ec949-5f2d-4d0a-be9e-7cf24afa7ee8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] creating snapshot(972a318a206c47e8baa6ffe1ff81cfa4) on rbd image(1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:07:55 np0005603609 podman[265988]: 2026-01-31 08:07:55.663705311 +0000 UTC m=+0.056309664 container create 9277b38ffddb160b29093113fb801164d44e94ce640b93b5a696d6d6aaa79091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:07:55 np0005603609 systemd[1]: Started libpod-conmon-9277b38ffddb160b29093113fb801164d44e94ce640b93b5a696d6d6aaa79091.scope.
Jan 31 03:07:55 np0005603609 podman[265988]: 2026-01-31 08:07:55.626369044 +0000 UTC m=+0.018973457 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:07:55 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:07:55 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c368c8abfe874519ce860df95f906185b40c89067026d3879229f972d1b8cd31/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:55 np0005603609 podman[265988]: 2026-01-31 08:07:55.752178688 +0000 UTC m=+0.144783091 container init 9277b38ffddb160b29093113fb801164d44e94ce640b93b5a696d6d6aaa79091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:07:55 np0005603609 podman[265988]: 2026-01-31 08:07:55.757040124 +0000 UTC m=+0.149644477 container start 9277b38ffddb160b29093113fb801164d44e94ce640b93b5a696d6d6aaa79091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:55 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266003]: [NOTICE]   (266007) : New worker (266009) forked
Jan 31 03:07:55 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266003]: [NOTICE]   (266007) : Loading success.
Jan 31 03:07:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:07:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:55.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:07:56 np0005603609 nova_compute[221550]: 2026-01-31 08:07:56.152 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 2552b865-8435-46e6-823f-87ef00de854b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:07:56 np0005603609 nova_compute[221550]: 2026-01-31 08:07:56.152 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846875.6323795, 2552b865-8435-46e6-823f-87ef00de854b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:07:56 np0005603609 nova_compute[221550]: 2026-01-31 08:07:56.153 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:07:56 np0005603609 nova_compute[221550]: 2026-01-31 08:07:56.154 221554 DEBUG nova.compute.manager [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:07:56 np0005603609 nova_compute[221550]: 2026-01-31 08:07:56.158 221554 INFO nova.virt.libvirt.driver [-] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Instance rebooted successfully.#033[00m
Jan 31 03:07:56 np0005603609 nova_compute[221550]: 2026-01-31 08:07:56.158 221554 DEBUG nova.compute.manager [None req-7f497bd6-f0d3-4347-8a05-998c0a3d2223 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:56 np0005603609 nova_compute[221550]: 2026-01-31 08:07:56.223 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:56 np0005603609 nova_compute[221550]: 2026-01-31 08:07:56.228 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:07:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:56.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:56 np0005603609 nova_compute[221550]: 2026-01-31 08:07:56.432 221554 DEBUG nova.compute.manager [req-a0f272d9-ad1b-4656-ae2f-03e4fa46cd1b req-052c08d4-73cf-49eb-9ba1-fc4b0265de2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:56 np0005603609 nova_compute[221550]: 2026-01-31 08:07:56.434 221554 DEBUG oslo_concurrency.lockutils [req-a0f272d9-ad1b-4656-ae2f-03e4fa46cd1b req-052c08d4-73cf-49eb-9ba1-fc4b0265de2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:56 np0005603609 nova_compute[221550]: 2026-01-31 08:07:56.434 221554 DEBUG oslo_concurrency.lockutils [req-a0f272d9-ad1b-4656-ae2f-03e4fa46cd1b req-052c08d4-73cf-49eb-9ba1-fc4b0265de2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:56 np0005603609 nova_compute[221550]: 2026-01-31 08:07:56.435 221554 DEBUG oslo_concurrency.lockutils [req-a0f272d9-ad1b-4656-ae2f-03e4fa46cd1b req-052c08d4-73cf-49eb-9ba1-fc4b0265de2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:56 np0005603609 nova_compute[221550]: 2026-01-31 08:07:56.435 221554 DEBUG nova.compute.manager [req-a0f272d9-ad1b-4656-ae2f-03e4fa46cd1b req-052c08d4-73cf-49eb-9ba1-fc4b0265de2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] No waiting events found dispatching network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:56 np0005603609 nova_compute[221550]: 2026-01-31 08:07:56.436 221554 WARNING nova.compute.manager [req-a0f272d9-ad1b-4656-ae2f-03e4fa46cd1b req-052c08d4-73cf-49eb-9ba1-fc4b0265de2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received unexpected event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 for instance with vm_state stopped and task_state powering-on.#033[00m
Jan 31 03:07:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e263 e263: 3 total, 3 up, 3 in
Jan 31 03:07:57 np0005603609 nova_compute[221550]: 2026-01-31 08:07:57.070 221554 DEBUG nova.storage.rbd_utils [None req-002ec949-5f2d-4d0a-be9e-7cf24afa7ee8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] cloning vms/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk@972a318a206c47e8baa6ffe1ff81cfa4 to images/94599989-66dd-45c0-95bf-4799dd46478c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:07:57 np0005603609 nova_compute[221550]: 2026-01-31 08:07:57.197 221554 DEBUG nova.storage.rbd_utils [None req-002ec949-5f2d-4d0a-be9e-7cf24afa7ee8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] flattening images/94599989-66dd-45c0-95bf-4799dd46478c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:07:57 np0005603609 nova_compute[221550]: 2026-01-31 08:07:57.260 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846875.6325946, 2552b865-8435-46e6-823f-87ef00de854b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:07:57 np0005603609 nova_compute[221550]: 2026-01-31 08:07:57.261 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] VM Started (Lifecycle Event)#033[00m
Jan 31 03:07:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:07:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:07:57 np0005603609 nova_compute[221550]: 2026-01-31 08:07:57.340 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:57 np0005603609 nova_compute[221550]: 2026-01-31 08:07:57.353 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:07:57 np0005603609 nova_compute[221550]: 2026-01-31 08:07:57.676 221554 DEBUG nova.storage.rbd_utils [None req-002ec949-5f2d-4d0a-be9e-7cf24afa7ee8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] removing snapshot(972a318a206c47e8baa6ffe1ff81cfa4) on rbd image(1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:07:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:57.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:58.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e264 e264: 3 total, 3 up, 3 in
Jan 31 03:07:58 np0005603609 nova_compute[221550]: 2026-01-31 08:07:58.317 221554 DEBUG nova.storage.rbd_utils [None req-002ec949-5f2d-4d0a-be9e-7cf24afa7ee8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] creating snapshot(snap) on rbd image(94599989-66dd-45c0-95bf-4799dd46478c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:07:58 np0005603609 nova_compute[221550]: 2026-01-31 08:07:58.657 221554 DEBUG nova.compute.manager [req-69dee0d8-9eed-4321-9658-853dbf317524 req-2f8bb3f3-fb7e-434b-b662-3238d58bc701 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:58 np0005603609 nova_compute[221550]: 2026-01-31 08:07:58.657 221554 DEBUG oslo_concurrency.lockutils [req-69dee0d8-9eed-4321-9658-853dbf317524 req-2f8bb3f3-fb7e-434b-b662-3238d58bc701 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:58 np0005603609 nova_compute[221550]: 2026-01-31 08:07:58.657 221554 DEBUG oslo_concurrency.lockutils [req-69dee0d8-9eed-4321-9658-853dbf317524 req-2f8bb3f3-fb7e-434b-b662-3238d58bc701 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:58 np0005603609 nova_compute[221550]: 2026-01-31 08:07:58.658 221554 DEBUG oslo_concurrency.lockutils [req-69dee0d8-9eed-4321-9658-853dbf317524 req-2f8bb3f3-fb7e-434b-b662-3238d58bc701 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:58 np0005603609 nova_compute[221550]: 2026-01-31 08:07:58.658 221554 DEBUG nova.compute.manager [req-69dee0d8-9eed-4321-9658-853dbf317524 req-2f8bb3f3-fb7e-434b-b662-3238d58bc701 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] No waiting events found dispatching network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:58 np0005603609 nova_compute[221550]: 2026-01-31 08:07:58.658 221554 WARNING nova.compute.manager [req-69dee0d8-9eed-4321-9658-853dbf317524 req-2f8bb3f3-fb7e-434b-b662-3238d58bc701 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received unexpected event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:07:59 np0005603609 nova_compute[221550]: 2026-01-31 08:07:59.104 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e265 e265: 3 total, 3 up, 3 in
Jan 31 03:07:59 np0005603609 nova_compute[221550]: 2026-01-31 08:07:59.974 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:07:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:59.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:08:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:00.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:08:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:00 np0005603609 nova_compute[221550]: 2026-01-31 08:08:00.959 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:01 np0005603609 nova_compute[221550]: 2026-01-31 08:08:01.252 221554 DEBUG nova.objects.instance [None req-1e06e64e-e334-4071-871a-17cf798c38f7 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2552b865-8435-46e6-823f-87ef00de854b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:08:01 np0005603609 nova_compute[221550]: 2026-01-31 08:08:01.285 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846881.2852395, 2552b865-8435-46e6-823f-87ef00de854b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:08:01 np0005603609 nova_compute[221550]: 2026-01-31 08:08:01.286 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:08:01 np0005603609 nova_compute[221550]: 2026-01-31 08:08:01.322 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:08:01 np0005603609 nova_compute[221550]: 2026-01-31 08:08:01.326 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:08:01 np0005603609 nova_compute[221550]: 2026-01-31 08:08:01.425 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 31 03:08:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:01.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:02 np0005603609 kernel: tap9256ad7f-57 (unregistering): left promiscuous mode
Jan 31 03:08:02 np0005603609 NetworkManager[49064]: <info>  [1769846882.0586] device (tap9256ad7f-57): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:08:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:08:02Z|00438|binding|INFO|Releasing lport 9256ad7f-5794-4914-b5dd-12cc282f6172 from this chassis (sb_readonly=0)
Jan 31 03:08:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:08:02Z|00439|binding|INFO|Setting lport 9256ad7f-5794-4914-b5dd-12cc282f6172 down in Southbound
Jan 31 03:08:02 np0005603609 nova_compute[221550]: 2026-01-31 08:08:02.060 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:08:02Z|00440|binding|INFO|Removing iface tap9256ad7f-57 ovn-installed in OVS
Jan 31 03:08:02 np0005603609 nova_compute[221550]: 2026-01-31 08:08:02.065 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:02 np0005603609 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Jan 31 03:08:02 np0005603609 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000006b.scope: Consumed 6.558s CPU time.
Jan 31 03:08:02 np0005603609 systemd-machined[190912]: Machine qemu-54-instance-0000006b terminated.
Jan 31 03:08:02 np0005603609 podman[266167]: 2026-01-31 08:08:02.138704933 +0000 UTC m=+0.057226256 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:08:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:02.147 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:92:b0 10.100.0.9'], port_security=['fa:16:3e:a7:92:b0 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2552b865-8435-46e6-823f-87ef00de854b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=9256ad7f-5794-4914-b5dd-12cc282f6172) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:08:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:02.148 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 9256ad7f-5794-4914-b5dd-12cc282f6172 in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis#033[00m
Jan 31 03:08:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:02.149 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cc2535f-0f8f-4713-a35c-9805048a29a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:08:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:02.150 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0505ef-dd8e-427b-a15a-bc51a4da5f34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:02.150 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace which is not needed anymore#033[00m
Jan 31 03:08:02 np0005603609 nova_compute[221550]: 2026-01-31 08:08:02.181 221554 INFO nova.virt.libvirt.driver [None req-002ec949-5f2d-4d0a-be9e-7cf24afa7ee8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Snapshot image upload complete#033[00m
Jan 31 03:08:02 np0005603609 nova_compute[221550]: 2026-01-31 08:08:02.181 221554 INFO nova.compute.manager [None req-002ec949-5f2d-4d0a-be9e-7cf24afa7ee8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Took 8.58 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:08:02 np0005603609 podman[266166]: 2026-01-31 08:08:02.184077983 +0000 UTC m=+0.104610354 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:02 np0005603609 nova_compute[221550]: 2026-01-31 08:08:02.252 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:02 np0005603609 nova_compute[221550]: 2026-01-31 08:08:02.255 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:02 np0005603609 nova_compute[221550]: 2026-01-31 08:08:02.260 221554 DEBUG nova.compute.manager [None req-1e06e64e-e334-4071-871a-17cf798c38f7 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:08:02 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266003]: [NOTICE]   (266007) : haproxy version is 2.8.14-c23fe91
Jan 31 03:08:02 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266003]: [NOTICE]   (266007) : path to executable is /usr/sbin/haproxy
Jan 31 03:08:02 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266003]: [WARNING]  (266007) : Exiting Master process...
Jan 31 03:08:02 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266003]: [ALERT]    (266007) : Current worker (266009) exited with code 143 (Terminated)
Jan 31 03:08:02 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266003]: [WARNING]  (266007) : All workers exited. Exiting... (0)
Jan 31 03:08:02 np0005603609 systemd[1]: libpod-9277b38ffddb160b29093113fb801164d44e94ce640b93b5a696d6d6aaa79091.scope: Deactivated successfully.
Jan 31 03:08:02 np0005603609 podman[266236]: 2026-01-31 08:08:02.281749251 +0000 UTC m=+0.057855281 container died 9277b38ffddb160b29093113fb801164d44e94ce640b93b5a696d6d6aaa79091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:08:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:08:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:02.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:08:02 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9277b38ffddb160b29093113fb801164d44e94ce640b93b5a696d6d6aaa79091-userdata-shm.mount: Deactivated successfully.
Jan 31 03:08:02 np0005603609 systemd[1]: var-lib-containers-storage-overlay-c368c8abfe874519ce860df95f906185b40c89067026d3879229f972d1b8cd31-merged.mount: Deactivated successfully.
Jan 31 03:08:02 np0005603609 podman[266236]: 2026-01-31 08:08:02.357066761 +0000 UTC m=+0.133172711 container cleanup 9277b38ffddb160b29093113fb801164d44e94ce640b93b5a696d6d6aaa79091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:02 np0005603609 systemd[1]: libpod-conmon-9277b38ffddb160b29093113fb801164d44e94ce640b93b5a696d6d6aaa79091.scope: Deactivated successfully.
Jan 31 03:08:02 np0005603609 podman[266280]: 2026-01-31 08:08:02.414589483 +0000 UTC m=+0.041844207 container remove 9277b38ffddb160b29093113fb801164d44e94ce640b93b5a696d6d6aaa79091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:08:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:02.417 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[641cae93-7198-41f0-9d50-62a35f760437]: (4, ('Sat Jan 31 08:08:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (9277b38ffddb160b29093113fb801164d44e94ce640b93b5a696d6d6aaa79091)\n9277b38ffddb160b29093113fb801164d44e94ce640b93b5a696d6d6aaa79091\nSat Jan 31 08:08:02 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (9277b38ffddb160b29093113fb801164d44e94ce640b93b5a696d6d6aaa79091)\n9277b38ffddb160b29093113fb801164d44e94ce640b93b5a696d6d6aaa79091\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:02.419 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[10b07394-d10c-42f5-b325-4f9beae6944f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:02.420 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:02 np0005603609 nova_compute[221550]: 2026-01-31 08:08:02.422 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:02 np0005603609 kernel: tap5cc2535f-00: left promiscuous mode
Jan 31 03:08:02 np0005603609 nova_compute[221550]: 2026-01-31 08:08:02.432 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:02.433 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e954e15c-da5e-4779-a7d9-eb9b6bfcbca1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:02.446 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[49a2c6a8-d3c8-4f8d-a7ff-4a4a06ea4a4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:02.447 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e4ec61-1086-4904-83ee-ce396364d00b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:02.458 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcb933f-d564-453c-8426-da164c143597]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711484, 'reachable_time': 26066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266296, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:02 np0005603609 systemd[1]: run-netns-ovnmeta\x2d5cc2535f\x2d0f8f\x2d4713\x2da35c\x2d9805048a29a8.mount: Deactivated successfully.
Jan 31 03:08:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:02.460 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:08:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:02.460 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[8445884b-9fa6-4b27-bda9-b9231515a147]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:02 np0005603609 nova_compute[221550]: 2026-01-31 08:08:02.904 221554 DEBUG nova.compute.manager [None req-002ec949-5f2d-4d0a-be9e-7cf24afa7ee8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Jan 31 03:08:03 np0005603609 nova_compute[221550]: 2026-01-31 08:08:03.162 221554 DEBUG nova.compute.manager [req-78b614a6-1d24-4b1c-bd1d-da2c37d32bbc req-d3d4b8b1-b7ac-46d4-ad8e-8ee32ed08afe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received event network-vif-unplugged-9256ad7f-5794-4914-b5dd-12cc282f6172 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:08:03 np0005603609 nova_compute[221550]: 2026-01-31 08:08:03.162 221554 DEBUG oslo_concurrency.lockutils [req-78b614a6-1d24-4b1c-bd1d-da2c37d32bbc req-d3d4b8b1-b7ac-46d4-ad8e-8ee32ed08afe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:03 np0005603609 nova_compute[221550]: 2026-01-31 08:08:03.163 221554 DEBUG oslo_concurrency.lockutils [req-78b614a6-1d24-4b1c-bd1d-da2c37d32bbc req-d3d4b8b1-b7ac-46d4-ad8e-8ee32ed08afe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:03 np0005603609 nova_compute[221550]: 2026-01-31 08:08:03.163 221554 DEBUG oslo_concurrency.lockutils [req-78b614a6-1d24-4b1c-bd1d-da2c37d32bbc req-d3d4b8b1-b7ac-46d4-ad8e-8ee32ed08afe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:03 np0005603609 nova_compute[221550]: 2026-01-31 08:08:03.163 221554 DEBUG nova.compute.manager [req-78b614a6-1d24-4b1c-bd1d-da2c37d32bbc req-d3d4b8b1-b7ac-46d4-ad8e-8ee32ed08afe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] No waiting events found dispatching network-vif-unplugged-9256ad7f-5794-4914-b5dd-12cc282f6172 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:08:03 np0005603609 nova_compute[221550]: 2026-01-31 08:08:03.163 221554 WARNING nova.compute.manager [req-78b614a6-1d24-4b1c-bd1d-da2c37d32bbc req-d3d4b8b1-b7ac-46d4-ad8e-8ee32ed08afe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received unexpected event network-vif-unplugged-9256ad7f-5794-4914-b5dd-12cc282f6172 for instance with vm_state suspended and task_state None.#033[00m
Jan 31 03:08:03 np0005603609 nova_compute[221550]: 2026-01-31 08:08:03.845 221554 INFO nova.compute.manager [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Resuming#033[00m
Jan 31 03:08:03 np0005603609 nova_compute[221550]: 2026-01-31 08:08:03.847 221554 DEBUG nova.objects.instance [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'flavor' on Instance uuid 2552b865-8435-46e6-823f-87ef00de854b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:08:03 np0005603609 nova_compute[221550]: 2026-01-31 08:08:03.924 221554 DEBUG oslo_concurrency.lockutils [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:08:03 np0005603609 nova_compute[221550]: 2026-01-31 08:08:03.925 221554 DEBUG oslo_concurrency.lockutils [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquired lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:08:03 np0005603609 nova_compute[221550]: 2026-01-31 08:08:03.925 221554 DEBUG nova.network.neutron [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:08:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:03.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:04 np0005603609 nova_compute[221550]: 2026-01-31 08:08:04.107 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:04.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e266 e266: 3 total, 3 up, 3 in
Jan 31 03:08:04 np0005603609 nova_compute[221550]: 2026-01-31 08:08:04.976 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:05 np0005603609 nova_compute[221550]: 2026-01-31 08:08:05.338 221554 DEBUG nova.compute.manager [req-74d18e23-df47-4a6f-a25b-8794462cb54b req-7c7ee28e-8f7a-4af4-a49f-e25b768db493 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:08:05 np0005603609 nova_compute[221550]: 2026-01-31 08:08:05.339 221554 DEBUG oslo_concurrency.lockutils [req-74d18e23-df47-4a6f-a25b-8794462cb54b req-7c7ee28e-8f7a-4af4-a49f-e25b768db493 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:05 np0005603609 nova_compute[221550]: 2026-01-31 08:08:05.339 221554 DEBUG oslo_concurrency.lockutils [req-74d18e23-df47-4a6f-a25b-8794462cb54b req-7c7ee28e-8f7a-4af4-a49f-e25b768db493 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:05 np0005603609 nova_compute[221550]: 2026-01-31 08:08:05.339 221554 DEBUG oslo_concurrency.lockutils [req-74d18e23-df47-4a6f-a25b-8794462cb54b req-7c7ee28e-8f7a-4af4-a49f-e25b768db493 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:05 np0005603609 nova_compute[221550]: 2026-01-31 08:08:05.339 221554 DEBUG nova.compute.manager [req-74d18e23-df47-4a6f-a25b-8794462cb54b req-7c7ee28e-8f7a-4af4-a49f-e25b768db493 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] No waiting events found dispatching network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:08:05 np0005603609 nova_compute[221550]: 2026-01-31 08:08:05.340 221554 WARNING nova.compute.manager [req-74d18e23-df47-4a6f-a25b-8794462cb54b req-7c7ee28e-8f7a-4af4-a49f-e25b768db493 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received unexpected event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 for instance with vm_state suspended and task_state resuming.#033[00m
Jan 31 03:08:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:05.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:06 np0005603609 nova_compute[221550]: 2026-01-31 08:08:06.010 221554 DEBUG nova.compute.manager [None req-d26119c2-3c0a-4d76-bf2a-23902ce69f88 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:08:06 np0005603609 nova_compute[221550]: 2026-01-31 08:08:06.150 221554 INFO nova.compute.manager [None req-d26119c2-3c0a-4d76-bf2a-23902ce69f88 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] instance snapshotting#033[00m
Jan 31 03:08:06 np0005603609 nova_compute[221550]: 2026-01-31 08:08:06.152 221554 DEBUG nova.objects.instance [None req-d26119c2-3c0a-4d76-bf2a-23902ce69f88 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'flavor' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:08:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:06.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:06 np0005603609 nova_compute[221550]: 2026-01-31 08:08:06.767 221554 INFO nova.virt.libvirt.driver [None req-d26119c2-3c0a-4d76-bf2a-23902ce69f88 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Beginning live snapshot process#033[00m
Jan 31 03:08:07 np0005603609 nova_compute[221550]: 2026-01-31 08:08:07.037 221554 DEBUG nova.virt.libvirt.imagebackend [None req-d26119c2-3c0a-4d76-bf2a-23902ce69f88 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:08:07 np0005603609 nova_compute[221550]: 2026-01-31 08:08:07.359 221554 DEBUG nova.storage.rbd_utils [None req-d26119c2-3c0a-4d76-bf2a-23902ce69f88 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] creating snapshot(e4759f4b29ca41e584c73cb424ab59ca) on rbd image(1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:08:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:07.501 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:07.502 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:07.502 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e267 e267: 3 total, 3 up, 3 in
Jan 31 03:08:07 np0005603609 nova_compute[221550]: 2026-01-31 08:08:07.941 221554 DEBUG nova.storage.rbd_utils [None req-d26119c2-3c0a-4d76-bf2a-23902ce69f88 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] cloning vms/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk@e4759f4b29ca41e584c73cb424ab59ca to images/c4db27fa-394f-44cc-aad2-82f8c71ac7b7 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:08:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:08.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.061 221554 DEBUG nova.storage.rbd_utils [None req-d26119c2-3c0a-4d76-bf2a-23902ce69f88 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] flattening images/c4db27fa-394f-44cc-aad2-82f8c71ac7b7 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.107 221554 DEBUG nova.network.neutron [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Updating instance_info_cache with network_info: [{"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.184 221554 DEBUG oslo_concurrency.lockutils [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Releasing lock "refresh_cache-2552b865-8435-46e6-823f-87ef00de854b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.192 221554 DEBUG nova.virt.libvirt.vif [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1274879352',display_name='tempest-ServerActionsTestJSON-server-1274879352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1274879352',id=107,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:07:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-7zccd5vw',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:08:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=2552b865-8435-46e6-823f-87ef00de854b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.193 221554 DEBUG nova.network.os_vif_util [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.194 221554 DEBUG nova.network.os_vif_util [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.195 221554 DEBUG os_vif [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.196 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.196 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.197 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.201 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.202 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9256ad7f-57, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.202 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9256ad7f-57, col_values=(('external_ids', {'iface-id': '9256ad7f-5794-4914-b5dd-12cc282f6172', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:92:b0', 'vm-uuid': '2552b865-8435-46e6-823f-87ef00de854b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.203 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.203 221554 INFO os_vif [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57')#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.235 221554 DEBUG nova.objects.instance [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2552b865-8435-46e6-823f-87ef00de854b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:08:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:08.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:08 np0005603609 kernel: tap9256ad7f-57: entered promiscuous mode
Jan 31 03:08:08 np0005603609 NetworkManager[49064]: <info>  [1769846888.3570] manager: (tap9256ad7f-57): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.358 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:08:08Z|00441|binding|INFO|Claiming lport 9256ad7f-5794-4914-b5dd-12cc282f6172 for this chassis.
Jan 31 03:08:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:08:08Z|00442|binding|INFO|9256ad7f-5794-4914-b5dd-12cc282f6172: Claiming fa:16:3e:a7:92:b0 10.100.0.9
Jan 31 03:08:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:08:08Z|00443|binding|INFO|Setting lport 9256ad7f-5794-4914-b5dd-12cc282f6172 ovn-installed in OVS
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.367 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.370 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:08 np0005603609 systemd-machined[190912]: New machine qemu-55-instance-0000006b.
Jan 31 03:08:08 np0005603609 systemd[1]: Started Virtual Machine qemu-55-instance-0000006b.
Jan 31 03:08:08 np0005603609 systemd-udevd[266416]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:08:08 np0005603609 NetworkManager[49064]: <info>  [1769846888.4215] device (tap9256ad7f-57): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:08:08 np0005603609 NetworkManager[49064]: <info>  [1769846888.4225] device (tap9256ad7f-57): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:08:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:08:08Z|00444|binding|INFO|Setting lport 9256ad7f-5794-4914-b5dd-12cc282f6172 up in Southbound
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.522 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:92:b0 10.100.0.9'], port_security=['fa:16:3e:a7:92:b0 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2552b865-8435-46e6-823f-87ef00de854b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '7', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=9256ad7f-5794-4914-b5dd-12cc282f6172) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.523 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 9256ad7f-5794-4914-b5dd-12cc282f6172 in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 bound to our chassis#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.524 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc2535f-0f8f-4713-a35c-9805048a29a8#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.534 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[394fe023-ebff-401f-bd2d-1d91d580e903]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.535 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5cc2535f-01 in ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.539 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5cc2535f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.539 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d368ba-be56-4514-9f8b-04f851b6bb45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.539 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[50209aac-a9e5-42f7-808b-d493fbe85eb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.549 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[939f9431-6509-4509-8ab7-c16070280f3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.565 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9f4434-345c-41eb-8bf9-171891c90598]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.596 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3b6370-c336-4ff4-89c0-01f17152d1d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:08 np0005603609 NetworkManager[49064]: <info>  [1769846888.6032] manager: (tap5cc2535f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/220)
Jan 31 03:08:08 np0005603609 systemd-udevd[266418]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.602 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[20bf2e0e-05d4-42d6-b31d-67f4c8dbaad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.633 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[8b9ca3e6-9b4a-41ef-91e8-8dd9dca4fe68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.636 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[6284227c-ce77-4e39-a5bf-329c68baa8a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:08:08Z|00445|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 03:08:08 np0005603609 NetworkManager[49064]: <info>  [1769846888.6571] device (tap5cc2535f-00): carrier: link connected
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.667 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.673 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[133af79a-e4cc-451a-923e-cf1d9e706d56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.689 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1a43ced0-e57e-4b86-96f8-eab0db488b8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712835, 'reachable_time': 25101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266449, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.709 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b94627bd-f51c-4fa8-ac4d-2f7d4477dac8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:76f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712835, 'tstamp': 712835}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266450, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.726 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bd5fdaaa-95ef-435f-b858-09e60a495e02]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc2535f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:76:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712835, 'reachable_time': 25101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266458, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.762 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e5cfcebb-dbea-4b66-aecc-0ec1dc7b658c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.772 221554 DEBUG nova.storage.rbd_utils [None req-d26119c2-3c0a-4d76-bf2a-23902ce69f88 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] removing snapshot(e4759f4b29ca41e584c73cb424ab59ca) on rbd image(1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.829 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[32392798-89c2-4316-84d6-84bc9df04440]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.831 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.831 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.832 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc2535f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.834 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:08 np0005603609 NetworkManager[49064]: <info>  [1769846888.8350] manager: (tap5cc2535f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Jan 31 03:08:08 np0005603609 kernel: tap5cc2535f-00: entered promiscuous mode
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.838 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.843 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc2535f-00, col_values=(('external_ids', {'iface-id': 'ab077a7e-cc79-4948-8987-2cd87d88deff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.844 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:08:08Z|00446|binding|INFO|Releasing lport ab077a7e-cc79-4948-8987-2cd87d88deff from this chassis (sb_readonly=0)
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.845 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.846 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.846 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae78469-b397-41a7-980d-36d0c6bea545]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.847 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/5cc2535f-0f8f-4713-a35c-9805048a29a8.pid.haproxy
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 5cc2535f-0f8f-4713-a35c-9805048a29a8
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:08:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:08.847 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'env', 'PROCESS_TAG=haproxy-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5cc2535f-0f8f-4713-a35c-9805048a29a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.851 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e268 e268: 3 total, 3 up, 3 in
Jan 31 03:08:08 np0005603609 nova_compute[221550]: 2026-01-31 08:08:08.965 221554 DEBUG nova.storage.rbd_utils [None req-d26119c2-3c0a-4d76-bf2a-23902ce69f88 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] creating snapshot(snap) on rbd image(c4db27fa-394f-44cc-aad2-82f8c71ac7b7) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.110 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.217 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 2552b865-8435-46e6-823f-87ef00de854b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:08:09 np0005603609 podman[266560]: 2026-01-31 08:08:09.217401664 +0000 UTC m=+0.065305381 container create b4b94eb6c7d148fab1cde6a97a996a6545a894b555f6fa5d91ca1d1654ac362a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.218 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846889.2162592, 2552b865-8435-46e6-823f-87ef00de854b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.218 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] VM Started (Lifecycle Event)#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.226 221554 DEBUG nova.compute.manager [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.227 221554 DEBUG nova.objects.instance [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2552b865-8435-46e6-823f-87ef00de854b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:08:09 np0005603609 systemd[1]: Started libpod-conmon-b4b94eb6c7d148fab1cde6a97a996a6545a894b555f6fa5d91ca1d1654ac362a.scope.
Jan 31 03:08:09 np0005603609 podman[266560]: 2026-01-31 08:08:09.175110868 +0000 UTC m=+0.023014595 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:08:09 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:08:09 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2346bd71e6ca1e37a6d868c371dff436beb701c6b07445985ea34b74fc69fea5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.292 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.296 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:08:09 np0005603609 podman[266560]: 2026-01-31 08:08:09.307402027 +0000 UTC m=+0.155305754 container init b4b94eb6c7d148fab1cde6a97a996a6545a894b555f6fa5d91ca1d1654ac362a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 03:08:09 np0005603609 podman[266560]: 2026-01-31 08:08:09.312074929 +0000 UTC m=+0.159978636 container start b4b94eb6c7d148fab1cde6a97a996a6545a894b555f6fa5d91ca1d1654ac362a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.321 221554 DEBUG nova.compute.manager [req-d5616fb7-76d1-492f-8d13-794667372155 req-2dd2463f-fddc-4718-b8b0-1f9cd28defb9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.322 221554 DEBUG oslo_concurrency.lockutils [req-d5616fb7-76d1-492f-8d13-794667372155 req-2dd2463f-fddc-4718-b8b0-1f9cd28defb9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.322 221554 DEBUG oslo_concurrency.lockutils [req-d5616fb7-76d1-492f-8d13-794667372155 req-2dd2463f-fddc-4718-b8b0-1f9cd28defb9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.322 221554 DEBUG oslo_concurrency.lockutils [req-d5616fb7-76d1-492f-8d13-794667372155 req-2dd2463f-fddc-4718-b8b0-1f9cd28defb9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.323 221554 DEBUG nova.compute.manager [req-d5616fb7-76d1-492f-8d13-794667372155 req-2dd2463f-fddc-4718-b8b0-1f9cd28defb9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] No waiting events found dispatching network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.323 221554 WARNING nova.compute.manager [req-d5616fb7-76d1-492f-8d13-794667372155 req-2dd2463f-fddc-4718-b8b0-1f9cd28defb9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received unexpected event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 for instance with vm_state suspended and task_state resuming.#033[00m
Jan 31 03:08:09 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266575]: [NOTICE]   (266579) : New worker (266581) forked
Jan 31 03:08:09 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266575]: [NOTICE]   (266579) : Loading success.
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.333 221554 INFO nova.virt.libvirt.driver [-] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Instance running successfully.#033[00m
Jan 31 03:08:09 np0005603609 virtqemud[221292]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.336 221554 DEBUG nova.virt.libvirt.guest [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.336 221554 DEBUG nova.compute.manager [None req-bb22df04-b9c0-4784-8fcb-a8d94c4d06e8 d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.392 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.393 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846889.2184467, 2552b865-8435-46e6-823f-87ef00de854b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.393 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.495 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.499 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:08:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e269 e269: 3 total, 3 up, 3 in
Jan 31 03:08:09 np0005603609 nova_compute[221550]: 2026-01-31 08:08:09.977 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:10.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:10.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:11 np0005603609 nova_compute[221550]: 2026-01-31 08:08:11.750 221554 DEBUG nova.compute.manager [req-887406c7-0910-4006-87a8-f55cbff4d28c req-4a30909e-9d6a-4ca2-9ef3-af69d73aac23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:08:11 np0005603609 nova_compute[221550]: 2026-01-31 08:08:11.750 221554 DEBUG oslo_concurrency.lockutils [req-887406c7-0910-4006-87a8-f55cbff4d28c req-4a30909e-9d6a-4ca2-9ef3-af69d73aac23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:11 np0005603609 nova_compute[221550]: 2026-01-31 08:08:11.750 221554 DEBUG oslo_concurrency.lockutils [req-887406c7-0910-4006-87a8-f55cbff4d28c req-4a30909e-9d6a-4ca2-9ef3-af69d73aac23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:11 np0005603609 nova_compute[221550]: 2026-01-31 08:08:11.750 221554 DEBUG oslo_concurrency.lockutils [req-887406c7-0910-4006-87a8-f55cbff4d28c req-4a30909e-9d6a-4ca2-9ef3-af69d73aac23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:11 np0005603609 nova_compute[221550]: 2026-01-31 08:08:11.751 221554 DEBUG nova.compute.manager [req-887406c7-0910-4006-87a8-f55cbff4d28c req-4a30909e-9d6a-4ca2-9ef3-af69d73aac23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] No waiting events found dispatching network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:08:11 np0005603609 nova_compute[221550]: 2026-01-31 08:08:11.751 221554 WARNING nova.compute.manager [req-887406c7-0910-4006-87a8-f55cbff4d28c req-4a30909e-9d6a-4ca2-9ef3-af69d73aac23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received unexpected event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:08:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:08:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:12.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:08:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:08:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:12.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:08:12 np0005603609 nova_compute[221550]: 2026-01-31 08:08:12.778 221554 INFO nova.virt.libvirt.driver [None req-d26119c2-3c0a-4d76-bf2a-23902ce69f88 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Snapshot image upload complete#033[00m
Jan 31 03:08:12 np0005603609 nova_compute[221550]: 2026-01-31 08:08:12.778 221554 INFO nova.compute.manager [None req-d26119c2-3c0a-4d76-bf2a-23902ce69f88 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Took 6.57 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.139 221554 DEBUG oslo_concurrency.lockutils [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.139 221554 DEBUG oslo_concurrency.lockutils [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.139 221554 DEBUG oslo_concurrency.lockutils [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.140 221554 DEBUG oslo_concurrency.lockutils [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.140 221554 DEBUG oslo_concurrency.lockutils [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.141 221554 INFO nova.compute.manager [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Terminating instance#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.142 221554 DEBUG nova.compute.manager [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.268 221554 DEBUG nova.compute.manager [None req-d26119c2-3c0a-4d76-bf2a-23902ce69f88 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Jan 31 03:08:13 np0005603609 kernel: tap9256ad7f-57 (unregistering): left promiscuous mode
Jan 31 03:08:13 np0005603609 NetworkManager[49064]: <info>  [1769846893.2864] device (tap9256ad7f-57): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.296 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:08:13Z|00447|binding|INFO|Releasing lport 9256ad7f-5794-4914-b5dd-12cc282f6172 from this chassis (sb_readonly=0)
Jan 31 03:08:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:08:13Z|00448|binding|INFO|Setting lport 9256ad7f-5794-4914-b5dd-12cc282f6172 down in Southbound
Jan 31 03:08:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:08:13Z|00449|binding|INFO|Removing iface tap9256ad7f-57 ovn-installed in OVS
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.298 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.302 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:13 np0005603609 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Jan 31 03:08:13 np0005603609 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000006b.scope: Consumed 4.623s CPU time.
Jan 31 03:08:13 np0005603609 systemd-machined[190912]: Machine qemu-55-instance-0000006b terminated.
Jan 31 03:08:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:13.345 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:92:b0 10.100.0.9'], port_security=['fa:16:3e:a7:92:b0 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2552b865-8435-46e6-823f-87ef00de854b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1fcec9ca13964c7191134db4420ab049', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7be0d68e-c4ff-4356-97f2-bd58246f6e46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=edec0084-a602-4e07-be10-e2ea3f713e0b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=9256ad7f-5794-4914-b5dd-12cc282f6172) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:08:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:13.346 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 9256ad7f-5794-4914-b5dd-12cc282f6172 in datapath 5cc2535f-0f8f-4713-a35c-9805048a29a8 unbound from our chassis#033[00m
Jan 31 03:08:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:13.347 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cc2535f-0f8f-4713-a35c-9805048a29a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:08:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:13.348 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[956875fc-572a-440e-9530-f23b8b876c8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:13.348 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 namespace which is not needed anymore#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.372 221554 INFO nova.virt.libvirt.driver [-] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Instance destroyed successfully.#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.373 221554 DEBUG nova.objects.instance [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lazy-loading 'resources' on Instance uuid 2552b865-8435-46e6-823f-87ef00de854b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:08:13 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266575]: [NOTICE]   (266579) : haproxy version is 2.8.14-c23fe91
Jan 31 03:08:13 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266575]: [NOTICE]   (266579) : path to executable is /usr/sbin/haproxy
Jan 31 03:08:13 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266575]: [WARNING]  (266579) : Exiting Master process...
Jan 31 03:08:13 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266575]: [WARNING]  (266579) : Exiting Master process...
Jan 31 03:08:13 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266575]: [ALERT]    (266579) : Current worker (266581) exited with code 143 (Terminated)
Jan 31 03:08:13 np0005603609 neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8[266575]: [WARNING]  (266579) : All workers exited. Exiting... (0)
Jan 31 03:08:13 np0005603609 systemd[1]: libpod-b4b94eb6c7d148fab1cde6a97a996a6545a894b555f6fa5d91ca1d1654ac362a.scope: Deactivated successfully.
Jan 31 03:08:13 np0005603609 podman[266624]: 2026-01-31 08:08:13.484133025 +0000 UTC m=+0.060135157 container died b4b94eb6c7d148fab1cde6a97a996a6545a894b555f6fa5d91ca1d1654ac362a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 03:08:13 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4b94eb6c7d148fab1cde6a97a996a6545a894b555f6fa5d91ca1d1654ac362a-userdata-shm.mount: Deactivated successfully.
Jan 31 03:08:13 np0005603609 systemd[1]: var-lib-containers-storage-overlay-2346bd71e6ca1e37a6d868c371dff436beb701c6b07445985ea34b74fc69fea5-merged.mount: Deactivated successfully.
Jan 31 03:08:13 np0005603609 podman[266624]: 2026-01-31 08:08:13.531226267 +0000 UTC m=+0.107228389 container cleanup b4b94eb6c7d148fab1cde6a97a996a6545a894b555f6fa5d91ca1d1654ac362a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.533 221554 DEBUG nova.virt.libvirt.vif [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1274879352',display_name='tempest-ServerActionsTestJSON-server-1274879352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1274879352',id=107,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXWGbWS2QVJNaewBtvtTvHVuAswcZ4aBah3dnudm7AiGOrBtXYf3L4O7q1zMaySLJ/p/4JNpF+Y0p8p8tof6T0lF6BIQ9/oCdDpVXVBSrxW+zwXPG1Zm9rSlBDlr1LhuQ==',key_name='tempest-keypair-1337985484',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:07:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1fcec9ca13964c7191134db4420ab049',ramdisk_id='',reservation_id='r-7zccd5vw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1391450973',owner_user_name='tempest-ServerActionsTestJSON-1391450973-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:08:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d9ed446fb2cf4fc0a4e619c6c766fddc',uuid=2552b865-8435-46e6-823f-87ef00de854b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.533 221554 DEBUG nova.network.os_vif_util [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converting VIF {"id": "9256ad7f-5794-4914-b5dd-12cc282f6172", "address": "fa:16:3e:a7:92:b0", "network": {"id": "5cc2535f-0f8f-4713-a35c-9805048a29a8", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1076068063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1fcec9ca13964c7191134db4420ab049", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9256ad7f-57", "ovs_interfaceid": "9256ad7f-5794-4914-b5dd-12cc282f6172", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.534 221554 DEBUG nova.network.os_vif_util [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.534 221554 DEBUG os_vif [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:08:13 np0005603609 systemd[1]: libpod-conmon-b4b94eb6c7d148fab1cde6a97a996a6545a894b555f6fa5d91ca1d1654ac362a.scope: Deactivated successfully.
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.536 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.536 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9256ad7f-57, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.539 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.541 221554 INFO os_vif [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:92:b0,bridge_name='br-int',has_traffic_filtering=True,id=9256ad7f-5794-4914-b5dd-12cc282f6172,network=Network(5cc2535f-0f8f-4713-a35c-9805048a29a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9256ad7f-57')#033[00m
Jan 31 03:08:13 np0005603609 podman[266654]: 2026-01-31 08:08:13.605486511 +0000 UTC m=+0.055090935 container remove b4b94eb6c7d148fab1cde6a97a996a6545a894b555f6fa5d91ca1d1654ac362a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:08:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:13.609 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[aeec7820-ef0b-40ed-99bb-9fc812e747cb]: (4, ('Sat Jan 31 08:08:13 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (b4b94eb6c7d148fab1cde6a97a996a6545a894b555f6fa5d91ca1d1654ac362a)\nb4b94eb6c7d148fab1cde6a97a996a6545a894b555f6fa5d91ca1d1654ac362a\nSat Jan 31 08:08:13 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 (b4b94eb6c7d148fab1cde6a97a996a6545a894b555f6fa5d91ca1d1654ac362a)\nb4b94eb6c7d148fab1cde6a97a996a6545a894b555f6fa5d91ca1d1654ac362a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:13.610 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ad39a9f9-f15a-4802-8641-c3403cc5d63e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:13.611 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc2535f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.613 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:13 np0005603609 kernel: tap5cc2535f-00: left promiscuous mode
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.618 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:13.621 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[151dc934-fc61-4880-a73c-145af76b6838]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:13.640 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[11cf3b19-2525-4599-9770-639df3cf7736]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:13.641 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8132d3-9a48-4eb8-b010-ca9aa9125d67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:13.654 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[95385441-7bd1-4a47-8212-26247f3755b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712829, 'reachable_time': 37723, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266687, 'error': None, 'target': 'ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:13.657 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5cc2535f-0f8f-4713-a35c-9805048a29a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:08:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:13.657 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[1cf4b6c7-f42f-4408-92ee-28ed5ac4a840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:13 np0005603609 systemd[1]: run-netns-ovnmeta\x2d5cc2535f\x2d0f8f\x2d4713\x2da35c\x2d9805048a29a8.mount: Deactivated successfully.
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.993 221554 DEBUG nova.compute.manager [req-9b5f8ede-05b9-4962-b183-a7fd197e8f95 req-5ffaf910-1948-42ad-a368-25df6bd25a50 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received event network-vif-unplugged-9256ad7f-5794-4914-b5dd-12cc282f6172 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.994 221554 DEBUG oslo_concurrency.lockutils [req-9b5f8ede-05b9-4962-b183-a7fd197e8f95 req-5ffaf910-1948-42ad-a368-25df6bd25a50 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.994 221554 DEBUG oslo_concurrency.lockutils [req-9b5f8ede-05b9-4962-b183-a7fd197e8f95 req-5ffaf910-1948-42ad-a368-25df6bd25a50 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.994 221554 DEBUG oslo_concurrency.lockutils [req-9b5f8ede-05b9-4962-b183-a7fd197e8f95 req-5ffaf910-1948-42ad-a368-25df6bd25a50 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.995 221554 DEBUG nova.compute.manager [req-9b5f8ede-05b9-4962-b183-a7fd197e8f95 req-5ffaf910-1948-42ad-a368-25df6bd25a50 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] No waiting events found dispatching network-vif-unplugged-9256ad7f-5794-4914-b5dd-12cc282f6172 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:08:13 np0005603609 nova_compute[221550]: 2026-01-31 08:08:13.995 221554 DEBUG nova.compute.manager [req-9b5f8ede-05b9-4962-b183-a7fd197e8f95 req-5ffaf910-1948-42ad-a368-25df6bd25a50 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received event network-vif-unplugged-9256ad7f-5794-4914-b5dd-12cc282f6172 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:08:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:08:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:14.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:08:14 np0005603609 nova_compute[221550]: 2026-01-31 08:08:14.147 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:14.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:15 np0005603609 nova_compute[221550]: 2026-01-31 08:08:15.150 221554 DEBUG nova.compute.manager [None req-aa9b8894-091a-41f2-b4d1-18402f58b6ca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:08:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:15 np0005603609 nova_compute[221550]: 2026-01-31 08:08:15.386 221554 INFO nova.virt.libvirt.driver [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Deleting instance files /var/lib/nova/instances/2552b865-8435-46e6-823f-87ef00de854b_del#033[00m
Jan 31 03:08:15 np0005603609 nova_compute[221550]: 2026-01-31 08:08:15.388 221554 INFO nova.virt.libvirt.driver [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Deletion of /var/lib/nova/instances/2552b865-8435-46e6-823f-87ef00de854b_del complete#033[00m
Jan 31 03:08:15 np0005603609 nova_compute[221550]: 2026-01-31 08:08:15.396 221554 INFO nova.compute.manager [None req-aa9b8894-091a-41f2-b4d1-18402f58b6ca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] instance snapshotting#033[00m
Jan 31 03:08:15 np0005603609 nova_compute[221550]: 2026-01-31 08:08:15.398 221554 DEBUG nova.objects.instance [None req-aa9b8894-091a-41f2-b4d1-18402f58b6ca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'flavor' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:08:15 np0005603609 nova_compute[221550]: 2026-01-31 08:08:15.551 221554 INFO nova.compute.manager [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Took 2.41 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:08:15 np0005603609 nova_compute[221550]: 2026-01-31 08:08:15.551 221554 DEBUG oslo.service.loopingcall [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:08:15 np0005603609 nova_compute[221550]: 2026-01-31 08:08:15.552 221554 DEBUG nova.compute.manager [-] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:08:15 np0005603609 nova_compute[221550]: 2026-01-31 08:08:15.552 221554 DEBUG nova.network.neutron [-] [instance: 2552b865-8435-46e6-823f-87ef00de854b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:08:15 np0005603609 nova_compute[221550]: 2026-01-31 08:08:15.752 221554 INFO nova.virt.libvirt.driver [None req-aa9b8894-091a-41f2-b4d1-18402f58b6ca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Beginning live snapshot process#033[00m
Jan 31 03:08:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:16.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:16 np0005603609 nova_compute[221550]: 2026-01-31 08:08:16.056 221554 DEBUG nova.virt.libvirt.imagebackend [None req-aa9b8894-091a-41f2-b4d1-18402f58b6ca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:08:16 np0005603609 nova_compute[221550]: 2026-01-31 08:08:16.190 221554 DEBUG nova.compute.manager [req-1f2fd86d-59e4-405e-896f-41bdea81de89 req-ad12dd2f-e5e0-4bef-9255-ea4ff06c32d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:08:16 np0005603609 nova_compute[221550]: 2026-01-31 08:08:16.190 221554 DEBUG oslo_concurrency.lockutils [req-1f2fd86d-59e4-405e-896f-41bdea81de89 req-ad12dd2f-e5e0-4bef-9255-ea4ff06c32d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2552b865-8435-46e6-823f-87ef00de854b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:16 np0005603609 nova_compute[221550]: 2026-01-31 08:08:16.191 221554 DEBUG oslo_concurrency.lockutils [req-1f2fd86d-59e4-405e-896f-41bdea81de89 req-ad12dd2f-e5e0-4bef-9255-ea4ff06c32d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:16 np0005603609 nova_compute[221550]: 2026-01-31 08:08:16.191 221554 DEBUG oslo_concurrency.lockutils [req-1f2fd86d-59e4-405e-896f-41bdea81de89 req-ad12dd2f-e5e0-4bef-9255-ea4ff06c32d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:16 np0005603609 nova_compute[221550]: 2026-01-31 08:08:16.191 221554 DEBUG nova.compute.manager [req-1f2fd86d-59e4-405e-896f-41bdea81de89 req-ad12dd2f-e5e0-4bef-9255-ea4ff06c32d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] No waiting events found dispatching network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:08:16 np0005603609 nova_compute[221550]: 2026-01-31 08:08:16.191 221554 WARNING nova.compute.manager [req-1f2fd86d-59e4-405e-896f-41bdea81de89 req-ad12dd2f-e5e0-4bef-9255-ea4ff06c32d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received unexpected event network-vif-plugged-9256ad7f-5794-4914-b5dd-12cc282f6172 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:08:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:16.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:16 np0005603609 nova_compute[221550]: 2026-01-31 08:08:16.426 221554 DEBUG nova.storage.rbd_utils [None req-aa9b8894-091a-41f2-b4d1-18402f58b6ca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] creating snapshot(11d93edb28e54f0084f62a62e25d24fb) on rbd image(1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:08:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e270 e270: 3 total, 3 up, 3 in
Jan 31 03:08:16 np0005603609 nova_compute[221550]: 2026-01-31 08:08:16.763 221554 DEBUG nova.storage.rbd_utils [None req-aa9b8894-091a-41f2-b4d1-18402f58b6ca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] cloning vms/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk@11d93edb28e54f0084f62a62e25d24fb to images/29ae106c-3bc3-4dca-bd69-8cd8e500fc3f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:08:16 np0005603609 nova_compute[221550]: 2026-01-31 08:08:16.823 221554 DEBUG nova.network.neutron [-] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:08:16 np0005603609 nova_compute[221550]: 2026-01-31 08:08:16.911 221554 DEBUG nova.storage.rbd_utils [None req-aa9b8894-091a-41f2-b4d1-18402f58b6ca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] flattening images/29ae106c-3bc3-4dca-bd69-8cd8e500fc3f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:08:16 np0005603609 nova_compute[221550]: 2026-01-31 08:08:16.987 221554 DEBUG nova.compute.manager [req-84d38e39-e623-4f08-ab2d-1e58d9456eea req-5503b1ff-92ea-4aa0-9c05-34ceefc6e965 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Received event network-vif-deleted-9256ad7f-5794-4914-b5dd-12cc282f6172 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:08:16 np0005603609 nova_compute[221550]: 2026-01-31 08:08:16.987 221554 INFO nova.compute.manager [-] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Took 1.44 seconds to deallocate network for instance.#033[00m
Jan 31 03:08:17 np0005603609 nova_compute[221550]: 2026-01-31 08:08:17.129 221554 DEBUG oslo_concurrency.lockutils [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:17 np0005603609 nova_compute[221550]: 2026-01-31 08:08:17.130 221554 DEBUG oslo_concurrency.lockutils [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:17 np0005603609 nova_compute[221550]: 2026-01-31 08:08:17.222 221554 DEBUG oslo_concurrency.processutils [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:08:17 np0005603609 nova_compute[221550]: 2026-01-31 08:08:17.399 221554 DEBUG nova.storage.rbd_utils [None req-aa9b8894-091a-41f2-b4d1-18402f58b6ca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] removing snapshot(11d93edb28e54f0084f62a62e25d24fb) on rbd image(1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:08:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:08:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3096185768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:08:17 np0005603609 nova_compute[221550]: 2026-01-31 08:08:17.646 221554 DEBUG oslo_concurrency.processutils [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:08:17 np0005603609 nova_compute[221550]: 2026-01-31 08:08:17.652 221554 DEBUG nova.compute.provider_tree [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:08:17 np0005603609 nova_compute[221550]: 2026-01-31 08:08:17.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e271 e271: 3 total, 3 up, 3 in
Jan 31 03:08:17 np0005603609 nova_compute[221550]: 2026-01-31 08:08:17.754 221554 DEBUG nova.scheduler.client.report [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:08:17 np0005603609 nova_compute[221550]: 2026-01-31 08:08:17.775 221554 DEBUG nova.storage.rbd_utils [None req-aa9b8894-091a-41f2-b4d1-18402f58b6ca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] creating snapshot(snap) on rbd image(29ae106c-3bc3-4dca-bd69-8cd8e500fc3f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:08:17 np0005603609 nova_compute[221550]: 2026-01-31 08:08:17.960 221554 DEBUG oslo_concurrency.lockutils [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:18.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:18 np0005603609 nova_compute[221550]: 2026-01-31 08:08:18.199 221554 INFO nova.scheduler.client.report [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Deleted allocations for instance 2552b865-8435-46e6-823f-87ef00de854b#033[00m
Jan 31 03:08:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:18.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:18 np0005603609 nova_compute[221550]: 2026-01-31 08:08:18.539 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:18 np0005603609 nova_compute[221550]: 2026-01-31 08:08:18.737 221554 DEBUG oslo_concurrency.lockutils [None req-7fb28110-0ac1-4bc3-9158-08f1dc61349f d9ed446fb2cf4fc0a4e619c6c766fddc 1fcec9ca13964c7191134db4420ab049 - - default default] Lock "2552b865-8435-46e6-823f-87ef00de854b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e272 e272: 3 total, 3 up, 3 in
Jan 31 03:08:19 np0005603609 nova_compute[221550]: 2026-01-31 08:08:19.150 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e273 e273: 3 total, 3 up, 3 in
Jan 31 03:08:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:20.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:08:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:20.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:08:21 np0005603609 nova_compute[221550]: 2026-01-31 08:08:21.533 221554 INFO nova.virt.libvirt.driver [None req-aa9b8894-091a-41f2-b4d1-18402f58b6ca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Snapshot image upload complete#033[00m
Jan 31 03:08:21 np0005603609 nova_compute[221550]: 2026-01-31 08:08:21.534 221554 INFO nova.compute.manager [None req-aa9b8894-091a-41f2-b4d1-18402f58b6ca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Took 6.05 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:08:21 np0005603609 nova_compute[221550]: 2026-01-31 08:08:21.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:21 np0005603609 nova_compute[221550]: 2026-01-31 08:08:21.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:21 np0005603609 ovn_controller[130359]: 2026-01-31T08:08:21Z|00450|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 03:08:21 np0005603609 nova_compute[221550]: 2026-01-31 08:08:21.904 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:22.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:22 np0005603609 nova_compute[221550]: 2026-01-31 08:08:22.089 221554 DEBUG nova.compute.manager [None req-aa9b8894-091a-41f2-b4d1-18402f58b6ca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Jan 31 03:08:22 np0005603609 nova_compute[221550]: 2026-01-31 08:08:22.090 221554 DEBUG nova.compute.manager [None req-aa9b8894-091a-41f2-b4d1-18402f58b6ca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458#033[00m
Jan 31 03:08:22 np0005603609 nova_compute[221550]: 2026-01-31 08:08:22.090 221554 DEBUG nova.compute.manager [None req-aa9b8894-091a-41f2-b4d1-18402f58b6ca 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Deleting image 94599989-66dd-45c0-95bf-4799dd46478c _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463#033[00m
Jan 31 03:08:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:22.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e274 e274: 3 total, 3 up, 3 in
Jan 31 03:08:23 np0005603609 nova_compute[221550]: 2026-01-31 08:08:23.541 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:23 np0005603609 nova_compute[221550]: 2026-01-31 08:08:23.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 03:08:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:24.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 03:08:24 np0005603609 nova_compute[221550]: 2026-01-31 08:08:24.151 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:24.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e275 e275: 3 total, 3 up, 3 in
Jan 31 03:08:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:26.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:26.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:27 np0005603609 nova_compute[221550]: 2026-01-31 08:08:27.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:27 np0005603609 nova_compute[221550]: 2026-01-31 08:08:27.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:27 np0005603609 nova_compute[221550]: 2026-01-31 08:08:27.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:08:27 np0005603609 nova_compute[221550]: 2026-01-31 08:08:27.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:08:27 np0005603609 nova_compute[221550]: 2026-01-31 08:08:27.927 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:08:27 np0005603609 nova_compute[221550]: 2026-01-31 08:08:27.928 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:08:27 np0005603609 nova_compute[221550]: 2026-01-31 08:08:27.928 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:08:27 np0005603609 nova_compute[221550]: 2026-01-31 08:08:27.928 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:08:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:28.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:28.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:28 np0005603609 nova_compute[221550]: 2026-01-31 08:08:28.369 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846893.3687239, 2552b865-8435-46e6-823f-87ef00de854b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:08:28 np0005603609 nova_compute[221550]: 2026-01-31 08:08:28.370 221554 INFO nova.compute.manager [-] [instance: 2552b865-8435-46e6-823f-87ef00de854b] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:08:28 np0005603609 nova_compute[221550]: 2026-01-31 08:08:28.543 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:28 np0005603609 nova_compute[221550]: 2026-01-31 08:08:28.729 221554 DEBUG nova.compute.manager [None req-c883c874-dedf-4328-84f0-edbb88c63315 - - - - - -] [instance: 2552b865-8435-46e6-823f-87ef00de854b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:08:29 np0005603609 nova_compute[221550]: 2026-01-31 08:08:29.153 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e276 e276: 3 total, 3 up, 3 in
Jan 31 03:08:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e277 e277: 3 total, 3 up, 3 in
Jan 31 03:08:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:30.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:30.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:30 np0005603609 ovn_controller[130359]: 2026-01-31T08:08:30Z|00451|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 03:08:30 np0005603609 nova_compute[221550]: 2026-01-31 08:08:30.617 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:08:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:32.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:08:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:08:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:32.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.033 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Updating instance_info_cache with network_info: [{"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.162 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.163 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.164 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.164 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.165 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.165 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:33 np0005603609 podman[266858]: 2026-01-31 08:08:33.184719701 +0000 UTC m=+0.054532132 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Jan 31 03:08:33 np0005603609 podman[266857]: 2026-01-31 08:08:33.228949053 +0000 UTC m=+0.104527693 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.294 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.294 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.294 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.294 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.295 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.545 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:08:33 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1859182276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.737 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.835 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.836 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.962 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.963 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4263MB free_disk=20.921798706054688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.963 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:33 np0005603609 nova_compute[221550]: 2026-01-31 08:08:33.964 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:34.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:34 np0005603609 nova_compute[221550]: 2026-01-31 08:08:34.053 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:08:34 np0005603609 nova_compute[221550]: 2026-01-31 08:08:34.054 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:08:34 np0005603609 nova_compute[221550]: 2026-01-31 08:08:34.054 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:08:34 np0005603609 nova_compute[221550]: 2026-01-31 08:08:34.102 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:08:34 np0005603609 nova_compute[221550]: 2026-01-31 08:08:34.156 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:08:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:34.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:08:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:08:34 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2428103131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:08:34 np0005603609 nova_compute[221550]: 2026-01-31 08:08:34.494 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:08:34 np0005603609 nova_compute[221550]: 2026-01-31 08:08:34.499 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:08:34 np0005603609 nova_compute[221550]: 2026-01-31 08:08:34.524 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:08:34 np0005603609 nova_compute[221550]: 2026-01-31 08:08:34.544 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:08:34 np0005603609 nova_compute[221550]: 2026-01-31 08:08:34.545 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e278 e278: 3 total, 3 up, 3 in
Jan 31 03:08:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:36.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:36.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:36 np0005603609 nova_compute[221550]: 2026-01-31 08:08:36.449 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:36.449 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:08:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:36.451 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:08:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:08:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:38.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:08:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:38.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:38 np0005603609 nova_compute[221550]: 2026-01-31 08:08:38.549 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:39 np0005603609 nova_compute[221550]: 2026-01-31 08:08:39.157 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:08:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:40.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:08:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e279 e279: 3 total, 3 up, 3 in
Jan 31 03:08:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:08:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:40.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:08:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:08:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:42.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:08:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:42.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:43 np0005603609 nova_compute[221550]: 2026-01-31 08:08:43.366 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:43 np0005603609 nova_compute[221550]: 2026-01-31 08:08:43.551 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:44.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:44 np0005603609 nova_compute[221550]: 2026-01-31 08:08:44.160 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:08:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:44.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:08:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:08:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/537857212' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:08:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:08:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/537857212' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:08:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:46.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:08:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:46.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:08:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:08:46.454 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:47 np0005603609 nova_compute[221550]: 2026-01-31 08:08:47.541 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:08:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:48.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:08:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:48.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:48 np0005603609 nova_compute[221550]: 2026-01-31 08:08:48.554 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:49 np0005603609 nova_compute[221550]: 2026-01-31 08:08:49.198 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:49 np0005603609 nova_compute[221550]: 2026-01-31 08:08:49.460 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:50.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:50.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:52.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:52.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:53 np0005603609 nova_compute[221550]: 2026-01-31 08:08:53.137 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:53 np0005603609 nova_compute[221550]: 2026-01-31 08:08:53.556 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:54.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:54 np0005603609 nova_compute[221550]: 2026-01-31 08:08:54.200 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:08:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:54.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:08:54 np0005603609 ovn_controller[130359]: 2026-01-31T08:08:54Z|00452|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 03:08:54 np0005603609 nova_compute[221550]: 2026-01-31 08:08:54.631 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:56.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:08:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:56.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:08:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:58.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:58 np0005603609 nova_compute[221550]: 2026-01-31 08:08:58.234 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:08:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:08:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:08:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:58.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:58 np0005603609 nova_compute[221550]: 2026-01-31 08:08:58.558 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:59 np0005603609 nova_compute[221550]: 2026-01-31 08:08:59.202 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:00.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:09:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:00.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:09:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:09:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:02.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:09:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:02.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:09:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:09:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:09:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:09:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:09:03 np0005603609 nova_compute[221550]: 2026-01-31 08:09:03.560 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:04.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:04 np0005603609 podman[267082]: 2026-01-31 08:09:04.158077345 +0000 UTC m=+0.046243382 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 03:09:04 np0005603609 nova_compute[221550]: 2026-01-31 08:09:04.203 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:04 np0005603609 podman[267081]: 2026-01-31 08:09:04.216412208 +0000 UTC m=+0.104191646 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Jan 31 03:09:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:04.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:04 np0005603609 nova_compute[221550]: 2026-01-31 08:09:04.641 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Acquiring lock "95d81c30-cf89-4075-8b93-6ed225df1262" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:04 np0005603609 nova_compute[221550]: 2026-01-31 08:09:04.642 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lock "95d81c30-cf89-4075-8b93-6ed225df1262" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:04 np0005603609 nova_compute[221550]: 2026-01-31 08:09:04.709 221554 DEBUG nova.compute.manager [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:09:04 np0005603609 nova_compute[221550]: 2026-01-31 08:09:04.830 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:04 np0005603609 nova_compute[221550]: 2026-01-31 08:09:04.831 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:04 np0005603609 nova_compute[221550]: 2026-01-31 08:09:04.841 221554 DEBUG nova.virt.hardware [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:09:04 np0005603609 nova_compute[221550]: 2026-01-31 08:09:04.841 221554 INFO nova.compute.claims [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.008 221554 DEBUG oslo_concurrency.processutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:09:05 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2926926082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.413 221554 DEBUG oslo_concurrency.processutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.421 221554 DEBUG nova.compute.provider_tree [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.561 221554 DEBUG nova.scheduler.client.report [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:09:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.649 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.651 221554 DEBUG nova.compute.manager [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.706 221554 DEBUG nova.compute.manager [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.707 221554 DEBUG nova.network.neutron [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.734 221554 INFO nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.749 221554 DEBUG nova.compute.manager [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.901 221554 DEBUG nova.compute.manager [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.903 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.903 221554 INFO nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Creating image(s)#033[00m
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.933 221554 DEBUG nova.storage.rbd_utils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] rbd image 95d81c30-cf89-4075-8b93-6ed225df1262_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.962 221554 DEBUG nova.storage.rbd_utils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] rbd image 95d81c30-cf89-4075-8b93-6ed225df1262_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.991 221554 DEBUG nova.storage.rbd_utils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] rbd image 95d81c30-cf89-4075-8b93-6ed225df1262_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:05 np0005603609 nova_compute[221550]: 2026-01-31 08:09:05.995 221554 DEBUG oslo_concurrency.processutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:06 np0005603609 nova_compute[221550]: 2026-01-31 08:09:06.016 221554 DEBUG nova.policy [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '221dc0e49994443e8b86a6a9949ee1aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0e5e834897a04940b210da74cc674879', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:09:06 np0005603609 nova_compute[221550]: 2026-01-31 08:09:06.069 221554 DEBUG oslo_concurrency.processutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:06 np0005603609 nova_compute[221550]: 2026-01-31 08:09:06.070 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:06 np0005603609 nova_compute[221550]: 2026-01-31 08:09:06.070 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:06 np0005603609 nova_compute[221550]: 2026-01-31 08:09:06.071 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:06 np0005603609 nova_compute[221550]: 2026-01-31 08:09:06.092 221554 DEBUG nova.storage.rbd_utils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] rbd image 95d81c30-cf89-4075-8b93-6ed225df1262_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:06.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:06 np0005603609 nova_compute[221550]: 2026-01-31 08:09:06.096 221554 DEBUG oslo_concurrency.processutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 95d81c30-cf89-4075-8b93-6ed225df1262_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:06.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:06 np0005603609 nova_compute[221550]: 2026-01-31 08:09:06.873 221554 DEBUG nova.network.neutron [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Successfully created port: 798bc67d-757c-4e6e-9c3c-9ef096c1e455 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:09:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:07.502 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:07.503 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:07.504 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:08 np0005603609 nova_compute[221550]: 2026-01-31 08:09:08.023 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:08.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:08 np0005603609 nova_compute[221550]: 2026-01-31 08:09:08.125 221554 DEBUG nova.network.neutron [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Successfully updated port: 798bc67d-757c-4e6e-9c3c-9ef096c1e455 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:09:08 np0005603609 nova_compute[221550]: 2026-01-31 08:09:08.153 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Acquiring lock "refresh_cache-95d81c30-cf89-4075-8b93-6ed225df1262" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:09:08 np0005603609 nova_compute[221550]: 2026-01-31 08:09:08.153 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Acquired lock "refresh_cache-95d81c30-cf89-4075-8b93-6ed225df1262" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:09:08 np0005603609 nova_compute[221550]: 2026-01-31 08:09:08.153 221554 DEBUG nova.network.neutron [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:09:08 np0005603609 nova_compute[221550]: 2026-01-31 08:09:08.241 221554 DEBUG nova.compute.manager [req-fff9f6aa-5f08-4a8d-bd93-7a07cc727abe req-dd3a65a8-dd53-4984-9ae8-b1c14745f4b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Received event network-changed-798bc67d-757c-4e6e-9c3c-9ef096c1e455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:08 np0005603609 nova_compute[221550]: 2026-01-31 08:09:08.241 221554 DEBUG nova.compute.manager [req-fff9f6aa-5f08-4a8d-bd93-7a07cc727abe req-dd3a65a8-dd53-4984-9ae8-b1c14745f4b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Refreshing instance network info cache due to event network-changed-798bc67d-757c-4e6e-9c3c-9ef096c1e455. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:09:08 np0005603609 nova_compute[221550]: 2026-01-31 08:09:08.242 221554 DEBUG oslo_concurrency.lockutils [req-fff9f6aa-5f08-4a8d-bd93-7a07cc727abe req-dd3a65a8-dd53-4984-9ae8-b1c14745f4b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-95d81c30-cf89-4075-8b93-6ed225df1262" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:09:08 np0005603609 nova_compute[221550]: 2026-01-31 08:09:08.339 221554 DEBUG nova.network.neutron [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:09:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:09:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:08.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:09:08 np0005603609 nova_compute[221550]: 2026-01-31 08:09:08.563 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:08 np0005603609 nova_compute[221550]: 2026-01-31 08:09:08.809 221554 DEBUG oslo_concurrency.processutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 95d81c30-cf89-4075-8b93-6ed225df1262_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.713s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:08 np0005603609 nova_compute[221550]: 2026-01-31 08:09:08.873 221554 DEBUG nova.storage.rbd_utils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] resizing rbd image 95d81c30-cf89-4075-8b93-6ed225df1262_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:09:09 np0005603609 nova_compute[221550]: 2026-01-31 08:09:09.205 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:09 np0005603609 nova_compute[221550]: 2026-01-31 08:09:09.651 221554 DEBUG nova.objects.instance [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lazy-loading 'migration_context' on Instance uuid 95d81c30-cf89-4075-8b93-6ed225df1262 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:09 np0005603609 nova_compute[221550]: 2026-01-31 08:09:09.948 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:09:09 np0005603609 nova_compute[221550]: 2026-01-31 08:09:09.949 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Ensure instance console log exists: /var/lib/nova/instances/95d81c30-cf89-4075-8b93-6ed225df1262/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:09:09 np0005603609 nova_compute[221550]: 2026-01-31 08:09:09.950 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:09 np0005603609 nova_compute[221550]: 2026-01-31 08:09:09.950 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:09 np0005603609 nova_compute[221550]: 2026-01-31 08:09:09.951 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:09:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:10.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.237 221554 DEBUG nova.network.neutron [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Updating instance_info_cache with network_info: [{"id": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "address": "fa:16:3e:e1:45:80", "network": {"id": "a910f7eb-b133-465c-9a0c-9d67755a2227", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1097130539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e5e834897a04940b210da74cc674879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798bc67d-75", "ovs_interfaceid": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.277 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Releasing lock "refresh_cache-95d81c30-cf89-4075-8b93-6ed225df1262" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.278 221554 DEBUG nova.compute.manager [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Instance network_info: |[{"id": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "address": "fa:16:3e:e1:45:80", "network": {"id": "a910f7eb-b133-465c-9a0c-9d67755a2227", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1097130539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e5e834897a04940b210da74cc674879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798bc67d-75", "ovs_interfaceid": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.278 221554 DEBUG oslo_concurrency.lockutils [req-fff9f6aa-5f08-4a8d-bd93-7a07cc727abe req-dd3a65a8-dd53-4984-9ae8-b1c14745f4b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-95d81c30-cf89-4075-8b93-6ed225df1262" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.278 221554 DEBUG nova.network.neutron [req-fff9f6aa-5f08-4a8d-bd93-7a07cc727abe req-dd3a65a8-dd53-4984-9ae8-b1c14745f4b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Refreshing network info cache for port 798bc67d-757c-4e6e-9c3c-9ef096c1e455 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.280 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Start _get_guest_xml network_info=[{"id": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "address": "fa:16:3e:e1:45:80", "network": {"id": "a910f7eb-b133-465c-9a0c-9d67755a2227", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1097130539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e5e834897a04940b210da74cc674879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798bc67d-75", "ovs_interfaceid": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.285 221554 WARNING nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.292 221554 DEBUG nova.virt.libvirt.host [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.293 221554 DEBUG nova.virt.libvirt.host [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.297 221554 DEBUG nova.virt.libvirt.host [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.297 221554 DEBUG nova.virt.libvirt.host [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.298 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.299 221554 DEBUG nova.virt.hardware [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.299 221554 DEBUG nova.virt.hardware [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.299 221554 DEBUG nova.virt.hardware [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.300 221554 DEBUG nova.virt.hardware [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.300 221554 DEBUG nova.virt.hardware [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.300 221554 DEBUG nova.virt.hardware [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.300 221554 DEBUG nova.virt.hardware [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.301 221554 DEBUG nova.virt.hardware [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.301 221554 DEBUG nova.virt.hardware [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.301 221554 DEBUG nova.virt.hardware [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.301 221554 DEBUG nova.virt.hardware [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.304 221554 DEBUG oslo_concurrency.processutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:10.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:09:10 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/686841026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.878 221554 DEBUG oslo_concurrency.processutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.906 221554 DEBUG nova.storage.rbd_utils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] rbd image 95d81c30-cf89-4075-8b93-6ed225df1262_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:10 np0005603609 nova_compute[221550]: 2026-01-31 08:09:10.910 221554 DEBUG oslo_concurrency.processutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:09:11 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1666735505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.366 221554 DEBUG oslo_concurrency.processutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.368 221554 DEBUG nova.virt.libvirt.vif [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:09:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-587846133',display_name='tempest-ServerAddressesTestJSON-server-587846133',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-587846133',id=116,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0e5e834897a04940b210da74cc674879',ramdisk_id='',reservation_id='r-p57tpwsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1804900960',owner_user_name='tempest-ServerAddressesTestJSON-1804900960-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:09:05Z,user_data=None,user_id='221dc0e49994443e8b86a6a9949ee1aa',uuid=95d81c30-cf89-4075-8b93-6ed225df1262,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "address": "fa:16:3e:e1:45:80", "network": {"id": "a910f7eb-b133-465c-9a0c-9d67755a2227", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1097130539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e5e834897a04940b210da74cc674879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798bc67d-75", "ovs_interfaceid": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.368 221554 DEBUG nova.network.os_vif_util [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Converting VIF {"id": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "address": "fa:16:3e:e1:45:80", "network": {"id": "a910f7eb-b133-465c-9a0c-9d67755a2227", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1097130539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e5e834897a04940b210da74cc674879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798bc67d-75", "ovs_interfaceid": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.369 221554 DEBUG nova.network.os_vif_util [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:45:80,bridge_name='br-int',has_traffic_filtering=True,id=798bc67d-757c-4e6e-9c3c-9ef096c1e455,network=Network(a910f7eb-b133-465c-9a0c-9d67755a2227),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798bc67d-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.370 221554 DEBUG nova.objects.instance [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lazy-loading 'pci_devices' on Instance uuid 95d81c30-cf89-4075-8b93-6ed225df1262 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.426 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  <uuid>95d81c30-cf89-4075-8b93-6ed225df1262</uuid>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  <name>instance-00000074</name>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerAddressesTestJSON-server-587846133</nova:name>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:09:10</nova:creationTime>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        <nova:user uuid="221dc0e49994443e8b86a6a9949ee1aa">tempest-ServerAddressesTestJSON-1804900960-project-member</nova:user>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        <nova:project uuid="0e5e834897a04940b210da74cc674879">tempest-ServerAddressesTestJSON-1804900960</nova:project>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        <nova:port uuid="798bc67d-757c-4e6e-9c3c-9ef096c1e455">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <entry name="serial">95d81c30-cf89-4075-8b93-6ed225df1262</entry>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <entry name="uuid">95d81c30-cf89-4075-8b93-6ed225df1262</entry>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/95d81c30-cf89-4075-8b93-6ed225df1262_disk">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/95d81c30-cf89-4075-8b93-6ed225df1262_disk.config">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:e1:45:80"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <target dev="tap798bc67d-75"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/95d81c30-cf89-4075-8b93-6ed225df1262/console.log" append="off"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:09:11 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:09:11 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:09:11 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:09:11 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.427 221554 DEBUG nova.compute.manager [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Preparing to wait for external event network-vif-plugged-798bc67d-757c-4e6e-9c3c-9ef096c1e455 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.428 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Acquiring lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.428 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.428 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.429 221554 DEBUG nova.virt.libvirt.vif [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:09:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-587846133',display_name='tempest-ServerAddressesTestJSON-server-587846133',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-587846133',id=116,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0e5e834897a04940b210da74cc674879',ramdisk_id='',reservation_id='r-p57tpwsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1804900960',owner_user_name='tempest-ServerAddressesTestJSON-1804900960-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:09:05Z,user_data=None,user_id='221dc0e49994443e8b86a6a9949ee1aa',uuid=95d81c30-cf89-4075-8b93-6ed225df1262,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "address": "fa:16:3e:e1:45:80", "network": {"id": "a910f7eb-b133-465c-9a0c-9d67755a2227", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1097130539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e5e834897a04940b210da74cc674879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798bc67d-75", "ovs_interfaceid": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.429 221554 DEBUG nova.network.os_vif_util [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Converting VIF {"id": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "address": "fa:16:3e:e1:45:80", "network": {"id": "a910f7eb-b133-465c-9a0c-9d67755a2227", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1097130539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e5e834897a04940b210da74cc674879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798bc67d-75", "ovs_interfaceid": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.430 221554 DEBUG nova.network.os_vif_util [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:45:80,bridge_name='br-int',has_traffic_filtering=True,id=798bc67d-757c-4e6e-9c3c-9ef096c1e455,network=Network(a910f7eb-b133-465c-9a0c-9d67755a2227),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798bc67d-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.430 221554 DEBUG os_vif [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:45:80,bridge_name='br-int',has_traffic_filtering=True,id=798bc67d-757c-4e6e-9c3c-9ef096c1e455,network=Network(a910f7eb-b133-465c-9a0c-9d67755a2227),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798bc67d-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.431 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.431 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.432 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.435 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.435 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap798bc67d-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.436 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap798bc67d-75, col_values=(('external_ids', {'iface-id': '798bc67d-757c-4e6e-9c3c-9ef096c1e455', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:45:80', 'vm-uuid': '95d81c30-cf89-4075-8b93-6ed225df1262'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:11 np0005603609 NetworkManager[49064]: <info>  [1769846951.4397] manager: (tap798bc67d-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.440 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.444 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.445 221554 INFO os_vif [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:45:80,bridge_name='br-int',has_traffic_filtering=True,id=798bc67d-757c-4e6e-9c3c-9ef096c1e455,network=Network(a910f7eb-b133-465c-9a0c-9d67755a2227),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798bc67d-75')#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.528 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.529 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.529 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] No VIF found with MAC fa:16:3e:e1:45:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.529 221554 INFO nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Using config drive#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.552 221554 DEBUG nova.storage.rbd_utils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] rbd image 95d81c30-cf89-4075-8b93-6ed225df1262_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.920 221554 DEBUG nova.network.neutron [req-fff9f6aa-5f08-4a8d-bd93-7a07cc727abe req-dd3a65a8-dd53-4984-9ae8-b1c14745f4b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Updated VIF entry in instance network info cache for port 798bc67d-757c-4e6e-9c3c-9ef096c1e455. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.920 221554 DEBUG nova.network.neutron [req-fff9f6aa-5f08-4a8d-bd93-7a07cc727abe req-dd3a65a8-dd53-4984-9ae8-b1c14745f4b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Updating instance_info_cache with network_info: [{"id": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "address": "fa:16:3e:e1:45:80", "network": {"id": "a910f7eb-b133-465c-9a0c-9d67755a2227", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1097130539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e5e834897a04940b210da74cc674879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798bc67d-75", "ovs_interfaceid": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:09:11 np0005603609 nova_compute[221550]: 2026-01-31 08:09:11.963 221554 DEBUG oslo_concurrency.lockutils [req-fff9f6aa-5f08-4a8d-bd93-7a07cc727abe req-dd3a65a8-dd53-4984-9ae8-b1c14745f4b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-95d81c30-cf89-4075-8b93-6ed225df1262" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:09:12 np0005603609 nova_compute[221550]: 2026-01-31 08:09:12.032 221554 INFO nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Creating config drive at /var/lib/nova/instances/95d81c30-cf89-4075-8b93-6ed225df1262/disk.config#033[00m
Jan 31 03:09:12 np0005603609 nova_compute[221550]: 2026-01-31 08:09:12.035 221554 DEBUG oslo_concurrency.processutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/95d81c30-cf89-4075-8b93-6ed225df1262/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9bkb4dw0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:12.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:12 np0005603609 nova_compute[221550]: 2026-01-31 08:09:12.156 221554 DEBUG oslo_concurrency.processutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/95d81c30-cf89-4075-8b93-6ed225df1262/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9bkb4dw0" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:12 np0005603609 nova_compute[221550]: 2026-01-31 08:09:12.184 221554 DEBUG nova.storage.rbd_utils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] rbd image 95d81c30-cf89-4075-8b93-6ed225df1262_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:12 np0005603609 nova_compute[221550]: 2026-01-31 08:09:12.189 221554 DEBUG oslo_concurrency.processutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/95d81c30-cf89-4075-8b93-6ed225df1262/disk.config 95d81c30-cf89-4075-8b93-6ed225df1262_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:12 np0005603609 nova_compute[221550]: 2026-01-31 08:09:12.364 221554 DEBUG oslo_concurrency.processutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/95d81c30-cf89-4075-8b93-6ed225df1262/disk.config 95d81c30-cf89-4075-8b93-6ed225df1262_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:12 np0005603609 nova_compute[221550]: 2026-01-31 08:09:12.365 221554 INFO nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Deleting local config drive /var/lib/nova/instances/95d81c30-cf89-4075-8b93-6ed225df1262/disk.config because it was imported into RBD.#033[00m
Jan 31 03:09:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:12.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:12 np0005603609 kernel: tap798bc67d-75: entered promiscuous mode
Jan 31 03:09:12 np0005603609 NetworkManager[49064]: <info>  [1769846952.4125] manager: (tap798bc67d-75): new Tun device (/org/freedesktop/NetworkManager/Devices/223)
Jan 31 03:09:12 np0005603609 nova_compute[221550]: 2026-01-31 08:09:12.412 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:12 np0005603609 ovn_controller[130359]: 2026-01-31T08:09:12Z|00453|binding|INFO|Claiming lport 798bc67d-757c-4e6e-9c3c-9ef096c1e455 for this chassis.
Jan 31 03:09:12 np0005603609 ovn_controller[130359]: 2026-01-31T08:09:12Z|00454|binding|INFO|798bc67d-757c-4e6e-9c3c-9ef096c1e455: Claiming fa:16:3e:e1:45:80 10.100.0.8
Jan 31 03:09:12 np0005603609 nova_compute[221550]: 2026-01-31 08:09:12.419 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:12 np0005603609 ovn_controller[130359]: 2026-01-31T08:09:12Z|00455|binding|INFO|Setting lport 798bc67d-757c-4e6e-9c3c-9ef096c1e455 ovn-installed in OVS
Jan 31 03:09:12 np0005603609 ovn_controller[130359]: 2026-01-31T08:09:12Z|00456|binding|INFO|Setting lport 798bc67d-757c-4e6e-9c3c-9ef096c1e455 up in Southbound
Jan 31 03:09:12 np0005603609 nova_compute[221550]: 2026-01-31 08:09:12.422 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.422 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:45:80 10.100.0.8'], port_security=['fa:16:3e:e1:45:80 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '95d81c30-cf89-4075-8b93-6ed225df1262', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a910f7eb-b133-465c-9a0c-9d67755a2227', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e5e834897a04940b210da74cc674879', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e0905a9-311e-46be-b18e-7ae968f0e213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e815344-d445-4fe3-890c-a16ce0848be3, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=798bc67d-757c-4e6e-9c3c-9ef096c1e455) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.424 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 798bc67d-757c-4e6e-9c3c-9ef096c1e455 in datapath a910f7eb-b133-465c-9a0c-9d67755a2227 bound to our chassis#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.426 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a910f7eb-b133-465c-9a0c-9d67755a2227#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.434 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[44968233-8960-4ee2-90bb-e70940fa1d89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.436 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa910f7eb-b1 in ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.438 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa910f7eb-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.438 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cea16fc0-9a0a-42f7-b458-5e8ca7dc52bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.439 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1d885a93-9a66-43ef-abdc-ca533ee0ae16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:12 np0005603609 systemd-machined[190912]: New machine qemu-56-instance-00000074.
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.448 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff00ae5-2c4a-4fdf-8773-50edb70add61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:12 np0005603609 systemd[1]: Started Virtual Machine qemu-56-instance-00000074.
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.469 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9b219cf9-3257-452b-883f-afbc1f03b43f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:12 np0005603609 systemd-udevd[267451]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:09:12 np0005603609 NetworkManager[49064]: <info>  [1769846952.4854] device (tap798bc67d-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:09:12 np0005603609 NetworkManager[49064]: <info>  [1769846952.4862] device (tap798bc67d-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.489 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a439d689-bc78-470d-b282-9d411b3256fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.494 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[dc2463b3-323e-4c7e-9f17-39b1e948cbbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:12 np0005603609 systemd-udevd[267459]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:09:12 np0005603609 NetworkManager[49064]: <info>  [1769846952.4961] manager: (tapa910f7eb-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/224)
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.518 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[48f48f69-e7d0-445c-b1c1-64e97a0488f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.520 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[505221d3-d721-4a3d-8018-4e581ddefa69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:12 np0005603609 NetworkManager[49064]: <info>  [1769846952.5379] device (tapa910f7eb-b0): carrier: link connected
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.541 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6cf060-645c-422b-803d-1229236b0313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.554 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bbeadc34-2de9-493d-8ac4-40463879e6de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa910f7eb-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:b6:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719223, 'reachable_time': 18708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267481, 'error': None, 'target': 'ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.564 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb92772-5405-4760-b4c8-822ab9c26a76]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:b60e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719223, 'tstamp': 719223}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267482, 'error': None, 'target': 'ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.578 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[48f6802f-8f4a-432c-baa1-8d3acc3ed020]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa910f7eb-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:b6:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719223, 'reachable_time': 18708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267483, 'error': None, 'target': 'ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.597 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ce744971-e68e-4ad7-96fc-8548a5a25792]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.637 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5b54cf0a-5f8d-441b-b64d-fe66ded4999d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.639 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa910f7eb-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.639 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.639 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa910f7eb-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:12 np0005603609 kernel: tapa910f7eb-b0: entered promiscuous mode
Jan 31 03:09:12 np0005603609 nova_compute[221550]: 2026-01-31 08:09:12.641 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:12 np0005603609 NetworkManager[49064]: <info>  [1769846952.6415] manager: (tapa910f7eb-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.644 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa910f7eb-b0, col_values=(('external_ids', {'iface-id': '3a450b1e-b5b9-49b4-a7b7-ee598eb1f815'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:12 np0005603609 ovn_controller[130359]: 2026-01-31T08:09:12Z|00457|binding|INFO|Releasing lport 3a450b1e-b5b9-49b4-a7b7-ee598eb1f815 from this chassis (sb_readonly=0)
Jan 31 03:09:12 np0005603609 nova_compute[221550]: 2026-01-31 08:09:12.645 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:12 np0005603609 nova_compute[221550]: 2026-01-31 08:09:12.645 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.646 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a910f7eb-b133-465c-9a0c-9d67755a2227.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a910f7eb-b133-465c-9a0c-9d67755a2227.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.647 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce06a69-97e9-4493-95a8-0d40ecf96103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.647 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-a910f7eb-b133-465c-9a0c-9d67755a2227
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/a910f7eb-b133-465c-9a0c-9d67755a2227.pid.haproxy
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID a910f7eb-b133-465c-9a0c-9d67755a2227
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:09:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:12.648 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227', 'env', 'PROCESS_TAG=haproxy-a910f7eb-b133-465c-9a0c-9d67755a2227', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a910f7eb-b133-465c-9a0c-9d67755a2227.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:09:12 np0005603609 nova_compute[221550]: 2026-01-31 08:09:12.649 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:12 np0005603609 nova_compute[221550]: 2026-01-31 08:09:12.998 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846952.997258, 95d81c30-cf89-4075-8b93-6ed225df1262 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:09:12 np0005603609 nova_compute[221550]: 2026-01-31 08:09:12.999 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] VM Started (Lifecycle Event)#033[00m
Jan 31 03:09:13 np0005603609 nova_compute[221550]: 2026-01-31 08:09:13.034 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:13 np0005603609 nova_compute[221550]: 2026-01-31 08:09:13.041 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846952.9978578, 95d81c30-cf89-4075-8b93-6ed225df1262 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:09:13 np0005603609 nova_compute[221550]: 2026-01-31 08:09:13.041 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:09:13 np0005603609 podman[267553]: 2026-01-31 08:09:12.954682471 +0000 UTC m=+0.023202669 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:09:13 np0005603609 podman[267553]: 2026-01-31 08:09:13.152450783 +0000 UTC m=+0.220970891 container create 5ae92177315dab9e45e19b06e304cc488c13c386aa2596519b9b2f438a38bf28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 03:09:13 np0005603609 systemd[1]: Started libpod-conmon-5ae92177315dab9e45e19b06e304cc488c13c386aa2596519b9b2f438a38bf28.scope.
Jan 31 03:09:13 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:09:13 np0005603609 nova_compute[221550]: 2026-01-31 08:09:13.218 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:13 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cbbb01a6c45e6b49e7e14ea9165802c59d6bd6ecddc122513eb2d080b942f9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:13 np0005603609 nova_compute[221550]: 2026-01-31 08:09:13.222 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:09:13 np0005603609 podman[267553]: 2026-01-31 08:09:13.232358204 +0000 UTC m=+0.300878342 container init 5ae92177315dab9e45e19b06e304cc488c13c386aa2596519b9b2f438a38bf28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 03:09:13 np0005603609 podman[267553]: 2026-01-31 08:09:13.237447666 +0000 UTC m=+0.305967774 container start 5ae92177315dab9e45e19b06e304cc488c13c386aa2596519b9b2f438a38bf28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:09:13 np0005603609 nova_compute[221550]: 2026-01-31 08:09:13.252 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:09:13 np0005603609 neutron-haproxy-ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227[267572]: [NOTICE]   (267576) : New worker (267578) forked
Jan 31 03:09:13 np0005603609 neutron-haproxy-ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227[267572]: [NOTICE]   (267576) : Loading success.
Jan 31 03:09:13 np0005603609 nova_compute[221550]: 2026-01-31 08:09:13.326 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:09:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:14.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.207 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:14.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.471 221554 DEBUG nova.compute.manager [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Received event network-vif-plugged-798bc67d-757c-4e6e-9c3c-9ef096c1e455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.472 221554 DEBUG oslo_concurrency.lockutils [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.472 221554 DEBUG oslo_concurrency.lockutils [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.472 221554 DEBUG oslo_concurrency.lockutils [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.472 221554 DEBUG nova.compute.manager [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Processing event network-vif-plugged-798bc67d-757c-4e6e-9c3c-9ef096c1e455 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.473 221554 DEBUG nova.compute.manager [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Received event network-vif-plugged-798bc67d-757c-4e6e-9c3c-9ef096c1e455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.473 221554 DEBUG oslo_concurrency.lockutils [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.473 221554 DEBUG oslo_concurrency.lockutils [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.473 221554 DEBUG oslo_concurrency.lockutils [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.473 221554 DEBUG nova.compute.manager [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] No waiting events found dispatching network-vif-plugged-798bc67d-757c-4e6e-9c3c-9ef096c1e455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.474 221554 WARNING nova.compute.manager [req-9153d527-3837-481a-b299-7c818857dbdd req-f9d8e4c5-8fd4-4624-8f3d-6576ec4496b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Received unexpected event network-vif-plugged-798bc67d-757c-4e6e-9c3c-9ef096c1e455 for instance with vm_state building and task_state spawning.#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.474 221554 DEBUG nova.compute.manager [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.478 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769846954.4780912, 95d81c30-cf89-4075-8b93-6ed225df1262 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.478 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.480 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.482 221554 INFO nova.virt.libvirt.driver [-] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Instance spawned successfully.#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.483 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.535 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.539 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.540 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.541 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.541 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.541 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.542 221554 DEBUG nova.virt.libvirt.driver [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.547 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.624 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.678 221554 INFO nova.compute.manager [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Took 8.78 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.679 221554 DEBUG nova.compute.manager [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:09:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.755 221554 INFO nova.compute.manager [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Took 9.96 seconds to build instance.#033[00m
Jan 31 03:09:14 np0005603609 nova_compute[221550]: 2026-01-31 08:09:14.793 221554 DEBUG oslo_concurrency.lockutils [None req-d5ff1d6f-9448-4994-a89e-24b778b3f0b8 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lock "95d81c30-cf89-4075-8b93-6ed225df1262" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:15 np0005603609 nova_compute[221550]: 2026-01-31 08:09:15.040 221554 DEBUG nova.compute.manager [None req-fdfb03e1-5232-4b1b-bf10-5ca12df9d4ae 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196#033[00m
Jan 31 03:09:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:09:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:16.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:09:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:16.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:16 np0005603609 nova_compute[221550]: 2026-01-31 08:09:16.438 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:17 np0005603609 nova_compute[221550]: 2026-01-31 08:09:17.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:18.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:18.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:18 np0005603609 nova_compute[221550]: 2026-01-31 08:09:18.568 221554 DEBUG oslo_concurrency.lockutils [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Acquiring lock "95d81c30-cf89-4075-8b93-6ed225df1262" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:18 np0005603609 nova_compute[221550]: 2026-01-31 08:09:18.568 221554 DEBUG oslo_concurrency.lockutils [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lock "95d81c30-cf89-4075-8b93-6ed225df1262" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:18 np0005603609 nova_compute[221550]: 2026-01-31 08:09:18.568 221554 DEBUG oslo_concurrency.lockutils [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Acquiring lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:18 np0005603609 nova_compute[221550]: 2026-01-31 08:09:18.569 221554 DEBUG oslo_concurrency.lockutils [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:18 np0005603609 nova_compute[221550]: 2026-01-31 08:09:18.569 221554 DEBUG oslo_concurrency.lockutils [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:18 np0005603609 nova_compute[221550]: 2026-01-31 08:09:18.570 221554 INFO nova.compute.manager [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Terminating instance#033[00m
Jan 31 03:09:18 np0005603609 nova_compute[221550]: 2026-01-31 08:09:18.570 221554 DEBUG nova.compute.manager [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.209 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:19 np0005603609 kernel: tap798bc67d-75 (unregistering): left promiscuous mode
Jan 31 03:09:19 np0005603609 NetworkManager[49064]: <info>  [1769846959.4530] device (tap798bc67d-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.457 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:19 np0005603609 ovn_controller[130359]: 2026-01-31T08:09:19Z|00458|binding|INFO|Releasing lport 798bc67d-757c-4e6e-9c3c-9ef096c1e455 from this chassis (sb_readonly=0)
Jan 31 03:09:19 np0005603609 ovn_controller[130359]: 2026-01-31T08:09:19Z|00459|binding|INFO|Setting lport 798bc67d-757c-4e6e-9c3c-9ef096c1e455 down in Southbound
Jan 31 03:09:19 np0005603609 ovn_controller[130359]: 2026-01-31T08:09:19Z|00460|binding|INFO|Removing iface tap798bc67d-75 ovn-installed in OVS
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.460 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.468 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:19 np0005603609 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000074.scope: Deactivated successfully.
Jan 31 03:09:19 np0005603609 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000074.scope: Consumed 4.725s CPU time.
Jan 31 03:09:19 np0005603609 systemd-machined[190912]: Machine qemu-56-instance-00000074 terminated.
Jan 31 03:09:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:19.563 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:45:80 10.100.0.8'], port_security=['fa:16:3e:e1:45:80 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '95d81c30-cf89-4075-8b93-6ed225df1262', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a910f7eb-b133-465c-9a0c-9d67755a2227', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e5e834897a04940b210da74cc674879', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e0905a9-311e-46be-b18e-7ae968f0e213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e815344-d445-4fe3-890c-a16ce0848be3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=798bc67d-757c-4e6e-9c3c-9ef096c1e455) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:09:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:19.565 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 798bc67d-757c-4e6e-9c3c-9ef096c1e455 in datapath a910f7eb-b133-465c-9a0c-9d67755a2227 unbound from our chassis#033[00m
Jan 31 03:09:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:19.566 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a910f7eb-b133-465c-9a0c-9d67755a2227, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:09:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:19.567 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebc61e3-fc13-4123-b39f-1bc8682bd46a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:19.568 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227 namespace which is not needed anymore#033[00m
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.588 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.592 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.604 221554 INFO nova.virt.libvirt.driver [-] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Instance destroyed successfully.#033[00m
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.604 221554 DEBUG nova.objects.instance [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lazy-loading 'resources' on Instance uuid 95d81c30-cf89-4075-8b93-6ed225df1262 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.766 221554 DEBUG nova.virt.libvirt.vif [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:09:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-587846133',display_name='tempest-ServerAddressesTestJSON-server-587846133',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-587846133',id=116,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:09:14Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0e5e834897a04940b210da74cc674879',ramdisk_id='',reservation_id='r-p57tpwsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1804900960',owner_user_name='tempest-ServerAddressesTestJSON-1804900960-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:09:14Z,user_data=None,user_id='221dc0e49994443e8b86a6a9949ee1aa',uuid=95d81c30-cf89-4075-8b93-6ed225df1262,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "address": "fa:16:3e:e1:45:80", "network": {"id": "a910f7eb-b133-465c-9a0c-9d67755a2227", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1097130539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e5e834897a04940b210da74cc674879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798bc67d-75", "ovs_interfaceid": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.767 221554 DEBUG nova.network.os_vif_util [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Converting VIF {"id": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "address": "fa:16:3e:e1:45:80", "network": {"id": "a910f7eb-b133-465c-9a0c-9d67755a2227", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1097130539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e5e834897a04940b210da74cc674879", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798bc67d-75", "ovs_interfaceid": "798bc67d-757c-4e6e-9c3c-9ef096c1e455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.768 221554 DEBUG nova.network.os_vif_util [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:45:80,bridge_name='br-int',has_traffic_filtering=True,id=798bc67d-757c-4e6e-9c3c-9ef096c1e455,network=Network(a910f7eb-b133-465c-9a0c-9d67755a2227),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798bc67d-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.769 221554 DEBUG os_vif [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:45:80,bridge_name='br-int',has_traffic_filtering=True,id=798bc67d-757c-4e6e-9c3c-9ef096c1e455,network=Network(a910f7eb-b133-465c-9a0c-9d67755a2227),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798bc67d-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.770 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.771 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap798bc67d-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.772 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.773 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:19 np0005603609 nova_compute[221550]: 2026-01-31 08:09:19.776 221554 INFO os_vif [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:45:80,bridge_name='br-int',has_traffic_filtering=True,id=798bc67d-757c-4e6e-9c3c-9ef096c1e455,network=Network(a910f7eb-b133-465c-9a0c-9d67755a2227),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798bc67d-75')#033[00m
Jan 31 03:09:19 np0005603609 neutron-haproxy-ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227[267572]: [NOTICE]   (267576) : haproxy version is 2.8.14-c23fe91
Jan 31 03:09:19 np0005603609 neutron-haproxy-ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227[267572]: [NOTICE]   (267576) : path to executable is /usr/sbin/haproxy
Jan 31 03:09:19 np0005603609 neutron-haproxy-ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227[267572]: [WARNING]  (267576) : Exiting Master process...
Jan 31 03:09:19 np0005603609 neutron-haproxy-ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227[267572]: [WARNING]  (267576) : Exiting Master process...
Jan 31 03:09:19 np0005603609 neutron-haproxy-ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227[267572]: [ALERT]    (267576) : Current worker (267578) exited with code 143 (Terminated)
Jan 31 03:09:19 np0005603609 neutron-haproxy-ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227[267572]: [WARNING]  (267576) : All workers exited. Exiting... (0)
Jan 31 03:09:19 np0005603609 systemd[1]: libpod-5ae92177315dab9e45e19b06e304cc488c13c386aa2596519b9b2f438a38bf28.scope: Deactivated successfully.
Jan 31 03:09:19 np0005603609 podman[267670]: 2026-01-31 08:09:19.960489558 +0000 UTC m=+0.315351429 container died 5ae92177315dab9e45e19b06e304cc488c13c386aa2596519b9b2f438a38bf28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:09:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:20.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:20 np0005603609 nova_compute[221550]: 2026-01-31 08:09:20.148 221554 DEBUG oslo_concurrency.lockutils [None req-54795ac5-718c-4d6d-8cf8-caed453ac0dc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:20 np0005603609 nova_compute[221550]: 2026-01-31 08:09:20.149 221554 DEBUG oslo_concurrency.lockutils [None req-54795ac5-718c-4d6d-8cf8-caed453ac0dc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:20 np0005603609 nova_compute[221550]: 2026-01-31 08:09:20.149 221554 DEBUG nova.compute.manager [None req-54795ac5-718c-4d6d-8cf8-caed453ac0dc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:20 np0005603609 nova_compute[221550]: 2026-01-31 08:09:20.155 221554 DEBUG nova.compute.manager [None req-54795ac5-718c-4d6d-8cf8-caed453ac0dc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 31 03:09:20 np0005603609 nova_compute[221550]: 2026-01-31 08:09:20.156 221554 DEBUG nova.objects.instance [None req-54795ac5-718c-4d6d-8cf8-caed453ac0dc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'flavor' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:20 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ae92177315dab9e45e19b06e304cc488c13c386aa2596519b9b2f438a38bf28-userdata-shm.mount: Deactivated successfully.
Jan 31 03:09:20 np0005603609 systemd[1]: var-lib-containers-storage-overlay-89cbbb01a6c45e6b49e7e14ea9165802c59d6bd6ecddc122513eb2d080b942f9-merged.mount: Deactivated successfully.
Jan 31 03:09:20 np0005603609 nova_compute[221550]: 2026-01-31 08:09:20.249 221554 DEBUG nova.virt.libvirt.driver [None req-54795ac5-718c-4d6d-8cf8-caed453ac0dc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:09:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:20.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:20 np0005603609 podman[267670]: 2026-01-31 08:09:20.397196304 +0000 UTC m=+0.752058105 container cleanup 5ae92177315dab9e45e19b06e304cc488c13c386aa2596519b9b2f438a38bf28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:09:20 np0005603609 systemd[1]: libpod-conmon-5ae92177315dab9e45e19b06e304cc488c13c386aa2596519b9b2f438a38bf28.scope: Deactivated successfully.
Jan 31 03:09:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:20 np0005603609 podman[267719]: 2026-01-31 08:09:20.913914212 +0000 UTC m=+0.495624163 container remove 5ae92177315dab9e45e19b06e304cc488c13c386aa2596519b9b2f438a38bf28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 03:09:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:20.919 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a8595f52-b253-465f-976b-abeec83779bb]: (4, ('Sat Jan 31 08:09:19 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227 (5ae92177315dab9e45e19b06e304cc488c13c386aa2596519b9b2f438a38bf28)\n5ae92177315dab9e45e19b06e304cc488c13c386aa2596519b9b2f438a38bf28\nSat Jan 31 08:09:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227 (5ae92177315dab9e45e19b06e304cc488c13c386aa2596519b9b2f438a38bf28)\n5ae92177315dab9e45e19b06e304cc488c13c386aa2596519b9b2f438a38bf28\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:20.920 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[501f01c1-df20-49d8-bf0c-9f174e62b047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:20.921 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa910f7eb-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:20 np0005603609 nova_compute[221550]: 2026-01-31 08:09:20.923 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:20 np0005603609 kernel: tapa910f7eb-b0: left promiscuous mode
Jan 31 03:09:20 np0005603609 nova_compute[221550]: 2026-01-31 08:09:20.930 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:20.932 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2af8a4-eef4-4eee-b657-c99d2a5c3f20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:20.947 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[21f2b00b-6d26-46fd-9f5c-4a3834da534f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:20.948 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2839cea5-9bcf-47a0-aa1b-9893abd9ff0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:20.960 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9d96d4c2-8953-4fd9-a71c-a4ab491f4aae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719218, 'reachable_time': 30526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267735, 'error': None, 'target': 'ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:20.962 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a910f7eb-b133-465c-9a0c-9d67755a2227 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:09:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:20.962 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[69c047d5-5ade-41ec-8fb8-36e6f71a73f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:20 np0005603609 systemd[1]: run-netns-ovnmeta\x2da910f7eb\x2db133\x2d465c\x2d9a0c\x2d9d67755a2227.mount: Deactivated successfully.
Jan 31 03:09:21 np0005603609 nova_compute[221550]: 2026-01-31 08:09:21.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:21 np0005603609 nova_compute[221550]: 2026-01-31 08:09:21.859 221554 DEBUG nova.compute.manager [req-1a7fdc19-1503-4cfa-8aba-e705d891f922 req-db503193-16fe-4998-9f99-bccb15f2003d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Received event network-vif-unplugged-798bc67d-757c-4e6e-9c3c-9ef096c1e455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:21 np0005603609 nova_compute[221550]: 2026-01-31 08:09:21.860 221554 DEBUG oslo_concurrency.lockutils [req-1a7fdc19-1503-4cfa-8aba-e705d891f922 req-db503193-16fe-4998-9f99-bccb15f2003d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:21 np0005603609 nova_compute[221550]: 2026-01-31 08:09:21.860 221554 DEBUG oslo_concurrency.lockutils [req-1a7fdc19-1503-4cfa-8aba-e705d891f922 req-db503193-16fe-4998-9f99-bccb15f2003d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:21 np0005603609 nova_compute[221550]: 2026-01-31 08:09:21.861 221554 DEBUG oslo_concurrency.lockutils [req-1a7fdc19-1503-4cfa-8aba-e705d891f922 req-db503193-16fe-4998-9f99-bccb15f2003d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:21 np0005603609 nova_compute[221550]: 2026-01-31 08:09:21.861 221554 DEBUG nova.compute.manager [req-1a7fdc19-1503-4cfa-8aba-e705d891f922 req-db503193-16fe-4998-9f99-bccb15f2003d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] No waiting events found dispatching network-vif-unplugged-798bc67d-757c-4e6e-9c3c-9ef096c1e455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:09:21 np0005603609 nova_compute[221550]: 2026-01-31 08:09:21.861 221554 DEBUG nova.compute.manager [req-1a7fdc19-1503-4cfa-8aba-e705d891f922 req-db503193-16fe-4998-9f99-bccb15f2003d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Received event network-vif-unplugged-798bc67d-757c-4e6e-9c3c-9ef096c1e455 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:09:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:22.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:22.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:23 np0005603609 nova_compute[221550]: 2026-01-31 08:09:23.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:09:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:24.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:09:24 np0005603609 nova_compute[221550]: 2026-01-31 08:09:24.154 221554 DEBUG nova.compute.manager [req-bf0eb756-df1a-4b4f-bee3-c5ab69d9e56d req-fc10ca4f-b466-4eca-9db1-b09aa4206a15 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Received event network-vif-plugged-798bc67d-757c-4e6e-9c3c-9ef096c1e455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:24 np0005603609 nova_compute[221550]: 2026-01-31 08:09:24.155 221554 DEBUG oslo_concurrency.lockutils [req-bf0eb756-df1a-4b4f-bee3-c5ab69d9e56d req-fc10ca4f-b466-4eca-9db1-b09aa4206a15 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:24 np0005603609 nova_compute[221550]: 2026-01-31 08:09:24.155 221554 DEBUG oslo_concurrency.lockutils [req-bf0eb756-df1a-4b4f-bee3-c5ab69d9e56d req-fc10ca4f-b466-4eca-9db1-b09aa4206a15 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:24 np0005603609 nova_compute[221550]: 2026-01-31 08:09:24.156 221554 DEBUG oslo_concurrency.lockutils [req-bf0eb756-df1a-4b4f-bee3-c5ab69d9e56d req-fc10ca4f-b466-4eca-9db1-b09aa4206a15 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "95d81c30-cf89-4075-8b93-6ed225df1262-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:24 np0005603609 nova_compute[221550]: 2026-01-31 08:09:24.156 221554 DEBUG nova.compute.manager [req-bf0eb756-df1a-4b4f-bee3-c5ab69d9e56d req-fc10ca4f-b466-4eca-9db1-b09aa4206a15 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] No waiting events found dispatching network-vif-plugged-798bc67d-757c-4e6e-9c3c-9ef096c1e455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:09:24 np0005603609 nova_compute[221550]: 2026-01-31 08:09:24.156 221554 WARNING nova.compute.manager [req-bf0eb756-df1a-4b4f-bee3-c5ab69d9e56d req-fc10ca4f-b466-4eca-9db1-b09aa4206a15 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Received unexpected event network-vif-plugged-798bc67d-757c-4e6e-9c3c-9ef096c1e455 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:09:24 np0005603609 nova_compute[221550]: 2026-01-31 08:09:24.212 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:09:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:24.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:09:24 np0005603609 nova_compute[221550]: 2026-01-31 08:09:24.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:24 np0005603609 nova_compute[221550]: 2026-01-31 08:09:24.773 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:26.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:26 np0005603609 nova_compute[221550]: 2026-01-31 08:09:26.282 221554 INFO nova.virt.libvirt.driver [None req-54795ac5-718c-4d6d-8cf8-caed453ac0dc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Instance shutdown successfully after 6 seconds.#033[00m
Jan 31 03:09:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:26.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:27 np0005603609 kernel: tapf99a05c5-fb (unregistering): left promiscuous mode
Jan 31 03:09:27 np0005603609 NetworkManager[49064]: <info>  [1769846967.3060] device (tapf99a05c5-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:09:27 np0005603609 nova_compute[221550]: 2026-01-31 08:09:27.310 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:27 np0005603609 ovn_controller[130359]: 2026-01-31T08:09:27Z|00461|binding|INFO|Releasing lport f99a05c5-fb6e-4cb4-a735-e492526b8a2c from this chassis (sb_readonly=0)
Jan 31 03:09:27 np0005603609 ovn_controller[130359]: 2026-01-31T08:09:27Z|00462|binding|INFO|Setting lport f99a05c5-fb6e-4cb4-a735-e492526b8a2c down in Southbound
Jan 31 03:09:27 np0005603609 ovn_controller[130359]: 2026-01-31T08:09:27Z|00463|binding|INFO|Removing iface tapf99a05c5-fb ovn-installed in OVS
Jan 31 03:09:27 np0005603609 nova_compute[221550]: 2026-01-31 08:09:27.313 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:27 np0005603609 nova_compute[221550]: 2026-01-31 08:09:27.323 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:27 np0005603609 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Jan 31 03:09:27 np0005603609 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000006c.scope: Consumed 18.314s CPU time.
Jan 31 03:09:27 np0005603609 systemd-machined[190912]: Machine qemu-53-instance-0000006c terminated.
Jan 31 03:09:27 np0005603609 nova_compute[221550]: 2026-01-31 08:09:27.517 221554 INFO nova.virt.libvirt.driver [-] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Instance destroyed successfully.#033[00m
Jan 31 03:09:27 np0005603609 nova_compute[221550]: 2026-01-31 08:09:27.518 221554 DEBUG nova.objects.instance [None req-54795ac5-718c-4d6d-8cf8-caed453ac0dc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'numa_topology' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:27.599 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:a2:84 10.100.0.5'], port_security=['fa:16:3e:e8:a2:84 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1d463d8d-cbf7-43ec-8042-83d2c9aa5c69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5b02cc0a-856b-4d31-80e9-eccd1c696448', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=f99a05c5-fb6e-4cb4-a735-e492526b8a2c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:09:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:27.600 140058 INFO neutron.agent.ovn.metadata.agent [-] Port f99a05c5-fb6e-4cb4-a735-e492526b8a2c in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 unbound from our chassis#033[00m
Jan 31 03:09:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:27.601 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e8014d6b-23e1-41ef-b5e2-3d770d302e72, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:09:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:27.603 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[06440782-394b-4dad-a0db-6e62e7f45091]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:27.603 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 namespace which is not needed anymore#033[00m
Jan 31 03:09:27 np0005603609 nova_compute[221550]: 2026-01-31 08:09:27.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:27 np0005603609 nova_compute[221550]: 2026-01-31 08:09:27.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:27 np0005603609 nova_compute[221550]: 2026-01-31 08:09:27.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:09:27 np0005603609 nova_compute[221550]: 2026-01-31 08:09:27.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:09:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:28.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:28 np0005603609 nova_compute[221550]: 2026-01-31 08:09:28.240 221554 DEBUG nova.compute.manager [None req-54795ac5-718c-4d6d-8cf8-caed453ac0dc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:28 np0005603609 nova_compute[221550]: 2026-01-31 08:09:28.244 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 31 03:09:28 np0005603609 nova_compute[221550]: 2026-01-31 08:09:28.244 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:09:28 np0005603609 nova_compute[221550]: 2026-01-31 08:09:28.244 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:09:28 np0005603609 nova_compute[221550]: 2026-01-31 08:09:28.245 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:09:28 np0005603609 nova_compute[221550]: 2026-01-31 08:09:28.245 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:09:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:28.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:09:28 np0005603609 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[263883]: [NOTICE]   (263887) : haproxy version is 2.8.14-c23fe91
Jan 31 03:09:28 np0005603609 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[263883]: [NOTICE]   (263887) : path to executable is /usr/sbin/haproxy
Jan 31 03:09:28 np0005603609 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[263883]: [WARNING]  (263887) : Exiting Master process...
Jan 31 03:09:28 np0005603609 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[263883]: [ALERT]    (263887) : Current worker (263889) exited with code 143 (Terminated)
Jan 31 03:09:28 np0005603609 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[263883]: [WARNING]  (263887) : All workers exited. Exiting... (0)
Jan 31 03:09:28 np0005603609 systemd[1]: libpod-e39c16e4b9888c8bd1953feb7dc8a876448306197c27a6ca3f8bd32a636b3db3.scope: Deactivated successfully.
Jan 31 03:09:28 np0005603609 podman[267774]: 2026-01-31 08:09:28.469651337 +0000 UTC m=+0.763880609 container died e39c16e4b9888c8bd1953feb7dc8a876448306197c27a6ca3f8bd32a636b3db3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 03:09:29 np0005603609 nova_compute[221550]: 2026-01-31 08:09:29.214 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:29 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e39c16e4b9888c8bd1953feb7dc8a876448306197c27a6ca3f8bd32a636b3db3-userdata-shm.mount: Deactivated successfully.
Jan 31 03:09:29 np0005603609 systemd[1]: var-lib-containers-storage-overlay-85d88ea037eb73fe9e43b4b60f3ca5f4e8c0b6936753eb22241fd71df0dac6dc-merged.mount: Deactivated successfully.
Jan 31 03:09:29 np0005603609 nova_compute[221550]: 2026-01-31 08:09:29.775 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:30 np0005603609 podman[267774]: 2026-01-31 08:09:30.059026285 +0000 UTC m=+2.353255537 container cleanup e39c16e4b9888c8bd1953feb7dc8a876448306197c27a6ca3f8bd32a636b3db3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:09:30 np0005603609 systemd[1]: libpod-conmon-e39c16e4b9888c8bd1953feb7dc8a876448306197c27a6ca3f8bd32a636b3db3.scope: Deactivated successfully.
Jan 31 03:09:30 np0005603609 nova_compute[221550]: 2026-01-31 08:09:30.114 221554 DEBUG oslo_concurrency.lockutils [None req-54795ac5-718c-4d6d-8cf8-caed453ac0dc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 9.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:30.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:09:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:30.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:09:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:31 np0005603609 podman[267805]: 2026-01-31 08:09:31.020199395 +0000 UTC m=+0.946160400 container remove e39c16e4b9888c8bd1953feb7dc8a876448306197c27a6ca3f8bd32a636b3db3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:09:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:31.024 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ade15d55-02ab-4deb-9095-8f5c630518eb]: (4, ('Sat Jan 31 08:09:27 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 (e39c16e4b9888c8bd1953feb7dc8a876448306197c27a6ca3f8bd32a636b3db3)\ne39c16e4b9888c8bd1953feb7dc8a876448306197c27a6ca3f8bd32a636b3db3\nSat Jan 31 08:09:30 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 (e39c16e4b9888c8bd1953feb7dc8a876448306197c27a6ca3f8bd32a636b3db3)\ne39c16e4b9888c8bd1953feb7dc8a876448306197c27a6ca3f8bd32a636b3db3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:31.025 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ec8450b6-f458-4046-a9f0-245651cf14ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:31.026 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:31 np0005603609 nova_compute[221550]: 2026-01-31 08:09:31.028 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:31 np0005603609 kernel: tape8014d6b-20: left promiscuous mode
Jan 31 03:09:31 np0005603609 nova_compute[221550]: 2026-01-31 08:09:31.038 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:31.040 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[198164fc-b4c6-4813-a4a9-e515f24066b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:31.060 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d2393b4d-c91f-4262-b48a-1d23482f65c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:31.061 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e23878-4ab5-447d-b12a-627915101d8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:31.073 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed5ad1f-019f-4d0c-9846-b0f45cf85730]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708659, 'reachable_time': 25893, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267823, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:31.074 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:09:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:31.074 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[22773d2c-c243-45e2-92ed-a716ce0efc5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:31 np0005603609 systemd[1]: run-netns-ovnmeta\x2de8014d6b\x2d23e1\x2d41ef\x2db5e2\x2d3d770d302e72.mount: Deactivated successfully.
Jan 31 03:09:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:09:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:32.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:09:32 np0005603609 nova_compute[221550]: 2026-01-31 08:09:32.168 221554 INFO nova.virt.libvirt.driver [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Deleting instance files /var/lib/nova/instances/95d81c30-cf89-4075-8b93-6ed225df1262_del#033[00m
Jan 31 03:09:32 np0005603609 nova_compute[221550]: 2026-01-31 08:09:32.169 221554 INFO nova.virt.libvirt.driver [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Deletion of /var/lib/nova/instances/95d81c30-cf89-4075-8b93-6ed225df1262_del complete#033[00m
Jan 31 03:09:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:32.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:33 np0005603609 nova_compute[221550]: 2026-01-31 08:09:33.045 221554 INFO nova.compute.manager [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Took 14.47 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:09:33 np0005603609 nova_compute[221550]: 2026-01-31 08:09:33.046 221554 DEBUG oslo.service.loopingcall [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:09:33 np0005603609 nova_compute[221550]: 2026-01-31 08:09:33.046 221554 DEBUG nova.compute.manager [-] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:09:33 np0005603609 nova_compute[221550]: 2026-01-31 08:09:33.047 221554 DEBUG nova.network.neutron [-] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:09:33 np0005603609 nova_compute[221550]: 2026-01-31 08:09:33.188 221554 DEBUG nova.compute.manager [req-c3d0a2f7-7542-493e-b8bb-3ccea6027829 req-8aedbf51-212c-40c8-8c91-3c50b81f53b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Received event network-vif-unplugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:33 np0005603609 nova_compute[221550]: 2026-01-31 08:09:33.189 221554 DEBUG oslo_concurrency.lockutils [req-c3d0a2f7-7542-493e-b8bb-3ccea6027829 req-8aedbf51-212c-40c8-8c91-3c50b81f53b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:33 np0005603609 nova_compute[221550]: 2026-01-31 08:09:33.189 221554 DEBUG oslo_concurrency.lockutils [req-c3d0a2f7-7542-493e-b8bb-3ccea6027829 req-8aedbf51-212c-40c8-8c91-3c50b81f53b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:33 np0005603609 nova_compute[221550]: 2026-01-31 08:09:33.189 221554 DEBUG oslo_concurrency.lockutils [req-c3d0a2f7-7542-493e-b8bb-3ccea6027829 req-8aedbf51-212c-40c8-8c91-3c50b81f53b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:33 np0005603609 nova_compute[221550]: 2026-01-31 08:09:33.190 221554 DEBUG nova.compute.manager [req-c3d0a2f7-7542-493e-b8bb-3ccea6027829 req-8aedbf51-212c-40c8-8c91-3c50b81f53b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] No waiting events found dispatching network-vif-unplugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:09:33 np0005603609 nova_compute[221550]: 2026-01-31 08:09:33.190 221554 WARNING nova.compute.manager [req-c3d0a2f7-7542-493e-b8bb-3ccea6027829 req-8aedbf51-212c-40c8-8c91-3c50b81f53b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Received unexpected event network-vif-unplugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c for instance with vm_state stopped and task_state None.#033[00m
Jan 31 03:09:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:34.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:34 np0005603609 nova_compute[221550]: 2026-01-31 08:09:34.217 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:34.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:34 np0005603609 nova_compute[221550]: 2026-01-31 08:09:34.602 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846959.6014314, 95d81c30-cf89-4075-8b93-6ed225df1262 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:09:34 np0005603609 nova_compute[221550]: 2026-01-31 08:09:34.602 221554 INFO nova.compute.manager [-] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:09:34 np0005603609 nova_compute[221550]: 2026-01-31 08:09:34.778 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:35 np0005603609 nova_compute[221550]: 2026-01-31 08:09:35.170 221554 DEBUG nova.compute.manager [None req-28ec9f0c-5f92-43c7-be62-ce88519efc03 - - - - - -] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:35 np0005603609 podman[267830]: 2026-01-31 08:09:35.170750123 +0000 UTC m=+0.053249820 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 31 03:09:35 np0005603609 podman[267829]: 2026-01-31 08:09:35.192560527 +0000 UTC m=+0.073481146 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 03:09:35 np0005603609 nova_compute[221550]: 2026-01-31 08:09:35.212 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Updating instance_info_cache with network_info: [{"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:09:35 np0005603609 nova_compute[221550]: 2026-01-31 08:09:35.385 221554 DEBUG nova.network.neutron [-] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:09:35 np0005603609 nova_compute[221550]: 2026-01-31 08:09:35.443 221554 DEBUG nova.compute.manager [req-e6b6cef9-e631-47e1-b6f6-83bf23e318ab req-a07896b8-8a04-49dc-971d-5fa356271ed3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Received event network-vif-deleted-798bc67d-757c-4e6e-9c3c-9ef096c1e455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:35 np0005603609 nova_compute[221550]: 2026-01-31 08:09:35.443 221554 INFO nova.compute.manager [req-e6b6cef9-e631-47e1-b6f6-83bf23e318ab req-a07896b8-8a04-49dc-971d-5fa356271ed3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Neutron deleted interface 798bc67d-757c-4e6e-9c3c-9ef096c1e455; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:09:35 np0005603609 nova_compute[221550]: 2026-01-31 08:09:35.444 221554 DEBUG nova.network.neutron [req-e6b6cef9-e631-47e1-b6f6-83bf23e318ab req-a07896b8-8a04-49dc-971d-5fa356271ed3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:09:35 np0005603609 nova_compute[221550]: 2026-01-31 08:09:35.476 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:09:35 np0005603609 nova_compute[221550]: 2026-01-31 08:09:35.476 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:09:35 np0005603609 nova_compute[221550]: 2026-01-31 08:09:35.476 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:35 np0005603609 nova_compute[221550]: 2026-01-31 08:09:35.477 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:35 np0005603609 nova_compute[221550]: 2026-01-31 08:09:35.477 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:09:35 np0005603609 nova_compute[221550]: 2026-01-31 08:09:35.477 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:36.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:36.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.408 221554 DEBUG nova.compute.manager [req-759439ba-d711-45bd-83e0-38a58a3e48bc req-1d6ff771-5f39-4d8c-99ef-92ac93e19ed9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Received event network-vif-plugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.409 221554 DEBUG oslo_concurrency.lockutils [req-759439ba-d711-45bd-83e0-38a58a3e48bc req-1d6ff771-5f39-4d8c-99ef-92ac93e19ed9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.409 221554 DEBUG oslo_concurrency.lockutils [req-759439ba-d711-45bd-83e0-38a58a3e48bc req-1d6ff771-5f39-4d8c-99ef-92ac93e19ed9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.409 221554 DEBUG oslo_concurrency.lockutils [req-759439ba-d711-45bd-83e0-38a58a3e48bc req-1d6ff771-5f39-4d8c-99ef-92ac93e19ed9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.410 221554 DEBUG nova.compute.manager [req-759439ba-d711-45bd-83e0-38a58a3e48bc req-1d6ff771-5f39-4d8c-99ef-92ac93e19ed9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] No waiting events found dispatching network-vif-plugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.410 221554 WARNING nova.compute.manager [req-759439ba-d711-45bd-83e0-38a58a3e48bc req-1d6ff771-5f39-4d8c-99ef-92ac93e19ed9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Received unexpected event network-vif-plugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c for instance with vm_state stopped and task_state resize_prep.#033[00m
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.425 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.425 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.426 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.426 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.426 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.447 221554 INFO nova.compute.manager [-] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Took 3.40 seconds to deallocate network for instance.#033[00m
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.536 221554 DEBUG nova.compute.manager [req-e6b6cef9-e631-47e1-b6f6-83bf23e318ab req-a07896b8-8a04-49dc-971d-5fa356271ed3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 95d81c30-cf89-4075-8b93-6ed225df1262] Detach interface failed, port_id=798bc67d-757c-4e6e-9c3c-9ef096c1e455, reason: Instance 95d81c30-cf89-4075-8b93-6ed225df1262 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.708 221554 DEBUG oslo_concurrency.lockutils [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.709 221554 DEBUG oslo_concurrency.lockutils [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.789 221554 DEBUG oslo_concurrency.processutils [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:09:36 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/969850163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:09:36 np0005603609 nova_compute[221550]: 2026-01-31 08:09:36.990 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:09:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1831518582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:09:37 np0005603609 nova_compute[221550]: 2026-01-31 08:09:37.249 221554 DEBUG oslo_concurrency.processutils [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:37 np0005603609 nova_compute[221550]: 2026-01-31 08:09:37.253 221554 DEBUG nova.compute.provider_tree [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:09:37 np0005603609 nova_compute[221550]: 2026-01-31 08:09:37.602 221554 DEBUG nova.scheduler.client.report [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:09:37 np0005603609 nova_compute[221550]: 2026-01-31 08:09:37.711 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:09:37 np0005603609 nova_compute[221550]: 2026-01-31 08:09:37.711 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:09:37 np0005603609 nova_compute[221550]: 2026-01-31 08:09:37.828 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:09:37 np0005603609 nova_compute[221550]: 2026-01-31 08:09:37.829 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4524MB free_disk=20.876785278320312GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:09:37 np0005603609 nova_compute[221550]: 2026-01-31 08:09:37.829 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:38 np0005603609 nova_compute[221550]: 2026-01-31 08:09:38.022 221554 DEBUG oslo_concurrency.lockutils [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.313s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:38 np0005603609 nova_compute[221550]: 2026-01-31 08:09:38.024 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:09:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:38.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:09:38 np0005603609 nova_compute[221550]: 2026-01-31 08:09:38.375 221554 INFO nova.scheduler.client.report [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Deleted allocations for instance 95d81c30-cf89-4075-8b93-6ed225df1262#033[00m
Jan 31 03:09:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:38.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:38 np0005603609 nova_compute[221550]: 2026-01-31 08:09:38.705 221554 INFO nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Updating resource usage from migration 44c1e556-6b95-4a35-bb73-ed66132c0f35#033[00m
Jan 31 03:09:38 np0005603609 nova_compute[221550]: 2026-01-31 08:09:38.734 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Migration 44c1e556-6b95-4a35-bb73-ed66132c0f35 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 03:09:38 np0005603609 nova_compute[221550]: 2026-01-31 08:09:38.735 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:09:38 np0005603609 nova_compute[221550]: 2026-01-31 08:09:38.735 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:09:38 np0005603609 nova_compute[221550]: 2026-01-31 08:09:38.776 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:38 np0005603609 nova_compute[221550]: 2026-01-31 08:09:38.917 221554 DEBUG oslo_concurrency.lockutils [None req-a330adff-069c-4e2d-a6b2-fd07f737cf39 221dc0e49994443e8b86a6a9949ee1aa 0e5e834897a04940b210da74cc674879 - - default default] Lock "95d81c30-cf89-4075-8b93-6ed225df1262" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 20.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:09:39 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/821602593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.192 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.198 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.218 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.227 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.375 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.376 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.516 221554 DEBUG nova.compute.manager [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Stashing vm_state: stopped _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.610 221554 DEBUG oslo_concurrency.lockutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.611 221554 DEBUG oslo_concurrency.lockutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.674 221554 DEBUG nova.objects.instance [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'pci_requests' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.699 221554 DEBUG nova.virt.hardware [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.699 221554 INFO nova.compute.claims [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.699 221554 DEBUG nova.objects.instance [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'resources' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.714 221554 DEBUG nova.objects.instance [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.780 221554 INFO nova.compute.resource_tracker [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Updating resource usage from migration 44c1e556-6b95-4a35-bb73-ed66132c0f35#033[00m
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.782 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:39 np0005603609 nova_compute[221550]: 2026-01-31 08:09:39.860 221554 DEBUG oslo_concurrency.processutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:40.073 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:09:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:40.074 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:09:40 np0005603609 nova_compute[221550]: 2026-01-31 08:09:40.116 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:40.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:09:40 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2439194594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:09:40 np0005603609 nova_compute[221550]: 2026-01-31 08:09:40.295 221554 DEBUG oslo_concurrency.processutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:40 np0005603609 nova_compute[221550]: 2026-01-31 08:09:40.299 221554 DEBUG nova.compute.provider_tree [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:09:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:40.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:40 np0005603609 nova_compute[221550]: 2026-01-31 08:09:40.899 221554 DEBUG nova.scheduler.client.report [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:09:40 np0005603609 nova_compute[221550]: 2026-01-31 08:09:40.927 221554 DEBUG oslo_concurrency.lockutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:40 np0005603609 nova_compute[221550]: 2026-01-31 08:09:40.928 221554 INFO nova.compute.manager [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Migrating#033[00m
Jan 31 03:09:40 np0005603609 nova_compute[221550]: 2026-01-31 08:09:40.968 221554 DEBUG oslo_concurrency.lockutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:09:40 np0005603609 nova_compute[221550]: 2026-01-31 08:09:40.969 221554 DEBUG oslo_concurrency.lockutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquired lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:09:40 np0005603609 nova_compute[221550]: 2026-01-31 08:09:40.969 221554 DEBUG nova.network.neutron [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:09:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:09:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:42.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:09:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:09:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:42.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:09:42 np0005603609 nova_compute[221550]: 2026-01-31 08:09:42.515 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846967.5139456, 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:09:42 np0005603609 nova_compute[221550]: 2026-01-31 08:09:42.516 221554 INFO nova.compute.manager [-] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:09:42 np0005603609 nova_compute[221550]: 2026-01-31 08:09:42.681 221554 DEBUG nova.compute.manager [None req-accefa81-706c-46a9-9624-8542c20316e0 - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:42 np0005603609 nova_compute[221550]: 2026-01-31 08:09:42.685 221554 DEBUG nova.compute.manager [None req-accefa81-706c-46a9-9624-8542c20316e0 - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: resize_prep, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:09:42 np0005603609 nova_compute[221550]: 2026-01-31 08:09:42.820 221554 INFO nova.compute.manager [None req-accefa81-706c-46a9-9624-8542c20316e0 - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] During sync_power_state the instance has a pending task (resize_prep). Skip.#033[00m
Jan 31 03:09:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:09:43.076 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:43 np0005603609 nova_compute[221550]: 2026-01-31 08:09:43.189 221554 DEBUG nova.network.neutron [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Updating instance_info_cache with network_info: [{"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:09:43 np0005603609 nova_compute[221550]: 2026-01-31 08:09:43.763 221554 DEBUG oslo_concurrency.lockutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Releasing lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.045 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.110 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:44.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.224 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:44.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.784 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.890 221554 DEBUG nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.893 221554 INFO nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Instance already shutdown.#033[00m
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.898 221554 INFO nova.virt.libvirt.driver [-] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Instance destroyed successfully.#033[00m
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.899 221554 DEBUG nova.virt.libvirt.vif [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-487797123',display_name='tempest-ServerActionsTestOtherB-server-487797123',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-487797123',id=108,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:07:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-opoua6f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:09:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=1d463d8d-cbf7-43ec-8042-83d2c9aa5c69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1220738298-network", "vif_mac": "fa:16:3e:e8:a2:84"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.899 221554 DEBUG nova.network.os_vif_util [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1220738298-network", "vif_mac": "fa:16:3e:e8:a2:84"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.900 221554 DEBUG nova.network.os_vif_util [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.900 221554 DEBUG os_vif [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.902 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.902 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf99a05c5-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.903 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.905 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.907 221554 INFO os_vif [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb')#033[00m
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.911 221554 DEBUG nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:09:44 np0005603609 nova_compute[221550]: 2026-01-31 08:09:44.911 221554 DEBUG nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:09:45 np0005603609 nova_compute[221550]: 2026-01-31 08:09:45.280 221554 DEBUG nova.network.neutron [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Port f99a05c5-fb6e-4cb4-a735-e492526b8a2c binding to destination host compute-1.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Jan 31 03:09:45 np0005603609 nova_compute[221550]: 2026-01-31 08:09:45.387 221554 DEBUG oslo_concurrency.lockutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:45 np0005603609 nova_compute[221550]: 2026-01-31 08:09:45.388 221554 DEBUG oslo_concurrency.lockutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:45 np0005603609 nova_compute[221550]: 2026-01-31 08:09:45.388 221554 DEBUG oslo_concurrency.lockutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:45 np0005603609 nova_compute[221550]: 2026-01-31 08:09:45.675 221554 DEBUG oslo_concurrency.lockutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:09:45 np0005603609 nova_compute[221550]: 2026-01-31 08:09:45.676 221554 DEBUG oslo_concurrency.lockutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquired lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:09:45 np0005603609 nova_compute[221550]: 2026-01-31 08:09:45.676 221554 DEBUG nova.network.neutron [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:09:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:09:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:46.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:09:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:46.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:47 np0005603609 nova_compute[221550]: 2026-01-31 08:09:47.372 221554 DEBUG nova.network.neutron [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Updating instance_info_cache with network_info: [{"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:09:47 np0005603609 nova_compute[221550]: 2026-01-31 08:09:47.525 221554 DEBUG oslo_concurrency.lockutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Releasing lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:09:48 np0005603609 nova_compute[221550]: 2026-01-31 08:09:48.038 221554 DEBUG nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 31 03:09:48 np0005603609 nova_compute[221550]: 2026-01-31 08:09:48.040 221554 DEBUG nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:09:48 np0005603609 nova_compute[221550]: 2026-01-31 08:09:48.040 221554 INFO nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Creating image(s)#033[00m
Jan 31 03:09:48 np0005603609 nova_compute[221550]: 2026-01-31 08:09:48.073 221554 DEBUG nova.storage.rbd_utils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] creating snapshot(nova-resize) on rbd image(1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:09:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:09:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:48.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:09:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:09:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:48.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:09:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e280 e280: 3 total, 3 up, 3 in
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.220 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.362 221554 DEBUG nova.objects.instance [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.722 221554 DEBUG nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.723 221554 DEBUG nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Ensure instance console log exists: /var/lib/nova/instances/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.724 221554 DEBUG oslo_concurrency.lockutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.725 221554 DEBUG oslo_concurrency.lockutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.725 221554 DEBUG oslo_concurrency.lockutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.730 221554 DEBUG nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Start _get_guest_xml network_info=[{"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1220738298-network", "vif_mac": "fa:16:3e:e8:a2:84"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.736 221554 WARNING nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.741 221554 DEBUG nova.virt.libvirt.host [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.742 221554 DEBUG nova.virt.libvirt.host [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.745 221554 DEBUG nova.virt.libvirt.host [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.746 221554 DEBUG nova.virt.libvirt.host [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.747 221554 DEBUG nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.747 221554 DEBUG nova.virt.hardware [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e3bd1dad-95f3-4ed9-94b4-27245cd798b5',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.748 221554 DEBUG nova.virt.hardware [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.748 221554 DEBUG nova.virt.hardware [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.748 221554 DEBUG nova.virt.hardware [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.748 221554 DEBUG nova.virt.hardware [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.749 221554 DEBUG nova.virt.hardware [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.749 221554 DEBUG nova.virt.hardware [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.749 221554 DEBUG nova.virt.hardware [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.749 221554 DEBUG nova.virt.hardware [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.749 221554 DEBUG nova.virt.hardware [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.749 221554 DEBUG nova.virt.hardware [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.750 221554 DEBUG nova.objects.instance [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.846 221554 DEBUG oslo_concurrency.processutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:49 np0005603609 nova_compute[221550]: 2026-01-31 08:09:49.904 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:50.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:09:50 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/611290882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.335 221554 DEBUG oslo_concurrency.processutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.374 221554 DEBUG oslo_concurrency.processutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:09:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:50.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:09:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:09:50 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2197638865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.813 221554 DEBUG oslo_concurrency.processutils [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.815 221554 DEBUG nova.virt.libvirt.vif [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-487797123',display_name='tempest-ServerActionsTestOtherB-server-487797123',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-487797123',id=108,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:07:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-opoua6f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:09:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=1d463d8d-cbf7-43ec-8042-83d2c9aa5c69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1220738298-network", "vif_mac": "fa:16:3e:e8:a2:84"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.816 221554 DEBUG nova.network.os_vif_util [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1220738298-network", "vif_mac": "fa:16:3e:e8:a2:84"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.817 221554 DEBUG nova.network.os_vif_util [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.819 221554 DEBUG nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  <uuid>1d463d8d-cbf7-43ec-8042-83d2c9aa5c69</uuid>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  <name>instance-0000006c</name>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  <memory>196608</memory>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerActionsTestOtherB-server-487797123</nova:name>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:09:49</nova:creationTime>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.micro">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        <nova:memory>192</nova:memory>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        <nova:user uuid="18aee9d81d404f77ac81cde538f140d8">tempest-ServerActionsTestOtherB-2012907318-project-member</nova:user>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        <nova:project uuid="c3ddadeb950a490db5c99da98a32c9ec">tempest-ServerActionsTestOtherB-2012907318</nova:project>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        <nova:port uuid="f99a05c5-fb6e-4cb4-a735-e492526b8a2c">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <entry name="serial">1d463d8d-cbf7-43ec-8042-83d2c9aa5c69</entry>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <entry name="uuid">1d463d8d-cbf7-43ec-8042-83d2c9aa5c69</entry>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk.config">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:e8:a2:84"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <target dev="tapf99a05c5-fb"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69/console.log" append="off"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:09:50 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:09:50 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:09:50 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:09:50 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.821 221554 DEBUG nova.virt.libvirt.vif [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-487797123',display_name='tempest-ServerActionsTestOtherB-server-487797123',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-487797123',id=108,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:07:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-opoua6f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:09:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=1d463d8d-cbf7-43ec-8042-83d2c9aa5c69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1220738298-network", "vif_mac": "fa:16:3e:e8:a2:84"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.821 221554 DEBUG nova.network.os_vif_util [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1220738298-network", "vif_mac": "fa:16:3e:e8:a2:84"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.822 221554 DEBUG nova.network.os_vif_util [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.822 221554 DEBUG os_vif [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.823 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.823 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.824 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.825 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.826 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf99a05c5-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.826 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf99a05c5-fb, col_values=(('external_ids', {'iface-id': 'f99a05c5-fb6e-4cb4-a735-e492526b8a2c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:a2:84', 'vm-uuid': '1d463d8d-cbf7-43ec-8042-83d2c9aa5c69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.828 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:50 np0005603609 NetworkManager[49064]: <info>  [1769846990.8290] manager: (tapf99a05c5-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/226)
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.830 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.831 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:50 np0005603609 nova_compute[221550]: 2026-01-31 08:09:50.832 221554 INFO os_vif [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb')#033[00m
Jan 31 03:09:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:51 np0005603609 nova_compute[221550]: 2026-01-31 08:09:51.204 221554 DEBUG nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:09:51 np0005603609 nova_compute[221550]: 2026-01-31 08:09:51.205 221554 DEBUG nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:09:51 np0005603609 nova_compute[221550]: 2026-01-31 08:09:51.205 221554 DEBUG nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No VIF found with MAC fa:16:3e:e8:a2:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:09:51 np0005603609 nova_compute[221550]: 2026-01-31 08:09:51.206 221554 INFO nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Using config drive#033[00m
Jan 31 03:09:51 np0005603609 nova_compute[221550]: 2026-01-31 08:09:51.237 221554 DEBUG nova.compute.manager [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:09:51 np0005603609 nova_compute[221550]: 2026-01-31 08:09:51.238 221554 DEBUG nova.virt.libvirt.driver [None req-3711464b-93a3-43e1-aeb1-d18dbb2fe4cc 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 31 03:09:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:09:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:52.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:09:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:52.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:53 np0005603609 nova_compute[221550]: 2026-01-31 08:09:53.689 221554 DEBUG oslo_concurrency.lockutils [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:53 np0005603609 nova_compute[221550]: 2026-01-31 08:09:53.689 221554 DEBUG oslo_concurrency.lockutils [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:53 np0005603609 nova_compute[221550]: 2026-01-31 08:09:53.689 221554 DEBUG nova.compute.manager [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Going to confirm migration 17 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Jan 31 03:09:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:54.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:54 np0005603609 nova_compute[221550]: 2026-01-31 08:09:54.222 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:54 np0005603609 nova_compute[221550]: 2026-01-31 08:09:54.377 221554 DEBUG oslo_concurrency.lockutils [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:09:54 np0005603609 nova_compute[221550]: 2026-01-31 08:09:54.377 221554 DEBUG oslo_concurrency.lockutils [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquired lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:09:54 np0005603609 nova_compute[221550]: 2026-01-31 08:09:54.378 221554 DEBUG nova.network.neutron [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:09:54 np0005603609 nova_compute[221550]: 2026-01-31 08:09:54.379 221554 DEBUG nova.objects.instance [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'info_cache' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:54.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:55 np0005603609 nova_compute[221550]: 2026-01-31 08:09:55.828 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:09:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:56.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:09:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:56.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:58.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:09:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:09:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:58.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:09:59 np0005603609 nova_compute[221550]: 2026-01-31 08:09:59.270 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:00.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:00.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:00 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 03:10:00 np0005603609 nova_compute[221550]: 2026-01-31 08:10:00.830 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:10:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:02.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:10:02 np0005603609 nova_compute[221550]: 2026-01-31 08:10:02.305 221554 DEBUG nova.network.neutron [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Updating instance_info_cache with network_info: [{"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:10:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:10:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:02.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:10:02 np0005603609 nova_compute[221550]: 2026-01-31 08:10:02.468 221554 DEBUG oslo_concurrency.lockutils [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Releasing lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:10:02 np0005603609 nova_compute[221550]: 2026-01-31 08:10:02.468 221554 DEBUG nova.objects.instance [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'migration_context' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:10:02 np0005603609 nova_compute[221550]: 2026-01-31 08:10:02.748 221554 DEBUG nova.storage.rbd_utils [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] removing snapshot(nova-resize) on rbd image(1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:10:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e281 e281: 3 total, 3 up, 3 in
Jan 31 03:10:02 np0005603609 nova_compute[221550]: 2026-01-31 08:10:02.924 221554 DEBUG oslo_concurrency.lockutils [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:02 np0005603609 nova_compute[221550]: 2026-01-31 08:10:02.925 221554 DEBUG oslo_concurrency.lockutils [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:03 np0005603609 nova_compute[221550]: 2026-01-31 08:10:03.580 221554 DEBUG oslo_concurrency.processutils [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:10:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:10:03 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1587739989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:10:04 np0005603609 nova_compute[221550]: 2026-01-31 08:10:04.010 221554 DEBUG oslo_concurrency.processutils [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:10:04 np0005603609 nova_compute[221550]: 2026-01-31 08:10:04.015 221554 DEBUG nova.compute.provider_tree [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:10:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:04.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:04 np0005603609 nova_compute[221550]: 2026-01-31 08:10:04.227 221554 DEBUG nova.scheduler.client.report [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:10:04 np0005603609 nova_compute[221550]: 2026-01-31 08:10:04.271 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:10:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:04.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:10:04 np0005603609 nova_compute[221550]: 2026-01-31 08:10:04.910 221554 DEBUG oslo_concurrency.lockutils [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 1.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:04 np0005603609 nova_compute[221550]: 2026-01-31 08:10:04.911 221554 DEBUG nova.compute.manager [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Resized/migrated instance is powered off. Setting vm_state to 'stopped'. _confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4805#033[00m
Jan 31 03:10:05 np0005603609 nova_compute[221550]: 2026-01-31 08:10:05.179 221554 INFO nova.scheduler.client.report [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Deleted allocation for migration 44c1e556-6b95-4a35-bb73-ed66132c0f35#033[00m
Jan 31 03:10:05 np0005603609 nova_compute[221550]: 2026-01-31 08:10:05.766 221554 DEBUG oslo_concurrency.lockutils [None req-8235c403-23c0-4c81-9dd5-801923971edb 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 12.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:05 np0005603609 nova_compute[221550]: 2026-01-31 08:10:05.832 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:06 np0005603609 podman[268179]: 2026-01-31 08:10:06.178324464 +0000 UTC m=+0.059831999 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 31 03:10:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:06.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:06 np0005603609 podman[268178]: 2026-01-31 08:10:06.253263125 +0000 UTC m=+0.136579043 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:10:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:06.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:07.503 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:07.503 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:07.503 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:10:07.917509) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847007917571, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2490, "num_deletes": 258, "total_data_size": 5806108, "memory_usage": 5877616, "flush_reason": "Manual Compaction"}
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847007960884, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 3773074, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49958, "largest_seqno": 52443, "table_properties": {"data_size": 3762830, "index_size": 6546, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21677, "raw_average_key_size": 20, "raw_value_size": 3742244, "raw_average_value_size": 3619, "num_data_blocks": 282, "num_entries": 1034, "num_filter_entries": 1034, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846820, "oldest_key_time": 1769846820, "file_creation_time": 1769847007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 43464 microseconds, and 5768 cpu microseconds.
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:10:07.960939) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 3773074 bytes OK
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:10:07.960999) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:10:07.966011) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:10:07.966030) EVENT_LOG_v1 {"time_micros": 1769847007966023, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:10:07.966050) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 5795116, prev total WAL file size 5795116, number of live WAL files 2.
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:10:07.967189) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(3684KB)], [99(9059KB)]
Jan 31 03:10:07 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847007967252, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 13049955, "oldest_snapshot_seqno": -1}
Jan 31 03:10:08 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 7676 keys, 11137163 bytes, temperature: kUnknown
Jan 31 03:10:08 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847008079205, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 11137163, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11086601, "index_size": 30290, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 198657, "raw_average_key_size": 25, "raw_value_size": 10950389, "raw_average_value_size": 1426, "num_data_blocks": 1192, "num_entries": 7676, "num_filter_entries": 7676, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769847007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:10:08 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:10:08 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:10:08.079467) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11137163 bytes
Jan 31 03:10:08 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:10:08.089035) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.5 rd, 99.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.8 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 8208, records dropped: 532 output_compression: NoCompression
Jan 31 03:10:08 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:10:08.089085) EVENT_LOG_v1 {"time_micros": 1769847008089065, "job": 62, "event": "compaction_finished", "compaction_time_micros": 112034, "compaction_time_cpu_micros": 34096, "output_level": 6, "num_output_files": 1, "total_output_size": 11137163, "num_input_records": 8208, "num_output_records": 7676, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:10:08 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:10:08 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847008089780, "job": 62, "event": "table_file_deletion", "file_number": 101}
Jan 31 03:10:08 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:10:08 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847008091421, "job": 62, "event": "table_file_deletion", "file_number": 99}
Jan 31 03:10:08 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:10:07.966945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:10:08 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:10:08.091508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:10:08 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:10:08.091514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:10:08 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:10:08.091518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:10:08 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:10:08.091521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:10:08 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:10:08.091524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:10:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:08.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:10:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:08.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:10:09 np0005603609 nova_compute[221550]: 2026-01-31 08:10:09.273 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:09 np0005603609 nova_compute[221550]: 2026-01-31 08:10:09.678 221554 DEBUG nova.objects.instance [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'flavor' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:10:09 np0005603609 nova_compute[221550]: 2026-01-31 08:10:09.943 221554 DEBUG oslo_concurrency.lockutils [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:10:09 np0005603609 nova_compute[221550]: 2026-01-31 08:10:09.943 221554 DEBUG oslo_concurrency.lockutils [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquired lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:10:09 np0005603609 nova_compute[221550]: 2026-01-31 08:10:09.943 221554 DEBUG nova.network.neutron [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:10:09 np0005603609 nova_compute[221550]: 2026-01-31 08:10:09.944 221554 DEBUG nova.objects.instance [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'info_cache' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:10:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:10.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:10.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 e282: 3 total, 3 up, 3 in
Jan 31 03:10:10 np0005603609 nova_compute[221550]: 2026-01-31 08:10:10.835 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.604 221554 DEBUG nova.network.neutron [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Updating instance_info_cache with network_info: [{"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.639 221554 DEBUG oslo_concurrency.lockutils [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Releasing lock "refresh_cache-1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.683 221554 INFO nova.virt.libvirt.driver [-] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Instance destroyed successfully.
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.683 221554 DEBUG nova.objects.instance [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'numa_topology' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.760 221554 DEBUG nova.objects.instance [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'resources' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.801 221554 DEBUG nova.virt.libvirt.vif [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-487797123',display_name='tempest-ServerActionsTestOtherB-server-487797123',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-487797123',id=108,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:09:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-opoua6f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:10:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=1d463d8d-cbf7-43ec-8042-83d2c9aa5c69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.801 221554 DEBUG nova.network.os_vif_util [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.802 221554 DEBUG nova.network.os_vif_util [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.802 221554 DEBUG os_vif [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.803 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.804 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf99a05c5-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.805 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.806 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.808 221554 INFO os_vif [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb')
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.812 221554 DEBUG nova.virt.libvirt.driver [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Start _get_guest_xml network_info=[{"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.815 221554 WARNING nova.virt.libvirt.driver [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.822 221554 DEBUG nova.virt.libvirt.host [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.823 221554 DEBUG nova.virt.libvirt.host [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.826 221554 DEBUG nova.virt.libvirt.host [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.826 221554 DEBUG nova.virt.libvirt.host [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.827 221554 DEBUG nova.virt.libvirt.driver [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.827 221554 DEBUG nova.virt.hardware [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e3bd1dad-95f3-4ed9-94b4-27245cd798b5',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.828 221554 DEBUG nova.virt.hardware [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.828 221554 DEBUG nova.virt.hardware [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.828 221554 DEBUG nova.virt.hardware [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.828 221554 DEBUG nova.virt.hardware [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.829 221554 DEBUG nova.virt.hardware [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.829 221554 DEBUG nova.virt.hardware [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.829 221554 DEBUG nova.virt.hardware [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.829 221554 DEBUG nova.virt.hardware [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.829 221554 DEBUG nova.virt.hardware [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.830 221554 DEBUG nova.virt.hardware [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.830 221554 DEBUG nova.objects.instance [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:10:11 np0005603609 nova_compute[221550]: 2026-01-31 08:10:11.855 221554 DEBUG oslo_concurrency.processutils [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:10:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:12.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:10:12 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1128975270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.272 221554 DEBUG oslo_concurrency.processutils [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.306 221554 DEBUG oslo_concurrency.processutils [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:10:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:10:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:12.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:10:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:10:12 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/14629453' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.724 221554 DEBUG oslo_concurrency.processutils [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.726 221554 DEBUG nova.virt.libvirt.vif [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-487797123',display_name='tempest-ServerActionsTestOtherB-server-487797123',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-487797123',id=108,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:09:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-opoua6f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:10:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=1d463d8d-cbf7-43ec-8042-83d2c9aa5c69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.727 221554 DEBUG nova.network.os_vif_util [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.728 221554 DEBUG nova.network.os_vif_util [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.729 221554 DEBUG nova.objects.instance [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.912 221554 DEBUG nova.virt.libvirt.driver [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  <uuid>1d463d8d-cbf7-43ec-8042-83d2c9aa5c69</uuid>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  <name>instance-0000006c</name>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  <memory>196608</memory>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerActionsTestOtherB-server-487797123</nova:name>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:10:11</nova:creationTime>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.micro">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        <nova:memory>192</nova:memory>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        <nova:user uuid="18aee9d81d404f77ac81cde538f140d8">tempest-ServerActionsTestOtherB-2012907318-project-member</nova:user>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        <nova:project uuid="c3ddadeb950a490db5c99da98a32c9ec">tempest-ServerActionsTestOtherB-2012907318</nova:project>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        <nova:port uuid="f99a05c5-fb6e-4cb4-a735-e492526b8a2c">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <entry name="serial">1d463d8d-cbf7-43ec-8042-83d2c9aa5c69</entry>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <entry name="uuid">1d463d8d-cbf7-43ec-8042-83d2c9aa5c69</entry>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_disk.config">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:e8:a2:84"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <target dev="tapf99a05c5-fb"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69/console.log" append="off"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <input type="keyboard" bus="usb"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:10:12 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:10:12 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:10:12 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:10:12 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.914 221554 DEBUG nova.virt.libvirt.driver [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.915 221554 DEBUG nova.virt.libvirt.driver [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.916 221554 DEBUG nova.virt.libvirt.vif [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-487797123',display_name='tempest-ServerActionsTestOtherB-server-487797123',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-487797123',id=108,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:09:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-opoua6f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:10:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=1d463d8d-cbf7-43ec-8042-83d2c9aa5c69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.916 221554 DEBUG nova.network.os_vif_util [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.917 221554 DEBUG nova.network.os_vif_util [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.918 221554 DEBUG os_vif [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.918 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.919 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.919 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.923 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.923 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf99a05c5-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.923 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf99a05c5-fb, col_values=(('external_ids', {'iface-id': 'f99a05c5-fb6e-4cb4-a735-e492526b8a2c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:a2:84', 'vm-uuid': '1d463d8d-cbf7-43ec-8042-83d2c9aa5c69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.925 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:12 np0005603609 NetworkManager[49064]: <info>  [1769847012.9262] manager: (tapf99a05c5-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.928 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.930 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:12 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.932 221554 INFO os_vif [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb')#033[00m
Jan 31 03:10:12 np0005603609 kernel: tapf99a05c5-fb: entered promiscuous mode
Jan 31 03:10:12 np0005603609 NetworkManager[49064]: <info>  [1769847012.9967] manager: (tapf99a05c5-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/228)
Jan 31 03:10:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:10:12Z|00464|binding|INFO|Claiming lport f99a05c5-fb6e-4cb4-a735-e492526b8a2c for this chassis.
Jan 31 03:10:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:10:12Z|00465|binding|INFO|f99a05c5-fb6e-4cb4-a735-e492526b8a2c: Claiming fa:16:3e:e8:a2:84 10.100.0.5
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:12.998 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.009 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.011 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:13 np0005603609 NetworkManager[49064]: <info>  [1769847013.0133] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Jan 31 03:10:13 np0005603609 NetworkManager[49064]: <info>  [1769847013.0140] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Jan 31 03:10:13 np0005603609 systemd-udevd[268296]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:10:13 np0005603609 systemd-machined[190912]: New machine qemu-57-instance-0000006c.
Jan 31 03:10:13 np0005603609 NetworkManager[49064]: <info>  [1769847013.0336] device (tapf99a05c5-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:10:13 np0005603609 NetworkManager[49064]: <info>  [1769847013.0344] device (tapf99a05c5-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:10:13 np0005603609 systemd[1]: Started Virtual Machine qemu-57-instance-0000006c.
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.055 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.064 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.217 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:a2:84 10.100.0.5'], port_security=['fa:16:3e:e8:a2:84 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1d463d8d-cbf7-43ec-8042-83d2c9aa5c69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5b02cc0a-856b-4d31-80e9-eccd1c696448', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=f99a05c5-fb6e-4cb4-a735-e492526b8a2c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.218 140058 INFO neutron.agent.ovn.metadata.agent [-] Port f99a05c5-fb6e-4cb4-a735-e492526b8a2c in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 bound to our chassis#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.220 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e8014d6b-23e1-41ef-b5e2-3d770d302e72#033[00m
Jan 31 03:10:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:10:13Z|00466|binding|INFO|Setting lport f99a05c5-fb6e-4cb4-a735-e492526b8a2c ovn-installed in OVS
Jan 31 03:10:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:10:13Z|00467|binding|INFO|Setting lport f99a05c5-fb6e-4cb4-a735-e492526b8a2c up in Southbound
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.221 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.225 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.226 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6cbc7500-12e7-4323-a129-b86df41cf0ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.227 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape8014d6b-21 in ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.228 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape8014d6b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.228 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9e9087-b336-49de-906a-99674d571040]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.230 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c56c203b-baab-4a2e-8e98-ab25a61f2890]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.239 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[faba6243-e203-43ac-bbdb-efc1f5bceb16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.260 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f31560-ebc4-4e35-960a-c66a429bf91f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.278 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[05576704-b085-4f14-83ff-23354d0d174f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:13 np0005603609 NetworkManager[49064]: <info>  [1769847013.2855] manager: (tape8014d6b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/231)
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.284 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3d79cdc7-3481-4fb4-b1cc-8e82672a566f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.307 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f210ae-1746-49a5-a508-c55a84032ecb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.309 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[9da3378a-6e02-44c2-ac7b-cfbf7aa60d8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:13 np0005603609 NetworkManager[49064]: <info>  [1769847013.3226] device (tape8014d6b-20): carrier: link connected
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.327 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f354aae2-7c1d-41d4-9017-4e3ab52df714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.340 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8b843ffb-b883-4f71-8520-9e76c1301f22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725302, 'reachable_time': 27268, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268332, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.351 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9b919a-b699-42d7-9d4a-8db514c71376]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:c1c3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725302, 'tstamp': 725302}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268333, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.363 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3133aa68-db9d-4ef9-9a6d-17f1e8dd8247]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725302, 'reachable_time': 27268, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268334, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.384 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc4fd40-8bdd-4323-bfa1-44cea159e769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.424 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[023ec887-e2d4-456e-b60d-98c6ecf7c0d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.425 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.426 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.426 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8014d6b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:13 np0005603609 kernel: tape8014d6b-20: entered promiscuous mode
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.427 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:13 np0005603609 NetworkManager[49064]: <info>  [1769847013.4294] manager: (tape8014d6b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.431 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.434 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape8014d6b-20, col_values=(('external_ids', {'iface-id': '4bb3ff19-f70b-4c8d-a829-66ff18233b61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.436 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:10:13Z|00468|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.437 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.440 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.441 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.442 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bad5abbb-03f6-4de9-887e-eeabd539c689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.443 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:10:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:13.443 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'env', 'PROCESS_TAG=haproxy-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e8014d6b-23e1-41ef-b5e2-3d770d302e72.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.680 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847013.6796796, 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.680 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.683 221554 DEBUG nova.compute.manager [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.686 221554 INFO nova.virt.libvirt.driver [-] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Instance rebooted successfully.#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.687 221554 DEBUG nova.compute.manager [None req-e8899bd4-5ba7-42a2-800d-978b01bf8306 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.709 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.712 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.754 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.754 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847013.6807122, 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.755 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] VM Started (Lifecycle Event)#033[00m
Jan 31 03:10:13 np0005603609 podman[268408]: 2026-01-31 08:10:13.756010395 +0000 UTC m=+0.047488542 container create a179ca6dc843402ef45566d29b3d57eda34968b81363051125aafe309d666151 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:10:13 np0005603609 systemd[1]: Started libpod-conmon-a179ca6dc843402ef45566d29b3d57eda34968b81363051125aafe309d666151.scope.
Jan 31 03:10:13 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:10:13 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/779edc5b41d10b2b64b8ccb704e48504a16381a2c550717ec70beb4d43b3dc3e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:13 np0005603609 podman[268408]: 2026-01-31 08:10:13.825933316 +0000 UTC m=+0.117411463 container init a179ca6dc843402ef45566d29b3d57eda34968b81363051125aafe309d666151 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:10:13 np0005603609 podman[268408]: 2026-01-31 08:10:13.731209449 +0000 UTC m=+0.022687616 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:10:13 np0005603609 podman[268408]: 2026-01-31 08:10:13.831247053 +0000 UTC m=+0.122725200 container start a179ca6dc843402ef45566d29b3d57eda34968b81363051125aafe309d666151 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 03:10:13 np0005603609 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[268423]: [NOTICE]   (268427) : New worker (268429) forked
Jan 31 03:10:13 np0005603609 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[268423]: [NOTICE]   (268427) : Loading success.
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.882 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.886 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.952 221554 DEBUG nova.compute.manager [req-f967bd95-6cb2-41fe-b554-146dcb4d3185 req-4d886019-c65c-433a-89a0-b86ec6544f45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Received event network-vif-plugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.952 221554 DEBUG oslo_concurrency.lockutils [req-f967bd95-6cb2-41fe-b554-146dcb4d3185 req-4d886019-c65c-433a-89a0-b86ec6544f45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.952 221554 DEBUG oslo_concurrency.lockutils [req-f967bd95-6cb2-41fe-b554-146dcb4d3185 req-4d886019-c65c-433a-89a0-b86ec6544f45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.952 221554 DEBUG oslo_concurrency.lockutils [req-f967bd95-6cb2-41fe-b554-146dcb4d3185 req-4d886019-c65c-433a-89a0-b86ec6544f45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.953 221554 DEBUG nova.compute.manager [req-f967bd95-6cb2-41fe-b554-146dcb4d3185 req-4d886019-c65c-433a-89a0-b86ec6544f45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] No waiting events found dispatching network-vif-plugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:10:13 np0005603609 nova_compute[221550]: 2026-01-31 08:10:13.953 221554 WARNING nova.compute.manager [req-f967bd95-6cb2-41fe-b554-146dcb4d3185 req-4d886019-c65c-433a-89a0-b86ec6544f45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Received unexpected event network-vif-plugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c for instance with vm_state active and task_state None.#033[00m
Jan 31 03:10:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:14.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:14 np0005603609 nova_compute[221550]: 2026-01-31 08:10:14.277 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:14.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:10:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:10:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:10:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:10:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:10:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:10:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:10:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:10:15 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:10:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:15 np0005603609 nova_compute[221550]: 2026-01-31 08:10:15.902 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:15 np0005603609 nova_compute[221550]: 2026-01-31 08:10:15.930 221554 DEBUG oslo_concurrency.lockutils [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:15 np0005603609 nova_compute[221550]: 2026-01-31 08:10:15.930 221554 DEBUG oslo_concurrency.lockutils [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:15 np0005603609 nova_compute[221550]: 2026-01-31 08:10:15.930 221554 DEBUG oslo_concurrency.lockutils [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:15 np0005603609 nova_compute[221550]: 2026-01-31 08:10:15.931 221554 DEBUG oslo_concurrency.lockutils [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:15 np0005603609 nova_compute[221550]: 2026-01-31 08:10:15.931 221554 DEBUG oslo_concurrency.lockutils [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:15 np0005603609 nova_compute[221550]: 2026-01-31 08:10:15.932 221554 INFO nova.compute.manager [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Terminating instance#033[00m
Jan 31 03:10:15 np0005603609 nova_compute[221550]: 2026-01-31 08:10:15.933 221554 DEBUG nova.compute.manager [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:10:15 np0005603609 kernel: tapf99a05c5-fb (unregistering): left promiscuous mode
Jan 31 03:10:15 np0005603609 NetworkManager[49064]: <info>  [1769847015.9767] device (tapf99a05c5-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:10:15 np0005603609 ovn_controller[130359]: 2026-01-31T08:10:15Z|00469|binding|INFO|Releasing lport f99a05c5-fb6e-4cb4-a735-e492526b8a2c from this chassis (sb_readonly=0)
Jan 31 03:10:15 np0005603609 ovn_controller[130359]: 2026-01-31T08:10:15Z|00470|binding|INFO|Setting lport f99a05c5-fb6e-4cb4-a735-e492526b8a2c down in Southbound
Jan 31 03:10:15 np0005603609 nova_compute[221550]: 2026-01-31 08:10:15.984 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:15 np0005603609 ovn_controller[130359]: 2026-01-31T08:10:15Z|00471|binding|INFO|Removing iface tapf99a05c5-fb ovn-installed in OVS
Jan 31 03:10:15 np0005603609 nova_compute[221550]: 2026-01-31 08:10:15.993 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:16 np0005603609 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Jan 31 03:10:16 np0005603609 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d0000006c.scope: Consumed 2.908s CPU time.
Jan 31 03:10:16 np0005603609 systemd-machined[190912]: Machine qemu-57-instance-0000006c terminated.
Jan 31 03:10:16 np0005603609 NetworkManager[49064]: <info>  [1769847016.1488] manager: (tapf99a05c5-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.160 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.174 221554 INFO nova.virt.libvirt.driver [-] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Instance destroyed successfully.#033[00m
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.174 221554 DEBUG nova.objects.instance [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'resources' on Instance uuid 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:10:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:16.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.278 221554 DEBUG nova.compute.manager [req-1298acec-85b4-4012-8691-c1b281b548d1 req-25d5dee8-9580-4a58-8829-83957acc30b9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Received event network-vif-plugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.279 221554 DEBUG oslo_concurrency.lockutils [req-1298acec-85b4-4012-8691-c1b281b548d1 req-25d5dee8-9580-4a58-8829-83957acc30b9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.279 221554 DEBUG oslo_concurrency.lockutils [req-1298acec-85b4-4012-8691-c1b281b548d1 req-25d5dee8-9580-4a58-8829-83957acc30b9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.279 221554 DEBUG oslo_concurrency.lockutils [req-1298acec-85b4-4012-8691-c1b281b548d1 req-25d5dee8-9580-4a58-8829-83957acc30b9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.279 221554 DEBUG nova.compute.manager [req-1298acec-85b4-4012-8691-c1b281b548d1 req-25d5dee8-9580-4a58-8829-83957acc30b9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] No waiting events found dispatching network-vif-plugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.280 221554 WARNING nova.compute.manager [req-1298acec-85b4-4012-8691-c1b281b548d1 req-25d5dee8-9580-4a58-8829-83957acc30b9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Received unexpected event network-vif-plugged-f99a05c5-fb6e-4cb4-a735-e492526b8a2c for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:10:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:10:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:16.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.630 221554 DEBUG nova.virt.libvirt.vif [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-487797123',display_name='tempest-ServerActionsTestOtherB-server-487797123',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-487797123',id=108,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:09:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-opoua6f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:10:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=1d463d8d-cbf7-43ec-8042-83d2c9aa5c69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.631 221554 DEBUG nova.network.os_vif_util [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "address": "fa:16:3e:e8:a2:84", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf99a05c5-fb", "ovs_interfaceid": "f99a05c5-fb6e-4cb4-a735-e492526b8a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.632 221554 DEBUG nova.network.os_vif_util [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.632 221554 DEBUG os_vif [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.635 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.636 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf99a05c5-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.638 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.639 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.643 221554 INFO os_vif [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a2:84,bridge_name='br-int',has_traffic_filtering=True,id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf99a05c5-fb')#033[00m
Jan 31 03:10:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:16.729 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:a2:84 10.100.0.5'], port_security=['fa:16:3e:e8:a2:84 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1d463d8d-cbf7-43ec-8042-83d2c9aa5c69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5b02cc0a-856b-4d31-80e9-eccd1c696448', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.236', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=f99a05c5-fb6e-4cb4-a735-e492526b8a2c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:10:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:16.731 140058 INFO neutron.agent.ovn.metadata.agent [-] Port f99a05c5-fb6e-4cb4-a735-e492526b8a2c in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 unbound from our chassis#033[00m
Jan 31 03:10:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:16.732 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e8014d6b-23e1-41ef-b5e2-3d770d302e72, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:10:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:16.733 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1a55df51-e7ca-458f-b290-1382763e68fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:16.734 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 namespace which is not needed anymore#033[00m
Jan 31 03:10:16 np0005603609 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[268423]: [NOTICE]   (268427) : haproxy version is 2.8.14-c23fe91
Jan 31 03:10:16 np0005603609 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[268423]: [NOTICE]   (268427) : path to executable is /usr/sbin/haproxy
Jan 31 03:10:16 np0005603609 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[268423]: [WARNING]  (268427) : Exiting Master process...
Jan 31 03:10:16 np0005603609 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[268423]: [ALERT]    (268427) : Current worker (268429) exited with code 143 (Terminated)
Jan 31 03:10:16 np0005603609 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[268423]: [WARNING]  (268427) : All workers exited. Exiting... (0)
Jan 31 03:10:16 np0005603609 systemd[1]: libpod-a179ca6dc843402ef45566d29b3d57eda34968b81363051125aafe309d666151.scope: Deactivated successfully.
Jan 31 03:10:16 np0005603609 podman[268741]: 2026-01-31 08:10:16.874092632 +0000 UTC m=+0.051446958 container died a179ca6dc843402ef45566d29b3d57eda34968b81363051125aafe309d666151 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:10:16 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a179ca6dc843402ef45566d29b3d57eda34968b81363051125aafe309d666151-userdata-shm.mount: Deactivated successfully.
Jan 31 03:10:16 np0005603609 systemd[1]: var-lib-containers-storage-overlay-779edc5b41d10b2b64b8ccb704e48504a16381a2c550717ec70beb4d43b3dc3e-merged.mount: Deactivated successfully.
Jan 31 03:10:16 np0005603609 podman[268741]: 2026-01-31 08:10:16.928342966 +0000 UTC m=+0.105697292 container cleanup a179ca6dc843402ef45566d29b3d57eda34968b81363051125aafe309d666151 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:10:16 np0005603609 systemd[1]: libpod-conmon-a179ca6dc843402ef45566d29b3d57eda34968b81363051125aafe309d666151.scope: Deactivated successfully.
Jan 31 03:10:16 np0005603609 podman[268771]: 2026-01-31 08:10:16.979803632 +0000 UTC m=+0.037715206 container remove a179ca6dc843402ef45566d29b3d57eda34968b81363051125aafe309d666151 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 03:10:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:16.985 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[34634d2b-e9a9-4764-b04f-8c52f5c036b0]: (4, ('Sat Jan 31 08:10:16 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 (a179ca6dc843402ef45566d29b3d57eda34968b81363051125aafe309d666151)\na179ca6dc843402ef45566d29b3d57eda34968b81363051125aafe309d666151\nSat Jan 31 08:10:16 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 (a179ca6dc843402ef45566d29b3d57eda34968b81363051125aafe309d666151)\na179ca6dc843402ef45566d29b3d57eda34968b81363051125aafe309d666151\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:16.987 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c41ab264-6b99-451f-8518-d75d1923921b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:16.989 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:16 np0005603609 kernel: tape8014d6b-20: left promiscuous mode
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.991 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:16 np0005603609 nova_compute[221550]: 2026-01-31 08:10:16.996 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:17.000 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4c4e6b-65c0-4e35-aded-c79d1991fc90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:17.019 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[635687b4-d617-411d-972e-75f4c2920068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:17.021 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bcafe8ec-f594-4949-b67e-018dbc5b8233]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:17.032 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5d41ec54-06ef-4153-b4f7-2b163adf1762]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725297, 'reachable_time': 39363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268786, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:17.034 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:10:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:17.034 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb0de22-c2a6-4c2d-929e-9f693b474515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:17 np0005603609 systemd[1]: run-netns-ovnmeta\x2de8014d6b\x2d23e1\x2d41ef\x2db5e2\x2d3d770d302e72.mount: Deactivated successfully.
Jan 31 03:10:17 np0005603609 nova_compute[221550]: 2026-01-31 08:10:17.134 221554 INFO nova.virt.libvirt.driver [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Deleting instance files /var/lib/nova/instances/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_del#033[00m
Jan 31 03:10:17 np0005603609 nova_compute[221550]: 2026-01-31 08:10:17.135 221554 INFO nova.virt.libvirt.driver [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Deletion of /var/lib/nova/instances/1d463d8d-cbf7-43ec-8042-83d2c9aa5c69_del complete#033[00m
Jan 31 03:10:17 np0005603609 nova_compute[221550]: 2026-01-31 08:10:17.413 221554 INFO nova.compute.manager [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Took 1.48 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:10:17 np0005603609 nova_compute[221550]: 2026-01-31 08:10:17.414 221554 DEBUG oslo.service.loopingcall [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:10:17 np0005603609 nova_compute[221550]: 2026-01-31 08:10:17.414 221554 DEBUG nova.compute.manager [-] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:10:17 np0005603609 nova_compute[221550]: 2026-01-31 08:10:17.415 221554 DEBUG nova.network.neutron [-] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:10:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:18.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:18.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:19 np0005603609 nova_compute[221550]: 2026-01-31 08:10:19.277 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:19 np0005603609 nova_compute[221550]: 2026-01-31 08:10:19.820 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:19.821 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:10:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:19.823 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:10:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:20.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:20 np0005603609 nova_compute[221550]: 2026-01-31 08:10:20.369 221554 DEBUG nova.network.neutron [-] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:10:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:20.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:20 np0005603609 nova_compute[221550]: 2026-01-31 08:10:20.564 221554 DEBUG nova.compute.manager [req-4490243f-c120-4049-a6a3-c38d850c5f0b req-55972a02-3d64-4c4d-b0a7-7c0e64344d96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Received event network-vif-deleted-f99a05c5-fb6e-4cb4-a735-e492526b8a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:10:20 np0005603609 nova_compute[221550]: 2026-01-31 08:10:20.565 221554 INFO nova.compute.manager [req-4490243f-c120-4049-a6a3-c38d850c5f0b req-55972a02-3d64-4c4d-b0a7-7c0e64344d96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Neutron deleted interface f99a05c5-fb6e-4cb4-a735-e492526b8a2c; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:10:20 np0005603609 nova_compute[221550]: 2026-01-31 08:10:20.565 221554 DEBUG nova.network.neutron [req-4490243f-c120-4049-a6a3-c38d850c5f0b req-55972a02-3d64-4c4d-b0a7-7c0e64344d96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:10:20 np0005603609 nova_compute[221550]: 2026-01-31 08:10:20.595 221554 INFO nova.compute.manager [-] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Took 3.18 seconds to deallocate network for instance.#033[00m
Jan 31 03:10:20 np0005603609 nova_compute[221550]: 2026-01-31 08:10:20.638 221554 DEBUG nova.compute.manager [req-4490243f-c120-4049-a6a3-c38d850c5f0b req-55972a02-3d64-4c4d-b0a7-7c0e64344d96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Detach interface failed, port_id=f99a05c5-fb6e-4cb4-a735-e492526b8a2c, reason: Instance 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:10:20 np0005603609 nova_compute[221550]: 2026-01-31 08:10:20.681 221554 DEBUG oslo_concurrency.lockutils [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:20 np0005603609 nova_compute[221550]: 2026-01-31 08:10:20.682 221554 DEBUG oslo_concurrency.lockutils [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:20 np0005603609 nova_compute[221550]: 2026-01-31 08:10:20.687 221554 DEBUG oslo_concurrency.lockutils [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:20 np0005603609 nova_compute[221550]: 2026-01-31 08:10:20.746 221554 INFO nova.scheduler.client.report [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Deleted allocations for instance 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69#033[00m
Jan 31 03:10:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:20 np0005603609 nova_compute[221550]: 2026-01-31 08:10:20.971 221554 DEBUG oslo_concurrency.lockutils [None req-25fd5608-a03c-4473-b76b-745370f1768b 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "1d463d8d-cbf7-43ec-8042-83d2c9aa5c69" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:21 np0005603609 nova_compute[221550]: 2026-01-31 08:10:21.004 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:21 np0005603609 nova_compute[221550]: 2026-01-31 08:10:21.638 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:22.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:10:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:10:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:22.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:22 np0005603609 nova_compute[221550]: 2026-01-31 08:10:22.942 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:23 np0005603609 nova_compute[221550]: 2026-01-31 08:10:23.559 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:23 np0005603609 nova_compute[221550]: 2026-01-31 08:10:23.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:24.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:24 np0005603609 nova_compute[221550]: 2026-01-31 08:10:24.280 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:24.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:10:24.825 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:25 np0005603609 nova_compute[221550]: 2026-01-31 08:10:25.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:25 np0005603609 nova_compute[221550]: 2026-01-31 08:10:25.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:26.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:10:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:26.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:10:26 np0005603609 nova_compute[221550]: 2026-01-31 08:10:26.659 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:28.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:10:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:28.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:10:28 np0005603609 nova_compute[221550]: 2026-01-31 08:10:28.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:29 np0005603609 nova_compute[221550]: 2026-01-31 08:10:29.070 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:29 np0005603609 nova_compute[221550]: 2026-01-31 08:10:29.137 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:29 np0005603609 nova_compute[221550]: 2026-01-31 08:10:29.281 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:29 np0005603609 nova_compute[221550]: 2026-01-31 08:10:29.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:29 np0005603609 nova_compute[221550]: 2026-01-31 08:10:29.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:10:29 np0005603609 nova_compute[221550]: 2026-01-31 08:10:29.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:10:29 np0005603609 nova_compute[221550]: 2026-01-31 08:10:29.677 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:10:29 np0005603609 nova_compute[221550]: 2026-01-31 08:10:29.678 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:29 np0005603609 nova_compute[221550]: 2026-01-31 08:10:29.678 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:10:29 np0005603609 nova_compute[221550]: 2026-01-31 08:10:29.678 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:29 np0005603609 nova_compute[221550]: 2026-01-31 08:10:29.706 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:29 np0005603609 nova_compute[221550]: 2026-01-31 08:10:29.706 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:29 np0005603609 nova_compute[221550]: 2026-01-31 08:10:29.706 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:29 np0005603609 nova_compute[221550]: 2026-01-31 08:10:29.707 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:10:29 np0005603609 nova_compute[221550]: 2026-01-31 08:10:29.707 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:10:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:10:30 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2174295167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:10:30 np0005603609 nova_compute[221550]: 2026-01-31 08:10:30.100 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:10:30 np0005603609 nova_compute[221550]: 2026-01-31 08:10:30.235 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:10:30 np0005603609 nova_compute[221550]: 2026-01-31 08:10:30.236 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4524MB free_disk=20.85269546508789GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:10:30 np0005603609 nova_compute[221550]: 2026-01-31 08:10:30.236 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:30 np0005603609 nova_compute[221550]: 2026-01-31 08:10:30.237 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:10:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:30.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:10:30 np0005603609 nova_compute[221550]: 2026-01-31 08:10:30.310 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:10:30 np0005603609 nova_compute[221550]: 2026-01-31 08:10:30.311 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:10:30 np0005603609 nova_compute[221550]: 2026-01-31 08:10:30.339 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:10:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:30.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:10:30 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2742948316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:10:30 np0005603609 nova_compute[221550]: 2026-01-31 08:10:30.743 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:10:30 np0005603609 nova_compute[221550]: 2026-01-31 08:10:30.747 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:10:30 np0005603609 nova_compute[221550]: 2026-01-31 08:10:30.779 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:10:30 np0005603609 nova_compute[221550]: 2026-01-31 08:10:30.830 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:10:30 np0005603609 nova_compute[221550]: 2026-01-31 08:10:30.830 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:31 np0005603609 nova_compute[221550]: 2026-01-31 08:10:31.173 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847016.1713147, 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:10:31 np0005603609 nova_compute[221550]: 2026-01-31 08:10:31.173 221554 INFO nova.compute.manager [-] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:10:31 np0005603609 nova_compute[221550]: 2026-01-31 08:10:31.204 221554 DEBUG nova.compute.manager [None req-597624b3-e20a-4a7d-a067-7ade13fc0c54 - - - - - -] [instance: 1d463d8d-cbf7-43ec-8042-83d2c9aa5c69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:10:31 np0005603609 nova_compute[221550]: 2026-01-31 08:10:31.662 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:31 np0005603609 nova_compute[221550]: 2026-01-31 08:10:31.811 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:32.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:32.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:34.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:34 np0005603609 nova_compute[221550]: 2026-01-31 08:10:34.284 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:10:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:34.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:10:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:10:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:36.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:10:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:36.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:36 np0005603609 nova_compute[221550]: 2026-01-31 08:10:36.665 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:37 np0005603609 podman[268885]: 2026-01-31 08:10:37.169659336 +0000 UTC m=+0.049972371 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:10:37 np0005603609 podman[268884]: 2026-01-31 08:10:37.203073569 +0000 UTC m=+0.079707747 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, 
managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:10:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:38.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:38.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:39 np0005603609 nova_compute[221550]: 2026-01-31 08:10:39.597 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:10:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:40.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:10:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:40.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:41 np0005603609 nova_compute[221550]: 2026-01-31 08:10:41.668 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:42.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:10:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:42.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:10:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:10:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4087330589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:10:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:10:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:44.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:10:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:10:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:44.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:10:44 np0005603609 nova_compute[221550]: 2026-01-31 08:10:44.599 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:10:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4221475382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:10:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:10:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4221475382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:10:45 np0005603609 nova_compute[221550]: 2026-01-31 08:10:45.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:10:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:46.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:10:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:10:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:46.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:10:46 np0005603609 nova_compute[221550]: 2026-01-31 08:10:46.671 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:48.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:48.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:49 np0005603609 nova_compute[221550]: 2026-01-31 08:10:49.601 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:50.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:10:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:50.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:10:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:51 np0005603609 nova_compute[221550]: 2026-01-31 08:10:51.674 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:10:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:52.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:10:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:52.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:10:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:54.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:10:54 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 31 03:10:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:10:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:54.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:10:54 np0005603609 nova_compute[221550]: 2026-01-31 08:10:54.604 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:10:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:56.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:10:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:10:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:56.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:10:56 np0005603609 nova_compute[221550]: 2026-01-31 08:10:56.676 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:58.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:10:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:58.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:59 np0005603609 nova_compute[221550]: 2026-01-31 08:10:59.606 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:00.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:00.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:01 np0005603609 nova_compute[221550]: 2026-01-31 08:11:01.679 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:02.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:02.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:04.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:11:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:04.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:11:04 np0005603609 nova_compute[221550]: 2026-01-31 08:11:04.608 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:06.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:06.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:06 np0005603609 nova_compute[221550]: 2026-01-31 08:11:06.683 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:07.504 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:07.504 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:07.505 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:08 np0005603609 podman[268933]: 2026-01-31 08:11:08.179974957 +0000 UTC m=+0.054111330 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 31 03:11:08 np0005603609 podman[268932]: 2026-01-31 08:11:08.226515387 +0000 UTC m=+0.106785328 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 31 03:11:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:08.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:08.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:09 np0005603609 nova_compute[221550]: 2026-01-31 08:11:09.610 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:10.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:10.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:10 np0005603609 nova_compute[221550]: 2026-01-31 08:11:10.868 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:10 np0005603609 nova_compute[221550]: 2026-01-31 08:11:10.868 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:10 np0005603609 nova_compute[221550]: 2026-01-31 08:11:10.930 221554 DEBUG nova.compute.manager [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:11:11 np0005603609 nova_compute[221550]: 2026-01-31 08:11:11.093 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:11 np0005603609 nova_compute[221550]: 2026-01-31 08:11:11.094 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:11 np0005603609 nova_compute[221550]: 2026-01-31 08:11:11.100 221554 DEBUG nova.virt.hardware [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:11:11 np0005603609 nova_compute[221550]: 2026-01-31 08:11:11.100 221554 INFO nova.compute.claims [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:11:11 np0005603609 nova_compute[221550]: 2026-01-31 08:11:11.503 221554 DEBUG oslo_concurrency.processutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:11 np0005603609 nova_compute[221550]: 2026-01-31 08:11:11.685 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:11:11 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1691286314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:11:11 np0005603609 nova_compute[221550]: 2026-01-31 08:11:11.974 221554 DEBUG oslo_concurrency.processutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:11 np0005603609 nova_compute[221550]: 2026-01-31 08:11:11.981 221554 DEBUG nova.compute.provider_tree [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:11:12 np0005603609 nova_compute[221550]: 2026-01-31 08:11:12.031 221554 DEBUG nova.scheduler.client.report [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:11:12 np0005603609 nova_compute[221550]: 2026-01-31 08:11:12.157 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:12 np0005603609 nova_compute[221550]: 2026-01-31 08:11:12.158 221554 DEBUG nova.compute.manager [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:11:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:11:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:12.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:11:12 np0005603609 nova_compute[221550]: 2026-01-31 08:11:12.409 221554 DEBUG nova.compute.manager [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:11:12 np0005603609 nova_compute[221550]: 2026-01-31 08:11:12.410 221554 DEBUG nova.network.neutron [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:11:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:12.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:12 np0005603609 nova_compute[221550]: 2026-01-31 08:11:12.852 221554 DEBUG nova.policy [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd7d9a44201d548aba1e1654e136ddd06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:11:12 np0005603609 nova_compute[221550]: 2026-01-31 08:11:12.859 221554 INFO nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.045 221554 DEBUG nova.compute.manager [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.241 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "878fb573-f407-49e8-9643-f5a766a77df9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.241 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "878fb573-f407-49e8-9643-f5a766a77df9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.457 221554 DEBUG nova.compute.manager [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.479 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Acquiring lock "f2ba5390-a359-4227-8ce2-17e1573171ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.479 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lock "f2ba5390-a359-4227-8ce2-17e1573171ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.612 221554 DEBUG nova.compute.manager [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.613 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.613 221554 INFO nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Creating image(s)#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.641 221554 DEBUG nova.storage.rbd_utils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 981eb990-ec0c-4673-98e7-102afbe0bb51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.667 221554 DEBUG nova.storage.rbd_utils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 981eb990-ec0c-4673-98e7-102afbe0bb51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.691 221554 DEBUG nova.storage.rbd_utils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 981eb990-ec0c-4673-98e7-102afbe0bb51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.694 221554 DEBUG oslo_concurrency.processutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.727 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.728 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.735 221554 DEBUG nova.virt.hardware [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.735 221554 INFO nova.compute.claims [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.743 221554 DEBUG oslo_concurrency.processutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.744 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.744 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.745 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.766 221554 DEBUG nova.storage.rbd_utils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 981eb990-ec0c-4673-98e7-102afbe0bb51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.769 221554 DEBUG oslo_concurrency.processutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 981eb990-ec0c-4673-98e7-102afbe0bb51_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:13 np0005603609 nova_compute[221550]: 2026-01-31 08:11:13.790 221554 DEBUG nova.compute.manager [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:11:14 np0005603609 nova_compute[221550]: 2026-01-31 08:11:14.058 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:14 np0005603609 nova_compute[221550]: 2026-01-31 08:11:14.092 221554 DEBUG oslo_concurrency.processutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 981eb990-ec0c-4673-98e7-102afbe0bb51_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:14 np0005603609 nova_compute[221550]: 2026-01-31 08:11:14.162 221554 DEBUG nova.network.neutron [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Successfully created port: 05c06d54-1257-48d3-8a5a-f92423aadbd8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:11:14 np0005603609 nova_compute[221550]: 2026-01-31 08:11:14.169 221554 DEBUG nova.storage.rbd_utils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] resizing rbd image 981eb990-ec0c-4673-98e7-102afbe0bb51_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:11:14 np0005603609 nova_compute[221550]: 2026-01-31 08:11:14.202 221554 DEBUG oslo_concurrency.processutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:14 np0005603609 nova_compute[221550]: 2026-01-31 08:11:14.281 221554 DEBUG nova.objects.instance [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'migration_context' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:11:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:14.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:11:14 np0005603609 nova_compute[221550]: 2026-01-31 08:11:14.342 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:11:14 np0005603609 nova_compute[221550]: 2026-01-31 08:11:14.342 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Ensure instance console log exists: /var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:11:14 np0005603609 nova_compute[221550]: 2026-01-31 08:11:14.343 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:14 np0005603609 nova_compute[221550]: 2026-01-31 08:11:14.344 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:14 np0005603609 nova_compute[221550]: 2026-01-31 08:11:14.344 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:11:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:14.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:11:14 np0005603609 nova_compute[221550]: 2026-01-31 08:11:14.612 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:11:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3639175740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:11:14 np0005603609 nova_compute[221550]: 2026-01-31 08:11:14.637 221554 DEBUG oslo_concurrency.processutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:14 np0005603609 nova_compute[221550]: 2026-01-31 08:11:14.643 221554 DEBUG nova.compute.provider_tree [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:11:14 np0005603609 nova_compute[221550]: 2026-01-31 08:11:14.938 221554 DEBUG nova.scheduler.client.report [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.359 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.361 221554 DEBUG nova.compute.manager [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.366 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.382 221554 DEBUG nova.virt.hardware [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.382 221554 INFO nova.compute.claims [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.554 221554 DEBUG nova.compute.manager [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.555 221554 DEBUG nova.network.neutron [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.635 221554 DEBUG nova.network.neutron [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Successfully updated port: 05c06d54-1257-48d3-8a5a-f92423aadbd8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.663 221554 INFO nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.772 221554 DEBUG nova.compute.manager [req-d0ada800-9c24-40fc-8afe-6402afbd7d7e req-9a707ab3-6d8f-417d-ba51-e1039cf7aa3c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received event network-changed-05c06d54-1257-48d3-8a5a-f92423aadbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.772 221554 DEBUG nova.compute.manager [req-d0ada800-9c24-40fc-8afe-6402afbd7d7e req-9a707ab3-6d8f-417d-ba51-e1039cf7aa3c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Refreshing instance network info cache due to event network-changed-05c06d54-1257-48d3-8a5a-f92423aadbd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.773 221554 DEBUG oslo_concurrency.lockutils [req-d0ada800-9c24-40fc-8afe-6402afbd7d7e req-9a707ab3-6d8f-417d-ba51-e1039cf7aa3c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.773 221554 DEBUG oslo_concurrency.lockutils [req-d0ada800-9c24-40fc-8afe-6402afbd7d7e req-9a707ab3-6d8f-417d-ba51-e1039cf7aa3c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.773 221554 DEBUG nova.network.neutron [req-d0ada800-9c24-40fc-8afe-6402afbd7d7e req-9a707ab3-6d8f-417d-ba51-e1039cf7aa3c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Refreshing network info cache for port 05c06d54-1257-48d3-8a5a-f92423aadbd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.843 221554 DEBUG nova.policy [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5366d122b359489fb9d2bda8d19611a6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:11:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.875 221554 DEBUG nova.compute.manager [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:11:15 np0005603609 nova_compute[221550]: 2026-01-31 08:11:15.880 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.018 221554 DEBUG oslo_concurrency.processutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.068 221554 DEBUG nova.network.neutron [req-d0ada800-9c24-40fc-8afe-6402afbd7d7e req-9a707ab3-6d8f-417d-ba51-e1039cf7aa3c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.237 221554 DEBUG nova.compute.manager [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.241 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.242 221554 INFO nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Creating image(s)#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.279 221554 DEBUG nova.storage.rbd_utils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 878fb573-f407-49e8-9643-f5a766a77df9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:11:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:16.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.326 221554 DEBUG nova.storage.rbd_utils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 878fb573-f407-49e8-9643-f5a766a77df9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.358 221554 DEBUG nova.storage.rbd_utils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 878fb573-f407-49e8-9643-f5a766a77df9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.362 221554 DEBUG oslo_concurrency.processutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.420 221554 DEBUG oslo_concurrency.processutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.421 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.421 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.422 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.452 221554 DEBUG nova.storage.rbd_utils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 878fb573-f407-49e8-9643-f5a766a77df9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.456 221554 DEBUG oslo_concurrency.processutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 878fb573-f407-49e8-9643-f5a766a77df9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.497 221554 DEBUG nova.network.neutron [req-d0ada800-9c24-40fc-8afe-6402afbd7d7e req-9a707ab3-6d8f-417d-ba51-e1039cf7aa3c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:11:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:11:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3159024888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:11:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.543 221554 DEBUG oslo_concurrency.processutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:11:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:16.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.550 221554 DEBUG nova.compute.provider_tree [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.666 221554 DEBUG oslo_concurrency.lockutils [req-d0ada800-9c24-40fc-8afe-6402afbd7d7e req-9a707ab3-6d8f-417d-ba51-e1039cf7aa3c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.668 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquired lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.668 221554 DEBUG nova.network.neutron [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.727 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.731 221554 DEBUG nova.scheduler.client.report [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.880 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:16 np0005603609 nova_compute[221550]: 2026-01-31 08:11:16.881 221554 DEBUG nova.compute.manager [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:11:17 np0005603609 nova_compute[221550]: 2026-01-31 08:11:17.039 221554 DEBUG oslo_concurrency.processutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 878fb573-f407-49e8-9643-f5a766a77df9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:17 np0005603609 nova_compute[221550]: 2026-01-31 08:11:17.078 221554 DEBUG nova.compute.manager [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:11:17 np0005603609 nova_compute[221550]: 2026-01-31 08:11:17.078 221554 DEBUG nova.network.neutron [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:11:17 np0005603609 nova_compute[221550]: 2026-01-31 08:11:17.126 221554 DEBUG nova.storage.rbd_utils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] resizing rbd image 878fb573-f407-49e8-9643-f5a766a77df9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:11:17 np0005603609 nova_compute[221550]: 2026-01-31 08:11:17.236 221554 DEBUG nova.objects.instance [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'migration_context' on Instance uuid 878fb573-f407-49e8-9643-f5a766a77df9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:17 np0005603609 nova_compute[221550]: 2026-01-31 08:11:17.242 221554 INFO nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:11:17 np0005603609 nova_compute[221550]: 2026-01-31 08:11:17.298 221554 DEBUG nova.network.neutron [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:11:17 np0005603609 nova_compute[221550]: 2026-01-31 08:11:17.301 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:11:17 np0005603609 nova_compute[221550]: 2026-01-31 08:11:17.301 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Ensure instance console log exists: /var/lib/nova/instances/878fb573-f407-49e8-9643-f5a766a77df9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:11:17 np0005603609 nova_compute[221550]: 2026-01-31 08:11:17.301 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:17 np0005603609 nova_compute[221550]: 2026-01-31 08:11:17.302 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:17 np0005603609 nova_compute[221550]: 2026-01-31 08:11:17.302 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:17 np0005603609 nova_compute[221550]: 2026-01-31 08:11:17.431 221554 DEBUG nova.compute.manager [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:11:17 np0005603609 nova_compute[221550]: 2026-01-31 08:11:17.491 221554 DEBUG nova.policy [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '358ec3f014264f7c89f19a9ad2ce8166', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0268430b3cb6412c94362feb0e114d3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:11:17 np0005603609 nova_compute[221550]: 2026-01-31 08:11:17.616 221554 DEBUG nova.network.neutron [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Successfully created port: 424ad9b9-f72c-470d-8fb9-0b6381c8bc6b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.028 221554 DEBUG nova.compute.manager [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.030 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.030 221554 INFO nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Creating image(s)#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.063 221554 DEBUG nova.storage.rbd_utils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] rbd image f2ba5390-a359-4227-8ce2-17e1573171ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.091 221554 DEBUG nova.storage.rbd_utils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] rbd image f2ba5390-a359-4227-8ce2-17e1573171ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.122 221554 DEBUG nova.storage.rbd_utils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] rbd image f2ba5390-a359-4227-8ce2-17e1573171ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.127 221554 DEBUG oslo_concurrency.processutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.210 221554 DEBUG oslo_concurrency.processutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.212 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.213 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.213 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.249 221554 DEBUG nova.storage.rbd_utils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] rbd image f2ba5390-a359-4227-8ce2-17e1573171ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.253 221554 DEBUG oslo_concurrency.processutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 f2ba5390-a359-4227-8ce2-17e1573171ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:18.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.510 221554 DEBUG oslo_concurrency.processutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 f2ba5390-a359-4227-8ce2-17e1573171ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:18.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.589 221554 DEBUG nova.storage.rbd_utils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] resizing rbd image f2ba5390-a359-4227-8ce2-17e1573171ef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.700 221554 DEBUG nova.objects.instance [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lazy-loading 'migration_context' on Instance uuid f2ba5390-a359-4227-8ce2-17e1573171ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.718 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.719 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Ensure instance console log exists: /var/lib/nova/instances/f2ba5390-a359-4227-8ce2-17e1573171ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.719 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.720 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:18 np0005603609 nova_compute[221550]: 2026-01-31 08:11:18.720 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.452 221554 DEBUG nova.network.neutron [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Successfully created port: 4ee573ab-a829-4705-a1e2-814b74c9d270 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.456 221554 DEBUG nova.network.neutron [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Successfully updated port: 424ad9b9-f72c-470d-8fb9-0b6381c8bc6b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.478 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "refresh_cache-878fb573-f407-49e8-9643-f5a766a77df9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.479 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquired lock "refresh_cache-878fb573-f407-49e8-9643-f5a766a77df9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.479 221554 DEBUG nova.network.neutron [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.613 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.621 221554 DEBUG nova.compute.manager [req-6c431ef3-bab2-44c0-ba89-53419e0105d8 req-d6661594-876f-4e89-815a-af34902ba3d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Received event network-changed-424ad9b9-f72c-470d-8fb9-0b6381c8bc6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.622 221554 DEBUG nova.compute.manager [req-6c431ef3-bab2-44c0-ba89-53419e0105d8 req-d6661594-876f-4e89-815a-af34902ba3d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Refreshing instance network info cache due to event network-changed-424ad9b9-f72c-470d-8fb9-0b6381c8bc6b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.623 221554 DEBUG oslo_concurrency.lockutils [req-6c431ef3-bab2-44c0-ba89-53419e0105d8 req-d6661594-876f-4e89-815a-af34902ba3d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-878fb573-f407-49e8-9643-f5a766a77df9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.657 221554 DEBUG nova.network.neutron [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Updating instance_info_cache with network_info: [{"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.695 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Releasing lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.696 221554 DEBUG nova.compute.manager [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Instance network_info: |[{"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.701 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Start _get_guest_xml network_info=[{"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.708 221554 WARNING nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.715 221554 DEBUG nova.virt.libvirt.host [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.716 221554 DEBUG nova.virt.libvirt.host [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.721 221554 DEBUG nova.virt.libvirt.host [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.722 221554 DEBUG nova.virt.libvirt.host [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.724 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.725 221554 DEBUG nova.virt.hardware [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.726 221554 DEBUG nova.virt.hardware [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.727 221554 DEBUG nova.virt.hardware [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.727 221554 DEBUG nova.virt.hardware [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.728 221554 DEBUG nova.virt.hardware [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.728 221554 DEBUG nova.virt.hardware [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.729 221554 DEBUG nova.virt.hardware [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.729 221554 DEBUG nova.virt.hardware [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.730 221554 DEBUG nova.virt.hardware [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.730 221554 DEBUG nova.virt.hardware [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.731 221554 DEBUG nova.virt.hardware [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.736 221554 DEBUG oslo_concurrency.processutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:19 np0005603609 nova_compute[221550]: 2026-01-31 08:11:19.762 221554 DEBUG nova.network.neutron [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:11:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:11:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/788118537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.210 221554 DEBUG oslo_concurrency.processutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.232 221554 DEBUG nova.storage.rbd_utils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 981eb990-ec0c-4673-98e7-102afbe0bb51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.236 221554 DEBUG oslo_concurrency.processutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:20.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:20.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:11:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/810772048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.645 221554 DEBUG oslo_concurrency.processutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.647 221554 DEBUG nova.virt.libvirt.vif [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:11:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-562797465',display_name='tempest-ServerStableDeviceRescueTest-server-562797465',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-562797465',id=121,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1633c84ea1bf46b080aaafd30bbcf25f',ramdisk_id='',reservation_id='r-orbojs0e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-569420416',owner_user_name='te
mpest-ServerStableDeviceRescueTest-569420416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:11:13Z,user_data=None,user_id='d7d9a44201d548aba1e1654e136ddd06',uuid=981eb990-ec0c-4673-98e7-102afbe0bb51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.647 221554 DEBUG nova.network.os_vif_util [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converting VIF {"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.648 221554 DEBUG nova.network.os_vif_util [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:5b:36,bridge_name='br-int',has_traffic_filtering=True,id=05c06d54-1257-48d3-8a5a-f92423aadbd8,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c06d54-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.649 221554 DEBUG nova.objects.instance [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'pci_devices' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.663 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  <uuid>981eb990-ec0c-4673-98e7-102afbe0bb51</uuid>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  <name>instance-00000079</name>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-562797465</nova:name>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:11:19</nova:creationTime>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        <nova:user uuid="d7d9a44201d548aba1e1654e136ddd06">tempest-ServerStableDeviceRescueTest-569420416-project-member</nova:user>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        <nova:project uuid="1633c84ea1bf46b080aaafd30bbcf25f">tempest-ServerStableDeviceRescueTest-569420416</nova:project>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        <nova:port uuid="05c06d54-1257-48d3-8a5a-f92423aadbd8">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <entry name="serial">981eb990-ec0c-4673-98e7-102afbe0bb51</entry>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <entry name="uuid">981eb990-ec0c-4673-98e7-102afbe0bb51</entry>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/981eb990-ec0c-4673-98e7-102afbe0bb51_disk">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/981eb990-ec0c-4673-98e7-102afbe0bb51_disk.config">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:3e:5b:36"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <target dev="tap05c06d54-12"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51/console.log" append="off"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:11:20 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:11:20 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:11:20 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:11:20 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.664 221554 DEBUG nova.compute.manager [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Preparing to wait for external event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.664 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.665 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.665 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.666 221554 DEBUG nova.virt.libvirt.vif [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:11:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-562797465',display_name='tempest-ServerStableDeviceRescueTest-server-562797465',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-562797465',id=121,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1633c84ea1bf46b080aaafd30bbcf25f',ramdisk_id='',reservation_id='r-orbojs0e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-569420416',owner_user_name='tempest-ServerStableDeviceRescueTest-569420416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:11:13Z,user_data=None,user_id='d7d9a44201d548aba1e1654e136ddd06',uuid=981eb990-ec0c-4673-98e7-102afbe0bb51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.666 221554 DEBUG nova.network.os_vif_util [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converting VIF {"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.667 221554 DEBUG nova.network.os_vif_util [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:5b:36,bridge_name='br-int',has_traffic_filtering=True,id=05c06d54-1257-48d3-8a5a-f92423aadbd8,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c06d54-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.667 221554 DEBUG os_vif [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:5b:36,bridge_name='br-int',has_traffic_filtering=True,id=05c06d54-1257-48d3-8a5a-f92423aadbd8,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c06d54-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.668 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.668 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.669 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.672 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.672 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05c06d54-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.673 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05c06d54-12, col_values=(('external_ids', {'iface-id': '05c06d54-1257-48d3-8a5a-f92423aadbd8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:5b:36', 'vm-uuid': '981eb990-ec0c-4673-98e7-102afbe0bb51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.674 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:20 np0005603609 NetworkManager[49064]: <info>  [1769847080.6751] manager: (tap05c06d54-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.677 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.679 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.680 221554 INFO os_vif [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:5b:36,bridge_name='br-int',has_traffic_filtering=True,id=05c06d54-1257-48d3-8a5a-f92423aadbd8,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c06d54-12')#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.727 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.727 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.728 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No VIF found with MAC fa:16:3e:3e:5b:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.728 221554 INFO nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Using config drive#033[00m
Jan 31 03:11:20 np0005603609 nova_compute[221550]: 2026-01-31 08:11:20.752 221554 DEBUG nova.storage.rbd_utils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 981eb990-ec0c-4673-98e7-102afbe0bb51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.037 221554 DEBUG nova.network.neutron [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Updating instance_info_cache with network_info: [{"id": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "address": "fa:16:3e:a9:69:81", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424ad9b9-f7", "ovs_interfaceid": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.059 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Releasing lock "refresh_cache-878fb573-f407-49e8-9643-f5a766a77df9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.059 221554 DEBUG nova.compute.manager [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Instance network_info: |[{"id": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "address": "fa:16:3e:a9:69:81", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424ad9b9-f7", "ovs_interfaceid": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.060 221554 DEBUG oslo_concurrency.lockutils [req-6c431ef3-bab2-44c0-ba89-53419e0105d8 req-d6661594-876f-4e89-815a-af34902ba3d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-878fb573-f407-49e8-9643-f5a766a77df9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.060 221554 DEBUG nova.network.neutron [req-6c431ef3-bab2-44c0-ba89-53419e0105d8 req-d6661594-876f-4e89-815a-af34902ba3d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Refreshing network info cache for port 424ad9b9-f72c-470d-8fb9-0b6381c8bc6b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.062 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Start _get_guest_xml network_info=[{"id": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "address": "fa:16:3e:a9:69:81", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424ad9b9-f7", "ovs_interfaceid": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.066 221554 WARNING nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.071 221554 DEBUG nova.virt.libvirt.host [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.072 221554 DEBUG nova.virt.libvirt.host [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.076 221554 DEBUG nova.virt.libvirt.host [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.076 221554 DEBUG nova.virt.libvirt.host [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.077 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.077 221554 DEBUG nova.virt.hardware [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.078 221554 DEBUG nova.virt.hardware [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.078 221554 DEBUG nova.virt.hardware [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.078 221554 DEBUG nova.virt.hardware [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.078 221554 DEBUG nova.virt.hardware [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.078 221554 DEBUG nova.virt.hardware [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.078 221554 DEBUG nova.virt.hardware [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.078 221554 DEBUG nova.virt.hardware [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.079 221554 DEBUG nova.virt.hardware [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.079 221554 DEBUG nova.virt.hardware [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.079 221554 DEBUG nova.virt.hardware [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.081 221554 DEBUG oslo_concurrency.processutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.238 221554 DEBUG nova.network.neutron [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Successfully updated port: 4ee573ab-a829-4705-a1e2-814b74c9d270 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.257 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Acquiring lock "refresh_cache-f2ba5390-a359-4227-8ce2-17e1573171ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.258 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Acquired lock "refresh_cache-f2ba5390-a359-4227-8ce2-17e1573171ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.258 221554 DEBUG nova.network.neutron [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.262 221554 INFO nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Creating config drive at /var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51/disk.config#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.265 221554 DEBUG oslo_concurrency.processutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp3ju_vhyw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.341 221554 DEBUG nova.compute.manager [req-2d59c2d8-24f7-4957-9df2-c90e8752be87 req-e5a31131-64a7-413d-b4bb-8c030bbb4a0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Received event network-changed-4ee573ab-a829-4705-a1e2-814b74c9d270 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.341 221554 DEBUG nova.compute.manager [req-2d59c2d8-24f7-4957-9df2-c90e8752be87 req-e5a31131-64a7-413d-b4bb-8c030bbb4a0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Refreshing instance network info cache due to event network-changed-4ee573ab-a829-4705-a1e2-814b74c9d270. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.341 221554 DEBUG oslo_concurrency.lockutils [req-2d59c2d8-24f7-4957-9df2-c90e8752be87 req-e5a31131-64a7-413d-b4bb-8c030bbb4a0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-f2ba5390-a359-4227-8ce2-17e1573171ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.387 221554 DEBUG oslo_concurrency.processutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp3ju_vhyw" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.424 221554 DEBUG nova.storage.rbd_utils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 981eb990-ec0c-4673-98e7-102afbe0bb51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.427 221554 DEBUG oslo_concurrency.processutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51/disk.config 981eb990-ec0c-4673-98e7-102afbe0bb51_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.449 221554 DEBUG nova.network.neutron [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:11:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:11:21 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/728860884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.503 221554 DEBUG oslo_concurrency.processutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.525 221554 DEBUG nova.storage.rbd_utils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 878fb573-f407-49e8-9643-f5a766a77df9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.530 221554 DEBUG oslo_concurrency.processutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.573 221554 DEBUG oslo_concurrency.processutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51/disk.config 981eb990-ec0c-4673-98e7-102afbe0bb51_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.574 221554 INFO nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Deleting local config drive /var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51/disk.config because it was imported into RBD.#033[00m
Jan 31 03:11:21 np0005603609 kernel: tap05c06d54-12: entered promiscuous mode
Jan 31 03:11:21 np0005603609 NetworkManager[49064]: <info>  [1769847081.6072] manager: (tap05c06d54-12): new Tun device (/org/freedesktop/NetworkManager/Devices/235)
Jan 31 03:11:21 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:21Z|00472|binding|INFO|Claiming lport 05c06d54-1257-48d3-8a5a-f92423aadbd8 for this chassis.
Jan 31 03:11:21 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:21Z|00473|binding|INFO|05c06d54-1257-48d3-8a5a-f92423aadbd8: Claiming fa:16:3e:3e:5b:36 10.100.0.6
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.620 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:21 np0005603609 systemd-udevd[269718]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.626 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:5b:36 10.100.0.6'], port_security=['fa:16:3e:3e:5b:36 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '981eb990-ec0c-4673-98e7-102afbe0bb51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=05c06d54-1257-48d3-8a5a-f92423aadbd8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.627 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 05c06d54-1257-48d3-8a5a-f92423aadbd8 in datapath 088d6992-6ba6-4719-a977-b3d306740157 bound to our chassis#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.628 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157#033[00m
Jan 31 03:11:21 np0005603609 systemd-machined[190912]: New machine qemu-58-instance-00000079.
Jan 31 03:11:21 np0005603609 NetworkManager[49064]: <info>  [1769847081.6374] device (tap05c06d54-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:11:21 np0005603609 NetworkManager[49064]: <info>  [1769847081.6380] device (tap05c06d54-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.637 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bb91ad55-2cbe-46bc-a55f-de1785a8908d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.638 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap088d6992-61 in ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.639 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap088d6992-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.639 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b63a0ae4-ea28-4062-bf88-18c137e8f7f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.639 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6313f10c-2962-4a5d-870c-c35cb97f284a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:21 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:21Z|00474|binding|INFO|Setting lport 05c06d54-1257-48d3-8a5a-f92423aadbd8 ovn-installed in OVS
Jan 31 03:11:21 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:21Z|00475|binding|INFO|Setting lport 05c06d54-1257-48d3-8a5a-f92423aadbd8 up in Southbound
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.641 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:21 np0005603609 systemd[1]: Started Virtual Machine qemu-58-instance-00000079.
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.648 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d0c8ac-26eb-4999-854d-d7f993754bb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.659 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c5da8cd6-80a8-4582-bef2-f609c87e1479]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.680 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[9b15caab-f57f-48a5-9ed7-b36f9cb896c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.684 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4350b9e1-d74a-47a0-8e98-e714ff5d9251]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:21 np0005603609 NetworkManager[49064]: <info>  [1769847081.6859] manager: (tap088d6992-60): new Veth device (/org/freedesktop/NetworkManager/Devices/236)
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.707 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a424f3c4-b582-48ea-874b-2e55659430aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.710 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[4246b48d-a6e0-4a64-9799-183a1faa097d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:21 np0005603609 NetworkManager[49064]: <info>  [1769847081.7292] device (tap088d6992-60): carrier: link connected
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.729 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[e56e8c73-c93d-4970-b41c-319c43fb4050]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.741 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[861651cb-39cf-4820-9359-a5a0a269fd41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732142, 'reachable_time': 21836, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269771, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.753 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6d9f8b37-a3f9-493f-90b8-cc39703d5a8c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:87bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 732142, 'tstamp': 732142}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269772, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.766 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5de7b085-2f36-4ad9-b1d3-4352ab27fee3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732142, 'reachable_time': 21836, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269773, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.785 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d61f6329-9c9e-4f2b-88bd-17a9c5817797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.827 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c419889b-1041-4f40-974f-8b0c25d0e81d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.830 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.830 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.831 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:21 np0005603609 kernel: tap088d6992-60: entered promiscuous mode
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.835 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:21 np0005603609 NetworkManager[49064]: <info>  [1769847081.8365] manager: (tap088d6992-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.840 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:21 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:21Z|00476|binding|INFO|Releasing lport 7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8 from this chassis (sb_readonly=0)
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.842 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.844 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.846 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.846 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9418cc83-5687-490a-b05d-fb1508703e70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.847 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-088d6992-6ba6-4719-a977-b3d306740157
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 088d6992-6ba6-4719-a977-b3d306740157
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:11:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:21.848 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'env', 'PROCESS_TAG=haproxy-088d6992-6ba6-4719-a977-b3d306740157', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/088d6992-6ba6-4719-a977-b3d306740157.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.946 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847081.9457707, 981eb990-ec0c-4673-98e7-102afbe0bb51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.947 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] VM Started (Lifecycle Event)#033[00m
Jan 31 03:11:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:11:21 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1219824679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.968 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.973 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847081.9474697, 981eb990-ec0c-4673-98e7-102afbe0bb51 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.973 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.979 221554 DEBUG oslo_concurrency.processutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.981 221554 DEBUG nova.virt.libvirt.vif [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:11:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-673203214',display_name='tempest-ServersTestJSON-server-673203214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-673203214',id=122,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-exmbb07p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-memb
er'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:11:15Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=878fb573-f407-49e8-9643-f5a766a77df9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "address": "fa:16:3e:a9:69:81", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424ad9b9-f7", "ovs_interfaceid": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.981 221554 DEBUG nova.network.os_vif_util [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "address": "fa:16:3e:a9:69:81", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424ad9b9-f7", "ovs_interfaceid": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.982 221554 DEBUG nova.network.os_vif_util [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:69:81,bridge_name='br-int',has_traffic_filtering=True,id=424ad9b9-f72c-470d-8fb9-0b6381c8bc6b,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424ad9b9-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:11:21 np0005603609 nova_compute[221550]: 2026-01-31 08:11:21.983 221554 DEBUG nova.objects.instance [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'pci_devices' on Instance uuid 878fb573-f407-49e8-9643-f5a766a77df9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.120 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.124 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.179 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  <uuid>878fb573-f407-49e8-9643-f5a766a77df9</uuid>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  <name>instance-0000007a</name>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServersTestJSON-server-673203214</nova:name>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:11:21</nova:creationTime>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        <nova:user uuid="5366d122b359489fb9d2bda8d19611a6">tempest-ServersTestJSON-327201738-project-member</nova:user>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        <nova:project uuid="4aa06cf35d8c468fb16884f19dc8ce71">tempest-ServersTestJSON-327201738</nova:project>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        <nova:port uuid="424ad9b9-f72c-470d-8fb9-0b6381c8bc6b">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <entry name="serial">878fb573-f407-49e8-9643-f5a766a77df9</entry>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <entry name="uuid">878fb573-f407-49e8-9643-f5a766a77df9</entry>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/878fb573-f407-49e8-9643-f5a766a77df9_disk">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/878fb573-f407-49e8-9643-f5a766a77df9_disk.config">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:a9:69:81"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <target dev="tap424ad9b9-f7"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/878fb573-f407-49e8-9643-f5a766a77df9/console.log" append="off"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:11:22 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:11:22 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:11:22 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:11:22 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.181 221554 DEBUG nova.compute.manager [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Preparing to wait for external event network-vif-plugged-424ad9b9-f72c-470d-8fb9-0b6381c8bc6b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.181 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "878fb573-f407-49e8-9643-f5a766a77df9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.181 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "878fb573-f407-49e8-9643-f5a766a77df9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.182 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "878fb573-f407-49e8-9643-f5a766a77df9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.183 221554 DEBUG nova.virt.libvirt.vif [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:11:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-673203214',display_name='tempest-ServersTestJSON-server-673203214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-673203214',id=122,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-exmbb07p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-pr
oject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:11:15Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=878fb573-f407-49e8-9643-f5a766a77df9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "address": "fa:16:3e:a9:69:81", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424ad9b9-f7", "ovs_interfaceid": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.183 221554 DEBUG nova.network.os_vif_util [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "address": "fa:16:3e:a9:69:81", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424ad9b9-f7", "ovs_interfaceid": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.184 221554 DEBUG nova.network.os_vif_util [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:69:81,bridge_name='br-int',has_traffic_filtering=True,id=424ad9b9-f72c-470d-8fb9-0b6381c8bc6b,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424ad9b9-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.184 221554 DEBUG os_vif [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:69:81,bridge_name='br-int',has_traffic_filtering=True,id=424ad9b9-f72c-470d-8fb9-0b6381c8bc6b,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424ad9b9-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.185 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.185 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.186 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.188 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.188 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap424ad9b9-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.189 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap424ad9b9-f7, col_values=(('external_ids', {'iface-id': '424ad9b9-f72c-470d-8fb9-0b6381c8bc6b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:69:81', 'vm-uuid': '878fb573-f407-49e8-9643-f5a766a77df9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:22 np0005603609 podman[269948]: 2026-01-31 08:11:22.189450941 +0000 UTC m=+0.050792471 container create a220f369d0e60aef2aaa17ce9d7a563d5898800e02cf853c2a9baf2c970f6e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.190 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:22 np0005603609 NetworkManager[49064]: <info>  [1769847082.1920] manager: (tap424ad9b9-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.197 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.199 221554 INFO os_vif [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:69:81,bridge_name='br-int',has_traffic_filtering=True,id=424ad9b9-f72c-470d-8fb9-0b6381c8bc6b,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424ad9b9-f7')#033[00m
Jan 31 03:11:22 np0005603609 systemd[1]: Started libpod-conmon-a220f369d0e60aef2aaa17ce9d7a563d5898800e02cf853c2a9baf2c970f6e3c.scope.
Jan 31 03:11:22 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:11:22 np0005603609 podman[269948]: 2026-01-31 08:11:22.158294642 +0000 UTC m=+0.019636192 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:11:22 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/383bb815936b3f1ba74990b12994890a51c0bff1a6d497383034e0680767c412/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:11:22 np0005603609 podman[269948]: 2026-01-31 08:11:22.268316026 +0000 UTC m=+0.129657586 container init a220f369d0e60aef2aaa17ce9d7a563d5898800e02cf853c2a9baf2c970f6e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:11:22 np0005603609 podman[269948]: 2026-01-31 08:11:22.273384738 +0000 UTC m=+0.134726268 container start a220f369d0e60aef2aaa17ce9d7a563d5898800e02cf853c2a9baf2c970f6e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:11:22 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[269977]: [NOTICE]   (269981) : New worker (269986) forked
Jan 31 03:11:22 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[269977]: [NOTICE]   (269981) : Loading success.
Jan 31 03:11:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:11:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:22.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.316 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:11:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:11:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:22.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.588 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.588 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.588 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No VIF found with MAC fa:16:3e:a9:69:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.589 221554 INFO nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Using config drive#033[00m
Jan 31 03:11:22 np0005603609 nova_compute[221550]: 2026-01-31 08:11:22.611 221554 DEBUG nova.storage.rbd_utils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 878fb573-f407-49e8-9643-f5a766a77df9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:11:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:11:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:11:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.036 221554 DEBUG nova.compute.manager [req-2fcf6b00-d26b-46b5-9a13-3fce10ffa5c7 req-305f9e9b-df25-4727-ac75-d2854d6855fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.037 221554 DEBUG oslo_concurrency.lockutils [req-2fcf6b00-d26b-46b5-9a13-3fce10ffa5c7 req-305f9e9b-df25-4727-ac75-d2854d6855fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.037 221554 DEBUG oslo_concurrency.lockutils [req-2fcf6b00-d26b-46b5-9a13-3fce10ffa5c7 req-305f9e9b-df25-4727-ac75-d2854d6855fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.039 221554 DEBUG oslo_concurrency.lockutils [req-2fcf6b00-d26b-46b5-9a13-3fce10ffa5c7 req-305f9e9b-df25-4727-ac75-d2854d6855fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.039 221554 DEBUG nova.compute.manager [req-2fcf6b00-d26b-46b5-9a13-3fce10ffa5c7 req-305f9e9b-df25-4727-ac75-d2854d6855fb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Processing event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.040 221554 DEBUG nova.compute.manager [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.053 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847083.0528588, 981eb990-ec0c-4673-98e7-102afbe0bb51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.053 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.056 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.063 221554 INFO nova.virt.libvirt.driver [-] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Instance spawned successfully.#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.064 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.096 221554 DEBUG nova.network.neutron [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Updating instance_info_cache with network_info: [{"id": "4ee573ab-a829-4705-a1e2-814b74c9d270", "address": "fa:16:3e:3b:1a:2e", "network": {"id": "75c0cba5-a2ed-46a3-b041-4e3af3562ea4", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-748484067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0268430b3cb6412c94362feb0e114d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ee573ab-a8", "ovs_interfaceid": "4ee573ab-a829-4705-a1e2-814b74c9d270", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.224 221554 INFO nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Creating config drive at /var/lib/nova/instances/878fb573-f407-49e8-9643-f5a766a77df9/disk.config#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.227 221554 DEBUG oslo_concurrency.processutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/878fb573-f407-49e8-9643-f5a766a77df9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpulrr55js execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.244 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.247 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.306 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.306 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.307 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.307 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.307 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.307 221554 DEBUG nova.virt.libvirt.driver [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.347 221554 DEBUG oslo_concurrency.processutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/878fb573-f407-49e8-9643-f5a766a77df9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpulrr55js" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.367 221554 DEBUG nova.storage.rbd_utils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 878fb573-f407-49e8-9643-f5a766a77df9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.370 221554 DEBUG oslo_concurrency.processutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/878fb573-f407-49e8-9643-f5a766a77df9/disk.config 878fb573-f407-49e8-9643-f5a766a77df9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:23 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:11:23 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.426 221554 DEBUG nova.network.neutron [req-6c431ef3-bab2-44c0-ba89-53419e0105d8 req-d6661594-876f-4e89-815a-af34902ba3d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Updated VIF entry in instance network info cache for port 424ad9b9-f72c-470d-8fb9-0b6381c8bc6b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.427 221554 DEBUG nova.network.neutron [req-6c431ef3-bab2-44c0-ba89-53419e0105d8 req-d6661594-876f-4e89-815a-af34902ba3d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Updating instance_info_cache with network_info: [{"id": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "address": "fa:16:3e:a9:69:81", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424ad9b9-f7", "ovs_interfaceid": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.512 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.520 221554 DEBUG oslo_concurrency.processutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/878fb573-f407-49e8-9643-f5a766a77df9/disk.config 878fb573-f407-49e8-9643-f5a766a77df9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.521 221554 INFO nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Deleting local config drive /var/lib/nova/instances/878fb573-f407-49e8-9643-f5a766a77df9/disk.config because it was imported into RBD.#033[00m
Jan 31 03:11:23 np0005603609 kernel: tap424ad9b9-f7: entered promiscuous mode
Jan 31 03:11:23 np0005603609 NetworkManager[49064]: <info>  [1769847083.5686] manager: (tap424ad9b9-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/239)
Jan 31 03:11:23 np0005603609 systemd-udevd[269763]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:11:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:23Z|00477|binding|INFO|Claiming lport 424ad9b9-f72c-470d-8fb9-0b6381c8bc6b for this chassis.
Jan 31 03:11:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:23Z|00478|binding|INFO|424ad9b9-f72c-470d-8fb9-0b6381c8bc6b: Claiming fa:16:3e:a9:69:81 10.100.0.13
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.570 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:23 np0005603609 NetworkManager[49064]: <info>  [1769847083.5829] device (tap424ad9b9-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:11:23 np0005603609 NetworkManager[49064]: <info>  [1769847083.5857] device (tap424ad9b9-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.597 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Releasing lock "refresh_cache-f2ba5390-a359-4227-8ce2-17e1573171ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.598 221554 DEBUG nova.compute.manager [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Instance network_info: |[{"id": "4ee573ab-a829-4705-a1e2-814b74c9d270", "address": "fa:16:3e:3b:1a:2e", "network": {"id": "75c0cba5-a2ed-46a3-b041-4e3af3562ea4", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-748484067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0268430b3cb6412c94362feb0e114d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ee573ab-a8", "ovs_interfaceid": "4ee573ab-a829-4705-a1e2-814b74c9d270", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.598 221554 DEBUG oslo_concurrency.lockutils [req-2d59c2d8-24f7-4957-9df2-c90e8752be87 req-e5a31131-64a7-413d-b4bb-8c030bbb4a0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-f2ba5390-a359-4227-8ce2-17e1573171ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.598 221554 DEBUG nova.network.neutron [req-2d59c2d8-24f7-4957-9df2-c90e8752be87 req-e5a31131-64a7-413d-b4bb-8c030bbb4a0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Refreshing network info cache for port 4ee573ab-a829-4705-a1e2-814b74c9d270 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.603 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Start _get_guest_xml network_info=[{"id": "4ee573ab-a829-4705-a1e2-814b74c9d270", "address": "fa:16:3e:3b:1a:2e", "network": {"id": "75c0cba5-a2ed-46a3-b041-4e3af3562ea4", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-748484067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0268430b3cb6412c94362feb0e114d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ee573ab-a8", "ovs_interfaceid": "4ee573ab-a829-4705-a1e2-814b74c9d270", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:11:23 np0005603609 systemd-machined[190912]: New machine qemu-59-instance-0000007a.
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.611 221554 WARNING nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.616 221554 DEBUG nova.virt.libvirt.host [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.617 221554 DEBUG nova.virt.libvirt.host [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:11:23 np0005603609 systemd[1]: Started Virtual Machine qemu-59-instance-0000007a.
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.619 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:23Z|00479|binding|INFO|Setting lport 424ad9b9-f72c-470d-8fb9-0b6381c8bc6b ovn-installed in OVS
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.623 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.625 221554 DEBUG nova.virt.libvirt.host [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.626 221554 DEBUG nova.virt.libvirt.host [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.627 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.627 221554 DEBUG nova.virt.hardware [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.628 221554 DEBUG nova.virt.hardware [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.628 221554 DEBUG nova.virt.hardware [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.628 221554 DEBUG nova.virt.hardware [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.629 221554 DEBUG nova.virt.hardware [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.629 221554 DEBUG nova.virt.hardware [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.629 221554 DEBUG nova.virt.hardware [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.629 221554 DEBUG nova.virt.hardware [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.630 221554 DEBUG nova.virt.hardware [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.630 221554 DEBUG nova.virt.hardware [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.630 221554 DEBUG nova.virt.hardware [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.634 221554 DEBUG oslo_concurrency.processutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.654 221554 DEBUG oslo_concurrency.lockutils [req-6c431ef3-bab2-44c0-ba89-53419e0105d8 req-d6661594-876f-4e89-815a-af34902ba3d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-878fb573-f407-49e8-9643-f5a766a77df9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:11:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:23Z|00480|binding|INFO|Setting lport 424ad9b9-f72c-470d-8fb9-0b6381c8bc6b up in Southbound
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.665 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:69:81 10.100.0.13'], port_security=['fa:16:3e:a9:69:81 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '878fb573-f407-49e8-9643-f5a766a77df9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b88251fc-7610-460a-ba55-2ed186c6f696', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9f076789-2616-4234-8eab-1fc3da7d63b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7bb3690e-e43b-4d54-9d64-4797e471bf50, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=424ad9b9-f72c-470d-8fb9-0b6381c8bc6b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.667 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 424ad9b9-f72c-470d-8fb9-0b6381c8bc6b in datapath b88251fc-7610-460a-ba55-2ed186c6f696 bound to our chassis#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.669 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b88251fc-7610-460a-ba55-2ed186c6f696#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.677 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2c42aecf-471e-47ac-a1ae-cd74b3136392]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.678 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb88251fc-71 in ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.680 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb88251fc-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.680 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[775b2e41-3d31-4a16-9d18-3e916a61bb2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.682 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ac504c0e-075d-4727-877c-60218815e055]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.692 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[44ed5ea9-8d20-47c6-9a9e-55b4170a5869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.700 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[82f84994-1f0b-40aa-a8ec-9d0bf270af56]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.722 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b141b53a-6689-48b4-8a79-58211079691d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603609 NetworkManager[49064]: <info>  [1769847083.7295] manager: (tapb88251fc-70): new Veth device (/org/freedesktop/NetworkManager/Devices/240)
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.730 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e575ec97-f9e3-41ab-8ad1-521cbec7c63a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.759 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[016d1da9-4cbf-4ed6-9fef-108a39dfb16c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.762 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[e23cb2ce-b576-49ff-aec1-94aad63b97bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603609 NetworkManager[49064]: <info>  [1769847083.7795] device (tapb88251fc-70): carrier: link connected
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.782 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[058f360a-f99a-46b0-9145-c87a4862157a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.794 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5d35dabc-9afd-4ed3-9491-1070337e43b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb88251fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2a:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732347, 'reachable_time': 26556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270118, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.806 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c6e0d0-b481-42a7-910d-bad81d0c053a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:2a68'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 732347, 'tstamp': 732347}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270119, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.819 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c2415380-e9b4-46d3-ac67-a2549b22e8b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb88251fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2a:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732347, 'reachable_time': 26556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270120, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.848 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd42c76-c9d7-4cf7-99f4-4484d955fdc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.899 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1d8c5c3d-988e-4814-a3ed-ff15a14a204f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.900 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb88251fc-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.900 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.901 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb88251fc-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.902 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:23 np0005603609 NetworkManager[49064]: <info>  [1769847083.9030] manager: (tapb88251fc-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Jan 31 03:11:23 np0005603609 kernel: tapb88251fc-70: entered promiscuous mode
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.904 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.905 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb88251fc-70, col_values=(('external_ids', {'iface-id': '950341c4-aa2a-4261-8207-ff7e92fd4830'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.905 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:23Z|00481|binding|INFO|Releasing lport 950341c4-aa2a-4261-8207-ff7e92fd4830 from this chassis (sb_readonly=0)
Jan 31 03:11:23 np0005603609 nova_compute[221550]: 2026-01-31 08:11:23.912 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.912 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.913 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9ebb38a3-70b5-4bb9-ab25-8b7311364cac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.914 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:11:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:23.914 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'env', 'PROCESS_TAG=haproxy-b88251fc-7610-460a-ba55-2ed186c6f696', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b88251fc-7610-460a-ba55-2ed186c6f696.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.020 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847084.019519, 878fb573-f407-49e8-9643-f5a766a77df9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.020 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] VM Started (Lifecycle Event)#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.039 221554 INFO nova.compute.manager [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Took 10.43 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.040 221554 DEBUG nova.compute.manager [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:11:24 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3711382014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.141 221554 DEBUG oslo_concurrency.processutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.164 221554 DEBUG nova.storage.rbd_utils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] rbd image f2ba5390-a359-4227-8ce2-17e1573171ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.171 221554 DEBUG oslo_concurrency.processutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:24 np0005603609 podman[270213]: 2026-01-31 08:11:24.21166065 +0000 UTC m=+0.039054610 container create f6946f1878e17b2e0bad5d6033cc6d3945d9f5e1db67102587e1bfa63ea9cb9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.227 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.234 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847084.0198388, 878fb573-f407-49e8-9643-f5a766a77df9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.234 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:11:24 np0005603609 systemd[1]: Started libpod-conmon-f6946f1878e17b2e0bad5d6033cc6d3945d9f5e1db67102587e1bfa63ea9cb9e.scope.
Jan 31 03:11:24 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:11:24 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caf6456dbfe3911d3e2d18ac9622073c687986dcc1a72fb5de80a2c54718847/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:11:24 np0005603609 podman[270213]: 2026-01-31 08:11:24.190736437 +0000 UTC m=+0.018130417 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:11:24 np0005603609 podman[270213]: 2026-01-31 08:11:24.291613711 +0000 UTC m=+0.119007671 container init f6946f1878e17b2e0bad5d6033cc6d3945d9f5e1db67102587e1bfa63ea9cb9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:11:24 np0005603609 podman[270213]: 2026-01-31 08:11:24.29698787 +0000 UTC m=+0.124381840 container start f6946f1878e17b2e0bad5d6033cc6d3945d9f5e1db67102587e1bfa63ea9cb9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:11:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:24.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:24 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[270229]: [NOTICE]   (270251) : New worker (270254) forked
Jan 31 03:11:24 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[270229]: [NOTICE]   (270251) : Loading success.
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.447 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.452 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.491 221554 INFO nova.compute.manager [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Took 13.43 seconds to build instance.#033[00m
Jan 31 03:11:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:24.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:11:24 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1353661443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.599 221554 DEBUG nova.compute.manager [req-863e8f39-ed2c-4ab3-afe5-f642d063add6 req-93312295-cf7c-4a03-b30d-dfb345cb2b18 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Received event network-vif-plugged-424ad9b9-f72c-470d-8fb9-0b6381c8bc6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.600 221554 DEBUG oslo_concurrency.lockutils [req-863e8f39-ed2c-4ab3-afe5-f642d063add6 req-93312295-cf7c-4a03-b30d-dfb345cb2b18 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "878fb573-f407-49e8-9643-f5a766a77df9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.600 221554 DEBUG oslo_concurrency.lockutils [req-863e8f39-ed2c-4ab3-afe5-f642d063add6 req-93312295-cf7c-4a03-b30d-dfb345cb2b18 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "878fb573-f407-49e8-9643-f5a766a77df9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.601 221554 DEBUG oslo_concurrency.lockutils [req-863e8f39-ed2c-4ab3-afe5-f642d063add6 req-93312295-cf7c-4a03-b30d-dfb345cb2b18 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "878fb573-f407-49e8-9643-f5a766a77df9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.601 221554 DEBUG nova.compute.manager [req-863e8f39-ed2c-4ab3-afe5-f642d063add6 req-93312295-cf7c-4a03-b30d-dfb345cb2b18 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Processing event network-vif-plugged-424ad9b9-f72c-470d-8fb9-0b6381c8bc6b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.602 221554 DEBUG oslo_concurrency.processutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.602 221554 DEBUG nova.compute.manager [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.603 221554 DEBUG nova.virt.libvirt.vif [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:11:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-2081972552',display_name='tempest-ServerPasswordTestJSON-server-2081972552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-2081972552',id=123,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0268430b3cb6412c94362feb0e114d3b',ramdisk_id='',reservation_id='r-yx2wj02a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1593705067',owner_user_name='tempest-ServerPassword
TestJSON-1593705067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:11:17Z,user_data=None,user_id='358ec3f014264f7c89f19a9ad2ce8166',uuid=f2ba5390-a359-4227-8ce2-17e1573171ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ee573ab-a829-4705-a1e2-814b74c9d270", "address": "fa:16:3e:3b:1a:2e", "network": {"id": "75c0cba5-a2ed-46a3-b041-4e3af3562ea4", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-748484067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0268430b3cb6412c94362feb0e114d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ee573ab-a8", "ovs_interfaceid": "4ee573ab-a829-4705-a1e2-814b74c9d270", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.603 221554 DEBUG nova.network.os_vif_util [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Converting VIF {"id": "4ee573ab-a829-4705-a1e2-814b74c9d270", "address": "fa:16:3e:3b:1a:2e", "network": {"id": "75c0cba5-a2ed-46a3-b041-4e3af3562ea4", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-748484067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0268430b3cb6412c94362feb0e114d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ee573ab-a8", "ovs_interfaceid": "4ee573ab-a829-4705-a1e2-814b74c9d270", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.604 221554 DEBUG nova.network.os_vif_util [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:1a:2e,bridge_name='br-int',has_traffic_filtering=True,id=4ee573ab-a829-4705-a1e2-814b74c9d270,network=Network(75c0cba5-a2ed-46a3-b041-4e3af3562ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ee573ab-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.605 221554 DEBUG nova.objects.instance [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lazy-loading 'pci_devices' on Instance uuid f2ba5390-a359-4227-8ce2-17e1573171ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.607 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.609 221554 INFO nova.virt.libvirt.driver [-] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Instance spawned successfully.#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.609 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.652 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.765 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.765 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847084.606398, 878fb573-f407-49e8-9643-f5a766a77df9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:24 np0005603609 nova_compute[221550]: 2026-01-31 08:11:24.765 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.033 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  <uuid>f2ba5390-a359-4227-8ce2-17e1573171ef</uuid>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  <name>instance-0000007b</name>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerPasswordTestJSON-server-2081972552</nova:name>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:11:23</nova:creationTime>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        <nova:user uuid="358ec3f014264f7c89f19a9ad2ce8166">tempest-ServerPasswordTestJSON-1593705067-project-member</nova:user>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        <nova:project uuid="0268430b3cb6412c94362feb0e114d3b">tempest-ServerPasswordTestJSON-1593705067</nova:project>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        <nova:port uuid="4ee573ab-a829-4705-a1e2-814b74c9d270">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <entry name="serial">f2ba5390-a359-4227-8ce2-17e1573171ef</entry>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <entry name="uuid">f2ba5390-a359-4227-8ce2-17e1573171ef</entry>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/f2ba5390-a359-4227-8ce2-17e1573171ef_disk">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/f2ba5390-a359-4227-8ce2-17e1573171ef_disk.config">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:3b:1a:2e"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <target dev="tap4ee573ab-a8"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/f2ba5390-a359-4227-8ce2-17e1573171ef/console.log" append="off"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:11:25 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:11:25 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:11:25 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:11:25 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.034 221554 DEBUG nova.compute.manager [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Preparing to wait for external event network-vif-plugged-4ee573ab-a829-4705-a1e2-814b74c9d270 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.035 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Acquiring lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.036 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.037 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.038 221554 DEBUG nova.virt.libvirt.vif [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:11:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-2081972552',display_name='tempest-ServerPasswordTestJSON-server-2081972552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-2081972552',id=123,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0268430b3cb6412c94362feb0e114d3b',ramdisk_id='',reservation_id='r-yx2wj02a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1593705067',owner_user_name='tempest-ServerPasswordTestJSON-1593705067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:11:17Z,user_data=None,user_id='358ec3f014264f7c89f19a9ad2ce8166',uuid=f2ba5390-a359-4227-8ce2-17e1573171ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ee573ab-a829-4705-a1e2-814b74c9d270", "address": "fa:16:3e:3b:1a:2e", "network": {"id": "75c0cba5-a2ed-46a3-b041-4e3af3562ea4", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-748484067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0268430b3cb6412c94362feb0e114d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ee573ab-a8", "ovs_interfaceid": "4ee573ab-a829-4705-a1e2-814b74c9d270", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.038 221554 DEBUG nova.network.os_vif_util [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Converting VIF {"id": "4ee573ab-a829-4705-a1e2-814b74c9d270", "address": "fa:16:3e:3b:1a:2e", "network": {"id": "75c0cba5-a2ed-46a3-b041-4e3af3562ea4", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-748484067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0268430b3cb6412c94362feb0e114d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ee573ab-a8", "ovs_interfaceid": "4ee573ab-a829-4705-a1e2-814b74c9d270", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.039 221554 DEBUG nova.network.os_vif_util [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:1a:2e,bridge_name='br-int',has_traffic_filtering=True,id=4ee573ab-a829-4705-a1e2-814b74c9d270,network=Network(75c0cba5-a2ed-46a3-b041-4e3af3562ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ee573ab-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.039 221554 DEBUG os_vif [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:1a:2e,bridge_name='br-int',has_traffic_filtering=True,id=4ee573ab-a829-4705-a1e2-814b74c9d270,network=Network(75c0cba5-a2ed-46a3-b041-4e3af3562ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ee573ab-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.041 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.041 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.041 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.048 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.049 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ee573ab-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.050 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4ee573ab-a8, col_values=(('external_ids', {'iface-id': '4ee573ab-a829-4705-a1e2-814b74c9d270', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:1a:2e', 'vm-uuid': 'f2ba5390-a359-4227-8ce2-17e1573171ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.051 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:25 np0005603609 NetworkManager[49064]: <info>  [1769847085.0527] manager: (tap4ee573ab-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.053 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.056 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.057 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.057 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.058 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.058 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.059 221554 DEBUG nova.virt.libvirt.driver [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.062 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.064 221554 INFO os_vif [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:1a:2e,bridge_name='br-int',has_traffic_filtering=True,id=4ee573ab-a829-4705-a1e2-814b74c9d270,network=Network(75c0cba5-a2ed-46a3-b041-4e3af3562ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ee573ab-a8')#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.155 221554 DEBUG oslo_concurrency.lockutils [None req-c3fe96c7-ea95-4483-8025-8da74954eea8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.287s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.274 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.277 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.292 221554 DEBUG nova.compute.manager [req-a1a667da-3ddf-468b-9e76-e0c46feb0629 req-83074cb3-cbdd-4abb-89d7-ec4dfc9abbee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.293 221554 DEBUG oslo_concurrency.lockutils [req-a1a667da-3ddf-468b-9e76-e0c46feb0629 req-83074cb3-cbdd-4abb-89d7-ec4dfc9abbee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.294 221554 DEBUG oslo_concurrency.lockutils [req-a1a667da-3ddf-468b-9e76-e0c46feb0629 req-83074cb3-cbdd-4abb-89d7-ec4dfc9abbee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.294 221554 DEBUG oslo_concurrency.lockutils [req-a1a667da-3ddf-468b-9e76-e0c46feb0629 req-83074cb3-cbdd-4abb-89d7-ec4dfc9abbee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.295 221554 DEBUG nova.compute.manager [req-a1a667da-3ddf-468b-9e76-e0c46feb0629 req-83074cb3-cbdd-4abb-89d7-ec4dfc9abbee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] No waiting events found dispatching network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.296 221554 WARNING nova.compute.manager [req-a1a667da-3ddf-468b-9e76-e0c46feb0629 req-83074cb3-cbdd-4abb-89d7-ec4dfc9abbee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received unexpected event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.379 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.436 221554 INFO nova.compute.manager [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Took 9.20 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.436 221554 DEBUG nova.compute.manager [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.444 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.445 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.445 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] No VIF found with MAC fa:16:3e:3b:1a:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.445 221554 INFO nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Using config drive#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.466 221554 DEBUG nova.storage.rbd_utils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] rbd image f2ba5390-a359-4227-8ce2-17e1573171ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.521 221554 DEBUG nova.network.neutron [req-2d59c2d8-24f7-4957-9df2-c90e8752be87 req-e5a31131-64a7-413d-b4bb-8c030bbb4a0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Updated VIF entry in instance network info cache for port 4ee573ab-a829-4705-a1e2-814b74c9d270. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.522 221554 DEBUG nova.network.neutron [req-2d59c2d8-24f7-4957-9df2-c90e8752be87 req-e5a31131-64a7-413d-b4bb-8c030bbb4a0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Updating instance_info_cache with network_info: [{"id": "4ee573ab-a829-4705-a1e2-814b74c9d270", "address": "fa:16:3e:3b:1a:2e", "network": {"id": "75c0cba5-a2ed-46a3-b041-4e3af3562ea4", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-748484067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0268430b3cb6412c94362feb0e114d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ee573ab-a8", "ovs_interfaceid": "4ee573ab-a829-4705-a1e2-814b74c9d270", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.748 221554 DEBUG oslo_concurrency.lockutils [req-2d59c2d8-24f7-4957-9df2-c90e8752be87 req-e5a31131-64a7-413d-b4bb-8c030bbb4a0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-f2ba5390-a359-4227-8ce2-17e1573171ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.798 221554 INFO nova.compute.manager [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Took 12.09 seconds to build instance.#033[00m
Jan 31 03:11:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:25 np0005603609 nova_compute[221550]: 2026-01-31 08:11:25.957 221554 DEBUG oslo_concurrency.lockutils [None req-632e9b4f-7ecc-4f8a-819d-539adeeba39d 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "878fb573-f407-49e8-9643-f5a766a77df9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:11:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:26.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:11:26 np0005603609 nova_compute[221550]: 2026-01-31 08:11:26.428 221554 INFO nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Creating config drive at /var/lib/nova/instances/f2ba5390-a359-4227-8ce2-17e1573171ef/disk.config#033[00m
Jan 31 03:11:26 np0005603609 nova_compute[221550]: 2026-01-31 08:11:26.432 221554 DEBUG oslo_concurrency.processutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f2ba5390-a359-4227-8ce2-17e1573171ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8c9_f_3n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:26 np0005603609 nova_compute[221550]: 2026-01-31 08:11:26.553 221554 DEBUG oslo_concurrency.processutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f2ba5390-a359-4227-8ce2-17e1573171ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8c9_f_3n" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:11:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:26.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:11:26 np0005603609 nova_compute[221550]: 2026-01-31 08:11:26.580 221554 DEBUG nova.storage.rbd_utils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] rbd image f2ba5390-a359-4227-8ce2-17e1573171ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:26 np0005603609 nova_compute[221550]: 2026-01-31 08:11:26.585 221554 DEBUG oslo_concurrency.processutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f2ba5390-a359-4227-8ce2-17e1573171ef/disk.config f2ba5390-a359-4227-8ce2-17e1573171ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:26 np0005603609 nova_compute[221550]: 2026-01-31 08:11:26.737 221554 DEBUG oslo_concurrency.processutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f2ba5390-a359-4227-8ce2-17e1573171ef/disk.config f2ba5390-a359-4227-8ce2-17e1573171ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:26 np0005603609 nova_compute[221550]: 2026-01-31 08:11:26.738 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:26 np0005603609 nova_compute[221550]: 2026-01-31 08:11:26.738 221554 INFO nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Deleting local config drive /var/lib/nova/instances/f2ba5390-a359-4227-8ce2-17e1573171ef/disk.config because it was imported into RBD.#033[00m
Jan 31 03:11:26 np0005603609 NetworkManager[49064]: <info>  [1769847086.7763] manager: (tap4ee573ab-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/243)
Jan 31 03:11:26 np0005603609 kernel: tap4ee573ab-a8: entered promiscuous mode
Jan 31 03:11:26 np0005603609 nova_compute[221550]: 2026-01-31 08:11:26.779 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:26Z|00482|binding|INFO|Claiming lport 4ee573ab-a829-4705-a1e2-814b74c9d270 for this chassis.
Jan 31 03:11:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:26Z|00483|binding|INFO|4ee573ab-a829-4705-a1e2-814b74c9d270: Claiming fa:16:3e:3b:1a:2e 10.100.0.6
Jan 31 03:11:26 np0005603609 nova_compute[221550]: 2026-01-31 08:11:26.787 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:26 np0005603609 systemd-udevd[270338]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:11:26 np0005603609 nova_compute[221550]: 2026-01-31 08:11:26.808 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:26 np0005603609 systemd-machined[190912]: New machine qemu-60-instance-0000007b.
Jan 31 03:11:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:26Z|00484|binding|INFO|Setting lport 4ee573ab-a829-4705-a1e2-814b74c9d270 ovn-installed in OVS
Jan 31 03:11:26 np0005603609 nova_compute[221550]: 2026-01-31 08:11:26.813 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:26 np0005603609 systemd[1]: Started Virtual Machine qemu-60-instance-0000007b.
Jan 31 03:11:26 np0005603609 NetworkManager[49064]: <info>  [1769847086.8213] device (tap4ee573ab-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:11:26 np0005603609 NetworkManager[49064]: <info>  [1769847086.8218] device (tap4ee573ab-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:11:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:26Z|00485|binding|INFO|Setting lport 4ee573ab-a829-4705-a1e2-814b74c9d270 up in Southbound
Jan 31 03:11:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:26.945 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:1a:2e 10.100.0.6'], port_security=['fa:16:3e:3b:1a:2e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f2ba5390-a359-4227-8ce2-17e1573171ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75c0cba5-a2ed-46a3-b041-4e3af3562ea4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0268430b3cb6412c94362feb0e114d3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'db43c84b-3572-4b1f-b436-f479ca007161', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=47e43cb3-5de5-4e60-9ac0-559d677b18a4, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=4ee573ab-a829-4705-a1e2-814b74c9d270) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:11:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:26.946 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 4ee573ab-a829-4705-a1e2-814b74c9d270 in datapath 75c0cba5-a2ed-46a3-b041-4e3af3562ea4 bound to our chassis#033[00m
Jan 31 03:11:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:26.947 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 75c0cba5-a2ed-46a3-b041-4e3af3562ea4#033[00m
Jan 31 03:11:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:26.956 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c4045ad5-9540-44e8-b470-d8b52aa411a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:26.956 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap75c0cba5-a1 in ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:11:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:26.958 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap75c0cba5-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:11:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:26.958 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[397f382f-56b2-41d3-975f-b0e37a30a815]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:26.959 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0ece04-ecfc-4ed0-92d1-219387399aa7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:26.969 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[a23c3ef4-6a8d-4434-abcf-ea96a1ab81e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:26.993 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cc684f37-31a0-43cf-82a3-1bdc28d2d81e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.016 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[aca4d13c-601f-4a85-b6a5-043e25b19418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.021 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8007e696-d893-4cd5-a384-16ff8d673a6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:27 np0005603609 NetworkManager[49064]: <info>  [1769847087.0223] manager: (tap75c0cba5-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/244)
Jan 31 03:11:27 np0005603609 systemd-udevd[270341]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.044 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c22362cf-868f-47c4-8590-767e5e2198e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.047 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[57b55903-3436-4948-bac0-0127441e9bd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:27 np0005603609 NetworkManager[49064]: <info>  [1769847087.0627] device (tap75c0cba5-a0): carrier: link connected
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.064 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[4cac5fb0-4482-4eb7-92ad-12d3d0754c00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.081 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e8a68e-9cd0-4104-b5bc-4e27e9f2be86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75c0cba5-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6e:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732676, 'reachable_time': 35185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270388, 'error': None, 'target': 'ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.094 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4c60a692-10fb-4a62-96eb-514f65c22829]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:6e4d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 732676, 'tstamp': 732676}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270398, 'error': None, 'target': 'ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.098 221554 DEBUG nova.compute.manager [req-8dfc7a3d-cfdf-4752-9308-810bd65b7c7e req-b0677e74-78ce-4374-9870-7375763238d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Received event network-vif-plugged-424ad9b9-f72c-470d-8fb9-0b6381c8bc6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.098 221554 DEBUG oslo_concurrency.lockutils [req-8dfc7a3d-cfdf-4752-9308-810bd65b7c7e req-b0677e74-78ce-4374-9870-7375763238d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "878fb573-f407-49e8-9643-f5a766a77df9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.099 221554 DEBUG oslo_concurrency.lockutils [req-8dfc7a3d-cfdf-4752-9308-810bd65b7c7e req-b0677e74-78ce-4374-9870-7375763238d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "878fb573-f407-49e8-9643-f5a766a77df9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.099 221554 DEBUG oslo_concurrency.lockutils [req-8dfc7a3d-cfdf-4752-9308-810bd65b7c7e req-b0677e74-78ce-4374-9870-7375763238d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "878fb573-f407-49e8-9643-f5a766a77df9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.099 221554 DEBUG nova.compute.manager [req-8dfc7a3d-cfdf-4752-9308-810bd65b7c7e req-b0677e74-78ce-4374-9870-7375763238d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] No waiting events found dispatching network-vif-plugged-424ad9b9-f72c-470d-8fb9-0b6381c8bc6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.099 221554 WARNING nova.compute.manager [req-8dfc7a3d-cfdf-4752-9308-810bd65b7c7e req-b0677e74-78ce-4374-9870-7375763238d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Received unexpected event network-vif-plugged-424ad9b9-f72c-470d-8fb9-0b6381c8bc6b for instance with vm_state active and task_state None.#033[00m
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.110 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[57bea639-e2ec-4a76-ad8b-844252bf5444]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75c0cba5-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6e:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732676, 'reachable_time': 35185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270407, 'error': None, 'target': 'ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.133 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[795fd7fd-33f4-4056-9220-1e93a3a07505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.173 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c153ec33-3eed-4d5f-992d-03185334ef78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.174 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75c0cba5-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.175 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.179 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75c0cba5-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:27 np0005603609 NetworkManager[49064]: <info>  [1769847087.1817] manager: (tap75c0cba5-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Jan 31 03:11:27 np0005603609 kernel: tap75c0cba5-a0: entered promiscuous mode
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.183 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap75c0cba5-a0, col_values=(('external_ids', {'iface-id': '6984ccef-2db0-4809-b956-882abd5320b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.183 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:27 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:27Z|00486|binding|INFO|Releasing lport 6984ccef-2db0-4809-b956-882abd5320b8 from this chassis (sb_readonly=0)
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.195 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.196 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/75c0cba5-a2ed-46a3-b041-4e3af3562ea4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/75c0cba5-a2ed-46a3-b041-4e3af3562ea4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.197 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[54789ddc-c637-4b73-ade9-92133d768d36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.198 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-75c0cba5-a2ed-46a3-b041-4e3af3562ea4
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/75c0cba5-a2ed-46a3-b041-4e3af3562ea4.pid.haproxy
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 75c0cba5-a2ed-46a3-b041-4e3af3562ea4
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:11:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:27.200 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4', 'env', 'PROCESS_TAG=haproxy-75c0cba5-a2ed-46a3-b041-4e3af3562ea4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/75c0cba5-a2ed-46a3-b041-4e3af3562ea4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.213 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847087.2135286, f2ba5390-a359-4227-8ce2-17e1573171ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.214 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] VM Started (Lifecycle Event)#033[00m
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.375 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.379 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847087.2136447, f2ba5390-a359-4227-8ce2-17e1573171ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.379 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:11:27 np0005603609 podman[270449]: 2026-01-31 08:11:27.499522897 +0000 UTC m=+0.039947012 container create 611a170ad871ffb305d952d54b1b62da23d062b9b239677478fa1c816ce9600d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:11:27 np0005603609 systemd[1]: Started libpod-conmon-611a170ad871ffb305d952d54b1b62da23d062b9b239677478fa1c816ce9600d.scope.
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.556 221554 DEBUG nova.compute.manager [req-ebdbd4fd-71bb-4cf3-8505-13818d7866fc req-22f63a21-15c5-41ea-8428-c40396b8e880 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Received event network-vif-plugged-4ee573ab-a829-4705-a1e2-814b74c9d270 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.556 221554 DEBUG oslo_concurrency.lockutils [req-ebdbd4fd-71bb-4cf3-8505-13818d7866fc req-22f63a21-15c5-41ea-8428-c40396b8e880 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.556 221554 DEBUG oslo_concurrency.lockutils [req-ebdbd4fd-71bb-4cf3-8505-13818d7866fc req-22f63a21-15c5-41ea-8428-c40396b8e880 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.557 221554 DEBUG oslo_concurrency.lockutils [req-ebdbd4fd-71bb-4cf3-8505-13818d7866fc req-22f63a21-15c5-41ea-8428-c40396b8e880 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.557 221554 DEBUG nova.compute.manager [req-ebdbd4fd-71bb-4cf3-8505-13818d7866fc req-22f63a21-15c5-41ea-8428-c40396b8e880 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Processing event network-vif-plugged-4ee573ab-a829-4705-a1e2-814b74c9d270 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.558 221554 DEBUG nova.compute.manager [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:11:27 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:11:27 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6737597b722dc1dc00e28fcce9a9d85ec697f90b1c73261eadc2545e997fb7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.572 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.573 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:11:27 np0005603609 podman[270449]: 2026-01-31 08:11:27.4784547 +0000 UTC m=+0.018878875 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.577 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847087.5692837, f2ba5390-a359-4227-8ce2-17e1573171ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:11:27 np0005603609 podman[270449]: 2026-01-31 08:11:27.578145956 +0000 UTC m=+0.118570091 container init 611a170ad871ffb305d952d54b1b62da23d062b9b239677478fa1c816ce9600d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.578 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] VM Resumed (Lifecycle Event)
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.580 221554 INFO nova.virt.libvirt.driver [-] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Instance spawned successfully.
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.580 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:11:27 np0005603609 podman[270449]: 2026-01-31 08:11:27.582390828 +0000 UTC m=+0.122814943 container start 611a170ad871ffb305d952d54b1b62da23d062b9b239677478fa1c816ce9600d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 03:11:27 np0005603609 neutron-haproxy-ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4[270463]: [NOTICE]   (270467) : New worker (270469) forked
Jan 31 03:11:27 np0005603609 neutron-haproxy-ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4[270463]: [NOTICE]   (270467) : Loading success.
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.604 221554 DEBUG nova.compute.manager [None req-c5735d2b-2364-4bb8-bc64-9746084a5eaa d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.666 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.669 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.669 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.670 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.670 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.671 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.671 221554 DEBUG nova.virt.libvirt.driver [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.674 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.686 221554 INFO nova.compute.manager [None req-c5735d2b-2364-4bb8-bc64-9746084a5eaa d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] instance snapshotting
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.816 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.866 221554 INFO nova.compute.manager [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Took 9.84 seconds to spawn the instance on the hypervisor.
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.867 221554 DEBUG nova.compute.manager [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:11:27 np0005603609 nova_compute[221550]: 2026-01-31 08:11:27.937 221554 INFO nova.virt.libvirt.driver [None req-c5735d2b-2364-4bb8-bc64-9746084a5eaa d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Beginning live snapshot process
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.083 221554 DEBUG oslo_concurrency.lockutils [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "878fb573-f407-49e8-9643-f5a766a77df9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.084 221554 DEBUG oslo_concurrency.lockutils [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "878fb573-f407-49e8-9643-f5a766a77df9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.084 221554 DEBUG oslo_concurrency.lockutils [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "878fb573-f407-49e8-9643-f5a766a77df9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.085 221554 DEBUG oslo_concurrency.lockutils [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "878fb573-f407-49e8-9643-f5a766a77df9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.085 221554 DEBUG oslo_concurrency.lockutils [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "878fb573-f407-49e8-9643-f5a766a77df9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.087 221554 INFO nova.compute.manager [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Terminating instance
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.089 221554 DEBUG nova.compute.manager [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 03:11:28 np0005603609 kernel: tap424ad9b9-f7 (unregistering): left promiscuous mode
Jan 31 03:11:28 np0005603609 NetworkManager[49064]: <info>  [1769847088.1457] device (tap424ad9b9-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:11:28 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:28Z|00487|binding|INFO|Releasing lport 424ad9b9-f72c-470d-8fb9-0b6381c8bc6b from this chassis (sb_readonly=0)
Jan 31 03:11:28 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:28Z|00488|binding|INFO|Setting lport 424ad9b9-f72c-470d-8fb9-0b6381c8bc6b down in Southbound
Jan 31 03:11:28 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:28Z|00489|binding|INFO|Removing iface tap424ad9b9-f7 ovn-installed in OVS
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.181 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.189 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:11:28 np0005603609 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Jan 31 03:11:28 np0005603609 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000007a.scope: Consumed 3.915s CPU time.
Jan 31 03:11:28 np0005603609 systemd-machined[190912]: Machine qemu-59-instance-0000007a terminated.
Jan 31 03:11:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:28.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:28.327 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:69:81 10.100.0.13'], port_security=['fa:16:3e:a9:69:81 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '878fb573-f407-49e8-9643-f5a766a77df9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b88251fc-7610-460a-ba55-2ed186c6f696', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9f076789-2616-4234-8eab-1fc3da7d63b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7bb3690e-e43b-4d54-9d64-4797e471bf50, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=424ad9b9-f72c-470d-8fb9-0b6381c8bc6b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:11:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:28.328 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 424ad9b9-f72c-470d-8fb9-0b6381c8bc6b in datapath b88251fc-7610-460a-ba55-2ed186c6f696 unbound from our chassis
Jan 31 03:11:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:28.330 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b88251fc-7610-460a-ba55-2ed186c6f696, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 03:11:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:28.331 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b8762bf2-1537-4e58-80b8-405c865fd47d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:11:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:28.332 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 namespace which is not needed anymore
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.343 221554 INFO nova.virt.libvirt.driver [-] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Instance destroyed successfully.
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.344 221554 DEBUG nova.objects.instance [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'resources' on Instance uuid 878fb573-f407-49e8-9643-f5a766a77df9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.362 221554 INFO nova.compute.manager [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Took 14.32 seconds to build instance.
Jan 31 03:11:28 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[270229]: [NOTICE]   (270251) : haproxy version is 2.8.14-c23fe91
Jan 31 03:11:28 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[270229]: [NOTICE]   (270251) : path to executable is /usr/sbin/haproxy
Jan 31 03:11:28 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[270229]: [WARNING]  (270251) : Exiting Master process...
Jan 31 03:11:28 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[270229]: [WARNING]  (270251) : Exiting Master process...
Jan 31 03:11:28 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[270229]: [ALERT]    (270251) : Current worker (270254) exited with code 143 (Terminated)
Jan 31 03:11:28 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[270229]: [WARNING]  (270251) : All workers exited. Exiting... (0)
Jan 31 03:11:28 np0005603609 systemd[1]: libpod-f6946f1878e17b2e0bad5d6033cc6d3945d9f5e1db67102587e1bfa63ea9cb9e.scope: Deactivated successfully.
Jan 31 03:11:28 np0005603609 podman[270509]: 2026-01-31 08:11:28.44048895 +0000 UTC m=+0.037908182 container died f6946f1878e17b2e0bad5d6033cc6d3945d9f5e1db67102587e1bfa63ea9cb9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.452 221554 DEBUG nova.virt.libvirt.vif [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:11:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-673203214',display_name='tempest-ServersTestJSON-server-673203214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-673203214',id=122,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-exmbb07p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:11:25Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=878fb573-f407-49e8-9643-f5a766a77df9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "address": "fa:16:3e:a9:69:81", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424ad9b9-f7", "ovs_interfaceid": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.452 221554 DEBUG nova.network.os_vif_util [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "address": "fa:16:3e:a9:69:81", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap424ad9b9-f7", "ovs_interfaceid": "424ad9b9-f72c-470d-8fb9-0b6381c8bc6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.453 221554 DEBUG nova.network.os_vif_util [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:69:81,bridge_name='br-int',has_traffic_filtering=True,id=424ad9b9-f72c-470d-8fb9-0b6381c8bc6b,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424ad9b9-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.453 221554 DEBUG os_vif [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:69:81,bridge_name='br-int',has_traffic_filtering=True,id=424ad9b9-f72c-470d-8fb9-0b6381c8bc6b,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424ad9b9-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.455 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.455 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap424ad9b9-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.456 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.458 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.460 221554 INFO os_vif [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:69:81,bridge_name='br-int',has_traffic_filtering=True,id=424ad9b9-f72c-470d-8fb9-0b6381c8bc6b,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap424ad9b9-f7')
Jan 31 03:11:28 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6946f1878e17b2e0bad5d6033cc6d3945d9f5e1db67102587e1bfa63ea9cb9e-userdata-shm.mount: Deactivated successfully.
Jan 31 03:11:28 np0005603609 systemd[1]: var-lib-containers-storage-overlay-4caf6456dbfe3911d3e2d18ac9622073c687986dcc1a72fb5de80a2c54718847-merged.mount: Deactivated successfully.
Jan 31 03:11:28 np0005603609 podman[270509]: 2026-01-31 08:11:28.482315316 +0000 UTC m=+0.079734548 container cleanup f6946f1878e17b2e0bad5d6033cc6d3945d9f5e1db67102587e1bfa63ea9cb9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:11:28 np0005603609 systemd[1]: libpod-conmon-f6946f1878e17b2e0bad5d6033cc6d3945d9f5e1db67102587e1bfa63ea9cb9e.scope: Deactivated successfully.
Jan 31 03:11:28 np0005603609 podman[270552]: 2026-01-31 08:11:28.535916014 +0000 UTC m=+0.038705492 container remove f6946f1878e17b2e0bad5d6033cc6d3945d9f5e1db67102587e1bfa63ea9cb9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:11:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:28.544 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[287c6521-efd1-4182-8d35-d56c21c70f2a]: (4, ('Sat Jan 31 08:11:28 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 (f6946f1878e17b2e0bad5d6033cc6d3945d9f5e1db67102587e1bfa63ea9cb9e)\nf6946f1878e17b2e0bad5d6033cc6d3945d9f5e1db67102587e1bfa63ea9cb9e\nSat Jan 31 08:11:28 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 (f6946f1878e17b2e0bad5d6033cc6d3945d9f5e1db67102587e1bfa63ea9cb9e)\nf6946f1878e17b2e0bad5d6033cc6d3945d9f5e1db67102587e1bfa63ea9cb9e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:28.546 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[44d6e8ea-31c8-4f31-a7a4-1b915986d69e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:28.547 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb88251fc-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.548 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:28 np0005603609 kernel: tapb88251fc-70: left promiscuous mode
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.555 221554 DEBUG oslo_concurrency.lockutils [None req-a9fd7ebd-1c8f-4369-b7e2-b2450de7d651 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lock "f2ba5390-a359-4227-8ce2-17e1573171ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.556 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.557 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:28.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:28.568 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5675d35b-9fbc-4e51-bff5-18e2aab69106]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:28.585 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4018ea25-7cb4-4216-b8d4-13c97fd46e66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:28.587 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a50d8372-218c-49c1-b488-3fec4bbe9c89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:28.597 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[80657a04-dee4-4aaf-a7d8-ce5ddc558f17]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732341, 'reachable_time': 38305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270597, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:28 np0005603609 systemd[1]: run-netns-ovnmeta\x2db88251fc\x2d7610\x2d460a\x2dba55\x2d2ed186c6f696.mount: Deactivated successfully.
Jan 31 03:11:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:28.601 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:11:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:28.601 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[e04f4300-33fe-407c-88f0-6820264def9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.724 221554 DEBUG nova.virt.libvirt.imagebackend [None req-c5735d2b-2364-4bb8-bc64-9746084a5eaa d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:11:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:11:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.895 221554 INFO nova.virt.libvirt.driver [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Deleting instance files /var/lib/nova/instances/878fb573-f407-49e8-9643-f5a766a77df9_del#033[00m
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.895 221554 INFO nova.virt.libvirt.driver [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Deletion of /var/lib/nova/instances/878fb573-f407-49e8-9643-f5a766a77df9_del complete#033[00m
Jan 31 03:11:28 np0005603609 nova_compute[221550]: 2026-01-31 08:11:28.960 221554 DEBUG nova.storage.rbd_utils [None req-c5735d2b-2364-4bb8-bc64-9746084a5eaa d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] creating snapshot(3a30e27c9c1d43c28017c39c3e3a4e1d) on rbd image(981eb990-ec0c-4673-98e7-102afbe0bb51_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.318 221554 INFO nova.compute.manager [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Took 1.23 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.318 221554 DEBUG oslo.service.loopingcall [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.329 221554 DEBUG nova.compute.manager [-] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.329 221554 DEBUG nova.network.neutron [-] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.367 221554 DEBUG nova.compute.manager [req-bafd1250-50d8-4da4-b733-850feebddf85 req-67a49671-e4d7-49e7-9826-9b8c26950719 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Received event network-vif-unplugged-424ad9b9-f72c-470d-8fb9-0b6381c8bc6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.368 221554 DEBUG oslo_concurrency.lockutils [req-bafd1250-50d8-4da4-b733-850feebddf85 req-67a49671-e4d7-49e7-9826-9b8c26950719 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "878fb573-f407-49e8-9643-f5a766a77df9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.368 221554 DEBUG oslo_concurrency.lockutils [req-bafd1250-50d8-4da4-b733-850feebddf85 req-67a49671-e4d7-49e7-9826-9b8c26950719 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "878fb573-f407-49e8-9643-f5a766a77df9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.368 221554 DEBUG oslo_concurrency.lockutils [req-bafd1250-50d8-4da4-b733-850feebddf85 req-67a49671-e4d7-49e7-9826-9b8c26950719 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "878fb573-f407-49e8-9643-f5a766a77df9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.368 221554 DEBUG nova.compute.manager [req-bafd1250-50d8-4da4-b733-850feebddf85 req-67a49671-e4d7-49e7-9826-9b8c26950719 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] No waiting events found dispatching network-vif-unplugged-424ad9b9-f72c-470d-8fb9-0b6381c8bc6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.368 221554 DEBUG nova.compute.manager [req-bafd1250-50d8-4da4-b733-850feebddf85 req-67a49671-e4d7-49e7-9826-9b8c26950719 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Received event network-vif-unplugged-424ad9b9-f72c-470d-8fb9-0b6381c8bc6b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.654 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.718 221554 DEBUG nova.compute.manager [req-e2598b05-10a3-4cfb-b431-ca26d1d58f0a req-060da39e-dd16-4f93-bef0-0fc5b1e6f97d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Received event network-vif-plugged-4ee573ab-a829-4705-a1e2-814b74c9d270 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.718 221554 DEBUG oslo_concurrency.lockutils [req-e2598b05-10a3-4cfb-b431-ca26d1d58f0a req-060da39e-dd16-4f93-bef0-0fc5b1e6f97d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.718 221554 DEBUG oslo_concurrency.lockutils [req-e2598b05-10a3-4cfb-b431-ca26d1d58f0a req-060da39e-dd16-4f93-bef0-0fc5b1e6f97d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.718 221554 DEBUG oslo_concurrency.lockutils [req-e2598b05-10a3-4cfb-b431-ca26d1d58f0a req-060da39e-dd16-4f93-bef0-0fc5b1e6f97d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.719 221554 DEBUG nova.compute.manager [req-e2598b05-10a3-4cfb-b431-ca26d1d58f0a req-060da39e-dd16-4f93-bef0-0fc5b1e6f97d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] No waiting events found dispatching network-vif-plugged-4ee573ab-a829-4705-a1e2-814b74c9d270 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.719 221554 WARNING nova.compute.manager [req-e2598b05-10a3-4cfb-b431-ca26d1d58f0a req-060da39e-dd16-4f93-bef0-0fc5b1e6f97d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Received unexpected event network-vif-plugged-4ee573ab-a829-4705-a1e2-814b74c9d270 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.740 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.740 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.741 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.741 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.741 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e283 e283: 3 total, 3 up, 3 in
Jan 31 03:11:29 np0005603609 nova_compute[221550]: 2026-01-31 08:11:29.885 221554 DEBUG nova.storage.rbd_utils [None req-c5735d2b-2364-4bb8-bc64-9746084a5eaa d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] cloning vms/981eb990-ec0c-4673-98e7-102afbe0bb51_disk@3a30e27c9c1d43c28017c39c3e3a4e1d to images/7a74ff6f-b876-4da2-8910-2a6929d0fcbc clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:11:30 np0005603609 nova_compute[221550]: 2026-01-31 08:11:30.025 221554 DEBUG nova.storage.rbd_utils [None req-c5735d2b-2364-4bb8-bc64-9746084a5eaa d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] flattening images/7a74ff6f-b876-4da2-8910-2a6929d0fcbc flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:11:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:30.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:30 np0005603609 nova_compute[221550]: 2026-01-31 08:11:30.413 221554 DEBUG nova.storage.rbd_utils [None req-c5735d2b-2364-4bb8-bc64-9746084a5eaa d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] removing snapshot(3a30e27c9c1d43c28017c39c3e3a4e1d) on rbd image(981eb990-ec0c-4673-98e7-102afbe0bb51_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:11:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:30.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:30 np0005603609 nova_compute[221550]: 2026-01-31 08:11:30.709 221554 DEBUG nova.network.neutron [-] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:11:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:30.713 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:11:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:30.714 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:11:30 np0005603609 nova_compute[221550]: 2026-01-31 08:11:30.715 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:30 np0005603609 nova_compute[221550]: 2026-01-31 08:11:30.723 221554 DEBUG nova.compute.manager [req-85e5b478-87cf-43c0-9827-3a338072411a req-923262f3-c1e5-4a13-8883-dbadf0e29ac0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Received event network-vif-deleted-424ad9b9-f72c-470d-8fb9-0b6381c8bc6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:30 np0005603609 nova_compute[221550]: 2026-01-31 08:11:30.723 221554 INFO nova.compute.manager [req-85e5b478-87cf-43c0-9827-3a338072411a req-923262f3-c1e5-4a13-8883-dbadf0e29ac0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Neutron deleted interface 424ad9b9-f72c-470d-8fb9-0b6381c8bc6b; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:11:30 np0005603609 nova_compute[221550]: 2026-01-31 08:11:30.723 221554 DEBUG nova.network.neutron [req-85e5b478-87cf-43c0-9827-3a338072411a req-923262f3-c1e5-4a13-8883-dbadf0e29ac0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:11:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e284 e284: 3 total, 3 up, 3 in
Jan 31 03:11:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:30 np0005603609 nova_compute[221550]: 2026-01-31 08:11:30.911 221554 DEBUG nova.storage.rbd_utils [None req-c5735d2b-2364-4bb8-bc64-9746084a5eaa d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] creating snapshot(snap) on rbd image(7a74ff6f-b876-4da2-8910-2a6929d0fcbc) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.031 221554 INFO nova.compute.manager [-] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Took 1.70 seconds to deallocate network for instance.#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.037 221554 DEBUG nova.compute.manager [req-85e5b478-87cf-43c0-9827-3a338072411a req-923262f3-c1e5-4a13-8883-dbadf0e29ac0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Detach interface failed, port_id=424ad9b9-f72c-470d-8fb9-0b6381c8bc6b, reason: Instance 878fb573-f407-49e8-9643-f5a766a77df9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.267 221554 DEBUG oslo_concurrency.lockutils [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.268 221554 DEBUG oslo_concurrency.lockutils [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.348 221554 DEBUG oslo_concurrency.processutils [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.677 221554 DEBUG oslo_concurrency.lockutils [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Acquiring lock "f2ba5390-a359-4227-8ce2-17e1573171ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.678 221554 DEBUG oslo_concurrency.lockutils [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lock "f2ba5390-a359-4227-8ce2-17e1573171ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.679 221554 DEBUG oslo_concurrency.lockutils [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Acquiring lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.679 221554 DEBUG oslo_concurrency.lockutils [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.679 221554 DEBUG oslo_concurrency.lockutils [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.681 221554 INFO nova.compute.manager [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Terminating instance#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.682 221554 DEBUG nova.compute.manager [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.689 221554 DEBUG nova.compute.manager [req-b47604f1-ee0f-43a2-a4ed-63d8702696ec req-7d9c0832-757b-4d95-93a6-e0f3c6abd305 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Received event network-vif-plugged-424ad9b9-f72c-470d-8fb9-0b6381c8bc6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.689 221554 DEBUG oslo_concurrency.lockutils [req-b47604f1-ee0f-43a2-a4ed-63d8702696ec req-7d9c0832-757b-4d95-93a6-e0f3c6abd305 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "878fb573-f407-49e8-9643-f5a766a77df9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.689 221554 DEBUG oslo_concurrency.lockutils [req-b47604f1-ee0f-43a2-a4ed-63d8702696ec req-7d9c0832-757b-4d95-93a6-e0f3c6abd305 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "878fb573-f407-49e8-9643-f5a766a77df9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.689 221554 DEBUG oslo_concurrency.lockutils [req-b47604f1-ee0f-43a2-a4ed-63d8702696ec req-7d9c0832-757b-4d95-93a6-e0f3c6abd305 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "878fb573-f407-49e8-9643-f5a766a77df9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.690 221554 DEBUG nova.compute.manager [req-b47604f1-ee0f-43a2-a4ed-63d8702696ec req-7d9c0832-757b-4d95-93a6-e0f3c6abd305 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] No waiting events found dispatching network-vif-plugged-424ad9b9-f72c-470d-8fb9-0b6381c8bc6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.690 221554 WARNING nova.compute.manager [req-b47604f1-ee0f-43a2-a4ed-63d8702696ec req-7d9c0832-757b-4d95-93a6-e0f3c6abd305 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Received unexpected event network-vif-plugged-424ad9b9-f72c-470d-8fb9-0b6381c8bc6b for instance with vm_state deleted and task_state None.#033[00m
Jan 31 03:11:31 np0005603609 kernel: tap4ee573ab-a8 (unregistering): left promiscuous mode
Jan 31 03:11:31 np0005603609 NetworkManager[49064]: <info>  [1769847091.7274] device (tap4ee573ab-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.733 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:31 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:31Z|00490|binding|INFO|Releasing lport 4ee573ab-a829-4705-a1e2-814b74c9d270 from this chassis (sb_readonly=0)
Jan 31 03:11:31 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:31Z|00491|binding|INFO|Setting lport 4ee573ab-a829-4705-a1e2-814b74c9d270 down in Southbound
Jan 31 03:11:31 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:31Z|00492|binding|INFO|Removing iface tap4ee573ab-a8 ovn-installed in OVS
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.744 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:11:31 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3890559462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.762 221554 DEBUG oslo_concurrency.processutils [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.767 221554 DEBUG nova.compute.provider_tree [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:11:31 np0005603609 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Jan 31 03:11:31 np0005603609 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d0000007b.scope: Consumed 4.559s CPU time.
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.776 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Updating instance_info_cache with network_info: [{"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:11:31 np0005603609 systemd-machined[190912]: Machine qemu-60-instance-0000007b terminated.
Jan 31 03:11:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e285 e285: 3 total, 3 up, 3 in
Jan 31 03:11:31 np0005603609 kernel: tap4ee573ab-a8: entered promiscuous mode
Jan 31 03:11:31 np0005603609 NetworkManager[49064]: <info>  [1769847091.9037] manager: (tap4ee573ab-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/246)
Jan 31 03:11:31 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:31Z|00493|if_status|INFO|Not updating pb chassis for 4ee573ab-a829-4705-a1e2-814b74c9d270 now as sb is readonly
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.902 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:31 np0005603609 kernel: tap4ee573ab-a8 (unregistering): left promiscuous mode
Jan 31 03:11:31 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:31Z|00494|binding|INFO|Releasing lport 4ee573ab-a829-4705-a1e2-814b74c9d270 from this chassis (sb_readonly=1)
Jan 31 03:11:31 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:31Z|00495|if_status|INFO|Dropped 2 log messages in last 385 seconds (most recently, 385 seconds ago) due to excessive rate
Jan 31 03:11:31 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:31Z|00496|if_status|INFO|Not setting lport 4ee573ab-a829-4705-a1e2-814b74c9d270 down as sb is readonly
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.911 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.926 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.930 221554 INFO nova.virt.libvirt.driver [-] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Instance destroyed successfully.#033[00m
Jan 31 03:11:31 np0005603609 nova_compute[221550]: 2026-01-31 08:11:31.930 221554 DEBUG nova.objects.instance [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lazy-loading 'resources' on Instance uuid f2ba5390-a359-4227-8ce2-17e1573171ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:31.995 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:1a:2e 10.100.0.6'], port_security=['fa:16:3e:3b:1a:2e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f2ba5390-a359-4227-8ce2-17e1573171ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75c0cba5-a2ed-46a3-b041-4e3af3562ea4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0268430b3cb6412c94362feb0e114d3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'db43c84b-3572-4b1f-b436-f479ca007161', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=47e43cb3-5de5-4e60-9ac0-559d677b18a4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=4ee573ab-a829-4705-a1e2-814b74c9d270) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:11:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:31.997 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 4ee573ab-a829-4705-a1e2-814b74c9d270 in datapath 75c0cba5-a2ed-46a3-b041-4e3af3562ea4 unbound from our chassis#033[00m
Jan 31 03:11:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:31.998 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 75c0cba5-a2ed-46a3-b041-4e3af3562ea4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:11:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:31.999 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[942eca53-b0b1-4a10-9fc4-6db498e15b0a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:32.000 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4 namespace which is not needed anymore#033[00m
Jan 31 03:11:32 np0005603609 neutron-haproxy-ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4[270463]: [NOTICE]   (270467) : haproxy version is 2.8.14-c23fe91
Jan 31 03:11:32 np0005603609 neutron-haproxy-ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4[270463]: [NOTICE]   (270467) : path to executable is /usr/sbin/haproxy
Jan 31 03:11:32 np0005603609 neutron-haproxy-ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4[270463]: [WARNING]  (270467) : Exiting Master process...
Jan 31 03:11:32 np0005603609 neutron-haproxy-ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4[270463]: [WARNING]  (270467) : Exiting Master process...
Jan 31 03:11:32 np0005603609 neutron-haproxy-ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4[270463]: [ALERT]    (270467) : Current worker (270469) exited with code 143 (Terminated)
Jan 31 03:11:32 np0005603609 neutron-haproxy-ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4[270463]: [WARNING]  (270467) : All workers exited. Exiting... (0)
Jan 31 03:11:32 np0005603609 systemd[1]: libpod-611a170ad871ffb305d952d54b1b62da23d062b9b239677478fa1c816ce9600d.scope: Deactivated successfully.
Jan 31 03:11:32 np0005603609 conmon[270463]: conmon 611a170ad871ffb305d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-611a170ad871ffb305d952d54b1b62da23d062b9b239677478fa1c816ce9600d.scope/container/memory.events
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.107 221554 DEBUG nova.virt.libvirt.vif [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:11:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-2081972552',display_name='tempest-ServerPasswordTestJSON-server-2081972552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-2081972552',id=123,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0268430b3cb6412c94362feb0e114d3b',ramdisk_id='',reservation_id='r-yx2wj02a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-1593705067',owner_user_name='tempest-ServerPasswordTestJSON-1593705067-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:11:30Z,user_data=None,user_id='358ec3f014264f7c89f19a9ad2ce8166',uuid=f2ba5390-a359-4227-8ce2-17e1573171ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ee573ab-a829-4705-a1e2-814b74c9d270", "address": "fa:16:3e:3b:1a:2e", "network": {"id": "75c0cba5-a2ed-46a3-b041-4e3af3562ea4", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-748484067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0268430b3cb6412c94362feb0e114d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ee573ab-a8", "ovs_interfaceid": "4ee573ab-a829-4705-a1e2-814b74c9d270", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.108 221554 DEBUG nova.network.os_vif_util [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Converting VIF {"id": "4ee573ab-a829-4705-a1e2-814b74c9d270", "address": "fa:16:3e:3b:1a:2e", "network": {"id": "75c0cba5-a2ed-46a3-b041-4e3af3562ea4", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-748484067-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0268430b3cb6412c94362feb0e114d3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ee573ab-a8", "ovs_interfaceid": "4ee573ab-a829-4705-a1e2-814b74c9d270", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.109 221554 DEBUG nova.network.os_vif_util [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:1a:2e,bridge_name='br-int',has_traffic_filtering=True,id=4ee573ab-a829-4705-a1e2-814b74c9d270,network=Network(75c0cba5-a2ed-46a3-b041-4e3af3562ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ee573ab-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.109 221554 DEBUG os_vif [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:1a:2e,bridge_name='br-int',has_traffic_filtering=True,id=4ee573ab-a829-4705-a1e2-814b74c9d270,network=Network(75c0cba5-a2ed-46a3-b041-4e3af3562ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ee573ab-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.111 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.111 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ee573ab-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.112 221554 DEBUG nova.scheduler.client.report [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.115 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.117 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.119 221554 INFO os_vif [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:1a:2e,bridge_name='br-int',has_traffic_filtering=True,id=4ee573ab-a829-4705-a1e2-814b74c9d270,network=Network(75c0cba5-a2ed-46a3-b041-4e3af3562ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ee573ab-a8')#033[00m
Jan 31 03:11:32 np0005603609 podman[270820]: 2026-01-31 08:11:32.124566238 +0000 UTC m=+0.058614520 container died 611a170ad871ffb305d952d54b1b62da23d062b9b239677478fa1c816ce9600d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:11:32 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-611a170ad871ffb305d952d54b1b62da23d062b9b239677478fa1c816ce9600d-userdata-shm.mount: Deactivated successfully.
Jan 31 03:11:32 np0005603609 systemd[1]: var-lib-containers-storage-overlay-bb6737597b722dc1dc00e28fcce9a9d85ec697f90b1c73261eadc2545e997fb7-merged.mount: Deactivated successfully.
Jan 31 03:11:32 np0005603609 podman[270820]: 2026-01-31 08:11:32.160241775 +0000 UTC m=+0.094290057 container cleanup 611a170ad871ffb305d952d54b1b62da23d062b9b239677478fa1c816ce9600d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 03:11:32 np0005603609 systemd[1]: libpod-conmon-611a170ad871ffb305d952d54b1b62da23d062b9b239677478fa1c816ce9600d.scope: Deactivated successfully.
Jan 31 03:11:32 np0005603609 podman[270863]: 2026-01-31 08:11:32.223112796 +0000 UTC m=+0.047629945 container remove 611a170ad871ffb305d952d54b1b62da23d062b9b239677478fa1c816ce9600d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.228 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:11:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:32.228 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[26416995-252d-491d-a510-8c69c7b7bd35]: (4, ('Sat Jan 31 08:11:32 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4 (611a170ad871ffb305d952d54b1b62da23d062b9b239677478fa1c816ce9600d)\n611a170ad871ffb305d952d54b1b62da23d062b9b239677478fa1c816ce9600d\nSat Jan 31 08:11:32 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4 (611a170ad871ffb305d952d54b1b62da23d062b9b239677478fa1c816ce9600d)\n611a170ad871ffb305d952d54b1b62da23d062b9b239677478fa1c816ce9600d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.229 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.230 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.230 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.231 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.231 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:32.230 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[64a4742f-adf7-443d-b812-dec07493cb94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:32.232 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75c0cba5-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.234 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:32 np0005603609 kernel: tap75c0cba5-a0: left promiscuous mode
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.237 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.244 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:32.248 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bdce3ae4-6302-4cef-93c9-52bc534f81b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:32.263 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d3ddd0-d435-4b3b-8030-536667811253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:32.264 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[def318f8-aed6-4eec-ada7-3b0a5e64a773]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:32.280 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[455313bf-922f-4db6-9362-2f1089766e85]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732671, 'reachable_time': 28915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270879, 'error': None, 'target': 'ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:32 np0005603609 systemd[1]: run-netns-ovnmeta\x2d75c0cba5\x2da2ed\x2d46a3\x2db041\x2d4e3af3562ea4.mount: Deactivated successfully.
Jan 31 03:11:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:32.286 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-75c0cba5-a2ed-46a3-b041-4e3af3562ea4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:11:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:32.286 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[385dba78-1ae2-4fa5-85ae-82dd4aef2e90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:11:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:32.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.404 221554 DEBUG oslo_concurrency.lockutils [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:32.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.577 221554 INFO nova.virt.libvirt.driver [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Deleting instance files /var/lib/nova/instances/f2ba5390-a359-4227-8ce2-17e1573171ef_del#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.578 221554 INFO nova.virt.libvirt.driver [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Deletion of /var/lib/nova/instances/f2ba5390-a359-4227-8ce2-17e1573171ef_del complete#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.660 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.661 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.661 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.662 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.662 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:32 np0005603609 nova_compute[221550]: 2026-01-31 08:11:32.848 221554 INFO nova.scheduler.client.report [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Deleted allocations for instance 878fb573-f407-49e8-9643-f5a766a77df9#033[00m
Jan 31 03:11:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:11:33 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2795073161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.068 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.129 221554 INFO nova.compute.manager [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Took 1.45 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.129 221554 DEBUG oslo.service.loopingcall [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.130 221554 DEBUG nova.compute.manager [-] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.130 221554 DEBUG nova.network.neutron [-] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.477 221554 DEBUG nova.compute.manager [req-641872b4-be9d-4a4a-a82d-2c5e63478117 req-693ba63a-2e6c-40b1-a602-794d2778bb79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Received event network-vif-unplugged-4ee573ab-a829-4705-a1e2-814b74c9d270 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.477 221554 DEBUG oslo_concurrency.lockutils [req-641872b4-be9d-4a4a-a82d-2c5e63478117 req-693ba63a-2e6c-40b1-a602-794d2778bb79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.478 221554 DEBUG oslo_concurrency.lockutils [req-641872b4-be9d-4a4a-a82d-2c5e63478117 req-693ba63a-2e6c-40b1-a602-794d2778bb79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.478 221554 DEBUG oslo_concurrency.lockutils [req-641872b4-be9d-4a4a-a82d-2c5e63478117 req-693ba63a-2e6c-40b1-a602-794d2778bb79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.478 221554 DEBUG nova.compute.manager [req-641872b4-be9d-4a4a-a82d-2c5e63478117 req-693ba63a-2e6c-40b1-a602-794d2778bb79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] No waiting events found dispatching network-vif-unplugged-4ee573ab-a829-4705-a1e2-814b74c9d270 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.479 221554 DEBUG nova.compute.manager [req-641872b4-be9d-4a4a-a82d-2c5e63478117 req-693ba63a-2e6c-40b1-a602-794d2778bb79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Received event network-vif-unplugged-4ee573ab-a829-4705-a1e2-814b74c9d270 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.488 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.489 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.490 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Error from libvirt while getting description of instance-0000007b: [Error Code 42] Domain not found: no domain with matching uuid 'f2ba5390-a359-4227-8ce2-17e1573171ef' (instance-0000007b): libvirt.libvirtError: Domain not found: no domain with matching uuid 'f2ba5390-a359-4227-8ce2-17e1573171ef' (instance-0000007b)#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.570 221554 DEBUG oslo_concurrency.lockutils [None req-86b9cfef-7946-43a2-9250-87cc754753e6 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "878fb573-f407-49e8-9643-f5a766a77df9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.625 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.626 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4241MB free_disk=20.800498962402344GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.627 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.627 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.765 221554 INFO nova.virt.libvirt.driver [None req-c5735d2b-2364-4bb8-bc64-9746084a5eaa d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Snapshot image upload complete#033[00m
Jan 31 03:11:33 np0005603609 nova_compute[221550]: 2026-01-31 08:11:33.765 221554 INFO nova.compute.manager [None req-c5735d2b-2364-4bb8-bc64-9746084a5eaa d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Took 6.08 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:11:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:34.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:34.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:34 np0005603609 nova_compute[221550]: 2026-01-31 08:11:34.657 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:35Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3e:5b:36 10.100.0.6
Jan 31 03:11:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:35Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:5b:36 10.100.0.6
Jan 31 03:11:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:35.718 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:36 np0005603609 nova_compute[221550]: 2026-01-31 08:11:36.221 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 981eb990-ec0c-4673-98e7-102afbe0bb51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:11:36 np0005603609 nova_compute[221550]: 2026-01-31 08:11:36.221 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance f2ba5390-a359-4227-8ce2-17e1573171ef actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:11:36 np0005603609 nova_compute[221550]: 2026-01-31 08:11:36.222 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:11:36 np0005603609 nova_compute[221550]: 2026-01-31 08:11:36.222 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:11:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:36.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:36.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:36 np0005603609 nova_compute[221550]: 2026-01-31 08:11:36.731 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:37 np0005603609 nova_compute[221550]: 2026-01-31 08:11:37.113 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:11:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1539722970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:11:37 np0005603609 nova_compute[221550]: 2026-01-31 08:11:37.176 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:37 np0005603609 nova_compute[221550]: 2026-01-31 08:11:37.181 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:11:37 np0005603609 nova_compute[221550]: 2026-01-31 08:11:37.250 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:11:37 np0005603609 nova_compute[221550]: 2026-01-31 08:11:37.610 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:11:37 np0005603609 nova_compute[221550]: 2026-01-31 08:11:37.611 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.984s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:37 np0005603609 nova_compute[221550]: 2026-01-31 08:11:37.611 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:37 np0005603609 nova_compute[221550]: 2026-01-31 08:11:37.611 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:11:37 np0005603609 nova_compute[221550]: 2026-01-31 08:11:37.800 221554 DEBUG nova.compute.manager [req-98dfc4b9-7f6c-4cb1-b461-817a5f03d735 req-0eb84281-c9a8-448e-8a21-5bf94751b5c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Received event network-vif-plugged-4ee573ab-a829-4705-a1e2-814b74c9d270 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:37 np0005603609 nova_compute[221550]: 2026-01-31 08:11:37.801 221554 DEBUG oslo_concurrency.lockutils [req-98dfc4b9-7f6c-4cb1-b461-817a5f03d735 req-0eb84281-c9a8-448e-8a21-5bf94751b5c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:37 np0005603609 nova_compute[221550]: 2026-01-31 08:11:37.802 221554 DEBUG oslo_concurrency.lockutils [req-98dfc4b9-7f6c-4cb1-b461-817a5f03d735 req-0eb84281-c9a8-448e-8a21-5bf94751b5c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:37 np0005603609 nova_compute[221550]: 2026-01-31 08:11:37.802 221554 DEBUG oslo_concurrency.lockutils [req-98dfc4b9-7f6c-4cb1-b461-817a5f03d735 req-0eb84281-c9a8-448e-8a21-5bf94751b5c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f2ba5390-a359-4227-8ce2-17e1573171ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:37 np0005603609 nova_compute[221550]: 2026-01-31 08:11:37.803 221554 DEBUG nova.compute.manager [req-98dfc4b9-7f6c-4cb1-b461-817a5f03d735 req-0eb84281-c9a8-448e-8a21-5bf94751b5c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] No waiting events found dispatching network-vif-plugged-4ee573ab-a829-4705-a1e2-814b74c9d270 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:11:37 np0005603609 nova_compute[221550]: 2026-01-31 08:11:37.803 221554 WARNING nova.compute.manager [req-98dfc4b9-7f6c-4cb1-b461-817a5f03d735 req-0eb84281-c9a8-448e-8a21-5bf94751b5c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Received unexpected event network-vif-plugged-4ee573ab-a829-4705-a1e2-814b74c9d270 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:11:37 np0005603609 nova_compute[221550]: 2026-01-31 08:11:37.979 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:11:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:11:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:38.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:11:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:38.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:38 np0005603609 nova_compute[221550]: 2026-01-31 08:11:38.974 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:39 np0005603609 podman[270927]: 2026-01-31 08:11:39.195133974 +0000 UTC m=+0.077262388 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 03:11:39 np0005603609 podman[270928]: 2026-01-31 08:11:39.203627478 +0000 UTC m=+0.077214267 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:11:39 np0005603609 nova_compute[221550]: 2026-01-31 08:11:39.402 221554 DEBUG nova.compute.manager [req-02aee059-feb6-4e1f-a833-28f069aaaf1c req-b4497f21-39c4-4d22-b9cb-bb5026729110 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Received event network-vif-deleted-4ee573ab-a829-4705-a1e2-814b74c9d270 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:11:39 np0005603609 nova_compute[221550]: 2026-01-31 08:11:39.403 221554 INFO nova.compute.manager [req-02aee059-feb6-4e1f-a833-28f069aaaf1c req-b4497f21-39c4-4d22-b9cb-bb5026729110 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Neutron deleted interface 4ee573ab-a829-4705-a1e2-814b74c9d270; detaching it from the instance and deleting it from the info cache
Jan 31 03:11:39 np0005603609 nova_compute[221550]: 2026-01-31 08:11:39.403 221554 DEBUG nova.network.neutron [req-02aee059-feb6-4e1f-a833-28f069aaaf1c req-b4497f21-39c4-4d22-b9cb-bb5026729110 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:11:39 np0005603609 nova_compute[221550]: 2026-01-31 08:11:39.468 221554 DEBUG nova.network.neutron [-] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:11:39 np0005603609 nova_compute[221550]: 2026-01-31 08:11:39.567 221554 DEBUG nova.compute.manager [req-02aee059-feb6-4e1f-a833-28f069aaaf1c req-b4497f21-39c4-4d22-b9cb-bb5026729110 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Detach interface failed, port_id=4ee573ab-a829-4705-a1e2-814b74c9d270, reason: Instance f2ba5390-a359-4227-8ce2-17e1573171ef could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 03:11:39 np0005603609 nova_compute[221550]: 2026-01-31 08:11:39.568 221554 INFO nova.compute.manager [-] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Took 6.44 seconds to deallocate network for instance.
Jan 31 03:11:39 np0005603609 nova_compute[221550]: 2026-01-31 08:11:39.660 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:11:39 np0005603609 nova_compute[221550]: 2026-01-31 08:11:39.871 221554 DEBUG oslo_concurrency.lockutils [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:11:39 np0005603609 nova_compute[221550]: 2026-01-31 08:11:39.872 221554 DEBUG oslo_concurrency.lockutils [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:11:39 np0005603609 nova_compute[221550]: 2026-01-31 08:11:39.939 221554 DEBUG oslo_concurrency.processutils [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:11:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:40.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:11:40 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2209944268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:11:40 np0005603609 nova_compute[221550]: 2026-01-31 08:11:40.375 221554 DEBUG oslo_concurrency.processutils [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:11:40 np0005603609 nova_compute[221550]: 2026-01-31 08:11:40.380 221554 DEBUG nova.compute.provider_tree [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:11:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:40.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e286 e286: 3 total, 3 up, 3 in
Jan 31 03:11:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:40 np0005603609 nova_compute[221550]: 2026-01-31 08:11:40.952 221554 DEBUG nova.scheduler.client.report [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:11:40 np0005603609 nova_compute[221550]: 2026-01-31 08:11:40.977 221554 INFO nova.compute.manager [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Rescuing
Jan 31 03:11:40 np0005603609 nova_compute[221550]: 2026-01-31 08:11:40.977 221554 DEBUG oslo_concurrency.lockutils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:11:40 np0005603609 nova_compute[221550]: 2026-01-31 08:11:40.978 221554 DEBUG oslo_concurrency.lockutils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquired lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:11:40 np0005603609 nova_compute[221550]: 2026-01-31 08:11:40.978 221554 DEBUG nova.network.neutron [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 03:11:41 np0005603609 nova_compute[221550]: 2026-01-31 08:11:41.492 221554 DEBUG oslo_concurrency.lockutils [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:11:41 np0005603609 nova_compute[221550]: 2026-01-31 08:11:41.792 221554 INFO nova.scheduler.client.report [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Deleted allocations for instance f2ba5390-a359-4227-8ce2-17e1573171ef
Jan 31 03:11:42 np0005603609 nova_compute[221550]: 2026-01-31 08:11:42.116 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:11:42 np0005603609 nova_compute[221550]: 2026-01-31 08:11:42.127 221554 DEBUG oslo_concurrency.lockutils [None req-331574e5-bc71-4e60-bd84-548348ff2a33 358ec3f014264f7c89f19a9ad2ce8166 0268430b3cb6412c94362feb0e114d3b - - default default] Lock "f2ba5390-a359-4227-8ce2-17e1573171ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.449s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:11:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:42.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:42.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:42 np0005603609 nova_compute[221550]: 2026-01-31 08:11:42.788 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:11:42 np0005603609 nova_compute[221550]: 2026-01-31 08:11:42.789 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:11:42 np0005603609 nova_compute[221550]: 2026-01-31 08:11:42.811 221554 DEBUG nova.compute.manager [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 03:11:42 np0005603609 nova_compute[221550]: 2026-01-31 08:11:42.876 221554 DEBUG nova.network.neutron [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Updating instance_info_cache with network_info: [{"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:11:42 np0005603609 nova_compute[221550]: 2026-01-31 08:11:42.880 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:11:42 np0005603609 nova_compute[221550]: 2026-01-31 08:11:42.880 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:11:42 np0005603609 nova_compute[221550]: 2026-01-31 08:11:42.886 221554 DEBUG nova.virt.hardware [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 03:11:42 np0005603609 nova_compute[221550]: 2026-01-31 08:11:42.887 221554 INFO nova.compute.claims [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Claim successful on node compute-1.ctlplane.example.com
Jan 31 03:11:42 np0005603609 nova_compute[221550]: 2026-01-31 08:11:42.892 221554 DEBUG oslo_concurrency.lockutils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Releasing lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.006 221554 DEBUG oslo_concurrency.processutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.168 221554 DEBUG nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.337 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847088.336465, 878fb573-f407-49e8-9643-f5a766a77df9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.338 221554 INFO nova.compute.manager [-] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] VM Stopped (Lifecycle Event)
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.360 221554 DEBUG nova.compute.manager [None req-e0a2d79a-ef10-449c-ae52-c038429b96eb - - - - - -] [instance: 878fb573-f407-49e8-9643-f5a766a77df9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:11:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:11:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1515250436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.436 221554 DEBUG oslo_concurrency.processutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.440 221554 DEBUG nova.compute.provider_tree [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.458 221554 DEBUG nova.scheduler.client.report [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.486 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.487 221554 DEBUG nova.compute.manager [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.567 221554 DEBUG nova.compute.manager [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.568 221554 DEBUG nova.network.neutron [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.586 221554 INFO nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.638 221554 DEBUG nova.compute.manager [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.736 221554 DEBUG nova.compute.manager [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.738 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.739 221554 INFO nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Creating image(s)
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.765 221554 DEBUG nova.storage.rbd_utils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.796 221554 DEBUG nova.storage.rbd_utils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.821 221554 DEBUG nova.storage.rbd_utils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.825 221554 DEBUG oslo_concurrency.processutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.849 221554 DEBUG nova.policy [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5366d122b359489fb9d2bda8d19611a6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.894 221554 DEBUG oslo_concurrency.processutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.895 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.896 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.896 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.939 221554 DEBUG nova.storage.rbd_utils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:11:43 np0005603609 nova_compute[221550]: 2026-01-31 08:11:43.942 221554 DEBUG oslo_concurrency.processutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:11:44 np0005603609 nova_compute[221550]: 2026-01-31 08:11:44.235 221554 DEBUG oslo_concurrency.processutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:11:44 np0005603609 nova_compute[221550]: 2026-01-31 08:11:44.312 221554 DEBUG nova.storage.rbd_utils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] resizing rbd image cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 03:11:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:44.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:44 np0005603609 nova_compute[221550]: 2026-01-31 08:11:44.474 221554 DEBUG nova.objects.instance [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'migration_context' on Instance uuid cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:44 np0005603609 nova_compute[221550]: 2026-01-31 08:11:44.499 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:11:44 np0005603609 nova_compute[221550]: 2026-01-31 08:11:44.500 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Ensure instance console log exists: /var/lib/nova/instances/cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:11:44 np0005603609 nova_compute[221550]: 2026-01-31 08:11:44.501 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:44 np0005603609 nova_compute[221550]: 2026-01-31 08:11:44.501 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:44 np0005603609 nova_compute[221550]: 2026-01-31 08:11:44.501 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:44 np0005603609 nova_compute[221550]: 2026-01-31 08:11:44.661 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:44.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:44 np0005603609 nova_compute[221550]: 2026-01-31 08:11:44.878 221554 DEBUG nova.network.neutron [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Successfully created port: b02f2fdc-eebe-4514-b149-c60b25a69d10 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:11:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:11:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3253062455' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:11:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:11:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3253062455' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:11:45 np0005603609 kernel: tap05c06d54-12 (unregistering): left promiscuous mode
Jan 31 03:11:45 np0005603609 NetworkManager[49064]: <info>  [1769847105.5858] device (tap05c06d54-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:11:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:45Z|00497|binding|INFO|Releasing lport 05c06d54-1257-48d3-8a5a-f92423aadbd8 from this chassis (sb_readonly=0)
Jan 31 03:11:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:45Z|00498|binding|INFO|Setting lport 05c06d54-1257-48d3-8a5a-f92423aadbd8 down in Southbound
Jan 31 03:11:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:45Z|00499|binding|INFO|Removing iface tap05c06d54-12 ovn-installed in OVS
Jan 31 03:11:45 np0005603609 nova_compute[221550]: 2026-01-31 08:11:45.596 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:45.612 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:5b:36 10.100.0.6'], port_security=['fa:16:3e:3e:5b:36 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '981eb990-ec0c-4673-98e7-102afbe0bb51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=05c06d54-1257-48d3-8a5a-f92423aadbd8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:11:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:45.613 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 05c06d54-1257-48d3-8a5a-f92423aadbd8 in datapath 088d6992-6ba6-4719-a977-b3d306740157 unbound from our chassis#033[00m
Jan 31 03:11:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:45.614 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 088d6992-6ba6-4719-a977-b3d306740157, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:11:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:45.616 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7666ba28-add8-4909-a2a7-089294495721]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:45.616 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 namespace which is not needed anymore#033[00m
Jan 31 03:11:45 np0005603609 nova_compute[221550]: 2026-01-31 08:11:45.624 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:45 np0005603609 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000079.scope: Deactivated successfully.
Jan 31 03:11:45 np0005603609 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000079.scope: Consumed 12.792s CPU time.
Jan 31 03:11:45 np0005603609 systemd-machined[190912]: Machine qemu-58-instance-00000079 terminated.
Jan 31 03:11:45 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[269977]: [NOTICE]   (269981) : haproxy version is 2.8.14-c23fe91
Jan 31 03:11:45 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[269977]: [NOTICE]   (269981) : path to executable is /usr/sbin/haproxy
Jan 31 03:11:45 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[269977]: [WARNING]  (269981) : Exiting Master process...
Jan 31 03:11:45 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[269977]: [ALERT]    (269981) : Current worker (269986) exited with code 143 (Terminated)
Jan 31 03:11:45 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[269977]: [WARNING]  (269981) : All workers exited. Exiting... (0)
Jan 31 03:11:45 np0005603609 systemd[1]: libpod-a220f369d0e60aef2aaa17ce9d7a563d5898800e02cf853c2a9baf2c970f6e3c.scope: Deactivated successfully.
Jan 31 03:11:45 np0005603609 podman[271209]: 2026-01-31 08:11:45.729247426 +0000 UTC m=+0.043212889 container died a220f369d0e60aef2aaa17ce9d7a563d5898800e02cf853c2a9baf2c970f6e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 03:11:45 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a220f369d0e60aef2aaa17ce9d7a563d5898800e02cf853c2a9baf2c970f6e3c-userdata-shm.mount: Deactivated successfully.
Jan 31 03:11:45 np0005603609 systemd[1]: var-lib-containers-storage-overlay-383bb815936b3f1ba74990b12994890a51c0bff1a6d497383034e0680767c412-merged.mount: Deactivated successfully.
Jan 31 03:11:45 np0005603609 podman[271209]: 2026-01-31 08:11:45.784403331 +0000 UTC m=+0.098368784 container cleanup a220f369d0e60aef2aaa17ce9d7a563d5898800e02cf853c2a9baf2c970f6e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 03:11:45 np0005603609 systemd[1]: libpod-conmon-a220f369d0e60aef2aaa17ce9d7a563d5898800e02cf853c2a9baf2c970f6e3c.scope: Deactivated successfully.
Jan 31 03:11:45 np0005603609 nova_compute[221550]: 2026-01-31 08:11:45.816 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:45 np0005603609 nova_compute[221550]: 2026-01-31 08:11:45.820 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:45 np0005603609 podman[271240]: 2026-01-31 08:11:45.856577255 +0000 UTC m=+0.053819484 container remove a220f369d0e60aef2aaa17ce9d7a563d5898800e02cf853c2a9baf2c970f6e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:11:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:45.860 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c3349420-ef21-4e6f-8925-805e47b99390]: (4, ('Sat Jan 31 08:11:45 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 (a220f369d0e60aef2aaa17ce9d7a563d5898800e02cf853c2a9baf2c970f6e3c)\na220f369d0e60aef2aaa17ce9d7a563d5898800e02cf853c2a9baf2c970f6e3c\nSat Jan 31 08:11:45 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 (a220f369d0e60aef2aaa17ce9d7a563d5898800e02cf853c2a9baf2c970f6e3c)\na220f369d0e60aef2aaa17ce9d7a563d5898800e02cf853c2a9baf2c970f6e3c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:45.862 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3727ff7f-972f-4869-a14e-a85cf6fe34e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:45.863 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:45 np0005603609 nova_compute[221550]: 2026-01-31 08:11:45.864 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:45 np0005603609 kernel: tap088d6992-60: left promiscuous mode
Jan 31 03:11:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:45 np0005603609 nova_compute[221550]: 2026-01-31 08:11:45.873 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:45.876 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c62f4492-e603-4f20-ac5f-ec2261c190ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:45.891 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[af260267-ef4b-4847-b3c6-4a0492e4afcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:45.892 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1227cc-4df1-4657-8047-bc912c438841]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:45.904 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1def557f-0722-48e4-9564-c9e53c1f53f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732137, 'reachable_time': 35549, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271268, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:45.906 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:11:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:45.906 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[81f331c4-a698-4185-a165-7c2e51c14408]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:45 np0005603609 systemd[1]: run-netns-ovnmeta\x2d088d6992\x2d6ba6\x2d4719\x2da977\x2db3d306740157.mount: Deactivated successfully.
Jan 31 03:11:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e287 e287: 3 total, 3 up, 3 in
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.176 221554 DEBUG nova.network.neutron [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Successfully updated port: b02f2fdc-eebe-4514-b149-c60b25a69d10 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.182 221554 INFO nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.186 221554 INFO nova.virt.libvirt.driver [-] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Instance destroyed successfully.#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.187 221554 DEBUG nova.objects.instance [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'numa_topology' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.206 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "refresh_cache-cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.207 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquired lock "refresh_cache-cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.207 221554 DEBUG nova.network.neutron [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.210 221554 INFO nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Attempting a stable device rescue#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.217 221554 DEBUG nova.compute.manager [req-1e81ddab-6102-40cb-84ca-4c6e6ba54428 req-5af98eef-7af6-49cf-a411-8636cc24fce7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received event network-vif-unplugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.218 221554 DEBUG oslo_concurrency.lockutils [req-1e81ddab-6102-40cb-84ca-4c6e6ba54428 req-5af98eef-7af6-49cf-a411-8636cc24fce7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.218 221554 DEBUG oslo_concurrency.lockutils [req-1e81ddab-6102-40cb-84ca-4c6e6ba54428 req-5af98eef-7af6-49cf-a411-8636cc24fce7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.219 221554 DEBUG oslo_concurrency.lockutils [req-1e81ddab-6102-40cb-84ca-4c6e6ba54428 req-5af98eef-7af6-49cf-a411-8636cc24fce7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.219 221554 DEBUG nova.compute.manager [req-1e81ddab-6102-40cb-84ca-4c6e6ba54428 req-5af98eef-7af6-49cf-a411-8636cc24fce7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] No waiting events found dispatching network-vif-unplugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.220 221554 WARNING nova.compute.manager [req-1e81ddab-6102-40cb-84ca-4c6e6ba54428 req-5af98eef-7af6-49cf-a411-8636cc24fce7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received unexpected event network-vif-unplugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.321 221554 DEBUG nova.compute.manager [req-8bd07fa3-a241-496e-92aa-452e5efd848b req-830e9f01-c080-44e0-86de-568a2f844c2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Received event network-changed-b02f2fdc-eebe-4514-b149-c60b25a69d10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.322 221554 DEBUG nova.compute.manager [req-8bd07fa3-a241-496e-92aa-452e5efd848b req-830e9f01-c080-44e0-86de-568a2f844c2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Refreshing instance network info cache due to event network-changed-b02f2fdc-eebe-4514-b149-c60b25a69d10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.322 221554 DEBUG oslo_concurrency.lockutils [req-8bd07fa3-a241-496e-92aa-452e5efd848b req-830e9f01-c080-44e0-86de-568a2f844c2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:11:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:46.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.369 221554 DEBUG nova.network.neutron [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.432 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.445 221554 DEBUG nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'scsi', 'dev': 'sdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.448 221554 DEBUG nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.449 221554 INFO nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Creating image(s)#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.472 221554 DEBUG nova.storage.rbd_utils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 981eb990-ec0c-4673-98e7-102afbe0bb51_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.476 221554 DEBUG nova.objects.instance [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.525 221554 DEBUG nova.storage.rbd_utils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 981eb990-ec0c-4673-98e7-102afbe0bb51_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.550 221554 DEBUG nova.storage.rbd_utils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 981eb990-ec0c-4673-98e7-102afbe0bb51_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.553 221554 DEBUG oslo_concurrency.lockutils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "a77eb6d1b1d756ae00c1c88ef4d2e7925d62829a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.554 221554 DEBUG oslo_concurrency.lockutils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "a77eb6d1b1d756ae00c1c88ef4d2e7925d62829a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:46.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.790 221554 DEBUG nova.virt.libvirt.imagebackend [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/7a74ff6f-b876-4da2-8910-2a6929d0fcbc/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/7a74ff6f-b876-4da2-8910-2a6929d0fcbc/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.837 221554 DEBUG nova.virt.libvirt.imagebackend [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Selected location: {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/7a74ff6f-b876-4da2-8910-2a6929d0fcbc/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.837 221554 DEBUG nova.storage.rbd_utils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] cloning images/7a74ff6f-b876-4da2-8910-2a6929d0fcbc@snap to None/981eb990-ec0c-4673-98e7-102afbe0bb51_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.928 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847091.9275372, f2ba5390-a359-4227-8ce2-17e1573171ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.929 221554 INFO nova.compute.manager [-] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:11:46 np0005603609 nova_compute[221550]: 2026-01-31 08:11:46.945 221554 DEBUG nova.compute.manager [None req-e6582b3b-475a-420c-a1c0-f69a8686ab1c - - - - - -] [instance: f2ba5390-a359-4227-8ce2-17e1573171ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.080 221554 DEBUG oslo_concurrency.lockutils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "a77eb6d1b1d756ae00c1c88ef4d2e7925d62829a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.127 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.132 221554 DEBUG nova.objects.instance [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'migration_context' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.176 221554 DEBUG nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.178 221554 DEBUG nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Start _get_guest_xml network_info=[{"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "vif_mac": "fa:16:3e:3e:5b:36"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'scsi', 'dev': 'sdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '7a74ff6f-b876-4da2-8910-2a6929d0fcbc', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.178 221554 DEBUG nova.objects.instance [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'resources' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.209 221554 WARNING nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.214 221554 DEBUG nova.virt.libvirt.host [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.215 221554 DEBUG nova.virt.libvirt.host [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.219 221554 DEBUG nova.virt.libvirt.host [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.219 221554 DEBUG nova.virt.libvirt.host [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.220 221554 DEBUG nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.221 221554 DEBUG nova.virt.hardware [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.221 221554 DEBUG nova.virt.hardware [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.221 221554 DEBUG nova.virt.hardware [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.221 221554 DEBUG nova.virt.hardware [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.222 221554 DEBUG nova.virt.hardware [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.222 221554 DEBUG nova.virt.hardware [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.222 221554 DEBUG nova.virt.hardware [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.222 221554 DEBUG nova.virt.hardware [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.222 221554 DEBUG nova.virt.hardware [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.222 221554 DEBUG nova.virt.hardware [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.223 221554 DEBUG nova.virt.hardware [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.223 221554 DEBUG nova.objects.instance [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.245 221554 DEBUG oslo_concurrency.processutils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.377 221554 DEBUG nova.network.neutron [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Updating instance_info_cache with network_info: [{"id": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "address": "fa:16:3e:4c:a9:e0", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02f2fdc-ee", "ovs_interfaceid": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.416 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Releasing lock "refresh_cache-cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.416 221554 DEBUG nova.compute.manager [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Instance network_info: |[{"id": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "address": "fa:16:3e:4c:a9:e0", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02f2fdc-ee", "ovs_interfaceid": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.417 221554 DEBUG oslo_concurrency.lockutils [req-8bd07fa3-a241-496e-92aa-452e5efd848b req-830e9f01-c080-44e0-86de-568a2f844c2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.417 221554 DEBUG nova.network.neutron [req-8bd07fa3-a241-496e-92aa-452e5efd848b req-830e9f01-c080-44e0-86de-568a2f844c2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Refreshing network info cache for port b02f2fdc-eebe-4514-b149-c60b25a69d10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.419 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Start _get_guest_xml network_info=[{"id": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "address": "fa:16:3e:4c:a9:e0", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02f2fdc-ee", "ovs_interfaceid": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.422 221554 WARNING nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.429 221554 DEBUG nova.virt.libvirt.host [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.429 221554 DEBUG nova.virt.libvirt.host [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.435 221554 DEBUG nova.virt.libvirt.host [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.436 221554 DEBUG nova.virt.libvirt.host [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.437 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.437 221554 DEBUG nova.virt.hardware [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.438 221554 DEBUG nova.virt.hardware [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.438 221554 DEBUG nova.virt.hardware [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.438 221554 DEBUG nova.virt.hardware [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.438 221554 DEBUG nova.virt.hardware [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.439 221554 DEBUG nova.virt.hardware [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.439 221554 DEBUG nova.virt.hardware [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.439 221554 DEBUG nova.virt.hardware [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.439 221554 DEBUG nova.virt.hardware [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.439 221554 DEBUG nova.virt.hardware [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.440 221554 DEBUG nova.virt.hardware [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.442 221554 DEBUG oslo_concurrency.processutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:11:47 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4215506736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.686 221554 DEBUG oslo_concurrency.processutils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.718 221554 DEBUG oslo_concurrency.processutils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:11:47 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2970087483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.850 221554 DEBUG oslo_concurrency.processutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.883 221554 DEBUG nova.storage.rbd_utils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:47 np0005603609 nova_compute[221550]: 2026-01-31 08:11:47.886 221554 DEBUG oslo_concurrency.processutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:11:48 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1997915657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.112 221554 DEBUG oslo_concurrency.processutils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.114 221554 DEBUG oslo_concurrency.processutils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:11:48 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2564914381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.279 221554 DEBUG oslo_concurrency.processutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.281 221554 DEBUG nova.virt.libvirt.vif [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:11:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1204367585',display_name='tempest-ServersTestJSON-server-1204367585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1204367585',id=124,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUMsWS/2njbCwpa8W/g+RZeeBxwDeZUyYnm2d3fVMNtF6rQuYEGAl4axDfhINo266hc09EfeFpXW/Sbkb7jtNhQtAmpKq74OeE5rpRAFX5MWuUVo0Z0N7GLaehTFWSX5A==',key_name='tempest-key-954900079',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-m3j374ow',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:11:43Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "address": "fa:16:3e:4c:a9:e0", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02f2fdc-ee", "ovs_interfaceid": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.282 221554 DEBUG nova.network.os_vif_util [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "address": "fa:16:3e:4c:a9:e0", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02f2fdc-ee", "ovs_interfaceid": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.283 221554 DEBUG nova.network.os_vif_util [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:a9:e0,bridge_name='br-int',has_traffic_filtering=True,id=b02f2fdc-eebe-4514-b149-c60b25a69d10,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb02f2fdc-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.284 221554 DEBUG nova.objects.instance [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'pci_devices' on Instance uuid cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:11:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:48.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.501 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <uuid>cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4</uuid>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <name>instance-0000007c</name>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServersTestJSON-server-1204367585</nova:name>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:11:47</nova:creationTime>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <nova:user uuid="5366d122b359489fb9d2bda8d19611a6">tempest-ServersTestJSON-327201738-project-member</nova:user>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <nova:project uuid="4aa06cf35d8c468fb16884f19dc8ce71">tempest-ServersTestJSON-327201738</nova:project>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <nova:port uuid="b02f2fdc-eebe-4514-b149-c60b25a69d10">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <entry name="serial">cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4</entry>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <entry name="uuid">cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4</entry>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4_disk">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4_disk.config">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:4c:a9:e0"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <target dev="tapb02f2fdc-ee"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4/console.log" append="off"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:11:48 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:11:48 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.505 221554 DEBUG nova.compute.manager [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Preparing to wait for external event network-vif-plugged-b02f2fdc-eebe-4514-b149-c60b25a69d10 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.505 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.506 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.506 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.507 221554 DEBUG nova.virt.libvirt.vif [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:11:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1204367585',display_name='tempest-ServersTestJSON-server-1204367585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1204367585',id=124,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUMsWS/2njbCwpa8W/g+RZeeBxwDeZUyYnm2d3fVMNtF6rQuYEGAl4axDfhINo266hc09EfeFpXW/Sbkb7jtNhQtAmpKq74OeE5rpRAFX5MWuUVo0Z0N7GLaehTFWSX5A==',key_name='tempest-key-954900079',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-m3j374ow',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:11:43Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "address": "fa:16:3e:4c:a9:e0", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02f2fdc-ee", "ovs_interfaceid": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.507 221554 DEBUG nova.network.os_vif_util [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "address": "fa:16:3e:4c:a9:e0", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02f2fdc-ee", "ovs_interfaceid": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.508 221554 DEBUG nova.network.os_vif_util [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:a9:e0,bridge_name='br-int',has_traffic_filtering=True,id=b02f2fdc-eebe-4514-b149-c60b25a69d10,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb02f2fdc-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.508 221554 DEBUG os_vif [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:a9:e0,bridge_name='br-int',has_traffic_filtering=True,id=b02f2fdc-eebe-4514-b149-c60b25a69d10,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb02f2fdc-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.509 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.509 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.509 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.511 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.512 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb02f2fdc-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.512 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb02f2fdc-ee, col_values=(('external_ids', {'iface-id': 'b02f2fdc-eebe-4514-b149-c60b25a69d10', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4c:a9:e0', 'vm-uuid': 'cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.513 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:48 np0005603609 NetworkManager[49064]: <info>  [1769847108.5148] manager: (tapb02f2fdc-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.516 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:11:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:11:48 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2760709419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.521 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.521 221554 INFO os_vif [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:a9:e0,bridge_name='br-int',has_traffic_filtering=True,id=b02f2fdc-eebe-4514-b149-c60b25a69d10,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb02f2fdc-ee')#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.537 221554 DEBUG oslo_concurrency.processutils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.538 221554 DEBUG nova.virt.libvirt.vif [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:11:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-562797465',display_name='tempest-ServerStableDeviceRescueTest-server-562797465',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-562797465',id=121,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:24Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1633c84ea1bf46b080aaafd30bbcf25f',ramdisk_id='',reservation_id='r-orbojs0e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-569420416',owner_user_name='tempest-ServerStableDeviceRescueTest-569420416-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:11:34Z,user_data=None,user_id='d7d9a44201d548aba1e1654e136ddd06',uuid=981eb990-ec0c-4673-98e7-102afbe0bb51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "vif_mac": "fa:16:3e:3e:5b:36"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.539 221554 DEBUG nova.network.os_vif_util [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converting VIF {"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "vif_mac": "fa:16:3e:3e:5b:36"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.539 221554 DEBUG nova.network.os_vif_util [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:5b:36,bridge_name='br-int',has_traffic_filtering=True,id=05c06d54-1257-48d3-8a5a-f92423aadbd8,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c06d54-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.541 221554 DEBUG nova.objects.instance [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'pci_devices' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.688 221554 DEBUG nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <uuid>981eb990-ec0c-4673-98e7-102afbe0bb51</uuid>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <name>instance-00000079</name>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-562797465</nova:name>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:11:47</nova:creationTime>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <nova:user uuid="d7d9a44201d548aba1e1654e136ddd06">tempest-ServerStableDeviceRescueTest-569420416-project-member</nova:user>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <nova:project uuid="1633c84ea1bf46b080aaafd30bbcf25f">tempest-ServerStableDeviceRescueTest-569420416</nova:project>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <nova:port uuid="05c06d54-1257-48d3-8a5a-f92423aadbd8">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <entry name="serial">981eb990-ec0c-4673-98e7-102afbe0bb51</entry>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <entry name="uuid">981eb990-ec0c-4673-98e7-102afbe0bb51</entry>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/981eb990-ec0c-4673-98e7-102afbe0bb51_disk">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/981eb990-ec0c-4673-98e7-102afbe0bb51_disk.config">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/981eb990-ec0c-4673-98e7-102afbe0bb51_disk.rescue">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <target dev="sdb" bus="scsi"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <boot order="1"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:3e:5b:36"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <target dev="tap05c06d54-12"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51/console.log" append="off"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:11:48 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:11:48 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:11:48 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:11:48 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.699 221554 INFO nova.virt.libvirt.driver [-] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Instance destroyed successfully.#033[00m
Jan 31 03:11:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:48.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.865 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.866 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.866 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No VIF found with MAC fa:16:3e:4c:a9:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.866 221554 INFO nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Using config drive#033[00m
Jan 31 03:11:48 np0005603609 nova_compute[221550]: 2026-01-31 08:11:48.891 221554 DEBUG nova.storage.rbd_utils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:49 np0005603609 nova_compute[221550]: 2026-01-31 08:11:49.053 221554 DEBUG nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:11:49 np0005603609 nova_compute[221550]: 2026-01-31 08:11:49.054 221554 DEBUG nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:11:49 np0005603609 nova_compute[221550]: 2026-01-31 08:11:49.054 221554 DEBUG nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:11:49 np0005603609 nova_compute[221550]: 2026-01-31 08:11:49.054 221554 DEBUG nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No VIF found with MAC fa:16:3e:3e:5b:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:11:49 np0005603609 nova_compute[221550]: 2026-01-31 08:11:49.055 221554 INFO nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Using config drive#033[00m
Jan 31 03:11:49 np0005603609 nova_compute[221550]: 2026-01-31 08:11:49.082 221554 DEBUG nova.storage.rbd_utils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 981eb990-ec0c-4673-98e7-102afbe0bb51_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:49 np0005603609 nova_compute[221550]: 2026-01-31 08:11:49.518 221554 DEBUG nova.objects.instance [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:49 np0005603609 nova_compute[221550]: 2026-01-31 08:11:49.663 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:49 np0005603609 nova_compute[221550]: 2026-01-31 08:11:49.727 221554 DEBUG nova.objects.instance [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'keypairs' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:50.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:50 np0005603609 nova_compute[221550]: 2026-01-31 08:11:50.457 221554 DEBUG nova.compute.manager [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:50 np0005603609 nova_compute[221550]: 2026-01-31 08:11:50.457 221554 DEBUG oslo_concurrency.lockutils [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:50 np0005603609 nova_compute[221550]: 2026-01-31 08:11:50.458 221554 DEBUG oslo_concurrency.lockutils [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:50 np0005603609 nova_compute[221550]: 2026-01-31 08:11:50.458 221554 DEBUG oslo_concurrency.lockutils [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:50 np0005603609 nova_compute[221550]: 2026-01-31 08:11:50.459 221554 DEBUG nova.compute.manager [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] No waiting events found dispatching network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:11:50 np0005603609 nova_compute[221550]: 2026-01-31 08:11:50.459 221554 WARNING nova.compute.manager [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received unexpected event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:11:50 np0005603609 nova_compute[221550]: 2026-01-31 08:11:50.658 221554 INFO nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Creating config drive at /var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51/disk.config.rescue#033[00m
Jan 31 03:11:50 np0005603609 nova_compute[221550]: 2026-01-31 08:11:50.664 221554 DEBUG oslo_concurrency.processutils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbk9iknae execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:50 np0005603609 nova_compute[221550]: 2026-01-31 08:11:50.688 221554 INFO nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Creating config drive at /var/lib/nova/instances/cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4/disk.config#033[00m
Jan 31 03:11:50 np0005603609 nova_compute[221550]: 2026-01-31 08:11:50.692 221554 DEBUG oslo_concurrency.processutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjk9lt8kj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:50.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:50 np0005603609 nova_compute[221550]: 2026-01-31 08:11:50.789 221554 DEBUG oslo_concurrency.processutils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbk9iknae" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:50 np0005603609 nova_compute[221550]: 2026-01-31 08:11:50.816 221554 DEBUG nova.storage.rbd_utils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 981eb990-ec0c-4673-98e7-102afbe0bb51_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:50 np0005603609 nova_compute[221550]: 2026-01-31 08:11:50.820 221554 DEBUG oslo_concurrency.processutils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51/disk.config.rescue 981eb990-ec0c-4673-98e7-102afbe0bb51_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:50 np0005603609 nova_compute[221550]: 2026-01-31 08:11:50.835 221554 DEBUG oslo_concurrency.processutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjk9lt8kj" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:50 np0005603609 nova_compute[221550]: 2026-01-31 08:11:50.860 221554 DEBUG nova.storage.rbd_utils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:50 np0005603609 nova_compute[221550]: 2026-01-31 08:11:50.863 221554 DEBUG oslo_concurrency.processutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4/disk.config cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.017 221554 DEBUG oslo_concurrency.processutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4/disk.config cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.018 221554 INFO nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Deleting local config drive /var/lib/nova/instances/cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4/disk.config because it was imported into RBD.#033[00m
Jan 31 03:11:51 np0005603609 kernel: tapb02f2fdc-ee: entered promiscuous mode
Jan 31 03:11:51 np0005603609 NetworkManager[49064]: <info>  [1769847111.0683] manager: (tapb02f2fdc-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/248)
Jan 31 03:11:51 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:51Z|00500|binding|INFO|Claiming lport b02f2fdc-eebe-4514-b149-c60b25a69d10 for this chassis.
Jan 31 03:11:51 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:51Z|00501|binding|INFO|b02f2fdc-eebe-4514-b149-c60b25a69d10: Claiming fa:16:3e:4c:a9:e0 10.100.0.13
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.070 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.082 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:a9:e0 10.100.0.13'], port_security=['fa:16:3e:4c:a9:e0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b88251fc-7610-460a-ba55-2ed186c6f696', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9f076789-2616-4234-8eab-1fc3da7d63b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7bb3690e-e43b-4d54-9d64-4797e471bf50, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=b02f2fdc-eebe-4514-b149-c60b25a69d10) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.084 140058 INFO neutron.agent.ovn.metadata.agent [-] Port b02f2fdc-eebe-4514-b149-c60b25a69d10 in datapath b88251fc-7610-460a-ba55-2ed186c6f696 bound to our chassis#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.086 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b88251fc-7610-460a-ba55-2ed186c6f696#033[00m
Jan 31 03:11:51 np0005603609 systemd-machined[190912]: New machine qemu-61-instance-0000007c.
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.097 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d1cc2c1d-bb76-4916-bdb3-ae64eb8b0235]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.098 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb88251fc-71 in ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.099 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb88251fc-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.100 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[baa77fa3-9687-41be-838b-4d91b1b9ca9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.100 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0422ff37-6140-4ee4-9305-1f953996bee3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.108 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[56b2d0d4-e334-4fc3-8d2c-7b2b096119db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 systemd[1]: Started Virtual Machine qemu-61-instance-0000007c.
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.112 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:51 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:51Z|00502|binding|INFO|Setting lport b02f2fdc-eebe-4514-b149-c60b25a69d10 ovn-installed in OVS
Jan 31 03:11:51 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:51Z|00503|binding|INFO|Setting lport b02f2fdc-eebe-4514-b149-c60b25a69d10 up in Southbound
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.117 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:51 np0005603609 systemd-udevd[271695]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.122 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[25f36a8d-c560-4abe-aafa-87c34d265d7f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 NetworkManager[49064]: <info>  [1769847111.1350] device (tapb02f2fdc-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:11:51 np0005603609 NetworkManager[49064]: <info>  [1769847111.1357] device (tapb02f2fdc-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.148 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce82187-d36b-43d7-8902-538dad80c222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 systemd-udevd[271699]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:11:51 np0005603609 NetworkManager[49064]: <info>  [1769847111.1553] manager: (tapb88251fc-70): new Veth device (/org/freedesktop/NetworkManager/Devices/249)
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.154 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8e1392-768f-4dc8-af1c-59e71feb3bcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.178 221554 DEBUG oslo_concurrency.processutils [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51/disk.config.rescue 981eb990-ec0c-4673-98e7-102afbe0bb51_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.179 221554 INFO nova.virt.libvirt.driver [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Deleting local config drive /var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51/disk.config.rescue because it was imported into RBD.#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.184 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f519f49a-cfa5-41e1-8149-d754945cbffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.187 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[1813d07d-8887-4e0e-8cdf-a84bec1faa13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 NetworkManager[49064]: <info>  [1769847111.2027] device (tapb88251fc-70): carrier: link connected
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.207 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a0621039-2a54-4b56-8d75-e94198cbf138]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 kernel: tap05c06d54-12: entered promiscuous mode
Jan 31 03:11:51 np0005603609 NetworkManager[49064]: <info>  [1769847111.2205] manager: (tap05c06d54-12): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.221 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:51 np0005603609 systemd-udevd[271708]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:11:51 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:51Z|00504|binding|INFO|Claiming lport 05c06d54-1257-48d3-8a5a-f92423aadbd8 for this chassis.
Jan 31 03:11:51 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:51Z|00505|binding|INFO|05c06d54-1257-48d3-8a5a-f92423aadbd8: Claiming fa:16:3e:3e:5b:36 10.100.0.6
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.225 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[84219217-8229-4791-8839-831f149a5be8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb88251fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2a:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 735090, 'reachable_time': 17065, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271734, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 NetworkManager[49064]: <info>  [1769847111.2336] device (tap05c06d54-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:11:51 np0005603609 NetworkManager[49064]: <info>  [1769847111.2340] device (tap05c06d54-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.237 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:5b:36 10.100.0.6'], port_security=['fa:16:3e:3e:5b:36 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '981eb990-ec0c-4673-98e7-102afbe0bb51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=05c06d54-1257-48d3-8a5a-f92423aadbd8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.242 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3596f688-0d0b-4006-a26e-d925b8b7485c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:2a68'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 735090, 'tstamp': 735090}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271739, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.244 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:51 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:51Z|00506|binding|INFO|Setting lport 05c06d54-1257-48d3-8a5a-f92423aadbd8 ovn-installed in OVS
Jan 31 03:11:51 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:51Z|00507|binding|INFO|Setting lport 05c06d54-1257-48d3-8a5a-f92423aadbd8 up in Southbound
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.250 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:51 np0005603609 systemd-machined[190912]: New machine qemu-62-instance-00000079.
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.256 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[763790b5-6f0f-4411-94da-08e0264e7640]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb88251fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2a:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 735090, 'reachable_time': 17065, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271742, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 systemd[1]: Started Virtual Machine qemu-62-instance-00000079.
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.278 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[27cfe5ed-d830-4a47-afbd-043b136a740f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.317 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6515a61e-2515-4fc7-9737-719e4b8f0520]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.318 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb88251fc-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.318 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.318 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb88251fc-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:51 np0005603609 kernel: tapb88251fc-70: entered promiscuous mode
Jan 31 03:11:51 np0005603609 NetworkManager[49064]: <info>  [1769847111.3207] manager: (tapb88251fc-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.322 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb88251fc-70, col_values=(('external_ids', {'iface-id': '950341c4-aa2a-4261-8207-ff7e92fd4830'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.322 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:51 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:51Z|00508|binding|INFO|Releasing lport 950341c4-aa2a-4261-8207-ff7e92fd4830 from this chassis (sb_readonly=0)
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.329 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.330 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.331 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2a6d8f79-3e9b-489d-aadf-713dc9b3e618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.332 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.333 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'env', 'PROCESS_TAG=haproxy-b88251fc-7610-460a-ba55-2ed186c6f696', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b88251fc-7610-460a-ba55-2ed186c6f696.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.545 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847111.5450873, cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.546 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] VM Started (Lifecycle Event)#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.574 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.580 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847111.546005, cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.580 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.615 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.619 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.657 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:11:51 np0005603609 podman[271839]: 2026-01-31 08:11:51.667277652 +0000 UTC m=+0.045366562 container create 227f0294b12b7d9839714a546e7e306b5e3abebc1a012560a41d13affa3272b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 03:11:51 np0005603609 systemd[1]: Started libpod-conmon-227f0294b12b7d9839714a546e7e306b5e3abebc1a012560a41d13affa3272b6.scope.
Jan 31 03:11:51 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:11:51 np0005603609 podman[271839]: 2026-01-31 08:11:51.646364289 +0000 UTC m=+0.024453219 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:11:51 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95127cf4551a34fc1714a854c7c5ffdab7c745ad013ba9c96d319801a138127/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:11:51 np0005603609 podman[271839]: 2026-01-31 08:11:51.755530963 +0000 UTC m=+0.133619893 container init 227f0294b12b7d9839714a546e7e306b5e3abebc1a012560a41d13affa3272b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:11:51 np0005603609 podman[271839]: 2026-01-31 08:11:51.76247659 +0000 UTC m=+0.140565500 container start 227f0294b12b7d9839714a546e7e306b5e3abebc1a012560a41d13affa3272b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:11:51 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[271892]: [NOTICE]   (271901) : New worker (271903) forked
Jan 31 03:11:51 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[271892]: [NOTICE]   (271901) : Loading success.
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.822 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 05c06d54-1257-48d3-8a5a-f92423aadbd8 in datapath 088d6992-6ba6-4719-a977-b3d306740157 unbound from our chassis#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.824 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.833 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[76e551ed-5913-45cb-91ce-13cd4debcc10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.834 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap088d6992-61 in ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.836 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap088d6992-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.836 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f485c2-8f2a-4b09-971d-70d91c9f3f9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.838 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[20a03614-2835-4552-90f3-cf2cde53df9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.846 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[5c44f31a-3a4d-494e-aa04-b79bdb2d9ba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.848 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 981eb990-ec0c-4673-98e7-102afbe0bb51 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.849 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847111.84809, 981eb990-ec0c-4673-98e7-102afbe0bb51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.850 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.853 221554 DEBUG nova.compute.manager [None req-af4230ed-5ade-44c4-b5a1-edc65ffdd7fe d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.862 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd5cc93-028c-44da-adc6-36da92c85064]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.878 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.881 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.888 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[75ad41fd-a644-46fd-92eb-3b769c6674c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.893 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f81a1d43-500a-45bd-8bcc-26afd4ae6b95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 NetworkManager[49064]: <info>  [1769847111.8952] manager: (tap088d6992-60): new Veth device (/org/freedesktop/NetworkManager/Devices/252)
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.920 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.919 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[84fa378c-b7ca-44ca-9e7f-724e30e72c5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.921 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847111.849401, 981eb990-ec0c-4673-98e7-102afbe0bb51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.921 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] VM Started (Lifecycle Event)#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.923 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f28b4ca9-8146-43b7-859b-56d23f7add89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 NetworkManager[49064]: <info>  [1769847111.9381] device (tap088d6992-60): carrier: link connected
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.941 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f83b00-09cb-4356-ad17-8e69928278f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.952 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9a980ea4-a6c8-46db-8be5-225f5361e053]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 735163, 'reachable_time': 34769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271923, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.964 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b8ee48-fdaa-487e-ad30-8bdcaa276057]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:87bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 735163, 'tstamp': 735163}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271924, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.964 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:51 np0005603609 nova_compute[221550]: 2026-01-31 08:11:51.968 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.975 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7aeee02c-1d6f-4de9-aa06-23924a2535cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 735163, 'reachable_time': 34769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271925, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:51.997 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf21cd5-d3cb-466b-8411-35533b8b423e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:52.052 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7045bbbc-8111-4578-afc7-9b904b42ecd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:52.053 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:52.054 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:52.054 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:52 np0005603609 kernel: tap088d6992-60: entered promiscuous mode
Jan 31 03:11:52 np0005603609 NetworkManager[49064]: <info>  [1769847112.0570] manager: (tap088d6992-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:52.058 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:11:52Z|00509|binding|INFO|Releasing lport 7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8 from this chassis (sb_readonly=0)
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.062 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.067 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:52.067 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:52.068 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[834ed22e-667d-4716-97f5-5eb5610d78c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:52.069 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-088d6992-6ba6-4719-a977-b3d306740157
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 088d6992-6ba6-4719-a977-b3d306740157
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:11:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:11:52.069 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'env', 'PROCESS_TAG=haproxy-088d6992-6ba6-4719-a977-b3d306740157', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/088d6992-6ba6-4719-a977-b3d306740157.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:11:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:52.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:52 np0005603609 podman[271957]: 2026-01-31 08:11:52.41703456 +0000 UTC m=+0.045350980 container create d70c840530f01893a7734fadb41c6169fcd45e6cac249929943467eb0df6f8f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 03:11:52 np0005603609 systemd[1]: Started libpod-conmon-d70c840530f01893a7734fadb41c6169fcd45e6cac249929943467eb0df6f8f8.scope.
Jan 31 03:11:52 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:11:52 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed7f80cd3af10f0fd7f79a3770d44f51d53945a3ace03a6ac46736625e1dcac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:11:52 np0005603609 podman[271957]: 2026-01-31 08:11:52.393023733 +0000 UTC m=+0.021340173 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:11:52 np0005603609 podman[271957]: 2026-01-31 08:11:52.492812221 +0000 UTC m=+0.121128651 container init d70c840530f01893a7734fadb41c6169fcd45e6cac249929943467eb0df6f8f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 03:11:52 np0005603609 podman[271957]: 2026-01-31 08:11:52.497321559 +0000 UTC m=+0.125637979 container start d70c840530f01893a7734fadb41c6169fcd45e6cac249929943467eb0df6f8f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 03:11:52 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[271972]: [NOTICE]   (271976) : New worker (271978) forked
Jan 31 03:11:52 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[271972]: [NOTICE]   (271976) : Loading success.
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.713 221554 DEBUG nova.compute.manager [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Received event network-vif-plugged-b02f2fdc-eebe-4514-b149-c60b25a69d10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.714 221554 DEBUG oslo_concurrency.lockutils [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.714 221554 DEBUG oslo_concurrency.lockutils [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.715 221554 DEBUG oslo_concurrency.lockutils [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.716 221554 DEBUG nova.compute.manager [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Processing event network-vif-plugged-b02f2fdc-eebe-4514-b149-c60b25a69d10 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.716 221554 DEBUG nova.compute.manager [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Received event network-vif-plugged-b02f2fdc-eebe-4514-b149-c60b25a69d10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.717 221554 DEBUG oslo_concurrency.lockutils [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.718 221554 DEBUG oslo_concurrency.lockutils [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.718 221554 DEBUG oslo_concurrency.lockutils [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.719 221554 DEBUG nova.compute.manager [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] No waiting events found dispatching network-vif-plugged-b02f2fdc-eebe-4514-b149-c60b25a69d10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.719 221554 WARNING nova.compute.manager [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Received unexpected event network-vif-plugged-b02f2fdc-eebe-4514-b149-c60b25a69d10 for instance with vm_state building and task_state spawning.#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.720 221554 DEBUG nova.compute.manager [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.721 221554 DEBUG oslo_concurrency.lockutils [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.721 221554 DEBUG oslo_concurrency.lockutils [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.722 221554 DEBUG oslo_concurrency.lockutils [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.722 221554 DEBUG nova.compute.manager [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] No waiting events found dispatching network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.723 221554 WARNING nova.compute.manager [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received unexpected event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 for instance with vm_state rescued and task_state None.#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.724 221554 DEBUG nova.compute.manager [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.739 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847112.7282364, cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.740 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.743 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.758 221554 INFO nova.virt.libvirt.driver [-] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Instance spawned successfully.#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.759 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:11:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.790 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:52.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.794 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.821 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.842 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.843 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.843 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.844 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.844 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.845 221554 DEBUG nova.virt.libvirt.driver [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.869 221554 DEBUG nova.network.neutron [req-8bd07fa3-a241-496e-92aa-452e5efd848b req-830e9f01-c080-44e0-86de-568a2f844c2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Updated VIF entry in instance network info cache for port b02f2fdc-eebe-4514-b149-c60b25a69d10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.870 221554 DEBUG nova.network.neutron [req-8bd07fa3-a241-496e-92aa-452e5efd848b req-830e9f01-c080-44e0-86de-568a2f844c2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Updating instance_info_cache with network_info: [{"id": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "address": "fa:16:3e:4c:a9:e0", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02f2fdc-ee", "ovs_interfaceid": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.923 221554 DEBUG oslo_concurrency.lockutils [req-8bd07fa3-a241-496e-92aa-452e5efd848b req-830e9f01-c080-44e0-86de-568a2f844c2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.990 221554 INFO nova.compute.manager [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Took 9.25 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:11:52 np0005603609 nova_compute[221550]: 2026-01-31 08:11:52.991 221554 DEBUG nova.compute.manager [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:53 np0005603609 nova_compute[221550]: 2026-01-31 08:11:53.106 221554 INFO nova.compute.manager [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Took 10.25 seconds to build instance.#033[00m
Jan 31 03:11:53 np0005603609 nova_compute[221550]: 2026-01-31 08:11:53.135 221554 DEBUG oslo_concurrency.lockutils [None req-b025e86a-0ac2-4222-9cdb-2d4c5cf66ede 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:53 np0005603609 nova_compute[221550]: 2026-01-31 08:11:53.514 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:54.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:54 np0005603609 nova_compute[221550]: 2026-01-31 08:11:54.665 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:11:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:54.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:11:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:11:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:56.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:11:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:56.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:11:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:58.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:11:58 np0005603609 nova_compute[221550]: 2026-01-31 08:11:58.516 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:11:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:11:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:58.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:11:58 np0005603609 nova_compute[221550]: 2026-01-31 08:11:58.854 221554 DEBUG nova.compute.manager [req-71551ff6-43fe-48ac-9b67-64f4a32a3aed req-d34fd4fd-2a91-4b48-aa1a-8eb3d1501d84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:58 np0005603609 nova_compute[221550]: 2026-01-31 08:11:58.855 221554 DEBUG oslo_concurrency.lockutils [req-71551ff6-43fe-48ac-9b67-64f4a32a3aed req-d34fd4fd-2a91-4b48-aa1a-8eb3d1501d84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:58 np0005603609 nova_compute[221550]: 2026-01-31 08:11:58.856 221554 DEBUG oslo_concurrency.lockutils [req-71551ff6-43fe-48ac-9b67-64f4a32a3aed req-d34fd4fd-2a91-4b48-aa1a-8eb3d1501d84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:58 np0005603609 nova_compute[221550]: 2026-01-31 08:11:58.856 221554 DEBUG oslo_concurrency.lockutils [req-71551ff6-43fe-48ac-9b67-64f4a32a3aed req-d34fd4fd-2a91-4b48-aa1a-8eb3d1501d84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:58 np0005603609 nova_compute[221550]: 2026-01-31 08:11:58.857 221554 DEBUG nova.compute.manager [req-71551ff6-43fe-48ac-9b67-64f4a32a3aed req-d34fd4fd-2a91-4b48-aa1a-8eb3d1501d84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] No waiting events found dispatching network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:11:58 np0005603609 nova_compute[221550]: 2026-01-31 08:11:58.858 221554 WARNING nova.compute.manager [req-71551ff6-43fe-48ac-9b67-64f4a32a3aed req-d34fd4fd-2a91-4b48-aa1a-8eb3d1501d84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received unexpected event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 for instance with vm_state rescued and task_state None.#033[00m
Jan 31 03:11:59 np0005603609 nova_compute[221550]: 2026-01-31 08:11:59.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:59 np0005603609 nova_compute[221550]: 2026-01-31 08:11:59.666 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:59 np0005603609 nova_compute[221550]: 2026-01-31 08:11:59.727 221554 INFO nova.compute.manager [None req-70f11728-946e-49c1-82b7-c3e0cd4530b3 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Unrescuing#033[00m
Jan 31 03:11:59 np0005603609 nova_compute[221550]: 2026-01-31 08:11:59.729 221554 DEBUG oslo_concurrency.lockutils [None req-70f11728-946e-49c1-82b7-c3e0cd4530b3 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:11:59 np0005603609 nova_compute[221550]: 2026-01-31 08:11:59.729 221554 DEBUG oslo_concurrency.lockutils [None req-70f11728-946e-49c1-82b7-c3e0cd4530b3 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquired lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:11:59 np0005603609 nova_compute[221550]: 2026-01-31 08:11:59.730 221554 DEBUG nova.network.neutron [None req-70f11728-946e-49c1-82b7-c3e0cd4530b3 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:12:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:00.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:00.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.180 221554 DEBUG oslo_concurrency.lockutils [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.181 221554 DEBUG oslo_concurrency.lockutils [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.182 221554 DEBUG oslo_concurrency.lockutils [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.182 221554 DEBUG oslo_concurrency.lockutils [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.182 221554 DEBUG oslo_concurrency.lockutils [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.183 221554 INFO nova.compute.manager [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Terminating instance#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.184 221554 DEBUG nova.compute.manager [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:12:01 np0005603609 kernel: tapb02f2fdc-ee (unregistering): left promiscuous mode
Jan 31 03:12:01 np0005603609 NetworkManager[49064]: <info>  [1769847121.2172] device (tapb02f2fdc-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:12:01 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:01Z|00510|binding|INFO|Releasing lport b02f2fdc-eebe-4514-b149-c60b25a69d10 from this chassis (sb_readonly=0)
Jan 31 03:12:01 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:01Z|00511|binding|INFO|Setting lport b02f2fdc-eebe-4514-b149-c60b25a69d10 down in Southbound
Jan 31 03:12:01 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:01Z|00512|binding|INFO|Removing iface tapb02f2fdc-ee ovn-installed in OVS
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.222 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.231 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:01.236 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:a9:e0 10.100.0.13'], port_security=['fa:16:3e:4c:a9:e0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b88251fc-7610-460a-ba55-2ed186c6f696', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9f076789-2616-4234-8eab-1fc3da7d63b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7bb3690e-e43b-4d54-9d64-4797e471bf50, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=b02f2fdc-eebe-4514-b149-c60b25a69d10) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:01.238 140058 INFO neutron.agent.ovn.metadata.agent [-] Port b02f2fdc-eebe-4514-b149-c60b25a69d10 in datapath b88251fc-7610-460a-ba55-2ed186c6f696 unbound from our chassis#033[00m
Jan 31 03:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:01.239 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b88251fc-7610-460a-ba55-2ed186c6f696, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:01.240 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c99c66-fee9-4bb0-b4bc-060b18638beb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:01.240 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 namespace which is not needed anymore#033[00m
Jan 31 03:12:01 np0005603609 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Jan 31 03:12:01 np0005603609 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d0000007c.scope: Consumed 9.074s CPU time.
Jan 31 03:12:01 np0005603609 systemd-machined[190912]: Machine qemu-61-instance-0000007c terminated.
Jan 31 03:12:01 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[271892]: [NOTICE]   (271901) : haproxy version is 2.8.14-c23fe91
Jan 31 03:12:01 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[271892]: [NOTICE]   (271901) : path to executable is /usr/sbin/haproxy
Jan 31 03:12:01 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[271892]: [WARNING]  (271901) : Exiting Master process...
Jan 31 03:12:01 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[271892]: [ALERT]    (271901) : Current worker (271903) exited with code 143 (Terminated)
Jan 31 03:12:01 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[271892]: [WARNING]  (271901) : All workers exited. Exiting... (0)
Jan 31 03:12:01 np0005603609 systemd[1]: libpod-227f0294b12b7d9839714a546e7e306b5e3abebc1a012560a41d13affa3272b6.scope: Deactivated successfully.
Jan 31 03:12:01 np0005603609 conmon[271892]: conmon 227f0294b12b7d983971 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-227f0294b12b7d9839714a546e7e306b5e3abebc1a012560a41d13affa3272b6.scope/container/memory.events
Jan 31 03:12:01 np0005603609 podman[272010]: 2026-01-31 08:12:01.363128868 +0000 UTC m=+0.049109621 container died 227f0294b12b7d9839714a546e7e306b5e3abebc1a012560a41d13affa3272b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 03:12:01 np0005603609 systemd[1]: var-lib-containers-storage-overlay-b95127cf4551a34fc1714a854c7c5ffdab7c745ad013ba9c96d319801a138127-merged.mount: Deactivated successfully.
Jan 31 03:12:01 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-227f0294b12b7d9839714a546e7e306b5e3abebc1a012560a41d13affa3272b6-userdata-shm.mount: Deactivated successfully.
Jan 31 03:12:01 np0005603609 podman[272010]: 2026-01-31 08:12:01.391848659 +0000 UTC m=+0.077829382 container cleanup 227f0294b12b7d9839714a546e7e306b5e3abebc1a012560a41d13affa3272b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:12:01 np0005603609 systemd[1]: libpod-conmon-227f0294b12b7d9839714a546e7e306b5e3abebc1a012560a41d13affa3272b6.scope: Deactivated successfully.
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.410 221554 INFO nova.virt.libvirt.driver [-] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Instance destroyed successfully.#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.410 221554 DEBUG nova.objects.instance [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'resources' on Instance uuid cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:01 np0005603609 podman[272044]: 2026-01-31 08:12:01.446946763 +0000 UTC m=+0.037796590 container remove 227f0294b12b7d9839714a546e7e306b5e3abebc1a012560a41d13affa3272b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:01.453 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a7fe4d28-c8a0-4f0d-8b9e-603608087b66]: (4, ('Sat Jan 31 08:12:01 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 (227f0294b12b7d9839714a546e7e306b5e3abebc1a012560a41d13affa3272b6)\n227f0294b12b7d9839714a546e7e306b5e3abebc1a012560a41d13affa3272b6\nSat Jan 31 08:12:01 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 (227f0294b12b7d9839714a546e7e306b5e3abebc1a012560a41d13affa3272b6)\n227f0294b12b7d9839714a546e7e306b5e3abebc1a012560a41d13affa3272b6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:01.455 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7f9e2818-a2e2-428c-b7ce-92506d335978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:01.456 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb88251fc-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.458 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:01 np0005603609 kernel: tapb88251fc-70: left promiscuous mode
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.467 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:01.468 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8c277f-f2ca-40b2-81bd-34ad0666bfde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:01.486 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[70137cb8-d72a-4783-8661-0270b8f0d3b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:01.487 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f71549db-be68-428c-a59f-31456c059a94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:01.497 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[133f8cba-dbe8-4417-957d-6f1b208b7be1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 735084, 'reachable_time': 34963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272070, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:01.499 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:01.500 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[44c61a3f-6ed3-478a-8039-6aa6e6b2175c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:01 np0005603609 systemd[1]: run-netns-ovnmeta\x2db88251fc\x2d7610\x2d460a\x2dba55\x2d2ed186c6f696.mount: Deactivated successfully.
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.541 221554 DEBUG nova.virt.libvirt.vif [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:11:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1204367585',display_name='tempest-ServersTestJSON-server-1204367585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1204367585',id=124,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUMsWS/2njbCwpa8W/g+RZeeBxwDeZUyYnm2d3fVMNtF6rQuYEGAl4axDfhINo266hc09EfeFpXW/Sbkb7jtNhQtAmpKq74OeE5rpRAFX5MWuUVo0Z0N7GLaehTFWSX5A==',key_name='tempest-key-954900079',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-m3j374ow',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:11:53Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "address": "fa:16:3e:4c:a9:e0", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02f2fdc-ee", "ovs_interfaceid": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.541 221554 DEBUG nova.network.os_vif_util [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "address": "fa:16:3e:4c:a9:e0", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02f2fdc-ee", "ovs_interfaceid": "b02f2fdc-eebe-4514-b149-c60b25a69d10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.542 221554 DEBUG nova.network.os_vif_util [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:a9:e0,bridge_name='br-int',has_traffic_filtering=True,id=b02f2fdc-eebe-4514-b149-c60b25a69d10,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb02f2fdc-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.543 221554 DEBUG os_vif [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:a9:e0,bridge_name='br-int',has_traffic_filtering=True,id=b02f2fdc-eebe-4514-b149-c60b25a69d10,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb02f2fdc-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.545 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.545 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb02f2fdc-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.549 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.555 221554 INFO os_vif [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:a9:e0,bridge_name='br-int',has_traffic_filtering=True,id=b02f2fdc-eebe-4514-b149-c60b25a69d10,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb02f2fdc-ee')#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.574 221554 DEBUG nova.compute.manager [req-5bed5c69-0439-4602-b513-c863319a4815 req-519d4987-ae74-4f9c-a5c4-05c318d29600 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Received event network-vif-unplugged-b02f2fdc-eebe-4514-b149-c60b25a69d10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.574 221554 DEBUG oslo_concurrency.lockutils [req-5bed5c69-0439-4602-b513-c863319a4815 req-519d4987-ae74-4f9c-a5c4-05c318d29600 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.574 221554 DEBUG oslo_concurrency.lockutils [req-5bed5c69-0439-4602-b513-c863319a4815 req-519d4987-ae74-4f9c-a5c4-05c318d29600 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.574 221554 DEBUG oslo_concurrency.lockutils [req-5bed5c69-0439-4602-b513-c863319a4815 req-519d4987-ae74-4f9c-a5c4-05c318d29600 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.575 221554 DEBUG nova.compute.manager [req-5bed5c69-0439-4602-b513-c863319a4815 req-519d4987-ae74-4f9c-a5c4-05c318d29600 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] No waiting events found dispatching network-vif-unplugged-b02f2fdc-eebe-4514-b149-c60b25a69d10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.575 221554 DEBUG nova.compute.manager [req-5bed5c69-0439-4602-b513-c863319a4815 req-519d4987-ae74-4f9c-a5c4-05c318d29600 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Received event network-vif-unplugged-b02f2fdc-eebe-4514-b149-c60b25a69d10 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.982 221554 INFO nova.virt.libvirt.driver [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Deleting instance files /var/lib/nova/instances/cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4_del#033[00m
Jan 31 03:12:01 np0005603609 nova_compute[221550]: 2026-01-31 08:12:01.983 221554 INFO nova.virt.libvirt.driver [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Deletion of /var/lib/nova/instances/cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4_del complete#033[00m
Jan 31 03:12:02 np0005603609 nova_compute[221550]: 2026-01-31 08:12:02.071 221554 INFO nova.compute.manager [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:12:02 np0005603609 nova_compute[221550]: 2026-01-31 08:12:02.072 221554 DEBUG oslo.service.loopingcall [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:12:02 np0005603609 nova_compute[221550]: 2026-01-31 08:12:02.072 221554 DEBUG nova.compute.manager [-] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:12:02 np0005603609 nova_compute[221550]: 2026-01-31 08:12:02.073 221554 DEBUG nova.network.neutron [-] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:12:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:12:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:02.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:12:02 np0005603609 nova_compute[221550]: 2026-01-31 08:12:02.742 221554 DEBUG nova.network.neutron [None req-70f11728-946e-49c1-82b7-c3e0cd4530b3 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Updating instance_info_cache with network_info: [{"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:12:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:12:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:02.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:12:02 np0005603609 nova_compute[221550]: 2026-01-31 08:12:02.823 221554 DEBUG oslo_concurrency.lockutils [None req-70f11728-946e-49c1-82b7-c3e0cd4530b3 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Releasing lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:12:02 np0005603609 nova_compute[221550]: 2026-01-31 08:12:02.824 221554 DEBUG nova.objects.instance [None req-70f11728-946e-49c1-82b7-c3e0cd4530b3 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'flavor' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:03 np0005603609 kernel: tap05c06d54-12 (unregistering): left promiscuous mode
Jan 31 03:12:03 np0005603609 NetworkManager[49064]: <info>  [1769847123.0835] device (tap05c06d54-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.083 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.088 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:03 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:03Z|00513|binding|INFO|Releasing lport 05c06d54-1257-48d3-8a5a-f92423aadbd8 from this chassis (sb_readonly=0)
Jan 31 03:12:03 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:03Z|00514|binding|INFO|Setting lport 05c06d54-1257-48d3-8a5a-f92423aadbd8 down in Southbound
Jan 31 03:12:03 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:03Z|00515|binding|INFO|Removing iface tap05c06d54-12 ovn-installed in OVS
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.089 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.094 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:03 np0005603609 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000079.scope: Deactivated successfully.
Jan 31 03:12:03 np0005603609 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000079.scope: Consumed 12.118s CPU time.
Jan 31 03:12:03 np0005603609 systemd-machined[190912]: Machine qemu-62-instance-00000079 terminated.
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.187 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:5b:36 10.100.0.6'], port_security=['fa:16:3e:3e:5b:36 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '981eb990-ec0c-4673-98e7-102afbe0bb51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=05c06d54-1257-48d3-8a5a-f92423aadbd8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.190 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 05c06d54-1257-48d3-8a5a-f92423aadbd8 in datapath 088d6992-6ba6-4719-a977-b3d306740157 unbound from our chassis#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.193 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 088d6992-6ba6-4719-a977-b3d306740157, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.194 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7155d3e9-535b-4b89-bfc7-bfc2f7685160]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.195 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 namespace which is not needed anymore#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.249 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.253 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.264 221554 INFO nova.virt.libvirt.driver [-] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Instance destroyed successfully.#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.264 221554 DEBUG nova.objects.instance [None req-70f11728-946e-49c1-82b7-c3e0cd4530b3 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'numa_topology' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:03 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[271972]: [NOTICE]   (271976) : haproxy version is 2.8.14-c23fe91
Jan 31 03:12:03 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[271972]: [NOTICE]   (271976) : path to executable is /usr/sbin/haproxy
Jan 31 03:12:03 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[271972]: [WARNING]  (271976) : Exiting Master process...
Jan 31 03:12:03 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[271972]: [WARNING]  (271976) : Exiting Master process...
Jan 31 03:12:03 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[271972]: [ALERT]    (271976) : Current worker (271978) exited with code 143 (Terminated)
Jan 31 03:12:03 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[271972]: [WARNING]  (271976) : All workers exited. Exiting... (0)
Jan 31 03:12:03 np0005603609 systemd[1]: libpod-d70c840530f01893a7734fadb41c6169fcd45e6cac249929943467eb0df6f8f8.scope: Deactivated successfully.
Jan 31 03:12:03 np0005603609 podman[272119]: 2026-01-31 08:12:03.321479443 +0000 UTC m=+0.047147554 container died d70c840530f01893a7734fadb41c6169fcd45e6cac249929943467eb0df6f8f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:12:03 np0005603609 kernel: tap05c06d54-12: entered promiscuous mode
Jan 31 03:12:03 np0005603609 systemd-udevd[271990]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:12:03 np0005603609 NetworkManager[49064]: <info>  [1769847123.3359] manager: (tap05c06d54-12): new Tun device (/org/freedesktop/NetworkManager/Devices/254)
Jan 31 03:12:03 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:03Z|00516|binding|INFO|Claiming lport 05c06d54-1257-48d3-8a5a-f92423aadbd8 for this chassis.
Jan 31 03:12:03 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:03Z|00517|binding|INFO|05c06d54-1257-48d3-8a5a-f92423aadbd8: Claiming fa:16:3e:3e:5b:36 10.100.0.6
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.336 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:03 np0005603609 NetworkManager[49064]: <info>  [1769847123.3450] device (tap05c06d54-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:12:03 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:03Z|00518|binding|INFO|Setting lport 05c06d54-1257-48d3-8a5a-f92423aadbd8 ovn-installed in OVS
Jan 31 03:12:03 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:03Z|00519|binding|INFO|Setting lport 05c06d54-1257-48d3-8a5a-f92423aadbd8 up in Southbound
Jan 31 03:12:03 np0005603609 NetworkManager[49064]: <info>  [1769847123.3458] device (tap05c06d54-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.346 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.347 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:5b:36 10.100.0.6'], port_security=['fa:16:3e:3e:5b:36 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '981eb990-ec0c-4673-98e7-102afbe0bb51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=05c06d54-1257-48d3-8a5a-f92423aadbd8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:12:03 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d70c840530f01893a7734fadb41c6169fcd45e6cac249929943467eb0df6f8f8-userdata-shm.mount: Deactivated successfully.
Jan 31 03:12:03 np0005603609 systemd[1]: var-lib-containers-storage-overlay-1ed7f80cd3af10f0fd7f79a3770d44f51d53945a3ace03a6ac46736625e1dcac-merged.mount: Deactivated successfully.
Jan 31 03:12:03 np0005603609 podman[272119]: 2026-01-31 08:12:03.359890356 +0000 UTC m=+0.085558477 container cleanup d70c840530f01893a7734fadb41c6169fcd45e6cac249929943467eb0df6f8f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:12:03 np0005603609 systemd-machined[190912]: New machine qemu-63-instance-00000079.
Jan 31 03:12:03 np0005603609 systemd[1]: Started Virtual Machine qemu-63-instance-00000079.
Jan 31 03:12:03 np0005603609 systemd[1]: libpod-conmon-d70c840530f01893a7734fadb41c6169fcd45e6cac249929943467eb0df6f8f8.scope: Deactivated successfully.
Jan 31 03:12:03 np0005603609 podman[272160]: 2026-01-31 08:12:03.411867705 +0000 UTC m=+0.032460640 container remove d70c840530f01893a7734fadb41c6169fcd45e6cac249929943467eb0df6f8f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.415 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[81d30763-93e1-46a9-92b1-5c0245a47429]: (4, ('Sat Jan 31 08:12:03 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 (d70c840530f01893a7734fadb41c6169fcd45e6cac249929943467eb0df6f8f8)\nd70c840530f01893a7734fadb41c6169fcd45e6cac249929943467eb0df6f8f8\nSat Jan 31 08:12:03 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 (d70c840530f01893a7734fadb41c6169fcd45e6cac249929943467eb0df6f8f8)\nd70c840530f01893a7734fadb41c6169fcd45e6cac249929943467eb0df6f8f8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.416 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7aee9470-fd9e-418b-8408-c7ba56e8622a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.418 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.419 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:03 np0005603609 kernel: tap088d6992-60: left promiscuous mode
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.424 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.427 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[419a49e1-bb75-4ba0-b540-c0b2c7e21aef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.439 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6eef7c94-f40c-4e8c-9ac6-7c16691c9b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.440 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[57513fda-7180-4cf1-bb94-144c54f1d288]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.452 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[72b7421a-4b36-43b2-bd62-e998a03356b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 735158, 'reachable_time': 19702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272180, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.454 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.454 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[ed99a958-7d06-405c-b00e-8293b40bf96b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 systemd[1]: run-netns-ovnmeta\x2d088d6992\x2d6ba6\x2d4719\x2da977\x2db3d306740157.mount: Deactivated successfully.
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.454 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 05c06d54-1257-48d3-8a5a-f92423aadbd8 in datapath 088d6992-6ba6-4719-a977-b3d306740157 unbound from our chassis#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.456 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.463 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc6a4da-ee19-41d6-a549-31bfb68666cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.463 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap088d6992-61 in ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.465 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap088d6992-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.465 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb1982e-150d-43b0-ad85-33ec72d8cbac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.466 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8e8d1a00-e749-4d64-91a9-fb20f014d16d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.473 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[c700c2bc-2828-4904-830d-cec9d99e1367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.483 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e300efc3-32b1-4b41-91b8-44603659544d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.501 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fc1ba3-2987-4622-b301-c7d881dab4b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 NetworkManager[49064]: <info>  [1769847123.5085] manager: (tap088d6992-60): new Veth device (/org/freedesktop/NetworkManager/Devices/255)
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.507 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[00448a70-7d60-4178-844b-3fc33fbceef4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.535 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[e1318c9e-c1a5-4a1c-b23b-3f8e2c8c2484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.537 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[758a5fc3-ea30-4fa0-8b9c-8d32a588e554]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.553 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[26f668fd-c722-4eab-832e-d349d0a8c9d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 NetworkManager[49064]: <info>  [1769847123.5553] device (tap088d6992-60): carrier: link connected
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.564 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3159b964-61dc-489f-a3c3-3b2758787f97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736325, 'reachable_time': 27412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272205, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.572 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb75fbc-416f-48da-a49d-bf33b89fec4e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:87bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736325, 'tstamp': 736325}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272206, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.585 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9707e578-70ce-49d6-92e6-bc8136e2f5ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736325, 'reachable_time': 27412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272207, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.601 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[548ebde5-3272-45ba-b63f-c1f0a554ce93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.635 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a874acfe-218d-43ec-bc8d-a67a89cac857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.635 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.635 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.636 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.670 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:03 np0005603609 NetworkManager[49064]: <info>  [1769847123.6715] manager: (tap088d6992-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Jan 31 03:12:03 np0005603609 kernel: tap088d6992-60: entered promiscuous mode
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.673 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.675 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.676 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:03 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:03Z|00520|binding|INFO|Releasing lport 7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8 from this chassis (sb_readonly=0)
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.682 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.684 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.684 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a439cedc-a3ef-4e20-9b9a-cebc95d0fd91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.685 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-088d6992-6ba6-4719-a977-b3d306740157
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 088d6992-6ba6-4719-a977-b3d306740157
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:12:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:03.685 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'env', 'PROCESS_TAG=haproxy-088d6992-6ba6-4719-a977-b3d306740157', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/088d6992-6ba6-4719-a977-b3d306740157.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.730 221554 DEBUG nova.network.neutron [-] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.866 221554 DEBUG nova.compute.manager [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Received event network-vif-plugged-b02f2fdc-eebe-4514-b149-c60b25a69d10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.867 221554 DEBUG oslo_concurrency.lockutils [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.868 221554 DEBUG oslo_concurrency.lockutils [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.868 221554 DEBUG oslo_concurrency.lockutils [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.868 221554 DEBUG nova.compute.manager [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] No waiting events found dispatching network-vif-plugged-b02f2fdc-eebe-4514-b149-c60b25a69d10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.868 221554 WARNING nova.compute.manager [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Received unexpected event network-vif-plugged-b02f2fdc-eebe-4514-b149-c60b25a69d10 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.871 221554 INFO nova.compute.manager [-] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Took 1.80 seconds to deallocate network for instance.#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.935 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 981eb990-ec0c-4673-98e7-102afbe0bb51 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.936 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847123.9355783, 981eb990-ec0c-4673-98e7-102afbe0bb51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:12:03 np0005603609 nova_compute[221550]: 2026-01-31 08:12:03.936 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:12:03 np0005603609 podman[272282]: 2026-01-31 08:12:03.997476369 +0000 UTC m=+0.044693925 container create cd5f18ff3fc61f465edc51c8ecbd6d32883b66a9654a2aaa315a3bd7f10fb2de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.006 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.011 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:12:04 np0005603609 systemd[1]: Started libpod-conmon-cd5f18ff3fc61f465edc51c8ecbd6d32883b66a9654a2aaa315a3bd7f10fb2de.scope.
Jan 31 03:12:04 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:12:04 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b188977afa39a4383fd3ca9c7e37030af5f3328a213c1035da9806b1aeb2ba4c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:04 np0005603609 podman[272282]: 2026-01-31 08:12:04.051180159 +0000 UTC m=+0.098397775 container init cd5f18ff3fc61f465edc51c8ecbd6d32883b66a9654a2aaa315a3bd7f10fb2de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.051 221554 DEBUG oslo_concurrency.lockutils [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.051 221554 DEBUG oslo_concurrency.lockutils [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:04 np0005603609 podman[272282]: 2026-01-31 08:12:04.056693692 +0000 UTC m=+0.103911258 container start cd5f18ff3fc61f465edc51c8ecbd6d32883b66a9654a2aaa315a3bd7f10fb2de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 03:12:04 np0005603609 podman[272282]: 2026-01-31 08:12:03.973553974 +0000 UTC m=+0.020771570 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.068 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.069 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847123.9375224, 981eb990-ec0c-4673-98e7-102afbe0bb51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.069 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] VM Started (Lifecycle Event)#033[00m
Jan 31 03:12:04 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[272314]: [NOTICE]   (272318) : New worker (272320) forked
Jan 31 03:12:04 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[272314]: [NOTICE]   (272318) : Loading success.
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.119 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.122 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.182 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.192 221554 DEBUG nova.compute.manager [req-526ebf13-670b-4bac-8da1-9be7d108b155 req-ed9137d5-eec4-4163-aad5-44d9975e3336 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Received event network-vif-deleted-b02f2fdc-eebe-4514-b149-c60b25a69d10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.228 221554 DEBUG oslo_concurrency.processutils [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.334 221554 DEBUG nova.compute.manager [None req-70f11728-946e-49c1-82b7-c3e0cd4530b3 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:12:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:04.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:12:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/784879266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.714 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.727 221554 DEBUG oslo_concurrency.processutils [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.731 221554 DEBUG nova.compute.provider_tree [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.753 221554 DEBUG nova.scheduler.client.report [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:12:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:04.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:04 np0005603609 nova_compute[221550]: 2026-01-31 08:12:04.837 221554 DEBUG oslo_concurrency.lockutils [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:05 np0005603609 nova_compute[221550]: 2026-01-31 08:12:05.930 221554 INFO nova.scheduler.client.report [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Deleted allocations for instance cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4#033[00m
Jan 31 03:12:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:06.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:06 np0005603609 nova_compute[221550]: 2026-01-31 08:12:06.549 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:06.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:07.505 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:07.506 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:07.507 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:07 np0005603609 nova_compute[221550]: 2026-01-31 08:12:07.910 221554 DEBUG oslo_concurrency.lockutils [None req-15dcce13-683b-4b23-b2c7-0f99db655084 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:08.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:08 np0005603609 nova_compute[221550]: 2026-01-31 08:12:08.478 221554 DEBUG nova.compute.manager [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received event network-vif-unplugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:08 np0005603609 nova_compute[221550]: 2026-01-31 08:12:08.479 221554 DEBUG oslo_concurrency.lockutils [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:08 np0005603609 nova_compute[221550]: 2026-01-31 08:12:08.480 221554 DEBUG oslo_concurrency.lockutils [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:08 np0005603609 nova_compute[221550]: 2026-01-31 08:12:08.481 221554 DEBUG oslo_concurrency.lockutils [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:08 np0005603609 nova_compute[221550]: 2026-01-31 08:12:08.481 221554 DEBUG nova.compute.manager [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] No waiting events found dispatching network-vif-unplugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:12:08 np0005603609 nova_compute[221550]: 2026-01-31 08:12:08.482 221554 WARNING nova.compute.manager [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received unexpected event network-vif-unplugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:12:08 np0005603609 nova_compute[221550]: 2026-01-31 08:12:08.482 221554 DEBUG nova.compute.manager [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:08 np0005603609 nova_compute[221550]: 2026-01-31 08:12:08.483 221554 DEBUG oslo_concurrency.lockutils [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:08 np0005603609 nova_compute[221550]: 2026-01-31 08:12:08.483 221554 DEBUG oslo_concurrency.lockutils [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:08 np0005603609 nova_compute[221550]: 2026-01-31 08:12:08.484 221554 DEBUG oslo_concurrency.lockutils [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:08 np0005603609 nova_compute[221550]: 2026-01-31 08:12:08.485 221554 DEBUG nova.compute.manager [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] No waiting events found dispatching network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:12:08 np0005603609 nova_compute[221550]: 2026-01-31 08:12:08.485 221554 WARNING nova.compute.manager [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received unexpected event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:12:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:08.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:09.042 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:12:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:09.043 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:12:09 np0005603609 nova_compute[221550]: 2026-01-31 08:12:09.085 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:09 np0005603609 nova_compute[221550]: 2026-01-31 08:12:09.716 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:10 np0005603609 podman[272353]: 2026-01-31 08:12:10.179144081 +0000 UTC m=+0.064641395 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:12:10 np0005603609 podman[272352]: 2026-01-31 08:12:10.20656615 +0000 UTC m=+0.094270547 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:12:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:10.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:10.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:11 np0005603609 nova_compute[221550]: 2026-01-31 08:12:11.552 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:11 np0005603609 nova_compute[221550]: 2026-01-31 08:12:11.569 221554 DEBUG nova.compute.manager [req-7fb162c6-5fb6-400a-b27e-d5fa52382f30 req-dd460a1e-10a6-4949-92de-e93757ff7648 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:11 np0005603609 nova_compute[221550]: 2026-01-31 08:12:11.570 221554 DEBUG oslo_concurrency.lockutils [req-7fb162c6-5fb6-400a-b27e-d5fa52382f30 req-dd460a1e-10a6-4949-92de-e93757ff7648 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:11 np0005603609 nova_compute[221550]: 2026-01-31 08:12:11.570 221554 DEBUG oslo_concurrency.lockutils [req-7fb162c6-5fb6-400a-b27e-d5fa52382f30 req-dd460a1e-10a6-4949-92de-e93757ff7648 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:11 np0005603609 nova_compute[221550]: 2026-01-31 08:12:11.570 221554 DEBUG oslo_concurrency.lockutils [req-7fb162c6-5fb6-400a-b27e-d5fa52382f30 req-dd460a1e-10a6-4949-92de-e93757ff7648 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:11 np0005603609 nova_compute[221550]: 2026-01-31 08:12:11.571 221554 DEBUG nova.compute.manager [req-7fb162c6-5fb6-400a-b27e-d5fa52382f30 req-dd460a1e-10a6-4949-92de-e93757ff7648 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] No waiting events found dispatching network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:12:11 np0005603609 nova_compute[221550]: 2026-01-31 08:12:11.571 221554 WARNING nova.compute.manager [req-7fb162c6-5fb6-400a-b27e-d5fa52382f30 req-dd460a1e-10a6-4949-92de-e93757ff7648 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received unexpected event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:12:11 np0005603609 nova_compute[221550]: 2026-01-31 08:12:11.571 221554 DEBUG nova.compute.manager [req-7fb162c6-5fb6-400a-b27e-d5fa52382f30 req-dd460a1e-10a6-4949-92de-e93757ff7648 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:11 np0005603609 nova_compute[221550]: 2026-01-31 08:12:11.571 221554 DEBUG oslo_concurrency.lockutils [req-7fb162c6-5fb6-400a-b27e-d5fa52382f30 req-dd460a1e-10a6-4949-92de-e93757ff7648 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:11 np0005603609 nova_compute[221550]: 2026-01-31 08:12:11.572 221554 DEBUG oslo_concurrency.lockutils [req-7fb162c6-5fb6-400a-b27e-d5fa52382f30 req-dd460a1e-10a6-4949-92de-e93757ff7648 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:11 np0005603609 nova_compute[221550]: 2026-01-31 08:12:11.572 221554 DEBUG oslo_concurrency.lockutils [req-7fb162c6-5fb6-400a-b27e-d5fa52382f30 req-dd460a1e-10a6-4949-92de-e93757ff7648 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:11 np0005603609 nova_compute[221550]: 2026-01-31 08:12:11.572 221554 DEBUG nova.compute.manager [req-7fb162c6-5fb6-400a-b27e-d5fa52382f30 req-dd460a1e-10a6-4949-92de-e93757ff7648 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] No waiting events found dispatching network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:12:11 np0005603609 nova_compute[221550]: 2026-01-31 08:12:11.572 221554 WARNING nova.compute.manager [req-7fb162c6-5fb6-400a-b27e-d5fa52382f30 req-dd460a1e-10a6-4949-92de-e93757ff7648 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received unexpected event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:12:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:12:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:12.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:12:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:12.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:12 np0005603609 nova_compute[221550]: 2026-01-31 08:12:12.985 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:12 np0005603609 nova_compute[221550]: 2026-01-31 08:12:12.986 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:13 np0005603609 nova_compute[221550]: 2026-01-31 08:12:13.277 221554 DEBUG nova.compute.manager [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:12:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:14.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:14 np0005603609 nova_compute[221550]: 2026-01-31 08:12:14.718 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:12:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:14.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:12:15 np0005603609 nova_compute[221550]: 2026-01-31 08:12:15.617 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:15 np0005603609 nova_compute[221550]: 2026-01-31 08:12:15.618 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:15 np0005603609 nova_compute[221550]: 2026-01-31 08:12:15.627 221554 DEBUG nova.virt.hardware [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:12:15 np0005603609 nova_compute[221550]: 2026-01-31 08:12:15.628 221554 INFO nova.compute.claims [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:12:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:15 np0005603609 nova_compute[221550]: 2026-01-31 08:12:15.996 221554 DEBUG oslo_concurrency.processutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.191 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "1483edbd-0387-4068-b62e-4789e777f5ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.192 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "1483edbd-0387-4068-b62e-4789e777f5ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.228 221554 DEBUG nova.compute.manager [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:12:16 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:16Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:5b:36 10.100.0.6
Jan 31 03:12:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:12:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2812206464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:12:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:16.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.403 221554 DEBUG oslo_concurrency.processutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.408 221554 DEBUG nova.compute.provider_tree [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.409 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847121.407442, cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.409 221554 INFO nova.compute.manager [-] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.445 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.458 221554 DEBUG nova.compute.manager [None req-048f4ca7-7073-4077-bce0-9b4e5ea74e5e - - - - - -] [instance: cb8a6dd0-e2cc-4c18-a2ed-d7aa0c29fac4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.462 221554 DEBUG nova.scheduler.client.report [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.496 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.498 221554 DEBUG nova.compute.manager [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.501 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.509 221554 DEBUG nova.virt.hardware [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.510 221554 INFO nova.compute.claims [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.554 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.587 221554 DEBUG nova.compute.manager [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.588 221554 DEBUG nova.network.neutron [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.617 221554 INFO nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.647 221554 DEBUG nova.compute.manager [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.717 221554 DEBUG oslo_concurrency.processutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.810 221554 DEBUG nova.compute.manager [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.812 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.812 221554 INFO nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Creating image(s)#033[00m
Jan 31 03:12:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:16.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e288 e288: 3 total, 3 up, 3 in
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.851 221554 DEBUG nova.storage.rbd_utils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.881 221554 DEBUG nova.storage.rbd_utils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.906 221554 DEBUG nova.storage.rbd_utils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.910 221554 DEBUG oslo_concurrency.processutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.971 221554 DEBUG oslo_concurrency.processutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.972 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.972 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:16 np0005603609 nova_compute[221550]: 2026-01-31 08:12:16.973 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.000 221554 DEBUG nova.storage.rbd_utils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.004 221554 DEBUG oslo_concurrency.processutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.137 221554 DEBUG oslo_concurrency.processutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.146 221554 DEBUG nova.compute.provider_tree [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.172 221554 DEBUG nova.scheduler.client.report [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.210 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.211 221554 DEBUG nova.compute.manager [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.265 221554 DEBUG oslo_concurrency.processutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.266 221554 DEBUG nova.compute.manager [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.266 221554 DEBUG nova.network.neutron [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.310 221554 INFO nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.354 221554 DEBUG nova.compute.manager [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.361 221554 DEBUG nova.storage.rbd_utils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] resizing rbd image 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.432 221554 DEBUG nova.policy [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd7d9a44201d548aba1e1654e136ddd06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.469 221554 DEBUG nova.objects.instance [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'migration_context' on Instance uuid 518d36e7-b77a-415d-bd48-f53e637ed0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.483 221554 DEBUG nova.compute.manager [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.484 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.485 221554 INFO nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Creating image(s)#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.506 221554 DEBUG nova.storage.rbd_utils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 1483edbd-0387-4068-b62e-4789e777f5ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.529 221554 DEBUG nova.storage.rbd_utils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 1483edbd-0387-4068-b62e-4789e777f5ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.557 221554 DEBUG nova.storage.rbd_utils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 1483edbd-0387-4068-b62e-4789e777f5ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.562 221554 DEBUG oslo_concurrency.processutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.586 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.587 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Ensure instance console log exists: /var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.588 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.588 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.588 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.641 221554 DEBUG oslo_concurrency.processutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.642 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.642 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.643 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.673 221554 DEBUG nova.storage.rbd_utils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 1483edbd-0387-4068-b62e-4789e777f5ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:17 np0005603609 nova_compute[221550]: 2026-01-31 08:12:17.679 221554 DEBUG oslo_concurrency.processutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1483edbd-0387-4068-b62e-4789e777f5ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:18 np0005603609 nova_compute[221550]: 2026-01-31 08:12:18.012 221554 DEBUG oslo_concurrency.processutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1483edbd-0387-4068-b62e-4789e777f5ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:18 np0005603609 nova_compute[221550]: 2026-01-31 08:12:18.077 221554 DEBUG nova.storage.rbd_utils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] resizing rbd image 1483edbd-0387-4068-b62e-4789e777f5ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:12:18 np0005603609 nova_compute[221550]: 2026-01-31 08:12:18.178 221554 DEBUG nova.objects.instance [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'migration_context' on Instance uuid 1483edbd-0387-4068-b62e-4789e777f5ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:18 np0005603609 nova_compute[221550]: 2026-01-31 08:12:18.225 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:12:18 np0005603609 nova_compute[221550]: 2026-01-31 08:12:18.226 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Ensure instance console log exists: /var/lib/nova/instances/1483edbd-0387-4068-b62e-4789e777f5ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:12:18 np0005603609 nova_compute[221550]: 2026-01-31 08:12:18.226 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:18 np0005603609 nova_compute[221550]: 2026-01-31 08:12:18.227 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:18 np0005603609 nova_compute[221550]: 2026-01-31 08:12:18.227 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:18 np0005603609 nova_compute[221550]: 2026-01-31 08:12:18.346 221554 DEBUG nova.policy [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5366d122b359489fb9d2bda8d19611a6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:12:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:12:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:18.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:12:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:18.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:19.045 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:19 np0005603609 nova_compute[221550]: 2026-01-31 08:12:19.128 221554 DEBUG nova.network.neutron [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Successfully created port: 726c6b35-bbdd-44f9-81a9-69aa19514bdd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:12:19 np0005603609 nova_compute[221550]: 2026-01-31 08:12:19.717 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:12:19 np0005603609 nova_compute[221550]: 2026-01-31 08:12:19.720 221554 DEBUG nova.network.neutron [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Successfully created port: 6bd06a16-ffc6-4ae3-9810-f84a1120972a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:12:19 np0005603609 nova_compute[221550]: 2026-01-31 08:12:19.724 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:20.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:20.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:21 np0005603609 nova_compute[221550]: 2026-01-31 08:12:21.558 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:22 np0005603609 nova_compute[221550]: 2026-01-31 08:12:22.350 221554 DEBUG nova.network.neutron [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Successfully updated port: 726c6b35-bbdd-44f9-81a9-69aa19514bdd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:12:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:22.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:22 np0005603609 nova_compute[221550]: 2026-01-31 08:12:22.440 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:12:22 np0005603609 nova_compute[221550]: 2026-01-31 08:12:22.440 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquired lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:12:22 np0005603609 nova_compute[221550]: 2026-01-31 08:12:22.440 221554 DEBUG nova.network.neutron [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:12:22 np0005603609 nova_compute[221550]: 2026-01-31 08:12:22.706 221554 DEBUG nova.network.neutron [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:12:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:22.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:22 np0005603609 nova_compute[221550]: 2026-01-31 08:12:22.879 221554 DEBUG nova.compute.manager [req-308b5583-6def-4b23-af08-9a1df686df95 req-31597571-44ee-4027-8d16-1dd98f94e78b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received event network-changed-726c6b35-bbdd-44f9-81a9-69aa19514bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:22 np0005603609 nova_compute[221550]: 2026-01-31 08:12:22.879 221554 DEBUG nova.compute.manager [req-308b5583-6def-4b23-af08-9a1df686df95 req-31597571-44ee-4027-8d16-1dd98f94e78b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Refreshing instance network info cache due to event network-changed-726c6b35-bbdd-44f9-81a9-69aa19514bdd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:12:22 np0005603609 nova_compute[221550]: 2026-01-31 08:12:22.879 221554 DEBUG oslo_concurrency.lockutils [req-308b5583-6def-4b23-af08-9a1df686df95 req-31597571-44ee-4027-8d16-1dd98f94e78b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:12:23 np0005603609 nova_compute[221550]: 2026-01-31 08:12:23.127 221554 DEBUG nova.network.neutron [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Successfully updated port: 6bd06a16-ffc6-4ae3-9810-f84a1120972a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:12:23 np0005603609 nova_compute[221550]: 2026-01-31 08:12:23.181 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "refresh_cache-1483edbd-0387-4068-b62e-4789e777f5ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:12:23 np0005603609 nova_compute[221550]: 2026-01-31 08:12:23.181 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquired lock "refresh_cache-1483edbd-0387-4068-b62e-4789e777f5ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:12:23 np0005603609 nova_compute[221550]: 2026-01-31 08:12:23.182 221554 DEBUG nova.network.neutron [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:12:23 np0005603609 nova_compute[221550]: 2026-01-31 08:12:23.643 221554 DEBUG nova.compute.manager [req-4e3cd20e-d848-4265-8836-eb91911bbc27 req-f6aa4060-e306-4753-981a-5b7807691ab7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Received event network-changed-6bd06a16-ffc6-4ae3-9810-f84a1120972a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:23 np0005603609 nova_compute[221550]: 2026-01-31 08:12:23.644 221554 DEBUG nova.compute.manager [req-4e3cd20e-d848-4265-8836-eb91911bbc27 req-f6aa4060-e306-4753-981a-5b7807691ab7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Refreshing instance network info cache due to event network-changed-6bd06a16-ffc6-4ae3-9810-f84a1120972a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:12:23 np0005603609 nova_compute[221550]: 2026-01-31 08:12:23.644 221554 DEBUG oslo_concurrency.lockutils [req-4e3cd20e-d848-4265-8836-eb91911bbc27 req-f6aa4060-e306-4753-981a-5b7807691ab7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-1483edbd-0387-4068-b62e-4789e777f5ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:12:23 np0005603609 nova_compute[221550]: 2026-01-31 08:12:23.924 221554 DEBUG nova.network.neutron [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.245 221554 DEBUG nova.network.neutron [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Updating instance_info_cache with network_info: [{"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:12:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:12:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:24.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.559 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Releasing lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.559 221554 DEBUG nova.compute.manager [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Instance network_info: |[{"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.560 221554 DEBUG oslo_concurrency.lockutils [req-308b5583-6def-4b23-af08-9a1df686df95 req-31597571-44ee-4027-8d16-1dd98f94e78b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.560 221554 DEBUG nova.network.neutron [req-308b5583-6def-4b23-af08-9a1df686df95 req-31597571-44ee-4027-8d16-1dd98f94e78b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Refreshing network info cache for port 726c6b35-bbdd-44f9-81a9-69aa19514bdd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.562 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Start _get_guest_xml network_info=[{"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.566 221554 WARNING nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.570 221554 DEBUG nova.virt.libvirt.host [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.571 221554 DEBUG nova.virt.libvirt.host [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.575 221554 DEBUG nova.virt.libvirt.host [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.576 221554 DEBUG nova.virt.libvirt.host [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.577 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.577 221554 DEBUG nova.virt.hardware [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.578 221554 DEBUG nova.virt.hardware [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.578 221554 DEBUG nova.virt.hardware [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.578 221554 DEBUG nova.virt.hardware [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.579 221554 DEBUG nova.virt.hardware [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.579 221554 DEBUG nova.virt.hardware [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.579 221554 DEBUG nova.virt.hardware [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.579 221554 DEBUG nova.virt.hardware [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.579 221554 DEBUG nova.virt.hardware [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.580 221554 DEBUG nova.virt.hardware [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.580 221554 DEBUG nova.virt.hardware [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.582 221554 DEBUG oslo_concurrency.processutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.723 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:24.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:12:24 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3098008576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:12:24 np0005603609 nova_compute[221550]: 2026-01-31 08:12:24.987 221554 DEBUG oslo_concurrency.processutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.007 221554 DEBUG nova.storage.rbd_utils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.010 221554 DEBUG oslo_concurrency.processutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:12:25 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4176926919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.404 221554 DEBUG oslo_concurrency.processutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.408 221554 DEBUG nova.virt.libvirt.vif [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:12:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-478071790',display_name='tempest-ServerStableDeviceRescueTest-server-478071790',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-478071790',id=125,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1633c84ea1bf46b080aaafd30bbcf25f',ramdisk_id='',reservation_id='r-dq3p1x4r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-569420416',owner_user_name='tempest-ServerStableDeviceRescueTest-569420416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:12:16Z,user_data=None,user_id='d7d9a44201d548aba1e1654e136ddd06',uuid=518d36e7-b77a-415d-bd48-f53e637ed0d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.409 221554 DEBUG nova.network.os_vif_util [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converting VIF {"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.411 221554 DEBUG nova.network.os_vif_util [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:13:77,bridge_name='br-int',has_traffic_filtering=True,id=726c6b35-bbdd-44f9-81a9-69aa19514bdd,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap726c6b35-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.413 221554 DEBUG nova.objects.instance [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'pci_devices' on Instance uuid 518d36e7-b77a-415d-bd48-f53e637ed0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e289 e289: 3 total, 3 up, 3 in
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.871 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  <uuid>518d36e7-b77a-415d-bd48-f53e637ed0d8</uuid>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  <name>instance-0000007d</name>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-478071790</nova:name>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:12:24</nova:creationTime>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        <nova:user uuid="d7d9a44201d548aba1e1654e136ddd06">tempest-ServerStableDeviceRescueTest-569420416-project-member</nova:user>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        <nova:project uuid="1633c84ea1bf46b080aaafd30bbcf25f">tempest-ServerStableDeviceRescueTest-569420416</nova:project>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        <nova:port uuid="726c6b35-bbdd-44f9-81a9-69aa19514bdd">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <entry name="serial">518d36e7-b77a-415d-bd48-f53e637ed0d8</entry>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <entry name="uuid">518d36e7-b77a-415d-bd48-f53e637ed0d8</entry>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/518d36e7-b77a-415d-bd48-f53e637ed0d8_disk">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/518d36e7-b77a-415d-bd48-f53e637ed0d8_disk.config">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:49:13:77"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <target dev="tap726c6b35-bb"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8/console.log" append="off"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:12:25 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:12:25 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:12:25 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:12:25 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.872 221554 DEBUG nova.compute.manager [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Preparing to wait for external event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.872 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.873 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.873 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.874 221554 DEBUG nova.virt.libvirt.vif [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:12:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-478071790',display_name='tempest-ServerStableDeviceRescueTest-server-478071790',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-478071790',id=125,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1633c84ea1bf46b080aaafd30bbcf25f',ramdisk_id='',reservation_id='r-dq3p1x4r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-569420416',owner_user_name='tempest-ServerStableDeviceRescueTest-569420416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:12:16Z,user_data=None,user_id='d7d9a44201d548aba1e1654e136ddd06',uuid=518d36e7-b77a-415d-bd48-f53e637ed0d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.874 221554 DEBUG nova.network.os_vif_util [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converting VIF {"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.875 221554 DEBUG nova.network.os_vif_util [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:13:77,bridge_name='br-int',has_traffic_filtering=True,id=726c6b35-bbdd-44f9-81a9-69aa19514bdd,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap726c6b35-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.876 221554 DEBUG os_vif [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:13:77,bridge_name='br-int',has_traffic_filtering=True,id=726c6b35-bbdd-44f9-81a9-69aa19514bdd,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap726c6b35-bb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.877 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.877 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.878 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:12:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.881 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.881 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap726c6b35-bb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.881 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap726c6b35-bb, col_values=(('external_ids', {'iface-id': '726c6b35-bbdd-44f9-81a9-69aa19514bdd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:13:77', 'vm-uuid': '518d36e7-b77a-415d-bd48-f53e637ed0d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.883 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:25 np0005603609 NetworkManager[49064]: <info>  [1769847145.8840] manager: (tap726c6b35-bb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.885 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.888 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:25 np0005603609 nova_compute[221550]: 2026-01-31 08:12:25.889 221554 INFO os_vif [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:13:77,bridge_name='br-int',has_traffic_filtering=True,id=726c6b35-bbdd-44f9-81a9-69aa19514bdd,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap726c6b35-bb')#033[00m
Jan 31 03:12:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:26.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:26 np0005603609 nova_compute[221550]: 2026-01-31 08:12:26.810 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:12:26 np0005603609 nova_compute[221550]: 2026-01-31 08:12:26.811 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:12:26 np0005603609 nova_compute[221550]: 2026-01-31 08:12:26.811 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No VIF found with MAC fa:16:3e:49:13:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:12:26 np0005603609 nova_compute[221550]: 2026-01-31 08:12:26.812 221554 INFO nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Using config drive#033[00m
Jan 31 03:12:26 np0005603609 nova_compute[221550]: 2026-01-31 08:12:26.836 221554 DEBUG nova.storage.rbd_utils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:12:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:26.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:12:27 np0005603609 nova_compute[221550]: 2026-01-31 08:12:27.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:12:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:28.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:28 np0005603609 nova_compute[221550]: 2026-01-31 08:12:28.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:12:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:28.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.468 221554 DEBUG nova.network.neutron [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Updating instance_info_cache with network_info: [{"id": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "address": "fa:16:3e:a4:cc:12", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bd06a16-ff", "ovs_interfaceid": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:12:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:12:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:12:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.726 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.820 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Releasing lock "refresh_cache-1483edbd-0387-4068-b62e-4789e777f5ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.820 221554 DEBUG nova.compute.manager [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Instance network_info: |[{"id": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "address": "fa:16:3e:a4:cc:12", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bd06a16-ff", "ovs_interfaceid": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.821 221554 DEBUG oslo_concurrency.lockutils [req-4e3cd20e-d848-4265-8836-eb91911bbc27 req-f6aa4060-e306-4753-981a-5b7807691ab7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-1483edbd-0387-4068-b62e-4789e777f5ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.821 221554 DEBUG nova.network.neutron [req-4e3cd20e-d848-4265-8836-eb91911bbc27 req-f6aa4060-e306-4753-981a-5b7807691ab7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Refreshing network info cache for port 6bd06a16-ffc6-4ae3-9810-f84a1120972a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.823 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Start _get_guest_xml network_info=[{"id": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "address": "fa:16:3e:a4:cc:12", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bd06a16-ff", "ovs_interfaceid": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.827 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.829 221554 WARNING nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.834 221554 DEBUG nova.virt.libvirt.host [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.835 221554 DEBUG nova.virt.libvirt.host [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.838 221554 DEBUG nova.virt.libvirt.host [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.838 221554 DEBUG nova.virt.libvirt.host [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.839 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.839 221554 DEBUG nova.virt.hardware [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.840 221554 DEBUG nova.virt.hardware [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.840 221554 DEBUG nova.virt.hardware [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.840 221554 DEBUG nova.virt.hardware [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.840 221554 DEBUG nova.virt.hardware [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.841 221554 DEBUG nova.virt.hardware [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.841 221554 DEBUG nova.virt.hardware [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.841 221554 DEBUG nova.virt.hardware [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.841 221554 DEBUG nova.virt.hardware [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.841 221554 DEBUG nova.virt.hardware [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.842 221554 DEBUG nova.virt.hardware [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.844 221554 DEBUG oslo_concurrency.processutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.865 221554 INFO nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Creating config drive at /var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8/disk.config#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.869 221554 DEBUG oslo_concurrency.processutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpumbcpl1c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:29 np0005603609 nova_compute[221550]: 2026-01-31 08:12:29.994 221554 DEBUG oslo_concurrency.processutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpumbcpl1c" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.021 221554 DEBUG nova.storage.rbd_utils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.025 221554 DEBUG oslo_concurrency.processutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8/disk.config 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.166 221554 DEBUG oslo_concurrency.processutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8/disk.config 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.166 221554 INFO nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Deleting local config drive /var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8/disk.config because it was imported into RBD.#033[00m
Jan 31 03:12:30 np0005603609 NetworkManager[49064]: <info>  [1769847150.2134] manager: (tap726c6b35-bb): new Tun device (/org/freedesktop/NetworkManager/Devices/258)
Jan 31 03:12:30 np0005603609 kernel: tap726c6b35-bb: entered promiscuous mode
Jan 31 03:12:30 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:30Z|00521|binding|INFO|Claiming lport 726c6b35-bbdd-44f9-81a9-69aa19514bdd for this chassis.
Jan 31 03:12:30 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:30Z|00522|binding|INFO|726c6b35-bbdd-44f9-81a9-69aa19514bdd: Claiming fa:16:3e:49:13:77 10.100.0.10
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.227 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:30 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:30Z|00523|binding|INFO|Setting lport 726c6b35-bbdd-44f9-81a9-69aa19514bdd ovn-installed in OVS
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.238 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.244 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:30 np0005603609 systemd-udevd[273058]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:12:30 np0005603609 systemd-machined[190912]: New machine qemu-64-instance-0000007d.
Jan 31 03:12:30 np0005603609 NetworkManager[49064]: <info>  [1769847150.2630] device (tap726c6b35-bb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:12:30 np0005603609 NetworkManager[49064]: <info>  [1769847150.2638] device (tap726c6b35-bb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:12:30 np0005603609 systemd[1]: Started Virtual Machine qemu-64-instance-0000007d.
Jan 31 03:12:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:12:30 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3571684988' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.292 221554 DEBUG oslo_concurrency.processutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.320 221554 DEBUG nova.storage.rbd_utils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 1483edbd-0387-4068-b62e-4789e777f5ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.323 221554 DEBUG oslo_concurrency.processutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:30 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:30Z|00524|binding|INFO|Setting lport 726c6b35-bbdd-44f9-81a9-69aa19514bdd up in Southbound
Jan 31 03:12:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:30.390 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:13:77 10.100.0.10'], port_security=['fa:16:3e:49:13:77 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '518d36e7-b77a-415d-bd48-f53e637ed0d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=726c6b35-bbdd-44f9-81a9-69aa19514bdd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:12:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:30.391 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 726c6b35-bbdd-44f9-81a9-69aa19514bdd in datapath 088d6992-6ba6-4719-a977-b3d306740157 bound to our chassis#033[00m
Jan 31 03:12:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:30.393 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157#033[00m
Jan 31 03:12:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:30.403 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2043c49c-7df4-408c-a44f-04a8f3b19cc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:30.422 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[0219e06e-ab6d-42e1-b6c1-88183e371ea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:30.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:30.426 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[9f21cd14-55de-4848-9949-6e11084b9b98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:30.445 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[ca46f967-be18-4d44-8f2b-64bbc39cb3d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:30.457 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1e8db17c-703a-492b-a7f5-83ade821da78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736325, 'reachable_time': 27412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273112, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:30.468 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2465f8be-727d-4abe-99ad-1812f076d578]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736331, 'tstamp': 736331}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273113, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736333, 'tstamp': 736333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273113, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:30.469 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.470 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:30.471 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:30.472 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:12:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:30.472 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:30.472 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.747 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847150.7470212, 518d36e7-b77a-415d-bd48-f53e637ed0d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.747 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] VM Started (Lifecycle Event)#033[00m
Jan 31 03:12:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:12:30 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3123372633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.764 221554 DEBUG oslo_concurrency.processutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.765 221554 DEBUG nova.virt.libvirt.vif [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:12:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-860967629',display_name='tempest-ServersTestJSON-server-860967629',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-860967629',id=126,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-vouorxq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-member'},tag
s=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:12:17Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=1483edbd-0387-4068-b62e-4789e777f5ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "address": "fa:16:3e:a4:cc:12", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bd06a16-ff", "ovs_interfaceid": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.765 221554 DEBUG nova.network.os_vif_util [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "address": "fa:16:3e:a4:cc:12", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bd06a16-ff", "ovs_interfaceid": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.766 221554 DEBUG nova.network.os_vif_util [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:cc:12,bridge_name='br-int',has_traffic_filtering=True,id=6bd06a16-ffc6-4ae3-9810-f84a1120972a,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bd06a16-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.767 221554 DEBUG nova.objects.instance [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1483edbd-0387-4068-b62e-4789e777f5ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:12:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:30.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:12:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.884 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.891 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.893 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847150.7477598, 518d36e7-b77a-415d-bd48-f53e637ed0d8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.894 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.950 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  <uuid>1483edbd-0387-4068-b62e-4789e777f5ab</uuid>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  <name>instance-0000007e</name>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServersTestJSON-server-860967629</nova:name>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:12:29</nova:creationTime>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        <nova:user uuid="5366d122b359489fb9d2bda8d19611a6">tempest-ServersTestJSON-327201738-project-member</nova:user>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        <nova:project uuid="4aa06cf35d8c468fb16884f19dc8ce71">tempest-ServersTestJSON-327201738</nova:project>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        <nova:port uuid="6bd06a16-ffc6-4ae3-9810-f84a1120972a">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <entry name="serial">1483edbd-0387-4068-b62e-4789e777f5ab</entry>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <entry name="uuid">1483edbd-0387-4068-b62e-4789e777f5ab</entry>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/1483edbd-0387-4068-b62e-4789e777f5ab_disk">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/1483edbd-0387-4068-b62e-4789e777f5ab_disk.config">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:a4:cc:12"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <target dev="tap6bd06a16-ff"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/1483edbd-0387-4068-b62e-4789e777f5ab/console.log" append="off"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:12:30 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:12:30 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:12:30 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:12:30 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.951 221554 DEBUG nova.compute.manager [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Preparing to wait for external event network-vif-plugged-6bd06a16-ffc6-4ae3-9810-f84a1120972a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.952 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.952 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.952 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.953 221554 DEBUG nova.virt.libvirt.vif [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:12:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-860967629',display_name='tempest-ServersTestJSON-server-860967629',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-860967629',id=126,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-vouorxq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-me
mber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:12:17Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=1483edbd-0387-4068-b62e-4789e777f5ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "address": "fa:16:3e:a4:cc:12", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bd06a16-ff", "ovs_interfaceid": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.953 221554 DEBUG nova.network.os_vif_util [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "address": "fa:16:3e:a4:cc:12", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bd06a16-ff", "ovs_interfaceid": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.954 221554 DEBUG nova.network.os_vif_util [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:cc:12,bridge_name='br-int',has_traffic_filtering=True,id=6bd06a16-ffc6-4ae3-9810-f84a1120972a,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bd06a16-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.954 221554 DEBUG os_vif [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:cc:12,bridge_name='br-int',has_traffic_filtering=True,id=6bd06a16-ffc6-4ae3-9810-f84a1120972a,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bd06a16-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.955 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.955 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.956 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.958 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.958 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6bd06a16-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.959 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6bd06a16-ff, col_values=(('external_ids', {'iface-id': '6bd06a16-ffc6-4ae3-9810-f84a1120972a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:cc:12', 'vm-uuid': '1483edbd-0387-4068-b62e-4789e777f5ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.960 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:30 np0005603609 NetworkManager[49064]: <info>  [1769847150.9613] manager: (tap6bd06a16-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.963 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.965 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:30 np0005603609 nova_compute[221550]: 2026-01-31 08:12:30.965 221554 INFO os_vif [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:cc:12,bridge_name='br-int',has_traffic_filtering=True,id=6bd06a16-ffc6-4ae3-9810-f84a1120972a,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bd06a16-ff')#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.039 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.042 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.117 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.186 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.187 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.187 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No VIF found with MAC fa:16:3e:a4:cc:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.188 221554 INFO nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Using config drive#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.218 221554 DEBUG nova.storage.rbd_utils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 1483edbd-0387-4068-b62e-4789e777f5ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.405 221554 DEBUG nova.network.neutron [req-308b5583-6def-4b23-af08-9a1df686df95 req-31597571-44ee-4027-8d16-1dd98f94e78b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Updated VIF entry in instance network info cache for port 726c6b35-bbdd-44f9-81a9-69aa19514bdd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.406 221554 DEBUG nova.network.neutron [req-308b5583-6def-4b23-af08-9a1df686df95 req-31597571-44ee-4027-8d16-1dd98f94e78b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Updating instance_info_cache with network_info: [{"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.507 221554 DEBUG oslo_concurrency.lockutils [req-308b5583-6def-4b23-af08-9a1df686df95 req-31597571-44ee-4027-8d16-1dd98f94e78b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.709 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.709 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.710 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.710 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:12:31 np0005603609 nova_compute[221550]: 2026-01-31 08:12:31.710 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:12:32 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2904659359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.106 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:32.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.440 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.440 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.444 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.444 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.447 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.447 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.455 221554 INFO nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Creating config drive at /var/lib/nova/instances/1483edbd-0387-4068-b62e-4789e777f5ab/disk.config#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.460 221554 DEBUG oslo_concurrency.processutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1483edbd-0387-4068-b62e-4789e777f5ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdvssh1u5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.583 221554 DEBUG oslo_concurrency.processutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1483edbd-0387-4068-b62e-4789e777f5ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdvssh1u5" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.609 221554 DEBUG nova.storage.rbd_utils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 1483edbd-0387-4068-b62e-4789e777f5ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.625 221554 DEBUG oslo_concurrency.processutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1483edbd-0387-4068-b62e-4789e777f5ab/disk.config 1483edbd-0387-4068-b62e-4789e777f5ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.675 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.676 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4173MB free_disk=20.764469146728516GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.676 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.676 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.789 221554 DEBUG oslo_concurrency.processutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1483edbd-0387-4068-b62e-4789e777f5ab/disk.config 1483edbd-0387-4068-b62e-4789e777f5ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.790 221554 INFO nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Deleting local config drive /var/lib/nova/instances/1483edbd-0387-4068-b62e-4789e777f5ab/disk.config because it was imported into RBD.#033[00m
Jan 31 03:12:32 np0005603609 kernel: tap6bd06a16-ff: entered promiscuous mode
Jan 31 03:12:32 np0005603609 NetworkManager[49064]: <info>  [1769847152.8372] manager: (tap6bd06a16-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Jan 31 03:12:32 np0005603609 systemd-udevd[273060]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:12:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:32Z|00525|binding|INFO|Claiming lport 6bd06a16-ffc6-4ae3-9810-f84a1120972a for this chassis.
Jan 31 03:12:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:32Z|00526|binding|INFO|6bd06a16-ffc6-4ae3-9810-f84a1120972a: Claiming fa:16:3e:a4:cc:12 10.100.0.3
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.840 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:12:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:32.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:12:32 np0005603609 NetworkManager[49064]: <info>  [1769847152.8494] device (tap6bd06a16-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:12:32 np0005603609 NetworkManager[49064]: <info>  [1769847152.8498] device (tap6bd06a16-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:12:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:32Z|00527|binding|INFO|Setting lport 6bd06a16-ffc6-4ae3-9810-f84a1120972a ovn-installed in OVS
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.849 221554 DEBUG nova.compute.manager [req-8d4a176f-24ca-4b16-b98b-9bbb3cf3f609 req-e4c10285-dae5-4d83-b171-308e08fe8356 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.849 221554 DEBUG oslo_concurrency.lockutils [req-8d4a176f-24ca-4b16-b98b-9bbb3cf3f609 req-e4c10285-dae5-4d83-b171-308e08fe8356 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.850 221554 DEBUG oslo_concurrency.lockutils [req-8d4a176f-24ca-4b16-b98b-9bbb3cf3f609 req-e4c10285-dae5-4d83-b171-308e08fe8356 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.850 221554 DEBUG oslo_concurrency.lockutils [req-8d4a176f-24ca-4b16-b98b-9bbb3cf3f609 req-e4c10285-dae5-4d83-b171-308e08fe8356 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.850 221554 DEBUG nova.compute.manager [req-8d4a176f-24ca-4b16-b98b-9bbb3cf3f609 req-e4c10285-dae5-4d83-b171-308e08fe8356 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Processing event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.851 221554 DEBUG nova.compute.manager [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.851 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.854 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847152.8541481, 518d36e7-b77a-415d-bd48-f53e637ed0d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.854 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.856 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.859 221554 INFO nova.virt.libvirt.driver [-] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Instance spawned successfully.#033[00m
Jan 31 03:12:32 np0005603609 nova_compute[221550]: 2026-01-31 08:12:32.859 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:12:32 np0005603609 systemd-machined[190912]: New machine qemu-65-instance-0000007e.
Jan 31 03:12:32 np0005603609 systemd[1]: Started Virtual Machine qemu-65-instance-0000007e.
Jan 31 03:12:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:33Z|00528|binding|INFO|Setting lport 6bd06a16-ffc6-4ae3-9810-f84a1120972a up in Southbound
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.034 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:cc:12 10.100.0.3'], port_security=['fa:16:3e:a4:cc:12 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1483edbd-0387-4068-b62e-4789e777f5ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b88251fc-7610-460a-ba55-2ed186c6f696', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9f076789-2616-4234-8eab-1fc3da7d63b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7bb3690e-e43b-4d54-9d64-4797e471bf50, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=6bd06a16-ffc6-4ae3-9810-f84a1120972a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.036 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 6bd06a16-ffc6-4ae3-9810-f84a1120972a in datapath b88251fc-7610-460a-ba55-2ed186c6f696 bound to our chassis#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.037 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b88251fc-7610-460a-ba55-2ed186c6f696#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.048 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bbbe2b6d-7c99-4630-b12b-aedccbe205b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.049 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb88251fc-71 in ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.051 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb88251fc-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.051 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2b27175c-fcc8-4788-9765-4d224b9b3e45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.052 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d3797014-e6ef-4297-aee6-b036fde31fe4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.065 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[1aac1d3c-5e49-44b0-9e5b-718046f01ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.075 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d4640d-cb0a-49fa-81eb-07f8712703e9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.100 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[4f882f64-900b-4dd2-9d46-74692a7c466f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:33 np0005603609 NetworkManager[49064]: <info>  [1769847153.1059] manager: (tapb88251fc-70): new Veth device (/org/freedesktop/NetworkManager/Devices/261)
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.105 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d881b8cd-f79e-481a-b6c1-d7b636d20715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.135 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[5726e3d5-92e6-4c8f-9a41-593148132f18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.138 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[12b878b5-202c-4518-ac0b-642bec710d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:33 np0005603609 NetworkManager[49064]: <info>  [1769847153.1583] device (tapb88251fc-70): carrier: link connected
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.161 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[76092321-d31d-4911-86c4-b9b0ddc6d654]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.174 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab9b70e-b343-407b-b807-857f24b90f1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb88251fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2a:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739285, 'reachable_time': 41180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273300, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.185 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed10ccb-86dd-4dc9-9e52-4218831f519a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:2a68'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739285, 'tstamp': 739285}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273303, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.196 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e4671b27-48c5-4a82-9ee7-a418be0e8a4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb88251fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2a:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739285, 'reachable_time': 41180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273311, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.217 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[780ecf2d-75f2-47a6-aec3-a1704635d1e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.254 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d411413f-ba25-4118-b260-005333cd2fce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.256 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb88251fc-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.256 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.256 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb88251fc-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:33 np0005603609 kernel: tapb88251fc-70: entered promiscuous mode
Jan 31 03:12:33 np0005603609 NetworkManager[49064]: <info>  [1769847153.2600] manager: (tapb88251fc-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/262)
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.259 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.264 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb88251fc-70, col_values=(('external_ids', {'iface-id': '950341c4-aa2a-4261-8207-ff7e92fd4830'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.265 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:33Z|00529|binding|INFO|Releasing lport 950341c4-aa2a-4261-8207-ff7e92fd4830 from this chassis (sb_readonly=0)
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.268 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.269 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8b9a9999-bf72-4259-a6e4-40cef41c78d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.270 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:12:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:33.271 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'env', 'PROCESS_TAG=haproxy-b88251fc-7610-460a-ba55-2ed186c6f696', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b88251fc-7610-460a-ba55-2ed186c6f696.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.272 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.324 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.328 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.329 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.329 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.329 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.330 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.330 221554 DEBUG nova.virt.libvirt.driver [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.338 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.367 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 981eb990-ec0c-4673-98e7-102afbe0bb51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.368 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 518d36e7-b77a-415d-bd48-f53e637ed0d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.368 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 1483edbd-0387-4068-b62e-4789e777f5ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.368 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.369 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.388 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.418 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.418 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.451 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.467 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.468 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847153.3143082, 1483edbd-0387-4068-b62e-4789e777f5ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.468 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] VM Started (Lifecycle Event)
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.484 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.513 221554 DEBUG nova.network.neutron [req-4e3cd20e-d848-4265-8836-eb91911bbc27 req-f6aa4060-e306-4753-981a-5b7807691ab7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Updated VIF entry in instance network info cache for port 6bd06a16-ffc6-4ae3-9810-f84a1120972a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.514 221554 DEBUG nova.network.neutron [req-4e3cd20e-d848-4265-8836-eb91911bbc27 req-f6aa4060-e306-4753-981a-5b7807691ab7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Updating instance_info_cache with network_info: [{"id": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "address": "fa:16:3e:a4:cc:12", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bd06a16-ff", "ovs_interfaceid": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:12:33 np0005603609 podman[273361]: 2026-01-31 08:12:33.587507507 +0000 UTC m=+0.043396444 container create e167556c93ede248987ddf2900db1d24ff240e2d8c683b884fae283645430d6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 31 03:12:33 np0005603609 systemd[1]: Started libpod-conmon-e167556c93ede248987ddf2900db1d24ff240e2d8c683b884fae283645430d6c.scope.
Jan 31 03:12:33 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:12:33 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cdc6eee46d2ff5a4d50d374fd89edf68c44424dd3e032052942419e2c095e8f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:33 np0005603609 podman[273361]: 2026-01-31 08:12:33.562226129 +0000 UTC m=+0.018115086 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:12:33 np0005603609 podman[273361]: 2026-01-31 08:12:33.665156022 +0000 UTC m=+0.121044989 container init e167556c93ede248987ddf2900db1d24ff240e2d8c683b884fae283645430d6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:12:33 np0005603609 podman[273361]: 2026-01-31 08:12:33.670722916 +0000 UTC m=+0.126611853 container start e167556c93ede248987ddf2900db1d24ff240e2d8c683b884fae283645430d6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.675 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:12:33 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[273376]: [NOTICE]   (273380) : New worker (273383) forked
Jan 31 03:12:33 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[273376]: [NOTICE]   (273380) : Loading success.
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.926 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.929 221554 DEBUG oslo_concurrency.lockutils [req-4e3cd20e-d848-4265-8836-eb91911bbc27 req-f6aa4060-e306-4753-981a-5b7807691ab7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-1483edbd-0387-4068-b62e-4789e777f5ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.933 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847153.3144727, 1483edbd-0387-4068-b62e-4789e777f5ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:12:33 np0005603609 nova_compute[221550]: 2026-01-31 08:12:33.933 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] VM Paused (Lifecycle Event)
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.110 221554 INFO nova.compute.manager [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Took 17.30 seconds to spawn the instance on the hypervisor.
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.110 221554 DEBUG nova.compute.manager [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:12:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:12:34 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1081685098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.142 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.147 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.367 221554 DEBUG nova.compute.manager [req-eafab162-b3af-4351-ab9e-7658edd6e8c6 req-54e159f9-2a1f-4d41-a46d-bebf6b190402 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Received event network-vif-plugged-6bd06a16-ffc6-4ae3-9810-f84a1120972a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.368 221554 DEBUG oslo_concurrency.lockutils [req-eafab162-b3af-4351-ab9e-7658edd6e8c6 req-54e159f9-2a1f-4d41-a46d-bebf6b190402 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.368 221554 DEBUG oslo_concurrency.lockutils [req-eafab162-b3af-4351-ab9e-7658edd6e8c6 req-54e159f9-2a1f-4d41-a46d-bebf6b190402 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.368 221554 DEBUG oslo_concurrency.lockutils [req-eafab162-b3af-4351-ab9e-7658edd6e8c6 req-54e159f9-2a1f-4d41-a46d-bebf6b190402 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.368 221554 DEBUG nova.compute.manager [req-eafab162-b3af-4351-ab9e-7658edd6e8c6 req-54e159f9-2a1f-4d41-a46d-bebf6b190402 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Processing event network-vif-plugged-6bd06a16-ffc6-4ae3-9810-f84a1120972a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.369 221554 DEBUG nova.compute.manager [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.390 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.405 221554 INFO nova.virt.libvirt.driver [-] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Instance spawned successfully.
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.405 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:12:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:34.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.450 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.454 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.458 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847154.3731244, 1483edbd-0387-4068-b62e-4789e777f5ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.458 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] VM Resumed (Lifecycle Event)
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.516 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.517 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.517 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.518 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.518 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.519 221554 DEBUG nova.virt.libvirt.driver [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.522 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.525 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.537 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.537 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.587 221554 INFO nova.compute.manager [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Took 19.19 seconds to build instance.
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.675 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.777 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:12:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:34.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.902 221554 DEBUG oslo_concurrency.lockutils [None req-cc33181a-49e0-4b1b-ac82-df089d11ea6a d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.920 221554 INFO nova.compute.manager [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Took 17.44 seconds to spawn the instance on the hypervisor.
Jan 31 03:12:34 np0005603609 nova_compute[221550]: 2026-01-31 08:12:34.922 221554 DEBUG nova.compute.manager [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:12:35 np0005603609 nova_compute[221550]: 2026-01-31 08:12:35.042 221554 DEBUG nova.compute.manager [req-d314a14c-d439-4697-ba28-63f7b043dbeb req-6dd6a85b-da4c-41e6-84cc-4d4575944ce2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:12:35 np0005603609 nova_compute[221550]: 2026-01-31 08:12:35.043 221554 DEBUG oslo_concurrency.lockutils [req-d314a14c-d439-4697-ba28-63f7b043dbeb req-6dd6a85b-da4c-41e6-84cc-4d4575944ce2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:12:35 np0005603609 nova_compute[221550]: 2026-01-31 08:12:35.044 221554 DEBUG oslo_concurrency.lockutils [req-d314a14c-d439-4697-ba28-63f7b043dbeb req-6dd6a85b-da4c-41e6-84cc-4d4575944ce2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:12:35 np0005603609 nova_compute[221550]: 2026-01-31 08:12:35.050 221554 DEBUG oslo_concurrency.lockutils [req-d314a14c-d439-4697-ba28-63f7b043dbeb req-6dd6a85b-da4c-41e6-84cc-4d4575944ce2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:12:35 np0005603609 nova_compute[221550]: 2026-01-31 08:12:35.051 221554 DEBUG nova.compute.manager [req-d314a14c-d439-4697-ba28-63f7b043dbeb req-6dd6a85b-da4c-41e6-84cc-4d4575944ce2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] No waiting events found dispatching network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:12:35 np0005603609 nova_compute[221550]: 2026-01-31 08:12:35.052 221554 WARNING nova.compute.manager [req-d314a14c-d439-4697-ba28-63f7b043dbeb req-6dd6a85b-da4c-41e6-84cc-4d4575944ce2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received unexpected event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd for instance with vm_state active and task_state None.
Jan 31 03:12:35 np0005603609 nova_compute[221550]: 2026-01-31 08:12:35.135 221554 INFO nova.compute.manager [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Took 18.81 seconds to build instance.
Jan 31 03:12:35 np0005603609 nova_compute[221550]: 2026-01-31 08:12:35.284 221554 DEBUG oslo_concurrency.lockutils [None req-16988b80-c3e6-4310-84f3-db51ec452688 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "1483edbd-0387-4068-b62e-4789e777f5ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:12:35 np0005603609 nova_compute[221550]: 2026-01-31 08:12:35.534 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:12:35 np0005603609 nova_compute[221550]: 2026-01-31 08:12:35.536 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:12:35 np0005603609 nova_compute[221550]: 2026-01-31 08:12:35.536 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 03:12:35 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:12:35 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:12:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:36 np0005603609 nova_compute[221550]: 2026-01-31 08:12:36.000 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:12:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:36.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:36 np0005603609 nova_compute[221550]: 2026-01-31 08:12:36.591 221554 DEBUG nova.compute.manager [req-54465c80-d8b2-47bb-812c-dd5d5cd53b7b req-c4f34e87-6b44-4d3c-bac6-ef25dfdaa37d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Received event network-vif-plugged-6bd06a16-ffc6-4ae3-9810-f84a1120972a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:12:36 np0005603609 nova_compute[221550]: 2026-01-31 08:12:36.592 221554 DEBUG oslo_concurrency.lockutils [req-54465c80-d8b2-47bb-812c-dd5d5cd53b7b req-c4f34e87-6b44-4d3c-bac6-ef25dfdaa37d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:12:36 np0005603609 nova_compute[221550]: 2026-01-31 08:12:36.592 221554 DEBUG oslo_concurrency.lockutils [req-54465c80-d8b2-47bb-812c-dd5d5cd53b7b req-c4f34e87-6b44-4d3c-bac6-ef25dfdaa37d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:12:36 np0005603609 nova_compute[221550]: 2026-01-31 08:12:36.592 221554 DEBUG oslo_concurrency.lockutils [req-54465c80-d8b2-47bb-812c-dd5d5cd53b7b req-c4f34e87-6b44-4d3c-bac6-ef25dfdaa37d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:12:36 np0005603609 nova_compute[221550]: 2026-01-31 08:12:36.593 221554 DEBUG nova.compute.manager [req-54465c80-d8b2-47bb-812c-dd5d5cd53b7b req-c4f34e87-6b44-4d3c-bac6-ef25dfdaa37d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] No waiting events found dispatching network-vif-plugged-6bd06a16-ffc6-4ae3-9810-f84a1120972a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:12:36 np0005603609 nova_compute[221550]: 2026-01-31 08:12:36.593 221554 WARNING nova.compute.manager [req-54465c80-d8b2-47bb-812c-dd5d5cd53b7b req-c4f34e87-6b44-4d3c-bac6-ef25dfdaa37d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Received unexpected event network-vif-plugged-6bd06a16-ffc6-4ae3-9810-f84a1120972a for instance with vm_state active and task_state None.
Jan 31 03:12:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:36.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:37 np0005603609 nova_compute[221550]: 2026-01-31 08:12:37.882 221554 DEBUG nova.compute.manager [None req-02802152-4adb-4eab-8bd0-27568b8fcca6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:12:37 np0005603609 nova_compute[221550]: 2026-01-31 08:12:37.936 221554 INFO nova.compute.manager [None req-02802152-4adb-4eab-8bd0-27568b8fcca6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] instance snapshotting
Jan 31 03:12:38 np0005603609 nova_compute[221550]: 2026-01-31 08:12:38.248 221554 INFO nova.virt.libvirt.driver [None req-02802152-4adb-4eab-8bd0-27568b8fcca6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Beginning live snapshot process
Jan 31 03:12:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:38.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:38 np0005603609 nova_compute[221550]: 2026-01-31 08:12:38.634 221554 DEBUG nova.virt.libvirt.imagebackend [None req-02802152-4adb-4eab-8bd0-27568b8fcca6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:12:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:38.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:39 np0005603609 nova_compute[221550]: 2026-01-31 08:12:39.054 221554 DEBUG nova.storage.rbd_utils [None req-02802152-4adb-4eab-8bd0-27568b8fcca6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] creating snapshot(35a97d5435a34d36beb359f14f914831) on rbd image(518d36e7-b77a-415d-bd48-f53e637ed0d8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:12:39 np0005603609 nova_compute[221550]: 2026-01-31 08:12:39.780 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e290 e290: 3 total, 3 up, 3 in
Jan 31 03:12:40 np0005603609 nova_compute[221550]: 2026-01-31 08:12:40.086 221554 DEBUG nova.storage.rbd_utils [None req-02802152-4adb-4eab-8bd0-27568b8fcca6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] cloning vms/518d36e7-b77a-415d-bd48-f53e637ed0d8_disk@35a97d5435a34d36beb359f14f914831 to images/ae58104d-ec03-4878-8203-221e83ac3a09 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:12:40 np0005603609 nova_compute[221550]: 2026-01-31 08:12:40.211 221554 DEBUG nova.storage.rbd_utils [None req-02802152-4adb-4eab-8bd0-27568b8fcca6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] flattening images/ae58104d-ec03-4878-8203-221e83ac3a09 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:12:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:40.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:40 np0005603609 nova_compute[221550]: 2026-01-31 08:12:40.526 221554 DEBUG nova.storage.rbd_utils [None req-02802152-4adb-4eab-8bd0-27568b8fcca6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] removing snapshot(35a97d5435a34d36beb359f14f914831) on rbd image(518d36e7-b77a-415d-bd48-f53e637ed0d8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:12:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:40.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:41 np0005603609 nova_compute[221550]: 2026-01-31 08:12:41.002 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e291 e291: 3 total, 3 up, 3 in
Jan 31 03:12:41 np0005603609 nova_compute[221550]: 2026-01-31 08:12:41.117 221554 DEBUG nova.storage.rbd_utils [None req-02802152-4adb-4eab-8bd0-27568b8fcca6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] creating snapshot(snap) on rbd image(ae58104d-ec03-4878-8203-221e83ac3a09) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:12:41 np0005603609 podman[273587]: 2026-01-31 08:12:41.210407095 +0000 UTC m=+0.090365644 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 03:12:41 np0005603609 podman[273586]: 2026-01-31 08:12:41.23143554 +0000 UTC m=+0.112075294 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:12:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e292 e292: 3 total, 3 up, 3 in
Jan 31 03:12:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:42.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:42.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:44.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:44 np0005603609 nova_compute[221550]: 2026-01-31 08:12:44.782 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:44 np0005603609 nova_compute[221550]: 2026-01-31 08:12:44.845 221554 INFO nova.virt.libvirt.driver [None req-02802152-4adb-4eab-8bd0-27568b8fcca6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Snapshot image upload complete#033[00m
Jan 31 03:12:44 np0005603609 nova_compute[221550]: 2026-01-31 08:12:44.846 221554 INFO nova.compute.manager [None req-02802152-4adb-4eab-8bd0-27568b8fcca6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Took 6.91 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:12:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:44.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:12:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4141045526' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:12:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:12:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4141045526' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:12:45 np0005603609 nova_compute[221550]: 2026-01-31 08:12:45.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:12:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:46 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:46Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:13:77 10.100.0.10
Jan 31 03:12:46 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:46Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:13:77 10.100.0.10
Jan 31 03:12:46 np0005603609 nova_compute[221550]: 2026-01-31 08:12:46.040 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:12:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:46.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:12:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:46.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:47 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:47Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a4:cc:12 10.100.0.3
Jan 31 03:12:47 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:47Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:cc:12 10.100.0.3
Jan 31 03:12:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:12:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:48.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:12:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:12:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:48.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:12:49 np0005603609 nova_compute[221550]: 2026-01-31 08:12:49.784 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:50.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e293 e293: 3 total, 3 up, 3 in
Jan 31 03:12:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:50.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:50.965 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:12:50 np0005603609 nova_compute[221550]: 2026-01-31 08:12:50.965 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:50.966 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:12:51 np0005603609 nova_compute[221550]: 2026-01-31 08:12:51.042 221554 INFO nova.compute.manager [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Rescuing#033[00m
Jan 31 03:12:51 np0005603609 nova_compute[221550]: 2026-01-31 08:12:51.042 221554 DEBUG oslo_concurrency.lockutils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:12:51 np0005603609 nova_compute[221550]: 2026-01-31 08:12:51.043 221554 DEBUG oslo_concurrency.lockutils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquired lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:12:51 np0005603609 nova_compute[221550]: 2026-01-31 08:12:51.043 221554 DEBUG nova.network.neutron [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:12:51 np0005603609 nova_compute[221550]: 2026-01-31 08:12:51.044 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:52.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:12:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:52.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:12:53 np0005603609 nova_compute[221550]: 2026-01-31 08:12:53.407 221554 DEBUG nova.network.neutron [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Updating instance_info_cache with network_info: [{"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:12:53 np0005603609 nova_compute[221550]: 2026-01-31 08:12:53.633 221554 DEBUG oslo_concurrency.lockutils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Releasing lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:12:54 np0005603609 nova_compute[221550]: 2026-01-31 08:12:54.201 221554 DEBUG nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:12:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:12:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:54.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:12:54 np0005603609 nova_compute[221550]: 2026-01-31 08:12:54.786 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:54.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:56 np0005603609 nova_compute[221550]: 2026-01-31 08:12:56.045 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:56.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:56 np0005603609 kernel: tap726c6b35-bb (unregistering): left promiscuous mode
Jan 31 03:12:56 np0005603609 NetworkManager[49064]: <info>  [1769847176.4775] device (tap726c6b35-bb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:12:56 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:56Z|00530|binding|INFO|Releasing lport 726c6b35-bbdd-44f9-81a9-69aa19514bdd from this chassis (sb_readonly=0)
Jan 31 03:12:56 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:56Z|00531|binding|INFO|Setting lport 726c6b35-bbdd-44f9-81a9-69aa19514bdd down in Southbound
Jan 31 03:12:56 np0005603609 nova_compute[221550]: 2026-01-31 08:12:56.483 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:56 np0005603609 ovn_controller[130359]: 2026-01-31T08:12:56Z|00532|binding|INFO|Removing iface tap726c6b35-bb ovn-installed in OVS
Jan 31 03:12:56 np0005603609 nova_compute[221550]: 2026-01-31 08:12:56.485 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:56 np0005603609 nova_compute[221550]: 2026-01-31 08:12:56.490 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:56 np0005603609 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Jan 31 03:12:56 np0005603609 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000007d.scope: Consumed 13.209s CPU time.
Jan 31 03:12:56 np0005603609 systemd-machined[190912]: Machine qemu-64-instance-0000007d terminated.
Jan 31 03:12:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:56.583 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:13:77 10.100.0.10'], port_security=['fa:16:3e:49:13:77 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '518d36e7-b77a-415d-bd48-f53e637ed0d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=726c6b35-bbdd-44f9-81a9-69aa19514bdd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:12:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:56.585 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 726c6b35-bbdd-44f9-81a9-69aa19514bdd in datapath 088d6992-6ba6-4719-a977-b3d306740157 unbound from our chassis#033[00m
Jan 31 03:12:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:56.587 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157#033[00m
Jan 31 03:12:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:56.597 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a4d308-abee-4254-94bd-f6675bde22be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:56.620 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[63620886-e62b-4b82-8269-02f70997062c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:56.623 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c9425061-3e21-4446-9f2f-dac0d2e90d28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:56.641 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[3364152a-d9c6-446f-b4c0-a7871b80d886]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:56.651 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd5d18c-97f2-4161-86cf-5d9db6df851f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736325, 'reachable_time': 27412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273658, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:56.663 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7b482585-9514-4d5f-9f8f-10080c2ddb49]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736331, 'tstamp': 736331}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273659, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736333, 'tstamp': 736333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273659, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:56.664 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:56 np0005603609 nova_compute[221550]: 2026-01-31 08:12:56.666 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:56 np0005603609 nova_compute[221550]: 2026-01-31 08:12:56.669 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:56.669 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:56.669 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:12:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:56.670 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:56.670 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:12:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:56.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:12:56.968 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:57 np0005603609 nova_compute[221550]: 2026-01-31 08:12:57.216 221554 INFO nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:12:57 np0005603609 nova_compute[221550]: 2026-01-31 08:12:57.224 221554 INFO nova.virt.libvirt.driver [-] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Instance destroyed successfully.#033[00m
Jan 31 03:12:57 np0005603609 nova_compute[221550]: 2026-01-31 08:12:57.224 221554 DEBUG nova.objects.instance [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'numa_topology' on Instance uuid 518d36e7-b77a-415d-bd48-f53e637ed0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:57 np0005603609 nova_compute[221550]: 2026-01-31 08:12:57.312 221554 INFO nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Attempting a stable device rescue#033[00m
Jan 31 03:12:57 np0005603609 nova_compute[221550]: 2026-01-31 08:12:57.654 221554 DEBUG nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'usb', 'dev': 'sdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Jan 31 03:12:57 np0005603609 nova_compute[221550]: 2026-01-31 08:12:57.659 221554 DEBUG nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:12:57 np0005603609 nova_compute[221550]: 2026-01-31 08:12:57.659 221554 INFO nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Creating image(s)#033[00m
Jan 31 03:12:57 np0005603609 nova_compute[221550]: 2026-01-31 08:12:57.686 221554 DEBUG nova.storage.rbd_utils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:57 np0005603609 nova_compute[221550]: 2026-01-31 08:12:57.691 221554 DEBUG nova.objects.instance [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 518d36e7-b77a-415d-bd48-f53e637ed0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:57 np0005603609 nova_compute[221550]: 2026-01-31 08:12:57.764 221554 DEBUG nova.storage.rbd_utils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:57 np0005603609 nova_compute[221550]: 2026-01-31 08:12:57.802 221554 DEBUG nova.storage.rbd_utils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:57 np0005603609 nova_compute[221550]: 2026-01-31 08:12:57.805 221554 DEBUG oslo_concurrency.lockutils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "0576f34ec1366f491182e90cd28f1860d38a85a0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:57 np0005603609 nova_compute[221550]: 2026-01-31 08:12:57.806 221554 DEBUG oslo_concurrency.lockutils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "0576f34ec1366f491182e90cd28f1860d38a85a0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:58 np0005603609 nova_compute[221550]: 2026-01-31 08:12:58.022 221554 DEBUG nova.virt.libvirt.imagebackend [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/ae58104d-ec03-4878-8203-221e83ac3a09/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/ae58104d-ec03-4878-8203-221e83ac3a09/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 31 03:12:58 np0005603609 nova_compute[221550]: 2026-01-31 08:12:58.089 221554 DEBUG nova.virt.libvirt.imagebackend [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Selected location: {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/ae58104d-ec03-4878-8203-221e83ac3a09/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 31 03:12:58 np0005603609 nova_compute[221550]: 2026-01-31 08:12:58.089 221554 DEBUG nova.storage.rbd_utils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] cloning images/ae58104d-ec03-4878-8203-221e83ac3a09@snap to None/518d36e7-b77a-415d-bd48-f53e637ed0d8_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:12:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:58.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:58 np0005603609 nova_compute[221550]: 2026-01-31 08:12:58.761 221554 DEBUG oslo_concurrency.lockutils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "0576f34ec1366f491182e90cd28f1860d38a85a0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:58 np0005603609 nova_compute[221550]: 2026-01-31 08:12:58.848 221554 DEBUG nova.objects.instance [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'migration_context' on Instance uuid 518d36e7-b77a-415d-bd48-f53e637ed0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:12:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:12:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:58.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.072 221554 DEBUG nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.074 221554 DEBUG nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Start _get_guest_xml network_info=[{"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "vif_mac": "fa:16:3e:49:13:77"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'usb', 'dev': 'sdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'ae58104d-ec03-4878-8203-221e83ac3a09', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.075 221554 DEBUG nova.objects.instance [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'resources' on Instance uuid 518d36e7-b77a-415d-bd48-f53e637ed0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.160 221554 WARNING nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.166 221554 DEBUG nova.virt.libvirt.host [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.167 221554 DEBUG nova.virt.libvirt.host [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.170 221554 DEBUG nova.virt.libvirt.host [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.170 221554 DEBUG nova.virt.libvirt.host [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.171 221554 DEBUG nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.171 221554 DEBUG nova.virt.hardware [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.172 221554 DEBUG nova.virt.hardware [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.172 221554 DEBUG nova.virt.hardware [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.172 221554 DEBUG nova.virt.hardware [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.172 221554 DEBUG nova.virt.hardware [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.173 221554 DEBUG nova.virt.hardware [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.173 221554 DEBUG nova.virt.hardware [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.173 221554 DEBUG nova.virt.hardware [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.173 221554 DEBUG nova.virt.hardware [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.173 221554 DEBUG nova.virt.hardware [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.174 221554 DEBUG nova.virt.hardware [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.174 221554 DEBUG nova.objects.instance [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 518d36e7-b77a-415d-bd48-f53e637ed0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.314 221554 DEBUG oslo_concurrency.processutils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:12:59 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4270166023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.734 221554 DEBUG oslo_concurrency.processutils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.769 221554 DEBUG oslo_concurrency.processutils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:59 np0005603609 nova_compute[221550]: 2026-01-31 08:12:59.789 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1811768806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.210 221554 DEBUG oslo_concurrency.processutils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.211 221554 DEBUG oslo_concurrency.processutils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.290 221554 DEBUG nova.compute.manager [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received event network-vif-unplugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.291 221554 DEBUG oslo_concurrency.lockutils [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.291 221554 DEBUG oslo_concurrency.lockutils [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.291 221554 DEBUG oslo_concurrency.lockutils [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.291 221554 DEBUG nova.compute.manager [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] No waiting events found dispatching network-vif-unplugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.292 221554 WARNING nova.compute.manager [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received unexpected event network-vif-unplugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.292 221554 DEBUG nova.compute.manager [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.292 221554 DEBUG oslo_concurrency.lockutils [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.292 221554 DEBUG oslo_concurrency.lockutils [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.292 221554 DEBUG oslo_concurrency.lockutils [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.293 221554 DEBUG nova.compute.manager [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] No waiting events found dispatching network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.293 221554 WARNING nova.compute.manager [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received unexpected event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:13:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:00.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2254943178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.642 221554 DEBUG oslo_concurrency.processutils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.643 221554 DEBUG nova.virt.libvirt.vif [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:12:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-478071790',display_name='tempest-ServerStableDeviceRescueTest-server-478071790',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-478071790',id=125,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:12:34Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1633c84ea1bf46b080aaafd30bbcf25f',ramdisk_id='',reservation_id='r-dq3p1x4r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_m
odel='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-569420416',owner_user_name='tempest-ServerStableDeviceRescueTest-569420416-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:12:45Z,user_data=None,user_id='d7d9a44201d548aba1e1654e136ddd06',uuid=518d36e7-b77a-415d-bd48-f53e637ed0d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "vif_mac": "fa:16:3e:49:13:77"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.644 221554 DEBUG nova.network.os_vif_util [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converting VIF {"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "vif_mac": "fa:16:3e:49:13:77"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.644 221554 DEBUG nova.network.os_vif_util [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:13:77,bridge_name='br-int',has_traffic_filtering=True,id=726c6b35-bbdd-44f9-81a9-69aa19514bdd,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap726c6b35-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.646 221554 DEBUG nova.objects.instance [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'pci_devices' on Instance uuid 518d36e7-b77a-415d-bd48-f53e637ed0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:00.685707) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847180685817, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 2234, "num_deletes": 261, "total_data_size": 5017452, "memory_usage": 5092752, "flush_reason": "Manual Compaction"}
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847180726050, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 3284278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52449, "largest_seqno": 54677, "table_properties": {"data_size": 3275094, "index_size": 5678, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19871, "raw_average_key_size": 20, "raw_value_size": 3256358, "raw_average_value_size": 3384, "num_data_blocks": 245, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847008, "oldest_key_time": 1769847008, "file_creation_time": 1769847180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 40434 microseconds, and 5976 cpu microseconds.
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:00.726150) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 3284278 bytes OK
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:00.726185) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:00.728050) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:00.728101) EVENT_LOG_v1 {"time_micros": 1769847180728090, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:00.728126) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 5007445, prev total WAL file size 5007445, number of live WAL files 2.
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:00.729808) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373537' seq:72057594037927935, type:22 .. '6C6F676D0032303130' seq:0, type:0; will stop at (end)
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(3207KB)], [102(10MB)]
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847180729855, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 14421441, "oldest_snapshot_seqno": -1}
Jan 31 03:13:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:00.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.963 221554 DEBUG nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  <uuid>518d36e7-b77a-415d-bd48-f53e637ed0d8</uuid>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  <name>instance-0000007d</name>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-478071790</nova:name>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:12:59</nova:creationTime>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <nova:user uuid="d7d9a44201d548aba1e1654e136ddd06">tempest-ServerStableDeviceRescueTest-569420416-project-member</nova:user>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <nova:project uuid="1633c84ea1bf46b080aaafd30bbcf25f">tempest-ServerStableDeviceRescueTest-569420416</nova:project>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <nova:port uuid="726c6b35-bbdd-44f9-81a9-69aa19514bdd">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <entry name="serial">518d36e7-b77a-415d-bd48-f53e637ed0d8</entry>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <entry name="uuid">518d36e7-b77a-415d-bd48-f53e637ed0d8</entry>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/518d36e7-b77a-415d-bd48-f53e637ed0d8_disk">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/518d36e7-b77a-415d-bd48-f53e637ed0d8_disk.config">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/518d36e7-b77a-415d-bd48-f53e637ed0d8_disk.rescue">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <target dev="sdb" bus="usb"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <boot order="1"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:49:13:77"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <target dev="tap726c6b35-bb"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8/console.log" append="off"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:13:00 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:13:00 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:13:00 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:13:00 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:13:00 np0005603609 nova_compute[221550]: 2026-01-31 08:13:00.970 221554 INFO nova.virt.libvirt.driver [-] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Instance destroyed successfully.#033[00m
Jan 31 03:13:01 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 8097 keys, 14270038 bytes, temperature: kUnknown
Jan 31 03:13:01 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847181047119, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 14270038, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14213252, "index_size": 35426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 208709, "raw_average_key_size": 25, "raw_value_size": 14066345, "raw_average_value_size": 1737, "num_data_blocks": 1407, "num_entries": 8097, "num_filter_entries": 8097, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769847180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:13:01 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:13:01 np0005603609 nova_compute[221550]: 2026-01-31 08:13:01.047 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:01 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:01.047564) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 14270038 bytes
Jan 31 03:13:01 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:01.132576) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 45.4 rd, 45.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 10.6 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(8.7) write-amplify(4.3) OK, records in: 8638, records dropped: 541 output_compression: NoCompression
Jan 31 03:13:01 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:01.132612) EVENT_LOG_v1 {"time_micros": 1769847181132601, "job": 64, "event": "compaction_finished", "compaction_time_micros": 317385, "compaction_time_cpu_micros": 29804, "output_level": 6, "num_output_files": 1, "total_output_size": 14270038, "num_input_records": 8638, "num_output_records": 8097, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:13:01 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:13:01 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847181133250, "job": 64, "event": "table_file_deletion", "file_number": 104}
Jan 31 03:13:01 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:13:01 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847181134749, "job": 64, "event": "table_file_deletion", "file_number": 102}
Jan 31 03:13:01 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:00.729057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:01 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:01.134789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:01 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:01.134792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:01 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:01.134794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:01 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:01.134795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:01 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:01.134797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:01 np0005603609 nova_compute[221550]: 2026-01-31 08:13:01.497 221554 DEBUG nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 03:13:01 np0005603609 nova_compute[221550]: 2026-01-31 08:13:01.498 221554 DEBUG nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 03:13:01 np0005603609 nova_compute[221550]: 2026-01-31 08:13:01.498 221554 DEBUG nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 03:13:01 np0005603609 nova_compute[221550]: 2026-01-31 08:13:01.498 221554 DEBUG nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No VIF found with MAC fa:16:3e:49:13:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 03:13:01 np0005603609 nova_compute[221550]: 2026-01-31 08:13:01.499 221554 INFO nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Using config drive
Jan 31 03:13:01 np0005603609 nova_compute[221550]: 2026-01-31 08:13:01.591 221554 DEBUG nova.storage.rbd_utils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:13:01 np0005603609 nova_compute[221550]: 2026-01-31 08:13:01.887 221554 DEBUG nova.objects.instance [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 518d36e7-b77a-415d-bd48-f53e637ed0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:13:02 np0005603609 nova_compute[221550]: 2026-01-31 08:13:02.176 221554 DEBUG nova.objects.instance [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'keypairs' on Instance uuid 518d36e7-b77a-415d-bd48-f53e637ed0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:13:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:13:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:02.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:13:02 np0005603609 nova_compute[221550]: 2026-01-31 08:13:02.660 221554 INFO nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Creating config drive at /var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8/disk.config.rescue
Jan 31 03:13:02 np0005603609 nova_compute[221550]: 2026-01-31 08:13:02.664 221554 DEBUG oslo_concurrency.processutils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpo6knfgvv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:13:02 np0005603609 nova_compute[221550]: 2026-01-31 08:13:02.791 221554 DEBUG oslo_concurrency.processutils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpo6knfgvv" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:13:02 np0005603609 nova_compute[221550]: 2026-01-31 08:13:02.817 221554 DEBUG nova.storage.rbd_utils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:13:02 np0005603609 nova_compute[221550]: 2026-01-31 08:13:02.823 221554 DEBUG oslo_concurrency.processutils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8/disk.config.rescue 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:13:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:13:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:02.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:13:02 np0005603609 nova_compute[221550]: 2026-01-31 08:13:02.994 221554 DEBUG oslo_concurrency.processutils [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8/disk.config.rescue 518d36e7-b77a-415d-bd48-f53e637ed0d8_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:13:02 np0005603609 nova_compute[221550]: 2026-01-31 08:13:02.996 221554 INFO nova.virt.libvirt.driver [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Deleting local config drive /var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8/disk.config.rescue because it was imported into RBD.
Jan 31 03:13:03 np0005603609 kernel: tap726c6b35-bb: entered promiscuous mode
Jan 31 03:13:03 np0005603609 NetworkManager[49064]: <info>  [1769847183.0323] manager: (tap726c6b35-bb): new Tun device (/org/freedesktop/NetworkManager/Devices/263)
Jan 31 03:13:03 np0005603609 nova_compute[221550]: 2026-01-31 08:13:03.032 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:13:03 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:03Z|00533|binding|INFO|Claiming lport 726c6b35-bbdd-44f9-81a9-69aa19514bdd for this chassis.
Jan 31 03:13:03 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:03Z|00534|binding|INFO|726c6b35-bbdd-44f9-81a9-69aa19514bdd: Claiming fa:16:3e:49:13:77 10.100.0.10
Jan 31 03:13:03 np0005603609 nova_compute[221550]: 2026-01-31 08:13:03.035 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:13:03 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:03Z|00535|binding|INFO|Setting lport 726c6b35-bbdd-44f9-81a9-69aa19514bdd ovn-installed in OVS
Jan 31 03:13:03 np0005603609 nova_compute[221550]: 2026-01-31 08:13:03.043 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:13:03 np0005603609 nova_compute[221550]: 2026-01-31 08:13:03.044 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:13:03 np0005603609 systemd-udevd[273968]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:13:03 np0005603609 systemd-machined[190912]: New machine qemu-66-instance-0000007d.
Jan 31 03:13:03 np0005603609 NetworkManager[49064]: <info>  [1769847183.0625] device (tap726c6b35-bb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:13:03 np0005603609 NetworkManager[49064]: <info>  [1769847183.0629] device (tap726c6b35-bb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:13:03 np0005603609 systemd[1]: Started Virtual Machine qemu-66-instance-0000007d.
Jan 31 03:13:03 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:03Z|00536|binding|INFO|Setting lport 726c6b35-bbdd-44f9-81a9-69aa19514bdd up in Southbound
Jan 31 03:13:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:03.167 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:13:77 10.100.0.10'], port_security=['fa:16:3e:49:13:77 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '518d36e7-b77a-415d-bd48-f53e637ed0d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=726c6b35-bbdd-44f9-81a9-69aa19514bdd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:13:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:03.169 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 726c6b35-bbdd-44f9-81a9-69aa19514bdd in datapath 088d6992-6ba6-4719-a977-b3d306740157 bound to our chassis
Jan 31 03:13:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:03.170 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157
Jan 31 03:13:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:03.182 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[199b0d9f-39b9-4056-a55b-5e536ae68398]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:13:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:03.202 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c61a574e-c154-41fd-ad73-1caa21d71d62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:13:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:03.206 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[7598f103-b0e5-4e1f-a380-eaa9a67a144d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:13:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:03.221 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[beae1def-ad61-4714-bc47-c1ee55f9f7d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:13:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:03.235 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e2a446-266d-41d0-ac1e-64e494d88cdb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736325, 'reachable_time': 27412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273983, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:13:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:03.249 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[40ee7473-e122-4f11-b6bf-962718f82aae]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736331, 'tstamp': 736331}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273984, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736333, 'tstamp': 736333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273984, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:13:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:03.251 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:13:03 np0005603609 nova_compute[221550]: 2026-01-31 08:13:03.253 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:13:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:03.255 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:13:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:03.255 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 03:13:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:03.255 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:13:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:03.256 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 03:13:03 np0005603609 nova_compute[221550]: 2026-01-31 08:13:03.529 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 518d36e7-b77a-415d-bd48-f53e637ed0d8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 03:13:03 np0005603609 nova_compute[221550]: 2026-01-31 08:13:03.530 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847183.5287406, 518d36e7-b77a-415d-bd48-f53e637ed0d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:13:03 np0005603609 nova_compute[221550]: 2026-01-31 08:13:03.530 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] VM Resumed (Lifecycle Event)
Jan 31 03:13:03 np0005603609 nova_compute[221550]: 2026-01-31 08:13:03.542 221554 DEBUG nova.compute.manager [None req-b204f13c-e226-4dfb-9db2-4c499714f539 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:13:03 np0005603609 nova_compute[221550]: 2026-01-31 08:13:03.833 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:13:03 np0005603609 nova_compute[221550]: 2026-01-31 08:13:03.837 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:13:04 np0005603609 nova_compute[221550]: 2026-01-31 08:13:04.220 221554 DEBUG nova.compute.manager [req-910ffe46-242d-4f55-a92e-11483efdb1b9 req-7fd56377-273d-418e-ac59-3365617225ad 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:13:04 np0005603609 nova_compute[221550]: 2026-01-31 08:13:04.221 221554 DEBUG oslo_concurrency.lockutils [req-910ffe46-242d-4f55-a92e-11483efdb1b9 req-7fd56377-273d-418e-ac59-3365617225ad 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:13:04 np0005603609 nova_compute[221550]: 2026-01-31 08:13:04.221 221554 DEBUG oslo_concurrency.lockutils [req-910ffe46-242d-4f55-a92e-11483efdb1b9 req-7fd56377-273d-418e-ac59-3365617225ad 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:13:04 np0005603609 nova_compute[221550]: 2026-01-31 08:13:04.222 221554 DEBUG oslo_concurrency.lockutils [req-910ffe46-242d-4f55-a92e-11483efdb1b9 req-7fd56377-273d-418e-ac59-3365617225ad 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:13:04 np0005603609 nova_compute[221550]: 2026-01-31 08:13:04.222 221554 DEBUG nova.compute.manager [req-910ffe46-242d-4f55-a92e-11483efdb1b9 req-7fd56377-273d-418e-ac59-3365617225ad 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] No waiting events found dispatching network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:13:04 np0005603609 nova_compute[221550]: 2026-01-31 08:13:04.222 221554 WARNING nova.compute.manager [req-910ffe46-242d-4f55-a92e-11483efdb1b9 req-7fd56377-273d-418e-ac59-3365617225ad 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received unexpected event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd for instance with vm_state active and task_state rescuing.
Jan 31 03:13:04 np0005603609 nova_compute[221550]: 2026-01-31 08:13:04.244 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847183.5288582, 518d36e7-b77a-415d-bd48-f53e637ed0d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:13:04 np0005603609 nova_compute[221550]: 2026-01-31 08:13:04.245 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] VM Started (Lifecycle Event)
Jan 31 03:13:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:04.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:04 np0005603609 nova_compute[221550]: 2026-01-31 08:13:04.525 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:13:04 np0005603609 nova_compute[221550]: 2026-01-31 08:13:04.531 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:13:04 np0005603609 nova_compute[221550]: 2026-01-31 08:13:04.792 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:13:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:04.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:06 np0005603609 nova_compute[221550]: 2026-01-31 08:13:06.049 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:06 np0005603609 nova_compute[221550]: 2026-01-31 08:13:06.278 221554 INFO nova.compute.manager [None req-8a5bb94a-6f9f-464c-80ba-357d6fee09a4 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Unrescuing#033[00m
Jan 31 03:13:06 np0005603609 nova_compute[221550]: 2026-01-31 08:13:06.278 221554 DEBUG oslo_concurrency.lockutils [None req-8a5bb94a-6f9f-464c-80ba-357d6fee09a4 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:13:06 np0005603609 nova_compute[221550]: 2026-01-31 08:13:06.278 221554 DEBUG oslo_concurrency.lockutils [None req-8a5bb94a-6f9f-464c-80ba-357d6fee09a4 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquired lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:13:06 np0005603609 nova_compute[221550]: 2026-01-31 08:13:06.279 221554 DEBUG nova.network.neutron [None req-8a5bb94a-6f9f-464c-80ba-357d6fee09a4 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:13:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:06.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:06 np0005603609 nova_compute[221550]: 2026-01-31 08:13:06.646 221554 DEBUG nova.compute.manager [req-e23ef9fe-3a74-4cf1-b3f6-9b0a6767b87e req-1d5914be-889b-4a5e-9256-bfdb798697c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:06 np0005603609 nova_compute[221550]: 2026-01-31 08:13:06.646 221554 DEBUG oslo_concurrency.lockutils [req-e23ef9fe-3a74-4cf1-b3f6-9b0a6767b87e req-1d5914be-889b-4a5e-9256-bfdb798697c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:06 np0005603609 nova_compute[221550]: 2026-01-31 08:13:06.647 221554 DEBUG oslo_concurrency.lockutils [req-e23ef9fe-3a74-4cf1-b3f6-9b0a6767b87e req-1d5914be-889b-4a5e-9256-bfdb798697c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:06 np0005603609 nova_compute[221550]: 2026-01-31 08:13:06.647 221554 DEBUG oslo_concurrency.lockutils [req-e23ef9fe-3a74-4cf1-b3f6-9b0a6767b87e req-1d5914be-889b-4a5e-9256-bfdb798697c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:06 np0005603609 nova_compute[221550]: 2026-01-31 08:13:06.647 221554 DEBUG nova.compute.manager [req-e23ef9fe-3a74-4cf1-b3f6-9b0a6767b87e req-1d5914be-889b-4a5e-9256-bfdb798697c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] No waiting events found dispatching network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:13:06 np0005603609 nova_compute[221550]: 2026-01-31 08:13:06.647 221554 WARNING nova.compute.manager [req-e23ef9fe-3a74-4cf1-b3f6-9b0a6767b87e req-1d5914be-889b-4a5e-9256-bfdb798697c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received unexpected event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 31 03:13:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:06.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:07.506 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:07.507 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:07.508 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:08.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:13:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:08.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:13:09 np0005603609 nova_compute[221550]: 2026-01-31 08:13:09.793 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:10.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:10.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:11 np0005603609 nova_compute[221550]: 2026-01-31 08:13:11.051 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:12 np0005603609 podman[274048]: 2026-01-31 08:13:12.168160392 +0000 UTC m=+0.046039697 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:13:12 np0005603609 podman[274047]: 2026-01-31 08:13:12.199827183 +0000 UTC m=+0.078118368 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 03:13:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:13:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:12.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:13:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:13:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:12.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:13:13 np0005603609 nova_compute[221550]: 2026-01-31 08:13:13.390 221554 DEBUG oslo_concurrency.lockutils [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "1483edbd-0387-4068-b62e-4789e777f5ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:13 np0005603609 nova_compute[221550]: 2026-01-31 08:13:13.392 221554 DEBUG oslo_concurrency.lockutils [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "1483edbd-0387-4068-b62e-4789e777f5ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:13 np0005603609 nova_compute[221550]: 2026-01-31 08:13:13.392 221554 DEBUG oslo_concurrency.lockutils [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:13 np0005603609 nova_compute[221550]: 2026-01-31 08:13:13.392 221554 DEBUG oslo_concurrency.lockutils [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:13 np0005603609 nova_compute[221550]: 2026-01-31 08:13:13.392 221554 DEBUG oslo_concurrency.lockutils [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:13 np0005603609 nova_compute[221550]: 2026-01-31 08:13:13.394 221554 INFO nova.compute.manager [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Terminating instance#033[00m
Jan 31 03:13:13 np0005603609 nova_compute[221550]: 2026-01-31 08:13:13.394 221554 DEBUG nova.compute.manager [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:13:13 np0005603609 kernel: tap6bd06a16-ff (unregistering): left promiscuous mode
Jan 31 03:13:13 np0005603609 NetworkManager[49064]: <info>  [1769847193.6202] device (tap6bd06a16-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:13:13 np0005603609 nova_compute[221550]: 2026-01-31 08:13:13.629 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:13Z|00537|binding|INFO|Releasing lport 6bd06a16-ffc6-4ae3-9810-f84a1120972a from this chassis (sb_readonly=0)
Jan 31 03:13:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:13Z|00538|binding|INFO|Setting lport 6bd06a16-ffc6-4ae3-9810-f84a1120972a down in Southbound
Jan 31 03:13:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:13Z|00539|binding|INFO|Removing iface tap6bd06a16-ff ovn-installed in OVS
Jan 31 03:13:13 np0005603609 nova_compute[221550]: 2026-01-31 08:13:13.637 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:13 np0005603609 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Jan 31 03:13:13 np0005603609 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000007e.scope: Consumed 14.346s CPU time.
Jan 31 03:13:13 np0005603609 systemd-machined[190912]: Machine qemu-65-instance-0000007e terminated.
Jan 31 03:13:13 np0005603609 nova_compute[221550]: 2026-01-31 08:13:13.831 221554 INFO nova.virt.libvirt.driver [-] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Instance destroyed successfully.#033[00m
Jan 31 03:13:13 np0005603609 nova_compute[221550]: 2026-01-31 08:13:13.831 221554 DEBUG nova.objects.instance [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'resources' on Instance uuid 1483edbd-0387-4068-b62e-4789e777f5ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:13:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:13.873 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:cc:12 10.100.0.3'], port_security=['fa:16:3e:a4:cc:12 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1483edbd-0387-4068-b62e-4789e777f5ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b88251fc-7610-460a-ba55-2ed186c6f696', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9f076789-2616-4234-8eab-1fc3da7d63b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7bb3690e-e43b-4d54-9d64-4797e471bf50, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=6bd06a16-ffc6-4ae3-9810-f84a1120972a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:13:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:13.875 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 6bd06a16-ffc6-4ae3-9810-f84a1120972a in datapath b88251fc-7610-460a-ba55-2ed186c6f696 unbound from our chassis#033[00m
Jan 31 03:13:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:13.877 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b88251fc-7610-460a-ba55-2ed186c6f696, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:13:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:13.878 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a381ed8b-7957-4b42-a2e1-c82716bbc0a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:13.879 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 namespace which is not needed anymore#033[00m
Jan 31 03:13:13 np0005603609 nova_compute[221550]: 2026-01-31 08:13:13.907 221554 DEBUG nova.network.neutron [None req-8a5bb94a-6f9f-464c-80ba-357d6fee09a4 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Updating instance_info_cache with network_info: [{"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.056 221554 DEBUG nova.virt.libvirt.vif [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:12:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-860967629',display_name='tempest-ServersTestJSON-server-860967629',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-860967629',id=126,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:12:34Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-vouorxq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_m
in_ram='0',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:12:35Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=1483edbd-0387-4068-b62e-4789e777f5ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "address": "fa:16:3e:a4:cc:12", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bd06a16-ff", "ovs_interfaceid": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.057 221554 DEBUG nova.network.os_vif_util [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "address": "fa:16:3e:a4:cc:12", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bd06a16-ff", "ovs_interfaceid": "6bd06a16-ffc6-4ae3-9810-f84a1120972a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.058 221554 DEBUG nova.network.os_vif_util [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:cc:12,bridge_name='br-int',has_traffic_filtering=True,id=6bd06a16-ffc6-4ae3-9810-f84a1120972a,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bd06a16-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.059 221554 DEBUG os_vif [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:cc:12,bridge_name='br-int',has_traffic_filtering=True,id=6bd06a16-ffc6-4ae3-9810-f84a1120972a,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bd06a16-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.060 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.061 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6bd06a16-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.064 221554 DEBUG oslo_concurrency.lockutils [None req-8a5bb94a-6f9f-464c-80ba-357d6fee09a4 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Releasing lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.065 221554 DEBUG nova.objects.instance [None req-8a5bb94a-6f9f-464c-80ba-357d6fee09a4 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'flavor' on Instance uuid 518d36e7-b77a-415d-bd48-f53e637ed0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.067 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.069 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.071 221554 INFO os_vif [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:cc:12,bridge_name='br-int',has_traffic_filtering=True,id=6bd06a16-ffc6-4ae3-9810-f84a1120972a,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bd06a16-ff')#033[00m
Jan 31 03:13:14 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[273376]: [NOTICE]   (273380) : haproxy version is 2.8.14-c23fe91
Jan 31 03:13:14 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[273376]: [NOTICE]   (273380) : path to executable is /usr/sbin/haproxy
Jan 31 03:13:14 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[273376]: [WARNING]  (273380) : Exiting Master process...
Jan 31 03:13:14 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[273376]: [WARNING]  (273380) : Exiting Master process...
Jan 31 03:13:14 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[273376]: [ALERT]    (273380) : Current worker (273383) exited with code 143 (Terminated)
Jan 31 03:13:14 np0005603609 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[273376]: [WARNING]  (273380) : All workers exited. Exiting... (0)
Jan 31 03:13:14 np0005603609 systemd[1]: libpod-e167556c93ede248987ddf2900db1d24ff240e2d8c683b884fae283645430d6c.scope: Deactivated successfully.
Jan 31 03:13:14 np0005603609 podman[274123]: 2026-01-31 08:13:14.083543624 +0000 UTC m=+0.133210063 container died e167556c93ede248987ddf2900db1d24ff240e2d8c683b884fae283645430d6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:13:14 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e167556c93ede248987ddf2900db1d24ff240e2d8c683b884fae283645430d6c-userdata-shm.mount: Deactivated successfully.
Jan 31 03:13:14 np0005603609 systemd[1]: var-lib-containers-storage-overlay-0cdc6eee46d2ff5a4d50d374fd89edf68c44424dd3e032052942419e2c095e8f-merged.mount: Deactivated successfully.
Jan 31 03:13:14 np0005603609 podman[274123]: 2026-01-31 08:13:14.420118283 +0000 UTC m=+0.469784762 container cleanup e167556c93ede248987ddf2900db1d24ff240e2d8c683b884fae283645430d6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 03:13:14 np0005603609 systemd[1]: libpod-conmon-e167556c93ede248987ddf2900db1d24ff240e2d8c683b884fae283645430d6c.scope: Deactivated successfully.
Jan 31 03:13:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:14.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:14 np0005603609 kernel: tap726c6b35-bb (unregistering): left promiscuous mode
Jan 31 03:13:14 np0005603609 NetworkManager[49064]: <info>  [1769847194.6707] device (tap726c6b35-bb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:13:14 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:14Z|00540|binding|INFO|Releasing lport 726c6b35-bbdd-44f9-81a9-69aa19514bdd from this chassis (sb_readonly=0)
Jan 31 03:13:14 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:14Z|00541|binding|INFO|Setting lport 726c6b35-bbdd-44f9-81a9-69aa19514bdd down in Southbound
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.673 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:14 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:14Z|00542|binding|INFO|Removing iface tap726c6b35-bb ovn-installed in OVS
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.675 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.688 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:14 np0005603609 podman[274172]: 2026-01-31 08:13:14.700837289 +0000 UTC m=+0.265011240 container remove e167556c93ede248987ddf2900db1d24ff240e2d8c683b884fae283645430d6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.705 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e4fc2d-122b-41eb-8ec2-bebf3540ac75]: (4, ('Sat Jan 31 08:13:13 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 (e167556c93ede248987ddf2900db1d24ff240e2d8c683b884fae283645430d6c)\ne167556c93ede248987ddf2900db1d24ff240e2d8c683b884fae283645430d6c\nSat Jan 31 08:13:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 (e167556c93ede248987ddf2900db1d24ff240e2d8c683b884fae283645430d6c)\ne167556c93ede248987ddf2900db1d24ff240e2d8c683b884fae283645430d6c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.706 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[08cf5b35-001b-4681-9596-44772a446d53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.706 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb88251fc-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.708 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:14 np0005603609 kernel: tapb88251fc-70: left promiscuous mode
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.717 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:14 np0005603609 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Jan 31 03:13:14 np0005603609 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000007d.scope: Consumed 11.464s CPU time.
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.721 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8f764916-5e69-43d7-a2a1-e5efd450e4f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:14 np0005603609 systemd-machined[190912]: Machine qemu-66-instance-0000007d terminated.
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.734 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[501ea796-6148-4951-ae2a-db59f4c8ce26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.735 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[54f588e8-b7b2-4f2a-9752-a5d67b77d0e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.745 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[34d47cea-b3cc-4634-a81c-a177f3bcb2ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739279, 'reachable_time': 32803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274195, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:14 np0005603609 systemd[1]: run-netns-ovnmeta\x2db88251fc\x2d7610\x2d460a\x2dba55\x2d2ed186c6f696.mount: Deactivated successfully.
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.747 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.748 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[51cf3289-d55a-4b4a-9be9-b2b6be2a7e46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.791 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:13:77 10.100.0.10'], port_security=['fa:16:3e:49:13:77 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '518d36e7-b77a-415d-bd48-f53e637ed0d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=726c6b35-bbdd-44f9-81a9-69aa19514bdd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.792 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 726c6b35-bbdd-44f9-81a9-69aa19514bdd in datapath 088d6992-6ba6-4719-a977-b3d306740157 unbound from our chassis#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.793 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.795 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.803 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5a477f8d-2b8f-4eac-88c7-dab8ff64ad97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.822 221554 DEBUG nova.compute.manager [req-2f5d7631-12bb-40f9-bc12-a30cfd008448 req-33b8bbd6-9126-443a-a69f-054fc3a5b4c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Received event network-vif-unplugged-6bd06a16-ffc6-4ae3-9810-f84a1120972a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.822 221554 DEBUG oslo_concurrency.lockutils [req-2f5d7631-12bb-40f9-bc12-a30cfd008448 req-33b8bbd6-9126-443a-a69f-054fc3a5b4c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.823 221554 DEBUG oslo_concurrency.lockutils [req-2f5d7631-12bb-40f9-bc12-a30cfd008448 req-33b8bbd6-9126-443a-a69f-054fc3a5b4c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.823 221554 DEBUG oslo_concurrency.lockutils [req-2f5d7631-12bb-40f9-bc12-a30cfd008448 req-33b8bbd6-9126-443a-a69f-054fc3a5b4c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.823 221554 DEBUG nova.compute.manager [req-2f5d7631-12bb-40f9-bc12-a30cfd008448 req-33b8bbd6-9126-443a-a69f-054fc3a5b4c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] No waiting events found dispatching network-vif-unplugged-6bd06a16-ffc6-4ae3-9810-f84a1120972a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.823 221554 DEBUG nova.compute.manager [req-2f5d7631-12bb-40f9-bc12-a30cfd008448 req-33b8bbd6-9126-443a-a69f-054fc3a5b4c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Received event network-vif-unplugged-6bd06a16-ffc6-4ae3-9810-f84a1120972a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.828 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[afc19089-01db-43d6-b5da-3192d443a35e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.831 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[86bc1dae-7db5-4385-87e8-81b3d00121b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.851 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[8b247ae6-e2cd-4beb-89f8-b075d427c3fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.863 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[01da6d35-2468-43aa-b05c-b2b55a75b2e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736325, 'reachable_time': 27412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274201, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.874 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d790982f-29f0-4d72-a701-1a19893533fc]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736331, 'tstamp': 736331}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274202, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736333, 'tstamp': 736333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274202, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.876 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.877 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.883 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.883 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.883 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.883 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:14.884 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.888 221554 INFO nova.virt.libvirt.driver [-] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Instance destroyed successfully.#033[00m
Jan 31 03:13:14 np0005603609 nova_compute[221550]: 2026-01-31 08:13:14.888 221554 DEBUG nova.objects.instance [None req-8a5bb94a-6f9f-464c-80ba-357d6fee09a4 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'numa_topology' on Instance uuid 518d36e7-b77a-415d-bd48-f53e637ed0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:13:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:13:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:14.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:13:15 np0005603609 systemd-udevd[274092]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:13:15 np0005603609 kernel: tap726c6b35-bb: entered promiscuous mode
Jan 31 03:13:15 np0005603609 NetworkManager[49064]: <info>  [1769847195.1370] manager: (tap726c6b35-bb): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Jan 31 03:13:15 np0005603609 nova_compute[221550]: 2026-01-31 08:13:15.136 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:15 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:15Z|00543|binding|INFO|Claiming lport 726c6b35-bbdd-44f9-81a9-69aa19514bdd for this chassis.
Jan 31 03:13:15 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:15Z|00544|binding|INFO|726c6b35-bbdd-44f9-81a9-69aa19514bdd: Claiming fa:16:3e:49:13:77 10.100.0.10
Jan 31 03:13:15 np0005603609 NetworkManager[49064]: <info>  [1769847195.1442] device (tap726c6b35-bb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:13:15 np0005603609 NetworkManager[49064]: <info>  [1769847195.1448] device (tap726c6b35-bb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:13:15 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:15Z|00545|binding|INFO|Setting lport 726c6b35-bbdd-44f9-81a9-69aa19514bdd ovn-installed in OVS
Jan 31 03:13:15 np0005603609 nova_compute[221550]: 2026-01-31 08:13:15.146 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:15 np0005603609 nova_compute[221550]: 2026-01-31 08:13:15.148 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:15 np0005603609 systemd-machined[190912]: New machine qemu-67-instance-0000007d.
Jan 31 03:13:15 np0005603609 systemd[1]: Started Virtual Machine qemu-67-instance-0000007d.
Jan 31 03:13:15 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:15Z|00546|binding|INFO|Setting lport 726c6b35-bbdd-44f9-81a9-69aa19514bdd up in Southbound
Jan 31 03:13:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:15.345 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:13:77 10.100.0.10'], port_security=['fa:16:3e:49:13:77 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '518d36e7-b77a-415d-bd48-f53e637ed0d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=726c6b35-bbdd-44f9-81a9-69aa19514bdd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:13:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:15.347 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 726c6b35-bbdd-44f9-81a9-69aa19514bdd in datapath 088d6992-6ba6-4719-a977-b3d306740157 bound to our chassis#033[00m
Jan 31 03:13:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:15.348 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157#033[00m
Jan 31 03:13:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:15.361 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ede011-422f-4969-a32d-62cf189b7074]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:15.381 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[793c4bd9-eb2e-4cca-ba3a-a6bc694ba1e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:15.384 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[65206a34-f1a6-48a1-a9e4-6aa3ea3adf12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:15.405 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff6c4fd-cc0a-417e-9543-706862b7babb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:15.429 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a80af0db-b823-440e-8289-742d19d94cff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736325, 'reachable_time': 27412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274240, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:15.443 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ecefcf7b-409a-40d1-8fe1-c9276ab92521]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736331, 'tstamp': 736331}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274241, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736333, 'tstamp': 736333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274241, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:15.444 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:15 np0005603609 nova_compute[221550]: 2026-01-31 08:13:15.445 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:15.446 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:15.447 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:13:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:15.447 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:15.447 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:13:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:15 np0005603609 nova_compute[221550]: 2026-01-31 08:13:15.921 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 518d36e7-b77a-415d-bd48-f53e637ed0d8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:13:15 np0005603609 nova_compute[221550]: 2026-01-31 08:13:15.922 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847195.920734, 518d36e7-b77a-415d-bd48-f53e637ed0d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:13:15 np0005603609 nova_compute[221550]: 2026-01-31 08:13:15.923 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:13:16 np0005603609 nova_compute[221550]: 2026-01-31 08:13:16.048 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:13:16 np0005603609 nova_compute[221550]: 2026-01-31 08:13:16.052 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:13:16 np0005603609 nova_compute[221550]: 2026-01-31 08:13:16.193 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:13:16 np0005603609 nova_compute[221550]: 2026-01-31 08:13:16.194 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847195.922203, 518d36e7-b77a-415d-bd48-f53e637ed0d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:13:16 np0005603609 nova_compute[221550]: 2026-01-31 08:13:16.194 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] VM Started (Lifecycle Event)#033[00m
Jan 31 03:13:16 np0005603609 nova_compute[221550]: 2026-01-31 08:13:16.217 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:13:16 np0005603609 nova_compute[221550]: 2026-01-31 08:13:16.220 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:13:16 np0005603609 nova_compute[221550]: 2026-01-31 08:13:16.388 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:13:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:13:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:16.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:13:16 np0005603609 nova_compute[221550]: 2026-01-31 08:13:16.638 221554 DEBUG nova.compute.manager [req-a4e796e2-6e4e-4cc1-9713-6c8ea9c8965f req-0899f2a8-59f4-40e8-8c3a-41ce4b517935 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received event network-vif-unplugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:16 np0005603609 nova_compute[221550]: 2026-01-31 08:13:16.639 221554 DEBUG oslo_concurrency.lockutils [req-a4e796e2-6e4e-4cc1-9713-6c8ea9c8965f req-0899f2a8-59f4-40e8-8c3a-41ce4b517935 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:16 np0005603609 nova_compute[221550]: 2026-01-31 08:13:16.639 221554 DEBUG oslo_concurrency.lockutils [req-a4e796e2-6e4e-4cc1-9713-6c8ea9c8965f req-0899f2a8-59f4-40e8-8c3a-41ce4b517935 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:16 np0005603609 nova_compute[221550]: 2026-01-31 08:13:16.640 221554 DEBUG oslo_concurrency.lockutils [req-a4e796e2-6e4e-4cc1-9713-6c8ea9c8965f req-0899f2a8-59f4-40e8-8c3a-41ce4b517935 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:16 np0005603609 nova_compute[221550]: 2026-01-31 08:13:16.640 221554 DEBUG nova.compute.manager [req-a4e796e2-6e4e-4cc1-9713-6c8ea9c8965f req-0899f2a8-59f4-40e8-8c3a-41ce4b517935 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] No waiting events found dispatching network-vif-unplugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:13:16 np0005603609 nova_compute[221550]: 2026-01-31 08:13:16.640 221554 WARNING nova.compute.manager [req-a4e796e2-6e4e-4cc1-9713-6c8ea9c8965f req-0899f2a8-59f4-40e8-8c3a-41ce4b517935 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received unexpected event network-vif-unplugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 31 03:13:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:16.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:17 np0005603609 nova_compute[221550]: 2026-01-31 08:13:17.042 221554 DEBUG nova.compute.manager [req-28721e57-3c86-4b6f-94a2-25a280066379 req-b16e5438-4809-447b-83b6-93d204013659 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Received event network-vif-plugged-6bd06a16-ffc6-4ae3-9810-f84a1120972a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:17 np0005603609 nova_compute[221550]: 2026-01-31 08:13:17.043 221554 DEBUG oslo_concurrency.lockutils [req-28721e57-3c86-4b6f-94a2-25a280066379 req-b16e5438-4809-447b-83b6-93d204013659 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:17 np0005603609 nova_compute[221550]: 2026-01-31 08:13:17.044 221554 DEBUG oslo_concurrency.lockutils [req-28721e57-3c86-4b6f-94a2-25a280066379 req-b16e5438-4809-447b-83b6-93d204013659 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:17 np0005603609 nova_compute[221550]: 2026-01-31 08:13:17.044 221554 DEBUG oslo_concurrency.lockutils [req-28721e57-3c86-4b6f-94a2-25a280066379 req-b16e5438-4809-447b-83b6-93d204013659 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1483edbd-0387-4068-b62e-4789e777f5ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:17 np0005603609 nova_compute[221550]: 2026-01-31 08:13:17.044 221554 DEBUG nova.compute.manager [req-28721e57-3c86-4b6f-94a2-25a280066379 req-b16e5438-4809-447b-83b6-93d204013659 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] No waiting events found dispatching network-vif-plugged-6bd06a16-ffc6-4ae3-9810-f84a1120972a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:13:17 np0005603609 nova_compute[221550]: 2026-01-31 08:13:17.045 221554 WARNING nova.compute.manager [req-28721e57-3c86-4b6f-94a2-25a280066379 req-b16e5438-4809-447b-83b6-93d204013659 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Received unexpected event network-vif-plugged-6bd06a16-ffc6-4ae3-9810-f84a1120972a for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:13:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:18.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:13:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:18.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:13:18 np0005603609 nova_compute[221550]: 2026-01-31 08:13:18.932 221554 INFO nova.virt.libvirt.driver [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Deleting instance files /var/lib/nova/instances/1483edbd-0387-4068-b62e-4789e777f5ab_del#033[00m
Jan 31 03:13:18 np0005603609 nova_compute[221550]: 2026-01-31 08:13:18.933 221554 INFO nova.virt.libvirt.driver [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Deletion of /var/lib/nova/instances/1483edbd-0387-4068-b62e-4789e777f5ab_del complete#033[00m
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.064 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.201 221554 INFO nova.compute.manager [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Took 5.81 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.202 221554 DEBUG oslo.service.loopingcall [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.202 221554 DEBUG nova.compute.manager [-] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.202 221554 DEBUG nova.network.neutron [-] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.366 221554 DEBUG nova.compute.manager [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.367 221554 DEBUG oslo_concurrency.lockutils [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.367 221554 DEBUG oslo_concurrency.lockutils [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.367 221554 DEBUG oslo_concurrency.lockutils [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.368 221554 DEBUG nova.compute.manager [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] No waiting events found dispatching network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.368 221554 WARNING nova.compute.manager [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received unexpected event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.368 221554 DEBUG nova.compute.manager [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.368 221554 DEBUG oslo_concurrency.lockutils [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.369 221554 DEBUG oslo_concurrency.lockutils [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.369 221554 DEBUG oslo_concurrency.lockutils [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.369 221554 DEBUG nova.compute.manager [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] No waiting events found dispatching network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.369 221554 WARNING nova.compute.manager [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received unexpected event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd for instance with vm_state rescued and task_state unrescuing.
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.370 221554 DEBUG nova.compute.manager [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.370 221554 DEBUG oslo_concurrency.lockutils [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.370 221554 DEBUG oslo_concurrency.lockutils [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.370 221554 DEBUG oslo_concurrency.lockutils [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.370 221554 DEBUG nova.compute.manager [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] No waiting events found dispatching network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.371 221554 WARNING nova.compute.manager [req-e3df7559-649b-4cc5-a63a-c565fb65832e req-069b424d-d73c-4221-906a-26871d83f5e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received unexpected event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd for instance with vm_state rescued and task_state unrescuing.
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:13:19 np0005603609 nova_compute[221550]: 2026-01-31 08:13:19.798 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:13:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:13:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:20.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:13:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:20.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:20 np0005603609 nova_compute[221550]: 2026-01-31 08:13:20.921 221554 DEBUG nova.network.neutron [-] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:13:20 np0005603609 nova_compute[221550]: 2026-01-31 08:13:20.925 221554 DEBUG nova.compute.manager [req-f54154f0-6fe9-4d9b-ba2f-c23d340f2175 req-1b69e21e-f9dd-425f-85f8-45bf75274df3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Received event network-vif-deleted-6bd06a16-ffc6-4ae3-9810-f84a1120972a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:13:20 np0005603609 nova_compute[221550]: 2026-01-31 08:13:20.925 221554 INFO nova.compute.manager [req-f54154f0-6fe9-4d9b-ba2f-c23d340f2175 req-1b69e21e-f9dd-425f-85f8-45bf75274df3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Neutron deleted interface 6bd06a16-ffc6-4ae3-9810-f84a1120972a; detaching it from the instance and deleting it from the info cache
Jan 31 03:13:20 np0005603609 nova_compute[221550]: 2026-01-31 08:13:20.925 221554 DEBUG nova.network.neutron [req-f54154f0-6fe9-4d9b-ba2f-c23d340f2175 req-1b69e21e-f9dd-425f-85f8-45bf75274df3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:13:21 np0005603609 nova_compute[221550]: 2026-01-31 08:13:21.126 221554 INFO nova.compute.manager [-] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Took 1.92 seconds to deallocate network for instance.
Jan 31 03:13:21 np0005603609 nova_compute[221550]: 2026-01-31 08:13:21.135 221554 DEBUG nova.compute.manager [req-f54154f0-6fe9-4d9b-ba2f-c23d340f2175 req-1b69e21e-f9dd-425f-85f8-45bf75274df3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Detach interface failed, port_id=6bd06a16-ffc6-4ae3-9810-f84a1120972a, reason: Instance 1483edbd-0387-4068-b62e-4789e777f5ab could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 03:13:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:21 np0005603609 nova_compute[221550]: 2026-01-31 08:13:21.387 221554 DEBUG oslo_concurrency.lockutils [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:13:21 np0005603609 nova_compute[221550]: 2026-01-31 08:13:21.388 221554 DEBUG oslo_concurrency.lockutils [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:13:21 np0005603609 nova_compute[221550]: 2026-01-31 08:13:21.501 221554 DEBUG oslo_concurrency.processutils [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:13:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:13:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/645181621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:13:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:22.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:22 np0005603609 nova_compute[221550]: 2026-01-31 08:13:22.516 221554 DEBUG oslo_concurrency.processutils [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:13:22 np0005603609 nova_compute[221550]: 2026-01-31 08:13:22.523 221554 DEBUG nova.compute.provider_tree [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:13:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:22.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:23 np0005603609 nova_compute[221550]: 2026-01-31 08:13:23.051 221554 DEBUG nova.scheduler.client.report [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:13:24 np0005603609 nova_compute[221550]: 2026-01-31 08:13:24.067 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:13:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:24.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:24 np0005603609 nova_compute[221550]: 2026-01-31 08:13:24.798 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:13:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:24.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:25 np0005603609 nova_compute[221550]: 2026-01-31 08:13:25.168 221554 DEBUG oslo_concurrency.lockutils [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 3.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:13:25 np0005603609 nova_compute[221550]: 2026-01-31 08:13:25.448 221554 DEBUG nova.compute.manager [None req-8a5bb94a-6f9f-464c-80ba-357d6fee09a4 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:13:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:13:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:26.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:13:26 np0005603609 nova_compute[221550]: 2026-01-31 08:13:26.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:13:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:26.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:26 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Jan 31 03:13:26 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:26.977592) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:13:26 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Jan 31 03:13:26 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847206977735, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 493, "num_deletes": 251, "total_data_size": 664324, "memory_usage": 674632, "flush_reason": "Manual Compaction"}
Jan 31 03:13:26 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Jan 31 03:13:27 np0005603609 nova_compute[221550]: 2026-01-31 08:13:27.025 221554 INFO nova.scheduler.client.report [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Deleted allocations for instance 1483edbd-0387-4068-b62e-4789e777f5ab
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847207189258, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 437883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54682, "largest_seqno": 55170, "table_properties": {"data_size": 435225, "index_size": 694, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6410, "raw_average_key_size": 18, "raw_value_size": 429957, "raw_average_value_size": 1268, "num_data_blocks": 31, "num_entries": 339, "num_filter_entries": 339, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847181, "oldest_key_time": 1769847181, "file_creation_time": 1769847206, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 211674 microseconds, and 2852 cpu microseconds.
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:27.189296) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 437883 bytes OK
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:27.189313) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:27.247471) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:27.247512) EVENT_LOG_v1 {"time_micros": 1769847207247504, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:27.247533) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 661378, prev total WAL file size 661378, number of live WAL files 2.
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:27.248156) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(427KB)], [105(13MB)]
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847207248244, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 14707921, "oldest_snapshot_seqno": -1}
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 7923 keys, 12781487 bytes, temperature: kUnknown
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847207463855, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 12781487, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12727194, "index_size": 33381, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 205792, "raw_average_key_size": 25, "raw_value_size": 12584531, "raw_average_value_size": 1588, "num_data_blocks": 1314, "num_entries": 7923, "num_filter_entries": 7923, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769847207, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:27.464282) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 12781487 bytes
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:27.593226) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.2 rd, 59.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.6 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(62.8) write-amplify(29.2) OK, records in: 8436, records dropped: 513 output_compression: NoCompression
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:27.593270) EVENT_LOG_v1 {"time_micros": 1769847207593254, "job": 66, "event": "compaction_finished", "compaction_time_micros": 215788, "compaction_time_cpu_micros": 26736, "output_level": 6, "num_output_files": 1, "total_output_size": 12781487, "num_input_records": 8436, "num_output_records": 7923, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847207593594, "job": 66, "event": "table_file_deletion", "file_number": 107}
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847207595351, "job": 66, "event": "table_file_deletion", "file_number": 105}
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:27.247972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:27.595414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:27.595417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:27.595419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:27.595420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:27 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:13:27.595421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:27 np0005603609 nova_compute[221550]: 2026-01-31 08:13:27.755 221554 DEBUG oslo_concurrency.lockutils [None req-37b92d77-221f-4d87-af93-ed6d43a95548 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "1483edbd-0387-4068-b62e-4789e777f5ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:13:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:13:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:28.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:13:28 np0005603609 nova_compute[221550]: 2026-01-31 08:13:28.829 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847193.8285015, 1483edbd-0387-4068-b62e-4789e777f5ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:13:28 np0005603609 nova_compute[221550]: 2026-01-31 08:13:28.830 221554 INFO nova.compute.manager [-] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] VM Stopped (Lifecycle Event)
Jan 31 03:13:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:13:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:28.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:13:29 np0005603609 nova_compute[221550]: 2026-01-31 08:13:29.032 221554 DEBUG nova.compute.manager [None req-206ecbb6-8383-40a7-b190-3e4ccabdca5b - - - - - -] [instance: 1483edbd-0387-4068-b62e-4789e777f5ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:13:29 np0005603609 nova_compute[221550]: 2026-01-31 08:13:29.070 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:13:29 np0005603609 nova_compute[221550]: 2026-01-31 08:13:29.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:13:29 np0005603609 nova_compute[221550]: 2026-01-31 08:13:29.800 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:13:30 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:30Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:13:77 10.100.0.10
Jan 31 03:13:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:30.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:30 np0005603609 nova_compute[221550]: 2026-01-31 08:13:30.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:13:30 np0005603609 nova_compute[221550]: 2026-01-31 08:13:30.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 03:13:30 np0005603609 nova_compute[221550]: 2026-01-31 08:13:30.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 03:13:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:13:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:30.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:13:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:31 np0005603609 nova_compute[221550]: 2026-01-31 08:13:31.483 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:13:31 np0005603609 nova_compute[221550]: 2026-01-31 08:13:31.484 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:13:31 np0005603609 nova_compute[221550]: 2026-01-31 08:13:31.484 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 03:13:31 np0005603609 nova_compute[221550]: 2026-01-31 08:13:31.484 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:13:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:13:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:32.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:13:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:13:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:32.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.072 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:34.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.541 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.541 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.605 221554 DEBUG nova.compute.manager [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.786 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Updating instance_info_cache with network_info: [{"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.801 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.802 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.803 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.810 221554 DEBUG nova.virt.hardware [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.810 221554 INFO nova.compute.claims [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.883 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.883 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.883 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.884 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.884 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.884 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:13:34 np0005603609 nova_compute[221550]: 2026-01-31 08:13:34.884 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:13:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:34.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:35 np0005603609 nova_compute[221550]: 2026-01-31 08:13:35.080 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:35 np0005603609 nova_compute[221550]: 2026-01-31 08:13:35.272 221554 DEBUG oslo_concurrency.processutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:13:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:36.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:36.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:13:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1064267121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:13:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:37 np0005603609 nova_compute[221550]: 2026-01-31 08:13:37.524 221554 DEBUG oslo_concurrency.processutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:13:37 np0005603609 nova_compute[221550]: 2026-01-31 08:13:37.531 221554 DEBUG nova.compute.provider_tree [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:13:37 np0005603609 nova_compute[221550]: 2026-01-31 08:13:37.584 221554 DEBUG nova.scheduler.client.report [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:13:37 np0005603609 podman[274522]: 2026-01-31 08:13:37.667000577 +0000 UTC m=+1.473212606 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:13:37 np0005603609 nova_compute[221550]: 2026-01-31 08:13:37.764 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:37 np0005603609 nova_compute[221550]: 2026-01-31 08:13:37.765 221554 DEBUG nova.compute.manager [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:13:37 np0005603609 nova_compute[221550]: 2026-01-31 08:13:37.768 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 2.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:37 np0005603609 nova_compute[221550]: 2026-01-31 08:13:37.769 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:37 np0005603609 nova_compute[221550]: 2026-01-31 08:13:37.769 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:13:37 np0005603609 nova_compute[221550]: 2026-01-31 08:13:37.769 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:13:37 np0005603609 podman[274542]: 2026-01-31 08:13:37.889540755 +0000 UTC m=+0.129843051 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:13:38 np0005603609 nova_compute[221550]: 2026-01-31 08:13:38.022 221554 DEBUG nova.compute.manager [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:13:38 np0005603609 nova_compute[221550]: 2026-01-31 08:13:38.024 221554 DEBUG nova.network.neutron [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:13:38 np0005603609 nova_compute[221550]: 2026-01-31 08:13:38.152 221554 INFO nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:13:38 np0005603609 podman[274522]: 2026-01-31 08:13:38.182283361 +0000 UTC m=+1.988495360 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:13:38 np0005603609 nova_compute[221550]: 2026-01-31 08:13:38.214 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:13:38 np0005603609 nova_compute[221550]: 2026-01-31 08:13:38.545 221554 DEBUG nova.policy [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd7d9a44201d548aba1e1654e136ddd06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:13:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:38.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:13:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:38.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:13:39 np0005603609 nova_compute[221550]: 2026-01-31 08:13:39.074 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:39 np0005603609 nova_compute[221550]: 2026-01-31 08:13:39.638 221554 DEBUG nova.compute.manager [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:13:39 np0005603609 nova_compute[221550]: 2026-01-31 08:13:39.713 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:13:39 np0005603609 nova_compute[221550]: 2026-01-31 08:13:39.714 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:13:39 np0005603609 nova_compute[221550]: 2026-01-31 08:13:39.718 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:13:39 np0005603609 nova_compute[221550]: 2026-01-31 08:13:39.719 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:13:39 np0005603609 nova_compute[221550]: 2026-01-31 08:13:39.802 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:39 np0005603609 nova_compute[221550]: 2026-01-31 08:13:39.877 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:13:39 np0005603609 nova_compute[221550]: 2026-01-31 08:13:39.878 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4035MB free_disk=20.693622589111328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:13:39 np0005603609 nova_compute[221550]: 2026-01-31 08:13:39.878 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:39 np0005603609 nova_compute[221550]: 2026-01-31 08:13:39.878 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.007 221554 DEBUG nova.compute.manager [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.010 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.011 221554 INFO nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Creating image(s)#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.033 221554 DEBUG nova.storage.rbd_utils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.066 221554 DEBUG nova.storage.rbd_utils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.089 221554 DEBUG nova.storage.rbd_utils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.092 221554 DEBUG oslo_concurrency.processutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.198 221554 DEBUG oslo_concurrency.processutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.199 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.200 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.200 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.219 221554 DEBUG nova.storage.rbd_utils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.222 221554 DEBUG oslo_concurrency.processutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.238 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 981eb990-ec0c-4673-98e7-102afbe0bb51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.238 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 518d36e7-b77a-415d-bd48-f53e637ed0d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.239 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 9fac8a1f-1f12-41c8-81b6-59c0bf962590 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.239 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.239 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:13:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:13:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:40.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:13:40 np0005603609 nova_compute[221550]: 2026-01-31 08:13:40.811 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:13:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:40.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:13:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:13:41 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3296464802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:13:41 np0005603609 nova_compute[221550]: 2026-01-31 08:13:41.970 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:13:41 np0005603609 nova_compute[221550]: 2026-01-31 08:13:41.976 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:13:42 np0005603609 nova_compute[221550]: 2026-01-31 08:13:42.085 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:13:42 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:13:42 np0005603609 nova_compute[221550]: 2026-01-31 08:13:42.233 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:13:42 np0005603609 nova_compute[221550]: 2026-01-31 08:13:42.233 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:42.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:42.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:43 np0005603609 podman[274918]: 2026-01-31 08:13:43.162591501 +0000 UTC m=+0.051133710 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:13:43 np0005603609 nova_compute[221550]: 2026-01-31 08:13:43.171 221554 DEBUG nova.network.neutron [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Successfully created port: bec4c251-b190-498a-8020-4390b883634e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:13:43 np0005603609 podman[274917]: 2026-01-31 08:13:43.211731252 +0000 UTC m=+0.100271931 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 03:13:43 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:13:43 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:13:43 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:13:43 np0005603609 nova_compute[221550]: 2026-01-31 08:13:43.264 221554 DEBUG oslo_concurrency.processutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:13:43 np0005603609 nova_compute[221550]: 2026-01-31 08:13:43.331 221554 DEBUG nova.storage.rbd_utils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] resizing rbd image 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:13:43 np0005603609 nova_compute[221550]: 2026-01-31 08:13:43.572 221554 DEBUG nova.objects.instance [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'migration_context' on Instance uuid 9fac8a1f-1f12-41c8-81b6-59c0bf962590 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:13:43 np0005603609 nova_compute[221550]: 2026-01-31 08:13:43.605 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:13:43 np0005603609 nova_compute[221550]: 2026-01-31 08:13:43.605 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Ensure instance console log exists: /var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:13:43 np0005603609 nova_compute[221550]: 2026-01-31 08:13:43.606 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:43 np0005603609 nova_compute[221550]: 2026-01-31 08:13:43.606 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:43 np0005603609 nova_compute[221550]: 2026-01-31 08:13:43.606 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:44 np0005603609 nova_compute[221550]: 2026-01-31 08:13:44.076 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:44.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:44 np0005603609 nova_compute[221550]: 2026-01-31 08:13:44.804 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:44.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:13:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/523617742' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:13:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:13:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/523617742' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:13:45 np0005603609 nova_compute[221550]: 2026-01-31 08:13:45.607 221554 DEBUG nova.network.neutron [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Successfully updated port: bec4c251-b190-498a-8020-4390b883634e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:13:45 np0005603609 nova_compute[221550]: 2026-01-31 08:13:45.627 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "refresh_cache-9fac8a1f-1f12-41c8-81b6-59c0bf962590" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:13:45 np0005603609 nova_compute[221550]: 2026-01-31 08:13:45.628 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquired lock "refresh_cache-9fac8a1f-1f12-41c8-81b6-59c0bf962590" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:13:45 np0005603609 nova_compute[221550]: 2026-01-31 08:13:45.628 221554 DEBUG nova.network.neutron [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:13:46 np0005603609 nova_compute[221550]: 2026-01-31 08:13:46.228 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:13:46 np0005603609 nova_compute[221550]: 2026-01-31 08:13:46.503 221554 DEBUG nova.network.neutron [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:13:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:46.546 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:13:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:46.547 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:13:46 np0005603609 nova_compute[221550]: 2026-01-31 08:13:46.547 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:46.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:46.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:46 np0005603609 nova_compute[221550]: 2026-01-31 08:13:46.958 221554 DEBUG nova.compute.manager [req-463afe17-f906-4a8c-8da3-785c34e5e9be req-596784ed-6c91-4c03-95cc-05778acc4f5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received event network-changed-bec4c251-b190-498a-8020-4390b883634e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:46 np0005603609 nova_compute[221550]: 2026-01-31 08:13:46.959 221554 DEBUG nova.compute.manager [req-463afe17-f906-4a8c-8da3-785c34e5e9be req-596784ed-6c91-4c03-95cc-05778acc4f5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Refreshing instance network info cache due to event network-changed-bec4c251-b190-498a-8020-4390b883634e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:13:46 np0005603609 nova_compute[221550]: 2026-01-31 08:13:46.959 221554 DEBUG oslo_concurrency.lockutils [req-463afe17-f906-4a8c-8da3-785c34e5e9be req-596784ed-6c91-4c03-95cc-05778acc4f5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9fac8a1f-1f12-41c8-81b6-59c0bf962590" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:13:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:48.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:48.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.079 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:13:49 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4047450471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.363 221554 DEBUG nova.network.neutron [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Updating instance_info_cache with network_info: [{"id": "bec4c251-b190-498a-8020-4390b883634e", "address": "fa:16:3e:af:63:b4", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec4c251-b1", "ovs_interfaceid": "bec4c251-b190-498a-8020-4390b883634e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.457 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Releasing lock "refresh_cache-9fac8a1f-1f12-41c8-81b6-59c0bf962590" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.458 221554 DEBUG nova.compute.manager [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Instance network_info: |[{"id": "bec4c251-b190-498a-8020-4390b883634e", "address": "fa:16:3e:af:63:b4", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec4c251-b1", "ovs_interfaceid": "bec4c251-b190-498a-8020-4390b883634e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.458 221554 DEBUG oslo_concurrency.lockutils [req-463afe17-f906-4a8c-8da3-785c34e5e9be req-596784ed-6c91-4c03-95cc-05778acc4f5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9fac8a1f-1f12-41c8-81b6-59c0bf962590" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.458 221554 DEBUG nova.network.neutron [req-463afe17-f906-4a8c-8da3-785c34e5e9be req-596784ed-6c91-4c03-95cc-05778acc4f5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Refreshing network info cache for port bec4c251-b190-498a-8020-4390b883634e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.462 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Start _get_guest_xml network_info=[{"id": "bec4c251-b190-498a-8020-4390b883634e", "address": "fa:16:3e:af:63:b4", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec4c251-b1", "ovs_interfaceid": "bec4c251-b190-498a-8020-4390b883634e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.465 221554 WARNING nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.475 221554 DEBUG nova.virt.libvirt.host [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.476 221554 DEBUG nova.virt.libvirt.host [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.479 221554 DEBUG nova.virt.libvirt.host [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.479 221554 DEBUG nova.virt.libvirt.host [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.480 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.480 221554 DEBUG nova.virt.hardware [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.481 221554 DEBUG nova.virt.hardware [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.481 221554 DEBUG nova.virt.hardware [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.481 221554 DEBUG nova.virt.hardware [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.481 221554 DEBUG nova.virt.hardware [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.482 221554 DEBUG nova.virt.hardware [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.482 221554 DEBUG nova.virt.hardware [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.482 221554 DEBUG nova.virt.hardware [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.483 221554 DEBUG nova.virt.hardware [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.483 221554 DEBUG nova.virt.hardware [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.483 221554 DEBUG nova.virt.hardware [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.486 221554 DEBUG oslo_concurrency.processutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.805 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:13:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:13:49 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1537017837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.935 221554 DEBUG oslo_concurrency.processutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.961 221554 DEBUG nova.storage.rbd_utils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:13:49 np0005603609 nova_compute[221550]: 2026-01-31 08:13:49.966 221554 DEBUG oslo_concurrency.processutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:13:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:13:50 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2975001044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.379 221554 DEBUG oslo_concurrency.processutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.380 221554 DEBUG nova.virt.libvirt.vif [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1592665940',display_name='tempest-ServerStableDeviceRescueTest-server-1592665940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1592665940',id=132,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1633c84ea1bf46b080aaafd30bbcf25f',ramdisk_id='',reservation_id='r-apdvib3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-569420416',owner_user_name=
'tempest-ServerStableDeviceRescueTest-569420416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:13:39Z,user_data=None,user_id='d7d9a44201d548aba1e1654e136ddd06',uuid=9fac8a1f-1f12-41c8-81b6-59c0bf962590,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bec4c251-b190-498a-8020-4390b883634e", "address": "fa:16:3e:af:63:b4", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec4c251-b1", "ovs_interfaceid": "bec4c251-b190-498a-8020-4390b883634e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.381 221554 DEBUG nova.network.os_vif_util [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converting VIF {"id": "bec4c251-b190-498a-8020-4390b883634e", "address": "fa:16:3e:af:63:b4", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec4c251-b1", "ovs_interfaceid": "bec4c251-b190-498a-8020-4390b883634e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.381 221554 DEBUG nova.network.os_vif_util [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:63:b4,bridge_name='br-int',has_traffic_filtering=True,id=bec4c251-b190-498a-8020-4390b883634e,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec4c251-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.383 221554 DEBUG nova.objects.instance [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'pci_devices' on Instance uuid 9fac8a1f-1f12-41c8-81b6-59c0bf962590 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.571 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  <uuid>9fac8a1f-1f12-41c8-81b6-59c0bf962590</uuid>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  <name>instance-00000084</name>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-1592665940</nova:name>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:13:49</nova:creationTime>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        <nova:user uuid="d7d9a44201d548aba1e1654e136ddd06">tempest-ServerStableDeviceRescueTest-569420416-project-member</nova:user>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        <nova:project uuid="1633c84ea1bf46b080aaafd30bbcf25f">tempest-ServerStableDeviceRescueTest-569420416</nova:project>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        <nova:port uuid="bec4c251-b190-498a-8020-4390b883634e">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <entry name="serial">9fac8a1f-1f12-41c8-81b6-59c0bf962590</entry>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <entry name="uuid">9fac8a1f-1f12-41c8-81b6-59c0bf962590</entry>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk.config">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:af:63:b4"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <target dev="tapbec4c251-b1"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590/console.log" append="off"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:13:50 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:13:50 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:13:50 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:13:50 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.572 221554 DEBUG nova.compute.manager [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Preparing to wait for external event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.572 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.573 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.573 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.574 221554 DEBUG nova.virt.libvirt.vif [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1592665940',display_name='tempest-ServerStableDeviceRescueTest-server-1592665940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1592665940',id=132,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1633c84ea1bf46b080aaafd30bbcf25f',ramdisk_id='',reservation_id='r-apdvib3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-569420416',owner_user_name='tempest-ServerStableDeviceRescueTest-569420416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:13:39Z,user_data=None,user_id='d7d9a44201d548aba1e1654e136ddd06',uuid=9fac8a1f-1f12-41c8-81b6-59c0bf962590,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bec4c251-b190-498a-8020-4390b883634e", "address": "fa:16:3e:af:63:b4", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec4c251-b1", "ovs_interfaceid": "bec4c251-b190-498a-8020-4390b883634e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:13:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.574 221554 DEBUG nova.network.os_vif_util [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converting VIF {"id": "bec4c251-b190-498a-8020-4390b883634e", "address": "fa:16:3e:af:63:b4", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec4c251-b1", "ovs_interfaceid": "bec4c251-b190-498a-8020-4390b883634e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:13:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:50.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.575 221554 DEBUG nova.network.os_vif_util [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:63:b4,bridge_name='br-int',has_traffic_filtering=True,id=bec4c251-b190-498a-8020-4390b883634e,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec4c251-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.575 221554 DEBUG os_vif [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:63:b4,bridge_name='br-int',has_traffic_filtering=True,id=bec4c251-b190-498a-8020-4390b883634e,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec4c251-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.576 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.577 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.577 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.580 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.581 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbec4c251-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.581 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbec4c251-b1, col_values=(('external_ids', {'iface-id': 'bec4c251-b190-498a-8020-4390b883634e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:63:b4', 'vm-uuid': '9fac8a1f-1f12-41c8-81b6-59c0bf962590'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.625 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:50 np0005603609 NetworkManager[49064]: <info>  [1769847230.6267] manager: (tapbec4c251-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.631 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.633 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:50 np0005603609 nova_compute[221550]: 2026-01-31 08:13:50.634 221554 INFO os_vif [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:63:b4,bridge_name='br-int',has_traffic_filtering=True,id=bec4c251-b190-498a-8020-4390b883634e,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec4c251-b1')#033[00m
Jan 31 03:13:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:50.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:51 np0005603609 nova_compute[221550]: 2026-01-31 08:13:51.287 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:13:51 np0005603609 nova_compute[221550]: 2026-01-31 08:13:51.288 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:13:51 np0005603609 nova_compute[221550]: 2026-01-31 08:13:51.288 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No VIF found with MAC fa:16:3e:af:63:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:13:51 np0005603609 nova_compute[221550]: 2026-01-31 08:13:51.288 221554 INFO nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Using config drive#033[00m
Jan 31 03:13:51 np0005603609 nova_compute[221550]: 2026-01-31 08:13:51.319 221554 DEBUG nova.storage.rbd_utils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:13:52 np0005603609 nova_compute[221550]: 2026-01-31 08:13:52.424 221554 INFO nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Creating config drive at /var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590/disk.config#033[00m
Jan 31 03:13:52 np0005603609 nova_compute[221550]: 2026-01-31 08:13:52.429 221554 DEBUG oslo_concurrency.processutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpvbv9jjbl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:13:52 np0005603609 nova_compute[221550]: 2026-01-31 08:13:52.486 221554 DEBUG nova.network.neutron [req-463afe17-f906-4a8c-8da3-785c34e5e9be req-596784ed-6c91-4c03-95cc-05778acc4f5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Updated VIF entry in instance network info cache for port bec4c251-b190-498a-8020-4390b883634e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:13:52 np0005603609 nova_compute[221550]: 2026-01-31 08:13:52.487 221554 DEBUG nova.network.neutron [req-463afe17-f906-4a8c-8da3-785c34e5e9be req-596784ed-6c91-4c03-95cc-05778acc4f5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Updating instance_info_cache with network_info: [{"id": "bec4c251-b190-498a-8020-4390b883634e", "address": "fa:16:3e:af:63:b4", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec4c251-b1", "ovs_interfaceid": "bec4c251-b190-498a-8020-4390b883634e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:13:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:52 np0005603609 nova_compute[221550]: 2026-01-31 08:13:52.560 221554 DEBUG oslo_concurrency.processutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpvbv9jjbl" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:13:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:52.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:52 np0005603609 nova_compute[221550]: 2026-01-31 08:13:52.759 221554 DEBUG nova.storage.rbd_utils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:13:52 np0005603609 nova_compute[221550]: 2026-01-31 08:13:52.768 221554 DEBUG oslo_concurrency.processutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590/disk.config 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:13:52 np0005603609 nova_compute[221550]: 2026-01-31 08:13:52.799 221554 DEBUG oslo_concurrency.lockutils [req-463afe17-f906-4a8c-8da3-785c34e5e9be req-596784ed-6c91-4c03-95cc-05778acc4f5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9fac8a1f-1f12-41c8-81b6-59c0bf962590" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:13:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:52.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:53.549 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:53 np0005603609 nova_compute[221550]: 2026-01-31 08:13:53.618 221554 DEBUG oslo_concurrency.processutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590/disk.config 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.850s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:13:53 np0005603609 nova_compute[221550]: 2026-01-31 08:13:53.619 221554 INFO nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Deleting local config drive /var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590/disk.config because it was imported into RBD.#033[00m
Jan 31 03:13:53 np0005603609 kernel: tapbec4c251-b1: entered promiscuous mode
Jan 31 03:13:53 np0005603609 NetworkManager[49064]: <info>  [1769847233.6592] manager: (tapbec4c251-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/266)
Jan 31 03:13:53 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:53Z|00547|binding|INFO|Claiming lport bec4c251-b190-498a-8020-4390b883634e for this chassis.
Jan 31 03:13:53 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:53Z|00548|binding|INFO|bec4c251-b190-498a-8020-4390b883634e: Claiming fa:16:3e:af:63:b4 10.100.0.9
Jan 31 03:13:53 np0005603609 nova_compute[221550]: 2026-01-31 08:13:53.658 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:53 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:53Z|00549|binding|INFO|Setting lport bec4c251-b190-498a-8020-4390b883634e ovn-installed in OVS
Jan 31 03:13:53 np0005603609 nova_compute[221550]: 2026-01-31 08:13:53.665 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:53 np0005603609 nova_compute[221550]: 2026-01-31 08:13:53.666 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:53 np0005603609 systemd-udevd[275171]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:13:53 np0005603609 systemd-machined[190912]: New machine qemu-68-instance-00000084.
Jan 31 03:13:53 np0005603609 NetworkManager[49064]: <info>  [1769847233.6896] device (tapbec4c251-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:13:53 np0005603609 NetworkManager[49064]: <info>  [1769847233.6905] device (tapbec4c251-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:13:53 np0005603609 systemd[1]: Started Virtual Machine qemu-68-instance-00000084.
Jan 31 03:13:53 np0005603609 ovn_controller[130359]: 2026-01-31T08:13:53Z|00550|binding|INFO|Setting lport bec4c251-b190-498a-8020-4390b883634e up in Southbound
Jan 31 03:13:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:53.813 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:63:b4 10.100.0.9'], port_security=['fa:16:3e:af:63:b4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9fac8a1f-1f12-41c8-81b6-59c0bf962590', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=bec4c251-b190-498a-8020-4390b883634e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:13:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:53.815 140058 INFO neutron.agent.ovn.metadata.agent [-] Port bec4c251-b190-498a-8020-4390b883634e in datapath 088d6992-6ba6-4719-a977-b3d306740157 bound to our chassis#033[00m
Jan 31 03:13:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:53.817 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157#033[00m
Jan 31 03:13:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:53.827 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4639de66-4d30-463d-9463-8e457496e8ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:53.846 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f6dbf0ec-2005-4c2c-8728-3beed19ac3cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:53.849 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[fc3c7178-8061-402f-a813-5950ebf47a77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:53.867 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[dc26281d-6326-4b1e-a7ca-6468f618f694]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:53.879 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[602f7ad3-e70c-4bd1-a1c7-1c4100045069]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736325, 'reachable_time': 27412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275185, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:53.891 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2f7eeecf-f009-4359-a4d4-859a2f0aa7da]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736331, 'tstamp': 736331}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275186, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736333, 'tstamp': 736333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275186, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:53.893 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:53 np0005603609 nova_compute[221550]: 2026-01-31 08:13:53.941 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:53 np0005603609 nova_compute[221550]: 2026-01-31 08:13:53.942 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:53.942 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:53.943 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:13:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:53.943 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:13:53.943 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:13:54 np0005603609 nova_compute[221550]: 2026-01-31 08:13:54.473 221554 DEBUG nova.compute.manager [req-c8ff926e-5b03-430f-a267-143f172d210f req-6cd3cde8-8626-4015-8111-ad2c55c00eff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:54 np0005603609 nova_compute[221550]: 2026-01-31 08:13:54.474 221554 DEBUG oslo_concurrency.lockutils [req-c8ff926e-5b03-430f-a267-143f172d210f req-6cd3cde8-8626-4015-8111-ad2c55c00eff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:54 np0005603609 nova_compute[221550]: 2026-01-31 08:13:54.474 221554 DEBUG oslo_concurrency.lockutils [req-c8ff926e-5b03-430f-a267-143f172d210f req-6cd3cde8-8626-4015-8111-ad2c55c00eff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:54 np0005603609 nova_compute[221550]: 2026-01-31 08:13:54.474 221554 DEBUG oslo_concurrency.lockutils [req-c8ff926e-5b03-430f-a267-143f172d210f req-6cd3cde8-8626-4015-8111-ad2c55c00eff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:54 np0005603609 nova_compute[221550]: 2026-01-31 08:13:54.475 221554 DEBUG nova.compute.manager [req-c8ff926e-5b03-430f-a267-143f172d210f req-6cd3cde8-8626-4015-8111-ad2c55c00eff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Processing event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:13:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:13:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:54.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:13:54 np0005603609 nova_compute[221550]: 2026-01-31 08:13:54.808 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:54 np0005603609 nova_compute[221550]: 2026-01-31 08:13:54.940 221554 DEBUG nova.compute.manager [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:13:54 np0005603609 nova_compute[221550]: 2026-01-31 08:13:54.941 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847234.9399216, 9fac8a1f-1f12-41c8-81b6-59c0bf962590 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:13:54 np0005603609 nova_compute[221550]: 2026-01-31 08:13:54.941 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] VM Started (Lifecycle Event)#033[00m
Jan 31 03:13:54 np0005603609 nova_compute[221550]: 2026-01-31 08:13:54.944 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:13:54 np0005603609 nova_compute[221550]: 2026-01-31 08:13:54.947 221554 INFO nova.virt.libvirt.driver [-] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Instance spawned successfully.#033[00m
Jan 31 03:13:54 np0005603609 nova_compute[221550]: 2026-01-31 08:13:54.947 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:13:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:54.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:55 np0005603609 nova_compute[221550]: 2026-01-31 08:13:55.036 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:13:55 np0005603609 nova_compute[221550]: 2026-01-31 08:13:55.040 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:13:55 np0005603609 nova_compute[221550]: 2026-01-31 08:13:55.055 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:13:55 np0005603609 nova_compute[221550]: 2026-01-31 08:13:55.055 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:13:55 np0005603609 nova_compute[221550]: 2026-01-31 08:13:55.055 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:13:55 np0005603609 nova_compute[221550]: 2026-01-31 08:13:55.056 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:13:55 np0005603609 nova_compute[221550]: 2026-01-31 08:13:55.056 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:13:55 np0005603609 nova_compute[221550]: 2026-01-31 08:13:55.056 221554 DEBUG nova.virt.libvirt.driver [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:13:55 np0005603609 nova_compute[221550]: 2026-01-31 08:13:55.544 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:13:55 np0005603609 nova_compute[221550]: 2026-01-31 08:13:55.545 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847234.9400833, 9fac8a1f-1f12-41c8-81b6-59c0bf962590 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:13:55 np0005603609 nova_compute[221550]: 2026-01-31 08:13:55.545 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:13:55 np0005603609 nova_compute[221550]: 2026-01-31 08:13:55.626 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:13:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:13:55 np0005603609 nova_compute[221550]: 2026-01-31 08:13:55.942 221554 INFO nova.compute.manager [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Took 15.93 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:13:55 np0005603609 nova_compute[221550]: 2026-01-31 08:13:55.942 221554 DEBUG nova.compute.manager [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:13:56 np0005603609 nova_compute[221550]: 2026-01-31 08:13:56.138 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:13:56 np0005603609 nova_compute[221550]: 2026-01-31 08:13:56.145 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847234.9433875, 9fac8a1f-1f12-41c8-81b6-59c0bf962590 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:13:56 np0005603609 nova_compute[221550]: 2026-01-31 08:13:56.145 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:13:56 np0005603609 nova_compute[221550]: 2026-01-31 08:13:56.540 221554 INFO nova.compute.manager [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Took 21.81 seconds to build instance.#033[00m
Jan 31 03:13:56 np0005603609 nova_compute[221550]: 2026-01-31 08:13:56.548 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:13:56 np0005603609 nova_compute[221550]: 2026-01-31 08:13:56.554 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:13:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:13:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:56.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:13:56 np0005603609 nova_compute[221550]: 2026-01-31 08:13:56.664 221554 DEBUG oslo_concurrency.lockutils [None req-addd104a-6d0f-4df9-806e-d1072bc482f8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:56 np0005603609 nova_compute[221550]: 2026-01-31 08:13:56.745 221554 DEBUG nova.compute.manager [req-41bc6ba6-8043-4927-b16e-d165374b1ce5 req-31d81666-e2a5-4165-87fd-9c0e6bbb978d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:56 np0005603609 nova_compute[221550]: 2026-01-31 08:13:56.746 221554 DEBUG oslo_concurrency.lockutils [req-41bc6ba6-8043-4927-b16e-d165374b1ce5 req-31d81666-e2a5-4165-87fd-9c0e6bbb978d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:56 np0005603609 nova_compute[221550]: 2026-01-31 08:13:56.746 221554 DEBUG oslo_concurrency.lockutils [req-41bc6ba6-8043-4927-b16e-d165374b1ce5 req-31d81666-e2a5-4165-87fd-9c0e6bbb978d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:56 np0005603609 nova_compute[221550]: 2026-01-31 08:13:56.747 221554 DEBUG oslo_concurrency.lockutils [req-41bc6ba6-8043-4927-b16e-d165374b1ce5 req-31d81666-e2a5-4165-87fd-9c0e6bbb978d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:56 np0005603609 nova_compute[221550]: 2026-01-31 08:13:56.747 221554 DEBUG nova.compute.manager [req-41bc6ba6-8043-4927-b16e-d165374b1ce5 req-31d81666-e2a5-4165-87fd-9c0e6bbb978d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] No waiting events found dispatching network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:13:56 np0005603609 nova_compute[221550]: 2026-01-31 08:13:56.747 221554 WARNING nova.compute.manager [req-41bc6ba6-8043-4927-b16e-d165374b1ce5 req-31d81666-e2a5-4165-87fd-9c0e6bbb978d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received unexpected event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e for instance with vm_state active and task_state None.#033[00m
Jan 31 03:13:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:13:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:56.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:13:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:58.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:13:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:58.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:59 np0005603609 nova_compute[221550]: 2026-01-31 08:13:59.809 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:00 np0005603609 nova_compute[221550]: 2026-01-31 08:14:00.247 221554 DEBUG nova.compute.manager [None req-d1294489-2f42-4fe5-9a27-3ebb1671bde6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:14:00 np0005603609 nova_compute[221550]: 2026-01-31 08:14:00.414 221554 INFO nova.compute.manager [None req-d1294489-2f42-4fe5-9a27-3ebb1671bde6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] instance snapshotting#033[00m
Jan 31 03:14:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:14:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:00.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:14:00 np0005603609 nova_compute[221550]: 2026-01-31 08:14:00.628 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:00 np0005603609 nova_compute[221550]: 2026-01-31 08:14:00.910 221554 INFO nova.virt.libvirt.driver [None req-d1294489-2f42-4fe5-9a27-3ebb1671bde6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Beginning live snapshot process#033[00m
Jan 31 03:14:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:00.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:01 np0005603609 nova_compute[221550]: 2026-01-31 08:14:01.526 221554 DEBUG nova.virt.libvirt.imagebackend [None req-d1294489-2f42-4fe5-9a27-3ebb1671bde6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:14:01 np0005603609 nova_compute[221550]: 2026-01-31 08:14:01.840 221554 DEBUG nova.storage.rbd_utils [None req-d1294489-2f42-4fe5-9a27-3ebb1671bde6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] creating snapshot(0edf3353278245f6a5266da3b89c8e03) on rbd image(9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:14:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:14:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:02.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:14:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:02.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e294 e294: 3 total, 3 up, 3 in
Jan 31 03:14:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:04.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:04 np0005603609 nova_compute[221550]: 2026-01-31 08:14:04.810 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:04.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:05 np0005603609 nova_compute[221550]: 2026-01-31 08:14:05.630 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:05 np0005603609 nova_compute[221550]: 2026-01-31 08:14:05.721 221554 DEBUG nova.storage.rbd_utils [None req-d1294489-2f42-4fe5-9a27-3ebb1671bde6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] cloning vms/9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk@0edf3353278245f6a5266da3b89c8e03 to images/5df2a010-0131-4bd4-8096-45b6ac756af0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:14:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:06.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:14:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:06.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:14:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:07.507 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:07.508 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:07.509 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:08.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:14:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:08.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:14:09 np0005603609 nova_compute[221550]: 2026-01-31 08:14:09.812 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:10 np0005603609 nova_compute[221550]: 2026-01-31 08:14:10.305 221554 DEBUG nova.storage.rbd_utils [None req-d1294489-2f42-4fe5-9a27-3ebb1671bde6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] flattening images/5df2a010-0131-4bd4-8096-45b6ac756af0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:14:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:10.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:10 np0005603609 nova_compute[221550]: 2026-01-31 08:14:10.632 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:11.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e295 e295: 3 total, 3 up, 3 in
Jan 31 03:14:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:12.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:14:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:13.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:14:13 np0005603609 nova_compute[221550]: 2026-01-31 08:14:13.664 221554 DEBUG nova.storage.rbd_utils [None req-d1294489-2f42-4fe5-9a27-3ebb1671bde6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] removing snapshot(0edf3353278245f6a5266da3b89c8e03) on rbd image(9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:14:14 np0005603609 podman[275405]: 2026-01-31 08:14:14.182769716 +0000 UTC m=+0.064689946 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 03:14:14 np0005603609 podman[275404]: 2026-01-31 08:14:14.189897617 +0000 UTC m=+0.073100027 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 31 03:14:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:14.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:14 np0005603609 nova_compute[221550]: 2026-01-31 08:14:14.814 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:15.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:15 np0005603609 ovn_controller[130359]: 2026-01-31T08:14:15Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:af:63:b4 10.100.0.9
Jan 31 03:14:15 np0005603609 ovn_controller[130359]: 2026-01-31T08:14:15Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:63:b4 10.100.0.9
Jan 31 03:14:15 np0005603609 nova_compute[221550]: 2026-01-31 08:14:15.634 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e296 e296: 3 total, 3 up, 3 in
Jan 31 03:14:15 np0005603609 nova_compute[221550]: 2026-01-31 08:14:15.746 221554 DEBUG nova.storage.rbd_utils [None req-d1294489-2f42-4fe5-9a27-3ebb1671bde6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] creating snapshot(snap) on rbd image(5df2a010-0131-4bd4-8096-45b6ac756af0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:14:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:16.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:14:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:17.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:14:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e297 e297: 3 total, 3 up, 3 in
Jan 31 03:14:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:18.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:19.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e298 e298: 3 total, 3 up, 3 in
Jan 31 03:14:19 np0005603609 nova_compute[221550]: 2026-01-31 08:14:19.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:19 np0005603609 nova_compute[221550]: 2026-01-31 08:14:19.816 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:20.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:20 np0005603609 nova_compute[221550]: 2026-01-31 08:14:20.636 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:21.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:14:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:22.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:14:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e299 e299: 3 total, 3 up, 3 in
Jan 31 03:14:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:23.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:14:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:24.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:14:24 np0005603609 nova_compute[221550]: 2026-01-31 08:14:24.819 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:25.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:25 np0005603609 nova_compute[221550]: 2026-01-31 08:14:25.638 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e300 e300: 3 total, 3 up, 3 in
Jan 31 03:14:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:14:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:26.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:14:26 np0005603609 nova_compute[221550]: 2026-01-31 08:14:26.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:14:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:27.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:14:27 np0005603609 nova_compute[221550]: 2026-01-31 08:14:27.260 221554 INFO nova.virt.libvirt.driver [None req-d1294489-2f42-4fe5-9a27-3ebb1671bde6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Snapshot image upload complete#033[00m
Jan 31 03:14:27 np0005603609 nova_compute[221550]: 2026-01-31 08:14:27.260 221554 INFO nova.compute.manager [None req-d1294489-2f42-4fe5-9a27-3ebb1671bde6 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Took 26.84 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:14:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:28.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:29.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:29 np0005603609 nova_compute[221550]: 2026-01-31 08:14:29.822 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:30 np0005603609 nova_compute[221550]: 2026-01-31 08:14:30.641 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:30.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:30 np0005603609 nova_compute[221550]: 2026-01-31 08:14:30.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:30 np0005603609 nova_compute[221550]: 2026-01-31 08:14:30.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:14:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:31.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:14:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:31.186 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:14:31 np0005603609 nova_compute[221550]: 2026-01-31 08:14:31.187 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:31.188 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:14:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e301 e301: 3 total, 3 up, 3 in
Jan 31 03:14:32 np0005603609 nova_compute[221550]: 2026-01-31 08:14:32.321 221554 INFO nova.compute.manager [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Rescuing#033[00m
Jan 31 03:14:32 np0005603609 nova_compute[221550]: 2026-01-31 08:14:32.322 221554 DEBUG oslo_concurrency.lockutils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "refresh_cache-9fac8a1f-1f12-41c8-81b6-59c0bf962590" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:14:32 np0005603609 nova_compute[221550]: 2026-01-31 08:14:32.323 221554 DEBUG oslo_concurrency.lockutils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquired lock "refresh_cache-9fac8a1f-1f12-41c8-81b6-59c0bf962590" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:14:32 np0005603609 nova_compute[221550]: 2026-01-31 08:14:32.323 221554 DEBUG nova.network.neutron [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:14:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:32.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:32 np0005603609 nova_compute[221550]: 2026-01-31 08:14:32.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:32 np0005603609 nova_compute[221550]: 2026-01-31 08:14:32.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:14:32 np0005603609 nova_compute[221550]: 2026-01-31 08:14:32.927 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:14:32 np0005603609 nova_compute[221550]: 2026-01-31 08:14:32.927 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:14:32 np0005603609 nova_compute[221550]: 2026-01-31 08:14:32.928 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:14:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:33.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:14:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:34.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:14:34 np0005603609 nova_compute[221550]: 2026-01-31 08:14:34.823 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:34 np0005603609 nova_compute[221550]: 2026-01-31 08:14:34.888 221554 DEBUG nova.network.neutron [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Updating instance_info_cache with network_info: [{"id": "bec4c251-b190-498a-8020-4390b883634e", "address": "fa:16:3e:af:63:b4", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec4c251-b1", "ovs_interfaceid": "bec4c251-b190-498a-8020-4390b883634e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:14:34 np0005603609 nova_compute[221550]: 2026-01-31 08:14:34.909 221554 DEBUG oslo_concurrency.lockutils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Releasing lock "refresh_cache-9fac8a1f-1f12-41c8-81b6-59c0bf962590" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:14:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:35.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.242 221554 DEBUG nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.376 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Updating instance_info_cache with network_info: [{"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.393 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.393 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.394 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.395 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.395 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.395 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.421 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.421 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.422 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.422 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.423 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.643 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:14:35 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4072180426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.831 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.912 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.912 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.915 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.915 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.918 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:14:35 np0005603609 nova_compute[221550]: 2026-01-31 08:14:35.919 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:14:36 np0005603609 nova_compute[221550]: 2026-01-31 08:14:36.081 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:14:36 np0005603609 nova_compute[221550]: 2026-01-31 08:14:36.082 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3854MB free_disk=20.54543685913086GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:14:36 np0005603609 nova_compute[221550]: 2026-01-31 08:14:36.082 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:36 np0005603609 nova_compute[221550]: 2026-01-31 08:14:36.083 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:36 np0005603609 nova_compute[221550]: 2026-01-31 08:14:36.202 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 981eb990-ec0c-4673-98e7-102afbe0bb51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:14:36 np0005603609 nova_compute[221550]: 2026-01-31 08:14:36.203 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 518d36e7-b77a-415d-bd48-f53e637ed0d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:14:36 np0005603609 nova_compute[221550]: 2026-01-31 08:14:36.203 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 9fac8a1f-1f12-41c8-81b6-59c0bf962590 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:14:36 np0005603609 nova_compute[221550]: 2026-01-31 08:14:36.203 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:14:36 np0005603609 nova_compute[221550]: 2026-01-31 08:14:36.203 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:14:36 np0005603609 nova_compute[221550]: 2026-01-31 08:14:36.534 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:14:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:36.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:14:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:14:36 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1717633416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:14:36 np0005603609 nova_compute[221550]: 2026-01-31 08:14:36.975 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:36 np0005603609 nova_compute[221550]: 2026-01-31 08:14:36.980 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:14:37 np0005603609 nova_compute[221550]: 2026-01-31 08:14:37.018 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:14:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:37.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:37 np0005603609 nova_compute[221550]: 2026-01-31 08:14:37.048 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:14:37 np0005603609 nova_compute[221550]: 2026-01-31 08:14:37.049 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:38.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:39.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:39 np0005603609 nova_compute[221550]: 2026-01-31 08:14:39.045 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:39.190 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:39 np0005603609 nova_compute[221550]: 2026-01-31 08:14:39.825 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:40.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:40 np0005603609 nova_compute[221550]: 2026-01-31 08:14:40.682 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:14:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:41.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:14:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:42.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:43.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:44.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:44 np0005603609 nova_compute[221550]: 2026-01-31 08:14:44.827 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:14:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:45.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:14:45 np0005603609 podman[275516]: 2026-01-31 08:14:45.161885623 +0000 UTC m=+0.046555689 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 03:14:45 np0005603609 podman[275515]: 2026-01-31 08:14:45.219135008 +0000 UTC m=+0.104004690 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 03:14:45 np0005603609 nova_compute[221550]: 2026-01-31 08:14:45.286 221554 DEBUG nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:14:45 np0005603609 nova_compute[221550]: 2026-01-31 08:14:45.684 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:14:46.531733) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847286531796, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1125, "num_deletes": 254, "total_data_size": 2362460, "memory_usage": 2402888, "flush_reason": "Manual Compaction"}
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847286654390, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 1036934, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55175, "largest_seqno": 56295, "table_properties": {"data_size": 1032515, "index_size": 1943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11640, "raw_average_key_size": 21, "raw_value_size": 1023083, "raw_average_value_size": 1891, "num_data_blocks": 84, "num_entries": 541, "num_filter_entries": 541, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847207, "oldest_key_time": 1769847207, "file_creation_time": 1769847286, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 122752 microseconds, and 3320 cpu microseconds.
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:14:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:46.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:14:46.654476) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 1036934 bytes OK
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:14:46.654512) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:14:46.916240) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:14:46.916322) EVENT_LOG_v1 {"time_micros": 1769847286916304, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:14:46.916363) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 2356887, prev total WAL file size 2356887, number of live WAL files 2.
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:14:46.917805) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373538' seq:72057594037927935, type:22 .. '6D6772737461740032303130' seq:0, type:0; will stop at (end)
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(1012KB)], [108(12MB)]
Jan 31 03:14:46 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847286917883, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 13818421, "oldest_snapshot_seqno": -1}
Jan 31 03:14:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:47.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 7973 keys, 10453317 bytes, temperature: kUnknown
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847287437344, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 10453317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10402217, "index_size": 30060, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19973, "raw_key_size": 207240, "raw_average_key_size": 25, "raw_value_size": 10262306, "raw_average_value_size": 1287, "num_data_blocks": 1176, "num_entries": 7973, "num_filter_entries": 7973, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769847286, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:14:47.437623) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10453317 bytes
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:14:47.498364) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 26.6 rd, 20.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 12.2 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(23.4) write-amplify(10.1) OK, records in: 8464, records dropped: 491 output_compression: NoCompression
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:14:47.498402) EVENT_LOG_v1 {"time_micros": 1769847287498390, "job": 68, "event": "compaction_finished", "compaction_time_micros": 519563, "compaction_time_cpu_micros": 19905, "output_level": 6, "num_output_files": 1, "total_output_size": 10453317, "num_input_records": 8464, "num_output_records": 7973, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847287498742, "job": 68, "event": "table_file_deletion", "file_number": 110}
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847287499870, "job": 68, "event": "table_file_deletion", "file_number": 108}
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:14:46.917584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:14:47.499901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:14:47.499905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:14:47.499907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:14:47.499909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:14:47.499911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:14:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:47 np0005603609 nova_compute[221550]: 2026-01-31 08:14:47.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:14:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:48.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:14:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e302 e302: 3 total, 3 up, 3 in
Jan 31 03:14:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:49.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:49 np0005603609 nova_compute[221550]: 2026-01-31 08:14:49.834 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:50.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:50 np0005603609 nova_compute[221550]: 2026-01-31 08:14:50.686 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e303 e303: 3 total, 3 up, 3 in
Jan 31 03:14:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:51.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e304 e304: 3 total, 3 up, 3 in
Jan 31 03:14:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:52.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:53.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:54 np0005603609 kernel: tapbec4c251-b1 (unregistering): left promiscuous mode
Jan 31 03:14:54 np0005603609 NetworkManager[49064]: <info>  [1769847294.0079] device (tapbec4c251-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.017 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:54 np0005603609 ovn_controller[130359]: 2026-01-31T08:14:54Z|00551|binding|INFO|Releasing lport bec4c251-b190-498a-8020-4390b883634e from this chassis (sb_readonly=0)
Jan 31 03:14:54 np0005603609 ovn_controller[130359]: 2026-01-31T08:14:54Z|00552|binding|INFO|Setting lport bec4c251-b190-498a-8020-4390b883634e down in Southbound
Jan 31 03:14:54 np0005603609 ovn_controller[130359]: 2026-01-31T08:14:54Z|00553|binding|INFO|Removing iface tapbec4c251-b1 ovn-installed in OVS
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.019 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:54.026 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:63:b4 10.100.0.9'], port_security=['fa:16:3e:af:63:b4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9fac8a1f-1f12-41c8-81b6-59c0bf962590', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=bec4c251-b190-498a-8020-4390b883634e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.027 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:54.028 140058 INFO neutron.agent.ovn.metadata.agent [-] Port bec4c251-b190-498a-8020-4390b883634e in datapath 088d6992-6ba6-4719-a977-b3d306740157 unbound from our chassis#033[00m
Jan 31 03:14:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:54.030 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157#033[00m
Jan 31 03:14:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:54.042 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[85c6cf7c-4f82-43a0-b752-57a53fa40fd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:54 np0005603609 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000084.scope: Deactivated successfully.
Jan 31 03:14:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:54.062 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[6670991c-9e20-40d0-b31f-680130c55fae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:54 np0005603609 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000084.scope: Consumed 15.717s CPU time.
Jan 31 03:14:54 np0005603609 systemd-machined[190912]: Machine qemu-68-instance-00000084 terminated.
Jan 31 03:14:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:54.067 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f8fb1fd0-873a-455b-9a9b-cffad463007f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:54.083 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[03cb7511-6bb7-4a7f-8fb5-2ea499ff6d77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:54.094 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4160ba14-ad32-4994-b561-6efb1df11011]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 17, 'rx_bytes': 868, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 17, 'rx_bytes': 868, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736325, 'reachable_time': 27412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275572, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:54.106 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[44a8a269-3cfa-4de3-9645-a4af19655726]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736331, 'tstamp': 736331}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275573, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736333, 'tstamp': 736333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275573, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:54.108 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.110 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.113 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:54.114 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:54.114 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:14:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:54.115 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:14:54.115 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.320 221554 INFO nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Instance shutdown successfully after 19 seconds.#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.324 221554 INFO nova.virt.libvirt.driver [-] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Instance destroyed successfully.#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.325 221554 DEBUG nova.objects.instance [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'numa_topology' on Instance uuid 9fac8a1f-1f12-41c8-81b6-59c0bf962590 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.343 221554 INFO nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Attempting a stable device rescue#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.672 221554 DEBUG nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.675 221554 DEBUG nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.675 221554 INFO nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Creating image(s)#033[00m
Jan 31 03:14:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:14:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:54.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.698 221554 DEBUG nova.storage.rbd_utils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.701 221554 DEBUG nova.objects.instance [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9fac8a1f-1f12-41c8-81b6-59c0bf962590 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.704 221554 DEBUG nova.compute.manager [req-28f274cb-b04e-4ee4-a73a-4cd69e835bcc req-fe9c8129-020b-4243-a640-e341bd8a4319 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received event network-vif-unplugged-bec4c251-b190-498a-8020-4390b883634e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.705 221554 DEBUG oslo_concurrency.lockutils [req-28f274cb-b04e-4ee4-a73a-4cd69e835bcc req-fe9c8129-020b-4243-a640-e341bd8a4319 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.705 221554 DEBUG oslo_concurrency.lockutils [req-28f274cb-b04e-4ee4-a73a-4cd69e835bcc req-fe9c8129-020b-4243-a640-e341bd8a4319 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.705 221554 DEBUG oslo_concurrency.lockutils [req-28f274cb-b04e-4ee4-a73a-4cd69e835bcc req-fe9c8129-020b-4243-a640-e341bd8a4319 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.706 221554 DEBUG nova.compute.manager [req-28f274cb-b04e-4ee4-a73a-4cd69e835bcc req-fe9c8129-020b-4243-a640-e341bd8a4319 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] No waiting events found dispatching network-vif-unplugged-bec4c251-b190-498a-8020-4390b883634e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.706 221554 WARNING nova.compute.manager [req-28f274cb-b04e-4ee4-a73a-4cd69e835bcc req-fe9c8129-020b-4243-a640-e341bd8a4319 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received unexpected event network-vif-unplugged-bec4c251-b190-498a-8020-4390b883634e for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.741 221554 DEBUG nova.storage.rbd_utils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.764 221554 DEBUG nova.storage.rbd_utils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.767 221554 DEBUG oslo_concurrency.lockutils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "1b7ff575ad1c122978fa48be65462e692278f5f4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.768 221554 DEBUG oslo_concurrency.lockutils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "1b7ff575ad1c122978fa48be65462e692278f5f4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:54 np0005603609 nova_compute[221550]: 2026-01-31 08:14:54.836 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:14:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:55.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:14:55 np0005603609 nova_compute[221550]: 2026-01-31 08:14:55.305 221554 DEBUG nova.virt.libvirt.imagebackend [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/5df2a010-0131-4bd4-8096-45b6ac756af0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/5df2a010-0131-4bd4-8096-45b6ac756af0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 31 03:14:55 np0005603609 nova_compute[221550]: 2026-01-31 08:14:55.371 221554 DEBUG nova.virt.libvirt.imagebackend [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Selected location: {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/5df2a010-0131-4bd4-8096-45b6ac756af0/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 31 03:14:55 np0005603609 nova_compute[221550]: 2026-01-31 08:14:55.371 221554 DEBUG nova.storage.rbd_utils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] cloning images/5df2a010-0131-4bd4-8096-45b6ac756af0@snap to None/9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:14:55 np0005603609 nova_compute[221550]: 2026-01-31 08:14:55.688 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.019 221554 DEBUG oslo_concurrency.lockutils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "1b7ff575ad1c122978fa48be65462e692278f5f4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.069 221554 DEBUG nova.objects.instance [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'migration_context' on Instance uuid 9fac8a1f-1f12-41c8-81b6-59c0bf962590 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.135 221554 DEBUG nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.138 221554 DEBUG nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Start _get_guest_xml network_info=[{"id": "bec4c251-b190-498a-8020-4390b883634e", "address": "fa:16:3e:af:63:b4", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "vif_mac": "fa:16:3e:af:63:b4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec4c251-b1", "ovs_interfaceid": "bec4c251-b190-498a-8020-4390b883634e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '5df2a010-0131-4bd4-8096-45b6ac756af0', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.139 221554 DEBUG nova.objects.instance [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'resources' on Instance uuid 9fac8a1f-1f12-41c8-81b6-59c0bf962590 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.178 221554 WARNING nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.185 221554 DEBUG nova.virt.libvirt.host [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.186 221554 DEBUG nova.virt.libvirt.host [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.189 221554 DEBUG nova.virt.libvirt.host [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.189 221554 DEBUG nova.virt.libvirt.host [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.191 221554 DEBUG nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.191 221554 DEBUG nova.virt.hardware [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.191 221554 DEBUG nova.virt.hardware [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.192 221554 DEBUG nova.virt.hardware [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.192 221554 DEBUG nova.virt.hardware [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.192 221554 DEBUG nova.virt.hardware [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.192 221554 DEBUG nova.virt.hardware [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.192 221554 DEBUG nova.virt.hardware [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.192 221554 DEBUG nova.virt.hardware [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.193 221554 DEBUG nova.virt.hardware [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.193 221554 DEBUG nova.virt.hardware [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.193 221554 DEBUG nova.virt.hardware [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.193 221554 DEBUG nova.objects.instance [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9fac8a1f-1f12-41c8-81b6-59c0bf962590 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.446 221554 DEBUG oslo_concurrency.processutils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:56.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:14:56 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1135928742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.899 221554 DEBUG oslo_concurrency.processutils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:56 np0005603609 nova_compute[221550]: 2026-01-31 08:14:56.937 221554 DEBUG oslo_concurrency.processutils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:57.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:57 np0005603609 nova_compute[221550]: 2026-01-31 08:14:57.131 221554 DEBUG nova.compute.manager [req-bd8b3d78-2c82-4669-8cde-1ca9222393cb req-b437f23f-d8f9-4b6d-9081-b9cd71a7189f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:57 np0005603609 nova_compute[221550]: 2026-01-31 08:14:57.132 221554 DEBUG oslo_concurrency.lockutils [req-bd8b3d78-2c82-4669-8cde-1ca9222393cb req-b437f23f-d8f9-4b6d-9081-b9cd71a7189f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:57 np0005603609 nova_compute[221550]: 2026-01-31 08:14:57.132 221554 DEBUG oslo_concurrency.lockutils [req-bd8b3d78-2c82-4669-8cde-1ca9222393cb req-b437f23f-d8f9-4b6d-9081-b9cd71a7189f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:57 np0005603609 nova_compute[221550]: 2026-01-31 08:14:57.132 221554 DEBUG oslo_concurrency.lockutils [req-bd8b3d78-2c82-4669-8cde-1ca9222393cb req-b437f23f-d8f9-4b6d-9081-b9cd71a7189f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:57 np0005603609 nova_compute[221550]: 2026-01-31 08:14:57.133 221554 DEBUG nova.compute.manager [req-bd8b3d78-2c82-4669-8cde-1ca9222393cb req-b437f23f-d8f9-4b6d-9081-b9cd71a7189f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] No waiting events found dispatching network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:14:57 np0005603609 nova_compute[221550]: 2026-01-31 08:14:57.133 221554 WARNING nova.compute.manager [req-bd8b3d78-2c82-4669-8cde-1ca9222393cb req-b437f23f-d8f9-4b6d-9081-b9cd71a7189f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received unexpected event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:14:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:14:57 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3915545150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:14:57 np0005603609 nova_compute[221550]: 2026-01-31 08:14:57.474 221554 DEBUG oslo_concurrency.processutils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:57 np0005603609 nova_compute[221550]: 2026-01-31 08:14:57.475 221554 DEBUG oslo_concurrency.processutils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:14:57 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/410555706' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:14:57 np0005603609 nova_compute[221550]: 2026-01-31 08:14:57.897 221554 DEBUG oslo_concurrency.processutils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:57 np0005603609 nova_compute[221550]: 2026-01-31 08:14:57.899 221554 DEBUG nova.virt.libvirt.vif [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1592665940',display_name='tempest-ServerStableDeviceRescueTest-server-1592665940',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1592665940',id=132,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:13:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1633c84ea1bf46b080aaafd30bbcf25f',ramdisk_id='',reservation_id='r-apdvib3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-569420416',owner_user_name='tempest-ServerStableDeviceRescueTest-569420416-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:27Z,user_data=None,user_id='d7d9a44201d548aba1e1654e136ddd06',uuid=9fac8a1f-1f12-41c8-81b6-59c0bf962590,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bec4c251-b190-498a-8020-4390b883634e", "address": "fa:16:3e:af:63:b4", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "vif_mac": "fa:16:3e:af:63:b4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec4c251-b1", "ovs_interfaceid": "bec4c251-b190-498a-8020-4390b883634e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:14:57 np0005603609 nova_compute[221550]: 2026-01-31 08:14:57.900 221554 DEBUG nova.network.os_vif_util [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converting VIF {"id": "bec4c251-b190-498a-8020-4390b883634e", "address": "fa:16:3e:af:63:b4", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "vif_mac": "fa:16:3e:af:63:b4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec4c251-b1", "ovs_interfaceid": "bec4c251-b190-498a-8020-4390b883634e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:14:57 np0005603609 nova_compute[221550]: 2026-01-31 08:14:57.901 221554 DEBUG nova.network.os_vif_util [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:63:b4,bridge_name='br-int',has_traffic_filtering=True,id=bec4c251-b190-498a-8020-4390b883634e,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec4c251-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:14:57 np0005603609 nova_compute[221550]: 2026-01-31 08:14:57.903 221554 DEBUG nova.objects.instance [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'pci_devices' on Instance uuid 9fac8a1f-1f12-41c8-81b6-59c0bf962590 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:57 np0005603609 nova_compute[221550]: 2026-01-31 08:14:57.932 221554 DEBUG nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  <uuid>9fac8a1f-1f12-41c8-81b6-59c0bf962590</uuid>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  <name>instance-00000084</name>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-1592665940</nova:name>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:14:56</nova:creationTime>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <nova:user uuid="d7d9a44201d548aba1e1654e136ddd06">tempest-ServerStableDeviceRescueTest-569420416-project-member</nova:user>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <nova:project uuid="1633c84ea1bf46b080aaafd30bbcf25f">tempest-ServerStableDeviceRescueTest-569420416</nova:project>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <nova:port uuid="bec4c251-b190-498a-8020-4390b883634e">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <entry name="serial">9fac8a1f-1f12-41c8-81b6-59c0bf962590</entry>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <entry name="uuid">9fac8a1f-1f12-41c8-81b6-59c0bf962590</entry>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk.config">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk.rescue">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <target dev="vdb" bus="virtio"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <boot order="1"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:af:63:b4"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <target dev="tapbec4c251-b1"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590/console.log" append="off"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:14:57 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:14:57 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:14:57 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:14:57 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:14:57 np0005603609 nova_compute[221550]: 2026-01-31 08:14:57.942 221554 INFO nova.virt.libvirt.driver [-] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Instance destroyed successfully.#033[00m
Jan 31 03:14:58 np0005603609 nova_compute[221550]: 2026-01-31 08:14:58.303 221554 DEBUG nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:14:58 np0005603609 nova_compute[221550]: 2026-01-31 08:14:58.304 221554 DEBUG nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:14:58 np0005603609 nova_compute[221550]: 2026-01-31 08:14:58.305 221554 DEBUG nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:14:58 np0005603609 nova_compute[221550]: 2026-01-31 08:14:58.306 221554 DEBUG nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No VIF found with MAC fa:16:3e:af:63:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:14:58 np0005603609 nova_compute[221550]: 2026-01-31 08:14:58.307 221554 INFO nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Using config drive#033[00m
Jan 31 03:14:58 np0005603609 nova_compute[221550]: 2026-01-31 08:14:58.443 221554 DEBUG nova.storage.rbd_utils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:14:58 np0005603609 nova_compute[221550]: 2026-01-31 08:14:58.520 221554 DEBUG nova.objects.instance [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9fac8a1f-1f12-41c8-81b6-59c0bf962590 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:58 np0005603609 nova_compute[221550]: 2026-01-31 08:14:58.658 221554 DEBUG nova.objects.instance [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'keypairs' on Instance uuid 9fac8a1f-1f12-41c8-81b6-59c0bf962590 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:14:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:58.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:14:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:14:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:14:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:59.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:14:59 np0005603609 nova_compute[221550]: 2026-01-31 08:14:59.857 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:15:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:15:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:15:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:15:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:15:00 np0005603609 nova_compute[221550]: 2026-01-31 08:15:00.504 221554 INFO nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Creating config drive at /var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590/disk.config.rescue#033[00m
Jan 31 03:15:00 np0005603609 nova_compute[221550]: 2026-01-31 08:15:00.510 221554 DEBUG oslo_concurrency.processutils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpqp90yovu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:15:00 np0005603609 nova_compute[221550]: 2026-01-31 08:15:00.641 221554 DEBUG oslo_concurrency.processutils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpqp90yovu" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:15:00 np0005603609 nova_compute[221550]: 2026-01-31 08:15:00.672 221554 DEBUG nova.storage.rbd_utils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:15:00 np0005603609 nova_compute[221550]: 2026-01-31 08:15:00.678 221554 DEBUG oslo_concurrency.processutils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590/disk.config.rescue 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:15:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:00.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:00 np0005603609 nova_compute[221550]: 2026-01-31 08:15:00.706 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:00 np0005603609 nova_compute[221550]: 2026-01-31 08:15:00.904 221554 DEBUG oslo_concurrency.processutils [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590/disk.config.rescue 9fac8a1f-1f12-41c8-81b6-59c0bf962590_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:15:00 np0005603609 nova_compute[221550]: 2026-01-31 08:15:00.905 221554 INFO nova.virt.libvirt.driver [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Deleting local config drive /var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590/disk.config.rescue because it was imported into RBD.#033[00m
Jan 31 03:15:00 np0005603609 kernel: tapbec4c251-b1: entered promiscuous mode
Jan 31 03:15:00 np0005603609 NetworkManager[49064]: <info>  [1769847300.9549] manager: (tapbec4c251-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/267)
Jan 31 03:15:00 np0005603609 ovn_controller[130359]: 2026-01-31T08:15:00Z|00554|binding|INFO|Claiming lport bec4c251-b190-498a-8020-4390b883634e for this chassis.
Jan 31 03:15:00 np0005603609 nova_compute[221550]: 2026-01-31 08:15:00.974 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:00 np0005603609 ovn_controller[130359]: 2026-01-31T08:15:00Z|00555|binding|INFO|bec4c251-b190-498a-8020-4390b883634e: Claiming fa:16:3e:af:63:b4 10.100.0.9
Jan 31 03:15:00 np0005603609 systemd-udevd[276010]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:15:00 np0005603609 ovn_controller[130359]: 2026-01-31T08:15:00Z|00556|binding|INFO|Setting lport bec4c251-b190-498a-8020-4390b883634e ovn-installed in OVS
Jan 31 03:15:00 np0005603609 nova_compute[221550]: 2026-01-31 08:15:00.984 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:00 np0005603609 NetworkManager[49064]: <info>  [1769847300.9908] device (tapbec4c251-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:15:00 np0005603609 NetworkManager[49064]: <info>  [1769847300.9917] device (tapbec4c251-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:15:01 np0005603609 ovn_controller[130359]: 2026-01-31T08:15:01Z|00557|binding|INFO|Setting lport bec4c251-b190-498a-8020-4390b883634e up in Southbound
Jan 31 03:15:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:01.006 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:63:b4 10.100.0.9'], port_security=['fa:16:3e:af:63:b4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9fac8a1f-1f12-41c8-81b6-59c0bf962590', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=bec4c251-b190-498a-8020-4390b883634e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:15:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:01.007 140058 INFO neutron.agent.ovn.metadata.agent [-] Port bec4c251-b190-498a-8020-4390b883634e in datapath 088d6992-6ba6-4719-a977-b3d306740157 bound to our chassis#033[00m
Jan 31 03:15:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:01.009 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157#033[00m
Jan 31 03:15:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:01.021 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ab58216c-ff95-4b9c-822a-a645c3d83c69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:01.044 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3d12af-0cfa-47fe-86bc-6b5894f85ec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:01.047 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[3997e076-be16-428b-acb7-557924950ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:01 np0005603609 systemd-machined[190912]: New machine qemu-69-instance-00000084.
Jan 31 03:15:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:01.066 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[347e7d13-e681-4586-9f9e-e037b0334091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:15:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:01.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:15:01 np0005603609 systemd[1]: Started Virtual Machine qemu-69-instance-00000084.
Jan 31 03:15:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:01.080 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf68ebb-7b24-480b-a396-4f5c5e6a48e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 868, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 868, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736325, 'reachable_time': 27412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276021, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:01.092 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[463df07b-c953-4848-bab0-75e190601f85]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736331, 'tstamp': 736331}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276022, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736333, 'tstamp': 736333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276022, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:01.094 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:01 np0005603609 nova_compute[221550]: 2026-01-31 08:15:01.096 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:01 np0005603609 nova_compute[221550]: 2026-01-31 08:15:01.097 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:01.098 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:01.098 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:01.099 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:01.100 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e305 e305: 3 total, 3 up, 3 in
Jan 31 03:15:01 np0005603609 nova_compute[221550]: 2026-01-31 08:15:01.805 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 9fac8a1f-1f12-41c8-81b6-59c0bf962590 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:15:01 np0005603609 nova_compute[221550]: 2026-01-31 08:15:01.806 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847301.8045955, 9fac8a1f-1f12-41c8-81b6-59c0bf962590 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:15:01 np0005603609 nova_compute[221550]: 2026-01-31 08:15:01.806 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:15:01 np0005603609 nova_compute[221550]: 2026-01-31 08:15:01.810 221554 DEBUG nova.compute.manager [None req-349e72fd-25e6-4b64-ade7-77e1449a4fe9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:15:02 np0005603609 nova_compute[221550]: 2026-01-31 08:15:02.013 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:15:02 np0005603609 nova_compute[221550]: 2026-01-31 08:15:02.019 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:15:02 np0005603609 nova_compute[221550]: 2026-01-31 08:15:02.226 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Jan 31 03:15:02 np0005603609 nova_compute[221550]: 2026-01-31 08:15:02.227 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847301.805733, 9fac8a1f-1f12-41c8-81b6-59c0bf962590 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:15:02 np0005603609 nova_compute[221550]: 2026-01-31 08:15:02.227 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] VM Started (Lifecycle Event)#033[00m
Jan 31 03:15:02 np0005603609 nova_compute[221550]: 2026-01-31 08:15:02.320 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:15:02 np0005603609 nova_compute[221550]: 2026-01-31 08:15:02.324 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:15:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:02.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:02 np0005603609 nova_compute[221550]: 2026-01-31 08:15:02.915 221554 DEBUG nova.compute.manager [req-6539be8f-685a-4a74-a45d-cdf64d885b52 req-34a5cd6d-a671-4943-ba51-1da8c2050285 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:15:02 np0005603609 nova_compute[221550]: 2026-01-31 08:15:02.916 221554 DEBUG oslo_concurrency.lockutils [req-6539be8f-685a-4a74-a45d-cdf64d885b52 req-34a5cd6d-a671-4943-ba51-1da8c2050285 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:02 np0005603609 nova_compute[221550]: 2026-01-31 08:15:02.917 221554 DEBUG oslo_concurrency.lockutils [req-6539be8f-685a-4a74-a45d-cdf64d885b52 req-34a5cd6d-a671-4943-ba51-1da8c2050285 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:02 np0005603609 nova_compute[221550]: 2026-01-31 08:15:02.917 221554 DEBUG oslo_concurrency.lockutils [req-6539be8f-685a-4a74-a45d-cdf64d885b52 req-34a5cd6d-a671-4943-ba51-1da8c2050285 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:02 np0005603609 nova_compute[221550]: 2026-01-31 08:15:02.917 221554 DEBUG nova.compute.manager [req-6539be8f-685a-4a74-a45d-cdf64d885b52 req-34a5cd6d-a671-4943-ba51-1da8c2050285 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] No waiting events found dispatching network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:15:02 np0005603609 nova_compute[221550]: 2026-01-31 08:15:02.917 221554 WARNING nova.compute.manager [req-6539be8f-685a-4a74-a45d-cdf64d885b52 req-34a5cd6d-a671-4943-ba51-1da8c2050285 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received unexpected event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e for instance with vm_state rescued and task_state None.#033[00m
Jan 31 03:15:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:15:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:03.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:15:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:15:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:04.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:15:04 np0005603609 nova_compute[221550]: 2026-01-31 08:15:04.860 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:05.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:05 np0005603609 nova_compute[221550]: 2026-01-31 08:15:05.258 221554 DEBUG nova.compute.manager [req-c07c9a32-1e32-45cf-9ea1-e9e0823557fd req-d59ef859-f19f-4571-a949-f5ed20d5c583 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:15:05 np0005603609 nova_compute[221550]: 2026-01-31 08:15:05.259 221554 DEBUG oslo_concurrency.lockutils [req-c07c9a32-1e32-45cf-9ea1-e9e0823557fd req-d59ef859-f19f-4571-a949-f5ed20d5c583 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:05 np0005603609 nova_compute[221550]: 2026-01-31 08:15:05.259 221554 DEBUG oslo_concurrency.lockutils [req-c07c9a32-1e32-45cf-9ea1-e9e0823557fd req-d59ef859-f19f-4571-a949-f5ed20d5c583 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:05 np0005603609 nova_compute[221550]: 2026-01-31 08:15:05.260 221554 DEBUG oslo_concurrency.lockutils [req-c07c9a32-1e32-45cf-9ea1-e9e0823557fd req-d59ef859-f19f-4571-a949-f5ed20d5c583 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:05 np0005603609 nova_compute[221550]: 2026-01-31 08:15:05.261 221554 DEBUG nova.compute.manager [req-c07c9a32-1e32-45cf-9ea1-e9e0823557fd req-d59ef859-f19f-4571-a949-f5ed20d5c583 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] No waiting events found dispatching network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:15:05 np0005603609 nova_compute[221550]: 2026-01-31 08:15:05.261 221554 WARNING nova.compute.manager [req-c07c9a32-1e32-45cf-9ea1-e9e0823557fd req-d59ef859-f19f-4571-a949-f5ed20d5c583 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received unexpected event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e for instance with vm_state rescued and task_state None.#033[00m
Jan 31 03:15:05 np0005603609 nova_compute[221550]: 2026-01-31 08:15:05.707 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:06 np0005603609 nova_compute[221550]: 2026-01-31 08:15:06.037 221554 INFO nova.compute.manager [None req-d6ac7bdd-64f1-4f5f-a452-285176d3acc5 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Unrescuing#033[00m
Jan 31 03:15:06 np0005603609 nova_compute[221550]: 2026-01-31 08:15:06.037 221554 DEBUG oslo_concurrency.lockutils [None req-d6ac7bdd-64f1-4f5f-a452-285176d3acc5 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "refresh_cache-9fac8a1f-1f12-41c8-81b6-59c0bf962590" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:15:06 np0005603609 nova_compute[221550]: 2026-01-31 08:15:06.038 221554 DEBUG oslo_concurrency.lockutils [None req-d6ac7bdd-64f1-4f5f-a452-285176d3acc5 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquired lock "refresh_cache-9fac8a1f-1f12-41c8-81b6-59c0bf962590" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:15:06 np0005603609 nova_compute[221550]: 2026-01-31 08:15:06.038 221554 DEBUG nova.network.neutron [None req-d6ac7bdd-64f1-4f5f-a452-285176d3acc5 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:15:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:06.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:15:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:07.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:15:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:07.508 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:07.508 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:07.508 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:08.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:08 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:15:08 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:15:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:15:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:09.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:15:09 np0005603609 nova_compute[221550]: 2026-01-31 08:15:09.862 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:10 np0005603609 nova_compute[221550]: 2026-01-31 08:15:10.709 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:15:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:10.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:15:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:11.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:12.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:15:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:13.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:15:13 np0005603609 nova_compute[221550]: 2026-01-31 08:15:13.372 221554 DEBUG nova.network.neutron [None req-d6ac7bdd-64f1-4f5f-a452-285176d3acc5 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Updating instance_info_cache with network_info: [{"id": "bec4c251-b190-498a-8020-4390b883634e", "address": "fa:16:3e:af:63:b4", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec4c251-b1", "ovs_interfaceid": "bec4c251-b190-498a-8020-4390b883634e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:15:13 np0005603609 nova_compute[221550]: 2026-01-31 08:15:13.589 221554 DEBUG oslo_concurrency.lockutils [None req-d6ac7bdd-64f1-4f5f-a452-285176d3acc5 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Releasing lock "refresh_cache-9fac8a1f-1f12-41c8-81b6-59c0bf962590" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:15:13 np0005603609 nova_compute[221550]: 2026-01-31 08:15:13.590 221554 DEBUG nova.objects.instance [None req-d6ac7bdd-64f1-4f5f-a452-285176d3acc5 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'flavor' on Instance uuid 9fac8a1f-1f12-41c8-81b6-59c0bf962590 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:15:13 np0005603609 kernel: tapbec4c251-b1 (unregistering): left promiscuous mode
Jan 31 03:15:13 np0005603609 NetworkManager[49064]: <info>  [1769847313.7734] device (tapbec4c251-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:15:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:15:13Z|00558|binding|INFO|Releasing lport bec4c251-b190-498a-8020-4390b883634e from this chassis (sb_readonly=0)
Jan 31 03:15:13 np0005603609 nova_compute[221550]: 2026-01-31 08:15:13.776 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:15:13Z|00559|binding|INFO|Setting lport bec4c251-b190-498a-8020-4390b883634e down in Southbound
Jan 31 03:15:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:15:13Z|00560|binding|INFO|Removing iface tapbec4c251-b1 ovn-installed in OVS
Jan 31 03:15:13 np0005603609 nova_compute[221550]: 2026-01-31 08:15:13.778 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:13 np0005603609 nova_compute[221550]: 2026-01-31 08:15:13.792 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:13.809 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:63:b4 10.100.0.9'], port_security=['fa:16:3e:af:63:b4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9fac8a1f-1f12-41c8-81b6-59c0bf962590', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=bec4c251-b190-498a-8020-4390b883634e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:15:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:13.810 140058 INFO neutron.agent.ovn.metadata.agent [-] Port bec4c251-b190-498a-8020-4390b883634e in datapath 088d6992-6ba6-4719-a977-b3d306740157 unbound from our chassis#033[00m
Jan 31 03:15:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:13.811 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157#033[00m
Jan 31 03:15:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:13.826 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4a96dcb9-32f5-4171-ac3a-657fbf86320b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:13 np0005603609 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000084.scope: Deactivated successfully.
Jan 31 03:15:13 np0005603609 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000084.scope: Consumed 12.411s CPU time.
Jan 31 03:15:13 np0005603609 systemd-machined[190912]: Machine qemu-69-instance-00000084 terminated.
Jan 31 03:15:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:13.852 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6acdc9-502a-4169-84b7-b82f1b9765ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:13.856 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[3089c58c-8073-4fe6-b81e-2ccb583d17f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:13.883 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f8456090-a7d4-4e34-974f-3661d64c5796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:13.902 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f73908a2-ecc7-444e-a8bc-029db6742b9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 21, 'rx_bytes': 868, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 21, 'rx_bytes': 868, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736325, 'reachable_time': 27412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276154, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:13 np0005603609 nova_compute[221550]: 2026-01-31 08:15:13.907 221554 INFO nova.virt.libvirt.driver [-] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Instance destroyed successfully.#033[00m
Jan 31 03:15:13 np0005603609 nova_compute[221550]: 2026-01-31 08:15:13.907 221554 DEBUG nova.objects.instance [None req-d6ac7bdd-64f1-4f5f-a452-285176d3acc5 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'numa_topology' on Instance uuid 9fac8a1f-1f12-41c8-81b6-59c0bf962590 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:15:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:13.918 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[abe06950-f5cd-4935-8d95-99338568b950]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736331, 'tstamp': 736331}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276163, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736333, 'tstamp': 736333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276163, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:13.919 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:13 np0005603609 nova_compute[221550]: 2026-01-31 08:15:13.921 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:13 np0005603609 nova_compute[221550]: 2026-01-31 08:15:13.926 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:13.927 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:13.927 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:13.928 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:13.928 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:14 np0005603609 kernel: tapbec4c251-b1: entered promiscuous mode
Jan 31 03:15:14 np0005603609 NetworkManager[49064]: <info>  [1769847314.0455] manager: (tapbec4c251-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/268)
Jan 31 03:15:14 np0005603609 nova_compute[221550]: 2026-01-31 08:15:14.046 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:14 np0005603609 ovn_controller[130359]: 2026-01-31T08:15:14Z|00561|binding|INFO|Claiming lport bec4c251-b190-498a-8020-4390b883634e for this chassis.
Jan 31 03:15:14 np0005603609 ovn_controller[130359]: 2026-01-31T08:15:14Z|00562|binding|INFO|bec4c251-b190-498a-8020-4390b883634e: Claiming fa:16:3e:af:63:b4 10.100.0.9
Jan 31 03:15:14 np0005603609 systemd-udevd[276143]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:15:14 np0005603609 ovn_controller[130359]: 2026-01-31T08:15:14Z|00563|binding|INFO|Setting lport bec4c251-b190-498a-8020-4390b883634e ovn-installed in OVS
Jan 31 03:15:14 np0005603609 nova_compute[221550]: 2026-01-31 08:15:14.054 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:14 np0005603609 NetworkManager[49064]: <info>  [1769847314.0559] device (tapbec4c251-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:15:14 np0005603609 NetworkManager[49064]: <info>  [1769847314.0563] device (tapbec4c251-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:15:14 np0005603609 nova_compute[221550]: 2026-01-31 08:15:14.058 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:14 np0005603609 systemd-machined[190912]: New machine qemu-70-instance-00000084.
Jan 31 03:15:14 np0005603609 systemd[1]: Started Virtual Machine qemu-70-instance-00000084.
Jan 31 03:15:14 np0005603609 ovn_controller[130359]: 2026-01-31T08:15:14Z|00564|binding|INFO|Setting lport bec4c251-b190-498a-8020-4390b883634e up in Southbound
Jan 31 03:15:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:14.150 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:63:b4 10.100.0.9'], port_security=['fa:16:3e:af:63:b4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9fac8a1f-1f12-41c8-81b6-59c0bf962590', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=bec4c251-b190-498a-8020-4390b883634e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:15:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:14.152 140058 INFO neutron.agent.ovn.metadata.agent [-] Port bec4c251-b190-498a-8020-4390b883634e in datapath 088d6992-6ba6-4719-a977-b3d306740157 bound to our chassis#033[00m
Jan 31 03:15:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:14.154 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157#033[00m
Jan 31 03:15:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:14.163 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8344e712-c3c9-4027-9ae8-89808fe8d381]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:14.178 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[42b4c533-df0d-483f-be2d-c880f6298ab1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:14.181 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[2c73610f-3dc0-4fb9-bf67-3d4cedf717a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:14.198 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d013d6-35e4-48ba-ba59-f1cc2e048b9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:14.207 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5a408438-d96e-427d-84db-1841b15e0303]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 23, 'rx_bytes': 868, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 23, 'rx_bytes': 868, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736325, 'reachable_time': 27412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276191, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:14.216 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9baacf09-f3c0-4464-b9f4-fc5fca904846]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736331, 'tstamp': 736331}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276192, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736333, 'tstamp': 736333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276192, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:14.217 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:14 np0005603609 nova_compute[221550]: 2026-01-31 08:15:14.219 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:14 np0005603609 nova_compute[221550]: 2026-01-31 08:15:14.220 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:14.220 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:14.221 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:14.221 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:14.221 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:14 np0005603609 nova_compute[221550]: 2026-01-31 08:15:14.464 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 9fac8a1f-1f12-41c8-81b6-59c0bf962590 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:15:14 np0005603609 nova_compute[221550]: 2026-01-31 08:15:14.464 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847314.464227, 9fac8a1f-1f12-41c8-81b6-59c0bf962590 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:15:14 np0005603609 nova_compute[221550]: 2026-01-31 08:15:14.465 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:15:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:14.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:14 np0005603609 nova_compute[221550]: 2026-01-31 08:15:14.898 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:14 np0005603609 nova_compute[221550]: 2026-01-31 08:15:14.936 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:15:14 np0005603609 nova_compute[221550]: 2026-01-31 08:15:14.940 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:15:14 np0005603609 nova_compute[221550]: 2026-01-31 08:15:14.983 221554 DEBUG nova.compute.manager [None req-d6ac7bdd-64f1-4f5f-a452-285176d3acc5 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:15:15 np0005603609 nova_compute[221550]: 2026-01-31 08:15:15.043 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:15:15 np0005603609 nova_compute[221550]: 2026-01-31 08:15:15.043 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847314.4670868, 9fac8a1f-1f12-41c8-81b6-59c0bf962590 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:15:15 np0005603609 nova_compute[221550]: 2026-01-31 08:15:15.043 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] VM Started (Lifecycle Event)#033[00m
Jan 31 03:15:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:15.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:15 np0005603609 nova_compute[221550]: 2026-01-31 08:15:15.096 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:15:15 np0005603609 nova_compute[221550]: 2026-01-31 08:15:15.099 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:15:15 np0005603609 nova_compute[221550]: 2026-01-31 08:15:15.710 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:15 np0005603609 nova_compute[221550]: 2026-01-31 08:15:15.892 221554 DEBUG nova.compute.manager [req-707c53d2-a104-4148-8820-cbf9c55aae3b req-772d1ea1-c637-4313-83f6-8e7f176b782a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received event network-vif-unplugged-bec4c251-b190-498a-8020-4390b883634e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:15:15 np0005603609 nova_compute[221550]: 2026-01-31 08:15:15.892 221554 DEBUG oslo_concurrency.lockutils [req-707c53d2-a104-4148-8820-cbf9c55aae3b req-772d1ea1-c637-4313-83f6-8e7f176b782a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:15 np0005603609 nova_compute[221550]: 2026-01-31 08:15:15.892 221554 DEBUG oslo_concurrency.lockutils [req-707c53d2-a104-4148-8820-cbf9c55aae3b req-772d1ea1-c637-4313-83f6-8e7f176b782a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:15 np0005603609 nova_compute[221550]: 2026-01-31 08:15:15.893 221554 DEBUG oslo_concurrency.lockutils [req-707c53d2-a104-4148-8820-cbf9c55aae3b req-772d1ea1-c637-4313-83f6-8e7f176b782a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:15 np0005603609 nova_compute[221550]: 2026-01-31 08:15:15.893 221554 DEBUG nova.compute.manager [req-707c53d2-a104-4148-8820-cbf9c55aae3b req-772d1ea1-c637-4313-83f6-8e7f176b782a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] No waiting events found dispatching network-vif-unplugged-bec4c251-b190-498a-8020-4390b883634e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:15:15 np0005603609 nova_compute[221550]: 2026-01-31 08:15:15.893 221554 WARNING nova.compute.manager [req-707c53d2-a104-4148-8820-cbf9c55aae3b req-772d1ea1-c637-4313-83f6-8e7f176b782a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received unexpected event network-vif-unplugged-bec4c251-b190-498a-8020-4390b883634e for instance with vm_state active and task_state None.#033[00m
Jan 31 03:15:16 np0005603609 podman[276255]: 2026-01-31 08:15:16.165721663 +0000 UTC m=+0.050382752 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:15:16 np0005603609 podman[276254]: 2026-01-31 08:15:16.211276499 +0000 UTC m=+0.095696532 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:15:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:16.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:17.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.610 221554 DEBUG nova.compute.manager [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.610 221554 DEBUG oslo_concurrency.lockutils [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.610 221554 DEBUG oslo_concurrency.lockutils [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.611 221554 DEBUG oslo_concurrency.lockutils [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.611 221554 DEBUG nova.compute.manager [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] No waiting events found dispatching network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.612 221554 WARNING nova.compute.manager [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received unexpected event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e for instance with vm_state active and task_state None.#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.612 221554 DEBUG nova.compute.manager [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.612 221554 DEBUG oslo_concurrency.lockutils [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.613 221554 DEBUG oslo_concurrency.lockutils [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.613 221554 DEBUG oslo_concurrency.lockutils [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.613 221554 DEBUG nova.compute.manager [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] No waiting events found dispatching network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.614 221554 WARNING nova.compute.manager [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received unexpected event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e for instance with vm_state active and task_state None.#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.614 221554 DEBUG nova.compute.manager [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.614 221554 DEBUG oslo_concurrency.lockutils [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.615 221554 DEBUG oslo_concurrency.lockutils [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.615 221554 DEBUG oslo_concurrency.lockutils [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.615 221554 DEBUG nova.compute.manager [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] No waiting events found dispatching network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:15:18 np0005603609 nova_compute[221550]: 2026-01-31 08:15:18.616 221554 WARNING nova.compute.manager [req-c8013f71-90ac-41db-a7bf-93e9900d068e req-42efc2f3-c8e9-4301-8d82-73920b074242 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received unexpected event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e for instance with vm_state active and task_state None.#033[00m
Jan 31 03:15:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:18.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:19.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:19 np0005603609 nova_compute[221550]: 2026-01-31 08:15:19.899 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:20.433 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:15:20 np0005603609 nova_compute[221550]: 2026-01-31 08:15:20.434 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:20.436 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:15:20 np0005603609 nova_compute[221550]: 2026-01-31 08:15:20.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:15:20 np0005603609 nova_compute[221550]: 2026-01-31 08:15:20.712 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:20.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:21.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:22.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:23.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:15:23.438 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:15:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:24.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:15:24 np0005603609 nova_compute[221550]: 2026-01-31 08:15:24.901 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:15:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:25.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:15:25 np0005603609 nova_compute[221550]: 2026-01-31 08:15:25.715 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:26 np0005603609 nova_compute[221550]: 2026-01-31 08:15:26.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:15:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:26.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:27.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:27 np0005603609 ovn_controller[130359]: 2026-01-31T08:15:27Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:63:b4 10.100.0.9
Jan 31 03:15:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:15:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:28.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:15:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e306 e306: 3 total, 3 up, 3 in
Jan 31 03:15:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:29.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:29 np0005603609 nova_compute[221550]: 2026-01-31 08:15:29.903 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:30 np0005603609 ovn_controller[130359]: 2026-01-31T08:15:30Z|00565|binding|INFO|Releasing lport 7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8 from this chassis (sb_readonly=0)
Jan 31 03:15:30 np0005603609 nova_compute[221550]: 2026-01-31 08:15:30.381 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:30 np0005603609 nova_compute[221550]: 2026-01-31 08:15:30.717 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:30.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:15:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:31.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:15:31 np0005603609 nova_compute[221550]: 2026-01-31 08:15:31.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:15:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:32 np0005603609 nova_compute[221550]: 2026-01-31 08:15:32.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:15:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:32.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:33.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:33 np0005603609 nova_compute[221550]: 2026-01-31 08:15:33.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:15:33 np0005603609 nova_compute[221550]: 2026-01-31 08:15:33.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:15:33 np0005603609 nova_compute[221550]: 2026-01-31 08:15:33.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:15:34 np0005603609 nova_compute[221550]: 2026-01-31 08:15:34.310 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:15:34 np0005603609 nova_compute[221550]: 2026-01-31 08:15:34.311 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:15:34 np0005603609 nova_compute[221550]: 2026-01-31 08:15:34.311 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:15:34 np0005603609 nova_compute[221550]: 2026-01-31 08:15:34.311 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:15:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:34.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:34 np0005603609 nova_compute[221550]: 2026-01-31 08:15:34.905 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:15:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:35.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:15:35 np0005603609 nova_compute[221550]: 2026-01-31 08:15:35.720 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e307 e307: 3 total, 3 up, 3 in
Jan 31 03:15:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e308 e308: 3 total, 3 up, 3 in
Jan 31 03:15:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:36.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:37.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:15:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:38.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:15:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:39.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:39 np0005603609 nova_compute[221550]: 2026-01-31 08:15:39.551 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Updating instance_info_cache with network_info: [{"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:15:39 np0005603609 nova_compute[221550]: 2026-01-31 08:15:39.580 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:15:39 np0005603609 nova_compute[221550]: 2026-01-31 08:15:39.580 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:15:39 np0005603609 nova_compute[221550]: 2026-01-31 08:15:39.581 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:15:39 np0005603609 nova_compute[221550]: 2026-01-31 08:15:39.581 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:15:39 np0005603609 nova_compute[221550]: 2026-01-31 08:15:39.582 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:15:39 np0005603609 nova_compute[221550]: 2026-01-31 08:15:39.582 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:15:39 np0005603609 nova_compute[221550]: 2026-01-31 08:15:39.659 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:39 np0005603609 nova_compute[221550]: 2026-01-31 08:15:39.659 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:39 np0005603609 nova_compute[221550]: 2026-01-31 08:15:39.660 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:39 np0005603609 nova_compute[221550]: 2026-01-31 08:15:39.660 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:15:39 np0005603609 nova_compute[221550]: 2026-01-31 08:15:39.660 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:15:39 np0005603609 nova_compute[221550]: 2026-01-31 08:15:39.907 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:15:40 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1660631670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.067 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.216 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.216 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.219 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.219 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.222 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.222 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.355 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.356 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3859MB free_disk=20.739547729492188GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.356 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.357 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.485 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 981eb990-ec0c-4673-98e7-102afbe0bb51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.485 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 518d36e7-b77a-415d-bd48-f53e637ed0d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.485 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 9fac8a1f-1f12-41c8-81b6-59c0bf962590 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.486 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.486 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.599 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:15:40 np0005603609 nova_compute[221550]: 2026-01-31 08:15:40.722 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:15:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:40.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:15:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:15:41 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3699285779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:15:41 np0005603609 nova_compute[221550]: 2026-01-31 08:15:41.022 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:15:41 np0005603609 nova_compute[221550]: 2026-01-31 08:15:41.027 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:15:41 np0005603609 nova_compute[221550]: 2026-01-31 08:15:41.071 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:15:41 np0005603609 nova_compute[221550]: 2026-01-31 08:15:41.100 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 03:15:41 np0005603609 nova_compute[221550]: 2026-01-31 08:15:41.100 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:41.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:15:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:42.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:15:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:43.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:44 np0005603609 nova_compute[221550]: 2026-01-31 08:15:44.095 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:15:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:44.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:44 np0005603609 nova_compute[221550]: 2026-01-31 08:15:44.909 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:15:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:45.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:15:45 np0005603609 nova_compute[221550]: 2026-01-31 08:15:45.725 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:46 np0005603609 NetworkManager[49064]: <info>  [1769847346.0301] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Jan 31 03:15:46 np0005603609 nova_compute[221550]: 2026-01-31 08:15:46.030 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:46 np0005603609 NetworkManager[49064]: <info>  [1769847346.0312] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Jan 31 03:15:46 np0005603609 nova_compute[221550]: 2026-01-31 08:15:46.059 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:46 np0005603609 ovn_controller[130359]: 2026-01-31T08:15:46Z|00566|binding|INFO|Releasing lport 7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8 from this chassis (sb_readonly=0)
Jan 31 03:15:46 np0005603609 nova_compute[221550]: 2026-01-31 08:15:46.070 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e309 e309: 3 total, 3 up, 3 in
Jan 31 03:15:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:46.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:15:46 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/804013123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:15:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:15:46 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/804013123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:15:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:15:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:47.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:15:47 np0005603609 podman[276351]: 2026-01-31 08:15:47.198448377 +0000 UTC m=+0.074080311 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 03:15:47 np0005603609 podman[276350]: 2026-01-31 08:15:47.225306923 +0000 UTC m=+0.100898916 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:15:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:48.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:49.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:49 np0005603609 nova_compute[221550]: 2026-01-31 08:15:49.953 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:50 np0005603609 nova_compute[221550]: 2026-01-31 08:15:50.728 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:50.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:51.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:52.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:53.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:15:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:54.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:15:54 np0005603609 nova_compute[221550]: 2026-01-31 08:15:54.987 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:55.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:55 np0005603609 nova_compute[221550]: 2026-01-31 08:15:55.731 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:56.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:57.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:58 np0005603609 nova_compute[221550]: 2026-01-31 08:15:58.298 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:58.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:15:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:15:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:59.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:15:59 np0005603609 ovn_controller[130359]: 2026-01-31T08:15:59Z|00567|binding|INFO|Releasing lport 7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8 from this chassis (sb_readonly=0)
Jan 31 03:15:59 np0005603609 nova_compute[221550]: 2026-01-31 08:15:59.400 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:59 np0005603609 nova_compute[221550]: 2026-01-31 08:15:59.988 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:16:00 np0005603609 nova_compute[221550]: 2026-01-31 08:16:00.732 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:16:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:00.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:16:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:01.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:16:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:16:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:02.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:16:02 np0005603609 nova_compute[221550]: 2026-01-31 08:16:02.912 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:16:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:03.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:04 np0005603609 ovn_controller[130359]: 2026-01-31T08:16:04Z|00568|binding|INFO|Releasing lport 7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8 from this chassis (sb_readonly=0)
Jan 31 03:16:04 np0005603609 nova_compute[221550]: 2026-01-31 08:16:04.483 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:16:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:04.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:04 np0005603609 nova_compute[221550]: 2026-01-31 08:16:04.991 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:16:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:05.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:05 np0005603609 nova_compute[221550]: 2026-01-31 08:16:05.734 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:16:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:06.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:16:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:07.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:16:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:16:07.509 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:16:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:16:07.509 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:16:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:16:07.510 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:16:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:08.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:16:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:09.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:16:09 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:16:09 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:16:09 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:16:09 np0005603609 nova_compute[221550]: 2026-01-31 08:16:09.994 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:16:10 np0005603609 nova_compute[221550]: 2026-01-31 08:16:10.736 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:16:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:16:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:10.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:16:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:11.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:12.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:13.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:14 np0005603609 nova_compute[221550]: 2026-01-31 08:16:14.161 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:14.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:14 np0005603609 nova_compute[221550]: 2026-01-31 08:16:14.997 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:15.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:15 np0005603609 nova_compute[221550]: 2026-01-31 08:16:15.738 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:16.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:17 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:16:17 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:16:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:16:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:17.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:16:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:18 np0005603609 podman[276580]: 2026-01-31 08:16:18.188714776 +0000 UTC m=+0.073341662 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:16:18 np0005603609 podman[276579]: 2026-01-31 08:16:18.254646469 +0000 UTC m=+0.139802588 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:16:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:18.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:19.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:20 np0005603609 nova_compute[221550]: 2026-01-31 08:16:19.999 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:20 np0005603609 nova_compute[221550]: 2026-01-31 08:16:20.479 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:20 np0005603609 nova_compute[221550]: 2026-01-31 08:16:20.740 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:16:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:20.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:16:21 np0005603609 nova_compute[221550]: 2026-01-31 08:16:21.063 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:16:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:21.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:16:22 np0005603609 nova_compute[221550]: 2026-01-31 08:16:22.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:22 np0005603609 nova_compute[221550]: 2026-01-31 08:16:22.664 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:16:22.665 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:16:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:16:22.666 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:16:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:22.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e310 e310: 3 total, 3 up, 3 in
Jan 31 03:16:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:23.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:24.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:25 np0005603609 nova_compute[221550]: 2026-01-31 08:16:25.001 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:16:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:25.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:16:25 np0005603609 nova_compute[221550]: 2026-01-31 08:16:25.742 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e311 e311: 3 total, 3 up, 3 in
Jan 31 03:16:26 np0005603609 nova_compute[221550]: 2026-01-31 08:16:26.012 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:26.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e312 e312: 3 total, 3 up, 3 in
Jan 31 03:16:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:27.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:28 np0005603609 nova_compute[221550]: 2026-01-31 08:16:28.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:28 np0005603609 nova_compute[221550]: 2026-01-31 08:16:28.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:28 np0005603609 nova_compute[221550]: 2026-01-31 08:16:28.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:16:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:28.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:16:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:29.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:16:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:16:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 41K writes, 161K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s#012Cumulative WAL: 41K writes, 14K syncs, 2.78 writes per sync, written: 0.15 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8297 writes, 32K keys, 8297 commit groups, 1.0 writes per commit group, ingest: 30.43 MB, 0.05 MB/s#012Interval WAL: 8298 writes, 3287 syncs, 2.52 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:16:30 np0005603609 nova_compute[221550]: 2026-01-31 08:16:30.004 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:30 np0005603609 nova_compute[221550]: 2026-01-31 08:16:30.745 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:30.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:16:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:31.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:16:31 np0005603609 nova_compute[221550]: 2026-01-31 08:16:31.685 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:16:32.667 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:16:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:16:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:32.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:16:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:33.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:34 np0005603609 nova_compute[221550]: 2026-01-31 08:16:34.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:34 np0005603609 nova_compute[221550]: 2026-01-31 08:16:34.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:34 np0005603609 nova_compute[221550]: 2026-01-31 08:16:34.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:16:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:16:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:34.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.005 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:35.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.603 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.638 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Triggering sync for uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.638 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Triggering sync for uuid 518d36e7-b77a-415d-bd48-f53e637ed0d8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.638 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Triggering sync for uuid 9fac8a1f-1f12-41c8-81b6-59c0bf962590 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.639 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.639 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.639 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.640 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.640 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.640 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.688 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.696 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.696 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.697 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.717 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:35 np0005603609 nova_compute[221550]: 2026-01-31 08:16:35.747 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:36 np0005603609 nova_compute[221550]: 2026-01-31 08:16:36.337 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:16:36 np0005603609 nova_compute[221550]: 2026-01-31 08:16:36.338 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:16:36 np0005603609 nova_compute[221550]: 2026-01-31 08:16:36.338 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:16:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e313 e313: 3 total, 3 up, 3 in
Jan 31 03:16:36 np0005603609 nova_compute[221550]: 2026-01-31 08:16:36.594 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:36.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:37.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:38.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:16:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:39.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:16:39 np0005603609 nova_compute[221550]: 2026-01-31 08:16:39.481 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Updating instance_info_cache with network_info: [{"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:16:39 np0005603609 nova_compute[221550]: 2026-01-31 08:16:39.504 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-518d36e7-b77a-415d-bd48-f53e637ed0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:16:39 np0005603609 nova_compute[221550]: 2026-01-31 08:16:39.504 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:16:39 np0005603609 nova_compute[221550]: 2026-01-31 08:16:39.505 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:39 np0005603609 nova_compute[221550]: 2026-01-31 08:16:39.505 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:39 np0005603609 nova_compute[221550]: 2026-01-31 08:16:39.534 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:39 np0005603609 nova_compute[221550]: 2026-01-31 08:16:39.535 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:39 np0005603609 nova_compute[221550]: 2026-01-31 08:16:39.535 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:39 np0005603609 nova_compute[221550]: 2026-01-31 08:16:39.535 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:16:39 np0005603609 nova_compute[221550]: 2026-01-31 08:16:39.536 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:16:40 np0005603609 nova_compute[221550]: 2026-01-31 08:16:40.007 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:40 np0005603609 nova_compute[221550]: 2026-01-31 08:16:40.749 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:40.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:41.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:42.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:43.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:44 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 03:16:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:44.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:45 np0005603609 nova_compute[221550]: 2026-01-31 08:16:45.009 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:45.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:45 np0005603609 nova_compute[221550]: 2026-01-31 08:16:45.751 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:46 np0005603609 nova_compute[221550]: 2026-01-31 08:16:46.064 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:16:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:46.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:16:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 9.143558502s
Jan 31 03:16:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 9.143558502s
Jan 31 03:16:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.416343689s, txc = 0x55f2013ce300
Jan 31 03:16:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.416359901s, txc = 0x55f2008dd200
Jan 31 03:16:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.416142464s, txc = 0x55f200989200
Jan 31 03:16:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.416040421s, txc = 0x55f200555b00
Jan 31 03:16:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.416090012s, txc = 0x55f2013f4000
Jan 31 03:16:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.415860176s, txc = 0x55f200a78c00
Jan 31 03:16:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.415809631s, txc = 0x55f2013cf500
Jan 31 03:16:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.415658951s, txc = 0x55f2013f4300
Jan 31 03:16:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.415645599s, txc = 0x55f2007a6600
Jan 31 03:16:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.415576935s, txc = 0x55f2010c6300
Jan 31 03:16:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).paxos(paxos active c 4770..5506) lease_timeout -- calling new election
Jan 31 03:16:47 np0005603609 ceph-mon[81667]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 31 03:16:47 np0005603609 ceph-mon[81667]: paxos.2).electionLogic(40) init, last seen epoch 40
Jan 31 03:16:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:16:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:47.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:16:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 03:16:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:47 np0005603609 nova_compute[221550]: 2026-01-31 08:16:47.804 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 8.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:16:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 03:16:48 np0005603609 nova_compute[221550]: 2026-01-31 08:16:48.278 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:16:48 np0005603609 nova_compute[221550]: 2026-01-31 08:16:48.279 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:16:48 np0005603609 nova_compute[221550]: 2026-01-31 08:16:48.284 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:16:48 np0005603609 nova_compute[221550]: 2026-01-31 08:16:48.284 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:16:48 np0005603609 nova_compute[221550]: 2026-01-31 08:16:48.288 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:16:48 np0005603609 nova_compute[221550]: 2026-01-31 08:16:48.289 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:16:48 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 03:16:48 np0005603609 nova_compute[221550]: 2026-01-31 08:16:48.461 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:16:48 np0005603609 nova_compute[221550]: 2026-01-31 08:16:48.462 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3894MB free_disk=20.784923553466797GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:16:48 np0005603609 nova_compute[221550]: 2026-01-31 08:16:48.462 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:48 np0005603609 nova_compute[221550]: 2026-01-31 08:16:48.462 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:48 np0005603609 nova_compute[221550]: 2026-01-31 08:16:48.862 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 981eb990-ec0c-4673-98e7-102afbe0bb51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:16:48 np0005603609 nova_compute[221550]: 2026-01-31 08:16:48.862 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 518d36e7-b77a-415d-bd48-f53e637ed0d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:16:48 np0005603609 nova_compute[221550]: 2026-01-31 08:16:48.863 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 9fac8a1f-1f12-41c8-81b6-59c0bf962590 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:16:48 np0005603609 nova_compute[221550]: 2026-01-31 08:16:48.863 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:16:48 np0005603609 nova_compute[221550]: 2026-01-31 08:16:48.863 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:16:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:48.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:49 np0005603609 nova_compute[221550]: 2026-01-31 08:16:49.013 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:16:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:49 np0005603609 podman[276650]: 2026-01-31 08:16:49.158558237 +0000 UTC m=+0.043622938 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:16:49 np0005603609 podman[276647]: 2026-01-31 08:16:49.201636662 +0000 UTC m=+0.088377833 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:16:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:49.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:50 np0005603609 nova_compute[221550]: 2026-01-31 08:16:50.011 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:50 np0005603609 nova_compute[221550]: 2026-01-31 08:16:50.752 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:16:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:50.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:16:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:51.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:52 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 03:16:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:16:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:52.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:16:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:16:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:53.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 03:16:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:16:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2529377701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:16:53 np0005603609 nova_compute[221550]: 2026-01-31 08:16:53.437 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:16:53 np0005603609 nova_compute[221550]: 2026-01-31 08:16:53.442 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:16:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:16:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2899978361' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:16:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:16:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2899978361' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:16:53 np0005603609 nova_compute[221550]: 2026-01-31 08:16:53.637 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:16:53 np0005603609 nova_compute[221550]: 2026-01-31 08:16:53.638 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:16:53 np0005603609 nova_compute[221550]: 2026-01-31 08:16:53.639 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:53 np0005603609 nova_compute[221550]: 2026-01-31 08:16:53.639 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:53 np0005603609 nova_compute[221550]: 2026-01-31 08:16:53.639 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:16:53 np0005603609 nova_compute[221550]: 2026-01-31 08:16:53.994 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:16:54 np0005603609 ceph-mon[81667]: mon.compute-1 calling monitor election
Jan 31 03:16:54 np0005603609 ceph-mon[81667]: mon.compute-0 calling monitor election
Jan 31 03:16:54 np0005603609 ceph-mon[81667]: mon.compute-0 is new leader, mons compute-0,compute-1 in quorum (ranks 0,2)
Jan 31 03:16:54 np0005603609 ceph-mon[81667]: Health check failed: 1/3 mons down, quorum compute-0,compute-1 (MON_DOWN)
Jan 31 03:16:54 np0005603609 ceph-mon[81667]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-1
Jan 31 03:16:54 np0005603609 ceph-mon[81667]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-1
Jan 31 03:16:54 np0005603609 ceph-mon[81667]:    mon.compute-2 (rank 1) addr [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] is down (out of quorum)
Jan 31 03:16:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:54.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:55 np0005603609 nova_compute[221550]: 2026-01-31 08:16:55.013 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:55 np0005603609 ceph-mon[81667]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 31 03:16:55 np0005603609 ceph-mon[81667]: paxos.2).electionLogic(44) init, last seen epoch 44
Jan 31 03:16:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 03:16:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 03:16:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 03:16:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:55.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:55 np0005603609 ceph-mon[81667]: mon.compute-2 calling monitor election
Jan 31 03:16:55 np0005603609 ceph-mon[81667]: mon.compute-0 calling monitor election
Jan 31 03:16:55 np0005603609 ceph-mon[81667]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 03:16:55 np0005603609 ceph-mon[81667]: mon.compute-1 calling monitor election
Jan 31 03:16:55 np0005603609 ceph-mon[81667]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Jan 31 03:16:55 np0005603609 ceph-mon[81667]: Cluster is now healthy
Jan 31 03:16:55 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 03:16:55 np0005603609 nova_compute[221550]: 2026-01-31 08:16:55.754 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:56.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:56 np0005603609 nova_compute[221550]: 2026-01-31 08:16:56.952 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:56 np0005603609 nova_compute[221550]: 2026-01-31 08:16:56.952 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:57.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:58.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:16:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:16:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:59.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:17:00 np0005603609 nova_compute[221550]: 2026-01-31 08:17:00.059 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:00 np0005603609 nova_compute[221550]: 2026-01-31 08:17:00.755 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:00.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:01.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.3 total, 600.0 interval#012Cumulative writes: 11K writes, 57K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.11 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1670 writes, 7738 keys, 1670 commit groups, 1.0 writes per commit group, ingest: 16.53 MB, 0.03 MB/s#012Interval WAL: 1670 writes, 1670 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     46.7      1.47              0.15        34    0.043       0      0       0.0       0.0#012  L6      1/0    9.97 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.6     72.8     61.3      5.14              0.69        33    0.156    208K    18K       0.0       0.0#012 Sum      1/0    9.97 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.6     56.5     58.1      6.61              0.84        67    0.099    208K    18K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.7     33.7     34.4      1.58              0.13         8    0.198     33K   2077       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     72.8     61.3      5.14              0.69        33    0.156    208K    18K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     47.8      1.44              0.15        33    0.044       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.3 total, 600.0 interval#012Flush(GB): cumulative 0.067, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.37 GB write, 0.09 MB/s write, 0.37 GB read, 0.09 MB/s read, 6.6 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 1.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619ad8ed1f0#2 capacity: 304.00 MB usage: 41.23 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000282 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2399,39.68 MB,13.052%) FilterBlock(67,582.86 KB,0.187236%) IndexBlock(67,1004.56 KB,0.322703%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 03:17:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:17:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:02.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:17:02.940649) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847422940687, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1687, "num_deletes": 254, "total_data_size": 3596624, "memory_usage": 3641248, "flush_reason": "Manual Compaction"}
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847422969385, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 2342693, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56301, "largest_seqno": 57982, "table_properties": {"data_size": 2335531, "index_size": 4105, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16155, "raw_average_key_size": 20, "raw_value_size": 2320820, "raw_average_value_size": 3006, "num_data_blocks": 181, "num_entries": 772, "num_filter_entries": 772, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847287, "oldest_key_time": 1769847287, "file_creation_time": 1769847422, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 28788 microseconds, and 3960 cpu microseconds.
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:17:02.969435) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 2342693 bytes OK
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:17:02.969454) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:17:02.974979) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:17:02.975007) EVENT_LOG_v1 {"time_micros": 1769847422974999, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:17:02.975028) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3588833, prev total WAL file size 3588833, number of live WAL files 2.
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:17:02.975889) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(2287KB)], [111(10208KB)]
Jan 31 03:17:02 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847422976000, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 12796010, "oldest_snapshot_seqno": -1}
Jan 31 03:17:03 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 8214 keys, 10839762 bytes, temperature: kUnknown
Jan 31 03:17:03 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847423097028, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 10839762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10786691, "index_size": 31455, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 213435, "raw_average_key_size": 25, "raw_value_size": 10642152, "raw_average_value_size": 1295, "num_data_blocks": 1229, "num_entries": 8214, "num_filter_entries": 8214, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769847422, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:17:03 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:17:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:17:03.097269) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10839762 bytes
Jan 31 03:17:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:17:03.114557) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.7 rd, 89.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 10.0 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(10.1) write-amplify(4.6) OK, records in: 8745, records dropped: 531 output_compression: NoCompression
Jan 31 03:17:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:17:03.114597) EVENT_LOG_v1 {"time_micros": 1769847423114580, "job": 70, "event": "compaction_finished", "compaction_time_micros": 121092, "compaction_time_cpu_micros": 22536, "output_level": 6, "num_output_files": 1, "total_output_size": 10839762, "num_input_records": 8745, "num_output_records": 8214, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:17:03 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:17:03 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847423115034, "job": 70, "event": "table_file_deletion", "file_number": 113}
Jan 31 03:17:03 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:17:03 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847423116032, "job": 70, "event": "table_file_deletion", "file_number": 111}
Jan 31 03:17:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:17:02.975719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:17:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:17:03.116129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:17:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:17:03.116135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:17:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:17:03.116137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:17:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:17:03.116139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:17:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:17:03.116141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:17:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:03.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:04.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:05 np0005603609 nova_compute[221550]: 2026-01-31 08:17:05.061 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:17:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:05.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:17:05 np0005603609 nova_compute[221550]: 2026-01-31 08:17:05.757 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:06.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:17:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:07.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:17:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:17:07.510 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:17:07.510 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:17:07.511 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:08.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:09.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:10 np0005603609 nova_compute[221550]: 2026-01-31 08:17:10.064 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:10 np0005603609 nova_compute[221550]: 2026-01-31 08:17:10.759 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:10.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:17:11.107 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:17:11 np0005603609 nova_compute[221550]: 2026-01-31 08:17:11.107 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:17:11.108 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:17:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:17:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:11.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:17:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:17:11 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1102898705' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:17:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:12 np0005603609 nova_compute[221550]: 2026-01-31 08:17:12.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:12.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:13.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:14.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:15 np0005603609 nova_compute[221550]: 2026-01-31 08:17:15.066 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:17:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:15.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:17:15 np0005603609 nova_compute[221550]: 2026-01-31 08:17:15.762 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:17:16.110 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:17:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:16.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:17:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:17.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:17:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:17:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:18.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:17:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:17:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:19.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:17:20 np0005603609 nova_compute[221550]: 2026-01-31 08:17:20.068 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:20 np0005603609 podman[276845]: 2026-01-31 08:17:20.180104817 +0000 UTC m=+0.058963016 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 03:17:20 np0005603609 podman[276844]: 2026-01-31 08:17:20.203720225 +0000 UTC m=+0.087974064 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, 
tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:17:20 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:17:20 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:17:20 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:17:20 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:17:20 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:17:20 np0005603609 nova_compute[221550]: 2026-01-31 08:17:20.765 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:20.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:21.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e314 e314: 3 total, 3 up, 3 in
Jan 31 03:17:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:17:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:22.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:17:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:23.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:23 np0005603609 nova_compute[221550]: 2026-01-31 08:17:23.703 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e315 e315: 3 total, 3 up, 3 in
Jan 31 03:17:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:24.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e316 e316: 3 total, 3 up, 3 in
Jan 31 03:17:25 np0005603609 nova_compute[221550]: 2026-01-31 08:17:25.071 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:17:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:25.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:17:25 np0005603609 nova_compute[221550]: 2026-01-31 08:17:25.765 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:17:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:26.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:17:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:27.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:27 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:17:27 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:17:28 np0005603609 nova_compute[221550]: 2026-01-31 08:17:28.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:28.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:29.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:30 np0005603609 nova_compute[221550]: 2026-01-31 08:17:30.073 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:30 np0005603609 nova_compute[221550]: 2026-01-31 08:17:30.780 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:30.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:31.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:31 np0005603609 nova_compute[221550]: 2026-01-31 08:17:31.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e317 e317: 3 total, 3 up, 3 in
Jan 31 03:17:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:32.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:33.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:34.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:35 np0005603609 nova_compute[221550]: 2026-01-31 08:17:35.075 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:35.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:35 np0005603609 nova_compute[221550]: 2026-01-31 08:17:35.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:35 np0005603609 nova_compute[221550]: 2026-01-31 08:17:35.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:17:35 np0005603609 nova_compute[221550]: 2026-01-31 08:17:35.782 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:35 np0005603609 nova_compute[221550]: 2026-01-31 08:17:35.946 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-9fac8a1f-1f12-41c8-81b6-59c0bf962590" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:17:35 np0005603609 nova_compute[221550]: 2026-01-31 08:17:35.947 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-9fac8a1f-1f12-41c8-81b6-59c0bf962590" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:17:35 np0005603609 nova_compute[221550]: 2026-01-31 08:17:35.947 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:17:36 np0005603609 ovn_controller[130359]: 2026-01-31T08:17:36Z|00569|binding|INFO|Releasing lport 7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8 from this chassis (sb_readonly=0)
Jan 31 03:17:36 np0005603609 nova_compute[221550]: 2026-01-31 08:17:36.152 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:36 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Jan 31 03:17:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:17:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:36.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:17:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:17:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:37.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.208 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Updating instance_info_cache with network_info: [{"id": "bec4c251-b190-498a-8020-4390b883634e", "address": "fa:16:3e:af:63:b4", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec4c251-b1", "ovs_interfaceid": "bec4c251-b190-498a-8020-4390b883634e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.229 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-9fac8a1f-1f12-41c8-81b6-59c0bf962590" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.230 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.230 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.230 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.230 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.231 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.287 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.288 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.288 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.288 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.289 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:17:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:17:38 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2024757108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.702 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.776 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.776 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.779 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.779 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.782 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.782 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:17:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:17:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:38.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.949 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.949 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3879MB free_disk=20.774208068847656GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.950 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:38 np0005603609 nova_compute[221550]: 2026-01-31 08:17:38.950 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:39 np0005603609 nova_compute[221550]: 2026-01-31 08:17:39.258 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 981eb990-ec0c-4673-98e7-102afbe0bb51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:17:39 np0005603609 nova_compute[221550]: 2026-01-31 08:17:39.259 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 518d36e7-b77a-415d-bd48-f53e637ed0d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:17:39 np0005603609 nova_compute[221550]: 2026-01-31 08:17:39.259 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 9fac8a1f-1f12-41c8-81b6-59c0bf962590 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:17:39 np0005603609 nova_compute[221550]: 2026-01-31 08:17:39.260 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:17:39 np0005603609 nova_compute[221550]: 2026-01-31 08:17:39.260 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:17:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:17:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:39.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:17:39 np0005603609 nova_compute[221550]: 2026-01-31 08:17:39.361 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:17:39 np0005603609 nova_compute[221550]: 2026-01-31 08:17:39.732 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:17:39 np0005603609 nova_compute[221550]: 2026-01-31 08:17:39.732 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:17:39 np0005603609 nova_compute[221550]: 2026-01-31 08:17:39.753 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:17:39 np0005603609 nova_compute[221550]: 2026-01-31 08:17:39.823 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:17:39 np0005603609 nova_compute[221550]: 2026-01-31 08:17:39.960 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:17:40 np0005603609 nova_compute[221550]: 2026-01-31 08:17:40.078 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:17:40 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3919650562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:17:40 np0005603609 nova_compute[221550]: 2026-01-31 08:17:40.394 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:17:40 np0005603609 nova_compute[221550]: 2026-01-31 08:17:40.401 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:17:40 np0005603609 nova_compute[221550]: 2026-01-31 08:17:40.434 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:17:40 np0005603609 nova_compute[221550]: 2026-01-31 08:17:40.437 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:17:40 np0005603609 nova_compute[221550]: 2026-01-31 08:17:40.437 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:40 np0005603609 nova_compute[221550]: 2026-01-31 08:17:40.785 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:40.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:41.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:41 np0005603609 ovn_controller[130359]: 2026-01-31T08:17:41Z|00570|binding|INFO|Releasing lport 7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8 from this chassis (sb_readonly=0)
Jan 31 03:17:41 np0005603609 nova_compute[221550]: 2026-01-31 08:17:41.583 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:41 np0005603609 nova_compute[221550]: 2026-01-31 08:17:41.868 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:41 np0005603609 nova_compute[221550]: 2026-01-31 08:17:41.868 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:17:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:42.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:17:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:43.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:17:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:44.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:17:45 np0005603609 nova_compute[221550]: 2026-01-31 08:17:45.080 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:45.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:45 np0005603609 nova_compute[221550]: 2026-01-31 08:17:45.788 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:46.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:47.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:17:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:48.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:17:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:49.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:50 np0005603609 nova_compute[221550]: 2026-01-31 08:17:50.082 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:50 np0005603609 nova_compute[221550]: 2026-01-31 08:17:50.790 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:50.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:17:51.083 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:17:51 np0005603609 nova_compute[221550]: 2026-01-31 08:17:51.083 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:17:51.084 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:17:51 np0005603609 podman[276990]: 2026-01-31 08:17:51.170711469 +0000 UTC m=+0.052626906 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:17:51 np0005603609 podman[276989]: 2026-01-31 08:17:51.218731631 +0000 UTC m=+0.102061792 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 03:17:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:51.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:52.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:53.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 03:17:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:54.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 03:17:55 np0005603609 nova_compute[221550]: 2026-01-31 08:17:55.122 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:55.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:55 np0005603609 nova_compute[221550]: 2026-01-31 08:17:55.792 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:17:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:56.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:17:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:17:57.086 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:17:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:57.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:58.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:17:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:59.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:00 np0005603609 nova_compute[221550]: 2026-01-31 08:18:00.125 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e318 e318: 3 total, 3 up, 3 in
Jan 31 03:18:00 np0005603609 nova_compute[221550]: 2026-01-31 08:18:00.794 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:00.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:18:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:01.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:18:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e319 e319: 3 total, 3 up, 3 in
Jan 31 03:18:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:02.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:03.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:04.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:05 np0005603609 nova_compute[221550]: 2026-01-31 08:18:05.127 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:05.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:05 np0005603609 nova_compute[221550]: 2026-01-31 08:18:05.797 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:18:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:06.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:18:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e320 e320: 3 total, 3 up, 3 in
Jan 31 03:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:07.511 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:07.511 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:07.512 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:07.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:18:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:08.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:18:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:09.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:10 np0005603609 nova_compute[221550]: 2026-01-31 08:18:10.129 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:10 np0005603609 nova_compute[221550]: 2026-01-31 08:18:10.799 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:18:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:10.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:18:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:18:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:11.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:18:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e321 e321: 3 total, 3 up, 3 in
Jan 31 03:18:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:18:12 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2716440508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:18:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:12.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:13.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e322 e322: 3 total, 3 up, 3 in
Jan 31 03:18:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:18:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:14.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:18:15 np0005603609 nova_compute[221550]: 2026-01-31 08:18:15.131 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:18:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:15.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:18:15 np0005603609 nova_compute[221550]: 2026-01-31 08:18:15.801 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:16.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:18:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3663357759' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:18:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:18:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3663357759' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:18:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:17.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e323 e323: 3 total, 3 up, 3 in
Jan 31 03:18:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:18.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:19.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:19 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 31 03:18:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e324 e324: 3 total, 3 up, 3 in
Jan 31 03:18:20 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 31 03:18:20 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 31 03:18:20 np0005603609 nova_compute[221550]: 2026-01-31 08:18:20.134 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:20 np0005603609 nova_compute[221550]: 2026-01-31 08:18:20.804 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:20.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.411 221554 DEBUG oslo_concurrency.lockutils [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.412 221554 DEBUG oslo_concurrency.lockutils [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.412 221554 DEBUG oslo_concurrency.lockutils [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.412 221554 DEBUG oslo_concurrency.lockutils [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.412 221554 DEBUG oslo_concurrency.lockutils [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.413 221554 INFO nova.compute.manager [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Terminating instance#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.414 221554 DEBUG nova.compute.manager [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:18:21 np0005603609 kernel: tapbec4c251-b1 (unregistering): left promiscuous mode
Jan 31 03:18:21 np0005603609 NetworkManager[49064]: <info>  [1769847501.4750] device (tapbec4c251-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:18:21 np0005603609 ovn_controller[130359]: 2026-01-31T08:18:21Z|00571|binding|INFO|Releasing lport bec4c251-b190-498a-8020-4390b883634e from this chassis (sb_readonly=0)
Jan 31 03:18:21 np0005603609 ovn_controller[130359]: 2026-01-31T08:18:21Z|00572|binding|INFO|Setting lport bec4c251-b190-498a-8020-4390b883634e down in Southbound
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.529 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:21 np0005603609 ovn_controller[130359]: 2026-01-31T08:18:21Z|00573|binding|INFO|Removing iface tapbec4c251-b1 ovn-installed in OVS
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.536 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:21 np0005603609 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000084.scope: Deactivated successfully.
Jan 31 03:18:21 np0005603609 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000084.scope: Consumed 18.782s CPU time.
Jan 31 03:18:21 np0005603609 systemd-machined[190912]: Machine qemu-70-instance-00000084 terminated.
Jan 31 03:18:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:21.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:21 np0005603609 podman[277036]: 2026-01-31 08:18:21.628118435 +0000 UTC m=+0.080429722 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:18:21 np0005603609 podman[277035]: 2026-01-31 08:18:21.630279748 +0000 UTC m=+0.091644513 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller)
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.631 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.635 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.646 221554 INFO nova.virt.libvirt.driver [-] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Instance destroyed successfully.#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.647 221554 DEBUG nova.objects.instance [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'resources' on Instance uuid 9fac8a1f-1f12-41c8-81b6-59c0bf962590 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.853 221554 DEBUG nova.virt.libvirt.vif [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1592665940',display_name='tempest-ServerStableDeviceRescueTest-server-1592665940',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1592665940',id=132,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:15:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1633c84ea1bf46b080aaafd30bbcf25f',ramdisk_id='',reservation_id='r-apdvib3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-569420416',owner_user_name='tempest-ServerStableDeviceRescueTest-569420416-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:15:15Z,user_data=None,user_id='d7d9a44201d548aba1e1654e136ddd06',uuid=9fac8a1f-1f12-41c8-81b6-59c0bf962590,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bec4c251-b190-498a-8020-4390b883634e", "address": "fa:16:3e:af:63:b4", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec4c251-b1", "ovs_interfaceid": "bec4c251-b190-498a-8020-4390b883634e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.853 221554 DEBUG nova.network.os_vif_util [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converting VIF {"id": "bec4c251-b190-498a-8020-4390b883634e", "address": "fa:16:3e:af:63:b4", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbec4c251-b1", "ovs_interfaceid": "bec4c251-b190-498a-8020-4390b883634e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.854 221554 DEBUG nova.network.os_vif_util [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:63:b4,bridge_name='br-int',has_traffic_filtering=True,id=bec4c251-b190-498a-8020-4390b883634e,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec4c251-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.854 221554 DEBUG os_vif [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:63:b4,bridge_name='br-int',has_traffic_filtering=True,id=bec4c251-b190-498a-8020-4390b883634e,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec4c251-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.856 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.856 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbec4c251-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.858 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.859 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.861 221554 INFO os_vif [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:63:b4,bridge_name='br-int',has_traffic_filtering=True,id=bec4c251-b190-498a-8020-4390b883634e,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbec4c251-b1')#033[00m
Jan 31 03:18:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:21.871 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:63:b4 10.100.0.9'], port_security=['fa:16:3e:af:63:b4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9fac8a1f-1f12-41c8-81b6-59c0bf962590', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=bec4c251-b190-498a-8020-4390b883634e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:18:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:21.872 140058 INFO neutron.agent.ovn.metadata.agent [-] Port bec4c251-b190-498a-8020-4390b883634e in datapath 088d6992-6ba6-4719-a977-b3d306740157 unbound from our chassis#033[00m
Jan 31 03:18:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:21.873 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157#033[00m
Jan 31 03:18:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:21.883 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e29d14-f0d5-4c4f-9279-2d727b2f5f35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:21.898 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[9739583f-1636-4af7-acb2-ebf18cf13c8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:21.903 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ab5dc8-23fc-4562-af23-c0d6f2a7c5ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:21.917 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a81553eb-8bb4-40fe-b60e-d6add49fa766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:21.928 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[18668fa0-3cfc-4044-97c8-22e6d7179eb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 25, 'rx_bytes': 952, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 25, 'rx_bytes': 952, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736325, 'reachable_time': 39418, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277121, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:21.945 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9e77598e-88f2-45b6-9139-653d66e219f1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736331, 'tstamp': 736331}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277122, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736333, 'tstamp': 736333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277122, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:21.947 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:18:21 np0005603609 nova_compute[221550]: 2026-01-31 08:18:21.949 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:21.950 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:18:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:21.951 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:18:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:21.951 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:18:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:21.952 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:18:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:22 np0005603609 nova_compute[221550]: 2026-01-31 08:18:22.865 221554 INFO nova.virt.libvirt.driver [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Deleting instance files /var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590_del#033[00m
Jan 31 03:18:22 np0005603609 nova_compute[221550]: 2026-01-31 08:18:22.866 221554 INFO nova.virt.libvirt.driver [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Deletion of /var/lib/nova/instances/9fac8a1f-1f12-41c8-81b6-59c0bf962590_del complete#033[00m
Jan 31 03:18:22 np0005603609 nova_compute[221550]: 2026-01-31 08:18:22.963 221554 INFO nova.compute.manager [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Took 1.55 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:18:22 np0005603609 nova_compute[221550]: 2026-01-31 08:18:22.964 221554 DEBUG oslo.service.loopingcall [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:18:22 np0005603609 nova_compute[221550]: 2026-01-31 08:18:22.965 221554 DEBUG nova.compute.manager [-] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:18:22 np0005603609 nova_compute[221550]: 2026-01-31 08:18:22.965 221554 DEBUG nova.network.neutron [-] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:18:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:22.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:23 np0005603609 nova_compute[221550]: 2026-01-31 08:18:23.091 221554 DEBUG nova.compute.manager [req-ba3e17ae-ae26-4668-92dc-0a7603b26785 req-740abd2d-bf50-4a54-b8b8-78aa9b7c9497 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received event network-vif-unplugged-bec4c251-b190-498a-8020-4390b883634e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:18:23 np0005603609 nova_compute[221550]: 2026-01-31 08:18:23.092 221554 DEBUG oslo_concurrency.lockutils [req-ba3e17ae-ae26-4668-92dc-0a7603b26785 req-740abd2d-bf50-4a54-b8b8-78aa9b7c9497 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:23 np0005603609 nova_compute[221550]: 2026-01-31 08:18:23.092 221554 DEBUG oslo_concurrency.lockutils [req-ba3e17ae-ae26-4668-92dc-0a7603b26785 req-740abd2d-bf50-4a54-b8b8-78aa9b7c9497 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:23 np0005603609 nova_compute[221550]: 2026-01-31 08:18:23.093 221554 DEBUG oslo_concurrency.lockutils [req-ba3e17ae-ae26-4668-92dc-0a7603b26785 req-740abd2d-bf50-4a54-b8b8-78aa9b7c9497 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:23 np0005603609 nova_compute[221550]: 2026-01-31 08:18:23.093 221554 DEBUG nova.compute.manager [req-ba3e17ae-ae26-4668-92dc-0a7603b26785 req-740abd2d-bf50-4a54-b8b8-78aa9b7c9497 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] No waiting events found dispatching network-vif-unplugged-bec4c251-b190-498a-8020-4390b883634e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:18:23 np0005603609 nova_compute[221550]: 2026-01-31 08:18:23.093 221554 DEBUG nova.compute.manager [req-ba3e17ae-ae26-4668-92dc-0a7603b26785 req-740abd2d-bf50-4a54-b8b8-78aa9b7c9497 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received event network-vif-unplugged-bec4c251-b190-498a-8020-4390b883634e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:18:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:23.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:24 np0005603609 nova_compute[221550]: 2026-01-31 08:18:24.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:18:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:24.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:18:25 np0005603609 nova_compute[221550]: 2026-01-31 08:18:25.167 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:25 np0005603609 nova_compute[221550]: 2026-01-31 08:18:25.233 221554 DEBUG nova.network.neutron [-] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:18:25 np0005603609 nova_compute[221550]: 2026-01-31 08:18:25.277 221554 INFO nova.compute.manager [-] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Took 2.31 seconds to deallocate network for instance.#033[00m
Jan 31 03:18:25 np0005603609 nova_compute[221550]: 2026-01-31 08:18:25.315 221554 DEBUG nova.compute.manager [req-87dce498-cb07-4f48-bc69-fb7db718c396 req-a58ffcea-4988-424b-a41b-12a53c46e468 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:18:25 np0005603609 nova_compute[221550]: 2026-01-31 08:18:25.316 221554 DEBUG oslo_concurrency.lockutils [req-87dce498-cb07-4f48-bc69-fb7db718c396 req-a58ffcea-4988-424b-a41b-12a53c46e468 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:25 np0005603609 nova_compute[221550]: 2026-01-31 08:18:25.316 221554 DEBUG oslo_concurrency.lockutils [req-87dce498-cb07-4f48-bc69-fb7db718c396 req-a58ffcea-4988-424b-a41b-12a53c46e468 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:25 np0005603609 nova_compute[221550]: 2026-01-31 08:18:25.317 221554 DEBUG oslo_concurrency.lockutils [req-87dce498-cb07-4f48-bc69-fb7db718c396 req-a58ffcea-4988-424b-a41b-12a53c46e468 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:25 np0005603609 nova_compute[221550]: 2026-01-31 08:18:25.317 221554 DEBUG nova.compute.manager [req-87dce498-cb07-4f48-bc69-fb7db718c396 req-a58ffcea-4988-424b-a41b-12a53c46e468 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] No waiting events found dispatching network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:18:25 np0005603609 nova_compute[221550]: 2026-01-31 08:18:25.317 221554 WARNING nova.compute.manager [req-87dce498-cb07-4f48-bc69-fb7db718c396 req-a58ffcea-4988-424b-a41b-12a53c46e468 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received unexpected event network-vif-plugged-bec4c251-b190-498a-8020-4390b883634e for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:18:25 np0005603609 nova_compute[221550]: 2026-01-31 08:18:25.362 221554 DEBUG oslo_concurrency.lockutils [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:25 np0005603609 nova_compute[221550]: 2026-01-31 08:18:25.363 221554 DEBUG oslo_concurrency.lockutils [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:25 np0005603609 nova_compute[221550]: 2026-01-31 08:18:25.541 221554 DEBUG oslo_concurrency.processutils [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:18:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:18:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:25.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:18:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:18:26 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3249558292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:18:26 np0005603609 nova_compute[221550]: 2026-01-31 08:18:26.019 221554 DEBUG oslo_concurrency.processutils [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:18:26 np0005603609 nova_compute[221550]: 2026-01-31 08:18:26.025 221554 DEBUG nova.compute.provider_tree [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:18:26 np0005603609 nova_compute[221550]: 2026-01-31 08:18:26.075 221554 DEBUG nova.scheduler.client.report [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:18:26 np0005603609 nova_compute[221550]: 2026-01-31 08:18:26.371 221554 DEBUG oslo_concurrency.lockutils [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:26 np0005603609 nova_compute[221550]: 2026-01-31 08:18:26.516 221554 INFO nova.scheduler.client.report [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Deleted allocations for instance 9fac8a1f-1f12-41c8-81b6-59c0bf962590#033[00m
Jan 31 03:18:26 np0005603609 nova_compute[221550]: 2026-01-31 08:18:26.771 221554 DEBUG oslo_concurrency.lockutils [None req-aadad0e4-73a9-43ce-918e-2a598c568630 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "9fac8a1f-1f12-41c8-81b6-59c0bf962590" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:26 np0005603609 nova_compute[221550]: 2026-01-31 08:18:26.858 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:26.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e325 e325: 3 total, 3 up, 3 in
Jan 31 03:18:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:27.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:27 np0005603609 nova_compute[221550]: 2026-01-31 08:18:27.695 221554 DEBUG nova.compute.manager [req-10bd8063-73b9-489e-98ca-db5830631061 req-9b5a1b02-6683-4b16-ada4-ef02ec094cdc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Received event network-vif-deleted-bec4c251-b190-498a-8020-4390b883634e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:18:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:18:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:18:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:18:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:18:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:28.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:18:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:29.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:29 np0005603609 nova_compute[221550]: 2026-01-31 08:18:29.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e326 e326: 3 total, 3 up, 3 in
Jan 31 03:18:30 np0005603609 nova_compute[221550]: 2026-01-31 08:18:30.169 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:31.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:31.513 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:18:31 np0005603609 nova_compute[221550]: 2026-01-31 08:18:31.513 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:31.514 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:18:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:31.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:31 np0005603609 nova_compute[221550]: 2026-01-31 08:18:31.899 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:18:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:33.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:18:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:33.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:33 np0005603609 nova_compute[221550]: 2026-01-31 08:18:33.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:34 np0005603609 nova_compute[221550]: 2026-01-31 08:18:34.465 221554 DEBUG oslo_concurrency.lockutils [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:34 np0005603609 nova_compute[221550]: 2026-01-31 08:18:34.466 221554 DEBUG oslo_concurrency.lockutils [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:34 np0005603609 nova_compute[221550]: 2026-01-31 08:18:34.466 221554 DEBUG oslo_concurrency.lockutils [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:34 np0005603609 nova_compute[221550]: 2026-01-31 08:18:34.466 221554 DEBUG oslo_concurrency.lockutils [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:34 np0005603609 nova_compute[221550]: 2026-01-31 08:18:34.466 221554 DEBUG oslo_concurrency.lockutils [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:34 np0005603609 nova_compute[221550]: 2026-01-31 08:18:34.467 221554 INFO nova.compute.manager [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Terminating instance#033[00m
Jan 31 03:18:34 np0005603609 nova_compute[221550]: 2026-01-31 08:18:34.468 221554 DEBUG nova.compute.manager [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:18:34 np0005603609 nova_compute[221550]: 2026-01-31 08:18:34.722 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:18:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:35.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.171 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:35 np0005603609 kernel: tap726c6b35-bb (unregistering): left promiscuous mode
Jan 31 03:18:35 np0005603609 NetworkManager[49064]: <info>  [1769847515.4392] device (tap726c6b35-bb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:18:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:18:35Z|00574|binding|INFO|Releasing lport 726c6b35-bbdd-44f9-81a9-69aa19514bdd from this chassis (sb_readonly=0)
Jan 31 03:18:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:18:35Z|00575|binding|INFO|Setting lport 726c6b35-bbdd-44f9-81a9-69aa19514bdd down in Southbound
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.448 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:18:35Z|00576|binding|INFO|Removing iface tap726c6b35-bb ovn-installed in OVS
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.452 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:35.458 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:13:77 10.100.0.10'], port_security=['fa:16:3e:49:13:77 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '518d36e7-b77a-415d-bd48-f53e637ed0d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=726c6b35-bbdd-44f9-81a9-69aa19514bdd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:18:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:35.460 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 726c6b35-bbdd-44f9-81a9-69aa19514bdd in datapath 088d6992-6ba6-4719-a977-b3d306740157 unbound from our chassis#033[00m
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.461 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:35.463 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157#033[00m
Jan 31 03:18:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:35.478 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1f8b51a9-3c3e-4b92-9a73-71828705f90e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:35 np0005603609 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Jan 31 03:18:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:35.503 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab6d300-ce1e-468d-aae3-60b983d35d22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:35 np0005603609 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000007d.scope: Consumed 24.797s CPU time.
Jan 31 03:18:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:35.507 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[13dac59c-cb0e-4db6-95ad-fa6a1237789a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:35 np0005603609 systemd-machined[190912]: Machine qemu-67-instance-0000007d terminated.
Jan 31 03:18:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:35.537 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[ea7b820c-0ac8-4d07-b16f-b3e409d45100]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:35.554 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[61a8faef-cc41-4888-856c-f91ecf90bee8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 27, 'rx_bytes': 952, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 27, 'rx_bytes': 952, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736325, 'reachable_time': 39418, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277339, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:35.567 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0feb7ba7-c629-45da-8d13-87b8c5a47ea9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736331, 'tstamp': 736331}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277340, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap088d6992-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736333, 'tstamp': 736333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277340, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:35.569 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.571 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.574 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:35.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:35.575 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:18:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:35.576 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:18:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:35.576 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:18:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:35.577 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.662 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.704 221554 INFO nova.virt.libvirt.driver [-] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Instance destroyed successfully.#033[00m
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.706 221554 DEBUG nova.objects.instance [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'resources' on Instance uuid 518d36e7-b77a-415d-bd48-f53e637ed0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.710 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.793 221554 DEBUG nova.virt.libvirt.vif [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:12:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-478071790',display_name='tempest-ServerStableDeviceRescueTest-server-478071790',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-478071790',id=125,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:13:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1633c84ea1bf46b080aaafd30bbcf25f',ramdisk_id='',reservation_id='r-dq3p1x4r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-569420416',owner_user_name='tempest-ServerStableDeviceRescueTest-569420416-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:13:26Z,user_data=None,user_id='d7d9a44201d548aba1e1654e136ddd06',uuid=518d36e7-b77a-415d-bd48-f53e637ed0d8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.795 221554 DEBUG nova.network.os_vif_util [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converting VIF {"id": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "address": "fa:16:3e:49:13:77", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap726c6b35-bb", "ovs_interfaceid": "726c6b35-bbdd-44f9-81a9-69aa19514bdd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.796 221554 DEBUG nova.network.os_vif_util [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:13:77,bridge_name='br-int',has_traffic_filtering=True,id=726c6b35-bbdd-44f9-81a9-69aa19514bdd,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap726c6b35-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.796 221554 DEBUG os_vif [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:13:77,bridge_name='br-int',has_traffic_filtering=True,id=726c6b35-bbdd-44f9-81a9-69aa19514bdd,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap726c6b35-bb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.798 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.799 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap726c6b35-bb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.838 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.841 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:35 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:18:35 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:18:35 np0005603609 nova_compute[221550]: 2026-01-31 08:18:35.844 221554 INFO os_vif [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:13:77,bridge_name='br-int',has_traffic_filtering=True,id=726c6b35-bbdd-44f9-81a9-69aa19514bdd,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap726c6b35-bb')#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.329 221554 INFO nova.virt.libvirt.driver [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Deleting instance files /var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8_del#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.330 221554 INFO nova.virt.libvirt.driver [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Deletion of /var/lib/nova/instances/518d36e7-b77a-415d-bd48-f53e637ed0d8_del complete#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.451 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.451 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.452 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.452 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:18:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:36.517 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.619 221554 INFO nova.compute.manager [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Took 2.15 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.620 221554 DEBUG oslo.service.loopingcall [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.620 221554 DEBUG nova.compute.manager [-] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.621 221554 DEBUG nova.network.neutron [-] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.644 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847501.6427066, 9fac8a1f-1f12-41c8-81b6-59c0bf962590 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.644 221554 INFO nova.compute.manager [-] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.733 221554 DEBUG nova.compute.manager [None req-0fb02f27-6eb5-45fa-8687-b9ba761f3a1c - - - - - -] [instance: 9fac8a1f-1f12-41c8-81b6-59c0bf962590] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.779 221554 DEBUG nova.compute.manager [req-58698756-0329-46ce-a0c6-b083bec022b4 req-f5eebd06-822c-4136-8218-fa46e36deb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received event network-vif-unplugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.780 221554 DEBUG oslo_concurrency.lockutils [req-58698756-0329-46ce-a0c6-b083bec022b4 req-f5eebd06-822c-4136-8218-fa46e36deb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.781 221554 DEBUG oslo_concurrency.lockutils [req-58698756-0329-46ce-a0c6-b083bec022b4 req-f5eebd06-822c-4136-8218-fa46e36deb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.781 221554 DEBUG oslo_concurrency.lockutils [req-58698756-0329-46ce-a0c6-b083bec022b4 req-f5eebd06-822c-4136-8218-fa46e36deb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.782 221554 DEBUG nova.compute.manager [req-58698756-0329-46ce-a0c6-b083bec022b4 req-f5eebd06-822c-4136-8218-fa46e36deb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] No waiting events found dispatching network-vif-unplugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:18:36 np0005603609 nova_compute[221550]: 2026-01-31 08:18:36.782 221554 DEBUG nova.compute.manager [req-58698756-0329-46ce-a0c6-b083bec022b4 req-f5eebd06-822c-4136-8218-fa46e36deb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received event network-vif-unplugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:18:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:37.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:37.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:39.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:39 np0005603609 nova_compute[221550]: 2026-01-31 08:18:39.098 221554 DEBUG nova.compute.manager [req-2de0e625-0b12-45b0-a729-115e5ad86004 req-5b1bf5d2-190d-49ff-9b87-497f5b9bbd78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:18:39 np0005603609 nova_compute[221550]: 2026-01-31 08:18:39.099 221554 DEBUG oslo_concurrency.lockutils [req-2de0e625-0b12-45b0-a729-115e5ad86004 req-5b1bf5d2-190d-49ff-9b87-497f5b9bbd78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:39 np0005603609 nova_compute[221550]: 2026-01-31 08:18:39.099 221554 DEBUG oslo_concurrency.lockutils [req-2de0e625-0b12-45b0-a729-115e5ad86004 req-5b1bf5d2-190d-49ff-9b87-497f5b9bbd78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:39 np0005603609 nova_compute[221550]: 2026-01-31 08:18:39.099 221554 DEBUG oslo_concurrency.lockutils [req-2de0e625-0b12-45b0-a729-115e5ad86004 req-5b1bf5d2-190d-49ff-9b87-497f5b9bbd78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:39 np0005603609 nova_compute[221550]: 2026-01-31 08:18:39.100 221554 DEBUG nova.compute.manager [req-2de0e625-0b12-45b0-a729-115e5ad86004 req-5b1bf5d2-190d-49ff-9b87-497f5b9bbd78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] No waiting events found dispatching network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:18:39 np0005603609 nova_compute[221550]: 2026-01-31 08:18:39.100 221554 WARNING nova.compute.manager [req-2de0e625-0b12-45b0-a729-115e5ad86004 req-5b1bf5d2-190d-49ff-9b87-497f5b9bbd78 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received unexpected event network-vif-plugged-726c6b35-bbdd-44f9-81a9-69aa19514bdd for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:18:39 np0005603609 nova_compute[221550]: 2026-01-31 08:18:39.311 221554 DEBUG nova.network.neutron [-] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:18:39 np0005603609 nova_compute[221550]: 2026-01-31 08:18:39.341 221554 INFO nova.compute.manager [-] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Took 2.72 seconds to deallocate network for instance.#033[00m
Jan 31 03:18:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:39.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:39 np0005603609 nova_compute[221550]: 2026-01-31 08:18:39.899 221554 DEBUG oslo_concurrency.lockutils [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:39 np0005603609 nova_compute[221550]: 2026-01-31 08:18:39.899 221554 DEBUG oslo_concurrency.lockutils [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:40 np0005603609 nova_compute[221550]: 2026-01-31 08:18:40.070 221554 DEBUG oslo_concurrency.processutils [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:18:40 np0005603609 nova_compute[221550]: 2026-01-31 08:18:40.173 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:18:40 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1463827182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:18:40 np0005603609 nova_compute[221550]: 2026-01-31 08:18:40.484 221554 DEBUG oslo_concurrency.processutils [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:18:40 np0005603609 nova_compute[221550]: 2026-01-31 08:18:40.489 221554 DEBUG nova.compute.provider_tree [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:18:40 np0005603609 nova_compute[221550]: 2026-01-31 08:18:40.696 221554 DEBUG nova.scheduler.client.report [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:18:40 np0005603609 nova_compute[221550]: 2026-01-31 08:18:40.822 221554 DEBUG oslo_concurrency.lockutils [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:40 np0005603609 nova_compute[221550]: 2026-01-31 08:18:40.879 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:40 np0005603609 nova_compute[221550]: 2026-01-31 08:18:40.888 221554 INFO nova.scheduler.client.report [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Deleted allocations for instance 518d36e7-b77a-415d-bd48-f53e637ed0d8#033[00m
Jan 31 03:18:40 np0005603609 nova_compute[221550]: 2026-01-31 08:18:40.943 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Updating instance_info_cache with network_info: [{"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:18:40 np0005603609 nova_compute[221550]: 2026-01-31 08:18:40.972 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-981eb990-ec0c-4673-98e7-102afbe0bb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:18:40 np0005603609 nova_compute[221550]: 2026-01-31 08:18:40.973 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:18:40 np0005603609 nova_compute[221550]: 2026-01-31 08:18:40.974 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:40 np0005603609 nova_compute[221550]: 2026-01-31 08:18:40.974 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:40 np0005603609 nova_compute[221550]: 2026-01-31 08:18:40.974 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:40 np0005603609 nova_compute[221550]: 2026-01-31 08:18:40.974 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:18:40 np0005603609 nova_compute[221550]: 2026-01-31 08:18:40.975 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:41.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:41 np0005603609 nova_compute[221550]: 2026-01-31 08:18:41.057 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:41 np0005603609 nova_compute[221550]: 2026-01-31 08:18:41.057 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:41 np0005603609 nova_compute[221550]: 2026-01-31 08:18:41.058 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:41 np0005603609 nova_compute[221550]: 2026-01-31 08:18:41.058 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:18:41 np0005603609 nova_compute[221550]: 2026-01-31 08:18:41.059 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:18:41 np0005603609 nova_compute[221550]: 2026-01-31 08:18:41.099 221554 DEBUG oslo_concurrency.lockutils [None req-732a9b2f-8c10-4c14-aa47-7a3f85773b47 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "518d36e7-b77a-415d-bd48-f53e637ed0d8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:41 np0005603609 nova_compute[221550]: 2026-01-31 08:18:41.331 221554 DEBUG nova.compute.manager [req-2b0a7b0d-b138-44e3-8443-b9059b404241 req-60e479a7-be68-4d0a-bc38-c1c6b3b6b550 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Received event network-vif-deleted-726c6b35-bbdd-44f9-81a9-69aa19514bdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:18:41 np0005603609 nova_compute[221550]: 2026-01-31 08:18:41.470 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:18:41 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2356523471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:18:41 np0005603609 nova_compute[221550]: 2026-01-31 08:18:41.489 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:18:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:41.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:41 np0005603609 nova_compute[221550]: 2026-01-31 08:18:41.614 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:18:41 np0005603609 nova_compute[221550]: 2026-01-31 08:18:41.614 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:18:41 np0005603609 nova_compute[221550]: 2026-01-31 08:18:41.739 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:18:41 np0005603609 nova_compute[221550]: 2026-01-31 08:18:41.740 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4231MB free_disk=20.926910400390625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:18:41 np0005603609 nova_compute[221550]: 2026-01-31 08:18:41.740 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:41 np0005603609 nova_compute[221550]: 2026-01-31 08:18:41.741 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:43.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:43 np0005603609 nova_compute[221550]: 2026-01-31 08:18:43.212 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 981eb990-ec0c-4673-98e7-102afbe0bb51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:18:43 np0005603609 nova_compute[221550]: 2026-01-31 08:18:43.213 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:18:43 np0005603609 nova_compute[221550]: 2026-01-31 08:18:43.213 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:18:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e327 e327: 3 total, 3 up, 3 in
Jan 31 03:18:43 np0005603609 nova_compute[221550]: 2026-01-31 08:18:43.274 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:18:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:43.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:18:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3652022244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:18:43 np0005603609 nova_compute[221550]: 2026-01-31 08:18:43.695 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:18:43 np0005603609 nova_compute[221550]: 2026-01-31 08:18:43.701 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:18:43 np0005603609 nova_compute[221550]: 2026-01-31 08:18:43.835 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:18:43 np0005603609 nova_compute[221550]: 2026-01-31 08:18:43.839 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:18:43 np0005603609 nova_compute[221550]: 2026-01-31 08:18:43.839 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e328 e328: 3 total, 3 up, 3 in
Jan 31 03:18:45 np0005603609 nova_compute[221550]: 2026-01-31 08:18:45.176 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:18:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:45.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:18:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:45.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:45 np0005603609 nova_compute[221550]: 2026-01-31 08:18:45.885 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:46 np0005603609 nova_compute[221550]: 2026-01-31 08:18:46.792 221554 DEBUG oslo_concurrency.lockutils [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:46 np0005603609 nova_compute[221550]: 2026-01-31 08:18:46.792 221554 DEBUG oslo_concurrency.lockutils [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:46 np0005603609 nova_compute[221550]: 2026-01-31 08:18:46.793 221554 DEBUG oslo_concurrency.lockutils [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:46 np0005603609 nova_compute[221550]: 2026-01-31 08:18:46.793 221554 DEBUG oslo_concurrency.lockutils [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:46 np0005603609 nova_compute[221550]: 2026-01-31 08:18:46.794 221554 DEBUG oslo_concurrency.lockutils [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:46 np0005603609 nova_compute[221550]: 2026-01-31 08:18:46.796 221554 INFO nova.compute.manager [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Terminating instance#033[00m
Jan 31 03:18:46 np0005603609 nova_compute[221550]: 2026-01-31 08:18:46.797 221554 DEBUG nova.compute.manager [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:18:47 np0005603609 kernel: tap05c06d54-12 (unregistering): left promiscuous mode
Jan 31 03:18:47 np0005603609 NetworkManager[49064]: <info>  [1769847527.0603] device (tap05c06d54-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:18:47 np0005603609 ovn_controller[130359]: 2026-01-31T08:18:47Z|00577|binding|INFO|Releasing lport 05c06d54-1257-48d3-8a5a-f92423aadbd8 from this chassis (sb_readonly=0)
Jan 31 03:18:47 np0005603609 ovn_controller[130359]: 2026-01-31T08:18:47Z|00578|binding|INFO|Setting lport 05c06d54-1257-48d3-8a5a-f92423aadbd8 down in Southbound
Jan 31 03:18:47 np0005603609 ovn_controller[130359]: 2026-01-31T08:18:47Z|00579|binding|INFO|Removing iface tap05c06d54-12 ovn-installed in OVS
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.071 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.075 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:47 np0005603609 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000079.scope: Deactivated successfully.
Jan 31 03:18:47 np0005603609 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000079.scope: Consumed 27.719s CPU time.
Jan 31 03:18:47 np0005603609 systemd-machined[190912]: Machine qemu-63-instance-00000079 terminated.
Jan 31 03:18:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:47.127 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:5b:36 10.100.0.6'], port_security=['fa:16:3e:3e:5b:36 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '981eb990-ec0c-4673-98e7-102afbe0bb51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6bf409ce-5d7d-4755-a538-3f1e81a177fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=05c06d54-1257-48d3-8a5a-f92423aadbd8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:18:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:47.129 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 05c06d54-1257-48d3-8a5a-f92423aadbd8 in datapath 088d6992-6ba6-4719-a977-b3d306740157 unbound from our chassis#033[00m
Jan 31 03:18:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:47.131 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 088d6992-6ba6-4719-a977-b3d306740157, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:18:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:47.132 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0386ba-023e-4544-8c23-ee041a27545a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:47.133 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 namespace which is not needed anymore#033[00m
Jan 31 03:18:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:47.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.229 221554 INFO nova.virt.libvirt.driver [-] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Instance destroyed successfully.#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.230 221554 DEBUG nova.objects.instance [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'resources' on Instance uuid 981eb990-ec0c-4673-98e7-102afbe0bb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.254 221554 DEBUG nova.virt.libvirt.vif [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:11:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-562797465',display_name='tempest-ServerStableDeviceRescueTest-server-562797465',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-562797465',id=121,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1633c84ea1bf46b080aaafd30bbcf25f',ramdisk_id='',reservation_id='r-orbojs0e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-569420416',owner_user_name='tempest-ServerStableDeviceRescueTest-569420416-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:12:04Z,user_data=None,user_id='d7d9a44201d548aba1e1654e136ddd06',uuid=981eb990-ec0c-4673-98e7-102afbe0bb51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.255 221554 DEBUG nova.network.os_vif_util [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converting VIF {"id": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "address": "fa:16:3e:3e:5b:36", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c06d54-12", "ovs_interfaceid": "05c06d54-1257-48d3-8a5a-f92423aadbd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.256 221554 DEBUG nova.network.os_vif_util [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:5b:36,bridge_name='br-int',has_traffic_filtering=True,id=05c06d54-1257-48d3-8a5a-f92423aadbd8,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c06d54-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.256 221554 DEBUG os_vif [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:5b:36,bridge_name='br-int',has_traffic_filtering=True,id=05c06d54-1257-48d3-8a5a-f92423aadbd8,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c06d54-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.257 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.257 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05c06d54-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.259 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.260 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.262 221554 INFO os_vif [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:5b:36,bridge_name='br-int',has_traffic_filtering=True,id=05c06d54-1257-48d3-8a5a-f92423aadbd8,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c06d54-12')#033[00m
Jan 31 03:18:47 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[272314]: [NOTICE]   (272318) : haproxy version is 2.8.14-c23fe91
Jan 31 03:18:47 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[272314]: [NOTICE]   (272318) : path to executable is /usr/sbin/haproxy
Jan 31 03:18:47 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[272314]: [WARNING]  (272318) : Exiting Master process...
Jan 31 03:18:47 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[272314]: [ALERT]    (272318) : Current worker (272320) exited with code 143 (Terminated)
Jan 31 03:18:47 np0005603609 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[272314]: [WARNING]  (272318) : All workers exited. Exiting... (0)
Jan 31 03:18:47 np0005603609 systemd[1]: libpod-cd5f18ff3fc61f465edc51c8ecbd6d32883b66a9654a2aaa315a3bd7f10fb2de.scope: Deactivated successfully.
Jan 31 03:18:47 np0005603609 conmon[272314]: conmon cd5f18ff3fc61f465edc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd5f18ff3fc61f465edc51c8ecbd6d32883b66a9654a2aaa315a3bd7f10fb2de.scope/container/memory.events
Jan 31 03:18:47 np0005603609 podman[277467]: 2026-01-31 08:18:47.284028994 +0000 UTC m=+0.060972646 container died cd5f18ff3fc61f465edc51c8ecbd6d32883b66a9654a2aaa315a3bd7f10fb2de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 03:18:47 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd5f18ff3fc61f465edc51c8ecbd6d32883b66a9654a2aaa315a3bd7f10fb2de-userdata-shm.mount: Deactivated successfully.
Jan 31 03:18:47 np0005603609 systemd[1]: var-lib-containers-storage-overlay-b188977afa39a4383fd3ca9c7e37030af5f3328a213c1035da9806b1aeb2ba4c-merged.mount: Deactivated successfully.
Jan 31 03:18:47 np0005603609 podman[277467]: 2026-01-31 08:18:47.331220378 +0000 UTC m=+0.108164070 container cleanup cd5f18ff3fc61f465edc51c8ecbd6d32883b66a9654a2aaa315a3bd7f10fb2de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:18:47 np0005603609 systemd[1]: libpod-conmon-cd5f18ff3fc61f465edc51c8ecbd6d32883b66a9654a2aaa315a3bd7f10fb2de.scope: Deactivated successfully.
Jan 31 03:18:47 np0005603609 podman[277521]: 2026-01-31 08:18:47.393064463 +0000 UTC m=+0.047349489 container remove cd5f18ff3fc61f465edc51c8ecbd6d32883b66a9654a2aaa315a3bd7f10fb2de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:18:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:47.397 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7c42b6-e7f0-4c2f-b997-1d8ce3f79aa7]: (4, ('Sat Jan 31 08:18:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 (cd5f18ff3fc61f465edc51c8ecbd6d32883b66a9654a2aaa315a3bd7f10fb2de)\ncd5f18ff3fc61f465edc51c8ecbd6d32883b66a9654a2aaa315a3bd7f10fb2de\nSat Jan 31 08:18:47 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 (cd5f18ff3fc61f465edc51c8ecbd6d32883b66a9654a2aaa315a3bd7f10fb2de)\ncd5f18ff3fc61f465edc51c8ecbd6d32883b66a9654a2aaa315a3bd7f10fb2de\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:47.398 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[afde8f1c-4d57-46e8-8d76-6db4f4365b42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:47.400 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.401 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:47 np0005603609 kernel: tap088d6992-60: left promiscuous mode
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.408 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:47.415 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce599bd-6488-4b46-b576-312734f3aef5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:47.440 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7905b122-24bc-4019-8a07-c66afee2ba19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:47.442 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d224e3df-0f96-470b-abef-10bcca0cf364]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:47.458 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[09433fdd-f916-450c-81bb-d6577fc8bb55]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736319, 'reachable_time': 35755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277537, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:47 np0005603609 systemd[1]: run-netns-ovnmeta\x2d088d6992\x2d6ba6\x2d4719\x2da977\x2db3d306740157.mount: Deactivated successfully.
Jan 31 03:18:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:47.462 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:18:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:18:47.462 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[a57d233b-f131-4ed6-a89b-1dd6c5acc16e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.546 221554 DEBUG nova.compute.manager [req-f768fc3c-99f7-4123-bd6e-01e9941ee840 req-5a65228b-18bc-458b-9e4a-65d81b0e2cf5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received event network-vif-unplugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.546 221554 DEBUG oslo_concurrency.lockutils [req-f768fc3c-99f7-4123-bd6e-01e9941ee840 req-5a65228b-18bc-458b-9e4a-65d81b0e2cf5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.546 221554 DEBUG oslo_concurrency.lockutils [req-f768fc3c-99f7-4123-bd6e-01e9941ee840 req-5a65228b-18bc-458b-9e4a-65d81b0e2cf5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.547 221554 DEBUG oslo_concurrency.lockutils [req-f768fc3c-99f7-4123-bd6e-01e9941ee840 req-5a65228b-18bc-458b-9e4a-65d81b0e2cf5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.547 221554 DEBUG nova.compute.manager [req-f768fc3c-99f7-4123-bd6e-01e9941ee840 req-5a65228b-18bc-458b-9e4a-65d81b0e2cf5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] No waiting events found dispatching network-vif-unplugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:18:47 np0005603609 nova_compute[221550]: 2026-01-31 08:18:47.547 221554 DEBUG nova.compute.manager [req-f768fc3c-99f7-4123-bd6e-01e9941ee840 req-5a65228b-18bc-458b-9e4a-65d81b0e2cf5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received event network-vif-unplugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:18:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:18:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:47.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:18:47 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Jan 31 03:18:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:49.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:49.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:49 np0005603609 nova_compute[221550]: 2026-01-31 08:18:49.722 221554 DEBUG nova.compute.manager [req-6fdfb547-bde0-47b7-aad9-87b6a44ece81 req-6a139c6a-ff8f-423e-8801-192bc37eb76d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:18:49 np0005603609 nova_compute[221550]: 2026-01-31 08:18:49.723 221554 DEBUG oslo_concurrency.lockutils [req-6fdfb547-bde0-47b7-aad9-87b6a44ece81 req-6a139c6a-ff8f-423e-8801-192bc37eb76d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:49 np0005603609 nova_compute[221550]: 2026-01-31 08:18:49.723 221554 DEBUG oslo_concurrency.lockutils [req-6fdfb547-bde0-47b7-aad9-87b6a44ece81 req-6a139c6a-ff8f-423e-8801-192bc37eb76d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:49 np0005603609 nova_compute[221550]: 2026-01-31 08:18:49.723 221554 DEBUG oslo_concurrency.lockutils [req-6fdfb547-bde0-47b7-aad9-87b6a44ece81 req-6a139c6a-ff8f-423e-8801-192bc37eb76d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:49 np0005603609 nova_compute[221550]: 2026-01-31 08:18:49.723 221554 DEBUG nova.compute.manager [req-6fdfb547-bde0-47b7-aad9-87b6a44ece81 req-6a139c6a-ff8f-423e-8801-192bc37eb76d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] No waiting events found dispatching network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:18:49 np0005603609 nova_compute[221550]: 2026-01-31 08:18:49.724 221554 WARNING nova.compute.manager [req-6fdfb547-bde0-47b7-aad9-87b6a44ece81 req-6a139c6a-ff8f-423e-8801-192bc37eb76d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received unexpected event network-vif-plugged-05c06d54-1257-48d3-8a5a-f92423aadbd8 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:18:49 np0005603609 nova_compute[221550]: 2026-01-31 08:18:49.834 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:50 np0005603609 nova_compute[221550]: 2026-01-31 08:18:50.177 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:50 np0005603609 nova_compute[221550]: 2026-01-31 08:18:50.289 221554 INFO nova.virt.libvirt.driver [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Deleting instance files /var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51_del#033[00m
Jan 31 03:18:50 np0005603609 nova_compute[221550]: 2026-01-31 08:18:50.290 221554 INFO nova.virt.libvirt.driver [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Deletion of /var/lib/nova/instances/981eb990-ec0c-4673-98e7-102afbe0bb51_del complete#033[00m
Jan 31 03:18:50 np0005603609 nova_compute[221550]: 2026-01-31 08:18:50.380 221554 INFO nova.compute.manager [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Took 3.58 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:18:50 np0005603609 nova_compute[221550]: 2026-01-31 08:18:50.381 221554 DEBUG oslo.service.loopingcall [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:18:50 np0005603609 nova_compute[221550]: 2026-01-31 08:18:50.382 221554 DEBUG nova.compute.manager [-] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:18:50 np0005603609 nova_compute[221550]: 2026-01-31 08:18:50.382 221554 DEBUG nova.network.neutron [-] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:18:50 np0005603609 nova_compute[221550]: 2026-01-31 08:18:50.496 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:50 np0005603609 nova_compute[221550]: 2026-01-31 08:18:50.703 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847515.7018085, 518d36e7-b77a-415d-bd48-f53e637ed0d8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:18:50 np0005603609 nova_compute[221550]: 2026-01-31 08:18:50.703 221554 INFO nova.compute.manager [-] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:18:50 np0005603609 nova_compute[221550]: 2026-01-31 08:18:50.781 221554 DEBUG nova.compute.manager [None req-ec9a24a5-03f3-40d3-ba53-498e72d7ff42 - - - - - -] [instance: 518d36e7-b77a-415d-bd48-f53e637ed0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:18:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:51.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:51.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:51 np0005603609 nova_compute[221550]: 2026-01-31 08:18:51.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:52 np0005603609 podman[277540]: 2026-01-31 08:18:52.169714017 +0000 UTC m=+0.057266016 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 03:18:52 np0005603609 podman[277539]: 2026-01-31 08:18:52.228482309 +0000 UTC m=+0.113202950 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 03:18:52 np0005603609 nova_compute[221550]: 2026-01-31 08:18:52.260 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e329 e329: 3 total, 3 up, 3 in
Jan 31 03:18:52 np0005603609 nova_compute[221550]: 2026-01-31 08:18:52.611 221554 DEBUG nova.network.neutron [-] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:18:52 np0005603609 nova_compute[221550]: 2026-01-31 08:18:52.628 221554 DEBUG nova.compute.manager [req-e9999ba8-18ce-404e-9f72-612c7e81a4fb req-520279f3-47ba-4bc1-bfca-eef634f1cc37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Received event network-vif-deleted-05c06d54-1257-48d3-8a5a-f92423aadbd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:18:52 np0005603609 nova_compute[221550]: 2026-01-31 08:18:52.628 221554 INFO nova.compute.manager [req-e9999ba8-18ce-404e-9f72-612c7e81a4fb req-520279f3-47ba-4bc1-bfca-eef634f1cc37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Neutron deleted interface 05c06d54-1257-48d3-8a5a-f92423aadbd8; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:18:52 np0005603609 nova_compute[221550]: 2026-01-31 08:18:52.628 221554 DEBUG nova.network.neutron [req-e9999ba8-18ce-404e-9f72-612c7e81a4fb req-520279f3-47ba-4bc1-bfca-eef634f1cc37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:18:52 np0005603609 nova_compute[221550]: 2026-01-31 08:18:52.852 221554 INFO nova.compute.manager [-] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Took 2.47 seconds to deallocate network for instance.#033[00m
Jan 31 03:18:52 np0005603609 nova_compute[221550]: 2026-01-31 08:18:52.885 221554 DEBUG nova.compute.manager [req-e9999ba8-18ce-404e-9f72-612c7e81a4fb req-520279f3-47ba-4bc1-bfca-eef634f1cc37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Detach interface failed, port_id=05c06d54-1257-48d3-8a5a-f92423aadbd8, reason: Instance 981eb990-ec0c-4673-98e7-102afbe0bb51 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:18:53 np0005603609 nova_compute[221550]: 2026-01-31 08:18:53.194 221554 DEBUG oslo_concurrency.lockutils [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:53 np0005603609 nova_compute[221550]: 2026-01-31 08:18:53.195 221554 DEBUG oslo_concurrency.lockutils [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:53.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:53 np0005603609 nova_compute[221550]: 2026-01-31 08:18:53.259 221554 DEBUG oslo_concurrency.processutils [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:18:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:18:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3720147076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:18:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:18:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3720147076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:18:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:18:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:53.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:18:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:18:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/467398597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:18:53 np0005603609 nova_compute[221550]: 2026-01-31 08:18:53.648 221554 DEBUG oslo_concurrency.processutils [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:18:53 np0005603609 nova_compute[221550]: 2026-01-31 08:18:53.654 221554 DEBUG nova.compute.provider_tree [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:18:53 np0005603609 nova_compute[221550]: 2026-01-31 08:18:53.953 221554 DEBUG nova.scheduler.client.report [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:18:54 np0005603609 nova_compute[221550]: 2026-01-31 08:18:54.494 221554 DEBUG oslo_concurrency.lockutils [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:55 np0005603609 nova_compute[221550]: 2026-01-31 08:18:55.217 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:55.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:18:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:55.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:18:56 np0005603609 nova_compute[221550]: 2026-01-31 08:18:56.095 221554 INFO nova.scheduler.client.report [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Deleted allocations for instance 981eb990-ec0c-4673-98e7-102afbe0bb51#033[00m
Jan 31 03:18:56 np0005603609 nova_compute[221550]: 2026-01-31 08:18:56.538 221554 DEBUG oslo_concurrency.lockutils [None req-1d9293db-73d3-4b8f-a330-abccc5ba7dcf d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "981eb990-ec0c-4673-98e7-102afbe0bb51" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:57.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:57 np0005603609 nova_compute[221550]: 2026-01-31 08:18:57.263 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:57.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:57 np0005603609 nova_compute[221550]: 2026-01-31 08:18:57.967 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "ac849bdd-07a0-459f-b207-2a3f239d0272" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:57 np0005603609 nova_compute[221550]: 2026-01-31 08:18:57.967 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:58 np0005603609 nova_compute[221550]: 2026-01-31 08:18:58.287 221554 DEBUG nova.compute.manager [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:18:58 np0005603609 nova_compute[221550]: 2026-01-31 08:18:58.850 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:58 np0005603609 nova_compute[221550]: 2026-01-31 08:18:58.850 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:58 np0005603609 nova_compute[221550]: 2026-01-31 08:18:58.858 221554 DEBUG nova.virt.hardware [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:18:58 np0005603609 nova_compute[221550]: 2026-01-31 08:18:58.859 221554 INFO nova.compute.claims [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:18:59 np0005603609 nova_compute[221550]: 2026-01-31 08:18:59.224 221554 DEBUG oslo_concurrency.processutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:18:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:59.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:18:59 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3710704562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:18:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:18:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:18:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:59.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:18:59 np0005603609 nova_compute[221550]: 2026-01-31 08:18:59.608 221554 DEBUG oslo_concurrency.processutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:18:59 np0005603609 nova_compute[221550]: 2026-01-31 08:18:59.612 221554 DEBUG nova.compute.provider_tree [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:18:59 np0005603609 nova_compute[221550]: 2026-01-31 08:18:59.765 221554 DEBUG nova.scheduler.client.report [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:18:59 np0005603609 nova_compute[221550]: 2026-01-31 08:18:59.887 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:18:59 np0005603609 nova_compute[221550]: 2026-01-31 08:18:59.888 221554 DEBUG nova.compute.manager [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:19:00 np0005603609 nova_compute[221550]: 2026-01-31 08:19:00.091 221554 DEBUG nova.compute.manager [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:19:00 np0005603609 nova_compute[221550]: 2026-01-31 08:19:00.091 221554 DEBUG nova.network.neutron [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:19:00 np0005603609 nova_compute[221550]: 2026-01-31 08:19:00.219 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:19:00 np0005603609 nova_compute[221550]: 2026-01-31 08:19:00.224 221554 INFO nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:19:00 np0005603609 nova_compute[221550]: 2026-01-31 08:19:00.319 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:19:00 np0005603609 nova_compute[221550]: 2026-01-31 08:19:00.445 221554 DEBUG nova.policy [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b153d2832404e5b9250422b70ba522d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b06982960ad4453b8e542cb6330835d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:19:00 np0005603609 nova_compute[221550]: 2026-01-31 08:19:00.449 221554 DEBUG nova.compute.manager [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:19:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:01.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:01 np0005603609 nova_compute[221550]: 2026-01-31 08:19:01.316 221554 DEBUG nova.compute.manager [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 03:19:01 np0005603609 nova_compute[221550]: 2026-01-31 08:19:01.318 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:19:01 np0005603609 nova_compute[221550]: 2026-01-31 08:19:01.318 221554 INFO nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Creating image(s)
Jan 31 03:19:01 np0005603609 nova_compute[221550]: 2026-01-31 08:19:01.349 221554 DEBUG nova.storage.rbd_utils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image ac849bdd-07a0-459f-b207-2a3f239d0272_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:19:01 np0005603609 nova_compute[221550]: 2026-01-31 08:19:01.377 221554 DEBUG nova.storage.rbd_utils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image ac849bdd-07a0-459f-b207-2a3f239d0272_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:19:01 np0005603609 nova_compute[221550]: 2026-01-31 08:19:01.405 221554 DEBUG nova.storage.rbd_utils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image ac849bdd-07a0-459f-b207-2a3f239d0272_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:19:01 np0005603609 nova_compute[221550]: 2026-01-31 08:19:01.410 221554 DEBUG oslo_concurrency.processutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:19:01 np0005603609 nova_compute[221550]: 2026-01-31 08:19:01.479 221554 DEBUG oslo_concurrency.processutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:19:01 np0005603609 nova_compute[221550]: 2026-01-31 08:19:01.481 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:19:01 np0005603609 nova_compute[221550]: 2026-01-31 08:19:01.481 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:19:01 np0005603609 nova_compute[221550]: 2026-01-31 08:19:01.482 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:19:01 np0005603609 nova_compute[221550]: 2026-01-31 08:19:01.516 221554 DEBUG nova.storage.rbd_utils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image ac849bdd-07a0-459f-b207-2a3f239d0272_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:19:01 np0005603609 nova_compute[221550]: 2026-01-31 08:19:01.520 221554 DEBUG oslo_concurrency.processutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 ac849bdd-07a0-459f-b207-2a3f239d0272_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:19:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:01.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:01 np0005603609 nova_compute[221550]: 2026-01-31 08:19:01.816 221554 DEBUG oslo_concurrency.processutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 ac849bdd-07a0-459f-b207-2a3f239d0272_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:19:01 np0005603609 nova_compute[221550]: 2026-01-31 08:19:01.911 221554 DEBUG nova.storage.rbd_utils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] resizing rbd image ac849bdd-07a0-459f-b207-2a3f239d0272_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 03:19:02 np0005603609 nova_compute[221550]: 2026-01-31 08:19:02.034 221554 DEBUG nova.objects.instance [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'migration_context' on Instance uuid ac849bdd-07a0-459f-b207-2a3f239d0272 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:19:02 np0005603609 nova_compute[221550]: 2026-01-31 08:19:02.075 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 03:19:02 np0005603609 nova_compute[221550]: 2026-01-31 08:19:02.076 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Ensure instance console log exists: /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 03:19:02 np0005603609 nova_compute[221550]: 2026-01-31 08:19:02.076 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:19:02 np0005603609 nova_compute[221550]: 2026-01-31 08:19:02.077 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:19:02 np0005603609 nova_compute[221550]: 2026-01-31 08:19:02.077 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:19:02 np0005603609 nova_compute[221550]: 2026-01-31 08:19:02.227 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847527.22651, 981eb990-ec0c-4673-98e7-102afbe0bb51 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:19:02 np0005603609 nova_compute[221550]: 2026-01-31 08:19:02.228 221554 INFO nova.compute.manager [-] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] VM Stopped (Lifecycle Event)
Jan 31 03:19:02 np0005603609 nova_compute[221550]: 2026-01-31 08:19:02.266 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:19:02 np0005603609 nova_compute[221550]: 2026-01-31 08:19:02.359 221554 DEBUG nova.compute.manager [None req-ddc99d8e-0b51-410b-adee-4b28f8147068 - - - - - -] [instance: 981eb990-ec0c-4673-98e7-102afbe0bb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:19:02 np0005603609 nova_compute[221550]: 2026-01-31 08:19:02.550 221554 DEBUG nova.network.neutron [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Successfully created port: c754b929-2e4d-413c-90fa-92a743e1fc38 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 03:19:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:19:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:03.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:19:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e330 e330: 3 total, 3 up, 3 in
Jan 31 03:19:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:03.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:04 np0005603609 nova_compute[221550]: 2026-01-31 08:19:04.201 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:19:04 np0005603609 nova_compute[221550]: 2026-01-31 08:19:04.271 221554 DEBUG nova.network.neutron [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Successfully updated port: c754b929-2e4d-413c-90fa-92a743e1fc38 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 03:19:04 np0005603609 nova_compute[221550]: 2026-01-31 08:19:04.317 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:19:04 np0005603609 nova_compute[221550]: 2026-01-31 08:19:04.332 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:19:04 np0005603609 nova_compute[221550]: 2026-01-31 08:19:04.333 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquired lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:19:04 np0005603609 nova_compute[221550]: 2026-01-31 08:19:04.333 221554 DEBUG nova.network.neutron [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 03:19:04 np0005603609 nova_compute[221550]: 2026-01-31 08:19:04.408 221554 DEBUG nova.compute.manager [req-867a2db1-3e8b-40fc-9198-cb3a935eea6f req-8355998c-ff45-4185-ae6b-a7d01d8dadce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received event network-changed-c754b929-2e4d-413c-90fa-92a743e1fc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:19:04 np0005603609 nova_compute[221550]: 2026-01-31 08:19:04.409 221554 DEBUG nova.compute.manager [req-867a2db1-3e8b-40fc-9198-cb3a935eea6f req-8355998c-ff45-4185-ae6b-a7d01d8dadce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Refreshing instance network info cache due to event network-changed-c754b929-2e4d-413c-90fa-92a743e1fc38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:19:04 np0005603609 nova_compute[221550]: 2026-01-31 08:19:04.409 221554 DEBUG oslo_concurrency.lockutils [req-867a2db1-3e8b-40fc-9198-cb3a935eea6f req-8355998c-ff45-4185-ae6b-a7d01d8dadce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:19:04 np0005603609 nova_compute[221550]: 2026-01-31 08:19:04.573 221554 DEBUG nova.network.neutron [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 03:19:05 np0005603609 nova_compute[221550]: 2026-01-31 08:19:05.221 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:19:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:05.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:19:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:05.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.082 221554 DEBUG nova.network.neutron [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Updating instance_info_cache with network_info: [{"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.171 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Releasing lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.171 221554 DEBUG nova.compute.manager [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Instance network_info: |[{"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.172 221554 DEBUG oslo_concurrency.lockutils [req-867a2db1-3e8b-40fc-9198-cb3a935eea6f req-8355998c-ff45-4185-ae6b-a7d01d8dadce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.172 221554 DEBUG nova.network.neutron [req-867a2db1-3e8b-40fc-9198-cb3a935eea6f req-8355998c-ff45-4185-ae6b-a7d01d8dadce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Refreshing network info cache for port c754b929-2e4d-413c-90fa-92a743e1fc38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.177 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Start _get_guest_xml network_info=[{"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.182 221554 WARNING nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.188 221554 DEBUG nova.virt.libvirt.host [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.189 221554 DEBUG nova.virt.libvirt.host [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.196 221554 DEBUG nova.virt.libvirt.host [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.197 221554 DEBUG nova.virt.libvirt.host [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.198 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.199 221554 DEBUG nova.virt.hardware [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.200 221554 DEBUG nova.virt.hardware [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.200 221554 DEBUG nova.virt.hardware [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.201 221554 DEBUG nova.virt.hardware [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.201 221554 DEBUG nova.virt.hardware [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.201 221554 DEBUG nova.virt.hardware [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.202 221554 DEBUG nova.virt.hardware [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.202 221554 DEBUG nova.virt.hardware [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.203 221554 DEBUG nova.virt.hardware [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.203 221554 DEBUG nova.virt.hardware [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.204 221554 DEBUG nova.virt.hardware [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.209 221554 DEBUG oslo_concurrency.processutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:19:06 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4218099084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.687 221554 DEBUG oslo_concurrency.processutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.730 221554 DEBUG nova.storage.rbd_utils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image ac849bdd-07a0-459f-b207-2a3f239d0272_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:06 np0005603609 nova_compute[221550]: 2026-01-31 08:19:06.736 221554 DEBUG oslo_concurrency.processutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:19:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1216326437' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.153 221554 DEBUG oslo_concurrency.processutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.154 221554 DEBUG nova.virt.libvirt.vif [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:18:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-154682825',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-154682825',id=140,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMWO9k24zLH8F2gtUwSjTGss8rmi2WVpUqIaGSBNrUsu6yIJKWkJX4AL66VZ1mBnILCET2/Mvny3YbuR68kQ+I2GI5dWxVmFSmypAx4n5E/nzIwrdNA7P5kjqoFQFkL9tA==',key_name='tempest-keypair-572349883',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b06982960ad4453b8e542cb6330835d',ramdisk_id='',reservation_id='r-1oh7rfdp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-332944999',owner_user_name='tempest-AttachVolumeShelveTestJSON-332944999-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:19:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b153d2832404e5b9250422b70ba522d',uuid=ac849bdd-07a0-459f-b207-2a3f239d0272,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.155 221554 DEBUG nova.network.os_vif_util [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converting VIF {"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.156 221554 DEBUG nova.network.os_vif_util [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:73:fe,bridge_name='br-int',has_traffic_filtering=True,id=c754b929-2e4d-413c-90fa-92a743e1fc38,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc754b929-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.156 221554 DEBUG nova.objects.instance [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'pci_devices' on Instance uuid ac849bdd-07a0-459f-b207-2a3f239d0272 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:19:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:07.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.270 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.279 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  <uuid>ac849bdd-07a0-459f-b207-2a3f239d0272</uuid>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  <name>instance-0000008c</name>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <nova:name>tempest-AttachVolumeShelveTestJSON-server-154682825</nova:name>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:19:06</nova:creationTime>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        <nova:user uuid="3b153d2832404e5b9250422b70ba522d">tempest-AttachVolumeShelveTestJSON-332944999-project-member</nova:user>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        <nova:project uuid="3b06982960ad4453b8e542cb6330835d">tempest-AttachVolumeShelveTestJSON-332944999</nova:project>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        <nova:port uuid="c754b929-2e4d-413c-90fa-92a743e1fc38">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <entry name="serial">ac849bdd-07a0-459f-b207-2a3f239d0272</entry>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <entry name="uuid">ac849bdd-07a0-459f-b207-2a3f239d0272</entry>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/ac849bdd-07a0-459f-b207-2a3f239d0272_disk">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/ac849bdd-07a0-459f-b207-2a3f239d0272_disk.config">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:41:73:fe"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <target dev="tapc754b929-2e"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272/console.log" append="off"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:19:07 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:19:07 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:19:07 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:19:07 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.280 221554 DEBUG nova.compute.manager [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Preparing to wait for external event network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.280 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.281 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.281 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.281 221554 DEBUG nova.virt.libvirt.vif [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:18:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-154682825',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-154682825',id=140,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMWO9k24zLH8F2gtUwSjTGss8rmi2WVpUqIaGSBNrUsu6yIJKWkJX4AL66VZ1mBnILCET2/Mvny3YbuR68kQ+I2GI5dWxVmFSmypAx4n5E/nzIwrdNA7P5kjqoFQFkL9tA==',key_name='tempest-keypair-572349883',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b06982960ad4453b8e542cb6330835d',ramdisk_id='',reservation_id='r-1oh7rfdp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-332944999',owner_user_name='tempest-AttachVolumeShelveTestJSON-332944999-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:19:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b153d2832404e5b9250422b70ba522d',uuid=ac849bdd-07a0-459f-b207-2a3f239d0272,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.282 221554 DEBUG nova.network.os_vif_util [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converting VIF {"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.282 221554 DEBUG nova.network.os_vif_util [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:73:fe,bridge_name='br-int',has_traffic_filtering=True,id=c754b929-2e4d-413c-90fa-92a743e1fc38,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc754b929-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.282 221554 DEBUG os_vif [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:73:fe,bridge_name='br-int',has_traffic_filtering=True,id=c754b929-2e4d-413c-90fa-92a743e1fc38,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc754b929-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.283 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.283 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.284 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.285 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.286 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc754b929-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.286 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc754b929-2e, col_values=(('external_ids', {'iface-id': 'c754b929-2e4d-413c-90fa-92a743e1fc38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:73:fe', 'vm-uuid': 'ac849bdd-07a0-459f-b207-2a3f239d0272'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.288 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:07 np0005603609 NetworkManager[49064]: <info>  [1769847547.2892] manager: (tapc754b929-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/271)
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.290 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.294 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.295 221554 INFO os_vif [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:73:fe,bridge_name='br-int',has_traffic_filtering=True,id=c754b929-2e4d-413c-90fa-92a743e1fc38,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc754b929-2e')#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.418 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.419 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.419 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] No VIF found with MAC fa:16:3e:41:73:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.420 221554 INFO nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Using config drive#033[00m
Jan 31 03:19:07 np0005603609 nova_compute[221550]: 2026-01-31 08:19:07.441 221554 DEBUG nova.storage.rbd_utils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image ac849bdd-07a0-459f-b207-2a3f239d0272_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:07.511 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:07.512 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:07.512 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:19:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:07.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:19:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:09 np0005603609 nova_compute[221550]: 2026-01-31 08:19:09.247 221554 INFO nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Creating config drive at /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272/disk.config#033[00m
Jan 31 03:19:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:09.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:09 np0005603609 nova_compute[221550]: 2026-01-31 08:19:09.255 221554 DEBUG oslo_concurrency.processutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0nyw2bon execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:09 np0005603609 nova_compute[221550]: 2026-01-31 08:19:09.380 221554 DEBUG oslo_concurrency.processutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0nyw2bon" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:09 np0005603609 nova_compute[221550]: 2026-01-31 08:19:09.414 221554 DEBUG nova.storage.rbd_utils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image ac849bdd-07a0-459f-b207-2a3f239d0272_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:09 np0005603609 nova_compute[221550]: 2026-01-31 08:19:09.420 221554 DEBUG oslo_concurrency.processutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272/disk.config ac849bdd-07a0-459f-b207-2a3f239d0272_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:19:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:09.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:19:09 np0005603609 nova_compute[221550]: 2026-01-31 08:19:09.620 221554 DEBUG oslo_concurrency.processutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272/disk.config ac849bdd-07a0-459f-b207-2a3f239d0272_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:09 np0005603609 nova_compute[221550]: 2026-01-31 08:19:09.621 221554 INFO nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Deleting local config drive /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272/disk.config because it was imported into RBD.#033[00m
Jan 31 03:19:09 np0005603609 kernel: tapc754b929-2e: entered promiscuous mode
Jan 31 03:19:09 np0005603609 NetworkManager[49064]: <info>  [1769847549.6742] manager: (tapc754b929-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/272)
Jan 31 03:19:09 np0005603609 ovn_controller[130359]: 2026-01-31T08:19:09Z|00580|binding|INFO|Claiming lport c754b929-2e4d-413c-90fa-92a743e1fc38 for this chassis.
Jan 31 03:19:09 np0005603609 ovn_controller[130359]: 2026-01-31T08:19:09Z|00581|binding|INFO|c754b929-2e4d-413c-90fa-92a743e1fc38: Claiming fa:16:3e:41:73:fe 10.100.0.4
Jan 31 03:19:09 np0005603609 nova_compute[221550]: 2026-01-31 08:19:09.710 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:09 np0005603609 systemd-udevd[277927]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.731 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:73:fe 10.100.0.4'], port_security=['fa:16:3e:41:73:fe 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ac849bdd-07a0-459f-b207-2a3f239d0272', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b47e290-9853-478f-86cb-c8ea73119a97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b06982960ad4453b8e542cb6330835d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9238371a-94ed-45d8-9126-02012597fcaa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46c631ec-3e4c-4c27-ae32-3645d3f29853, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=c754b929-2e4d-413c-90fa-92a743e1fc38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:19:09 np0005603609 NetworkManager[49064]: <info>  [1769847549.7356] device (tapc754b929-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:19:09 np0005603609 NetworkManager[49064]: <info>  [1769847549.7362] device (tapc754b929-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.734 140058 INFO neutron.agent.ovn.metadata.agent [-] Port c754b929-2e4d-413c-90fa-92a743e1fc38 in datapath 2b47e290-9853-478f-86cb-c8ea73119a97 bound to our chassis#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.736 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b47e290-9853-478f-86cb-c8ea73119a97#033[00m
Jan 31 03:19:09 np0005603609 systemd-machined[190912]: New machine qemu-71-instance-0000008c.
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.746 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b6922e-13b1-4b9c-b4f8-8f0d5d3a2a73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.747 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b47e290-91 in ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:19:09 np0005603609 nova_compute[221550]: 2026-01-31 08:19:09.749 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.750 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b47e290-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.750 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ca9c596c-975f-4548-bc5f-3d56fb13d06b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.751 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[13dacad8-f4eb-4762-8992-c5edb77b65c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:09 np0005603609 ovn_controller[130359]: 2026-01-31T08:19:09Z|00582|binding|INFO|Setting lport c754b929-2e4d-413c-90fa-92a743e1fc38 ovn-installed in OVS
Jan 31 03:19:09 np0005603609 ovn_controller[130359]: 2026-01-31T08:19:09Z|00583|binding|INFO|Setting lport c754b929-2e4d-413c-90fa-92a743e1fc38 up in Southbound
Jan 31 03:19:09 np0005603609 systemd[1]: Started Virtual Machine qemu-71-instance-0000008c.
Jan 31 03:19:09 np0005603609 nova_compute[221550]: 2026-01-31 08:19:09.757 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.762 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[08ea45d4-8edf-4e1b-a078-1778b6406238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:09 np0005603609 nova_compute[221550]: 2026-01-31 08:19:09.781 221554 DEBUG nova.network.neutron [req-867a2db1-3e8b-40fc-9198-cb3a935eea6f req-8355998c-ff45-4185-ae6b-a7d01d8dadce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Updated VIF entry in instance network info cache for port c754b929-2e4d-413c-90fa-92a743e1fc38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:19:09 np0005603609 nova_compute[221550]: 2026-01-31 08:19:09.781 221554 DEBUG nova.network.neutron [req-867a2db1-3e8b-40fc-9198-cb3a935eea6f req-8355998c-ff45-4185-ae6b-a7d01d8dadce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Updating instance_info_cache with network_info: [{"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.785 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c152e04c-bb90-40b0-a5d6-891d82c94b2d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.806 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e5c26f-6fe8-4a32-b577-a94598f26be5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:09 np0005603609 nova_compute[221550]: 2026-01-31 08:19:09.808 221554 DEBUG oslo_concurrency.lockutils [req-867a2db1-3e8b-40fc-9198-cb3a935eea6f req-8355998c-ff45-4185-ae6b-a7d01d8dadce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:19:09 np0005603609 NetworkManager[49064]: <info>  [1769847549.8117] manager: (tap2b47e290-90): new Veth device (/org/freedesktop/NetworkManager/Devices/273)
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.811 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[568e816e-c5be-46fb-ba73-5617262086d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.837 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4a95bc-4390-4469-a12e-cd0ace1f7fb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.840 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d1f51f-d911-4b5b-ba4a-3d520a5a5a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:09 np0005603609 NetworkManager[49064]: <info>  [1769847549.8576] device (tap2b47e290-90): carrier: link connected
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.861 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[dee1de0e-65cc-4269-8e4d-03c3f405b0c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.875 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1985e9a0-ca0e-44e4-99e4-5baf5e2fb2c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b47e290-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:d6:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778955, 'reachable_time': 38068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277963, 'error': None, 'target': 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.897 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8b394bf8-0383-44f5-8ad9-65100edda65a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:d61c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 778955, 'tstamp': 778955}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277964, 'error': None, 'target': 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.908 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7d415b3f-6837-4c39-b1b4-2edfe1bfab63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b47e290-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:d6:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778955, 'reachable_time': 38068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277965, 'error': None, 'target': 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.933 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3d183818-768b-4a9a-aefe-5f263ba0b86f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.972 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3dd027-1475-40f9-9997-5e909554baba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.973 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b47e290-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.974 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.974 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b47e290-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:09 np0005603609 nova_compute[221550]: 2026-01-31 08:19:09.975 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:09 np0005603609 NetworkManager[49064]: <info>  [1769847549.9763] manager: (tap2b47e290-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Jan 31 03:19:09 np0005603609 kernel: tap2b47e290-90: entered promiscuous mode
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.979 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b47e290-90, col_values=(('external_ids', {'iface-id': '4fadf8e2-21f6-4df7-9cc2-be518280ee18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:09 np0005603609 nova_compute[221550]: 2026-01-31 08:19:09.979 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:09 np0005603609 ovn_controller[130359]: 2026-01-31T08:19:09Z|00584|binding|INFO|Releasing lport 4fadf8e2-21f6-4df7-9cc2-be518280ee18 from this chassis (sb_readonly=0)
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.980 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b47e290-9853-478f-86cb-c8ea73119a97.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b47e290-9853-478f-86cb-c8ea73119a97.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.981 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[28527f69-df62-43e7-9273-aefd51deb3c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.982 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-2b47e290-9853-478f-86cb-c8ea73119a97
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/2b47e290-9853-478f-86cb-c8ea73119a97.pid.haproxy
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 2b47e290-9853-478f-86cb-c8ea73119a97
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:19:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:09.982 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'env', 'PROCESS_TAG=haproxy-2b47e290-9853-478f-86cb-c8ea73119a97', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b47e290-9853-478f-86cb-c8ea73119a97.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:19:09 np0005603609 nova_compute[221550]: 2026-01-31 08:19:09.984 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:10 np0005603609 nova_compute[221550]: 2026-01-31 08:19:10.164 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847550.163143, ac849bdd-07a0-459f-b207-2a3f239d0272 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:19:10 np0005603609 nova_compute[221550]: 2026-01-31 08:19:10.165 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] VM Started (Lifecycle Event)#033[00m
Jan 31 03:19:10 np0005603609 nova_compute[221550]: 2026-01-31 08:19:10.223 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:10 np0005603609 nova_compute[221550]: 2026-01-31 08:19:10.240 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:19:10 np0005603609 nova_compute[221550]: 2026-01-31 08:19:10.245 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847550.1633058, ac849bdd-07a0-459f-b207-2a3f239d0272 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:19:10 np0005603609 nova_compute[221550]: 2026-01-31 08:19:10.246 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:19:10 np0005603609 podman[278039]: 2026-01-31 08:19:10.310883973 +0000 UTC m=+0.057537792 container create b1934e4cbb0477582a6feff1d4181175f73eec099c418426b3a5ef57b41e9186 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:19:10 np0005603609 nova_compute[221550]: 2026-01-31 08:19:10.335 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:19:10 np0005603609 nova_compute[221550]: 2026-01-31 08:19:10.342 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:19:10 np0005603609 systemd[1]: Started libpod-conmon-b1934e4cbb0477582a6feff1d4181175f73eec099c418426b3a5ef57b41e9186.scope.
Jan 31 03:19:10 np0005603609 podman[278039]: 2026-01-31 08:19:10.277650565 +0000 UTC m=+0.024304384 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:19:10 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:19:10 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f672d00279a31961ff15eab377d6d916963f8cf680775a3d6ce46f6134f03b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:10 np0005603609 nova_compute[221550]: 2026-01-31 08:19:10.395 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:19:10 np0005603609 podman[278039]: 2026-01-31 08:19:10.400612178 +0000 UTC m=+0.147265977 container init b1934e4cbb0477582a6feff1d4181175f73eec099c418426b3a5ef57b41e9186 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:19:10 np0005603609 podman[278039]: 2026-01-31 08:19:10.40733733 +0000 UTC m=+0.153991109 container start b1934e4cbb0477582a6feff1d4181175f73eec099c418426b3a5ef57b41e9186 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:19:10 np0005603609 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[278054]: [NOTICE]   (278058) : New worker (278060) forked
Jan 31 03:19:10 np0005603609 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[278054]: [NOTICE]   (278058) : Loading success.
Jan 31 03:19:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:11.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:19:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:11.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.786 221554 DEBUG nova.compute.manager [req-f9dc6552-7c28-4247-b05c-ce41bd227b1b req-3c8cf4f1-6dc7-4fd7-b874-f1c3654bfa09 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received event network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.787 221554 DEBUG oslo_concurrency.lockutils [req-f9dc6552-7c28-4247-b05c-ce41bd227b1b req-3c8cf4f1-6dc7-4fd7-b874-f1c3654bfa09 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.787 221554 DEBUG oslo_concurrency.lockutils [req-f9dc6552-7c28-4247-b05c-ce41bd227b1b req-3c8cf4f1-6dc7-4fd7-b874-f1c3654bfa09 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.788 221554 DEBUG oslo_concurrency.lockutils [req-f9dc6552-7c28-4247-b05c-ce41bd227b1b req-3c8cf4f1-6dc7-4fd7-b874-f1c3654bfa09 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.788 221554 DEBUG nova.compute.manager [req-f9dc6552-7c28-4247-b05c-ce41bd227b1b req-3c8cf4f1-6dc7-4fd7-b874-f1c3654bfa09 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Processing event network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.789 221554 DEBUG nova.compute.manager [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.794 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847551.7941995, ac849bdd-07a0-459f-b207-2a3f239d0272 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.794 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.797 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.802 221554 INFO nova.virt.libvirt.driver [-] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Instance spawned successfully.#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.803 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.901 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.902 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.902 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.903 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.903 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.903 221554 DEBUG nova.virt.libvirt.driver [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.911 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:19:11 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.916 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:19:12 np0005603609 nova_compute[221550]: 2026-01-31 08:19:11.999 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:19:12 np0005603609 nova_compute[221550]: 2026-01-31 08:19:12.037 221554 INFO nova.compute.manager [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Took 10.72 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:19:12 np0005603609 nova_compute[221550]: 2026-01-31 08:19:12.037 221554 DEBUG nova.compute.manager [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:19:12 np0005603609 nova_compute[221550]: 2026-01-31 08:19:12.148 221554 INFO nova.compute.manager [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Took 13.33 seconds to build instance.#033[00m
Jan 31 03:19:12 np0005603609 nova_compute[221550]: 2026-01-31 08:19:12.213 221554 DEBUG oslo_concurrency.lockutils [None req-e72305c4-f0a7-49b6-9c8e-6b4a4cb17816 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:12 np0005603609 nova_compute[221550]: 2026-01-31 08:19:12.289 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:19:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:13.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:19:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:13.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:14 np0005603609 nova_compute[221550]: 2026-01-31 08:19:14.025 221554 DEBUG nova.compute.manager [req-9db54813-099c-4633-83f4-59e1624ab1a9 req-33ccb392-8d0f-4231-b825-a9663096a92f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received event network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:19:14 np0005603609 nova_compute[221550]: 2026-01-31 08:19:14.027 221554 DEBUG oslo_concurrency.lockutils [req-9db54813-099c-4633-83f4-59e1624ab1a9 req-33ccb392-8d0f-4231-b825-a9663096a92f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:14 np0005603609 nova_compute[221550]: 2026-01-31 08:19:14.027 221554 DEBUG oslo_concurrency.lockutils [req-9db54813-099c-4633-83f4-59e1624ab1a9 req-33ccb392-8d0f-4231-b825-a9663096a92f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:14 np0005603609 nova_compute[221550]: 2026-01-31 08:19:14.028 221554 DEBUG oslo_concurrency.lockutils [req-9db54813-099c-4633-83f4-59e1624ab1a9 req-33ccb392-8d0f-4231-b825-a9663096a92f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:14 np0005603609 nova_compute[221550]: 2026-01-31 08:19:14.029 221554 DEBUG nova.compute.manager [req-9db54813-099c-4633-83f4-59e1624ab1a9 req-33ccb392-8d0f-4231-b825-a9663096a92f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] No waiting events found dispatching network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:19:14 np0005603609 nova_compute[221550]: 2026-01-31 08:19:14.029 221554 WARNING nova.compute.manager [req-9db54813-099c-4633-83f4-59e1624ab1a9 req-33ccb392-8d0f-4231-b825-a9663096a92f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received unexpected event network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:19:15 np0005603609 nova_compute[221550]: 2026-01-31 08:19:15.225 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:19:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:15.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:19:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:15.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:16 np0005603609 NetworkManager[49064]: <info>  [1769847556.1451] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/275)
Jan 31 03:19:16 np0005603609 NetworkManager[49064]: <info>  [1769847556.1462] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Jan 31 03:19:16 np0005603609 nova_compute[221550]: 2026-01-31 08:19:16.144 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:16 np0005603609 nova_compute[221550]: 2026-01-31 08:19:16.182 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:16 np0005603609 ovn_controller[130359]: 2026-01-31T08:19:16Z|00585|binding|INFO|Releasing lport 4fadf8e2-21f6-4df7-9cc2-be518280ee18 from this chassis (sb_readonly=0)
Jan 31 03:19:16 np0005603609 nova_compute[221550]: 2026-01-31 08:19:16.203 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:17 np0005603609 nova_compute[221550]: 2026-01-31 08:19:17.243 221554 DEBUG nova.compute.manager [req-2db7b13f-e3c0-4a7f-bccc-3f9f211c3c8c req-ba04eabf-1169-4d89-b170-175a9e219d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received event network-changed-c754b929-2e4d-413c-90fa-92a743e1fc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:19:17 np0005603609 nova_compute[221550]: 2026-01-31 08:19:17.244 221554 DEBUG nova.compute.manager [req-2db7b13f-e3c0-4a7f-bccc-3f9f211c3c8c req-ba04eabf-1169-4d89-b170-175a9e219d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Refreshing instance network info cache due to event network-changed-c754b929-2e4d-413c-90fa-92a743e1fc38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:19:17 np0005603609 nova_compute[221550]: 2026-01-31 08:19:17.244 221554 DEBUG oslo_concurrency.lockutils [req-2db7b13f-e3c0-4a7f-bccc-3f9f211c3c8c req-ba04eabf-1169-4d89-b170-175a9e219d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:19:17 np0005603609 nova_compute[221550]: 2026-01-31 08:19:17.245 221554 DEBUG oslo_concurrency.lockutils [req-2db7b13f-e3c0-4a7f-bccc-3f9f211c3c8c req-ba04eabf-1169-4d89-b170-175a9e219d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:19:17 np0005603609 nova_compute[221550]: 2026-01-31 08:19:17.245 221554 DEBUG nova.network.neutron [req-2db7b13f-e3c0-4a7f-bccc-3f9f211c3c8c req-ba04eabf-1169-4d89-b170-175a9e219d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Refreshing network info cache for port c754b929-2e4d-413c-90fa-92a743e1fc38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:19:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:17.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:17 np0005603609 nova_compute[221550]: 2026-01-31 08:19:17.291 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:17.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:19.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:19.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:20 np0005603609 nova_compute[221550]: 2026-01-31 08:19:20.227 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:21.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:21 np0005603609 nova_compute[221550]: 2026-01-31 08:19:21.400 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:21 np0005603609 nova_compute[221550]: 2026-01-31 08:19:21.473 221554 DEBUG nova.network.neutron [req-2db7b13f-e3c0-4a7f-bccc-3f9f211c3c8c req-ba04eabf-1169-4d89-b170-175a9e219d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Updated VIF entry in instance network info cache for port c754b929-2e4d-413c-90fa-92a743e1fc38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:19:21 np0005603609 nova_compute[221550]: 2026-01-31 08:19:21.475 221554 DEBUG nova.network.neutron [req-2db7b13f-e3c0-4a7f-bccc-3f9f211c3c8c req-ba04eabf-1169-4d89-b170-175a9e219d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Updating instance_info_cache with network_info: [{"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:19:21 np0005603609 nova_compute[221550]: 2026-01-31 08:19:21.524 221554 DEBUG oslo_concurrency.lockutils [req-2db7b13f-e3c0-4a7f-bccc-3f9f211c3c8c req-ba04eabf-1169-4d89-b170-175a9e219d01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:19:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:21.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:22 np0005603609 nova_compute[221550]: 2026-01-31 08:19:22.332 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:23 np0005603609 podman[278072]: 2026-01-31 08:19:23.234768842 +0000 UTC m=+0.094532671 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 31 03:19:23 np0005603609 podman[278071]: 2026-01-31 08:19:23.256929684 +0000 UTC m=+0.116339935 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:19:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:23.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:23.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:24 np0005603609 ovn_controller[130359]: 2026-01-31T08:19:24Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:41:73:fe 10.100.0.4
Jan 31 03:19:24 np0005603609 ovn_controller[130359]: 2026-01-31T08:19:24Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:41:73:fe 10.100.0.4
Jan 31 03:19:24 np0005603609 nova_compute[221550]: 2026-01-31 08:19:24.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:25.003 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:19:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:25.005 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:19:25 np0005603609 nova_compute[221550]: 2026-01-31 08:19:25.019 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:25 np0005603609 nova_compute[221550]: 2026-01-31 08:19:25.230 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:19:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:25.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:19:25 np0005603609 nova_compute[221550]: 2026-01-31 08:19:25.429 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:19:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:25.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:19:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:26.008 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:27.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:27 np0005603609 nova_compute[221550]: 2026-01-31 08:19:27.365 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:19:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:27.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:19:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:19:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:29.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:19:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:19:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:29.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:19:30 np0005603609 nova_compute[221550]: 2026-01-31 08:19:30.231 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:31.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:31 np0005603609 nova_compute[221550]: 2026-01-31 08:19:31.329 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "38deb482-bd85-4fdf-b2da-2725bffd8f43" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:31 np0005603609 nova_compute[221550]: 2026-01-31 08:19:31.329 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:31 np0005603609 nova_compute[221550]: 2026-01-31 08:19:31.359 221554 DEBUG nova.compute.manager [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:19:31 np0005603609 nova_compute[221550]: 2026-01-31 08:19:31.509 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:31 np0005603609 nova_compute[221550]: 2026-01-31 08:19:31.509 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:31 np0005603609 nova_compute[221550]: 2026-01-31 08:19:31.519 221554 DEBUG nova.virt.hardware [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:19:31 np0005603609 nova_compute[221550]: 2026-01-31 08:19:31.519 221554 INFO nova.compute.claims [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:19:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:31.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:31 np0005603609 nova_compute[221550]: 2026-01-31 08:19:31.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:31 np0005603609 nova_compute[221550]: 2026-01-31 08:19:31.724 221554 DEBUG oslo_concurrency.processutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:19:32 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1892932877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.137 221554 DEBUG oslo_concurrency.processutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.143 221554 DEBUG nova.compute.provider_tree [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.168 221554 DEBUG nova.scheduler.client.report [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.202 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.203 221554 DEBUG nova.compute.manager [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.282 221554 DEBUG nova.compute.manager [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.282 221554 DEBUG nova.network.neutron [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.313 221554 INFO nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.332 221554 DEBUG nova.compute.manager [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.404 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.454 221554 DEBUG nova.compute.manager [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.455 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.456 221554 INFO nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Creating image(s)#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.482 221554 DEBUG nova.storage.rbd_utils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.516 221554 DEBUG nova.storage.rbd_utils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.548 221554 DEBUG nova.storage.rbd_utils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.553 221554 DEBUG oslo_concurrency.processutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.594 221554 DEBUG nova.policy [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6788b0883cb348719d1222b1c9483be2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.615 221554 DEBUG oslo_concurrency.processutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.616 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.617 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.617 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.652 221554 DEBUG nova.storage.rbd_utils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.656 221554 DEBUG oslo_concurrency.processutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:32 np0005603609 nova_compute[221550]: 2026-01-31 08:19:32.934 221554 DEBUG oslo_concurrency.processutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:33 np0005603609 nova_compute[221550]: 2026-01-31 08:19:33.001 221554 DEBUG nova.storage.rbd_utils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] resizing rbd image 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:19:33 np0005603609 nova_compute[221550]: 2026-01-31 08:19:33.126 221554 DEBUG nova.objects.instance [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'migration_context' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:19:33 np0005603609 nova_compute[221550]: 2026-01-31 08:19:33.183 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:19:33 np0005603609 nova_compute[221550]: 2026-01-31 08:19:33.184 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Ensure instance console log exists: /var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:19:33 np0005603609 nova_compute[221550]: 2026-01-31 08:19:33.184 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:33 np0005603609 nova_compute[221550]: 2026-01-31 08:19:33.185 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:33 np0005603609 nova_compute[221550]: 2026-01-31 08:19:33.185 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:33.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:33 np0005603609 nova_compute[221550]: 2026-01-31 08:19:33.611 221554 DEBUG nova.network.neutron [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Successfully created port: 1f25515c-3e82-445e-baad-bc0e3a0d6f1a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:19:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:19:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:33.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:19:33 np0005603609 nova_compute[221550]: 2026-01-31 08:19:33.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:35 np0005603609 nova_compute[221550]: 2026-01-31 08:19:35.234 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:35.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:35 np0005603609 nova_compute[221550]: 2026-01-31 08:19:35.446 221554 DEBUG nova.network.neutron [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Successfully updated port: 1f25515c-3e82-445e-baad-bc0e3a0d6f1a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:19:35 np0005603609 nova_compute[221550]: 2026-01-31 08:19:35.511 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:19:35 np0005603609 nova_compute[221550]: 2026-01-31 08:19:35.513 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquired lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:19:35 np0005603609 nova_compute[221550]: 2026-01-31 08:19:35.513 221554 DEBUG nova.network.neutron [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:19:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:35.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:35 np0005603609 nova_compute[221550]: 2026-01-31 08:19:35.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:35 np0005603609 nova_compute[221550]: 2026-01-31 08:19:35.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:19:35 np0005603609 nova_compute[221550]: 2026-01-31 08:19:35.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:19:35 np0005603609 nova_compute[221550]: 2026-01-31 08:19:35.708 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:19:35 np0005603609 nova_compute[221550]: 2026-01-31 08:19:35.709 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:19:35 np0005603609 nova_compute[221550]: 2026-01-31 08:19:35.709 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:19:35 np0005603609 nova_compute[221550]: 2026-01-31 08:19:35.709 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:19:35 np0005603609 nova_compute[221550]: 2026-01-31 08:19:35.709 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ac849bdd-07a0-459f-b207-2a3f239d0272 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:19:35 np0005603609 nova_compute[221550]: 2026-01-31 08:19:35.989 221554 DEBUG nova.compute.manager [req-83c7929d-463b-4626-9fa2-b2113cdbfcc5 req-b0d5c269-4434-4092-ade4-c0d7133d1f45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received event network-changed-1f25515c-3e82-445e-baad-bc0e3a0d6f1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:19:35 np0005603609 nova_compute[221550]: 2026-01-31 08:19:35.990 221554 DEBUG nova.compute.manager [req-83c7929d-463b-4626-9fa2-b2113cdbfcc5 req-b0d5c269-4434-4092-ade4-c0d7133d1f45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Refreshing instance network info cache due to event network-changed-1f25515c-3e82-445e-baad-bc0e3a0d6f1a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:19:35 np0005603609 nova_compute[221550]: 2026-01-31 08:19:35.990 221554 DEBUG oslo_concurrency.lockutils [req-83c7929d-463b-4626-9fa2-b2113cdbfcc5 req-b0d5c269-4434-4092-ade4-c0d7133d1f45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:19:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e331 e331: 3 total, 3 up, 3 in
Jan 31 03:19:36 np0005603609 nova_compute[221550]: 2026-01-31 08:19:36.088 221554 DEBUG nova.network.neutron [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:19:36 np0005603609 nova_compute[221550]: 2026-01-31 08:19:36.407 221554 DEBUG oslo_concurrency.lockutils [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "ac849bdd-07a0-459f-b207-2a3f239d0272" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:36 np0005603609 nova_compute[221550]: 2026-01-31 08:19:36.408 221554 DEBUG oslo_concurrency.lockutils [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:36 np0005603609 nova_compute[221550]: 2026-01-31 08:19:36.408 221554 INFO nova.compute.manager [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Shelving#033[00m
Jan 31 03:19:36 np0005603609 nova_compute[221550]: 2026-01-31 08:19:36.438 221554 DEBUG nova.virt.libvirt.driver [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:19:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:19:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:37.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:19:37 np0005603609 nova_compute[221550]: 2026-01-31 08:19:37.449 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:19:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:19:37 np0005603609 nova_compute[221550]: 2026-01-31 08:19:37.632 221554 DEBUG nova.network.neutron [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Updating instance_info_cache with network_info: [{"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:19:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:37.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.344 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Releasing lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.344 221554 DEBUG nova.compute.manager [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Instance network_info: |[{"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.345 221554 DEBUG oslo_concurrency.lockutils [req-83c7929d-463b-4626-9fa2-b2113cdbfcc5 req-b0d5c269-4434-4092-ade4-c0d7133d1f45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.346 221554 DEBUG nova.network.neutron [req-83c7929d-463b-4626-9fa2-b2113cdbfcc5 req-b0d5c269-4434-4092-ade4-c0d7133d1f45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Refreshing network info cache for port 1f25515c-3e82-445e-baad-bc0e3a0d6f1a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.349 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Start _get_guest_xml network_info=[{"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.355 221554 WARNING nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.369 221554 DEBUG nova.virt.libvirt.host [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.370 221554 DEBUG nova.virt.libvirt.host [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.494 221554 DEBUG nova.virt.libvirt.host [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.495 221554 DEBUG nova.virt.libvirt.host [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.496 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.496 221554 DEBUG nova.virt.hardware [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.496 221554 DEBUG nova.virt.hardware [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.496 221554 DEBUG nova.virt.hardware [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.497 221554 DEBUG nova.virt.hardware [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.497 221554 DEBUG nova.virt.hardware [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.497 221554 DEBUG nova.virt.hardware [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.497 221554 DEBUG nova.virt.hardware [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.498 221554 DEBUG nova.virt.hardware [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.498 221554 DEBUG nova.virt.hardware [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.498 221554 DEBUG nova.virt.hardware [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.498 221554 DEBUG nova.virt.hardware [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:19:38 np0005603609 nova_compute[221550]: 2026-01-31 08:19:38.500 221554 DEBUG oslo_concurrency.processutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:39.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:39 np0005603609 nova_compute[221550]: 2026-01-31 08:19:39.495 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Updating instance_info_cache with network_info: [{"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:19:39 np0005603609 nova_compute[221550]: 2026-01-31 08:19:39.502 221554 INFO nova.virt.libvirt.driver [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:19:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:19:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:19:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:19:39 np0005603609 kernel: tapc754b929-2e (unregistering): left promiscuous mode
Jan 31 03:19:39 np0005603609 NetworkManager[49064]: <info>  [1769847579.5584] device (tapc754b929-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:19:39 np0005603609 nova_compute[221550]: 2026-01-31 08:19:39.563 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:39 np0005603609 ovn_controller[130359]: 2026-01-31T08:19:39Z|00586|binding|INFO|Releasing lport c754b929-2e4d-413c-90fa-92a743e1fc38 from this chassis (sb_readonly=0)
Jan 31 03:19:39 np0005603609 ovn_controller[130359]: 2026-01-31T08:19:39Z|00587|binding|INFO|Setting lport c754b929-2e4d-413c-90fa-92a743e1fc38 down in Southbound
Jan 31 03:19:39 np0005603609 ovn_controller[130359]: 2026-01-31T08:19:39Z|00588|binding|INFO|Removing iface tapc754b929-2e ovn-installed in OVS
Jan 31 03:19:39 np0005603609 nova_compute[221550]: 2026-01-31 08:19:39.568 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:39 np0005603609 nova_compute[221550]: 2026-01-31 08:19:39.574 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:39 np0005603609 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Jan 31 03:19:39 np0005603609 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000008c.scope: Consumed 14.356s CPU time.
Jan 31 03:19:39 np0005603609 systemd-machined[190912]: Machine qemu-71-instance-0000008c terminated.
Jan 31 03:19:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:39.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:39 np0005603609 nova_compute[221550]: 2026-01-31 08:19:39.747 221554 INFO nova.virt.libvirt.driver [-] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Instance destroyed successfully.#033[00m
Jan 31 03:19:39 np0005603609 nova_compute[221550]: 2026-01-31 08:19:39.749 221554 DEBUG nova.objects.instance [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'numa_topology' on Instance uuid ac849bdd-07a0-459f-b207-2a3f239d0272 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:19:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:19:39 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1997723901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:19:39 np0005603609 nova_compute[221550]: 2026-01-31 08:19:39.822 221554 DEBUG oslo_concurrency.processutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:39 np0005603609 nova_compute[221550]: 2026-01-31 08:19:39.856 221554 DEBUG nova.storage.rbd_utils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:39 np0005603609 nova_compute[221550]: 2026-01-31 08:19:39.861 221554 DEBUG oslo_concurrency.processutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:39 np0005603609 nova_compute[221550]: 2026-01-31 08:19:39.885 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:19:39 np0005603609 nova_compute[221550]: 2026-01-31 08:19:39.886 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:19:39 np0005603609 nova_compute[221550]: 2026-01-31 08:19:39.887 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:39 np0005603609 nova_compute[221550]: 2026-01-31 08:19:39.888 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:39 np0005603609 nova_compute[221550]: 2026-01-31 08:19:39.888 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:19:39 np0005603609 nova_compute[221550]: 2026-01-31 08:19:39.889 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:40.057 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:73:fe 10.100.0.4'], port_security=['fa:16:3e:41:73:fe 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ac849bdd-07a0-459f-b207-2a3f239d0272', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b47e290-9853-478f-86cb-c8ea73119a97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b06982960ad4453b8e542cb6330835d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9238371a-94ed-45d8-9126-02012597fcaa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46c631ec-3e4c-4c27-ae32-3645d3f29853, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=c754b929-2e4d-413c-90fa-92a743e1fc38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:19:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:40.058 140058 INFO neutron.agent.ovn.metadata.agent [-] Port c754b929-2e4d-413c-90fa-92a743e1fc38 in datapath 2b47e290-9853-478f-86cb-c8ea73119a97 unbound from our chassis#033[00m
Jan 31 03:19:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:40.059 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b47e290-9853-478f-86cb-c8ea73119a97, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:19:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:40.060 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6adac13a-d954-4469-93e3-4f719931a0a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:40.061 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 namespace which is not needed anymore#033[00m
Jan 31 03:19:40 np0005603609 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[278054]: [NOTICE]   (278058) : haproxy version is 2.8.14-c23fe91
Jan 31 03:19:40 np0005603609 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[278054]: [NOTICE]   (278058) : path to executable is /usr/sbin/haproxy
Jan 31 03:19:40 np0005603609 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[278054]: [WARNING]  (278058) : Exiting Master process...
Jan 31 03:19:40 np0005603609 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[278054]: [WARNING]  (278058) : Exiting Master process...
Jan 31 03:19:40 np0005603609 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[278054]: [ALERT]    (278058) : Current worker (278060) exited with code 143 (Terminated)
Jan 31 03:19:40 np0005603609 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[278054]: [WARNING]  (278058) : All workers exited. Exiting... (0)
Jan 31 03:19:40 np0005603609 systemd[1]: libpod-b1934e4cbb0477582a6feff1d4181175f73eec099c418426b3a5ef57b41e9186.scope: Deactivated successfully.
Jan 31 03:19:40 np0005603609 podman[278534]: 2026-01-31 08:19:40.176694448 +0000 UTC m=+0.041409986 container died b1934e4cbb0477582a6feff1d4181175f73eec099c418426b3a5ef57b41e9186 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 03:19:40 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b1934e4cbb0477582a6feff1d4181175f73eec099c418426b3a5ef57b41e9186-userdata-shm.mount: Deactivated successfully.
Jan 31 03:19:40 np0005603609 systemd[1]: var-lib-containers-storage-overlay-d5f672d00279a31961ff15eab377d6d916963f8cf680775a3d6ce46f6134f03b-merged.mount: Deactivated successfully.
Jan 31 03:19:40 np0005603609 podman[278534]: 2026-01-31 08:19:40.208513282 +0000 UTC m=+0.073228850 container cleanup b1934e4cbb0477582a6feff1d4181175f73eec099c418426b3a5ef57b41e9186 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:19:40 np0005603609 systemd[1]: libpod-conmon-b1934e4cbb0477582a6feff1d4181175f73eec099c418426b3a5ef57b41e9186.scope: Deactivated successfully.
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.234 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:40 np0005603609 podman[278561]: 2026-01-31 08:19:40.263524563 +0000 UTC m=+0.038184048 container remove b1934e4cbb0477582a6feff1d4181175f73eec099c418426b3a5ef57b41e9186 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 03:19:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:40.269 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0cea9023-16f2-473f-a0cf-0c8dd6481e24]: (4, ('Sat Jan 31 08:19:40 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 (b1934e4cbb0477582a6feff1d4181175f73eec099c418426b3a5ef57b41e9186)\nb1934e4cbb0477582a6feff1d4181175f73eec099c418426b3a5ef57b41e9186\nSat Jan 31 08:19:40 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 (b1934e4cbb0477582a6feff1d4181175f73eec099c418426b3a5ef57b41e9186)\nb1934e4cbb0477582a6feff1d4181175f73eec099c418426b3a5ef57b41e9186\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:40.271 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f65da28a-64bd-4cc3-8f4b-ac7f58e9982c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:40.272 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b47e290-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.275 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:40 np0005603609 kernel: tap2b47e290-90: left promiscuous mode
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.287 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:40.290 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4a5ee5-73a4-4cfb-afec-9109679de0e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:40.307 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[263739d2-db70-4293-b18c-bd4ea7c6514c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:40.308 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2193e4-a603-4b2d-bb4a-af9fe3cf9b82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:40.318 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd20d5e-0099-4ebe-a868-5e9ca088a470]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778950, 'reachable_time': 19204, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278579, 'error': None, 'target': 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:40.320 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:19:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:40.320 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[52574931-94b9-4b26-8f74-af0a2083b4ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:40 np0005603609 systemd[1]: run-netns-ovnmeta\x2d2b47e290\x2d9853\x2d478f\x2d86cb\x2dc8ea73119a97.mount: Deactivated successfully.
Jan 31 03:19:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:19:40 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/887722912' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.357 221554 DEBUG oslo_concurrency.processutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.359 221554 DEBUG nova.virt.libvirt.vif [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-549789768',display_name='tempest-ServerRescueNegativeTestJSON-server-549789768',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-549789768',id=143,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4849ff916e1b4e2aa162faaf2c0717a2',ramdisk_id='',reservation_id='r-7j661ndk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1784809431',owner_user_name='t
empest-ServerRescueNegativeTestJSON-1784809431-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:19:32Z,user_data=None,user_id='6788b0883cb348719d1222b1c9483be2',uuid=38deb482-bd85-4fdf-b2da-2725bffd8f43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.360 221554 DEBUG nova.network.os_vif_util [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converting VIF {"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.360 221554 DEBUG nova.network.os_vif_util [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:68:cd,bridge_name='br-int',has_traffic_filtering=True,id=1f25515c-3e82-445e-baad-bc0e3a0d6f1a,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f25515c-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.361 221554 DEBUG nova.objects.instance [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.665 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.666 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.667 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.667 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.667 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.687 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  <uuid>38deb482-bd85-4fdf-b2da-2725bffd8f43</uuid>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  <name>instance-0000008f</name>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-549789768</nova:name>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:19:38</nova:creationTime>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        <nova:user uuid="6788b0883cb348719d1222b1c9483be2">tempest-ServerRescueNegativeTestJSON-1784809431-project-member</nova:user>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        <nova:project uuid="4849ff916e1b4e2aa162faaf2c0717a2">tempest-ServerRescueNegativeTestJSON-1784809431</nova:project>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        <nova:port uuid="1f25515c-3e82-445e-baad-bc0e3a0d6f1a">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <entry name="serial">38deb482-bd85-4fdf-b2da-2725bffd8f43</entry>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <entry name="uuid">38deb482-bd85-4fdf-b2da-2725bffd8f43</entry>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/38deb482-bd85-4fdf-b2da-2725bffd8f43_disk">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.config">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:f6:68:cd"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <target dev="tap1f25515c-3e"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43/console.log" append="off"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:19:40 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:19:40 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:19:40 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:19:40 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.689 221554 DEBUG nova.compute.manager [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Preparing to wait for external event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.689 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.690 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.690 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.691 221554 DEBUG nova.virt.libvirt.vif [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-549789768',display_name='tempest-ServerRescueNegativeTestJSON-server-549789768',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-549789768',id=143,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4849ff916e1b4e2aa162faaf2c0717a2',ramdisk_id='',reservation_id='r-7j661ndk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1784809431',owner_us
er_name='tempest-ServerRescueNegativeTestJSON-1784809431-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:19:32Z,user_data=None,user_id='6788b0883cb348719d1222b1c9483be2',uuid=38deb482-bd85-4fdf-b2da-2725bffd8f43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.691 221554 DEBUG nova.network.os_vif_util [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converting VIF {"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.692 221554 DEBUG nova.network.os_vif_util [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:68:cd,bridge_name='br-int',has_traffic_filtering=True,id=1f25515c-3e82-445e-baad-bc0e3a0d6f1a,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f25515c-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.692 221554 DEBUG os_vif [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:68:cd,bridge_name='br-int',has_traffic_filtering=True,id=1f25515c-3e82-445e-baad-bc0e3a0d6f1a,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f25515c-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.693 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.693 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.694 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.696 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.697 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f25515c-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.698 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1f25515c-3e, col_values=(('external_ids', {'iface-id': '1f25515c-3e82-445e-baad-bc0e3a0d6f1a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:68:cd', 'vm-uuid': '38deb482-bd85-4fdf-b2da-2725bffd8f43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.699 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:40 np0005603609 NetworkManager[49064]: <info>  [1769847580.7003] manager: (tap1f25515c-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.702 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.706 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.706 221554 INFO os_vif [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:68:cd,bridge_name='br-int',has_traffic_filtering=True,id=1f25515c-3e82-445e-baad-bc0e3a0d6f1a,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f25515c-3e')#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.833 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.834 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.834 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No VIF found with MAC fa:16:3e:f6:68:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.835 221554 INFO nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Using config drive#033[00m
Jan 31 03:19:40 np0005603609 nova_compute[221550]: 2026-01-31 08:19:40.863 221554 DEBUG nova.storage.rbd_utils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:19:41 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2954870532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.065 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.127 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.128 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.130 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.131 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.255 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.255 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4424MB free_disk=20.856639862060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.256 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.256 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.347 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance ac849bdd-07a0-459f-b207-2a3f239d0272 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.348 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 38deb482-bd85-4fdf-b2da-2725bffd8f43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.348 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.349 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.429 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.455 221554 INFO nova.virt.libvirt.driver [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Beginning cold snapshot process#033[00m
Jan 31 03:19:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:19:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:41.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.622 221554 DEBUG nova.virt.libvirt.imagebackend [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:19:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:19:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:41.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:19:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:19:41 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/249068326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.852 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.859 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.895 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.922 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.922 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:41 np0005603609 nova_compute[221550]: 2026-01-31 08:19:41.961 221554 DEBUG nova.storage.rbd_utils [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] creating snapshot(a23e3ddc166346098d6fdfa74977b090) on rbd image(ac849bdd-07a0-459f-b207-2a3f239d0272_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:19:42 np0005603609 nova_compute[221550]: 2026-01-31 08:19:42.691 221554 DEBUG nova.compute.manager [req-eb47325a-ce16-4e6c-8d80-65f4966d1df1 req-4d03214f-2085-40df-9095-6327012391ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received event network-vif-unplugged-c754b929-2e4d-413c-90fa-92a743e1fc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:19:42 np0005603609 nova_compute[221550]: 2026-01-31 08:19:42.691 221554 DEBUG oslo_concurrency.lockutils [req-eb47325a-ce16-4e6c-8d80-65f4966d1df1 req-4d03214f-2085-40df-9095-6327012391ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:42 np0005603609 nova_compute[221550]: 2026-01-31 08:19:42.691 221554 DEBUG oslo_concurrency.lockutils [req-eb47325a-ce16-4e6c-8d80-65f4966d1df1 req-4d03214f-2085-40df-9095-6327012391ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:42 np0005603609 nova_compute[221550]: 2026-01-31 08:19:42.692 221554 DEBUG oslo_concurrency.lockutils [req-eb47325a-ce16-4e6c-8d80-65f4966d1df1 req-4d03214f-2085-40df-9095-6327012391ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:42 np0005603609 nova_compute[221550]: 2026-01-31 08:19:42.692 221554 DEBUG nova.compute.manager [req-eb47325a-ce16-4e6c-8d80-65f4966d1df1 req-4d03214f-2085-40df-9095-6327012391ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] No waiting events found dispatching network-vif-unplugged-c754b929-2e4d-413c-90fa-92a743e1fc38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:19:42 np0005603609 nova_compute[221550]: 2026-01-31 08:19:42.692 221554 WARNING nova.compute.manager [req-eb47325a-ce16-4e6c-8d80-65f4966d1df1 req-4d03214f-2085-40df-9095-6327012391ea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received unexpected event network-vif-unplugged-c754b929-2e4d-413c-90fa-92a743e1fc38 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Jan 31 03:19:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:43.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:19:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:43.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:19:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e332 e332: 3 total, 3 up, 3 in
Jan 31 03:19:44 np0005603609 nova_compute[221550]: 2026-01-31 08:19:44.694 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:44 np0005603609 nova_compute[221550]: 2026-01-31 08:19:44.694 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:45 np0005603609 nova_compute[221550]: 2026-01-31 08:19:45.238 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:45 np0005603609 nova_compute[221550]: 2026-01-31 08:19:45.423 221554 DEBUG nova.storage.rbd_utils [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] cloning vms/ac849bdd-07a0-459f-b207-2a3f239d0272_disk@a23e3ddc166346098d6fdfa74977b090 to images/5b5ac3f7-b093-4152-83a1-6cb8fa870d17 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:19:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:45.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:45.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:45 np0005603609 nova_compute[221550]: 2026-01-31 08:19:45.700 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:45 np0005603609 nova_compute[221550]: 2026-01-31 08:19:45.853 221554 DEBUG nova.storage.rbd_utils [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] flattening images/5b5ac3f7-b093-4152-83a1-6cb8fa870d17 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:19:46 np0005603609 nova_compute[221550]: 2026-01-31 08:19:46.592 221554 DEBUG nova.storage.rbd_utils [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] removing snapshot(a23e3ddc166346098d6fdfa74977b090) on rbd image(ac849bdd-07a0-459f-b207-2a3f239d0272_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:19:46 np0005603609 nova_compute[221550]: 2026-01-31 08:19:46.647 221554 INFO nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Creating config drive at /var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43/disk.config#033[00m
Jan 31 03:19:46 np0005603609 nova_compute[221550]: 2026-01-31 08:19:46.651 221554 DEBUG oslo_concurrency.processutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjafbna_i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:46 np0005603609 nova_compute[221550]: 2026-01-31 08:19:46.784 221554 DEBUG oslo_concurrency.processutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjafbna_i" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:46 np0005603609 nova_compute[221550]: 2026-01-31 08:19:46.812 221554 DEBUG nova.storage.rbd_utils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:46 np0005603609 nova_compute[221550]: 2026-01-31 08:19:46.817 221554 DEBUG oslo_concurrency.processutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43/disk.config 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:46 np0005603609 nova_compute[221550]: 2026-01-31 08:19:46.921 221554 DEBUG nova.compute.manager [req-4aab6ea4-863e-4e7d-a27a-3a4789bcd03b req-f70d2a07-d851-4e52-aa78-534f85d3510f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received event network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:19:46 np0005603609 nova_compute[221550]: 2026-01-31 08:19:46.921 221554 DEBUG oslo_concurrency.lockutils [req-4aab6ea4-863e-4e7d-a27a-3a4789bcd03b req-f70d2a07-d851-4e52-aa78-534f85d3510f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:46 np0005603609 nova_compute[221550]: 2026-01-31 08:19:46.922 221554 DEBUG oslo_concurrency.lockutils [req-4aab6ea4-863e-4e7d-a27a-3a4789bcd03b req-f70d2a07-d851-4e52-aa78-534f85d3510f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:46 np0005603609 nova_compute[221550]: 2026-01-31 08:19:46.922 221554 DEBUG oslo_concurrency.lockutils [req-4aab6ea4-863e-4e7d-a27a-3a4789bcd03b req-f70d2a07-d851-4e52-aa78-534f85d3510f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:46 np0005603609 nova_compute[221550]: 2026-01-31 08:19:46.922 221554 DEBUG nova.compute.manager [req-4aab6ea4-863e-4e7d-a27a-3a4789bcd03b req-f70d2a07-d851-4e52-aa78-534f85d3510f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] No waiting events found dispatching network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:19:46 np0005603609 nova_compute[221550]: 2026-01-31 08:19:46.922 221554 WARNING nova.compute.manager [req-4aab6ea4-863e-4e7d-a27a-3a4789bcd03b req-f70d2a07-d851-4e52-aa78-534f85d3510f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received unexpected event network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Jan 31 03:19:47 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:19:47 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:19:47 np0005603609 nova_compute[221550]: 2026-01-31 08:19:47.349 221554 DEBUG oslo_concurrency.processutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43/disk.config 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:47 np0005603609 nova_compute[221550]: 2026-01-31 08:19:47.350 221554 INFO nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Deleting local config drive /var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43/disk.config because it was imported into RBD.#033[00m
Jan 31 03:19:47 np0005603609 virtqemud[221292]: End of file while reading data: Input/output error
Jan 31 03:19:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e333 e333: 3 total, 3 up, 3 in
Jan 31 03:19:47 np0005603609 virtqemud[221292]: End of file while reading data: Input/output error
Jan 31 03:19:47 np0005603609 NetworkManager[49064]: <info>  [1769847587.4177] manager: (tap1f25515c-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/278)
Jan 31 03:19:47 np0005603609 nova_compute[221550]: 2026-01-31 08:19:47.416 221554 DEBUG nova.storage.rbd_utils [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] creating snapshot(snap) on rbd image(5b5ac3f7-b093-4152-83a1-6cb8fa870d17) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:19:47 np0005603609 kernel: tap1f25515c-3e: entered promiscuous mode
Jan 31 03:19:47 np0005603609 ovn_controller[130359]: 2026-01-31T08:19:47Z|00589|binding|INFO|Claiming lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a for this chassis.
Jan 31 03:19:47 np0005603609 ovn_controller[130359]: 2026-01-31T08:19:47Z|00590|binding|INFO|1f25515c-3e82-445e-baad-bc0e3a0d6f1a: Claiming fa:16:3e:f6:68:cd 10.100.0.8
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.437 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:68:cd 10.100.0.8'], port_security=['fa:16:3e:f6:68:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '38deb482-bd85-4fdf-b2da-2725bffd8f43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a8345fd-717b-4084-912f-0c496810f08f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=1f25515c-3e82-445e-baad-bc0e3a0d6f1a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.439 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 1f25515c-3e82-445e-baad-bc0e3a0d6f1a in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 bound to our chassis#033[00m
Jan 31 03:19:47 np0005603609 ovn_controller[130359]: 2026-01-31T08:19:47Z|00591|binding|INFO|Setting lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a ovn-installed in OVS
Jan 31 03:19:47 np0005603609 ovn_controller[130359]: 2026-01-31T08:19:47Z|00592|binding|INFO|Setting lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a up in Southbound
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.442 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e03fc320-c87d-42d2-a772-ec94aeb05209#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.451 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d01309f9-ca8e-4c38-8fa0-4cf874a73541]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.452 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape03fc320-c1 in ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:19:47 np0005603609 systemd-udevd[278867]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.456 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape03fc320-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.456 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[35558d0c-93ef-4d03-b66e-4be6cb39d4db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.458 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c7917c73-9717-45e1-86d0-9930e1d1a784]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:47 np0005603609 systemd-machined[190912]: New machine qemu-72-instance-0000008f.
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.468 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[62825611-7e31-4fe8-84fa-e056236bcbe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:47 np0005603609 systemd[1]: Started Virtual Machine qemu-72-instance-0000008f.
Jan 31 03:19:47 np0005603609 NetworkManager[49064]: <info>  [1769847587.4722] device (tap1f25515c-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:19:47 np0005603609 NetworkManager[49064]: <info>  [1769847587.4728] device (tap1f25515c-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.482 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ef3e6516-7579-41b0-8db3-b4af7bffa66c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:47 np0005603609 nova_compute[221550]: 2026-01-31 08:19:47.503 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.506 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[e6777fde-d3a9-48ec-9b09-4b615d8118e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:19:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:47.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:19:47 np0005603609 NetworkManager[49064]: <info>  [1769847587.5135] manager: (tape03fc320-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/279)
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.512 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[68e77f64-533b-401b-ad86-c7b75e99c269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.542 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d39f0dcd-8115-4e53-80ba-e1c19ff037f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.545 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[4031dc72-9f55-4090-9544-90e84b8deecb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:47 np0005603609 NetworkManager[49064]: <info>  [1769847587.5617] device (tape03fc320-c0): carrier: link connected
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.563 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[36a49f2c-b365-4c8c-8484-fdc876366139]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.575 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a28eaa13-7cf3-4365-9814-b04e43429129]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape03fc320-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:22:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782726, 'reachable_time': 34934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278930, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.586 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[abc6b55a-770a-47a1-a240-3d343d339a4a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:2269'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782726, 'tstamp': 782726}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278931, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.598 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1be22a3d-ef53-4d3f-897c-c82ad6035ab4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape03fc320-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:22:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782726, 'reachable_time': 34934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278932, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.625 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6d0efb78-4fb4-4423-bb7d-1eb07bcf8769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:47.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.678 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a7deda14-dba8-4b16-a00b-e1e6f9b45c4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.680 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03fc320-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.680 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.681 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape03fc320-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:47 np0005603609 NetworkManager[49064]: <info>  [1769847587.6833] manager: (tape03fc320-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/280)
Jan 31 03:19:47 np0005603609 nova_compute[221550]: 2026-01-31 08:19:47.682 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:47 np0005603609 kernel: tape03fc320-c0: entered promiscuous mode
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.691 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape03fc320-c0, col_values=(('external_ids', {'iface-id': '075aefe0-13df-4a17-ae95-485ece950a10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:47 np0005603609 ovn_controller[130359]: 2026-01-31T08:19:47Z|00593|binding|INFO|Releasing lport 075aefe0-13df-4a17-ae95-485ece950a10 from this chassis (sb_readonly=0)
Jan 31 03:19:47 np0005603609 nova_compute[221550]: 2026-01-31 08:19:47.692 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:47 np0005603609 nova_compute[221550]: 2026-01-31 08:19:47.705 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.708 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e03fc320-c87d-42d2-a772-ec94aeb05209.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e03fc320-c87d-42d2-a772-ec94aeb05209.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:19:47 np0005603609 nova_compute[221550]: 2026-01-31 08:19:47.708 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.709 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[03c9d099-2294-49a6-8838-81c1b2bf9530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.709 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/e03fc320-c87d-42d2-a772-ec94aeb05209.pid.haproxy
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:19:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:19:47.710 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'env', 'PROCESS_TAG=haproxy-e03fc320-c87d-42d2-a772-ec94aeb05209', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e03fc320-c87d-42d2-a772-ec94aeb05209.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:19:47 np0005603609 nova_compute[221550]: 2026-01-31 08:19:47.818 221554 DEBUG nova.network.neutron [req-83c7929d-463b-4626-9fa2-b2113cdbfcc5 req-b0d5c269-4434-4092-ade4-c0d7133d1f45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Updated VIF entry in instance network info cache for port 1f25515c-3e82-445e-baad-bc0e3a0d6f1a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:19:47 np0005603609 nova_compute[221550]: 2026-01-31 08:19:47.819 221554 DEBUG nova.network.neutron [req-83c7929d-463b-4626-9fa2-b2113cdbfcc5 req-b0d5c269-4434-4092-ade4-c0d7133d1f45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Updating instance_info_cache with network_info: [{"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:19:47 np0005603609 nova_compute[221550]: 2026-01-31 08:19:47.853 221554 DEBUG oslo_concurrency.lockutils [req-83c7929d-463b-4626-9fa2-b2113cdbfcc5 req-b0d5c269-4434-4092-ade4-c0d7133d1f45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:19:48 np0005603609 podman[278967]: 2026-01-31 08:19:48.04981892 +0000 UTC m=+0.054146202 container create d5bc51e025db3ed9425524a1ed045883080819718c115b5bbcb186f35eb8f80a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:19:48 np0005603609 systemd[1]: Started libpod-conmon-d5bc51e025db3ed9425524a1ed045883080819718c115b5bbcb186f35eb8f80a.scope.
Jan 31 03:19:48 np0005603609 nova_compute[221550]: 2026-01-31 08:19:48.094 221554 DEBUG nova.compute.manager [req-4f25a4be-512f-4a12-ba4a-4ee7a7dafed4 req-20421510-f5b5-433d-99c6-eff06923c6a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:19:48 np0005603609 nova_compute[221550]: 2026-01-31 08:19:48.095 221554 DEBUG oslo_concurrency.lockutils [req-4f25a4be-512f-4a12-ba4a-4ee7a7dafed4 req-20421510-f5b5-433d-99c6-eff06923c6a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:48 np0005603609 nova_compute[221550]: 2026-01-31 08:19:48.095 221554 DEBUG oslo_concurrency.lockutils [req-4f25a4be-512f-4a12-ba4a-4ee7a7dafed4 req-20421510-f5b5-433d-99c6-eff06923c6a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:48 np0005603609 nova_compute[221550]: 2026-01-31 08:19:48.095 221554 DEBUG oslo_concurrency.lockutils [req-4f25a4be-512f-4a12-ba4a-4ee7a7dafed4 req-20421510-f5b5-433d-99c6-eff06923c6a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:48 np0005603609 nova_compute[221550]: 2026-01-31 08:19:48.095 221554 DEBUG nova.compute.manager [req-4f25a4be-512f-4a12-ba4a-4ee7a7dafed4 req-20421510-f5b5-433d-99c6-eff06923c6a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Processing event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:19:48 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:19:48 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94013593dfbf864a70e4a37700d8264e73b65e95bd55aa014d10c34f9454850e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:48 np0005603609 podman[278967]: 2026-01-31 08:19:48.022708688 +0000 UTC m=+0.027036030 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:19:48 np0005603609 podman[278967]: 2026-01-31 08:19:48.126497281 +0000 UTC m=+0.130824563 container init d5bc51e025db3ed9425524a1ed045883080819718c115b5bbcb186f35eb8f80a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:19:48 np0005603609 podman[278967]: 2026-01-31 08:19:48.132868984 +0000 UTC m=+0.137196266 container start d5bc51e025db3ed9425524a1ed045883080819718c115b5bbcb186f35eb8f80a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 03:19:48 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[278983]: [NOTICE]   (278987) : New worker (278989) forked
Jan 31 03:19:48 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[278983]: [NOTICE]   (278987) : Loading success.
Jan 31 03:19:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e334 e334: 3 total, 3 up, 3 in
Jan 31 03:19:48 np0005603609 nova_compute[221550]: 2026-01-31 08:19:48.614 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847588.6140804, 38deb482-bd85-4fdf-b2da-2725bffd8f43 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:19:48 np0005603609 nova_compute[221550]: 2026-01-31 08:19:48.614 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] VM Started (Lifecycle Event)#033[00m
Jan 31 03:19:48 np0005603609 nova_compute[221550]: 2026-01-31 08:19:48.616 221554 DEBUG nova.compute.manager [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:19:48 np0005603609 nova_compute[221550]: 2026-01-31 08:19:48.619 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:19:48 np0005603609 nova_compute[221550]: 2026-01-31 08:19:48.622 221554 INFO nova.virt.libvirt.driver [-] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Instance spawned successfully.#033[00m
Jan 31 03:19:48 np0005603609 nova_compute[221550]: 2026-01-31 08:19:48.622 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:19:48 np0005603609 nova_compute[221550]: 2026-01-31 08:19:48.938 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.066 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.068 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.068 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.068 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.069 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.069 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.069 221554 DEBUG nova.virt.libvirt.driver [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.124 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.125 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847588.6141624, 38deb482-bd85-4fdf-b2da-2725bffd8f43 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.125 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.192 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.195 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847588.6183388, 38deb482-bd85-4fdf-b2da-2725bffd8f43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.196 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.227 221554 INFO nova.compute.manager [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Took 16.77 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.227 221554 DEBUG nova.compute.manager [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.229 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.237 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.266 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.348 221554 INFO nova.compute.manager [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Took 17.92 seconds to build instance.#033[00m
Jan 31 03:19:49 np0005603609 nova_compute[221550]: 2026-01-31 08:19:49.377 221554 DEBUG oslo_concurrency.lockutils [None req-d352c585-48c3-4563-89af-acae39f0845c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:49.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:49.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:50 np0005603609 nova_compute[221550]: 2026-01-31 08:19:50.239 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:50 np0005603609 nova_compute[221550]: 2026-01-31 08:19:50.620 221554 DEBUG nova.compute.manager [req-ad6a0f72-34f6-4a5d-bf81-595a44880072 req-ff79c2dd-8931-48be-8c92-cb6cc06e6158 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:19:50 np0005603609 nova_compute[221550]: 2026-01-31 08:19:50.621 221554 DEBUG oslo_concurrency.lockutils [req-ad6a0f72-34f6-4a5d-bf81-595a44880072 req-ff79c2dd-8931-48be-8c92-cb6cc06e6158 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:50 np0005603609 nova_compute[221550]: 2026-01-31 08:19:50.621 221554 DEBUG oslo_concurrency.lockutils [req-ad6a0f72-34f6-4a5d-bf81-595a44880072 req-ff79c2dd-8931-48be-8c92-cb6cc06e6158 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:50 np0005603609 nova_compute[221550]: 2026-01-31 08:19:50.621 221554 DEBUG oslo_concurrency.lockutils [req-ad6a0f72-34f6-4a5d-bf81-595a44880072 req-ff79c2dd-8931-48be-8c92-cb6cc06e6158 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:50 np0005603609 nova_compute[221550]: 2026-01-31 08:19:50.622 221554 DEBUG nova.compute.manager [req-ad6a0f72-34f6-4a5d-bf81-595a44880072 req-ff79c2dd-8931-48be-8c92-cb6cc06e6158 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] No waiting events found dispatching network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:19:50 np0005603609 nova_compute[221550]: 2026-01-31 08:19:50.622 221554 WARNING nova.compute.manager [req-ad6a0f72-34f6-4a5d-bf81-595a44880072 req-ff79c2dd-8931-48be-8c92-cb6cc06e6158 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received unexpected event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a for instance with vm_state active and task_state None.#033[00m
Jan 31 03:19:50 np0005603609 nova_compute[221550]: 2026-01-31 08:19:50.702 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:51.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:19:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:51.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:19:52.001228) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847592001461, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 2307, "num_deletes": 266, "total_data_size": 5123233, "memory_usage": 5226368, "flush_reason": "Manual Compaction"}
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847592025868, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 3341001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57987, "largest_seqno": 60289, "table_properties": {"data_size": 3331502, "index_size": 5929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20313, "raw_average_key_size": 20, "raw_value_size": 3312244, "raw_average_value_size": 3383, "num_data_blocks": 257, "num_entries": 979, "num_filter_entries": 979, "num_deletions": 266, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847423, "oldest_key_time": 1769847423, "file_creation_time": 1769847592, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 24511 microseconds, and 4543 cpu microseconds.
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:19:52.025913) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 3341001 bytes OK
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:19:52.025932) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:19:52.027201) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:19:52.027215) EVENT_LOG_v1 {"time_micros": 1769847592027210, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:19:52.027233) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 5112902, prev total WAL file size 5112902, number of live WAL files 2.
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:19:52.028011) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303039' seq:72057594037927935, type:22 .. '6C6F676D0032323632' seq:0, type:0; will stop at (end)
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(3262KB)], [114(10MB)]
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847592028038, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 14180763, "oldest_snapshot_seqno": -1}
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 8647 keys, 14030136 bytes, temperature: kUnknown
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847592160301, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 14030136, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13970653, "index_size": 36779, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 223708, "raw_average_key_size": 25, "raw_value_size": 13814995, "raw_average_value_size": 1597, "num_data_blocks": 1452, "num_entries": 8647, "num_filter_entries": 8647, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769847592, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:19:52.160690) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 14030136 bytes
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:19:52.162316) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.0 rd, 105.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.3 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(8.4) write-amplify(4.2) OK, records in: 9193, records dropped: 546 output_compression: NoCompression
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:19:52.162336) EVENT_LOG_v1 {"time_micros": 1769847592162327, "job": 72, "event": "compaction_finished", "compaction_time_micros": 132469, "compaction_time_cpu_micros": 21452, "output_level": 6, "num_output_files": 1, "total_output_size": 14030136, "num_input_records": 9193, "num_output_records": 8647, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847592162924, "job": 72, "event": "table_file_deletion", "file_number": 116}
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847592164051, "job": 72, "event": "table_file_deletion", "file_number": 114}
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:19:52.027897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:19:52.164180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:19:52.164185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:19:52.164187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:19:52.164190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:19:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:19:52.164191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:19:52 np0005603609 nova_compute[221550]: 2026-01-31 08:19:52.262 221554 INFO nova.virt.libvirt.driver [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Snapshot image upload complete#033[00m
Jan 31 03:19:52 np0005603609 nova_compute[221550]: 2026-01-31 08:19:52.262 221554 DEBUG nova.compute.manager [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:19:52 np0005603609 nova_compute[221550]: 2026-01-31 08:19:52.373 221554 INFO nova.compute.manager [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Shelve offloading#033[00m
Jan 31 03:19:52 np0005603609 nova_compute[221550]: 2026-01-31 08:19:52.380 221554 INFO nova.virt.libvirt.driver [-] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Instance destroyed successfully.#033[00m
Jan 31 03:19:52 np0005603609 nova_compute[221550]: 2026-01-31 08:19:52.382 221554 DEBUG nova.compute.manager [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:19:52 np0005603609 nova_compute[221550]: 2026-01-31 08:19:52.385 221554 DEBUG oslo_concurrency.lockutils [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:19:52 np0005603609 nova_compute[221550]: 2026-01-31 08:19:52.386 221554 DEBUG oslo_concurrency.lockutils [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquired lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:19:52 np0005603609 nova_compute[221550]: 2026-01-31 08:19:52.386 221554 DEBUG nova.network.neutron [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:19:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:53.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:53 np0005603609 nova_compute[221550]: 2026-01-31 08:19:53.648 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:53.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:54 np0005603609 podman[279043]: 2026-01-31 08:19:54.155679734 +0000 UTC m=+0.044812097 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:19:54 np0005603609 podman[279042]: 2026-01-31 08:19:54.176823822 +0000 UTC m=+0.065684849 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:19:54 np0005603609 nova_compute[221550]: 2026-01-31 08:19:54.741 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847579.7400048, ac849bdd-07a0-459f-b207-2a3f239d0272 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:19:54 np0005603609 nova_compute[221550]: 2026-01-31 08:19:54.741 221554 INFO nova.compute.manager [-] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:19:54 np0005603609 nova_compute[221550]: 2026-01-31 08:19:54.807 221554 DEBUG nova.compute.manager [None req-0de793ac-f767-4c10-83d0-1eded6089ddf - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:19:54 np0005603609 nova_compute[221550]: 2026-01-31 08:19:54.811 221554 DEBUG nova.compute.manager [None req-0de793ac-f767-4c10-83d0-1eded6089ddf - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:19:54 np0005603609 nova_compute[221550]: 2026-01-31 08:19:54.853 221554 INFO nova.compute.manager [None req-0de793ac-f767-4c10-83d0-1eded6089ddf - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
Jan 31 03:19:54 np0005603609 nova_compute[221550]: 2026-01-31 08:19:54.900 221554 DEBUG nova.network.neutron [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Updating instance_info_cache with network_info: [{"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:19:54 np0005603609 nova_compute[221550]: 2026-01-31 08:19:54.942 221554 DEBUG oslo_concurrency.lockutils [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Releasing lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:19:55 np0005603609 nova_compute[221550]: 2026-01-31 08:19:55.240 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:55.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:55.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:55 np0005603609 nova_compute[221550]: 2026-01-31 08:19:55.704 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e335 e335: 3 total, 3 up, 3 in
Jan 31 03:19:57 np0005603609 nova_compute[221550]: 2026-01-31 08:19:57.387 221554 INFO nova.virt.libvirt.driver [-] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Instance destroyed successfully.#033[00m
Jan 31 03:19:57 np0005603609 nova_compute[221550]: 2026-01-31 08:19:57.388 221554 DEBUG nova.objects.instance [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'resources' on Instance uuid ac849bdd-07a0-459f-b207-2a3f239d0272 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:19:57 np0005603609 nova_compute[221550]: 2026-01-31 08:19:57.412 221554 DEBUG nova.virt.libvirt.vif [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:18:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-154682825',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-154682825',id=140,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMWO9k24zLH8F2gtUwSjTGss8rmi2WVpUqIaGSBNrUsu6yIJKWkJX4AL66VZ1mBnILCET2/Mvny3YbuR68kQ+I2GI5dWxVmFSmypAx4n5E/nzIwrdNA7P5kjqoFQFkL9tA==',key_name='tempest-keypair-572349883',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:19:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3b06982960ad4453b8e542cb6330835d',ramdisk_id='',reservation_id='r-1oh7rfdp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-332944999',owner_user_name='tempest-AttachVolumeShelveTestJSON-332944999-project-member',shelved_at='2026-01-31T08:19:52.262857',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='5b5ac3f7-b093-4152-83a1-6cb8fa870d17'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:19:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b153d2832404e5b9250422b70ba522d',uuid=ac849bdd-07a0-459f-b207-2a3f239d0272,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:19:57 np0005603609 nova_compute[221550]: 2026-01-31 08:19:57.412 221554 DEBUG nova.network.os_vif_util [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converting VIF {"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:19:57 np0005603609 nova_compute[221550]: 2026-01-31 08:19:57.413 221554 DEBUG nova.network.os_vif_util [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:73:fe,bridge_name='br-int',has_traffic_filtering=True,id=c754b929-2e4d-413c-90fa-92a743e1fc38,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc754b929-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:19:57 np0005603609 nova_compute[221550]: 2026-01-31 08:19:57.413 221554 DEBUG os_vif [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:73:fe,bridge_name='br-int',has_traffic_filtering=True,id=c754b929-2e4d-413c-90fa-92a743e1fc38,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc754b929-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:19:57 np0005603609 nova_compute[221550]: 2026-01-31 08:19:57.414 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:57 np0005603609 nova_compute[221550]: 2026-01-31 08:19:57.414 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc754b929-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:57 np0005603609 nova_compute[221550]: 2026-01-31 08:19:57.448 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:57 np0005603609 nova_compute[221550]: 2026-01-31 08:19:57.450 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:19:57 np0005603609 nova_compute[221550]: 2026-01-31 08:19:57.452 221554 INFO os_vif [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:73:fe,bridge_name='br-int',has_traffic_filtering=True,id=c754b929-2e4d-413c-90fa-92a743e1fc38,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc754b929-2e')#033[00m
Jan 31 03:19:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:57.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:57.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:58 np0005603609 nova_compute[221550]: 2026-01-31 08:19:58.184 221554 INFO nova.virt.libvirt.driver [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Deleting instance files /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272_del#033[00m
Jan 31 03:19:58 np0005603609 nova_compute[221550]: 2026-01-31 08:19:58.185 221554 INFO nova.virt.libvirt.driver [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Deletion of /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272_del complete#033[00m
Jan 31 03:19:58 np0005603609 nova_compute[221550]: 2026-01-31 08:19:58.305 221554 DEBUG nova.compute.manager [req-4408c33b-d418-45ed-a0c6-175aa2fc218b req-d7e4158b-2e88-442d-b500-9da59aeb0296 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received event network-changed-c754b929-2e4d-413c-90fa-92a743e1fc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:19:58 np0005603609 nova_compute[221550]: 2026-01-31 08:19:58.307 221554 DEBUG nova.compute.manager [req-4408c33b-d418-45ed-a0c6-175aa2fc218b req-d7e4158b-2e88-442d-b500-9da59aeb0296 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Refreshing instance network info cache due to event network-changed-c754b929-2e4d-413c-90fa-92a743e1fc38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:19:58 np0005603609 nova_compute[221550]: 2026-01-31 08:19:58.307 221554 DEBUG oslo_concurrency.lockutils [req-4408c33b-d418-45ed-a0c6-175aa2fc218b req-d7e4158b-2e88-442d-b500-9da59aeb0296 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:19:58 np0005603609 nova_compute[221550]: 2026-01-31 08:19:58.308 221554 DEBUG oslo_concurrency.lockutils [req-4408c33b-d418-45ed-a0c6-175aa2fc218b req-d7e4158b-2e88-442d-b500-9da59aeb0296 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:19:58 np0005603609 nova_compute[221550]: 2026-01-31 08:19:58.308 221554 DEBUG nova.network.neutron [req-4408c33b-d418-45ed-a0c6-175aa2fc218b req-d7e4158b-2e88-442d-b500-9da59aeb0296 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Refreshing network info cache for port c754b929-2e4d-413c-90fa-92a743e1fc38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:19:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:58 np0005603609 nova_compute[221550]: 2026-01-31 08:19:58.380 221554 INFO nova.scheduler.client.report [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Deleted allocations for instance ac849bdd-07a0-459f-b207-2a3f239d0272#033[00m
Jan 31 03:19:58 np0005603609 nova_compute[221550]: 2026-01-31 08:19:58.469 221554 DEBUG oslo_concurrency.lockutils [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:58 np0005603609 nova_compute[221550]: 2026-01-31 08:19:58.470 221554 DEBUG oslo_concurrency.lockutils [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:58 np0005603609 nova_compute[221550]: 2026-01-31 08:19:58.572 221554 DEBUG oslo_concurrency.processutils [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:58 np0005603609 nova_compute[221550]: 2026-01-31 08:19:58.996 221554 DEBUG oslo_concurrency.processutils [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:59 np0005603609 nova_compute[221550]: 2026-01-31 08:19:59.000 221554 DEBUG nova.compute.provider_tree [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:19:59 np0005603609 nova_compute[221550]: 2026-01-31 08:19:59.167 221554 DEBUG nova.scheduler.client.report [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:19:59 np0005603609 nova_compute[221550]: 2026-01-31 08:19:59.192 221554 DEBUG oslo_concurrency.lockutils [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:59 np0005603609 nova_compute[221550]: 2026-01-31 08:19:59.257 221554 DEBUG oslo_concurrency.lockutils [None req-74e30821-7530-4a75-b39b-1d9ee670584a 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 22.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:19:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:59.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:19:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:19:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:59.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:00 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 03:20:00 np0005603609 nova_compute[221550]: 2026-01-31 08:20:00.242 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:00 np0005603609 nova_compute[221550]: 2026-01-31 08:20:00.738 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:01.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:01 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:01Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:68:cd 10.100.0.8
Jan 31 03:20:01 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:01Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:68:cd 10.100.0.8
Jan 31 03:20:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:01.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:02 np0005603609 nova_compute[221550]: 2026-01-31 08:20:02.164 221554 DEBUG nova.network.neutron [req-4408c33b-d418-45ed-a0c6-175aa2fc218b req-d7e4158b-2e88-442d-b500-9da59aeb0296 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Updated VIF entry in instance network info cache for port c754b929-2e4d-413c-90fa-92a743e1fc38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:20:02 np0005603609 nova_compute[221550]: 2026-01-31 08:20:02.165 221554 DEBUG nova.network.neutron [req-4408c33b-d418-45ed-a0c6-175aa2fc218b req-d7e4158b-2e88-442d-b500-9da59aeb0296 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Updating instance_info_cache with network_info: [{"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": null, "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapc754b929-2e", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:20:02 np0005603609 nova_compute[221550]: 2026-01-31 08:20:02.202 221554 DEBUG oslo_concurrency.lockutils [req-4408c33b-d418-45ed-a0c6-175aa2fc218b req-d7e4158b-2e88-442d-b500-9da59aeb0296 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:20:02 np0005603609 nova_compute[221550]: 2026-01-31 08:20:02.448 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:03.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:03.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e336 e336: 3 total, 3 up, 3 in
Jan 31 03:20:05 np0005603609 nova_compute[221550]: 2026-01-31 08:20:05.244 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:05.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:20:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:05.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:20:07 np0005603609 nova_compute[221550]: 2026-01-31 08:20:07.449 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:07.512 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:07.512 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:07.513 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:20:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:07.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:20:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:07.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:09 np0005603609 nova_compute[221550]: 2026-01-31 08:20:09.067 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "ac849bdd-07a0-459f-b207-2a3f239d0272" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:09 np0005603609 nova_compute[221550]: 2026-01-31 08:20:09.068 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:09 np0005603609 nova_compute[221550]: 2026-01-31 08:20:09.068 221554 INFO nova.compute.manager [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Unshelving#033[00m
Jan 31 03:20:09 np0005603609 nova_compute[221550]: 2026-01-31 08:20:09.294 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:09 np0005603609 nova_compute[221550]: 2026-01-31 08:20:09.294 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:09 np0005603609 nova_compute[221550]: 2026-01-31 08:20:09.299 221554 DEBUG nova.objects.instance [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'pci_requests' on Instance uuid ac849bdd-07a0-459f-b207-2a3f239d0272 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:09 np0005603609 nova_compute[221550]: 2026-01-31 08:20:09.321 221554 DEBUG nova.objects.instance [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'numa_topology' on Instance uuid ac849bdd-07a0-459f-b207-2a3f239d0272 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:09 np0005603609 nova_compute[221550]: 2026-01-31 08:20:09.338 221554 DEBUG nova.virt.hardware [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:20:09 np0005603609 nova_compute[221550]: 2026-01-31 08:20:09.338 221554 INFO nova.compute.claims [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:20:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:09.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:09 np0005603609 nova_compute[221550]: 2026-01-31 08:20:09.584 221554 DEBUG oslo_concurrency.processutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:09.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:20:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3945026701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:20:09 np0005603609 nova_compute[221550]: 2026-01-31 08:20:09.997 221554 DEBUG oslo_concurrency.processutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:10 np0005603609 nova_compute[221550]: 2026-01-31 08:20:10.003 221554 DEBUG nova.compute.provider_tree [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:20:10 np0005603609 nova_compute[221550]: 2026-01-31 08:20:10.085 221554 DEBUG nova.scheduler.client.report [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:20:10 np0005603609 nova_compute[221550]: 2026-01-31 08:20:10.113 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:10 np0005603609 nova_compute[221550]: 2026-01-31 08:20:10.247 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:10 np0005603609 nova_compute[221550]: 2026-01-31 08:20:10.420 221554 INFO nova.network.neutron [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Updating port c754b929-2e4d-413c-90fa-92a743e1fc38 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:20:10.818250) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847610818551, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 468, "num_deletes": 252, "total_data_size": 562841, "memory_usage": 572792, "flush_reason": "Manual Compaction"}
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847610919443, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 370738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60294, "largest_seqno": 60757, "table_properties": {"data_size": 368140, "index_size": 634, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6442, "raw_average_key_size": 19, "raw_value_size": 362903, "raw_average_value_size": 1083, "num_data_blocks": 28, "num_entries": 335, "num_filter_entries": 335, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847592, "oldest_key_time": 1769847592, "file_creation_time": 1769847610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 101250 microseconds, and 1547 cpu microseconds.
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:20:10.919506) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 370738 bytes OK
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:20:10.919531) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:20:10.925090) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:20:10.925140) EVENT_LOG_v1 {"time_micros": 1769847610925127, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:20:10.925172) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 559985, prev total WAL file size 559985, number of live WAL files 2.
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:20:10.926312) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(362KB)], [117(13MB)]
Jan 31 03:20:10 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847610926365, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 14400874, "oldest_snapshot_seqno": -1}
Jan 31 03:20:11 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 8465 keys, 12500005 bytes, temperature: kUnknown
Jan 31 03:20:11 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847611073910, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 12500005, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12443004, "index_size": 34745, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 220647, "raw_average_key_size": 26, "raw_value_size": 12291904, "raw_average_value_size": 1452, "num_data_blocks": 1359, "num_entries": 8465, "num_filter_entries": 8465, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769847610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:20:11 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:20:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:20:11.074795) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 12500005 bytes
Jan 31 03:20:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:20:11.090917) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.2 rd, 84.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.4 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(72.6) write-amplify(33.7) OK, records in: 8982, records dropped: 517 output_compression: NoCompression
Jan 31 03:20:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:20:11.091001) EVENT_LOG_v1 {"time_micros": 1769847611090981, "job": 74, "event": "compaction_finished", "compaction_time_micros": 148172, "compaction_time_cpu_micros": 21918, "output_level": 6, "num_output_files": 1, "total_output_size": 12500005, "num_input_records": 8982, "num_output_records": 8465, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:20:11 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:20:11 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847611091370, "job": 74, "event": "table_file_deletion", "file_number": 119}
Jan 31 03:20:11 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:20:11 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847611093127, "job": 74, "event": "table_file_deletion", "file_number": 117}
Jan 31 03:20:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:20:10.926248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:20:11.093225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:20:11.093232) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:20:11.093234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:20:11.093236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:20:11.093238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:11.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:11.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e337 e337: 3 total, 3 up, 3 in
Jan 31 03:20:12 np0005603609 nova_compute[221550]: 2026-01-31 08:20:12.353 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:20:12 np0005603609 nova_compute[221550]: 2026-01-31 08:20:12.354 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquired lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:20:12 np0005603609 nova_compute[221550]: 2026-01-31 08:20:12.354 221554 DEBUG nova.network.neutron [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:20:12 np0005603609 nova_compute[221550]: 2026-01-31 08:20:12.494 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:13 np0005603609 nova_compute[221550]: 2026-01-31 08:20:13.394 221554 DEBUG nova.compute.manager [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received event network-changed-c754b929-2e4d-413c-90fa-92a743e1fc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:13 np0005603609 nova_compute[221550]: 2026-01-31 08:20:13.394 221554 DEBUG nova.compute.manager [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Refreshing instance network info cache due to event network-changed-c754b929-2e4d-413c-90fa-92a743e1fc38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:20:13 np0005603609 nova_compute[221550]: 2026-01-31 08:20:13.395 221554 DEBUG oslo_concurrency.lockutils [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:20:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:13.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:20:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:13.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:20:14 np0005603609 nova_compute[221550]: 2026-01-31 08:20:14.439 221554 DEBUG nova.network.neutron [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Updating instance_info_cache with network_info: [{"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:20:14 np0005603609 nova_compute[221550]: 2026-01-31 08:20:14.465 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Releasing lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:20:14 np0005603609 nova_compute[221550]: 2026-01-31 08:20:14.467 221554 DEBUG nova.virt.libvirt.driver [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:20:14 np0005603609 nova_compute[221550]: 2026-01-31 08:20:14.468 221554 INFO nova.virt.libvirt.driver [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Creating image(s)#033[00m
Jan 31 03:20:14 np0005603609 nova_compute[221550]: 2026-01-31 08:20:14.499 221554 DEBUG nova.storage.rbd_utils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image ac849bdd-07a0-459f-b207-2a3f239d0272_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:14 np0005603609 nova_compute[221550]: 2026-01-31 08:20:14.504 221554 DEBUG nova.objects.instance [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'trusted_certs' on Instance uuid ac849bdd-07a0-459f-b207-2a3f239d0272 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:14 np0005603609 nova_compute[221550]: 2026-01-31 08:20:14.505 221554 DEBUG oslo_concurrency.lockutils [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:20:14 np0005603609 nova_compute[221550]: 2026-01-31 08:20:14.506 221554 DEBUG nova.network.neutron [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Refreshing network info cache for port c754b929-2e4d-413c-90fa-92a743e1fc38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:20:14 np0005603609 nova_compute[221550]: 2026-01-31 08:20:14.564 221554 DEBUG nova.storage.rbd_utils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image ac849bdd-07a0-459f-b207-2a3f239d0272_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:14 np0005603609 nova_compute[221550]: 2026-01-31 08:20:14.589 221554 DEBUG nova.storage.rbd_utils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image ac849bdd-07a0-459f-b207-2a3f239d0272_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:14 np0005603609 nova_compute[221550]: 2026-01-31 08:20:14.593 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "3d0fe2fe32e77971f344385f5ad5b596480b7e82" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:14 np0005603609 nova_compute[221550]: 2026-01-31 08:20:14.594 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "3d0fe2fe32e77971f344385f5ad5b596480b7e82" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:15 np0005603609 nova_compute[221550]: 2026-01-31 08:20:15.084 221554 DEBUG nova.virt.libvirt.imagebackend [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/5b5ac3f7-b093-4152-83a1-6cb8fa870d17/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/5b5ac3f7-b093-4152-83a1-6cb8fa870d17/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 31 03:20:15 np0005603609 nova_compute[221550]: 2026-01-31 08:20:15.222 221554 DEBUG nova.virt.libvirt.imagebackend [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Selected location: {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/5b5ac3f7-b093-4152-83a1-6cb8fa870d17/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 31 03:20:15 np0005603609 nova_compute[221550]: 2026-01-31 08:20:15.223 221554 DEBUG nova.storage.rbd_utils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] cloning images/5b5ac3f7-b093-4152-83a1-6cb8fa870d17@snap to None/ac849bdd-07a0-459f-b207-2a3f239d0272_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:20:15 np0005603609 nova_compute[221550]: 2026-01-31 08:20:15.408 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:15.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:15.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:16 np0005603609 nova_compute[221550]: 2026-01-31 08:20:16.284 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "3d0fe2fe32e77971f344385f5ad5b596480b7e82" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:16 np0005603609 nova_compute[221550]: 2026-01-31 08:20:16.490 221554 DEBUG nova.objects.instance [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'migration_context' on Instance uuid ac849bdd-07a0-459f-b207-2a3f239d0272 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:16 np0005603609 nova_compute[221550]: 2026-01-31 08:20:16.860 221554 DEBUG nova.storage.rbd_utils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] flattening vms/ac849bdd-07a0-459f-b207-2a3f239d0272_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.166 221554 DEBUG nova.virt.libvirt.driver [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Image rbd:vms/ac849bdd-07a0-459f-b207-2a3f239d0272_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.166 221554 DEBUG nova.virt.libvirt.driver [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.167 221554 DEBUG nova.virt.libvirt.driver [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Ensure instance console log exists: /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.167 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.167 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.167 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.169 221554 DEBUG nova.virt.libvirt.driver [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Start _get_guest_xml network_info=[{"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T08:19:36Z,direct_url=<?>,disk_format='raw',id=5b5ac3f7-b093-4152-83a1-6cb8fa870d17,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-154682825-shelved',owner='3b06982960ad4453b8e542cb6330835d',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T08:19:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.172 221554 WARNING nova.virt.libvirt.driver [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.175 221554 DEBUG nova.virt.libvirt.host [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.176 221554 DEBUG nova.virt.libvirt.host [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.182 221554 DEBUG nova.virt.libvirt.host [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.182 221554 DEBUG nova.virt.libvirt.host [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.183 221554 DEBUG nova.virt.libvirt.driver [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.183 221554 DEBUG nova.virt.hardware [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T08:19:36Z,direct_url=<?>,disk_format='raw',id=5b5ac3f7-b093-4152-83a1-6cb8fa870d17,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-154682825-shelved',owner='3b06982960ad4453b8e542cb6330835d',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T08:19:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.184 221554 DEBUG nova.virt.hardware [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.184 221554 DEBUG nova.virt.hardware [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.184 221554 DEBUG nova.virt.hardware [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.184 221554 DEBUG nova.virt.hardware [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.184 221554 DEBUG nova.virt.hardware [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.184 221554 DEBUG nova.virt.hardware [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.185 221554 DEBUG nova.virt.hardware [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.185 221554 DEBUG nova.virt.hardware [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.185 221554 DEBUG nova.virt.hardware [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.185 221554 DEBUG nova.virt.hardware [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.185 221554 DEBUG nova.objects.instance [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'vcpu_model' on Instance uuid ac849bdd-07a0-459f-b207-2a3f239d0272 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.200 221554 DEBUG oslo_concurrency.processutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.496 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:17.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:20:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1307699288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.641 221554 DEBUG oslo_concurrency.processutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.672 221554 DEBUG nova.storage.rbd_utils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image ac849bdd-07a0-459f-b207-2a3f239d0272_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.678 221554 DEBUG oslo_concurrency.processutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:20:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:17.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.803 221554 DEBUG nova.network.neutron [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Updated VIF entry in instance network info cache for port c754b929-2e4d-413c-90fa-92a743e1fc38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.804 221554 DEBUG nova.network.neutron [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Updating instance_info_cache with network_info: [{"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:20:17 np0005603609 nova_compute[221550]: 2026-01-31 08:20:17.879 221554 DEBUG oslo_concurrency.lockutils [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-ac849bdd-07a0-459f-b207-2a3f239d0272" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:20:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:20:18 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2106032858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.083 221554 DEBUG oslo_concurrency.processutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.084 221554 DEBUG nova.virt.libvirt.vif [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:18:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-154682825',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-154682825',id=140,image_ref='5b5ac3f7-b093-4152-83a1-6cb8fa870d17',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-572349883',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:19:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='3b06982960ad4453b8e542cb6330835d',ramdisk_id='',reservation_id='r-1oh7rfdp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-332944999',owner_user_name='tempest-AttachVolumeShelveTestJSON-332944999-project-member',shelved_at='2026-01-31T08:19:52.262857',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='5b5ac3f7-b093-4152-83a1-6cb8fa870d17'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:20:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b153d2832404e5b9250422b70ba522d',uuid=ac849bdd-07a0-459f-b207-2a3f239d0272,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.084 221554 DEBUG nova.network.os_vif_util [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converting VIF {"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.085 221554 DEBUG nova.network.os_vif_util [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:73:fe,bridge_name='br-int',has_traffic_filtering=True,id=c754b929-2e4d-413c-90fa-92a743e1fc38,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc754b929-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.086 221554 DEBUG nova.objects.instance [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'pci_devices' on Instance uuid ac849bdd-07a0-459f-b207-2a3f239d0272 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.121 221554 DEBUG nova.virt.libvirt.driver [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  <uuid>ac849bdd-07a0-459f-b207-2a3f239d0272</uuid>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  <name>instance-0000008c</name>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <nova:name>tempest-AttachVolumeShelveTestJSON-server-154682825</nova:name>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:20:17</nova:creationTime>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        <nova:user uuid="3b153d2832404e5b9250422b70ba522d">tempest-AttachVolumeShelveTestJSON-332944999-project-member</nova:user>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        <nova:project uuid="3b06982960ad4453b8e542cb6330835d">tempest-AttachVolumeShelveTestJSON-332944999</nova:project>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="5b5ac3f7-b093-4152-83a1-6cb8fa870d17"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        <nova:port uuid="c754b929-2e4d-413c-90fa-92a743e1fc38">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <entry name="serial">ac849bdd-07a0-459f-b207-2a3f239d0272</entry>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <entry name="uuid">ac849bdd-07a0-459f-b207-2a3f239d0272</entry>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/ac849bdd-07a0-459f-b207-2a3f239d0272_disk">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/ac849bdd-07a0-459f-b207-2a3f239d0272_disk.config">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:41:73:fe"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <target dev="tapc754b929-2e"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272/console.log" append="off"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <input type="keyboard" bus="usb"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:20:18 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:20:18 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:20:18 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:20:18 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.122 221554 DEBUG nova.compute.manager [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Preparing to wait for external event network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.123 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.123 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.124 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.124 221554 DEBUG nova.virt.libvirt.vif [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:18:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-154682825',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-154682825',id=140,image_ref='5b5ac3f7-b093-4152-83a1-6cb8fa870d17',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-572349883',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:19:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='3b06982960ad4453b8e542cb6330835d',ramdisk_id='',reservation_id='r-1oh7rfdp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-332944999',owner_user_name='tempest-AttachVolumeShelveTestJSON-332944999-project-member',shelved_at='2026-01-31T08:19:52.262857',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='5b5ac3f7-b093-4152-83a1-6cb8fa870d17'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:20:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b153d2832404e5b9250422b70ba522d',uuid=ac849bdd-07a0-459f-b207-2a3f239d0272,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.125 221554 DEBUG nova.network.os_vif_util [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converting VIF {"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.125 221554 DEBUG nova.network.os_vif_util [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:73:fe,bridge_name='br-int',has_traffic_filtering=True,id=c754b929-2e4d-413c-90fa-92a743e1fc38,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc754b929-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.126 221554 DEBUG os_vif [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:73:fe,bridge_name='br-int',has_traffic_filtering=True,id=c754b929-2e4d-413c-90fa-92a743e1fc38,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc754b929-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.126 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.127 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.127 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.130 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.130 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc754b929-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.130 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc754b929-2e, col_values=(('external_ids', {'iface-id': 'c754b929-2e4d-413c-90fa-92a743e1fc38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:73:fe', 'vm-uuid': 'ac849bdd-07a0-459f-b207-2a3f239d0272'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.132 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:18 np0005603609 NetworkManager[49064]: <info>  [1769847618.1327] manager: (tapc754b929-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.134 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.136 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.137 221554 INFO os_vif [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:73:fe,bridge_name='br-int',has_traffic_filtering=True,id=c754b929-2e4d-413c-90fa-92a743e1fc38,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc754b929-2e')#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.252 221554 DEBUG nova.virt.libvirt.driver [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.253 221554 DEBUG nova.virt.libvirt.driver [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.254 221554 DEBUG nova.virt.libvirt.driver [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] No VIF found with MAC fa:16:3e:41:73:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.255 221554 INFO nova.virt.libvirt.driver [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Using config drive#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.289 221554 DEBUG nova.storage.rbd_utils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image ac849bdd-07a0-459f-b207-2a3f239d0272_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.315 221554 DEBUG nova.objects.instance [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'ec2_ids' on Instance uuid ac849bdd-07a0-459f-b207-2a3f239d0272 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.403 221554 DEBUG nova.objects.instance [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'keypairs' on Instance uuid ac849bdd-07a0-459f-b207-2a3f239d0272 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.963 221554 INFO nova.virt.libvirt.driver [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Creating config drive at /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272/disk.config#033[00m
Jan 31 03:20:18 np0005603609 nova_compute[221550]: 2026-01-31 08:20:18.968 221554 DEBUG oslo_concurrency.processutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpom99cxkf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.103 221554 DEBUG oslo_concurrency.processutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpom99cxkf" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.124 221554 DEBUG nova.storage.rbd_utils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] rbd image ac849bdd-07a0-459f-b207-2a3f239d0272_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.127 221554 DEBUG oslo_concurrency.processutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272/disk.config ac849bdd-07a0-459f-b207-2a3f239d0272_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.290 221554 DEBUG oslo_concurrency.processutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272/disk.config ac849bdd-07a0-459f-b207-2a3f239d0272_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.291 221554 INFO nova.virt.libvirt.driver [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Deleting local config drive /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272/disk.config because it was imported into RBD.#033[00m
Jan 31 03:20:19 np0005603609 kernel: tapc754b929-2e: entered promiscuous mode
Jan 31 03:20:19 np0005603609 NetworkManager[49064]: <info>  [1769847619.3472] manager: (tapc754b929-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/282)
Jan 31 03:20:19 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:19Z|00594|binding|INFO|Claiming lport c754b929-2e4d-413c-90fa-92a743e1fc38 for this chassis.
Jan 31 03:20:19 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:19Z|00595|binding|INFO|c754b929-2e4d-413c-90fa-92a743e1fc38: Claiming fa:16:3e:41:73:fe 10.100.0.4
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.348 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.356 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:73:fe 10.100.0.4'], port_security=['fa:16:3e:41:73:fe 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ac849bdd-07a0-459f-b207-2a3f239d0272', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b47e290-9853-478f-86cb-c8ea73119a97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b06982960ad4453b8e542cb6330835d', 'neutron:revision_number': '7', 'neutron:security_group_ids': '9238371a-94ed-45d8-9126-02012597fcaa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46c631ec-3e4c-4c27-ae32-3645d3f29853, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=c754b929-2e4d-413c-90fa-92a743e1fc38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:20:19 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:19Z|00596|binding|INFO|Setting lport c754b929-2e4d-413c-90fa-92a743e1fc38 ovn-installed in OVS
Jan 31 03:20:19 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:19Z|00597|binding|INFO|Setting lport c754b929-2e4d-413c-90fa-92a743e1fc38 up in Southbound
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.358 140058 INFO neutron.agent.ovn.metadata.agent [-] Port c754b929-2e4d-413c-90fa-92a743e1fc38 in datapath 2b47e290-9853-478f-86cb-c8ea73119a97 bound to our chassis#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.359 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.361 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.362 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b47e290-9853-478f-86cb-c8ea73119a97#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.373 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[753eb6b7-ce77-4667-9d56-13e8cc81a886]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.374 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b47e290-91 in ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:20:19 np0005603609 systemd-machined[190912]: New machine qemu-73-instance-0000008c.
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.377 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b47e290-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.377 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8a89d67b-0981-4cc3-9f55-bf4980d2ea4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.378 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2cec7bc8-c60d-4ea0-885c-12b09b0a4466]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.388 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[354eddbe-9270-4e23-a959-eca12f41415a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:19 np0005603609 systemd[1]: Started Virtual Machine qemu-73-instance-0000008c.
Jan 31 03:20:19 np0005603609 systemd-udevd[279504]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.399 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[62bec567-9143-4e09-9336-70a5ea3416b3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:19 np0005603609 NetworkManager[49064]: <info>  [1769847619.4120] device (tapc754b929-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:20:19 np0005603609 NetworkManager[49064]: <info>  [1769847619.4128] device (tapc754b929-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.423 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[8a258f2f-dbc8-48a3-8f9e-7b2a9e66a345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.428 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa8a25c-dcd2-4253-9dad-fc5f25cd1301]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:19 np0005603609 NetworkManager[49064]: <info>  [1769847619.4295] manager: (tap2b47e290-90): new Veth device (/org/freedesktop/NetworkManager/Devices/283)
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.454 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[1f29abfd-2755-4218-9338-ee2b3537e32d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.457 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c052b9ca-6910-4855-ab09-41e758ed5eff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:19 np0005603609 NetworkManager[49064]: <info>  [1769847619.4754] device (tap2b47e290-90): carrier: link connected
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.482 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3d1d1c-f73e-4fd8-9be8-020d236e0f7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.495 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[18c56329-ec26-4a15-a4a8-52ba8e6df9dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b47e290-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:d6:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785917, 'reachable_time': 16854, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279534, 'error': None, 'target': 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.505 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[05d01c22-87d8-4eb7-aa45-5fe7a2f2bef0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:d61c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785917, 'tstamp': 785917}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279535, 'error': None, 'target': 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.517 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0bcdcb3f-2b92-45b3-985d-b0560446cb9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b47e290-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:d6:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785917, 'reachable_time': 16854, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279536, 'error': None, 'target': 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.535 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8d020465-c280-42a0-88ce-0a68650c75df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:19.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.582 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[81fb8858-34be-4b44-ac0b-c33872d555b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.583 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b47e290-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.583 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.584 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b47e290-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.585 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:19 np0005603609 NetworkManager[49064]: <info>  [1769847619.5860] manager: (tap2b47e290-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/284)
Jan 31 03:20:19 np0005603609 kernel: tap2b47e290-90: entered promiscuous mode
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.588 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.589 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b47e290-90, col_values=(('external_ids', {'iface-id': '4fadf8e2-21f6-4df7-9cc2-be518280ee18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.591 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:19 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:19Z|00598|binding|INFO|Releasing lport 4fadf8e2-21f6-4df7-9cc2-be518280ee18 from this chassis (sb_readonly=0)
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.591 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.592 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b47e290-9853-478f-86cb-c8ea73119a97.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b47e290-9853-478f-86cb-c8ea73119a97.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.595 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bb05b760-4518-489f-a952-2728248b262c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.595 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.596 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-2b47e290-9853-478f-86cb-c8ea73119a97
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/2b47e290-9853-478f-86cb-c8ea73119a97.pid.haproxy
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 2b47e290-9853-478f-86cb-c8ea73119a97
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:20:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:19.596 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'env', 'PROCESS_TAG=haproxy-2b47e290-9853-478f-86cb-c8ea73119a97', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b47e290-9853-478f-86cb-c8ea73119a97.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.707 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847619.7067316, ac849bdd-07a0-459f-b207-2a3f239d0272 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.707 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] VM Started (Lifecycle Event)#033[00m
Jan 31 03:20:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:19.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.744 221554 DEBUG nova.compute.manager [req-930dbc00-8313-4f22-b2ac-5ace0d31f2a6 req-6f0beb2c-8d36-4f2e-af51-adde243f3e36 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received event network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.744 221554 DEBUG oslo_concurrency.lockutils [req-930dbc00-8313-4f22-b2ac-5ace0d31f2a6 req-6f0beb2c-8d36-4f2e-af51-adde243f3e36 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.745 221554 DEBUG oslo_concurrency.lockutils [req-930dbc00-8313-4f22-b2ac-5ace0d31f2a6 req-6f0beb2c-8d36-4f2e-af51-adde243f3e36 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.745 221554 DEBUG oslo_concurrency.lockutils [req-930dbc00-8313-4f22-b2ac-5ace0d31f2a6 req-6f0beb2c-8d36-4f2e-af51-adde243f3e36 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.745 221554 DEBUG nova.compute.manager [req-930dbc00-8313-4f22-b2ac-5ace0d31f2a6 req-6f0beb2c-8d36-4f2e-af51-adde243f3e36 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Processing event network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.746 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.747 221554 DEBUG nova.compute.manager [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.752 221554 DEBUG nova.virt.libvirt.driver [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.753 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.758 221554 INFO nova.virt.libvirt.driver [-] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Instance spawned successfully.#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.876 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.876 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847619.7068932, ac849bdd-07a0-459f-b207-2a3f239d0272 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.877 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.909 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.913 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847619.7506876, ac849bdd-07a0-459f-b207-2a3f239d0272 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.913 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.957 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.961 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:20:19 np0005603609 podman[279609]: 2026-01-31 08:20:19.967608851 +0000 UTC m=+0.059935230 container create 3d496ba401911c75353dae09bb589ae93b6ede0e22ec8305b00461c0f61fda9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:20:19 np0005603609 nova_compute[221550]: 2026-01-31 08:20:19.992 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:20:20 np0005603609 systemd[1]: Started libpod-conmon-3d496ba401911c75353dae09bb589ae93b6ede0e22ec8305b00461c0f61fda9b.scope.
Jan 31 03:20:20 np0005603609 podman[279609]: 2026-01-31 08:20:19.939442955 +0000 UTC m=+0.031769354 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:20:20 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:20:20 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9108a03c1777ba336e0b96628a229cb3b1681804132ebec7dcb02e3d27ac47d0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:20 np0005603609 podman[279609]: 2026-01-31 08:20:20.073033613 +0000 UTC m=+0.165360032 container init 3d496ba401911c75353dae09bb589ae93b6ede0e22ec8305b00461c0f61fda9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:20:20 np0005603609 podman[279609]: 2026-01-31 08:20:20.078522474 +0000 UTC m=+0.170848853 container start 3d496ba401911c75353dae09bb589ae93b6ede0e22ec8305b00461c0f61fda9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 03:20:20 np0005603609 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[279624]: [NOTICE]   (279628) : New worker (279630) forked
Jan 31 03:20:20 np0005603609 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[279624]: [NOTICE]   (279628) : Loading success.
Jan 31 03:20:20 np0005603609 nova_compute[221550]: 2026-01-31 08:20:20.251 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e338 e338: 3 total, 3 up, 3 in
Jan 31 03:20:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:21.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:21 np0005603609 nova_compute[221550]: 2026-01-31 08:20:21.614 221554 DEBUG nova.compute.manager [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:20:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:21.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:20:21 np0005603609 nova_compute[221550]: 2026-01-31 08:20:21.919 221554 DEBUG oslo_concurrency.lockutils [None req-6d9aa568-fa21-4f4c-8609-8e33b959a24f 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:22 np0005603609 nova_compute[221550]: 2026-01-31 08:20:22.000 221554 DEBUG nova.compute.manager [req-9e634807-aec2-4128-b13c-99afe709908f req-fbb94d3e-a0f4-4f15-b91d-47d826a7c8e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received event network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:22 np0005603609 nova_compute[221550]: 2026-01-31 08:20:22.001 221554 DEBUG oslo_concurrency.lockutils [req-9e634807-aec2-4128-b13c-99afe709908f req-fbb94d3e-a0f4-4f15-b91d-47d826a7c8e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:22 np0005603609 nova_compute[221550]: 2026-01-31 08:20:22.001 221554 DEBUG oslo_concurrency.lockutils [req-9e634807-aec2-4128-b13c-99afe709908f req-fbb94d3e-a0f4-4f15-b91d-47d826a7c8e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:22 np0005603609 nova_compute[221550]: 2026-01-31 08:20:22.001 221554 DEBUG oslo_concurrency.lockutils [req-9e634807-aec2-4128-b13c-99afe709908f req-fbb94d3e-a0f4-4f15-b91d-47d826a7c8e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:22 np0005603609 nova_compute[221550]: 2026-01-31 08:20:22.001 221554 DEBUG nova.compute.manager [req-9e634807-aec2-4128-b13c-99afe709908f req-fbb94d3e-a0f4-4f15-b91d-47d826a7c8e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] No waiting events found dispatching network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:20:22 np0005603609 nova_compute[221550]: 2026-01-31 08:20:22.002 221554 WARNING nova.compute.manager [req-9e634807-aec2-4128-b13c-99afe709908f req-fbb94d3e-a0f4-4f15-b91d-47d826a7c8e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received unexpected event network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:20:23 np0005603609 nova_compute[221550]: 2026-01-31 08:20:23.134 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:23 np0005603609 nova_compute[221550]: 2026-01-31 08:20:23.495 221554 INFO nova.compute.manager [None req-4e021a73-13d5-4662-8387-38c661792db5 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Pausing#033[00m
Jan 31 03:20:23 np0005603609 nova_compute[221550]: 2026-01-31 08:20:23.496 221554 DEBUG nova.objects.instance [None req-4e021a73-13d5-4662-8387-38c661792db5 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'flavor' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:23 np0005603609 nova_compute[221550]: 2026-01-31 08:20:23.547 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847623.5475974, 38deb482-bd85-4fdf-b2da-2725bffd8f43 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:23 np0005603609 nova_compute[221550]: 2026-01-31 08:20:23.547 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:20:23 np0005603609 nova_compute[221550]: 2026-01-31 08:20:23.549 221554 DEBUG nova.compute.manager [None req-4e021a73-13d5-4662-8387-38c661792db5 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:23.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:23 np0005603609 nova_compute[221550]: 2026-01-31 08:20:23.659 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:23 np0005603609 nova_compute[221550]: 2026-01-31 08:20:23.662 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:20:23 np0005603609 nova_compute[221550]: 2026-01-31 08:20:23.684 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Jan 31 03:20:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:20:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:23.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:20:24 np0005603609 nova_compute[221550]: 2026-01-31 08:20:24.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:25 np0005603609 podman[279641]: 2026-01-31 08:20:25.163282338 +0000 UTC m=+0.049093450 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:20:25 np0005603609 podman[279640]: 2026-01-31 08:20:25.191809843 +0000 UTC m=+0.077624935 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:20:25 np0005603609 nova_compute[221550]: 2026-01-31 08:20:25.253 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:25.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:25.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e339 e339: 3 total, 3 up, 3 in
Jan 31 03:20:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:27.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:27.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:28 np0005603609 nova_compute[221550]: 2026-01-31 08:20:28.004 221554 INFO nova.compute.manager [None req-7725748a-f654-48c0-9d97-43d2a2ce1673 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Unpausing#033[00m
Jan 31 03:20:28 np0005603609 nova_compute[221550]: 2026-01-31 08:20:28.005 221554 DEBUG nova.objects.instance [None req-7725748a-f654-48c0-9d97-43d2a2ce1673 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'flavor' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:28 np0005603609 nova_compute[221550]: 2026-01-31 08:20:28.034 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847628.0340588, 38deb482-bd85-4fdf-b2da-2725bffd8f43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:28 np0005603609 nova_compute[221550]: 2026-01-31 08:20:28.035 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:20:28 np0005603609 virtqemud[221292]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:20:28 np0005603609 nova_compute[221550]: 2026-01-31 08:20:28.040 221554 DEBUG nova.virt.libvirt.guest [None req-7725748a-f654-48c0-9d97-43d2a2ce1673 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 03:20:28 np0005603609 nova_compute[221550]: 2026-01-31 08:20:28.041 221554 DEBUG nova.compute.manager [None req-7725748a-f654-48c0-9d97-43d2a2ce1673 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:28 np0005603609 nova_compute[221550]: 2026-01-31 08:20:28.076 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:28 np0005603609 nova_compute[221550]: 2026-01-31 08:20:28.082 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:20:28 np0005603609 nova_compute[221550]: 2026-01-31 08:20:28.118 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Jan 31 03:20:28 np0005603609 nova_compute[221550]: 2026-01-31 08:20:28.135 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:29.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:29.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:30 np0005603609 nova_compute[221550]: 2026-01-31 08:20:30.255 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:30 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:30Z|00599|binding|INFO|Releasing lport 4fadf8e2-21f6-4df7-9cc2-be518280ee18 from this chassis (sb_readonly=0)
Jan 31 03:20:30 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:30Z|00600|binding|INFO|Releasing lport 075aefe0-13df-4a17-ae95-485ece950a10 from this chassis (sb_readonly=0)
Jan 31 03:20:30 np0005603609 nova_compute[221550]: 2026-01-31 08:20:30.683 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:31.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:20:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:31.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:20:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:32Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:41:73:fe 10.100.0.4
Jan 31 03:20:32 np0005603609 nova_compute[221550]: 2026-01-31 08:20:32.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:33 np0005603609 nova_compute[221550]: 2026-01-31 08:20:33.137 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:33.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:20:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:33.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:20:34 np0005603609 nova_compute[221550]: 2026-01-31 08:20:34.041 221554 INFO nova.compute.manager [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Rescuing#033[00m
Jan 31 03:20:34 np0005603609 nova_compute[221550]: 2026-01-31 08:20:34.041 221554 DEBUG oslo_concurrency.lockutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:20:34 np0005603609 nova_compute[221550]: 2026-01-31 08:20:34.042 221554 DEBUG oslo_concurrency.lockutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquired lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:20:34 np0005603609 nova_compute[221550]: 2026-01-31 08:20:34.042 221554 DEBUG nova.network.neutron [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:20:34 np0005603609 nova_compute[221550]: 2026-01-31 08:20:34.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:35 np0005603609 nova_compute[221550]: 2026-01-31 08:20:35.258 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:20:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:35.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:20:35 np0005603609 nova_compute[221550]: 2026-01-31 08:20:35.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:35 np0005603609 nova_compute[221550]: 2026-01-31 08:20:35.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:20:35 np0005603609 nova_compute[221550]: 2026-01-31 08:20:35.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:20:35 np0005603609 nova_compute[221550]: 2026-01-31 08:20:35.689 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:20:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:35.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:37 np0005603609 nova_compute[221550]: 2026-01-31 08:20:37.343 221554 DEBUG nova.network.neutron [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Updating instance_info_cache with network_info: [{"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:20:37 np0005603609 nova_compute[221550]: 2026-01-31 08:20:37.515 221554 DEBUG oslo_concurrency.lockutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Releasing lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:20:37 np0005603609 nova_compute[221550]: 2026-01-31 08:20:37.522 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:20:37 np0005603609 nova_compute[221550]: 2026-01-31 08:20:37.523 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:20:37 np0005603609 nova_compute[221550]: 2026-01-31 08:20:37.523 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:37.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:20:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:37.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:20:38 np0005603609 nova_compute[221550]: 2026-01-31 08:20:38.140 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:38 np0005603609 nova_compute[221550]: 2026-01-31 08:20:38.274 221554 DEBUG nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:20:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:38 np0005603609 nova_compute[221550]: 2026-01-31 08:20:38.392 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:38.392 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:20:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:38.393 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:20:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:20:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:39.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:20:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:39.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.090 221554 DEBUG oslo_concurrency.lockutils [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "ac849bdd-07a0-459f-b207-2a3f239d0272" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.091 221554 DEBUG oslo_concurrency.lockutils [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.092 221554 DEBUG oslo_concurrency.lockutils [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.092 221554 DEBUG oslo_concurrency.lockutils [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.093 221554 DEBUG oslo_concurrency.lockutils [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.094 221554 INFO nova.compute.manager [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Terminating instance#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.096 221554 DEBUG nova.compute.manager [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:20:40 np0005603609 kernel: tapc754b929-2e (unregistering): left promiscuous mode
Jan 31 03:20:40 np0005603609 NetworkManager[49064]: <info>  [1769847640.1669] device (tapc754b929-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:20:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:40Z|00601|binding|INFO|Releasing lport c754b929-2e4d-413c-90fa-92a743e1fc38 from this chassis (sb_readonly=0)
Jan 31 03:20:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:40Z|00602|binding|INFO|Setting lport c754b929-2e4d-413c-90fa-92a743e1fc38 down in Southbound
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.228 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:40Z|00603|binding|INFO|Removing iface tapc754b929-2e ovn-installed in OVS
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.231 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.239 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.242 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:73:fe 10.100.0.4'], port_security=['fa:16:3e:41:73:fe 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ac849bdd-07a0-459f-b207-2a3f239d0272', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b47e290-9853-478f-86cb-c8ea73119a97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b06982960ad4453b8e542cb6330835d', 'neutron:revision_number': '9', 'neutron:security_group_ids': '9238371a-94ed-45d8-9126-02012597fcaa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.249', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46c631ec-3e4c-4c27-ae32-3645d3f29853, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=c754b929-2e4d-413c-90fa-92a743e1fc38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.243 140058 INFO neutron.agent.ovn.metadata.agent [-] Port c754b929-2e4d-413c-90fa-92a743e1fc38 in datapath 2b47e290-9853-478f-86cb-c8ea73119a97 unbound from our chassis#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.244 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b47e290-9853-478f-86cb-c8ea73119a97, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.245 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[639331ce-dbcd-4d11-b626-9e50033498bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.246 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 namespace which is not needed anymore#033[00m
Jan 31 03:20:40 np0005603609 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Jan 31 03:20:40 np0005603609 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000008c.scope: Consumed 12.942s CPU time.
Jan 31 03:20:40 np0005603609 systemd-machined[190912]: Machine qemu-73-instance-0000008c terminated.
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.260 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.318 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.322 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.336 221554 INFO nova.virt.libvirt.driver [-] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Instance destroyed successfully.#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.337 221554 DEBUG nova.objects.instance [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lazy-loading 'resources' on Instance uuid ac849bdd-07a0-459f-b207-2a3f239d0272 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.360 221554 DEBUG nova.virt.libvirt.vif [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:18:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-154682825',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-154682825',id=140,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMWO9k24zLH8F2gtUwSjTGss8rmi2WVpUqIaGSBNrUsu6yIJKWkJX4AL66VZ1mBnILCET2/Mvny3YbuR68kQ+I2GI5dWxVmFSmypAx4n5E/nzIwrdNA7P5kjqoFQFkL9tA==',key_name='tempest-keypair-572349883',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:20:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b06982960ad4453b8e542cb6330835d',ramdisk_id='',reservation_id='r-1oh7rfdp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-332944999',owner_user_name='tempest-AttachVolumeShelveTestJSON-332944999-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:20:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b153d2832404e5b9250422b70ba522d',uuid=ac849bdd-07a0-459f-b207-2a3f239d0272,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.361 221554 DEBUG nova.network.os_vif_util [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converting VIF {"id": "c754b929-2e4d-413c-90fa-92a743e1fc38", "address": "fa:16:3e:41:73:fe", "network": {"id": "2b47e290-9853-478f-86cb-c8ea73119a97", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-340353567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b06982960ad4453b8e542cb6330835d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc754b929-2e", "ovs_interfaceid": "c754b929-2e4d-413c-90fa-92a743e1fc38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.361 221554 DEBUG nova.network.os_vif_util [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:73:fe,bridge_name='br-int',has_traffic_filtering=True,id=c754b929-2e4d-413c-90fa-92a743e1fc38,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc754b929-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.362 221554 DEBUG os_vif [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:73:fe,bridge_name='br-int',has_traffic_filtering=True,id=c754b929-2e4d-413c-90fa-92a743e1fc38,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc754b929-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.364 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.364 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc754b929-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.369 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:20:40 np0005603609 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[279624]: [NOTICE]   (279628) : haproxy version is 2.8.14-c23fe91
Jan 31 03:20:40 np0005603609 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[279624]: [NOTICE]   (279628) : path to executable is /usr/sbin/haproxy
Jan 31 03:20:40 np0005603609 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[279624]: [WARNING]  (279628) : Exiting Master process...
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.373 221554 INFO os_vif [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:73:fe,bridge_name='br-int',has_traffic_filtering=True,id=c754b929-2e4d-413c-90fa-92a743e1fc38,network=Network(2b47e290-9853-478f-86cb-c8ea73119a97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc754b929-2e')#033[00m
Jan 31 03:20:40 np0005603609 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[279624]: [ALERT]    (279628) : Current worker (279630) exited with code 143 (Terminated)
Jan 31 03:20:40 np0005603609 neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97[279624]: [WARNING]  (279628) : All workers exited. Exiting... (0)
Jan 31 03:20:40 np0005603609 systemd[1]: libpod-3d496ba401911c75353dae09bb589ae93b6ede0e22ec8305b00461c0f61fda9b.scope: Deactivated successfully.
Jan 31 03:20:40 np0005603609 podman[279714]: 2026-01-31 08:20:40.385344163 +0000 UTC m=+0.049965300 container died 3d496ba401911c75353dae09bb589ae93b6ede0e22ec8305b00461c0f61fda9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:20:40 np0005603609 systemd[1]: var-lib-containers-storage-overlay-9108a03c1777ba336e0b96628a229cb3b1681804132ebec7dcb02e3d27ac47d0-merged.mount: Deactivated successfully.
Jan 31 03:20:40 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3d496ba401911c75353dae09bb589ae93b6ede0e22ec8305b00461c0f61fda9b-userdata-shm.mount: Deactivated successfully.
Jan 31 03:20:40 np0005603609 podman[279714]: 2026-01-31 08:20:40.426209426 +0000 UTC m=+0.090830573 container cleanup 3d496ba401911c75353dae09bb589ae93b6ede0e22ec8305b00461c0f61fda9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:20:40 np0005603609 systemd[1]: libpod-conmon-3d496ba401911c75353dae09bb589ae93b6ede0e22ec8305b00461c0f61fda9b.scope: Deactivated successfully.
Jan 31 03:20:40 np0005603609 podman[279767]: 2026-01-31 08:20:40.494184957 +0000 UTC m=+0.052754567 container remove 3d496ba401911c75353dae09bb589ae93b6ede0e22ec8305b00461c0f61fda9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.498 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8938cb76-26dd-44b2-8e74-45c6d551f6fe]: (4, ('Sat Jan 31 08:20:40 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 (3d496ba401911c75353dae09bb589ae93b6ede0e22ec8305b00461c0f61fda9b)\n3d496ba401911c75353dae09bb589ae93b6ede0e22ec8305b00461c0f61fda9b\nSat Jan 31 08:20:40 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 (3d496ba401911c75353dae09bb589ae93b6ede0e22ec8305b00461c0f61fda9b)\n3d496ba401911c75353dae09bb589ae93b6ede0e22ec8305b00461c0f61fda9b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.500 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1a731565-05cd-46b1-83a4-0d05f57843aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.501 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b47e290-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.502 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 kernel: tap2b47e290-90: left promiscuous mode
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.511 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.512 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0004dde4-c0d7-41f0-92c0-c15ec9c62a42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.528 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[90fe5e62-b40c-4e85-988f-b303add70892]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.529 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[64f0c36e-c1fc-4fff-9224-6b947ec4d9a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.545 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[97c639a2-9486-4617-b5c7-4d32b343a464]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785912, 'reachable_time': 21638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279782, 'error': None, 'target': 'ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.547 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b47e290-9853-478f-86cb-c8ea73119a97 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.547 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[efd00172-d112-48cc-b301-a85aacdea16b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 systemd[1]: run-netns-ovnmeta\x2d2b47e290\x2d9853\x2d478f\x2d86cb\x2dc8ea73119a97.mount: Deactivated successfully.
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.575 221554 DEBUG nova.compute.manager [req-e4e229cd-ac5b-46ca-9980-125a0bc09a64 req-f6805ad3-235d-4404-b2c7-da91166f80d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received event network-vif-unplugged-c754b929-2e4d-413c-90fa-92a743e1fc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.576 221554 DEBUG oslo_concurrency.lockutils [req-e4e229cd-ac5b-46ca-9980-125a0bc09a64 req-f6805ad3-235d-4404-b2c7-da91166f80d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.576 221554 DEBUG oslo_concurrency.lockutils [req-e4e229cd-ac5b-46ca-9980-125a0bc09a64 req-f6805ad3-235d-4404-b2c7-da91166f80d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.576 221554 DEBUG oslo_concurrency.lockutils [req-e4e229cd-ac5b-46ca-9980-125a0bc09a64 req-f6805ad3-235d-4404-b2c7-da91166f80d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.576 221554 DEBUG nova.compute.manager [req-e4e229cd-ac5b-46ca-9980-125a0bc09a64 req-f6805ad3-235d-4404-b2c7-da91166f80d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] No waiting events found dispatching network-vif-unplugged-c754b929-2e4d-413c-90fa-92a743e1fc38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.576 221554 DEBUG nova.compute.manager [req-e4e229cd-ac5b-46ca-9980-125a0bc09a64 req-f6805ad3-235d-4404-b2c7-da91166f80d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received event network-vif-unplugged-c754b929-2e4d-413c-90fa-92a743e1fc38 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:20:40 np0005603609 kernel: tap1f25515c-3e (unregistering): left promiscuous mode
Jan 31 03:20:40 np0005603609 NetworkManager[49064]: <info>  [1769847640.6298] device (tap1f25515c-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.629 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:40Z|00604|binding|INFO|Releasing lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a from this chassis (sb_readonly=0)
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.636 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:40Z|00605|binding|INFO|Setting lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a down in Southbound
Jan 31 03:20:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:40Z|00606|binding|INFO|Removing iface tap1f25515c-3e ovn-installed in OVS
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.638 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.641 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.648 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:68:cd 10.100.0.8'], port_security=['fa:16:3e:f6:68:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '38deb482-bd85-4fdf-b2da-2725bffd8f43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a8345fd-717b-4084-912f-0c496810f08f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=1f25515c-3e82-445e-baad-bc0e3a0d6f1a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.650 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 1f25515c-3e82-445e-baad-bc0e3a0d6f1a in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 unbound from our chassis#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.651 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e03fc320-c87d-42d2-a772-ec94aeb05209, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.652 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[68c62767-f9a7-4d55-96db-6a54dd1293b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.652 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 namespace which is not needed anymore#033[00m
Jan 31 03:20:40 np0005603609 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Jan 31 03:20:40 np0005603609 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000008f.scope: Consumed 15.562s CPU time.
Jan 31 03:20:40 np0005603609 systemd-machined[190912]: Machine qemu-72-instance-0000008f terminated.
Jan 31 03:20:40 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[278983]: [NOTICE]   (278987) : haproxy version is 2.8.14-c23fe91
Jan 31 03:20:40 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[278983]: [NOTICE]   (278987) : path to executable is /usr/sbin/haproxy
Jan 31 03:20:40 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[278983]: [WARNING]  (278987) : Exiting Master process...
Jan 31 03:20:40 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[278983]: [ALERT]    (278987) : Current worker (278989) exited with code 143 (Terminated)
Jan 31 03:20:40 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[278983]: [WARNING]  (278987) : All workers exited. Exiting... (0)
Jan 31 03:20:40 np0005603609 systemd[1]: libpod-d5bc51e025db3ed9425524a1ed045883080819718c115b5bbcb186f35eb8f80a.scope: Deactivated successfully.
Jan 31 03:20:40 np0005603609 podman[279804]: 2026-01-31 08:20:40.755291148 +0000 UTC m=+0.040829191 container died d5bc51e025db3ed9425524a1ed045883080819718c115b5bbcb186f35eb8f80a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:20:40 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d5bc51e025db3ed9425524a1ed045883080819718c115b5bbcb186f35eb8f80a-userdata-shm.mount: Deactivated successfully.
Jan 31 03:20:40 np0005603609 systemd[1]: var-lib-containers-storage-overlay-94013593dfbf864a70e4a37700d8264e73b65e95bd55aa014d10c34f9454850e-merged.mount: Deactivated successfully.
Jan 31 03:20:40 np0005603609 podman[279804]: 2026-01-31 08:20:40.810567095 +0000 UTC m=+0.096105138 container cleanup d5bc51e025db3ed9425524a1ed045883080819718c115b5bbcb186f35eb8f80a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:20:40 np0005603609 systemd[1]: libpod-conmon-d5bc51e025db3ed9425524a1ed045883080819718c115b5bbcb186f35eb8f80a.scope: Deactivated successfully.
Jan 31 03:20:40 np0005603609 kernel: tap1f25515c-3e: entered promiscuous mode
Jan 31 03:20:40 np0005603609 kernel: tap1f25515c-3e (unregistering): left promiscuous mode
Jan 31 03:20:40 np0005603609 NetworkManager[49064]: <info>  [1769847640.8565] manager: (tap1f25515c-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/285)
Jan 31 03:20:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:40Z|00607|binding|INFO|Claiming lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a for this chassis.
Jan 31 03:20:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:40Z|00608|binding|INFO|1f25515c-3e82-445e-baad-bc0e3a0d6f1a: Claiming fa:16:3e:f6:68:cd 10.100.0.8
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.860 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 podman[279836]: 2026-01-31 08:20:40.873273581 +0000 UTC m=+0.046133229 container remove d5bc51e025db3ed9425524a1ed045883080819718c115b5bbcb186f35eb8f80a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:20:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:40Z|00609|binding|INFO|Setting lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a ovn-installed in OVS
Jan 31 03:20:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:40Z|00610|if_status|INFO|Dropped 4 log messages in last 548 seconds (most recently, 548 seconds ago) due to excessive rate
Jan 31 03:20:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:40Z|00611|if_status|INFO|Not setting lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a down as sb is readonly
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.876 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.877 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[09c18203-b145-41c6-830c-4431b6badbcb]: (4, ('Sat Jan 31 08:20:40 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 (d5bc51e025db3ed9425524a1ed045883080819718c115b5bbcb186f35eb8f80a)\nd5bc51e025db3ed9425524a1ed045883080819718c115b5bbcb186f35eb8f80a\nSat Jan 31 08:20:40 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 (d5bc51e025db3ed9425524a1ed045883080819718c115b5bbcb186f35eb8f80a)\nd5bc51e025db3ed9425524a1ed045883080819718c115b5bbcb186f35eb8f80a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.879 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[61190d2c-6264-4a37-9bb1-90c5f7cbd9a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.879 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03fc320-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.881 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:40Z|00612|binding|INFO|Releasing lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a from this chassis (sb_readonly=0)
Jan 31 03:20:40 np0005603609 kernel: tape03fc320-c0: left promiscuous mode
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.883 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:68:cd 10.100.0.8'], port_security=['fa:16:3e:f6:68:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '38deb482-bd85-4fdf-b2da-2725bffd8f43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a8345fd-717b-4084-912f-0c496810f08f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=1f25515c-3e82-445e-baad-bc0e3a0d6f1a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.890 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:68:cd 10.100.0.8'], port_security=['fa:16:3e:f6:68:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '38deb482-bd85-4fdf-b2da-2725bffd8f43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a8345fd-717b-4084-912f-0c496810f08f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=1f25515c-3e82-445e-baad-bc0e3a0d6f1a) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.899 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f218a5-a5d4-4c7d-b0ce-b8c77393e398]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.899 221554 INFO nova.virt.libvirt.driver [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Deleting instance files /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272_del#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.899 221554 INFO nova.virt.libvirt.driver [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Deletion of /var/lib/nova/instances/ac849bdd-07a0-459f-b207-2a3f239d0272_del complete#033[00m
Jan 31 03:20:40 np0005603609 nova_compute[221550]: 2026-01-31 08:20:40.901 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.908 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[dff8ffce-d3d9-4cb0-af8f-893e12ec905d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.909 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[494f24bd-b387-4ee5-b10d-462bb38ec461]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.919 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c71842-f1f5-4f4c-bbdc-5136e9ca56d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782720, 'reachable_time': 29888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279864, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.920 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.920 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[ae57ff91-9731-4504-a354-202497d8afbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.921 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 1f25515c-3e82-445e-baad-bc0e3a0d6f1a in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 unbound from our chassis#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.922 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e03fc320-c87d-42d2-a772-ec94aeb05209, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.923 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c8b1a155-cc49-49af-9779-2d6c1f778a01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.923 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 1f25515c-3e82-445e-baad-bc0e3a0d6f1a in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 unbound from our chassis#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.924 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e03fc320-c87d-42d2-a772-ec94aeb05209, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:20:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:40.925 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2510ff53-21bc-47dd-bdd9-f0354ff99a20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.011 221554 INFO nova.compute.manager [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Took 0.91 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.012 221554 DEBUG oslo.service.loopingcall [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.012 221554 DEBUG nova.compute.manager [-] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.012 221554 DEBUG nova.network.neutron [-] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.056 221554 DEBUG nova.compute.manager [req-860228fb-8232-459d-89de-04c753ea34e1 req-5318a59f-40f1-4dab-b0d7-aa12d792cef4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received event network-vif-unplugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.057 221554 DEBUG oslo_concurrency.lockutils [req-860228fb-8232-459d-89de-04c753ea34e1 req-5318a59f-40f1-4dab-b0d7-aa12d792cef4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.058 221554 DEBUG oslo_concurrency.lockutils [req-860228fb-8232-459d-89de-04c753ea34e1 req-5318a59f-40f1-4dab-b0d7-aa12d792cef4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.058 221554 DEBUG oslo_concurrency.lockutils [req-860228fb-8232-459d-89de-04c753ea34e1 req-5318a59f-40f1-4dab-b0d7-aa12d792cef4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.058 221554 DEBUG nova.compute.manager [req-860228fb-8232-459d-89de-04c753ea34e1 req-5318a59f-40f1-4dab-b0d7-aa12d792cef4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] No waiting events found dispatching network-vif-unplugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.059 221554 WARNING nova.compute.manager [req-860228fb-8232-459d-89de-04c753ea34e1 req-5318a59f-40f1-4dab-b0d7-aa12d792cef4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received unexpected event network-vif-unplugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.275 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Updating instance_info_cache with network_info: [{"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:20:41 np0005603609 systemd[1]: run-netns-ovnmeta\x2de03fc320\x2dc87d\x2d42d2\x2da772\x2dec94aeb05209.mount: Deactivated successfully.
Jan 31 03:20:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:41.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.683 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.683 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.685 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.685 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.686 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.686 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.688 221554 INFO nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.695 221554 INFO nova.virt.libvirt.driver [-] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Instance destroyed successfully.#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.696 221554 DEBUG nova.objects.instance [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.717 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.717 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.718 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.718 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.719 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.746 221554 INFO nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Attempting rescue#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.747 221554 DEBUG nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.751 221554 DEBUG nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.752 221554 INFO nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Creating image(s)#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.780 221554 DEBUG nova.storage.rbd_utils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.785 221554 DEBUG nova.objects.instance [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:41.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.835 221554 DEBUG nova.storage.rbd_utils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.873 221554 DEBUG nova.storage.rbd_utils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.878 221554 DEBUG oslo_concurrency.processutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.941 221554 DEBUG oslo_concurrency.processutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.942 221554 DEBUG oslo_concurrency.lockutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.943 221554 DEBUG oslo_concurrency.lockutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.944 221554 DEBUG oslo_concurrency.lockutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.980 221554 DEBUG nova.storage.rbd_utils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:41 np0005603609 nova_compute[221550]: 2026-01-31 08:20:41.986 221554 DEBUG oslo_concurrency.processutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:20:42 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/342037608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.192 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.303 221554 DEBUG oslo_concurrency.processutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.304 221554 DEBUG nova.objects.instance [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'migration_context' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:42.397 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.631 221554 DEBUG nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.632 221554 DEBUG nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Start _get_guest_xml network_info=[{"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "vif_mac": "fa:16:3e:f6:68:cd"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '7c23949f-bba8-4466-bb79-caf568852d38', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.633 221554 DEBUG nova.objects.instance [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'resources' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.670 221554 WARNING nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.674 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.675 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.675 221554 DEBUG nova.virt.libvirt.host [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.676 221554 DEBUG nova.virt.libvirt.host [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.682 221554 DEBUG nova.virt.libvirt.host [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.683 221554 DEBUG nova.virt.libvirt.host [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.684 221554 DEBUG nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.685 221554 DEBUG nova.virt.hardware [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.686 221554 DEBUG nova.virt.hardware [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.686 221554 DEBUG nova.virt.hardware [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.687 221554 DEBUG nova.virt.hardware [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.687 221554 DEBUG nova.virt.hardware [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.687 221554 DEBUG nova.virt.hardware [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.688 221554 DEBUG nova.virt.hardware [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.688 221554 DEBUG nova.virt.hardware [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.689 221554 DEBUG nova.virt.hardware [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.689 221554 DEBUG nova.virt.hardware [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.690 221554 DEBUG nova.virt.hardware [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.690 221554 DEBUG nova.objects.instance [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.720 221554 DEBUG oslo_concurrency.processutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.793 221554 DEBUG nova.compute.manager [req-6bf04ba2-266b-42f1-96e9-bc66da3ec22d req-a815ae19-ce5d-482c-8972-2fb1dae12187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received event network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.794 221554 DEBUG oslo_concurrency.lockutils [req-6bf04ba2-266b-42f1-96e9-bc66da3ec22d req-a815ae19-ce5d-482c-8972-2fb1dae12187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.794 221554 DEBUG oslo_concurrency.lockutils [req-6bf04ba2-266b-42f1-96e9-bc66da3ec22d req-a815ae19-ce5d-482c-8972-2fb1dae12187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.795 221554 DEBUG oslo_concurrency.lockutils [req-6bf04ba2-266b-42f1-96e9-bc66da3ec22d req-a815ae19-ce5d-482c-8972-2fb1dae12187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.795 221554 DEBUG nova.compute.manager [req-6bf04ba2-266b-42f1-96e9-bc66da3ec22d req-a815ae19-ce5d-482c-8972-2fb1dae12187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] No waiting events found dispatching network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.796 221554 WARNING nova.compute.manager [req-6bf04ba2-266b-42f1-96e9-bc66da3ec22d req-a815ae19-ce5d-482c-8972-2fb1dae12187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received unexpected event network-vif-plugged-c754b929-2e4d-413c-90fa-92a743e1fc38 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.872 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.873 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4282MB free_disk=20.79541015625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.873 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.873 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.983 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 38deb482-bd85-4fdf-b2da-2725bffd8f43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.984 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance ac849bdd-07a0-459f-b207-2a3f239d0272 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.984 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:20:42 np0005603609 nova_compute[221550]: 2026-01-31 08:20:42.984 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.070 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.105 221554 DEBUG nova.network.neutron [-] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.135 221554 INFO nova.compute.manager [-] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Took 2.12 seconds to deallocate network for instance.#033[00m
Jan 31 03:20:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:20:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/481649911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.202 221554 DEBUG nova.compute.manager [req-f10d875b-4f6d-40df-afde-124a658ea289 req-ae171cad-26b3-4672-ac1a-c17121911f11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.202 221554 DEBUG oslo_concurrency.lockutils [req-f10d875b-4f6d-40df-afde-124a658ea289 req-ae171cad-26b3-4672-ac1a-c17121911f11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.203 221554 DEBUG oslo_concurrency.lockutils [req-f10d875b-4f6d-40df-afde-124a658ea289 req-ae171cad-26b3-4672-ac1a-c17121911f11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.203 221554 DEBUG oslo_concurrency.lockutils [req-f10d875b-4f6d-40df-afde-124a658ea289 req-ae171cad-26b3-4672-ac1a-c17121911f11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.203 221554 DEBUG nova.compute.manager [req-f10d875b-4f6d-40df-afde-124a658ea289 req-ae171cad-26b3-4672-ac1a-c17121911f11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] No waiting events found dispatching network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.203 221554 WARNING nova.compute.manager [req-f10d875b-4f6d-40df-afde-124a658ea289 req-ae171cad-26b3-4672-ac1a-c17121911f11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received unexpected event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.205 221554 DEBUG oslo_concurrency.processutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.206 221554 DEBUG oslo_concurrency.processutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.227 221554 DEBUG oslo_concurrency.lockutils [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:20:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2769729325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.481 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.487 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.517 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.552 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.553 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.553 221554 DEBUG oslo_concurrency.lockutils [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.590 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:20:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1405474155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.614 221554 DEBUG oslo_concurrency.processutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.616 221554 DEBUG oslo_concurrency.processutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:43 np0005603609 nova_compute[221550]: 2026-01-31 08:20:43.676 221554 DEBUG oslo_concurrency.processutils [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:43.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:43.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:20:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1314591952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.019 221554 DEBUG oslo_concurrency.processutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.021 221554 DEBUG nova.virt.libvirt.vif [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-549789768',display_name='tempest-ServerRescueNegativeTestJSON-server-549789768',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-549789768',id=143,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:19:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4849ff916e1b4e2aa162faaf2c0717a2',ramdisk_id='',reservation_id='r-7j661ndk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1784809431',owner_user_name='tempest-ServerRescueNegativeTestJSON-1784809431-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:20:28Z,user_data=None,user_id='6788b0883cb348719d1222b1c9483be2',uuid=38deb482-bd85-4fdf-b2da-2725bffd8f43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "vif_mac": "fa:16:3e:f6:68:cd"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.022 221554 DEBUG nova.network.os_vif_util [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converting VIF {"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "vif_mac": "fa:16:3e:f6:68:cd"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.024 221554 DEBUG nova.network.os_vif_util [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:68:cd,bridge_name='br-int',has_traffic_filtering=True,id=1f25515c-3e82-445e-baad-bc0e3a0d6f1a,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f25515c-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.026 221554 DEBUG nova.objects.instance [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.046 221554 DEBUG nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  <uuid>38deb482-bd85-4fdf-b2da-2725bffd8f43</uuid>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  <name>instance-0000008f</name>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-549789768</nova:name>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:20:42</nova:creationTime>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <nova:user uuid="6788b0883cb348719d1222b1c9483be2">tempest-ServerRescueNegativeTestJSON-1784809431-project-member</nova:user>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <nova:project uuid="4849ff916e1b4e2aa162faaf2c0717a2">tempest-ServerRescueNegativeTestJSON-1784809431</nova:project>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <nova:port uuid="1f25515c-3e82-445e-baad-bc0e3a0d6f1a">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <entry name="serial">38deb482-bd85-4fdf-b2da-2725bffd8f43</entry>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <entry name="uuid">38deb482-bd85-4fdf-b2da-2725bffd8f43</entry>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.rescue">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/38deb482-bd85-4fdf-b2da-2725bffd8f43_disk">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <target dev="vdb" bus="virtio"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.config.rescue">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:f6:68:cd"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <target dev="tap1f25515c-3e"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43/console.log" append="off"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:20:44 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:20:44 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:20:44 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:20:44 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.059 221554 INFO nova.virt.libvirt.driver [-] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Instance destroyed successfully.#033[00m
Jan 31 03:20:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:20:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/983807351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.106 221554 DEBUG oslo_concurrency.processutils [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.112 221554 DEBUG nova.compute.provider_tree [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.118 221554 DEBUG nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.118 221554 DEBUG nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.118 221554 DEBUG nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.119 221554 DEBUG nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No VIF found with MAC fa:16:3e:f6:68:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.119 221554 INFO nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Using config drive#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.143 221554 DEBUG nova.storage.rbd_utils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.150 221554 DEBUG nova.scheduler.client.report [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.183 221554 DEBUG nova.objects.instance [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.209 221554 DEBUG oslo_concurrency.lockutils [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.215 221554 DEBUG nova.objects.instance [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'keypairs' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.262 221554 INFO nova.scheduler.client.report [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Deleted allocations for instance ac849bdd-07a0-459f-b207-2a3f239d0272#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.358 221554 DEBUG oslo_concurrency.lockutils [None req-d78d14d4-836e-4793-91de-28f99540ca4e 3b153d2832404e5b9250422b70ba522d 3b06982960ad4453b8e542cb6330835d - - default default] Lock "ac849bdd-07a0-459f-b207-2a3f239d0272" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.527 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.528 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.793 221554 INFO nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Creating config drive at /var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43/disk.config.rescue#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.799 221554 DEBUG oslo_concurrency.processutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_rr769lw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.923 221554 DEBUG oslo_concurrency.processutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_rr769lw" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.950 221554 DEBUG nova.storage.rbd_utils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:44 np0005603609 nova_compute[221550]: 2026-01-31 08:20:44.953 221554 DEBUG oslo_concurrency.processutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43/disk.config.rescue 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:45 np0005603609 nova_compute[221550]: 2026-01-31 08:20:45.262 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:45 np0005603609 nova_compute[221550]: 2026-01-31 08:20:45.354 221554 DEBUG oslo_concurrency.processutils [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43/disk.config.rescue 38deb482-bd85-4fdf-b2da-2725bffd8f43_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:45 np0005603609 nova_compute[221550]: 2026-01-31 08:20:45.355 221554 INFO nova.virt.libvirt.driver [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Deleting local config drive /var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43/disk.config.rescue because it was imported into RBD.#033[00m
Jan 31 03:20:45 np0005603609 nova_compute[221550]: 2026-01-31 08:20:45.365 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:45 np0005603609 kernel: tap1f25515c-3e: entered promiscuous mode
Jan 31 03:20:45 np0005603609 NetworkManager[49064]: <info>  [1769847645.4032] manager: (tap1f25515c-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/286)
Jan 31 03:20:45 np0005603609 nova_compute[221550]: 2026-01-31 08:20:45.406 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:45Z|00613|binding|INFO|Claiming lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a for this chassis.
Jan 31 03:20:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:45Z|00614|binding|INFO|1f25515c-3e82-445e-baad-bc0e3a0d6f1a: Claiming fa:16:3e:f6:68:cd 10.100.0.8
Jan 31 03:20:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:45Z|00615|binding|INFO|Removing lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a ovn-installed in OVS
Jan 31 03:20:45 np0005603609 nova_compute[221550]: 2026-01-31 08:20:45.408 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:45 np0005603609 nova_compute[221550]: 2026-01-31 08:20:45.411 221554 DEBUG nova.compute.manager [req-e5a2e43b-c13c-4008-be63-6b94671f64c8 req-50691029-71bb-41e7-a383-e1ae5cc9743f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Received event network-vif-deleted-c754b929-2e4d-413c-90fa-92a743e1fc38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:45Z|00616|binding|INFO|Setting lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a ovn-installed in OVS
Jan 31 03:20:45 np0005603609 nova_compute[221550]: 2026-01-31 08:20:45.414 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:45 np0005603609 nova_compute[221550]: 2026-01-31 08:20:45.419 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:45Z|00617|binding|INFO|Setting lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a up in Southbound
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.421 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:68:cd 10.100.0.8'], port_security=['fa:16:3e:f6:68:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '38deb482-bd85-4fdf-b2da-2725bffd8f43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0a8345fd-717b-4084-912f-0c496810f08f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=1f25515c-3e82-445e-baad-bc0e3a0d6f1a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.423 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 1f25515c-3e82-445e-baad-bc0e3a0d6f1a in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 bound to our chassis#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.424 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e03fc320-c87d-42d2-a772-ec94aeb05209#033[00m
Jan 31 03:20:45 np0005603609 systemd-udevd[280166]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:20:45 np0005603609 systemd-machined[190912]: New machine qemu-74-instance-0000008f.
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.432 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7118b317-6cbd-4720-b389-ab8b74e39a6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.432 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape03fc320-c1 in ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.434 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape03fc320-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.434 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[587d1792-7b5f-404d-94b4-3be4208a010b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.435 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[29322ce8-3d9d-469b-b656-00509cd234af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:45 np0005603609 systemd[1]: Started Virtual Machine qemu-74-instance-0000008f.
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.442 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[23eab83b-1441-4e9f-9ee1-3a5de23e49fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:45 np0005603609 NetworkManager[49064]: <info>  [1769847645.4437] device (tap1f25515c-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:20:45 np0005603609 NetworkManager[49064]: <info>  [1769847645.4444] device (tap1f25515c-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.450 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fbcd031f-2440-42fb-9f56-c504b708b54d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.471 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[750dfa81-ea4c-40ac-8fc4-46199d2c2c60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:45 np0005603609 NetworkManager[49064]: <info>  [1769847645.4763] manager: (tape03fc320-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/287)
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.476 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b42989a4-926e-49a4-8665-76fd9db78dec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.504 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb12e48-241d-4813-b80d-ffb5367dbfd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.506 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[8854696f-e2de-49aa-a996-21bb90220033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:45 np0005603609 NetworkManager[49064]: <info>  [1769847645.5210] device (tape03fc320-c0): carrier: link connected
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.525 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d66260df-9cba-453a-bf5e-956c2d5aad3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.535 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1f45c9d2-f466-4213-9964-f1055efaf9cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape03fc320-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:22:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788522, 'reachable_time': 15587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280198, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.548 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[21b57527-02d3-45d7-b81c-2ba52045df3b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:2269'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 788522, 'tstamp': 788522}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280199, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.560 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5d930827-5b58-4e32-9f23-b2731fc38980]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape03fc320-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:22:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788522, 'reachable_time': 15587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280200, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.582 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef24112-0192-474a-bd1c-50051018e9bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.619 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e83bb5-92c5-4d1a-939c-c93075bb83ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.620 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03fc320-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.620 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.621 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape03fc320-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:45 np0005603609 NetworkManager[49064]: <info>  [1769847645.6233] manager: (tape03fc320-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Jan 31 03:20:45 np0005603609 kernel: tape03fc320-c0: entered promiscuous mode
Jan 31 03:20:45 np0005603609 nova_compute[221550]: 2026-01-31 08:20:45.624 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.626 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape03fc320-c0, col_values=(('external_ids', {'iface-id': '075aefe0-13df-4a17-ae95-485ece950a10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:45Z|00618|binding|INFO|Releasing lport 075aefe0-13df-4a17-ae95-485ece950a10 from this chassis (sb_readonly=0)
Jan 31 03:20:45 np0005603609 nova_compute[221550]: 2026-01-31 08:20:45.628 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.629 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e03fc320-c87d-42d2-a772-ec94aeb05209.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e03fc320-c87d-42d2-a772-ec94aeb05209.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.629 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cee5cc51-76a6-4d19-a678-85992f7b83c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.630 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/e03fc320-c87d-42d2-a772-ec94aeb05209.pid.haproxy
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:20:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:45.631 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'env', 'PROCESS_TAG=haproxy-e03fc320-c87d-42d2-a772-ec94aeb05209', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e03fc320-c87d-42d2-a772-ec94aeb05209.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:20:45 np0005603609 nova_compute[221550]: 2026-01-31 08:20:45.636 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:45.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:20:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:45.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:20:45 np0005603609 podman[280266]: 2026-01-31 08:20:45.996607721 +0000 UTC m=+0.084845488 container create a9298d76b21152f3b9c760abd1b8d14656d70a12ca7a5eb0774679fc2e972f8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:20:46 np0005603609 podman[280266]: 2026-01-31 08:20:45.945456713 +0000 UTC m=+0.033694560 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:20:46 np0005603609 systemd[1]: Started libpod-conmon-a9298d76b21152f3b9c760abd1b8d14656d70a12ca7a5eb0774679fc2e972f8d.scope.
Jan 31 03:20:46 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:20:46 np0005603609 nova_compute[221550]: 2026-01-31 08:20:46.068 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 38deb482-bd85-4fdf-b2da-2725bffd8f43 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:20:46 np0005603609 nova_compute[221550]: 2026-01-31 08:20:46.069 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847646.068181, 38deb482-bd85-4fdf-b2da-2725bffd8f43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:46 np0005603609 nova_compute[221550]: 2026-01-31 08:20:46.069 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:20:46 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf705f28a8d9553715c7950c339bc4c9a842523da0658cfae3edf001f44bef3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:46 np0005603609 nova_compute[221550]: 2026-01-31 08:20:46.074 221554 DEBUG nova.compute.manager [None req-d7fcd702-a735-425f-bde7-8493f75f64bb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:46 np0005603609 podman[280266]: 2026-01-31 08:20:46.0848318 +0000 UTC m=+0.173069577 container init a9298d76b21152f3b9c760abd1b8d14656d70a12ca7a5eb0774679fc2e972f8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:20:46 np0005603609 podman[280266]: 2026-01-31 08:20:46.0898397 +0000 UTC m=+0.178077457 container start a9298d76b21152f3b9c760abd1b8d14656d70a12ca7a5eb0774679fc2e972f8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:20:46 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[280305]: [NOTICE]   (280309) : New worker (280311) forked
Jan 31 03:20:46 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[280305]: [NOTICE]   (280309) : Loading success.
Jan 31 03:20:46 np0005603609 nova_compute[221550]: 2026-01-31 08:20:46.185 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:46 np0005603609 nova_compute[221550]: 2026-01-31 08:20:46.188 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:20:46 np0005603609 nova_compute[221550]: 2026-01-31 08:20:46.261 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Jan 31 03:20:46 np0005603609 nova_compute[221550]: 2026-01-31 08:20:46.262 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847646.0751228, 38deb482-bd85-4fdf-b2da-2725bffd8f43 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:46 np0005603609 nova_compute[221550]: 2026-01-31 08:20:46.262 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] VM Started (Lifecycle Event)#033[00m
Jan 31 03:20:46 np0005603609 nova_compute[221550]: 2026-01-31 08:20:46.299 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:46 np0005603609 nova_compute[221550]: 2026-01-31 08:20:46.304 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:20:47 np0005603609 nova_compute[221550]: 2026-01-31 08:20:47.346 221554 INFO nova.compute.manager [None req-2ddbbdf5-eeb5-4f3a-90f6-3f2a8e339c4b 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Unrescuing#033[00m
Jan 31 03:20:47 np0005603609 nova_compute[221550]: 2026-01-31 08:20:47.347 221554 DEBUG oslo_concurrency.lockutils [None req-2ddbbdf5-eeb5-4f3a-90f6-3f2a8e339c4b 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:20:47 np0005603609 nova_compute[221550]: 2026-01-31 08:20:47.347 221554 DEBUG oslo_concurrency.lockutils [None req-2ddbbdf5-eeb5-4f3a-90f6-3f2a8e339c4b 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquired lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:20:47 np0005603609 nova_compute[221550]: 2026-01-31 08:20:47.347 221554 DEBUG nova.network.neutron [None req-2ddbbdf5-eeb5-4f3a-90f6-3f2a8e339c4b 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:20:47 np0005603609 nova_compute[221550]: 2026-01-31 08:20:47.628 221554 DEBUG nova.compute.manager [req-b8f0d5fe-bd29-4136-af42-0749484a3cda req-4a57f4ce-1498-4804-a169-3ef4309ebc62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:47 np0005603609 nova_compute[221550]: 2026-01-31 08:20:47.629 221554 DEBUG oslo_concurrency.lockutils [req-b8f0d5fe-bd29-4136-af42-0749484a3cda req-4a57f4ce-1498-4804-a169-3ef4309ebc62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:47 np0005603609 nova_compute[221550]: 2026-01-31 08:20:47.629 221554 DEBUG oslo_concurrency.lockutils [req-b8f0d5fe-bd29-4136-af42-0749484a3cda req-4a57f4ce-1498-4804-a169-3ef4309ebc62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:47 np0005603609 nova_compute[221550]: 2026-01-31 08:20:47.630 221554 DEBUG oslo_concurrency.lockutils [req-b8f0d5fe-bd29-4136-af42-0749484a3cda req-4a57f4ce-1498-4804-a169-3ef4309ebc62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:47 np0005603609 nova_compute[221550]: 2026-01-31 08:20:47.630 221554 DEBUG nova.compute.manager [req-b8f0d5fe-bd29-4136-af42-0749484a3cda req-4a57f4ce-1498-4804-a169-3ef4309ebc62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] No waiting events found dispatching network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:20:47 np0005603609 nova_compute[221550]: 2026-01-31 08:20:47.631 221554 WARNING nova.compute.manager [req-b8f0d5fe-bd29-4136-af42-0749484a3cda req-4a57f4ce-1498-4804-a169-3ef4309ebc62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received unexpected event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 31 03:20:47 np0005603609 nova_compute[221550]: 2026-01-31 08:20:47.631 221554 DEBUG nova.compute.manager [req-b8f0d5fe-bd29-4136-af42-0749484a3cda req-4a57f4ce-1498-4804-a169-3ef4309ebc62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:47 np0005603609 nova_compute[221550]: 2026-01-31 08:20:47.632 221554 DEBUG oslo_concurrency.lockutils [req-b8f0d5fe-bd29-4136-af42-0749484a3cda req-4a57f4ce-1498-4804-a169-3ef4309ebc62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:47 np0005603609 nova_compute[221550]: 2026-01-31 08:20:47.632 221554 DEBUG oslo_concurrency.lockutils [req-b8f0d5fe-bd29-4136-af42-0749484a3cda req-4a57f4ce-1498-4804-a169-3ef4309ebc62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:47 np0005603609 nova_compute[221550]: 2026-01-31 08:20:47.632 221554 DEBUG oslo_concurrency.lockutils [req-b8f0d5fe-bd29-4136-af42-0749484a3cda req-4a57f4ce-1498-4804-a169-3ef4309ebc62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:47 np0005603609 nova_compute[221550]: 2026-01-31 08:20:47.633 221554 DEBUG nova.compute.manager [req-b8f0d5fe-bd29-4136-af42-0749484a3cda req-4a57f4ce-1498-4804-a169-3ef4309ebc62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] No waiting events found dispatching network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:20:47 np0005603609 nova_compute[221550]: 2026-01-31 08:20:47.633 221554 WARNING nova.compute.manager [req-b8f0d5fe-bd29-4136-af42-0749484a3cda req-4a57f4ce-1498-4804-a169-3ef4309ebc62 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received unexpected event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 31 03:20:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:47.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:47.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:20:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:20:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:20:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:20:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:20:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:20:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:20:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:20:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.406 221554 DEBUG nova.network.neutron [None req-2ddbbdf5-eeb5-4f3a-90f6-3f2a8e339c4b 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Updating instance_info_cache with network_info: [{"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.444 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.453 221554 DEBUG oslo_concurrency.lockutils [None req-2ddbbdf5-eeb5-4f3a-90f6-3f2a8e339c4b 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Releasing lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.454 221554 DEBUG nova.objects.instance [None req-2ddbbdf5-eeb5-4f3a-90f6-3f2a8e339c4b 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'flavor' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:49 np0005603609 kernel: tap1f25515c-3e (unregistering): left promiscuous mode
Jan 31 03:20:49 np0005603609 NetworkManager[49064]: <info>  [1769847649.5496] device (tap1f25515c-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:20:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:49Z|00619|binding|INFO|Releasing lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a from this chassis (sb_readonly=0)
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.561 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:49Z|00620|binding|INFO|Setting lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a down in Southbound
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.562 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:49Z|00621|binding|INFO|Removing iface tap1f25515c-3e ovn-installed in OVS
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.564 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.573 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:49.583 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:68:cd 10.100.0.8'], port_security=['fa:16:3e:f6:68:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '38deb482-bd85-4fdf-b2da-2725bffd8f43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0a8345fd-717b-4084-912f-0c496810f08f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=1f25515c-3e82-445e-baad-bc0e3a0d6f1a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:20:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:49.585 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 1f25515c-3e82-445e-baad-bc0e3a0d6f1a in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 unbound from our chassis#033[00m
Jan 31 03:20:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:49.586 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e03fc320-c87d-42d2-a772-ec94aeb05209, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:20:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:49.587 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[75cd07b8-ce08-478d-aea8-3af34722a69b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:49.587 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 namespace which is not needed anymore#033[00m
Jan 31 03:20:49 np0005603609 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Jan 31 03:20:49 np0005603609 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000008f.scope: Consumed 4.153s CPU time.
Jan 31 03:20:49 np0005603609 systemd-machined[190912]: Machine qemu-74-instance-0000008f terminated.
Jan 31 03:20:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:20:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:49.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:20:49 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[280305]: [NOTICE]   (280309) : haproxy version is 2.8.14-c23fe91
Jan 31 03:20:49 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[280305]: [NOTICE]   (280309) : path to executable is /usr/sbin/haproxy
Jan 31 03:20:49 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[280305]: [WARNING]  (280309) : Exiting Master process...
Jan 31 03:20:49 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[280305]: [ALERT]    (280309) : Current worker (280311) exited with code 143 (Terminated)
Jan 31 03:20:49 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[280305]: [WARNING]  (280309) : All workers exited. Exiting... (0)
Jan 31 03:20:49 np0005603609 NetworkManager[49064]: <info>  [1769847649.7111] manager: (tap1f25515c-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/289)
Jan 31 03:20:49 np0005603609 systemd[1]: libpod-a9298d76b21152f3b9c760abd1b8d14656d70a12ca7a5eb0774679fc2e972f8d.scope: Deactivated successfully.
Jan 31 03:20:49 np0005603609 conmon[280305]: conmon a9298d76b21152f3b9c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9298d76b21152f3b9c760abd1b8d14656d70a12ca7a5eb0774679fc2e972f8d.scope/container/memory.events
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.712 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.716 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:49 np0005603609 podman[280596]: 2026-01-31 08:20:49.720926156 +0000 UTC m=+0.049349776 container died a9298d76b21152f3b9c760abd1b8d14656d70a12ca7a5eb0774679fc2e972f8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.732 221554 INFO nova.virt.libvirt.driver [-] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Instance destroyed successfully.#033[00m
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.733 221554 DEBUG nova.objects.instance [None req-2ddbbdf5-eeb5-4f3a-90f6-3f2a8e339c4b 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:49 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9298d76b21152f3b9c760abd1b8d14656d70a12ca7a5eb0774679fc2e972f8d-userdata-shm.mount: Deactivated successfully.
Jan 31 03:20:49 np0005603609 systemd[1]: var-lib-containers-storage-overlay-cbf705f28a8d9553715c7950c339bc4c9a842523da0658cfae3edf001f44bef3-merged.mount: Deactivated successfully.
Jan 31 03:20:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:49.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:49 np0005603609 podman[280596]: 2026-01-31 08:20:49.873985531 +0000 UTC m=+0.202409131 container cleanup a9298d76b21152f3b9c760abd1b8d14656d70a12ca7a5eb0774679fc2e972f8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:20:49 np0005603609 kernel: tap1f25515c-3e: entered promiscuous mode
Jan 31 03:20:49 np0005603609 NetworkManager[49064]: <info>  [1769847649.8774] manager: (tap1f25515c-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/290)
Jan 31 03:20:49 np0005603609 systemd-udevd[280575]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:20:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:49Z|00622|binding|INFO|Claiming lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a for this chassis.
Jan 31 03:20:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:49Z|00623|binding|INFO|1f25515c-3e82-445e-baad-bc0e3a0d6f1a: Claiming fa:16:3e:f6:68:cd 10.100.0.8
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.878 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:49 np0005603609 systemd[1]: libpod-conmon-a9298d76b21152f3b9c760abd1b8d14656d70a12ca7a5eb0774679fc2e972f8d.scope: Deactivated successfully.
Jan 31 03:20:49 np0005603609 NetworkManager[49064]: <info>  [1769847649.8897] device (tap1f25515c-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:20:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:49Z|00624|binding|INFO|Setting lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a ovn-installed in OVS
Jan 31 03:20:49 np0005603609 NetworkManager[49064]: <info>  [1769847649.8905] device (tap1f25515c-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.890 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:49Z|00625|binding|INFO|Setting lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a up in Southbound
Jan 31 03:20:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:49.894 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:68:cd 10.100.0.8'], port_security=['fa:16:3e:f6:68:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '38deb482-bd85-4fdf-b2da-2725bffd8f43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0a8345fd-717b-4084-912f-0c496810f08f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=1f25515c-3e82-445e-baad-bc0e3a0d6f1a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.894 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:49 np0005603609 systemd-machined[190912]: New machine qemu-75-instance-0000008f.
Jan 31 03:20:49 np0005603609 systemd[1]: Started Virtual Machine qemu-75-instance-0000008f.
Jan 31 03:20:49 np0005603609 podman[280647]: 2026-01-31 08:20:49.975787206 +0000 UTC m=+0.079882549 container remove a9298d76b21152f3b9c760abd1b8d14656d70a12ca7a5eb0774679fc2e972f8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:20:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:49.979 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b80b628f-067e-4690-8ea2-be6df6208dfa]: (4, ('Sat Jan 31 08:20:49 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 (a9298d76b21152f3b9c760abd1b8d14656d70a12ca7a5eb0774679fc2e972f8d)\na9298d76b21152f3b9c760abd1b8d14656d70a12ca7a5eb0774679fc2e972f8d\nSat Jan 31 08:20:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 (a9298d76b21152f3b9c760abd1b8d14656d70a12ca7a5eb0774679fc2e972f8d)\na9298d76b21152f3b9c760abd1b8d14656d70a12ca7a5eb0774679fc2e972f8d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:49.981 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee3c036-8dd8-4269-98c7-ae1bd2704d46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:49.982 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03fc320-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.984 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:49 np0005603609 kernel: tape03fc320-c0: left promiscuous mode
Jan 31 03:20:49 np0005603609 nova_compute[221550]: 2026-01-31 08:20:49.989 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:49.993 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ed87884e-57ce-4ca0-8138-62009890e779]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.014 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[018f926e-a8a6-4940-9018-45af7eba6104]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.015 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7b534035-fe9e-4033-993f-7a80154c446b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.026 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc81479-f4f4-45a0-a4dd-e35f0f9d55aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788516, 'reachable_time': 43491, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280670, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.029 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.029 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[b74b41c0-d83f-4181-9d54-1f5500a8f38b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.030 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 1f25515c-3e82-445e-baad-bc0e3a0d6f1a in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 unbound from our chassis#033[00m
Jan 31 03:20:50 np0005603609 systemd[1]: run-netns-ovnmeta\x2de03fc320\x2dc87d\x2d42d2\x2da772\x2dec94aeb05209.mount: Deactivated successfully.
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.032 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e03fc320-c87d-42d2-a772-ec94aeb05209#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.040 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[312e0cdf-a452-4df4-b6cd-ed947f10208d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.039 221554 DEBUG nova.compute.manager [req-34563c17-adf9-47ec-a4d9-6d7696bd947b req-fa26c54c-13b3-4c6f-a5ce-62ecdce48c75 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received event network-vif-unplugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.039 221554 DEBUG oslo_concurrency.lockutils [req-34563c17-adf9-47ec-a4d9-6d7696bd947b req-fa26c54c-13b3-4c6f-a5ce-62ecdce48c75 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.039 221554 DEBUG oslo_concurrency.lockutils [req-34563c17-adf9-47ec-a4d9-6d7696bd947b req-fa26c54c-13b3-4c6f-a5ce-62ecdce48c75 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.039 221554 DEBUG oslo_concurrency.lockutils [req-34563c17-adf9-47ec-a4d9-6d7696bd947b req-fa26c54c-13b3-4c6f-a5ce-62ecdce48c75 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.040 221554 DEBUG nova.compute.manager [req-34563c17-adf9-47ec-a4d9-6d7696bd947b req-fa26c54c-13b3-4c6f-a5ce-62ecdce48c75 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] No waiting events found dispatching network-vif-unplugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.040 221554 WARNING nova.compute.manager [req-34563c17-adf9-47ec-a4d9-6d7696bd947b req-fa26c54c-13b3-4c6f-a5ce-62ecdce48c75 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received unexpected event network-vif-unplugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.040 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape03fc320-c1 in ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.042 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape03fc320-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.043 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[eef5d006-123b-45b0-b3fa-010097955e2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.043 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[50027f5b-45bf-4a94-b8a7-4902226573eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.059 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[67721975-0d85-4eec-b00c-3ad65a0f348b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.071 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[15082cd7-99e5-40fa-9f04-bb7cb62f42b1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.094 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[02db5995-49df-480a-a2e3-c6d49f13eb64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 NetworkManager[49064]: <info>  [1769847650.1035] manager: (tape03fc320-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/291)
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.101 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab511fb-3a27-4936-b4b1-5f8fa9112dff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.134 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[638236b7-8bb1-4874-bc99-66cb235cd04e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.136 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[2012c59f-da38-4d4b-9427-608ab3902f64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 NetworkManager[49064]: <info>  [1769847650.1530] device (tape03fc320-c0): carrier: link connected
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.155 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c99f6015-fc5f-4e4a-bae4-83144b9ec911]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.169 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a146d4ff-60a9-4077-b63f-961620733f67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape03fc320-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:22:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788985, 'reachable_time': 32342, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280696, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.184 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e9997dc8-337b-42d5-ba03-88ad02896add]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:2269'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 788985, 'tstamp': 788985}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280697, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.197 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9ba510-2c3a-4dc4-9116-e7d644edc4a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape03fc320-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:22:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788985, 'reachable_time': 32342, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280698, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.215 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ebac5661-3da0-4d28-97c9-56d80e097fda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.246 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3de50c-5d50-4a87-ae15-79c831be3c98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.247 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03fc320-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.247 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.248 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape03fc320-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:50 np0005603609 kernel: tape03fc320-c0: entered promiscuous mode
Jan 31 03:20:50 np0005603609 NetworkManager[49064]: <info>  [1769847650.2503] manager: (tape03fc320-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.250 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.253 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape03fc320-c0, col_values=(('external_ids', {'iface-id': '075aefe0-13df-4a17-ae95-485ece950a10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:50 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:50Z|00626|binding|INFO|Releasing lport 075aefe0-13df-4a17-ae95-485ece950a10 from this chassis (sb_readonly=0)
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.254 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.255 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e03fc320-c87d-42d2-a772-ec94aeb05209.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e03fc320-c87d-42d2-a772-ec94aeb05209.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.256 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2cdac8-6705-4047-af55-dddb8c63dfe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.257 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/e03fc320-c87d-42d2-a772-ec94aeb05209.pid.haproxy
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:20:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:20:50.258 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'env', 'PROCESS_TAG=haproxy-e03fc320-c87d-42d2-a772-ec94aeb05209', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e03fc320-c87d-42d2-a772-ec94aeb05209.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.261 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.263 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.367 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:20:50 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1739381354' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:20:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:20:50 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1739381354' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:20:50 np0005603609 podman[280730]: 2026-01-31 08:20:50.531807738 +0000 UTC m=+0.037249245 container create 095f49771aad51996f6602620d15bd3958da104d48a515727930d8bc1667164b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:20:50 np0005603609 systemd[1]: Started libpod-conmon-095f49771aad51996f6602620d15bd3958da104d48a515727930d8bc1667164b.scope.
Jan 31 03:20:50 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:20:50 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e6784633f164870a1833c604116ad4247b565c39921d36be2e13bb2704b3de7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:50 np0005603609 podman[280730]: 2026-01-31 08:20:50.512447933 +0000 UTC m=+0.017889450 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:20:50 np0005603609 podman[280730]: 2026-01-31 08:20:50.609646287 +0000 UTC m=+0.115087814 container init 095f49771aad51996f6602620d15bd3958da104d48a515727930d8bc1667164b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 03:20:50 np0005603609 podman[280730]: 2026-01-31 08:20:50.613433619 +0000 UTC m=+0.118875126 container start 095f49771aad51996f6602620d15bd3958da104d48a515727930d8bc1667164b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:20:50 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[280746]: [NOTICE]   (280750) : New worker (280752) forked
Jan 31 03:20:50 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[280746]: [NOTICE]   (280750) : Loading success.
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.808 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 38deb482-bd85-4fdf-b2da-2725bffd8f43 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.809 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847650.808119, 38deb482-bd85-4fdf-b2da-2725bffd8f43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.809 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.860 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.864 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.897 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.898 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847650.8101497, 38deb482-bd85-4fdf-b2da-2725bffd8f43 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.899 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] VM Started (Lifecycle Event)#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.927 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.931 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:20:50 np0005603609 nova_compute[221550]: 2026-01-31 08:20:50.968 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:20:51 np0005603609 nova_compute[221550]: 2026-01-31 08:20:51.354 221554 DEBUG nova.compute.manager [None req-2ddbbdf5-eeb5-4f3a-90f6-3f2a8e339c4b 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:51.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:51.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.226 221554 DEBUG nova.compute.manager [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.226 221554 DEBUG oslo_concurrency.lockutils [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.227 221554 DEBUG oslo_concurrency.lockutils [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.227 221554 DEBUG oslo_concurrency.lockutils [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.227 221554 DEBUG nova.compute.manager [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] No waiting events found dispatching network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.228 221554 WARNING nova.compute.manager [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received unexpected event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a for instance with vm_state active and task_state None.#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.228 221554 DEBUG nova.compute.manager [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.228 221554 DEBUG oslo_concurrency.lockutils [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.229 221554 DEBUG oslo_concurrency.lockutils [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.229 221554 DEBUG oslo_concurrency.lockutils [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.229 221554 DEBUG nova.compute.manager [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] No waiting events found dispatching network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.229 221554 WARNING nova.compute.manager [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received unexpected event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a for instance with vm_state active and task_state None.#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.230 221554 DEBUG nova.compute.manager [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.230 221554 DEBUG oslo_concurrency.lockutils [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.230 221554 DEBUG oslo_concurrency.lockutils [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.231 221554 DEBUG oslo_concurrency.lockutils [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.231 221554 DEBUG nova.compute.manager [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] No waiting events found dispatching network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:20:52 np0005603609 nova_compute[221550]: 2026-01-31 08:20:52.231 221554 WARNING nova.compute.manager [req-b4a6264a-54bf-4f2c-b336-6eb65f4953d6 req-b174a894-d017-4a0a-930d-45b5724e3846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received unexpected event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a for instance with vm_state active and task_state None.#033[00m
Jan 31 03:20:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:53.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:20:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:53.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:20:54 np0005603609 nova_compute[221550]: 2026-01-31 08:20:54.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:55 np0005603609 nova_compute[221550]: 2026-01-31 08:20:55.266 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:55 np0005603609 nova_compute[221550]: 2026-01-31 08:20:55.336 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847640.3345246, ac849bdd-07a0-459f-b207-2a3f239d0272 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:55 np0005603609 nova_compute[221550]: 2026-01-31 08:20:55.336 221554 INFO nova.compute.manager [-] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:20:55 np0005603609 podman[280847]: 2026-01-31 08:20:55.363884843 +0000 UTC m=+0.052146203 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:20:55 np0005603609 nova_compute[221550]: 2026-01-31 08:20:55.370 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:55 np0005603609 podman[280846]: 2026-01-31 08:20:55.381569128 +0000 UTC m=+0.069720655 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, container_name=ovn_controller)
Jan 31 03:20:55 np0005603609 nova_compute[221550]: 2026-01-31 08:20:55.439 221554 DEBUG nova.compute.manager [None req-5fa004e6-00f7-42d5-bb7c-8a059e8fca51 - - - - - -] [instance: ac849bdd-07a0-459f-b207-2a3f239d0272] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:55.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:55.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:20:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:20:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:20:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:57.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:20:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:57.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:59 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:59Z|00627|binding|INFO|Releasing lport 075aefe0-13df-4a17-ae95-485ece950a10 from this chassis (sb_readonly=0)
Jan 31 03:20:59 np0005603609 nova_compute[221550]: 2026-01-31 08:20:59.451 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:59 np0005603609 ovn_controller[130359]: 2026-01-31T08:20:59Z|00628|binding|INFO|Releasing lport 075aefe0-13df-4a17-ae95-485ece950a10 from this chassis (sb_readonly=0)
Jan 31 03:20:59 np0005603609 nova_compute[221550]: 2026-01-31 08:20:59.512 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:59.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:20:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:20:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:59.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:21:00 np0005603609 nova_compute[221550]: 2026-01-31 08:21:00.268 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:00 np0005603609 nova_compute[221550]: 2026-01-31 08:21:00.370 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:21:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:01.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:21:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:21:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:01.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:21:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:03.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:03.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:03 np0005603609 ovn_controller[130359]: 2026-01-31T08:21:03Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:68:cd 10.100.0.8
Jan 31 03:21:05 np0005603609 nova_compute[221550]: 2026-01-31 08:21:05.270 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:05 np0005603609 nova_compute[221550]: 2026-01-31 08:21:05.373 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:05.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:05.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:07.513 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:07.514 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:07.515 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:21:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:07.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:21:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:21:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:07.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:21:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:09.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:09.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:10 np0005603609 nova_compute[221550]: 2026-01-31 08:21:10.272 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:10 np0005603609 nova_compute[221550]: 2026-01-31 08:21:10.375 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:11.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:21:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:11.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:21:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e340 e340: 3 total, 3 up, 3 in
Jan 31 03:21:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:13.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:13.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e341 e341: 3 total, 3 up, 3 in
Jan 31 03:21:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e342 e342: 3 total, 3 up, 3 in
Jan 31 03:21:15 np0005603609 nova_compute[221550]: 2026-01-31 08:21:15.274 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:15 np0005603609 nova_compute[221550]: 2026-01-31 08:21:15.377 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:15.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:15.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e343 e343: 3 total, 3 up, 3 in
Jan 31 03:21:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e344 e344: 3 total, 3 up, 3 in
Jan 31 03:21:17 np0005603609 NetworkManager[49064]: <info>  [1769847677.3890] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Jan 31 03:21:17 np0005603609 nova_compute[221550]: 2026-01-31 08:21:17.388 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:17 np0005603609 NetworkManager[49064]: <info>  [1769847677.3900] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Jan 31 03:21:17 np0005603609 nova_compute[221550]: 2026-01-31 08:21:17.448 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:21:17Z|00629|binding|INFO|Releasing lport 075aefe0-13df-4a17-ae95-485ece950a10 from this chassis (sb_readonly=0)
Jan 31 03:21:17 np0005603609 nova_compute[221550]: 2026-01-31 08:21:17.472 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:17.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:21:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:17.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:21:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:21:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:19.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:21:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:19.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:20 np0005603609 nova_compute[221550]: 2026-01-31 08:21:20.277 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:20 np0005603609 nova_compute[221550]: 2026-01-31 08:21:20.379 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:21.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:21.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 e345: 3 total, 3 up, 3 in
Jan 31 03:21:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:23.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:21:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:23.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.028 221554 DEBUG oslo_concurrency.lockutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "946cb648-0758-4617-bd3a-142804fd70f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.029 221554 DEBUG oslo_concurrency.lockutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.053 221554 DEBUG nova.compute.manager [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.147 221554 DEBUG oslo_concurrency.lockutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.148 221554 DEBUG oslo_concurrency.lockutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.156 221554 DEBUG nova.virt.hardware [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.156 221554 INFO nova.compute.claims [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.320 221554 DEBUG oslo_concurrency.processutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:21:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:21:24 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2298688102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.794 221554 DEBUG oslo_concurrency.processutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.801 221554 DEBUG nova.compute.provider_tree [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.849 221554 DEBUG nova.scheduler.client.report [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.873 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.874 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.883 221554 DEBUG oslo_concurrency.lockutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.883 221554 DEBUG nova.compute.manager [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.914 221554 DEBUG nova.compute.manager [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.954 221554 DEBUG nova.compute.manager [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.955 221554 DEBUG nova.network.neutron [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:21:24 np0005603609 nova_compute[221550]: 2026-01-31 08:21:24.993 221554 INFO nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.008 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.009 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.018 221554 DEBUG nova.virt.hardware [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.019 221554 INFO nova.compute.claims [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.026 221554 DEBUG nova.compute.manager [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.101 221554 INFO nova.virt.block_device [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Booting with blank volume at /dev/vda#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.199 221554 DEBUG oslo_concurrency.processutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.305 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.380 221554 DEBUG nova.policy [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '038e2b3b4f174162a3ac6c4870857e60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c90ea7f1be5f484bb873548236fadc00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.384 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:21:25 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1999579920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.616 221554 DEBUG oslo_concurrency.processutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.624 221554 DEBUG nova.compute.provider_tree [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.654 221554 DEBUG nova.scheduler.client.report [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.684 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.685 221554 DEBUG nova.compute.manager [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.738 221554 DEBUG nova.compute.manager [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.738 221554 DEBUG nova.network.neutron [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:21:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:25.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.770 221554 INFO nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:21:25 np0005603609 nova_compute[221550]: 2026-01-31 08:21:25.847 221554 DEBUG nova.compute.manager [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:21:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:25.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.086 221554 DEBUG nova.compute.manager [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.087 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.088 221554 INFO nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Creating image(s)#033[00m
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.118 221554 DEBUG nova.storage.rbd_utils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] rbd image 20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.153 221554 DEBUG nova.storage.rbd_utils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] rbd image 20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:21:26 np0005603609 podman[280984]: 2026-01-31 08:21:26.176466895 +0000 UTC m=+0.047372099 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.189 221554 DEBUG nova.storage.rbd_utils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] rbd image 20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.193 221554 DEBUG oslo_concurrency.processutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:26 np0005603609 podman[280965]: 2026-01-31 08:21:26.202534181 +0000 UTC m=+0.074314225 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.243 221554 DEBUG oslo_concurrency.processutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.245 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.245 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.246 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.273 221554 DEBUG nova.storage.rbd_utils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] rbd image 20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.277 221554 DEBUG oslo_concurrency.processutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.302 221554 DEBUG nova.policy [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3ada90dc4b77478cb4b93c63409d8537', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fdf18f1faf4846e2a6e2eab4ac2aec02', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.383 221554 DEBUG nova.network.neutron [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Successfully created port: 36f5a0a6-029b-4491-8d74-e44ca0e59f7d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.869 221554 DEBUG oslo_concurrency.processutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:26 np0005603609 nova_compute[221550]: 2026-01-31 08:21:26.941 221554 DEBUG nova.storage.rbd_utils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] resizing rbd image 20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.057 221554 DEBUG nova.objects.instance [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'migration_context' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.126 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.126 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Ensure instance console log exists: /var/lib/nova/instances/20cc7040-fd06-49c7-8e68-41cb74e67e9a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.127 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.127 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.127 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.148 221554 DEBUG os_brick.utils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.150 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.158 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.158 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[de445d5e-1eaa-4f45-8bca-b5262478d2fc]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.160 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.166 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.166 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[6940ca1a-c573-4ed4-812e-793f08b05403]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.167 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.174 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.174 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[066f406e-dd2e-4771-8998-55a64325cf40]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.175 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[f01a0eba-a6e6-47db-9d0a-eed94ffd8711]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.175 221554 DEBUG oslo_concurrency.processutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.198 221554 DEBUG oslo_concurrency.processutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.199 221554 DEBUG os_brick.initiator.connectors.lightos [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.200 221554 DEBUG os_brick.initiator.connectors.lightos [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.200 221554 DEBUG os_brick.initiator.connectors.lightos [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.200 221554 DEBUG os_brick.utils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] <== get_connector_properties: return (52ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:21:27 np0005603609 nova_compute[221550]: 2026-01-31 08:21:27.201 221554 DEBUG nova.virt.block_device [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Updating existing volume attachment record: db690fac-05bc-4a16-95ca-cd07b52ed09d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:21:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:21:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:27.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:21:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:27.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.229 221554 DEBUG nova.network.neutron [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Successfully updated port: 36f5a0a6-029b-4491-8d74-e44ca0e59f7d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.248 221554 DEBUG nova.network.neutron [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Successfully created port: c66c4603-0eab-4f51-ad10-8185e33051dd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.254 221554 DEBUG oslo_concurrency.lockutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "refresh_cache-946cb648-0758-4617-bd3a-142804fd70f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.254 221554 DEBUG oslo_concurrency.lockutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquired lock "refresh_cache-946cb648-0758-4617-bd3a-142804fd70f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.254 221554 DEBUG nova.network.neutron [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:21:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.500 221554 DEBUG nova.compute.manager [req-4ea96cb8-35b9-42c9-a094-1a65c6879079 req-45b8514c-85fd-48c0-b6e4-aa39b3ca1915 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received event network-changed-36f5a0a6-029b-4491-8d74-e44ca0e59f7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.500 221554 DEBUG nova.compute.manager [req-4ea96cb8-35b9-42c9-a094-1a65c6879079 req-45b8514c-85fd-48c0-b6e4-aa39b3ca1915 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Refreshing instance network info cache due to event network-changed-36f5a0a6-029b-4491-8d74-e44ca0e59f7d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.501 221554 DEBUG oslo_concurrency.lockutils [req-4ea96cb8-35b9-42c9-a094-1a65c6879079 req-45b8514c-85fd-48c0-b6e4-aa39b3ca1915 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-946cb648-0758-4617-bd3a-142804fd70f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.643 221554 DEBUG nova.network.neutron [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.914 221554 DEBUG nova.compute.manager [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.916 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.917 221554 INFO nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Creating image(s)#033[00m
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.917 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.917 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Ensure instance console log exists: /var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.917 221554 DEBUG oslo_concurrency.lockutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.918 221554 DEBUG oslo_concurrency.lockutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:28 np0005603609 nova_compute[221550]: 2026-01-31 08:21:28.918 221554 DEBUG oslo_concurrency.lockutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:29 np0005603609 nova_compute[221550]: 2026-01-31 08:21:29.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:21:29 np0005603609 nova_compute[221550]: 2026-01-31 08:21:29.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:21:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:29.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:29.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.307 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.386 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.477 221554 DEBUG nova.network.neutron [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Successfully updated port: c66c4603-0eab-4f51-ad10-8185e33051dd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.620 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "refresh_cache-20cc7040-fd06-49c7-8e68-41cb74e67e9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.621 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquired lock "refresh_cache-20cc7040-fd06-49c7-8e68-41cb74e67e9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.621 221554 DEBUG nova.network.neutron [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.639 221554 DEBUG nova.compute.manager [req-6431a03c-e959-4b3f-9017-bd5120ca487c req-32639188-1834-47b3-a656-7e32eff96a35 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received event network-changed-c66c4603-0eab-4f51-ad10-8185e33051dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.639 221554 DEBUG nova.compute.manager [req-6431a03c-e959-4b3f-9017-bd5120ca487c req-32639188-1834-47b3-a656-7e32eff96a35 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Refreshing instance network info cache due to event network-changed-c66c4603-0eab-4f51-ad10-8185e33051dd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.639 221554 DEBUG oslo_concurrency.lockutils [req-6431a03c-e959-4b3f-9017-bd5120ca487c req-32639188-1834-47b3-a656-7e32eff96a35 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-20cc7040-fd06-49c7-8e68-41cb74e67e9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.693 221554 DEBUG nova.network.neutron [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Updating instance_info_cache with network_info: [{"id": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "address": "fa:16:3e:0d:0a:0c", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f5a0a6-02", "ovs_interfaceid": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.823 221554 DEBUG oslo_concurrency.lockutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Releasing lock "refresh_cache-946cb648-0758-4617-bd3a-142804fd70f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.823 221554 DEBUG nova.compute.manager [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Instance network_info: |[{"id": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "address": "fa:16:3e:0d:0a:0c", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f5a0a6-02", "ovs_interfaceid": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.824 221554 DEBUG oslo_concurrency.lockutils [req-4ea96cb8-35b9-42c9-a094-1a65c6879079 req-45b8514c-85fd-48c0-b6e4-aa39b3ca1915 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-946cb648-0758-4617-bd3a-142804fd70f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.824 221554 DEBUG nova.network.neutron [req-4ea96cb8-35b9-42c9-a094-1a65c6879079 req-45b8514c-85fd-48c0-b6e4-aa39b3ca1915 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Refreshing network info cache for port 36f5a0a6-029b-4491-8d74-e44ca0e59f7d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.827 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Start _get_guest_xml network_info=[{"id": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "address": "fa:16:3e:0d:0a:0c", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f5a0a6-02", "ovs_interfaceid": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': 'db690fac-05bc-4a16-95ca-cd07b52ed09d', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9963309d-4585-4a8d-8bdf-00f8aa2f69e1', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9963309d-4585-4a8d-8bdf-00f8aa2f69e1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '946cb648-0758-4617-bd3a-142804fd70f7', 'attached_at': '', 'detached_at': '', 'volume_id': '9963309d-4585-4a8d-8bdf-00f8aa2f69e1', 'serial': '9963309d-4585-4a8d-8bdf-00f8aa2f69e1'}, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'guest_format': None, 'mount_device': '/dev/vda', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.831 221554 WARNING nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.835 221554 DEBUG nova.virt.libvirt.host [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.836 221554 DEBUG nova.virt.libvirt.host [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.839 221554 DEBUG nova.virt.libvirt.host [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.839 221554 DEBUG nova.virt.libvirt.host [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.840 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.841 221554 DEBUG nova.virt.hardware [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.841 221554 DEBUG nova.virt.hardware [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.841 221554 DEBUG nova.virt.hardware [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.842 221554 DEBUG nova.virt.hardware [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.842 221554 DEBUG nova.virt.hardware [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.842 221554 DEBUG nova.virt.hardware [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.842 221554 DEBUG nova.virt.hardware [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.843 221554 DEBUG nova.virt.hardware [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.843 221554 DEBUG nova.virt.hardware [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.843 221554 DEBUG nova.virt.hardware [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.843 221554 DEBUG nova.virt.hardware [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.867 221554 DEBUG nova.storage.rbd_utils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image 946cb648-0758-4617-bd3a-142804fd70f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.871 221554 DEBUG oslo_concurrency.processutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:30 np0005603609 nova_compute[221550]: 2026-01-31 08:21:30.892 221554 DEBUG nova.network.neutron [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:21:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:21:31 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/410450162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.280 221554 DEBUG oslo_concurrency.processutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.310 221554 DEBUG nova.virt.libvirt.vif [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:21:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-2006986215',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-2006986215',id=149,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c90ea7f1be5f484bb873548236fadc00',ramdisk_id='',reservation_id='r-nvcx2ai9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694',owner_user_name='tempest-ServerBootFromVolum
eStableRescueTest-1116995694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:21:25Z,user_data=None,user_id='038e2b3b4f174162a3ac6c4870857e60',uuid=946cb648-0758-4617-bd3a-142804fd70f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "address": "fa:16:3e:0d:0a:0c", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f5a0a6-02", "ovs_interfaceid": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.310 221554 DEBUG nova.network.os_vif_util [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converting VIF {"id": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "address": "fa:16:3e:0d:0a:0c", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f5a0a6-02", "ovs_interfaceid": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.311 221554 DEBUG nova.network.os_vif_util [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0a:0c,bridge_name='br-int',has_traffic_filtering=True,id=36f5a0a6-029b-4491-8d74-e44ca0e59f7d,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36f5a0a6-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.312 221554 DEBUG nova.objects.instance [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 946cb648-0758-4617-bd3a-142804fd70f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.332 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  <uuid>946cb648-0758-4617-bd3a-142804fd70f7</uuid>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  <name>instance-00000095</name>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-2006986215</nova:name>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:21:30</nova:creationTime>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        <nova:user uuid="038e2b3b4f174162a3ac6c4870857e60">tempest-ServerBootFromVolumeStableRescueTest-1116995694-project-member</nova:user>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        <nova:project uuid="c90ea7f1be5f484bb873548236fadc00">tempest-ServerBootFromVolumeStableRescueTest-1116995694</nova:project>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        <nova:port uuid="36f5a0a6-029b-4491-8d74-e44ca0e59f7d">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <entry name="serial">946cb648-0758-4617-bd3a-142804fd70f7</entry>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <entry name="uuid">946cb648-0758-4617-bd3a-142804fd70f7</entry>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/946cb648-0758-4617-bd3a-142804fd70f7_disk.config">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="volumes/volume-9963309d-4585-4a8d-8bdf-00f8aa2f69e1">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <serial>9963309d-4585-4a8d-8bdf-00f8aa2f69e1</serial>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:0d:0a:0c"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <target dev="tap36f5a0a6-02"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7/console.log" append="off"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:21:31 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:21:31 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:21:31 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:21:31 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.333 221554 DEBUG nova.compute.manager [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Preparing to wait for external event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.333 221554 DEBUG oslo_concurrency.lockutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "946cb648-0758-4617-bd3a-142804fd70f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.333 221554 DEBUG oslo_concurrency.lockutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.334 221554 DEBUG oslo_concurrency.lockutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.334 221554 DEBUG nova.virt.libvirt.vif [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:21:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-2006986215',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-2006986215',id=149,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c90ea7f1be5f484bb873548236fadc00',ramdisk_id='',reservation_id='r-nvcx2ai9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:21:25Z,user_data=None,user_id='038e2b3b4f174162a3ac6c4870857e60',uuid=946cb648-0758-4617-bd3a-142804fd70f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "address": "fa:16:3e:0d:0a:0c", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f5a0a6-02", "ovs_interfaceid": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.334 221554 DEBUG nova.network.os_vif_util [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converting VIF {"id": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "address": "fa:16:3e:0d:0a:0c", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f5a0a6-02", "ovs_interfaceid": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.336 221554 DEBUG nova.network.os_vif_util [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0a:0c,bridge_name='br-int',has_traffic_filtering=True,id=36f5a0a6-029b-4491-8d74-e44ca0e59f7d,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36f5a0a6-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.336 221554 DEBUG os_vif [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0a:0c,bridge_name='br-int',has_traffic_filtering=True,id=36f5a0a6-029b-4491-8d74-e44ca0e59f7d,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36f5a0a6-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.336 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.337 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.337 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.339 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.339 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36f5a0a6-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.340 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap36f5a0a6-02, col_values=(('external_ids', {'iface-id': '36f5a0a6-029b-4491-8d74-e44ca0e59f7d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:0a:0c', 'vm-uuid': '946cb648-0758-4617-bd3a-142804fd70f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.341 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:31 np0005603609 NetworkManager[49064]: <info>  [1769847691.3421] manager: (tap36f5a0a6-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/295)
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.343 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.346 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.348 221554 INFO os_vif [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:0a:0c,bridge_name='br-int',has_traffic_filtering=True,id=36f5a0a6-029b-4491-8d74-e44ca0e59f7d,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36f5a0a6-02')#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.540 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.540 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.540 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No VIF found with MAC fa:16:3e:0d:0a:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.541 221554 INFO nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Using config drive#033[00m
Jan 31 03:21:31 np0005603609 nova_compute[221550]: 2026-01-31 08:21:31.562 221554 DEBUG nova.storage.rbd_utils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image 946cb648-0758-4617-bd3a-142804fd70f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:21:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:31.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:31.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.402 221554 INFO nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Creating config drive at /var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7/disk.config#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.406 221554 DEBUG oslo_concurrency.processutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdzrf_ghp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.426 221554 DEBUG nova.network.neutron [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Updating instance_info_cache with network_info: [{"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.503 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Releasing lock "refresh_cache-20cc7040-fd06-49c7-8e68-41cb74e67e9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.504 221554 DEBUG nova.compute.manager [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Instance network_info: |[{"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.506 221554 DEBUG oslo_concurrency.lockutils [req-6431a03c-e959-4b3f-9017-bd5120ca487c req-32639188-1834-47b3-a656-7e32eff96a35 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-20cc7040-fd06-49c7-8e68-41cb74e67e9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.506 221554 DEBUG nova.network.neutron [req-6431a03c-e959-4b3f-9017-bd5120ca487c req-32639188-1834-47b3-a656-7e32eff96a35 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Refreshing network info cache for port c66c4603-0eab-4f51-ad10-8185e33051dd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.512 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Start _get_guest_xml network_info=[{"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.518 221554 WARNING nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.524 221554 DEBUG nova.virt.libvirt.host [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.525 221554 DEBUG nova.virt.libvirt.host [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.528 221554 DEBUG nova.virt.libvirt.host [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.529 221554 DEBUG nova.virt.libvirt.host [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.530 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.530 221554 DEBUG nova.virt.hardware [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.531 221554 DEBUG nova.virt.hardware [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.531 221554 DEBUG nova.virt.hardware [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.531 221554 DEBUG nova.virt.hardware [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.531 221554 DEBUG nova.virt.hardware [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.531 221554 DEBUG nova.virt.hardware [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.531 221554 DEBUG nova.virt.hardware [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.532 221554 DEBUG nova.virt.hardware [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.532 221554 DEBUG nova.virt.hardware [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.532 221554 DEBUG nova.virt.hardware [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.532 221554 DEBUG nova.virt.hardware [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.535 221554 DEBUG oslo_concurrency.processutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.561 221554 DEBUG oslo_concurrency.processutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdzrf_ghp" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.599 221554 DEBUG nova.storage.rbd_utils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image 946cb648-0758-4617-bd3a-142804fd70f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.605 221554 DEBUG oslo_concurrency.processutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7/disk.config 946cb648-0758-4617-bd3a-142804fd70f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.713 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.875 221554 DEBUG oslo_concurrency.processutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7/disk.config 946cb648-0758-4617-bd3a-142804fd70f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.876 221554 INFO nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Deleting local config drive /var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7/disk.config because it was imported into RBD.#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.901 221554 DEBUG nova.network.neutron [req-4ea96cb8-35b9-42c9-a094-1a65c6879079 req-45b8514c-85fd-48c0-b6e4-aa39b3ca1915 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Updated VIF entry in instance network info cache for port 36f5a0a6-029b-4491-8d74-e44ca0e59f7d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.901 221554 DEBUG nova.network.neutron [req-4ea96cb8-35b9-42c9-a094-1a65c6879079 req-45b8514c-85fd-48c0-b6e4-aa39b3ca1915 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Updating instance_info_cache with network_info: [{"id": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "address": "fa:16:3e:0d:0a:0c", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f5a0a6-02", "ovs_interfaceid": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:21:32 np0005603609 kernel: tap36f5a0a6-02: entered promiscuous mode
Jan 31 03:21:32 np0005603609 NetworkManager[49064]: <info>  [1769847692.9168] manager: (tap36f5a0a6-02): new Tun device (/org/freedesktop/NetworkManager/Devices/296)
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.917 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:21:32Z|00630|binding|INFO|Claiming lport 36f5a0a6-029b-4491-8d74-e44ca0e59f7d for this chassis.
Jan 31 03:21:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:21:32Z|00631|binding|INFO|36f5a0a6-029b-4491-8d74-e44ca0e59f7d: Claiming fa:16:3e:0d:0a:0c 10.100.0.8
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.922 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.928 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:21:32Z|00632|binding|INFO|Setting lport 36f5a0a6-029b-4491-8d74-e44ca0e59f7d ovn-installed in OVS
Jan 31 03:21:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:21:32Z|00633|binding|INFO|Setting lport 36f5a0a6-029b-4491-8d74-e44ca0e59f7d up in Southbound
Jan 31 03:21:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:32.933 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:0a:0c 10.100.0.8'], port_security=['fa:16:3e:0d:0a:0c 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '946cb648-0758-4617-bd3a-142804fd70f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c90ea7f1be5f484bb873548236fadc00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '952b4f08-f5a7-4fc0-ae2c-267f2ba857a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42261fad-d2a1-4da1-823a-75e271c17223, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=36f5a0a6-029b-4491-8d74-e44ca0e59f7d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.932 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.934 221554 DEBUG oslo_concurrency.lockutils [req-4ea96cb8-35b9-42c9-a094-1a65c6879079 req-45b8514c-85fd-48c0-b6e4-aa39b3ca1915 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-946cb648-0758-4617-bd3a-142804fd70f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:21:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:32.934 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 36f5a0a6-029b-4491-8d74-e44ca0e59f7d in datapath 936cead9-bc2f-4c2d-8b4c-6079d2159263 bound to our chassis#033[00m
Jan 31 03:21:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:32.936 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 936cead9-bc2f-4c2d-8b4c-6079d2159263#033[00m
Jan 31 03:21:32 np0005603609 systemd-machined[190912]: New machine qemu-76-instance-00000095.
Jan 31 03:21:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:32.943 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e02d38c5-e9d5-431d-ac3d-324295629ffb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:32.944 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap936cead9-b1 in ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:21:32 np0005603609 systemd-udevd[281316]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:21:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:32.946 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap936cead9-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:21:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:32.946 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3c56263e-8f8e-4a3f-8588-6e33a53d9601]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:32.947 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9053befe-9661-45fa-b305-db0bd9092c9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:21:32 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3987036066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:21:32 np0005603609 systemd[1]: Started Virtual Machine qemu-76-instance-00000095.
Jan 31 03:21:32 np0005603609 NetworkManager[49064]: <info>  [1769847692.9570] device (tap36f5a0a6-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:21:32 np0005603609 NetworkManager[49064]: <info>  [1769847692.9577] device (tap36f5a0a6-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:21:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:32.954 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[f9463043-b782-401e-bf9f-ead2038a481a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:32.967 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5811c27d-43c6-46f8-abe0-ec26773c7755]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:32 np0005603609 nova_compute[221550]: 2026-01-31 08:21:32.976 221554 DEBUG oslo_concurrency.processutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:32.989 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5987e0-784b-4021-a37e-b5c6c4e55ddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:32 np0005603609 NetworkManager[49064]: <info>  [1769847692.9947] manager: (tap936cead9-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/297)
Jan 31 03:21:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:32.996 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[eb3cfe28-5f86-4264-8333-f3bc6fadc243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.015 221554 DEBUG nova.storage.rbd_utils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] rbd image 20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.019 221554 DEBUG oslo_concurrency.processutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:33.020 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[7e840ddc-6a7f-4ac1-b07a-37690370284a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:33.023 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0aa377-952e-4df5-84c1-db7ecbe21d38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:33 np0005603609 NetworkManager[49064]: <info>  [1769847693.0411] device (tap936cead9-b0): carrier: link connected
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:33.045 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[673cb702-246d-4419-8f41-c8d8b41f4e0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:33.057 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f54b43ce-9db0-4e87-82e1-2b40a29a87f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap936cead9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:06:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 793274, 'reachable_time': 30007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281369, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:33.068 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3e23d488-03b5-4ca8-91fd-45bc0267879b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:62a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 793274, 'tstamp': 793274}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281370, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:33.078 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1a7b7f-5a77-40a5-8b7e-e766a3171c12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap936cead9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:06:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 793274, 'reachable_time': 30007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281371, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:33.098 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[be4b5636-7e87-4011-a54c-1bf2489275d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:33.143 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0ffbf4b8-f7aa-410f-a53a-f19c4019a308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:33.144 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap936cead9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:33.145 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:33.145 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap936cead9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.146 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:33 np0005603609 NetworkManager[49064]: <info>  [1769847693.1476] manager: (tap936cead9-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Jan 31 03:21:33 np0005603609 kernel: tap936cead9-b0: entered promiscuous mode
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.150 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:33.151 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap936cead9-b0, col_values=(('external_ids', {'iface-id': 'fd5187fd-cce9-41da-96d2-ef75fbcbcf0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.152 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:21:33Z|00634|binding|INFO|Releasing lport fd5187fd-cce9-41da-96d2-ef75fbcbcf0f from this chassis (sb_readonly=0)
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:33.153 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/936cead9-bc2f-4c2d-8b4c-6079d2159263.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/936cead9-bc2f-4c2d-8b4c-6079d2159263.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:33.154 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e5181332-a43c-4ee1-b758-6d8b4855b365]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:33.154 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/936cead9-bc2f-4c2d-8b4c-6079d2159263.pid.haproxy
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:21:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:33.155 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'env', 'PROCESS_TAG=haproxy-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/936cead9-bc2f-4c2d-8b4c-6079d2159263.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.159 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.271 221554 DEBUG nova.compute.manager [req-e91152fc-7294-40e1-81e8-8db3ca630ba8 req-3f7e39b1-87cc-4847-8c6b-54f799c41641 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.278 221554 DEBUG oslo_concurrency.lockutils [req-e91152fc-7294-40e1-81e8-8db3ca630ba8 req-3f7e39b1-87cc-4847-8c6b-54f799c41641 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "946cb648-0758-4617-bd3a-142804fd70f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.279 221554 DEBUG oslo_concurrency.lockutils [req-e91152fc-7294-40e1-81e8-8db3ca630ba8 req-3f7e39b1-87cc-4847-8c6b-54f799c41641 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.279 221554 DEBUG oslo_concurrency.lockutils [req-e91152fc-7294-40e1-81e8-8db3ca630ba8 req-3f7e39b1-87cc-4847-8c6b-54f799c41641 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.279 221554 DEBUG nova.compute.manager [req-e91152fc-7294-40e1-81e8-8db3ca630ba8 req-3f7e39b1-87cc-4847-8c6b-54f799c41641 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Processing event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:21:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:21:33 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2232488299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.456 221554 DEBUG oslo_concurrency.processutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.459 221554 DEBUG nova.virt.libvirt.vif [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:21:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-504299522',display_name='tempest-AttachVolumeTestJSON-server-504299522',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-504299522',id=150,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBu/9yb3AwKfL+ijJ3Xo9JzrwBYsej/AsT1y29vhp9sWMO5Td0RAhZoszueDKQGZWIcDlz6V7Rc2fL6hsFrBntj+ecoex7fusa36kcOGL0EN/HOgO9QzUew3k1clrSe20A==',key_name='tempest-keypair-2098733157',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fdf18f1faf4846e2a6e2eab4ac2aec02',ramdisk_id='',reservation_id='r-n08gz270',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-1821521720',owner_user_name='tempest-AttachVolumeTestJSON-1821521720-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:21:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3ada90dc4b77478cb4b93c63409d8537',uuid=20cc7040-fd06-49c7-8e68-41cb74e67e9a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.459 221554 DEBUG nova.network.os_vif_util [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converting VIF {"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.460 221554 DEBUG nova.network.os_vif_util [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.461 221554 DEBUG nova.objects.instance [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.484 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  <uuid>20cc7040-fd06-49c7-8e68-41cb74e67e9a</uuid>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  <name>instance-00000096</name>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <nova:name>tempest-AttachVolumeTestJSON-server-504299522</nova:name>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:21:32</nova:creationTime>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        <nova:user uuid="3ada90dc4b77478cb4b93c63409d8537">tempest-AttachVolumeTestJSON-1821521720-project-member</nova:user>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        <nova:project uuid="fdf18f1faf4846e2a6e2eab4ac2aec02">tempest-AttachVolumeTestJSON-1821521720</nova:project>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        <nova:port uuid="c66c4603-0eab-4f51-ad10-8185e33051dd">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <entry name="serial">20cc7040-fd06-49c7-8e68-41cb74e67e9a</entry>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <entry name="uuid">20cc7040-fd06-49c7-8e68-41cb74e67e9a</entry>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk.config">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:11:71:84"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <target dev="tapc66c4603-0e"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/20cc7040-fd06-49c7-8e68-41cb74e67e9a/console.log" append="off"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:21:33 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:21:33 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:21:33 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:21:33 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.488 221554 DEBUG nova.compute.manager [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Preparing to wait for external event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.489 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.489 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.489 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.490 221554 DEBUG nova.virt.libvirt.vif [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:21:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-504299522',display_name='tempest-AttachVolumeTestJSON-server-504299522',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-504299522',id=150,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBu/9yb3AwKfL+ijJ3Xo9JzrwBYsej/AsT1y29vhp9sWMO5Td0RAhZoszueDKQGZWIcDlz6V7Rc2fL6hsFrBntj+ecoex7fusa36kcOGL0EN/HOgO9QzUew3k1clrSe20A==',key_name='tempest-keypair-2098733157',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fdf18f1faf4846e2a6e2eab4ac2aec02',ramdisk_id='',reservation_id='r-n08gz270',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-1821521720',owner_user_name='tempest-AttachVolumeTestJSON-1821521720-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:21:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3ada90dc4b77478cb4b93c63409d8537',uuid=20cc7040-fd06-49c7-8e68-41cb74e67e9a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.490 221554 DEBUG nova.network.os_vif_util [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converting VIF {"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.491 221554 DEBUG nova.network.os_vif_util [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.492 221554 DEBUG os_vif [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.492 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.492 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.493 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.496 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.496 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc66c4603-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.496 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc66c4603-0e, col_values=(('external_ids', {'iface-id': 'c66c4603-0eab-4f51-ad10-8185e33051dd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:71:84', 'vm-uuid': '20cc7040-fd06-49c7-8e68-41cb74e67e9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.498 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:33 np0005603609 NetworkManager[49064]: <info>  [1769847693.4995] manager: (tapc66c4603-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.500 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.503 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.504 221554 INFO os_vif [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e')#033[00m
Jan 31 03:21:33 np0005603609 podman[281421]: 2026-01-31 08:21:33.428534423 +0000 UTC m=+0.020050333 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.588 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.589 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.589 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No VIF found with MAC fa:16:3e:11:71:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.589 221554 INFO nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Using config drive#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.732 221554 DEBUG nova.storage.rbd_utils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] rbd image 20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:21:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:33.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:33 np0005603609 podman[281421]: 2026-01-31 08:21:33.860318751 +0000 UTC m=+0.451834641 container create 5f9b6d7cd90a82d764a59cc5bd843ec386e9f1eb91065810b64dd1c0bcf94134 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:21:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:33.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.965 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847693.9645734, 946cb648-0758-4617-bd3a-142804fd70f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.965 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] VM Started (Lifecycle Event)#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.967 221554 DEBUG nova.compute.manager [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.971 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.977 221554 INFO nova.virt.libvirt.driver [-] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Instance spawned successfully.#033[00m
Jan 31 03:21:33 np0005603609 nova_compute[221550]: 2026-01-31 08:21:33.978 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.000 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.023 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.029 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.030 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.030 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.031 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.032 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.032 221554 DEBUG nova.virt.libvirt.driver [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:34 np0005603609 systemd[1]: Started libpod-conmon-5f9b6d7cd90a82d764a59cc5bd843ec386e9f1eb91065810b64dd1c0bcf94134.scope.
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.076 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.076 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847693.9672592, 946cb648-0758-4617-bd3a-142804fd70f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.076 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:21:34 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:21:34 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ee99a26b0cb8a188f897dcfe98a2f92308789dbb1734a27138df619be794d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.115 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.119 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847693.9703012, 946cb648-0758-4617-bd3a-142804fd70f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.119 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:21:34 np0005603609 podman[281421]: 2026-01-31 08:21:34.137597019 +0000 UTC m=+0.729112949 container init 5f9b6d7cd90a82d764a59cc5bd843ec386e9f1eb91065810b64dd1c0bcf94134 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 03:21:34 np0005603609 podman[281421]: 2026-01-31 08:21:34.141862272 +0000 UTC m=+0.733378172 container start 5f9b6d7cd90a82d764a59cc5bd843ec386e9f1eb91065810b64dd1c0bcf94134 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.152 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.155 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:21:34 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[281502]: [NOTICE]   (281506) : New worker (281508) forked
Jan 31 03:21:34 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[281502]: [NOTICE]   (281506) : Loading success.
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.164 221554 INFO nova.compute.manager [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Took 5.25 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.164 221554 DEBUG nova.compute.manager [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.183 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.307 221554 INFO nova.compute.manager [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Took 10.20 seconds to build instance.#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.342 221554 DEBUG oslo_concurrency.lockutils [None req-fb082b87-6749-4f56-9607-b9b9688994b7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.396 221554 INFO nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Creating config drive at /var/lib/nova/instances/20cc7040-fd06-49c7-8e68-41cb74e67e9a/disk.config#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.401 221554 DEBUG oslo_concurrency.processutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20cc7040-fd06-49c7-8e68-41cb74e67e9a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1766p6iw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.525 221554 DEBUG oslo_concurrency.processutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20cc7040-fd06-49c7-8e68-41cb74e67e9a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1766p6iw" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.551 221554 DEBUG nova.storage.rbd_utils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] rbd image 20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.555 221554 DEBUG oslo_concurrency.processutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20cc7040-fd06-49c7-8e68-41cb74e67e9a/disk.config 20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.598 221554 DEBUG nova.network.neutron [req-6431a03c-e959-4b3f-9017-bd5120ca487c req-32639188-1834-47b3-a656-7e32eff96a35 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Updated VIF entry in instance network info cache for port c66c4603-0eab-4f51-ad10-8185e33051dd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.599 221554 DEBUG nova.network.neutron [req-6431a03c-e959-4b3f-9017-bd5120ca487c req-32639188-1834-47b3-a656-7e32eff96a35 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Updating instance_info_cache with network_info: [{"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.625 221554 DEBUG oslo_concurrency.lockutils [req-6431a03c-e959-4b3f-9017-bd5120ca487c req-32639188-1834-47b3-a656-7e32eff96a35 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-20cc7040-fd06-49c7-8e68-41cb74e67e9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:21:34 np0005603609 nova_compute[221550]: 2026-01-31 08:21:34.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:21:35 np0005603609 nova_compute[221550]: 2026-01-31 08:21:35.312 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:35 np0005603609 nova_compute[221550]: 2026-01-31 08:21:35.453 221554 DEBUG nova.compute.manager [req-4f0ecf71-0f7b-44a2-8afc-197013909d7e req-34c78759-ae1e-49c7-a7e0-f03e3f8d20fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:21:35 np0005603609 nova_compute[221550]: 2026-01-31 08:21:35.454 221554 DEBUG oslo_concurrency.lockutils [req-4f0ecf71-0f7b-44a2-8afc-197013909d7e req-34c78759-ae1e-49c7-a7e0-f03e3f8d20fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "946cb648-0758-4617-bd3a-142804fd70f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:35 np0005603609 nova_compute[221550]: 2026-01-31 08:21:35.454 221554 DEBUG oslo_concurrency.lockutils [req-4f0ecf71-0f7b-44a2-8afc-197013909d7e req-34c78759-ae1e-49c7-a7e0-f03e3f8d20fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:35 np0005603609 nova_compute[221550]: 2026-01-31 08:21:35.454 221554 DEBUG oslo_concurrency.lockutils [req-4f0ecf71-0f7b-44a2-8afc-197013909d7e req-34c78759-ae1e-49c7-a7e0-f03e3f8d20fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:35 np0005603609 nova_compute[221550]: 2026-01-31 08:21:35.455 221554 DEBUG nova.compute.manager [req-4f0ecf71-0f7b-44a2-8afc-197013909d7e req-34c78759-ae1e-49c7-a7e0-f03e3f8d20fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] No waiting events found dispatching network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:21:35 np0005603609 nova_compute[221550]: 2026-01-31 08:21:35.455 221554 WARNING nova.compute.manager [req-4f0ecf71-0f7b-44a2-8afc-197013909d7e req-34c78759-ae1e-49c7-a7e0-f03e3f8d20fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received unexpected event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d for instance with vm_state active and task_state None.#033[00m
Jan 31 03:21:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:35.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:35 np0005603609 nova_compute[221550]: 2026-01-31 08:21:35.780 221554 DEBUG oslo_concurrency.processutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20cc7040-fd06-49c7-8e68-41cb74e67e9a/disk.config 20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:35 np0005603609 nova_compute[221550]: 2026-01-31 08:21:35.782 221554 INFO nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Deleting local config drive /var/lib/nova/instances/20cc7040-fd06-49c7-8e68-41cb74e67e9a/disk.config because it was imported into RBD.#033[00m
Jan 31 03:21:35 np0005603609 virtqemud[221292]: End of file while reading data: Input/output error
Jan 31 03:21:35 np0005603609 kernel: tapc66c4603-0e: entered promiscuous mode
Jan 31 03:21:35 np0005603609 systemd-udevd[281357]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:21:35 np0005603609 NetworkManager[49064]: <info>  [1769847695.8393] manager: (tapc66c4603-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/300)
Jan 31 03:21:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:35.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:35 np0005603609 nova_compute[221550]: 2026-01-31 08:21:35.888 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:21:35Z|00635|binding|INFO|Claiming lport c66c4603-0eab-4f51-ad10-8185e33051dd for this chassis.
Jan 31 03:21:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:21:35Z|00636|binding|INFO|c66c4603-0eab-4f51-ad10-8185e33051dd: Claiming fa:16:3e:11:71:84 10.100.0.4
Jan 31 03:21:35 np0005603609 nova_compute[221550]: 2026-01-31 08:21:35.890 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:35 np0005603609 NetworkManager[49064]: <info>  [1769847695.8919] device (tapc66c4603-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:21:35 np0005603609 NetworkManager[49064]: <info>  [1769847695.8928] device (tapc66c4603-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:21:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:21:35Z|00637|binding|INFO|Setting lport c66c4603-0eab-4f51-ad10-8185e33051dd ovn-installed in OVS
Jan 31 03:21:35 np0005603609 nova_compute[221550]: 2026-01-31 08:21:35.897 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:35 np0005603609 systemd-machined[190912]: New machine qemu-77-instance-00000096.
Jan 31 03:21:35 np0005603609 systemd[1]: Started Virtual Machine qemu-77-instance-00000096.
Jan 31 03:21:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:21:35Z|00638|binding|INFO|Setting lport c66c4603-0eab-4f51-ad10-8185e33051dd up in Southbound
Jan 31 03:21:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:35.947 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:71:84 10.100.0.4'], port_security=['fa:16:3e:11:71:84 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '20cc7040-fd06-49c7-8e68-41cb74e67e9a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e39ff1d-f815-485f-8f43-698b31333bba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fdf18f1faf4846e2a6e2eab4ac2aec02', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2dffb7ca-42ef-40e9-a7b7-090e002450c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=917c1732-02f7-4e55-ac45-7e04e3147221, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=c66c4603-0eab-4f51-ad10-8185e33051dd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:21:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:35.948 140058 INFO neutron.agent.ovn.metadata.agent [-] Port c66c4603-0eab-4f51-ad10-8185e33051dd in datapath 1e39ff1d-f815-485f-8f43-698b31333bba bound to our chassis#033[00m
Jan 31 03:21:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:35.950 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e39ff1d-f815-485f-8f43-698b31333bba#033[00m
Jan 31 03:21:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:35.960 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[598e06c9-081e-4624-bca5-41361247563e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:35.961 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1e39ff1d-f1 in ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:21:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:35.963 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1e39ff1d-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:21:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:35.963 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[966f2ad9-077a-4651-a70a-4069d4b74306]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:35.964 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9a54d5bf-26a9-432c-bf79-9e534c7271f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:35.971 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ab8920-1ed8-4e1e-9cc5-4946e649cf53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:35.980 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cf79307d-ad8b-46a2-bd3d-485e30fb3cd7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:35.998 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d6241dbf-ef81-48c4-a2ed-513bc75f3599]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:36 np0005603609 NetworkManager[49064]: <info>  [1769847696.0043] manager: (tap1e39ff1d-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/301)
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.003 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[71872209-8fd8-47cb-97e9-176a0786e31a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:36 np0005603609 systemd-udevd[281586]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.025 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f514a926-bc9c-45cd-8cec-b819b1e19f94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.027 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f55a6b-9290-4625-8da2-dca2ae8404c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:36 np0005603609 NetworkManager[49064]: <info>  [1769847696.0461] device (tap1e39ff1d-f0): carrier: link connected
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.048 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[78da41f0-d520-467d-8cbd-27c543214bf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.060 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2364042a-7a28-4cd2-b71e-6f50a26defdb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e39ff1d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:45:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 793574, 'reachable_time': 39196, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281605, 'error': None, 'target': 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.073 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe739b3-0d2f-45e0-8369-907a45f70f4f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:4516'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 793574, 'tstamp': 793574}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281606, 'error': None, 'target': 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.086 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f84fad1d-a117-40c5-ab63-42e09ab0b513]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e39ff1d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:45:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 793574, 'reachable_time': 39196, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281607, 'error': None, 'target': 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.108 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[805d8933-f2e4-4ce9-8c5b-7c8236d0dd96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.153 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[edc86b37-1c74-4a2e-929e-184423ebd5d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.154 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e39ff1d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.155 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.155 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e39ff1d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:36 np0005603609 kernel: tap1e39ff1d-f0: entered promiscuous mode
Jan 31 03:21:36 np0005603609 NetworkManager[49064]: <info>  [1769847696.1581] manager: (tap1e39ff1d-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.160 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e39ff1d-f0, col_values=(('external_ids', {'iface-id': '98f103d6-a5bc-4680-a4d5-8ced102dd381'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.160 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:36 np0005603609 ovn_controller[130359]: 2026-01-31T08:21:36Z|00639|binding|INFO|Releasing lport 98f103d6-a5bc-4680-a4d5-8ced102dd381 from this chassis (sb_readonly=0)
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.172 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.173 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e39ff1d-f815-485f-8f43-698b31333bba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e39ff1d-f815-485f-8f43-698b31333bba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.175 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b9e6b1-93fd-46cc-8ba5-5e77eb3ae748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.176 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-1e39ff1d-f815-485f-8f43-698b31333bba
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/1e39ff1d-f815-485f-8f43-698b31333bba.pid.haproxy
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 1e39ff1d-f815-485f-8f43-698b31333bba
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:21:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:36.176 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'env', 'PROCESS_TAG=haproxy-1e39ff1d-f815-485f-8f43-698b31333bba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1e39ff1d-f815-485f-8f43-698b31333bba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:21:36 np0005603609 podman[281639]: 2026-01-31 08:21:36.463307328 +0000 UTC m=+0.022837389 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:21:36 np0005603609 podman[281639]: 2026-01-31 08:21:36.564880668 +0000 UTC m=+0.124410719 container create 100774a2396fa743c190db9a6d546fc8835545a22ea20a736c7fdfb778b3dc66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:21:36 np0005603609 systemd[1]: Started libpod-conmon-100774a2396fa743c190db9a6d546fc8835545a22ea20a736c7fdfb778b3dc66.scope.
Jan 31 03:21:36 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:21:36 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1db6486da8c85ebc7592a14a2d938306b4e229b58d1c0d14adfaa202fa9fa682/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:36 np0005603609 podman[281639]: 2026-01-31 08:21:36.682998584 +0000 UTC m=+0.242528665 container init 100774a2396fa743c190db9a6d546fc8835545a22ea20a736c7fdfb778b3dc66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:21:36 np0005603609 podman[281639]: 2026-01-31 08:21:36.688877685 +0000 UTC m=+0.248407736 container start 100774a2396fa743c190db9a6d546fc8835545a22ea20a736c7fdfb778b3dc66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:21:36 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[281654]: [NOTICE]   (281676) : New worker (281678) forked
Jan 31 03:21:36 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[281654]: [NOTICE]   (281676) : Loading success.
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.835 221554 DEBUG nova.compute.manager [req-41f67907-b03a-4e17-909e-b107193dd373 req-2ef55c46-61c5-4f95-992e-f50c4e815ec0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.835 221554 DEBUG oslo_concurrency.lockutils [req-41f67907-b03a-4e17-909e-b107193dd373 req-2ef55c46-61c5-4f95-992e-f50c4e815ec0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.836 221554 DEBUG oslo_concurrency.lockutils [req-41f67907-b03a-4e17-909e-b107193dd373 req-2ef55c46-61c5-4f95-992e-f50c4e815ec0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.836 221554 DEBUG oslo_concurrency.lockutils [req-41f67907-b03a-4e17-909e-b107193dd373 req-2ef55c46-61c5-4f95-992e-f50c4e815ec0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.836 221554 DEBUG nova.compute.manager [req-41f67907-b03a-4e17-909e-b107193dd373 req-2ef55c46-61c5-4f95-992e-f50c4e815ec0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Processing event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.912 221554 DEBUG nova.compute.manager [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.914 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847696.911866, 20cc7040-fd06-49c7-8e68-41cb74e67e9a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.914 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] VM Started (Lifecycle Event)#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.917 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.921 221554 INFO nova.virt.libvirt.driver [-] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Instance spawned successfully.#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.921 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.958 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.962 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.970 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.971 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.972 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.972 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.972 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:36 np0005603609 nova_compute[221550]: 2026-01-31 08:21:36.973 221554 DEBUG nova.virt.libvirt.driver [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:37 np0005603609 nova_compute[221550]: 2026-01-31 08:21:37.009 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:21:37 np0005603609 nova_compute[221550]: 2026-01-31 08:21:37.009 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847696.9122725, 20cc7040-fd06-49c7-8e68-41cb74e67e9a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:21:37 np0005603609 nova_compute[221550]: 2026-01-31 08:21:37.009 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:21:37 np0005603609 nova_compute[221550]: 2026-01-31 08:21:37.049 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:37 np0005603609 nova_compute[221550]: 2026-01-31 08:21:37.053 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847696.9171898, 20cc7040-fd06-49c7-8e68-41cb74e67e9a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:21:37 np0005603609 nova_compute[221550]: 2026-01-31 08:21:37.053 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:21:37 np0005603609 nova_compute[221550]: 2026-01-31 08:21:37.083 221554 INFO nova.compute.manager [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Took 11.00 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:21:37 np0005603609 nova_compute[221550]: 2026-01-31 08:21:37.083 221554 DEBUG nova.compute.manager [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:37 np0005603609 nova_compute[221550]: 2026-01-31 08:21:37.088 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:37 np0005603609 nova_compute[221550]: 2026-01-31 08:21:37.095 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:21:37 np0005603609 nova_compute[221550]: 2026-01-31 08:21:37.139 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:21:37 np0005603609 nova_compute[221550]: 2026-01-31 08:21:37.183 221554 INFO nova.compute.manager [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Took 12.20 seconds to build instance.#033[00m
Jan 31 03:21:37 np0005603609 nova_compute[221550]: 2026-01-31 08:21:37.220 221554 DEBUG oslo_concurrency.lockutils [None req-0abbc4cd-413e-4a0f-812d-efa6aa17f49f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:37 np0005603609 nova_compute[221550]: 2026-01-31 08:21:37.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:21:37 np0005603609 nova_compute[221550]: 2026-01-31 08:21:37.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:21:37 np0005603609 nova_compute[221550]: 2026-01-31 08:21:37.689 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:21:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:21:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:37.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:21:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:37.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:38 np0005603609 nova_compute[221550]: 2026-01-31 08:21:38.055 221554 INFO nova.compute.manager [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Rescuing#033[00m
Jan 31 03:21:38 np0005603609 nova_compute[221550]: 2026-01-31 08:21:38.055 221554 DEBUG oslo_concurrency.lockutils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "refresh_cache-946cb648-0758-4617-bd3a-142804fd70f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:21:38 np0005603609 nova_compute[221550]: 2026-01-31 08:21:38.055 221554 DEBUG oslo_concurrency.lockutils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquired lock "refresh_cache-946cb648-0758-4617-bd3a-142804fd70f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:21:38 np0005603609 nova_compute[221550]: 2026-01-31 08:21:38.055 221554 DEBUG nova.network.neutron [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:21:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:38 np0005603609 nova_compute[221550]: 2026-01-31 08:21:38.500 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:39 np0005603609 nova_compute[221550]: 2026-01-31 08:21:39.050 221554 DEBUG nova.compute.manager [req-8e28d3ff-d70c-4be5-8456-222c8c189a77 req-5e955e7b-b7a3-41bc-a5de-2b36872e059f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:21:39 np0005603609 nova_compute[221550]: 2026-01-31 08:21:39.051 221554 DEBUG oslo_concurrency.lockutils [req-8e28d3ff-d70c-4be5-8456-222c8c189a77 req-5e955e7b-b7a3-41bc-a5de-2b36872e059f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:39 np0005603609 nova_compute[221550]: 2026-01-31 08:21:39.052 221554 DEBUG oslo_concurrency.lockutils [req-8e28d3ff-d70c-4be5-8456-222c8c189a77 req-5e955e7b-b7a3-41bc-a5de-2b36872e059f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:39 np0005603609 nova_compute[221550]: 2026-01-31 08:21:39.052 221554 DEBUG oslo_concurrency.lockutils [req-8e28d3ff-d70c-4be5-8456-222c8c189a77 req-5e955e7b-b7a3-41bc-a5de-2b36872e059f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:39 np0005603609 nova_compute[221550]: 2026-01-31 08:21:39.052 221554 DEBUG nova.compute.manager [req-8e28d3ff-d70c-4be5-8456-222c8c189a77 req-5e955e7b-b7a3-41bc-a5de-2b36872e059f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] No waiting events found dispatching network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:21:39 np0005603609 nova_compute[221550]: 2026-01-31 08:21:39.053 221554 WARNING nova.compute.manager [req-8e28d3ff-d70c-4be5-8456-222c8c189a77 req-5e955e7b-b7a3-41bc-a5de-2b36872e059f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received unexpected event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd for instance with vm_state active and task_state None.#033[00m
Jan 31 03:21:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:21:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:39.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:21:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:39.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:39 np0005603609 nova_compute[221550]: 2026-01-31 08:21:39.983 221554 DEBUG nova.network.neutron [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Updating instance_info_cache with network_info: [{"id": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "address": "fa:16:3e:0d:0a:0c", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f5a0a6-02", "ovs_interfaceid": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:21:40 np0005603609 nova_compute[221550]: 2026-01-31 08:21:40.100 221554 DEBUG oslo_concurrency.lockutils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Releasing lock "refresh_cache-946cb648-0758-4617-bd3a-142804fd70f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:21:40 np0005603609 nova_compute[221550]: 2026-01-31 08:21:40.314 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:40 np0005603609 nova_compute[221550]: 2026-01-31 08:21:40.531 221554 DEBUG nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:21:40 np0005603609 nova_compute[221550]: 2026-01-31 08:21:40.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:21:40 np0005603609 nova_compute[221550]: 2026-01-31 08:21:40.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:21:40 np0005603609 nova_compute[221550]: 2026-01-31 08:21:40.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:21:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:40.963 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:21:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:40.964 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:21:41 np0005603609 nova_compute[221550]: 2026-01-31 08:21:41.002 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:41 np0005603609 nova_compute[221550]: 2026-01-31 08:21:41.240 221554 DEBUG nova.compute.manager [req-401cf2c6-45bb-4253-a09e-ff59fc9dfc13 req-4fdbb120-e86e-4806-8dd8-adea62334419 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received event network-changed-c66c4603-0eab-4f51-ad10-8185e33051dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:21:41 np0005603609 nova_compute[221550]: 2026-01-31 08:21:41.240 221554 DEBUG nova.compute.manager [req-401cf2c6-45bb-4253-a09e-ff59fc9dfc13 req-4fdbb120-e86e-4806-8dd8-adea62334419 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Refreshing instance network info cache due to event network-changed-c66c4603-0eab-4f51-ad10-8185e33051dd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:21:41 np0005603609 nova_compute[221550]: 2026-01-31 08:21:41.241 221554 DEBUG oslo_concurrency.lockutils [req-401cf2c6-45bb-4253-a09e-ff59fc9dfc13 req-4fdbb120-e86e-4806-8dd8-adea62334419 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-20cc7040-fd06-49c7-8e68-41cb74e67e9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:21:41 np0005603609 nova_compute[221550]: 2026-01-31 08:21:41.241 221554 DEBUG oslo_concurrency.lockutils [req-401cf2c6-45bb-4253-a09e-ff59fc9dfc13 req-4fdbb120-e86e-4806-8dd8-adea62334419 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-20cc7040-fd06-49c7-8e68-41cb74e67e9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:21:41 np0005603609 nova_compute[221550]: 2026-01-31 08:21:41.241 221554 DEBUG nova.network.neutron [req-401cf2c6-45bb-4253-a09e-ff59fc9dfc13 req-4fdbb120-e86e-4806-8dd8-adea62334419 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Refreshing network info cache for port c66c4603-0eab-4f51-ad10-8185e33051dd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:21:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:41.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:41.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:42 np0005603609 nova_compute[221550]: 2026-01-31 08:21:42.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:21:42 np0005603609 nova_compute[221550]: 2026-01-31 08:21:42.844 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:42 np0005603609 nova_compute[221550]: 2026-01-31 08:21:42.845 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:42 np0005603609 nova_compute[221550]: 2026-01-31 08:21:42.845 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:42 np0005603609 nova_compute[221550]: 2026-01-31 08:21:42.845 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:21:42 np0005603609 nova_compute[221550]: 2026-01-31 08:21:42.846 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:21:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1191907265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.290 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.503 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.540 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.541 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.547 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.548 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.554 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.554 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.622 221554 DEBUG nova.network.neutron [req-401cf2c6-45bb-4253-a09e-ff59fc9dfc13 req-4fdbb120-e86e-4806-8dd8-adea62334419 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Updated VIF entry in instance network info cache for port c66c4603-0eab-4f51-ad10-8185e33051dd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.623 221554 DEBUG nova.network.neutron [req-401cf2c6-45bb-4253-a09e-ff59fc9dfc13 req-4fdbb120-e86e-4806-8dd8-adea62334419 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Updating instance_info_cache with network_info: [{"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.712 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.713 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3916MB free_disk=20.743247985839844GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.714 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.714 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:43.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.873 221554 DEBUG oslo_concurrency.lockutils [req-401cf2c6-45bb-4253-a09e-ff59fc9dfc13 req-4fdbb120-e86e-4806-8dd8-adea62334419 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-20cc7040-fd06-49c7-8e68-41cb74e67e9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:21:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:21:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:43.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.960 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 38deb482-bd85-4fdf-b2da-2725bffd8f43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.961 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 946cb648-0758-4617-bd3a-142804fd70f7 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.962 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 20cc7040-fd06-49c7-8e68-41cb74e67e9a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.962 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:21:43 np0005603609 nova_compute[221550]: 2026-01-31 08:21:43.963 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:21:44 np0005603609 nova_compute[221550]: 2026-01-31 08:21:44.074 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:21:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2707252371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:21:44 np0005603609 nova_compute[221550]: 2026-01-31 08:21:44.520 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:44 np0005603609 nova_compute[221550]: 2026-01-31 08:21:44.526 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:21:44 np0005603609 nova_compute[221550]: 2026-01-31 08:21:44.585 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:21:44 np0005603609 nova_compute[221550]: 2026-01-31 08:21:44.640 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:21:44 np0005603609 nova_compute[221550]: 2026-01-31 08:21:44.641 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:45 np0005603609 nova_compute[221550]: 2026-01-31 08:21:45.316 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:45 np0005603609 nova_compute[221550]: 2026-01-31 08:21:45.637 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:21:45 np0005603609 nova_compute[221550]: 2026-01-31 08:21:45.637 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:21:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:45.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000046s ======
Jan 31 03:21:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:45.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000046s
Jan 31 03:21:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:47.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:47.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:21:47.966 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:48 np0005603609 nova_compute[221550]: 2026-01-31 08:21:48.551 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:21:49Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:11:71:84 10.100.0.4
Jan 31 03:21:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:21:49Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:71:84 10.100.0.4
Jan 31 03:21:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:49.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:49.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:50 np0005603609 nova_compute[221550]: 2026-01-31 08:21:50.320 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:50 np0005603609 nova_compute[221550]: 2026-01-31 08:21:50.577 221554 DEBUG nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:21:51 np0005603609 nova_compute[221550]: 2026-01-31 08:21:51.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:21:51 np0005603609 nova_compute[221550]: 2026-01-31 08:21:51.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:21:51 np0005603609 nova_compute[221550]: 2026-01-31 08:21:51.686 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:21:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:51.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:51.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:21:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3918858088' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:21:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:21:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3918858088' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:21:53 np0005603609 nova_compute[221550]: 2026-01-31 08:21:53.593 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:21:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:53.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:21:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:53.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:55 np0005603609 nova_compute[221550]: 2026-01-31 08:21:55.322 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:55.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:55.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:57 np0005603609 podman[281888]: 2026-01-31 08:21:57.159772256 +0000 UTC m=+0.044589672 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:21:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:21:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:21:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:21:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:21:57 np0005603609 podman[281887]: 2026-01-31 08:21:57.185629446 +0000 UTC m=+0.071445426 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:21:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:21:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:57.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:21:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:21:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:57.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:21:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:58 np0005603609 nova_compute[221550]: 2026-01-31 08:21:58.647 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:21:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:59.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:21:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:21:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:59.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:00 np0005603609 nova_compute[221550]: 2026-01-31 08:22:00.324 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:01 np0005603609 nova_compute[221550]: 2026-01-31 08:22:01.663 221554 DEBUG nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:22:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:22:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:01.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:22:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:01.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:22:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:22:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:03 np0005603609 nova_compute[221550]: 2026-01-31 08:22:03.703 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:03.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:03.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:05 np0005603609 nova_compute[221550]: 2026-01-31 08:22:05.327 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:05.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:22:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:05.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:22:07.121818) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727121903, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 1662, "num_deletes": 254, "total_data_size": 3458036, "memory_usage": 3504520, "flush_reason": "Manual Compaction"}
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727147748, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 2267714, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60762, "largest_seqno": 62419, "table_properties": {"data_size": 2260799, "index_size": 3922, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14863, "raw_average_key_size": 19, "raw_value_size": 2246497, "raw_average_value_size": 2952, "num_data_blocks": 170, "num_entries": 761, "num_filter_entries": 761, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847611, "oldest_key_time": 1769847611, "file_creation_time": 1769847727, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 26014 microseconds, and 4648 cpu microseconds.
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:22:07.147833) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 2267714 bytes OK
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:22:07.147870) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:22:07.149354) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:22:07.149381) EVENT_LOG_v1 {"time_micros": 1769847727149370, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:22:07.149413) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 3450348, prev total WAL file size 3450348, number of live WAL files 2.
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:22:07.150650) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323531' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(2214KB)], [120(11MB)]
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727150762, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 14767719, "oldest_snapshot_seqno": -1}
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 8697 keys, 13660122 bytes, temperature: kUnknown
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727301344, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 13660122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13600565, "index_size": 36725, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 227387, "raw_average_key_size": 26, "raw_value_size": 13444214, "raw_average_value_size": 1545, "num_data_blocks": 1425, "num_entries": 8697, "num_filter_entries": 8697, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769847727, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:22:07.301590) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 13660122 bytes
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:22:07.305471) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 98.0 rd, 90.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 11.9 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(12.5) write-amplify(6.0) OK, records in: 9226, records dropped: 529 output_compression: NoCompression
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:22:07.305502) EVENT_LOG_v1 {"time_micros": 1769847727305488, "job": 76, "event": "compaction_finished", "compaction_time_micros": 150662, "compaction_time_cpu_micros": 39153, "output_level": 6, "num_output_files": 1, "total_output_size": 13660122, "num_input_records": 9226, "num_output_records": 8697, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727306315, "job": 76, "event": "table_file_deletion", "file_number": 122}
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727308400, "job": 76, "event": "table_file_deletion", "file_number": 120}
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:22:07.150392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:22:07.308593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:22:07.308602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:22:07.308604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:22:07.308606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:22:07 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:22:07.308608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:22:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:07.513 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:07.514 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:07.515 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:07.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:07.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:08 np0005603609 nova_compute[221550]: 2026-01-31 08:22:08.743 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:09.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:09.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:10 np0005603609 nova_compute[221550]: 2026-01-31 08:22:10.329 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:22:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:11.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:22:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:11.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:12 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:12Z|00640|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Jan 31 03:22:12 np0005603609 nova_compute[221550]: 2026-01-31 08:22:12.759 221554 DEBUG nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:22:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:13 np0005603609 nova_compute[221550]: 2026-01-31 08:22:13.745 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:13.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:13.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:13 np0005603609 nova_compute[221550]: 2026-01-31 08:22:13.951 221554 DEBUG oslo_concurrency.lockutils [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:13 np0005603609 nova_compute[221550]: 2026-01-31 08:22:13.951 221554 DEBUG oslo_concurrency.lockutils [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:13 np0005603609 nova_compute[221550]: 2026-01-31 08:22:13.975 221554 DEBUG nova.objects.instance [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'flavor' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.027 221554 DEBUG oslo_concurrency.lockutils [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.356 221554 DEBUG oslo_concurrency.lockutils [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.356 221554 DEBUG oslo_concurrency.lockutils [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.357 221554 INFO nova.compute.manager [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Attaching volume 5cf392c9-4c4c-4680-9529-6ff296b6467e to /dev/vdb#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.619 221554 DEBUG os_brick.utils [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.620 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.638 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.639 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[34021b5a-d542-45d8-bf32-ed3c97372299]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.640 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.650 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.650 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd987ef-c9d4-47e6-bb40-797be679e9a0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.652 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.664 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.664 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[dfa03be2-c2c9-4e29-9d48-32a78b2a3a58]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.665 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[ec501881-85fb-4106-b6b4-373f3b5198cf]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.666 221554 DEBUG oslo_concurrency.processutils [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.695 221554 DEBUG oslo_concurrency.processutils [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.697 221554 DEBUG os_brick.initiator.connectors.lightos [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.697 221554 DEBUG os_brick.initiator.connectors.lightos [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.697 221554 DEBUG os_brick.initiator.connectors.lightos [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.698 221554 DEBUG os_brick.utils [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:22:14 np0005603609 nova_compute[221550]: 2026-01-31 08:22:14.698 221554 DEBUG nova.virt.block_device [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Updating existing volume attachment record: 7fceadad-37e2-4c13-8ac8-9c60e9870397 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:22:15 np0005603609 nova_compute[221550]: 2026-01-31 08:22:15.333 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:22:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/845999303' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:22:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:22:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:15.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:22:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:15.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:15 np0005603609 nova_compute[221550]: 2026-01-31 08:22:15.985 221554 DEBUG nova.objects.instance [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'flavor' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:16 np0005603609 nova_compute[221550]: 2026-01-31 08:22:16.027 221554 DEBUG nova.virt.libvirt.driver [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Attempting to attach volume 5cf392c9-4c4c-4680-9529-6ff296b6467e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 03:22:16 np0005603609 nova_compute[221550]: 2026-01-31 08:22:16.030 221554 DEBUG nova.virt.libvirt.guest [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 03:22:16 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:22:16 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-5cf392c9-4c4c-4680-9529-6ff296b6467e">
Jan 31 03:22:16 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:22:16 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:22:16 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:22:16 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:22:16 np0005603609 nova_compute[221550]:  <auth username="openstack">
Jan 31 03:22:16 np0005603609 nova_compute[221550]:    <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:22:16 np0005603609 nova_compute[221550]:  </auth>
Jan 31 03:22:16 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:22:16 np0005603609 nova_compute[221550]:  <serial>5cf392c9-4c4c-4680-9529-6ff296b6467e</serial>
Jan 31 03:22:16 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:22:16 np0005603609 nova_compute[221550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:22:16 np0005603609 nova_compute[221550]: 2026-01-31 08:22:16.523 221554 DEBUG nova.virt.libvirt.driver [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:22:16 np0005603609 nova_compute[221550]: 2026-01-31 08:22:16.524 221554 DEBUG nova.virt.libvirt.driver [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:22:16 np0005603609 nova_compute[221550]: 2026-01-31 08:22:16.524 221554 DEBUG nova.virt.libvirt.driver [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:22:16 np0005603609 nova_compute[221550]: 2026-01-31 08:22:16.524 221554 DEBUG nova.virt.libvirt.driver [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No VIF found with MAC fa:16:3e:11:71:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:22:16 np0005603609 nova_compute[221550]: 2026-01-31 08:22:16.994 221554 DEBUG oslo_concurrency.lockutils [None req-040aa492-6300-4ffb-bcb4-4fe16ef077fa 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:22:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:17.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:22:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:17.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:18 np0005603609 nova_compute[221550]: 2026-01-31 08:22:18.573 221554 DEBUG oslo_concurrency.lockutils [None req-dd6add3a-6a8b-44d9-ac47-bdfa50479044 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:18 np0005603609 nova_compute[221550]: 2026-01-31 08:22:18.574 221554 DEBUG oslo_concurrency.lockutils [None req-dd6add3a-6a8b-44d9-ac47-bdfa50479044 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:18 np0005603609 nova_compute[221550]: 2026-01-31 08:22:18.574 221554 DEBUG nova.compute.manager [None req-dd6add3a-6a8b-44d9-ac47-bdfa50479044 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:22:18 np0005603609 nova_compute[221550]: 2026-01-31 08:22:18.578 221554 DEBUG nova.compute.manager [None req-dd6add3a-6a8b-44d9-ac47-bdfa50479044 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 31 03:22:18 np0005603609 nova_compute[221550]: 2026-01-31 08:22:18.580 221554 DEBUG nova.objects.instance [None req-dd6add3a-6a8b-44d9-ac47-bdfa50479044 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'flavor' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:18 np0005603609 nova_compute[221550]: 2026-01-31 08:22:18.627 221554 DEBUG nova.virt.libvirt.driver [None req-dd6add3a-6a8b-44d9-ac47-bdfa50479044 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:22:18 np0005603609 nova_compute[221550]: 2026-01-31 08:22:18.749 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:19.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:22:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:19.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:22:20 np0005603609 nova_compute[221550]: 2026-01-31 08:22:20.334 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:20 np0005603609 kernel: tapc66c4603-0e (unregistering): left promiscuous mode
Jan 31 03:22:20 np0005603609 NetworkManager[49064]: <info>  [1769847740.9508] device (tapc66c4603-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:22:20 np0005603609 nova_compute[221550]: 2026-01-31 08:22:20.961 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:20 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:20Z|00641|binding|INFO|Releasing lport c66c4603-0eab-4f51-ad10-8185e33051dd from this chassis (sb_readonly=0)
Jan 31 03:22:20 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:20Z|00642|binding|INFO|Setting lport c66c4603-0eab-4f51-ad10-8185e33051dd down in Southbound
Jan 31 03:22:20 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:20Z|00643|binding|INFO|Removing iface tapc66c4603-0e ovn-installed in OVS
Jan 31 03:22:20 np0005603609 nova_compute[221550]: 2026-01-31 08:22:20.964 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:20 np0005603609 nova_compute[221550]: 2026-01-31 08:22:20.968 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:20.977 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:71:84 10.100.0.4'], port_security=['fa:16:3e:11:71:84 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '20cc7040-fd06-49c7-8e68-41cb74e67e9a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e39ff1d-f815-485f-8f43-698b31333bba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fdf18f1faf4846e2a6e2eab4ac2aec02', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2dffb7ca-42ef-40e9-a7b7-090e002450c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=917c1732-02f7-4e55-ac45-7e04e3147221, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=c66c4603-0eab-4f51-ad10-8185e33051dd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:22:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:20.979 140058 INFO neutron.agent.ovn.metadata.agent [-] Port c66c4603-0eab-4f51-ad10-8185e33051dd in datapath 1e39ff1d-f815-485f-8f43-698b31333bba unbound from our chassis#033[00m
Jan 31 03:22:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:20.980 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1e39ff1d-f815-485f-8f43-698b31333bba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:22:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:20.982 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[36a3e0cc-c253-4444-a82f-dd3e548bf3fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:20.982 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba namespace which is not needed anymore#033[00m
Jan 31 03:22:21 np0005603609 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000096.scope: Deactivated successfully.
Jan 31 03:22:21 np0005603609 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000096.scope: Consumed 14.857s CPU time.
Jan 31 03:22:21 np0005603609 systemd-machined[190912]: Machine qemu-77-instance-00000096 terminated.
Jan 31 03:22:21 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[281654]: [NOTICE]   (281676) : haproxy version is 2.8.14-c23fe91
Jan 31 03:22:21 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[281654]: [NOTICE]   (281676) : path to executable is /usr/sbin/haproxy
Jan 31 03:22:21 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[281654]: [WARNING]  (281676) : Exiting Master process...
Jan 31 03:22:21 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[281654]: [ALERT]    (281676) : Current worker (281678) exited with code 143 (Terminated)
Jan 31 03:22:21 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[281654]: [WARNING]  (281676) : All workers exited. Exiting... (0)
Jan 31 03:22:21 np0005603609 systemd[1]: libpod-100774a2396fa743c190db9a6d546fc8835545a22ea20a736c7fdfb778b3dc66.scope: Deactivated successfully.
Jan 31 03:22:21 np0005603609 podman[282038]: 2026-01-31 08:22:21.125871679 +0000 UTC m=+0.051848386 container died 100774a2396fa743c190db9a6d546fc8835545a22ea20a736c7fdfb778b3dc66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:22:21 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-100774a2396fa743c190db9a6d546fc8835545a22ea20a736c7fdfb778b3dc66-userdata-shm.mount: Deactivated successfully.
Jan 31 03:22:21 np0005603609 systemd[1]: var-lib-containers-storage-overlay-1db6486da8c85ebc7592a14a2d938306b4e229b58d1c0d14adfaa202fa9fa682-merged.mount: Deactivated successfully.
Jan 31 03:22:21 np0005603609 podman[282038]: 2026-01-31 08:22:21.170931331 +0000 UTC m=+0.096908048 container cleanup 100774a2396fa743c190db9a6d546fc8835545a22ea20a736c7fdfb778b3dc66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:22:21 np0005603609 systemd[1]: libpod-conmon-100774a2396fa743c190db9a6d546fc8835545a22ea20a736c7fdfb778b3dc66.scope: Deactivated successfully.
Jan 31 03:22:21 np0005603609 podman[282072]: 2026-01-31 08:22:21.252454488 +0000 UTC m=+0.061658771 container remove 100774a2396fa743c190db9a6d546fc8835545a22ea20a736c7fdfb778b3dc66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:22:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:21.258 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a4f2f8d6-a9f4-4c22-ae52-14ea136938f4]: (4, ('Sat Jan 31 08:22:21 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba (100774a2396fa743c190db9a6d546fc8835545a22ea20a736c7fdfb778b3dc66)\n100774a2396fa743c190db9a6d546fc8835545a22ea20a736c7fdfb778b3dc66\nSat Jan 31 08:22:21 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba (100774a2396fa743c190db9a6d546fc8835545a22ea20a736c7fdfb778b3dc66)\n100774a2396fa743c190db9a6d546fc8835545a22ea20a736c7fdfb778b3dc66\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:21.260 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[76acfb12-5031-45d7-acf1-5e39524b18d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:21.261 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e39ff1d-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:21 np0005603609 nova_compute[221550]: 2026-01-31 08:22:21.309 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:21 np0005603609 kernel: tap1e39ff1d-f0: left promiscuous mode
Jan 31 03:22:21 np0005603609 nova_compute[221550]: 2026-01-31 08:22:21.317 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:21.320 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[52b4a90c-a07d-43b9-8f68-d05f96de8207]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:21.331 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[43f3ceca-a8bd-472f-978f-69ffd0b5d6ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:21.332 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cb4257da-7ffe-4ee6-9f2c-4b9c0c13a0a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:21 np0005603609 nova_compute[221550]: 2026-01-31 08:22:21.341 221554 DEBUG nova.compute.manager [req-a10f8cd3-a98b-49b4-b82d-dedd0c7aeb87 req-ec76c1d3-8f39-42d8-b9f4-ab16e8961c84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received event network-vif-unplugged-c66c4603-0eab-4f51-ad10-8185e33051dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:21 np0005603609 nova_compute[221550]: 2026-01-31 08:22:21.341 221554 DEBUG oslo_concurrency.lockutils [req-a10f8cd3-a98b-49b4-b82d-dedd0c7aeb87 req-ec76c1d3-8f39-42d8-b9f4-ab16e8961c84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:21 np0005603609 nova_compute[221550]: 2026-01-31 08:22:21.341 221554 DEBUG oslo_concurrency.lockutils [req-a10f8cd3-a98b-49b4-b82d-dedd0c7aeb87 req-ec76c1d3-8f39-42d8-b9f4-ab16e8961c84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:21 np0005603609 nova_compute[221550]: 2026-01-31 08:22:21.341 221554 DEBUG oslo_concurrency.lockutils [req-a10f8cd3-a98b-49b4-b82d-dedd0c7aeb87 req-ec76c1d3-8f39-42d8-b9f4-ab16e8961c84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:21 np0005603609 nova_compute[221550]: 2026-01-31 08:22:21.342 221554 DEBUG nova.compute.manager [req-a10f8cd3-a98b-49b4-b82d-dedd0c7aeb87 req-ec76c1d3-8f39-42d8-b9f4-ab16e8961c84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] No waiting events found dispatching network-vif-unplugged-c66c4603-0eab-4f51-ad10-8185e33051dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:21 np0005603609 nova_compute[221550]: 2026-01-31 08:22:21.342 221554 WARNING nova.compute.manager [req-a10f8cd3-a98b-49b4-b82d-dedd0c7aeb87 req-ec76c1d3-8f39-42d8-b9f4-ab16e8961c84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received unexpected event network-vif-unplugged-c66c4603-0eab-4f51-ad10-8185e33051dd for instance with vm_state active and task_state powering-off.#033[00m
Jan 31 03:22:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:21.343 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5626b0-4b02-4734-957e-afbcbaf3cdfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 793569, 'reachable_time': 33859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282099, 'error': None, 'target': 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:21 np0005603609 systemd[1]: run-netns-ovnmeta\x2d1e39ff1d\x2df815\x2d485f\x2d8f43\x2d698b31333bba.mount: Deactivated successfully.
Jan 31 03:22:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:21.347 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:22:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:21.347 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[a858e0ad-f635-4930-88d6-cedd99ac53ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:21 np0005603609 nova_compute[221550]: 2026-01-31 08:22:21.645 221554 INFO nova.virt.libvirt.driver [None req-dd6add3a-6a8b-44d9-ac47-bdfa50479044 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:22:21 np0005603609 nova_compute[221550]: 2026-01-31 08:22:21.653 221554 INFO nova.virt.libvirt.driver [-] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Instance destroyed successfully.#033[00m
Jan 31 03:22:21 np0005603609 nova_compute[221550]: 2026-01-31 08:22:21.654 221554 DEBUG nova.objects.instance [None req-dd6add3a-6a8b-44d9-ac47-bdfa50479044 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'numa_topology' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:21.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:21 np0005603609 nova_compute[221550]: 2026-01-31 08:22:21.884 221554 DEBUG nova.compute.manager [None req-dd6add3a-6a8b-44d9-ac47-bdfa50479044 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:22:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:21.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:22 np0005603609 nova_compute[221550]: 2026-01-31 08:22:22.012 221554 DEBUG oslo_concurrency.lockutils [None req-dd6add3a-6a8b-44d9-ac47-bdfa50479044 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:23 np0005603609 nova_compute[221550]: 2026-01-31 08:22:23.801 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:23 np0005603609 nova_compute[221550]: 2026-01-31 08:22:23.807 221554 DEBUG nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:22:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:23.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:23 np0005603609 nova_compute[221550]: 2026-01-31 08:22:23.923 221554 DEBUG nova.compute.manager [req-b1f83951-e392-4bee-9290-b7a1f30be6eb req-5c13b03a-d48a-49ae-a9f8-f198a2bcb2f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:23 np0005603609 nova_compute[221550]: 2026-01-31 08:22:23.923 221554 DEBUG oslo_concurrency.lockutils [req-b1f83951-e392-4bee-9290-b7a1f30be6eb req-5c13b03a-d48a-49ae-a9f8-f198a2bcb2f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:23 np0005603609 nova_compute[221550]: 2026-01-31 08:22:23.924 221554 DEBUG oslo_concurrency.lockutils [req-b1f83951-e392-4bee-9290-b7a1f30be6eb req-5c13b03a-d48a-49ae-a9f8-f198a2bcb2f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:23 np0005603609 nova_compute[221550]: 2026-01-31 08:22:23.924 221554 DEBUG oslo_concurrency.lockutils [req-b1f83951-e392-4bee-9290-b7a1f30be6eb req-5c13b03a-d48a-49ae-a9f8-f198a2bcb2f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:23 np0005603609 nova_compute[221550]: 2026-01-31 08:22:23.925 221554 DEBUG nova.compute.manager [req-b1f83951-e392-4bee-9290-b7a1f30be6eb req-5c13b03a-d48a-49ae-a9f8-f198a2bcb2f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] No waiting events found dispatching network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:23 np0005603609 nova_compute[221550]: 2026-01-31 08:22:23.925 221554 WARNING nova.compute.manager [req-b1f83951-e392-4bee-9290-b7a1f30be6eb req-5c13b03a-d48a-49ae-a9f8-f198a2bcb2f5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received unexpected event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd for instance with vm_state stopped and task_state None.#033[00m
Jan 31 03:22:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:22:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:23.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:22:24 np0005603609 nova_compute[221550]: 2026-01-31 08:22:24.136 221554 DEBUG nova.objects.instance [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'flavor' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:24 np0005603609 nova_compute[221550]: 2026-01-31 08:22:24.202 221554 DEBUG oslo_concurrency.lockutils [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "refresh_cache-20cc7040-fd06-49c7-8e68-41cb74e67e9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:22:24 np0005603609 nova_compute[221550]: 2026-01-31 08:22:24.203 221554 DEBUG oslo_concurrency.lockutils [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquired lock "refresh_cache-20cc7040-fd06-49c7-8e68-41cb74e67e9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:22:24 np0005603609 nova_compute[221550]: 2026-01-31 08:22:24.203 221554 DEBUG nova.network.neutron [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:22:24 np0005603609 nova_compute[221550]: 2026-01-31 08:22:24.203 221554 DEBUG nova.objects.instance [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'info_cache' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:25 np0005603609 nova_compute[221550]: 2026-01-31 08:22:25.336 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:25 np0005603609 nova_compute[221550]: 2026-01-31 08:22:25.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:25 np0005603609 nova_compute[221550]: 2026-01-31 08:22:25.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:25.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:25.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:22:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:27.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:22:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:27.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:28 np0005603609 podman[282101]: 2026-01-31 08:22:28.185889393 +0000 UTC m=+0.058754422 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:22:28 np0005603609 podman[282100]: 2026-01-31 08:22:28.221105448 +0000 UTC m=+0.098041345 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 03:22:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:28 np0005603609 nova_compute[221550]: 2026-01-31 08:22:28.851 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:29.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:29.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:30 np0005603609 nova_compute[221550]: 2026-01-31 08:22:30.339 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:22:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:31.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:22:31 np0005603609 nova_compute[221550]: 2026-01-31 08:22:31.876 221554 DEBUG nova.network.neutron [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Updating instance_info_cache with network_info: [{"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:22:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:22:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:31.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.303 221554 DEBUG oslo_concurrency.lockutils [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Releasing lock "refresh_cache-20cc7040-fd06-49c7-8e68-41cb74e67e9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.394 221554 INFO nova.virt.libvirt.driver [-] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Instance destroyed successfully.#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.396 221554 DEBUG nova.objects.instance [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'numa_topology' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.481 221554 DEBUG nova.objects.instance [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'resources' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.543 221554 DEBUG nova.virt.libvirt.vif [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:21:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-504299522',display_name='tempest-AttachVolumeTestJSON-server-504299522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-504299522',id=150,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBu/9yb3AwKfL+ijJ3Xo9JzrwBYsej/AsT1y29vhp9sWMO5Td0RAhZoszueDKQGZWIcDlz6V7Rc2fL6hsFrBntj+ecoex7fusa36kcOGL0EN/HOgO9QzUew3k1clrSe20A==',key_name='tempest-keypair-2098733157',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:21:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='fdf18f1faf4846e2a6e2eab4ac2aec02',ramdisk_id='',reservation_id='r-n08gz270',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-1821521720',owner_user_name='tempest-AttachVolumeTestJSON-1821521720-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:22:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3ada90dc4b77478cb4b93c63409d8537',uuid=20cc7040-fd06-49c7-8e68-41cb74e67e9a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.544 221554 DEBUG nova.network.os_vif_util [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converting VIF {"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.544 221554 DEBUG nova.network.os_vif_util [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.545 221554 DEBUG os_vif [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.547 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.547 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc66c4603-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.584 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.586 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.590 221554 INFO os_vif [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e')#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.599 221554 DEBUG nova.virt.libvirt.driver [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Start _get_guest_xml network_info=[{"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '7fceadad-37e2-4c13-8ac8-9c60e9870397', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5cf392c9-4c4c-4680-9529-6ff296b6467e', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5cf392c9-4c4c-4680-9529-6ff296b6467e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '20cc7040-fd06-49c7-8e68-41cb74e67e9a', 'attached_at': '', 'detached_at': '', 'volume_id': '5cf392c9-4c4c-4680-9529-6ff296b6467e', 'serial': '5cf392c9-4c4c-4680-9529-6ff296b6467e'}, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'guest_format': None, 'mount_device': '/dev/vdb', 'boot_index': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.602 221554 WARNING nova.virt.libvirt.driver [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.615 221554 DEBUG nova.virt.libvirt.host [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.615 221554 DEBUG nova.virt.libvirt.host [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.619 221554 DEBUG nova.virt.libvirt.host [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.619 221554 DEBUG nova.virt.libvirt.host [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.620 221554 DEBUG nova.virt.libvirt.driver [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.620 221554 DEBUG nova.virt.hardware [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.621 221554 DEBUG nova.virt.hardware [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.621 221554 DEBUG nova.virt.hardware [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.621 221554 DEBUG nova.virt.hardware [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.621 221554 DEBUG nova.virt.hardware [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.621 221554 DEBUG nova.virt.hardware [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.622 221554 DEBUG nova.virt.hardware [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.622 221554 DEBUG nova.virt.hardware [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.622 221554 DEBUG nova.virt.hardware [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.623 221554 DEBUG nova.virt.hardware [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.623 221554 DEBUG nova.virt.hardware [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.623 221554 DEBUG nova.objects.instance [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:22:32 np0005603609 nova_compute[221550]: 2026-01-31 08:22:32.750 221554 DEBUG oslo_concurrency.processutils [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:22:33 np0005603609 nova_compute[221550]: 2026-01-31 08:22:33.276 221554 DEBUG oslo_concurrency.processutils [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:22:33 np0005603609 nova_compute[221550]: 2026-01-31 08:22:33.319 221554 DEBUG oslo_concurrency.processutils [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:22:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:22:33 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2068840843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:22:33 np0005603609 nova_compute[221550]: 2026-01-31 08:22:33.777 221554 DEBUG oslo_concurrency.processutils [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:22:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:33.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:33.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:33 np0005603609 nova_compute[221550]: 2026-01-31 08:22:33.974 221554 DEBUG nova.virt.libvirt.vif [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:21:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-504299522',display_name='tempest-AttachVolumeTestJSON-server-504299522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-504299522',id=150,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBu/9yb3AwKfL+ijJ3Xo9JzrwBYsej/AsT1y29vhp9sWMO5Td0RAhZoszueDKQGZWIcDlz6V7Rc2fL6hsFrBntj+ecoex7fusa36kcOGL0EN/HOgO9QzUew3k1clrSe20A==',key_name='tempest-keypair-2098733157',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:21:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='fdf18f1faf4846e2a6e2eab4ac2aec02',ramdisk_id='',reservation_id='r-n08gz270',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-1821521720',owner_user_name='tempest-AttachVolumeTestJSON-1821521720-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:22:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3ada90dc4b77478cb4b93c63409d8537',uuid=20cc7040-fd06-49c7-8e68-41cb74e67e9a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:22:33 np0005603609 nova_compute[221550]: 2026-01-31 08:22:33.974 221554 DEBUG nova.network.os_vif_util [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converting VIF {"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 03:22:33 np0005603609 nova_compute[221550]: 2026-01-31 08:22:33.975 221554 DEBUG nova.network.os_vif_util [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 03:22:33 np0005603609 nova_compute[221550]: 2026-01-31 08:22:33.976 221554 DEBUG nova.objects.instance [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.022 221554 DEBUG nova.virt.libvirt.driver [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  <uuid>20cc7040-fd06-49c7-8e68-41cb74e67e9a</uuid>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  <name>instance-00000096</name>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <nova:name>tempest-AttachVolumeTestJSON-server-504299522</nova:name>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:22:32</nova:creationTime>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <nova:user uuid="3ada90dc4b77478cb4b93c63409d8537">tempest-AttachVolumeTestJSON-1821521720-project-member</nova:user>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <nova:project uuid="fdf18f1faf4846e2a6e2eab4ac2aec02">tempest-AttachVolumeTestJSON-1821521720</nova:project>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <nova:port uuid="c66c4603-0eab-4f51-ad10-8185e33051dd">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <entry name="serial">20cc7040-fd06-49c7-8e68-41cb74e67e9a</entry>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <entry name="uuid">20cc7040-fd06-49c7-8e68-41cb74e67e9a</entry>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk.config">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="volumes/volume-5cf392c9-4c4c-4680-9529-6ff296b6467e">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <target dev="vdb" bus="virtio"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <serial>5cf392c9-4c4c-4680-9529-6ff296b6467e</serial>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:11:71:84"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <target dev="tapc66c4603-0e"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/20cc7040-fd06-49c7-8e68-41cb74e67e9a/console.log" append="off"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <input type="keyboard" bus="usb"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:22:34 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:22:34 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:22:34 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:22:34 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.024 221554 DEBUG nova.virt.libvirt.driver [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.024 221554 DEBUG nova.virt.libvirt.driver [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.024 221554 DEBUG nova.virt.libvirt.driver [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.025 221554 DEBUG nova.virt.libvirt.vif [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:21:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-504299522',display_name='tempest-AttachVolumeTestJSON-server-504299522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-504299522',id=150,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBu/9yb3AwKfL+ijJ3Xo9JzrwBYsej/AsT1y29vhp9sWMO5Td0RAhZoszueDKQGZWIcDlz6V7Rc2fL6hsFrBntj+ecoex7fusa36kcOGL0EN/HOgO9QzUew3k1clrSe20A==',key_name='tempest-keypair-2098733157',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:21:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='fdf18f1faf4846e2a6e2eab4ac2aec02',ramdisk_id='',reservation_id='r-n08gz270',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-1821521720',owner_user_name='tempest-AttachVolumeTestJSON-1821521720-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:22:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3ada90dc4b77478cb4b93c63409d8537',uuid=20cc7040-fd06-49c7-8e68-41cb74e67e9a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.025 221554 DEBUG nova.network.os_vif_util [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converting VIF {"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.025 221554 DEBUG nova.network.os_vif_util [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.026 221554 DEBUG os_vif [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.026 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.027 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.027 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.030 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.031 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc66c4603-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.031 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc66c4603-0e, col_values=(('external_ids', {'iface-id': 'c66c4603-0eab-4f51-ad10-8185e33051dd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:71:84', 'vm-uuid': '20cc7040-fd06-49c7-8e68-41cb74e67e9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.032 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603609 NetworkManager[49064]: <info>  [1769847754.0344] manager: (tapc66c4603-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.035 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.040 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.041 221554 INFO os_vif [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e')#033[00m
Jan 31 03:22:34 np0005603609 kernel: tapc66c4603-0e: entered promiscuous mode
Jan 31 03:22:34 np0005603609 NetworkManager[49064]: <info>  [1769847754.1231] manager: (tapc66c4603-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/304)
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.126 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:34Z|00644|binding|INFO|Claiming lport c66c4603-0eab-4f51-ad10-8185e33051dd for this chassis.
Jan 31 03:22:34 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:34Z|00645|binding|INFO|c66c4603-0eab-4f51-ad10-8185e33051dd: Claiming fa:16:3e:11:71:84 10.100.0.4
Jan 31 03:22:34 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:34Z|00646|binding|INFO|Setting lport c66c4603-0eab-4f51-ad10-8185e33051dd ovn-installed in OVS
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.139 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.143 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603609 systemd-udevd[282218]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:22:34 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:34Z|00647|binding|INFO|Setting lport c66c4603-0eab-4f51-ad10-8185e33051dd up in Southbound
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.152 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:71:84 10.100.0.4'], port_security=['fa:16:3e:11:71:84 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '20cc7040-fd06-49c7-8e68-41cb74e67e9a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e39ff1d-f815-485f-8f43-698b31333bba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fdf18f1faf4846e2a6e2eab4ac2aec02', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2dffb7ca-42ef-40e9-a7b7-090e002450c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=917c1732-02f7-4e55-ac45-7e04e3147221, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=c66c4603-0eab-4f51-ad10-8185e33051dd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.154 140058 INFO neutron.agent.ovn.metadata.agent [-] Port c66c4603-0eab-4f51-ad10-8185e33051dd in datapath 1e39ff1d-f815-485f-8f43-698b31333bba bound to our chassis#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.158 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e39ff1d-f815-485f-8f43-698b31333bba#033[00m
Jan 31 03:22:34 np0005603609 NetworkManager[49064]: <info>  [1769847754.1640] device (tapc66c4603-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:22:34 np0005603609 NetworkManager[49064]: <info>  [1769847754.1648] device (tapc66c4603-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:22:34 np0005603609 systemd-machined[190912]: New machine qemu-78-instance-00000096.
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.169 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[627d625f-40c2-4576-b280-ea0c08fc550c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.170 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1e39ff1d-f1 in ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.172 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1e39ff1d-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.172 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f9bfc83f-7f57-4fe8-bc9c-73f8063a566c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.174 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[66f46541-fb73-4718-ba93-83a82d434319]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603609 systemd[1]: Started Virtual Machine qemu-78-instance-00000096.
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.186 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[27b2d3a9-6b1a-4e9f-94af-223522251435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.198 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[77337cec-3990-47b1-9e74-fa565766fa29]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.222 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[05d4ba7d-92d2-4a0a-8392-7a9d708c0e0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603609 NetworkManager[49064]: <info>  [1769847754.2279] manager: (tap1e39ff1d-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/305)
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.226 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fae48c57-285e-40ed-b3d9-b2e735724d89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603609 systemd-udevd[282222]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.257 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[4c7705dc-8349-43fe-a9cb-f94e2c7e169c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.261 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[22673c1f-13e3-4bca-b2bb-e0a131d2963a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603609 NetworkManager[49064]: <info>  [1769847754.2796] device (tap1e39ff1d-f0): carrier: link connected
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.283 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6bbe40-2e63-4c8c-a0ae-ca5ed2a6e85a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.303 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[85e4f3b7-d23b-4422-9fd3-1c6c04dc123a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e39ff1d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:45:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799398, 'reachable_time': 31680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282253, 'error': None, 'target': 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.317 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b09070d8-5f77-47fa-a211-0a9af4776aca]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:4516'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 799398, 'tstamp': 799398}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282254, 'error': None, 'target': 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.335 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d1779e2a-1950-43c1-bfab-8cad9316a53c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e39ff1d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:45:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799398, 'reachable_time': 31680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282255, 'error': None, 'target': 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.362 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[87c524a7-b7d0-400d-9046-d64ddc4bbae0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.414 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3eeffe35-0f41-423b-9321-f7c9c379fe8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.415 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e39ff1d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.416 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.416 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e39ff1d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.418 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603609 kernel: tap1e39ff1d-f0: entered promiscuous mode
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.422 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e39ff1d-f0, col_values=(('external_ids', {'iface-id': '98f103d6-a5bc-4680-a4d5-8ced102dd381'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.421 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.423 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:34Z|00648|binding|INFO|Releasing lport 98f103d6-a5bc-4680-a4d5-8ced102dd381 from this chassis (sb_readonly=0)
Jan 31 03:22:34 np0005603609 NetworkManager[49064]: <info>  [1769847754.4243] manager: (tap1e39ff1d-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.434 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.435 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e39ff1d-f815-485f-8f43-698b31333bba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e39ff1d-f815-485f-8f43-698b31333bba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.436 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e4025e-34e5-4028-b10b-9e39d0bb031a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.437 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-1e39ff1d-f815-485f-8f43-698b31333bba
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/1e39ff1d-f815-485f-8f43-698b31333bba.pid.haproxy
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 1e39ff1d-f815-485f-8f43-698b31333bba
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.437 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'env', 'PROCESS_TAG=haproxy-1e39ff1d-f815-485f-8f43-698b31333bba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1e39ff1d-f815-485f-8f43-698b31333bba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.550 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.552 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.688 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 20cc7040-fd06-49c7-8e68-41cb74e67e9a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.689 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847754.6884372, 20cc7040-fd06-49c7-8e68-41cb74e67e9a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.689 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.693 221554 DEBUG nova.compute.manager [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.698 221554 INFO nova.virt.libvirt.driver [-] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Instance rebooted successfully.#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.699 221554 DEBUG nova.compute.manager [None req-95b5099b-7416-4d63-b593-9a912416c0e1 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.723 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.726 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.730 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:22:34 np0005603609 podman[282347]: 2026-01-31 08:22:34.81788415 +0000 UTC m=+0.054191263 container create 092b6039b1926bbcad4d225a5f80693e8c2f7ebe05b688019025d3818bc2a8ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.831 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.833 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847754.6900134, 20cc7040-fd06-49c7-8e68-41cb74e67e9a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.833 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] VM Started (Lifecycle Event)#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.855 221554 DEBUG nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:22:34 np0005603609 systemd[1]: Started libpod-conmon-092b6039b1926bbcad4d225a5f80693e8c2f7ebe05b688019025d3818bc2a8ca.scope.
Jan 31 03:22:34 np0005603609 podman[282347]: 2026-01-31 08:22:34.786940677 +0000 UTC m=+0.023247840 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:22:34 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:22:34 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ddb7c9e2a5d5085024f10ced2225b0ad9e26478d2c0d03207f3359bdf1ed446/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:34 np0005603609 podman[282347]: 2026-01-31 08:22:34.922528403 +0000 UTC m=+0.158835596 container init 092b6039b1926bbcad4d225a5f80693e8c2f7ebe05b688019025d3818bc2a8ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:22:34 np0005603609 podman[282347]: 2026-01-31 08:22:34.928778982 +0000 UTC m=+0.165086135 container start 092b6039b1926bbcad4d225a5f80693e8c2f7ebe05b688019025d3818bc2a8ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:22:34 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[282363]: [NOTICE]   (282367) : New worker (282369) forked
Jan 31 03:22:34 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[282363]: [NOTICE]   (282367) : Loading success.
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.959 221554 DEBUG nova.compute.manager [req-e503d926-3f9c-4272-8561-502d7dd714e2 req-4077b22f-6575-4d7e-89a5-cc5a390b6472 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.960 221554 DEBUG oslo_concurrency.lockutils [req-e503d926-3f9c-4272-8561-502d7dd714e2 req-4077b22f-6575-4d7e-89a5-cc5a390b6472 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.960 221554 DEBUG oslo_concurrency.lockutils [req-e503d926-3f9c-4272-8561-502d7dd714e2 req-4077b22f-6575-4d7e-89a5-cc5a390b6472 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.960 221554 DEBUG oslo_concurrency.lockutils [req-e503d926-3f9c-4272-8561-502d7dd714e2 req-4077b22f-6575-4d7e-89a5-cc5a390b6472 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.961 221554 DEBUG nova.compute.manager [req-e503d926-3f9c-4272-8561-502d7dd714e2 req-4077b22f-6575-4d7e-89a5-cc5a390b6472 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] No waiting events found dispatching network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.961 221554 WARNING nova.compute.manager [req-e503d926-3f9c-4272-8561-502d7dd714e2 req-4077b22f-6575-4d7e-89a5-cc5a390b6472 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received unexpected event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd for instance with vm_state stopped and task_state powering-on.#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.987 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:22:34 np0005603609 nova_compute[221550]: 2026-01-31 08:22:34.993 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:22:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:34.998 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:22:35 np0005603609 nova_compute[221550]: 2026-01-31 08:22:35.341 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:35.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:35.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:36 np0005603609 nova_compute[221550]: 2026-01-31 08:22:36.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:36 np0005603609 ceph-mgr[82033]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 03:22:37 np0005603609 nova_compute[221550]: 2026-01-31 08:22:37.114 221554 DEBUG nova.compute.manager [req-e7deeb06-d6f0-4f64-8559-55fe29dd3266 req-867bd23b-1f32-4e58-83ff-1a53d965755b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:37 np0005603609 nova_compute[221550]: 2026-01-31 08:22:37.116 221554 DEBUG oslo_concurrency.lockutils [req-e7deeb06-d6f0-4f64-8559-55fe29dd3266 req-867bd23b-1f32-4e58-83ff-1a53d965755b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:37 np0005603609 nova_compute[221550]: 2026-01-31 08:22:37.116 221554 DEBUG oslo_concurrency.lockutils [req-e7deeb06-d6f0-4f64-8559-55fe29dd3266 req-867bd23b-1f32-4e58-83ff-1a53d965755b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:37 np0005603609 nova_compute[221550]: 2026-01-31 08:22:37.117 221554 DEBUG oslo_concurrency.lockutils [req-e7deeb06-d6f0-4f64-8559-55fe29dd3266 req-867bd23b-1f32-4e58-83ff-1a53d965755b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:37 np0005603609 nova_compute[221550]: 2026-01-31 08:22:37.117 221554 DEBUG nova.compute.manager [req-e7deeb06-d6f0-4f64-8559-55fe29dd3266 req-867bd23b-1f32-4e58-83ff-1a53d965755b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] No waiting events found dispatching network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:37 np0005603609 nova_compute[221550]: 2026-01-31 08:22:37.117 221554 WARNING nova.compute.manager [req-e7deeb06-d6f0-4f64-8559-55fe29dd3266 req-867bd23b-1f32-4e58-83ff-1a53d965755b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received unexpected event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd for instance with vm_state active and task_state None.#033[00m
Jan 31 03:22:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:37.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:37.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:38 np0005603609 nova_compute[221550]: 2026-01-31 08:22:38.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:38 np0005603609 nova_compute[221550]: 2026-01-31 08:22:38.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:22:38 np0005603609 nova_compute[221550]: 2026-01-31 08:22:38.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:22:39 np0005603609 nova_compute[221550]: 2026-01-31 08:22:39.035 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:39.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:39.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:40 np0005603609 nova_compute[221550]: 2026-01-31 08:22:40.343 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:40 np0005603609 nova_compute[221550]: 2026-01-31 08:22:40.878 221554 INFO nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Instance failed to shutdown in 60 seconds.#033[00m
Jan 31 03:22:40 np0005603609 kernel: tap36f5a0a6-02 (unregistering): left promiscuous mode
Jan 31 03:22:40 np0005603609 NetworkManager[49064]: <info>  [1769847760.9215] device (tap36f5a0a6-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:22:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:40Z|00649|binding|INFO|Releasing lport 36f5a0a6-029b-4491-8d74-e44ca0e59f7d from this chassis (sb_readonly=0)
Jan 31 03:22:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:40Z|00650|binding|INFO|Setting lport 36f5a0a6-029b-4491-8d74-e44ca0e59f7d down in Southbound
Jan 31 03:22:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:40Z|00651|binding|INFO|Removing iface tap36f5a0a6-02 ovn-installed in OVS
Jan 31 03:22:40 np0005603609 nova_compute[221550]: 2026-01-31 08:22:40.931 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:40 np0005603609 nova_compute[221550]: 2026-01-31 08:22:40.934 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:40 np0005603609 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000095.scope: Deactivated successfully.
Jan 31 03:22:40 np0005603609 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000095.scope: Consumed 1.480s CPU time.
Jan 31 03:22:40 np0005603609 systemd-machined[190912]: Machine qemu-76-instance-00000095 terminated.
Jan 31 03:22:41 np0005603609 nova_compute[221550]: 2026-01-31 08:22:41.687 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:22:41 np0005603609 nova_compute[221550]: 2026-01-31 08:22:41.687 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:22:41 np0005603609 nova_compute[221550]: 2026-01-31 08:22:41.687 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:22:41 np0005603609 nova_compute[221550]: 2026-01-31 08:22:41.688 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:41 np0005603609 nova_compute[221550]: 2026-01-31 08:22:41.690 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:41.691 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:0a:0c 10.100.0.8'], port_security=['fa:16:3e:0d:0a:0c 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '946cb648-0758-4617-bd3a-142804fd70f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c90ea7f1be5f484bb873548236fadc00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '952b4f08-f5a7-4fc0-ae2c-267f2ba857a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42261fad-d2a1-4da1-823a-75e271c17223, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=36f5a0a6-029b-4491-8d74-e44ca0e59f7d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:22:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:41.694 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 36f5a0a6-029b-4491-8d74-e44ca0e59f7d in datapath 936cead9-bc2f-4c2d-8b4c-6079d2159263 unbound from our chassis#033[00m
Jan 31 03:22:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:41.697 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 936cead9-bc2f-4c2d-8b4c-6079d2159263, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:22:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:41.699 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[14706117-cf4f-4c58-b573-1ebb0663e3a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:41.699 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 namespace which is not needed anymore#033[00m
Jan 31 03:22:41 np0005603609 nova_compute[221550]: 2026-01-31 08:22:41.712 221554 INFO nova.virt.libvirt.driver [-] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Instance destroyed successfully.#033[00m
Jan 31 03:22:41 np0005603609 nova_compute[221550]: 2026-01-31 08:22:41.713 221554 DEBUG nova.objects.instance [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'numa_topology' on Instance uuid 946cb648-0758-4617-bd3a-142804fd70f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:41 np0005603609 nova_compute[221550]: 2026-01-31 08:22:41.756 221554 INFO nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Attempting a stable device rescue#033[00m
Jan 31 03:22:41 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[281502]: [NOTICE]   (281506) : haproxy version is 2.8.14-c23fe91
Jan 31 03:22:41 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[281502]: [NOTICE]   (281506) : path to executable is /usr/sbin/haproxy
Jan 31 03:22:41 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[281502]: [WARNING]  (281506) : Exiting Master process...
Jan 31 03:22:41 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[281502]: [ALERT]    (281506) : Current worker (281508) exited with code 143 (Terminated)
Jan 31 03:22:41 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[281502]: [WARNING]  (281506) : All workers exited. Exiting... (0)
Jan 31 03:22:41 np0005603609 systemd[1]: libpod-5f9b6d7cd90a82d764a59cc5bd843ec386e9f1eb91065810b64dd1c0bcf94134.scope: Deactivated successfully.
Jan 31 03:22:41 np0005603609 podman[282412]: 2026-01-31 08:22:41.821389569 +0000 UTC m=+0.038842884 container died 5f9b6d7cd90a82d764a59cc5bd843ec386e9f1eb91065810b64dd1c0bcf94134 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:22:41 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f9b6d7cd90a82d764a59cc5bd843ec386e9f1eb91065810b64dd1c0bcf94134-userdata-shm.mount: Deactivated successfully.
Jan 31 03:22:41 np0005603609 systemd[1]: var-lib-containers-storage-overlay-03ee99a26b0cb8a188f897dcfe98a2f92308789dbb1734a27138df619be794d6-merged.mount: Deactivated successfully.
Jan 31 03:22:41 np0005603609 podman[282412]: 2026-01-31 08:22:41.856011661 +0000 UTC m=+0.073464976 container cleanup 5f9b6d7cd90a82d764a59cc5bd843ec386e9f1eb91065810b64dd1c0bcf94134 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:22:41 np0005603609 systemd[1]: libpod-conmon-5f9b6d7cd90a82d764a59cc5bd843ec386e9f1eb91065810b64dd1c0bcf94134.scope: Deactivated successfully.
Jan 31 03:22:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:22:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:41.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:22:41 np0005603609 podman[282443]: 2026-01-31 08:22:41.91306254 +0000 UTC m=+0.042888740 container remove 5f9b6d7cd90a82d764a59cc5bd843ec386e9f1eb91065810b64dd1c0bcf94134 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:22:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:41.919 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[47d150c3-e200-48a4-a897-c05b3a6889b2]: (4, ('Sat Jan 31 08:22:41 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 (5f9b6d7cd90a82d764a59cc5bd843ec386e9f1eb91065810b64dd1c0bcf94134)\n5f9b6d7cd90a82d764a59cc5bd843ec386e9f1eb91065810b64dd1c0bcf94134\nSat Jan 31 08:22:41 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 (5f9b6d7cd90a82d764a59cc5bd843ec386e9f1eb91065810b64dd1c0bcf94134)\n5f9b6d7cd90a82d764a59cc5bd843ec386e9f1eb91065810b64dd1c0bcf94134\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:41.920 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2975cd28-2c12-4751-b9af-d59fe7175ad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:41.921 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap936cead9-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:41 np0005603609 kernel: tap936cead9-b0: left promiscuous mode
Jan 31 03:22:41 np0005603609 nova_compute[221550]: 2026-01-31 08:22:41.923 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:41 np0005603609 nova_compute[221550]: 2026-01-31 08:22:41.932 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:41.935 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[38820e18-acc1-4fca-b9bd-32fd917965e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:41.949 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c6929757-bd38-415f-bf52-939618f8c0a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:41.951 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e682d9c4-2681-4615-9229-715637c3fce0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:41.961 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4f83473a-1c7d-4611-a344-87cca1b516c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 793268, 'reachable_time': 43283, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282463, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:41 np0005603609 systemd[1]: run-netns-ovnmeta\x2d936cead9\x2dbc2f\x2d4c2d\x2d8b4c\x2d6079d2159263.mount: Deactivated successfully.
Jan 31 03:22:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:41.965 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:22:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:41.966 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[999e43e2-4af4-42d8-87f9-cb508f622f3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:41.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:42 np0005603609 nova_compute[221550]: 2026-01-31 08:22:42.106 221554 DEBUG nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Jan 31 03:22:42 np0005603609 nova_compute[221550]: 2026-01-31 08:22:42.111 221554 DEBUG nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:22:42 np0005603609 nova_compute[221550]: 2026-01-31 08:22:42.112 221554 INFO nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Creating image(s)#033[00m
Jan 31 03:22:42 np0005603609 nova_compute[221550]: 2026-01-31 08:22:42.132 221554 DEBUG nova.storage.rbd_utils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image 946cb648-0758-4617-bd3a-142804fd70f7_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:22:42 np0005603609 nova_compute[221550]: 2026-01-31 08:22:42.135 221554 DEBUG nova.objects.instance [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 946cb648-0758-4617-bd3a-142804fd70f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:42 np0005603609 nova_compute[221550]: 2026-01-31 08:22:42.602 221554 DEBUG nova.compute.manager [req-4115723f-aac6-4fc4-8af0-2bf5b59e6b71 req-df63b530-d44b-41a2-93f1-611b21d9a808 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received event network-vif-unplugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:42 np0005603609 nova_compute[221550]: 2026-01-31 08:22:42.602 221554 DEBUG oslo_concurrency.lockutils [req-4115723f-aac6-4fc4-8af0-2bf5b59e6b71 req-df63b530-d44b-41a2-93f1-611b21d9a808 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "946cb648-0758-4617-bd3a-142804fd70f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:42 np0005603609 nova_compute[221550]: 2026-01-31 08:22:42.603 221554 DEBUG oslo_concurrency.lockutils [req-4115723f-aac6-4fc4-8af0-2bf5b59e6b71 req-df63b530-d44b-41a2-93f1-611b21d9a808 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:42 np0005603609 nova_compute[221550]: 2026-01-31 08:22:42.603 221554 DEBUG oslo_concurrency.lockutils [req-4115723f-aac6-4fc4-8af0-2bf5b59e6b71 req-df63b530-d44b-41a2-93f1-611b21d9a808 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:42 np0005603609 nova_compute[221550]: 2026-01-31 08:22:42.603 221554 DEBUG nova.compute.manager [req-4115723f-aac6-4fc4-8af0-2bf5b59e6b71 req-df63b530-d44b-41a2-93f1-611b21d9a808 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] No waiting events found dispatching network-vif-unplugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:42 np0005603609 nova_compute[221550]: 2026-01-31 08:22:42.604 221554 WARNING nova.compute.manager [req-4115723f-aac6-4fc4-8af0-2bf5b59e6b71 req-df63b530-d44b-41a2-93f1-611b21d9a808 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received unexpected event network-vif-unplugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:22:42 np0005603609 nova_compute[221550]: 2026-01-31 08:22:42.700 221554 DEBUG nova.storage.rbd_utils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image 946cb648-0758-4617-bd3a-142804fd70f7_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:22:42 np0005603609 nova_compute[221550]: 2026-01-31 08:22:42.728 221554 DEBUG nova.storage.rbd_utils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image 946cb648-0758-4617-bd3a-142804fd70f7_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:22:42 np0005603609 nova_compute[221550]: 2026-01-31 08:22:42.732 221554 DEBUG oslo_concurrency.lockutils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "3d1faf7b33825dd27e09303693e3bff19299c168" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:42 np0005603609 nova_compute[221550]: 2026-01-31 08:22:42.733 221554 DEBUG oslo_concurrency.lockutils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "3d1faf7b33825dd27e09303693e3bff19299c168" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.223 221554 DEBUG nova.virt.libvirt.imagebackend [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/af02be56-bd6f-4200-837f-ea1f7e8d93ec/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/af02be56-bd6f-4200-837f-ea1f7e8d93ec/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.289 221554 DEBUG nova.virt.libvirt.imagebackend [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Selected location: {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/af02be56-bd6f-4200-837f-ea1f7e8d93ec/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.289 221554 DEBUG nova.storage.rbd_utils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] cloning images/af02be56-bd6f-4200-837f-ea1f7e8d93ec@snap to None/946cb648-0758-4617-bd3a-142804fd70f7_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:22:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.429 221554 DEBUG oslo_concurrency.lockutils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "3d1faf7b33825dd27e09303693e3bff19299c168" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.474 221554 DEBUG nova.objects.instance [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'migration_context' on Instance uuid 946cb648-0758-4617-bd3a-142804fd70f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.520 221554 DEBUG nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.523 221554 DEBUG nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Start _get_guest_xml network_info=[{"id": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "address": "fa:16:3e:0d:0a:0c", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "vif_mac": "fa:16:3e:0d:0a:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f5a0a6-02", "ovs_interfaceid": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'af02be56-bd6f-4200-837f-ea1f7e8d93ec', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': 'db690fac-05bc-4a16-95ca-cd07b52ed09d', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9963309d-4585-4a8d-8bdf-00f8aa2f69e1', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9963309d-4585-4a8d-8bdf-00f8aa2f69e1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '946cb648-0758-4617-bd3a-142804fd70f7', 'attached_at': '', 'detached_at': '', 'volume_id': '9963309d-4585-4a8d-8bdf-00f8aa2f69e1', 'serial': '9963309d-4585-4a8d-8bdf-00f8aa2f69e1'}, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'guest_format': None, 'mount_device': '/dev/vda', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.524 221554 DEBUG nova.objects.instance [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'resources' on Instance uuid 946cb648-0758-4617-bd3a-142804fd70f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.559 221554 WARNING nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.567 221554 DEBUG nova.virt.libvirt.host [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.568 221554 DEBUG nova.virt.libvirt.host [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.573 221554 DEBUG nova.virt.libvirt.host [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.573 221554 DEBUG nova.virt.libvirt.host [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.575 221554 DEBUG nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.575 221554 DEBUG nova.virt.hardware [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.575 221554 DEBUG nova.virt.hardware [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.576 221554 DEBUG nova.virt.hardware [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.576 221554 DEBUG nova.virt.hardware [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.576 221554 DEBUG nova.virt.hardware [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.577 221554 DEBUG nova.virt.hardware [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.577 221554 DEBUG nova.virt.hardware [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.577 221554 DEBUG nova.virt.hardware [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.577 221554 DEBUG nova.virt.hardware [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.578 221554 DEBUG nova.virt.hardware [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.578 221554 DEBUG nova.virt.hardware [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.578 221554 DEBUG nova.objects.instance [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 946cb648-0758-4617-bd3a-142804fd70f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:43 np0005603609 nova_compute[221550]: 2026-01-31 08:22:43.650 221554 DEBUG oslo_concurrency.processutils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:22:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:43.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:22:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:43.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:44 np0005603609 nova_compute[221550]: 2026-01-31 08:22:44.037 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:22:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3227282260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:22:44 np0005603609 nova_compute[221550]: 2026-01-31 08:22:44.064 221554 DEBUG oslo_concurrency.processutils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:45.000 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:45 np0005603609 nova_compute[221550]: 2026-01-31 08:22:45.346 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:45 np0005603609 nova_compute[221550]: 2026-01-31 08:22:45.463 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Updating instance_info_cache with network_info: [{"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:22:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:45.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:45.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.322 221554 DEBUG nova.compute.manager [req-6c695929-3240-4556-8445-fc63cfb99fa9 req-5942e543-27a8-4074-a1a2-8bb77fc603f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.323 221554 DEBUG oslo_concurrency.lockutils [req-6c695929-3240-4556-8445-fc63cfb99fa9 req-5942e543-27a8-4074-a1a2-8bb77fc603f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "946cb648-0758-4617-bd3a-142804fd70f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.324 221554 DEBUG oslo_concurrency.lockutils [req-6c695929-3240-4556-8445-fc63cfb99fa9 req-5942e543-27a8-4074-a1a2-8bb77fc603f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.324 221554 DEBUG oslo_concurrency.lockutils [req-6c695929-3240-4556-8445-fc63cfb99fa9 req-5942e543-27a8-4074-a1a2-8bb77fc603f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.325 221554 DEBUG nova.compute.manager [req-6c695929-3240-4556-8445-fc63cfb99fa9 req-5942e543-27a8-4074-a1a2-8bb77fc603f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] No waiting events found dispatching network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.325 221554 WARNING nova.compute.manager [req-6c695929-3240-4556-8445-fc63cfb99fa9 req-5942e543-27a8-4074-a1a2-8bb77fc603f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received unexpected event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.376 221554 DEBUG oslo_concurrency.processutils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.407 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-38deb482-bd85-4fdf-b2da-2725bffd8f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.408 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.410 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.412 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.412 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.413 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.413 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.484 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.485 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.486 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.487 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.487 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:22:46 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1587069712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.849 221554 DEBUG oslo_concurrency.processutils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.851 221554 DEBUG nova.virt.libvirt.vif [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:21:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-2006986215',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-2006986215',id=149,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:21:34Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c90ea7f1be5f484bb873548236fadc00',ramdisk_id='',reservation_id='r-nvcx2ai9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:21:34Z,user_data=None,user_id='038e2b3b4f174162a3ac6c4870857e60',uuid=946cb648-0758-4617-bd3a-142804fd70f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "address": "fa:16:3e:0d:0a:0c", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "vif_mac": "fa:16:3e:0d:0a:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f5a0a6-02", "ovs_interfaceid": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.852 221554 DEBUG nova.network.os_vif_util [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converting VIF {"id": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "address": "fa:16:3e:0d:0a:0c", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "vif_mac": "fa:16:3e:0d:0a:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f5a0a6-02", "ovs_interfaceid": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.853 221554 DEBUG nova.network.os_vif_util [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0d:0a:0c,bridge_name='br-int',has_traffic_filtering=True,id=36f5a0a6-029b-4491-8d74-e44ca0e59f7d,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36f5a0a6-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.854 221554 DEBUG nova.objects.instance [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 946cb648-0758-4617-bd3a-142804fd70f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:22:46 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2237483166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:22:46 np0005603609 nova_compute[221550]: 2026-01-31 08:22:46.949 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:47 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:47Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:71:84 10.100.0.4
Jan 31 03:22:47 np0005603609 nova_compute[221550]: 2026-01-31 08:22:47.268 221554 DEBUG nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  <uuid>946cb648-0758-4617-bd3a-142804fd70f7</uuid>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  <name>instance-00000095</name>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-2006986215</nova:name>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:22:43</nova:creationTime>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <nova:user uuid="038e2b3b4f174162a3ac6c4870857e60">tempest-ServerBootFromVolumeStableRescueTest-1116995694-project-member</nova:user>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <nova:project uuid="c90ea7f1be5f484bb873548236fadc00">tempest-ServerBootFromVolumeStableRescueTest-1116995694</nova:project>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <nova:port uuid="36f5a0a6-029b-4491-8d74-e44ca0e59f7d">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <entry name="serial">946cb648-0758-4617-bd3a-142804fd70f7</entry>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <entry name="uuid">946cb648-0758-4617-bd3a-142804fd70f7</entry>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/946cb648-0758-4617-bd3a-142804fd70f7_disk.config">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="volumes/volume-9963309d-4585-4a8d-8bdf-00f8aa2f69e1">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <serial>9963309d-4585-4a8d-8bdf-00f8aa2f69e1</serial>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/946cb648-0758-4617-bd3a-142804fd70f7_disk.rescue">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <target dev="vdb" bus="virtio"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <boot order="1"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:0d:0a:0c"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <target dev="tap36f5a0a6-02"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7/console.log" append="off"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:22:47 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:22:47 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:22:47 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:22:47 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:22:47 np0005603609 nova_compute[221550]: 2026-01-31 08:22:47.275 221554 INFO nova.virt.libvirt.driver [-] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Instance destroyed successfully.#033[00m
Jan 31 03:22:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:47.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:47.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.070 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.070 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.074 221554 DEBUG nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.075 221554 DEBUG nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.075 221554 DEBUG nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.075 221554 DEBUG nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No VIF found with MAC fa:16:3e:0d:0a:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.076 221554 INFO nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Using config drive#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.110 221554 DEBUG nova.storage.rbd_utils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image 946cb648-0758-4617-bd3a-142804fd70f7_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.119 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.120 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.120 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.125 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.125 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.126 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.140 221554 DEBUG nova.objects.instance [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 946cb648-0758-4617-bd3a-142804fd70f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.220 221554 DEBUG nova.objects.instance [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'keypairs' on Instance uuid 946cb648-0758-4617-bd3a-142804fd70f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.315 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.317 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3902MB free_disk=20.739330291748047GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.317 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.318 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.812 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 38deb482-bd85-4fdf-b2da-2725bffd8f43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.813 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 946cb648-0758-4617-bd3a-142804fd70f7 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.813 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 20cc7040-fd06-49c7-8e68-41cb74e67e9a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.813 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.813 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:22:48 np0005603609 nova_compute[221550]: 2026-01-31 08:22:48.960 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.041 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.086 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.087 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.109 221554 INFO nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Creating config drive at /var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7/disk.config.rescue#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.114 221554 DEBUG oslo_concurrency.processutils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp5crmxxfm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.136 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.174 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.238 221554 DEBUG oslo_concurrency.processutils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp5crmxxfm" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.267 221554 DEBUG nova.storage.rbd_utils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image 946cb648-0758-4617-bd3a-142804fd70f7_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.271 221554 DEBUG oslo_concurrency.processutils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7/disk.config.rescue 946cb648-0758-4617-bd3a-142804fd70f7_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.357 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.725 221554 DEBUG oslo_concurrency.processutils [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7/disk.config.rescue 946cb648-0758-4617-bd3a-142804fd70f7_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.726 221554 INFO nova.virt.libvirt.driver [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Deleting local config drive /var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7/disk.config.rescue because it was imported into RBD.#033[00m
Jan 31 03:22:49 np0005603609 NetworkManager[49064]: <info>  [1769847769.7796] manager: (tap36f5a0a6-02): new Tun device (/org/freedesktop/NetworkManager/Devices/307)
Jan 31 03:22:49 np0005603609 kernel: tap36f5a0a6-02: entered promiscuous mode
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.787 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:49Z|00652|binding|INFO|Claiming lport 36f5a0a6-029b-4491-8d74-e44ca0e59f7d for this chassis.
Jan 31 03:22:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:49Z|00653|binding|INFO|36f5a0a6-029b-4491-8d74-e44ca0e59f7d: Claiming fa:16:3e:0d:0a:0c 10.100.0.8
Jan 31 03:22:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:49Z|00654|binding|INFO|Setting lport 36f5a0a6-029b-4491-8d74-e44ca0e59f7d ovn-installed in OVS
Jan 31 03:22:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.797 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:49 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/245710413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:22:49 np0005603609 systemd-machined[190912]: New machine qemu-79-instance-00000095.
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.815 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:49 np0005603609 systemd-udevd[282786]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:22:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:49Z|00655|binding|INFO|Setting lport 36f5a0a6-029b-4491-8d74-e44ca0e59f7d up in Southbound
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.821 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:0a:0c 10.100.0.8'], port_security=['fa:16:3e:0d:0a:0c 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '946cb648-0758-4617-bd3a-142804fd70f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c90ea7f1be5f484bb873548236fadc00', 'neutron:revision_number': '5', 'neutron:security_group_ids': '952b4f08-f5a7-4fc0-ae2c-267f2ba857a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42261fad-d2a1-4da1-823a-75e271c17223, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=36f5a0a6-029b-4491-8d74-e44ca0e59f7d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.822 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.822 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 36f5a0a6-029b-4491-8d74-e44ca0e59f7d in datapath 936cead9-bc2f-4c2d-8b4c-6079d2159263 bound to our chassis#033[00m
Jan 31 03:22:49 np0005603609 systemd[1]: Started Virtual Machine qemu-79-instance-00000095.
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.825 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 936cead9-bc2f-4c2d-8b4c-6079d2159263#033[00m
Jan 31 03:22:49 np0005603609 NetworkManager[49064]: <info>  [1769847769.8334] device (tap36f5a0a6-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:22:49 np0005603609 NetworkManager[49064]: <info>  [1769847769.8342] device (tap36f5a0a6-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.833 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[282a7cad-80c2-438c-a6a9-3b1bd3e428a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.835 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap936cead9-b1 in ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.836 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap936cead9-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.837 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[42de16ba-b3d7-4826-9db3-b68b05007dcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.837 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[08122a95-eb0f-47ef-9ff0-965133e497a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.845 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[08a96c23-7d6a-4edd-a21a-7b8491d781b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.855 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d2c515-8735-4108-bf56-da05675fb254]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.862 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.863 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:22:49 np0005603609 nova_compute[221550]: 2026-01-31 08:22:49.864 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:49.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.878 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[84a33210-26ea-4db1-9c98-c4fd507c7d67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.883 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5e480c71-7e77-4c5c-b020-7985b3e8c7ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:49 np0005603609 NetworkManager[49064]: <info>  [1769847769.8848] manager: (tap936cead9-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/308)
Jan 31 03:22:49 np0005603609 systemd-udevd[282789]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.911 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[9f76632b-b6e4-43f4-9289-78644d7449b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.913 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c64e49d2-8657-4cc4-8f2f-ec5ca660ddbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:49 np0005603609 NetworkManager[49064]: <info>  [1769847769.9296] device (tap936cead9-b0): carrier: link connected
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.933 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[68f77698-3fca-4f7d-8bed-2ae043426285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.948 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7be0feb7-fa9f-4425-bc58-e99dbc5d8214]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap936cead9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:06:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800962, 'reachable_time': 15351, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282821, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.959 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e4248bf3-e44c-4e20-8825-94ac3cf98e2f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:62a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 800962, 'tstamp': 800962}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282822, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.973 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e63c8b8c-72f3-40e7-865e-793c4124aa33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap936cead9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:06:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800962, 'reachable_time': 15351, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282823, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:22:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:49.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:22:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:49.993 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2544ab4f-b7ec-42e6-84aa-e898d157e436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:50.041 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4779adf0-24e4-4ba4-9aaf-a217aa65ea3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:50.042 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap936cead9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:50.042 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:50.043 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap936cead9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:50 np0005603609 kernel: tap936cead9-b0: entered promiscuous mode
Jan 31 03:22:50 np0005603609 NetworkManager[49064]: <info>  [1769847770.0832] manager: (tap936cead9-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/309)
Jan 31 03:22:50 np0005603609 nova_compute[221550]: 2026-01-31 08:22:50.086 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:50.087 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap936cead9-b0, col_values=(('external_ids', {'iface-id': 'fd5187fd-cce9-41da-96d2-ef75fbcbcf0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:50 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:50Z|00656|binding|INFO|Releasing lport fd5187fd-cce9-41da-96d2-ef75fbcbcf0f from this chassis (sb_readonly=0)
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:50.092 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/936cead9-bc2f-4c2d-8b4c-6079d2159263.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/936cead9-bc2f-4c2d-8b4c-6079d2159263.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:50.093 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[556145f1-9d42-4bdc-9a6d-11d9ff6e325c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:50.094 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/936cead9-bc2f-4c2d-8b4c-6079d2159263.pid.haproxy
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:22:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:50.095 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'env', 'PROCESS_TAG=haproxy-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/936cead9-bc2f-4c2d-8b4c-6079d2159263.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:22:50 np0005603609 nova_compute[221550]: 2026-01-31 08:22:50.347 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:50 np0005603609 podman[282910]: 2026-01-31 08:22:50.406819276 +0000 UTC m=+0.041925698 container create 02974e8ebc9c872eb821adb702367efe7538ce09f1d4c9dcca9c2dc1ce796f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:22:50 np0005603609 systemd[1]: Started libpod-conmon-02974e8ebc9c872eb821adb702367efe7538ce09f1d4c9dcca9c2dc1ce796f75.scope.
Jan 31 03:22:50 np0005603609 nova_compute[221550]: 2026-01-31 08:22:50.456 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 946cb648-0758-4617-bd3a-142804fd70f7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:22:50 np0005603609 nova_compute[221550]: 2026-01-31 08:22:50.457 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847770.4562023, 946cb648-0758-4617-bd3a-142804fd70f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:22:50 np0005603609 nova_compute[221550]: 2026-01-31 08:22:50.457 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:22:50 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:22:50 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2f4f46f58866ccd361212779a4e897147790bb47df5d54c7190595fdc8f03a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:50 np0005603609 nova_compute[221550]: 2026-01-31 08:22:50.465 221554 DEBUG nova.compute.manager [None req-bb401351-e4a8-412c-b115-99e52ec55f11 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:22:50 np0005603609 podman[282910]: 2026-01-31 08:22:50.475875304 +0000 UTC m=+0.110981746 container init 02974e8ebc9c872eb821adb702367efe7538ce09f1d4c9dcca9c2dc1ce796f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:22:50 np0005603609 podman[282910]: 2026-01-31 08:22:50.381133929 +0000 UTC m=+0.016240371 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:22:50 np0005603609 podman[282910]: 2026-01-31 08:22:50.479651065 +0000 UTC m=+0.114757507 container start 02974e8ebc9c872eb821adb702367efe7538ce09f1d4c9dcca9c2dc1ce796f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:22:50 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[282931]: [NOTICE]   (282935) : New worker (282937) forked
Jan 31 03:22:50 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[282931]: [NOTICE]   (282935) : Loading success.
Jan 31 03:22:50 np0005603609 nova_compute[221550]: 2026-01-31 08:22:50.580 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:22:50 np0005603609 nova_compute[221550]: 2026-01-31 08:22:50.584 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:22:50 np0005603609 nova_compute[221550]: 2026-01-31 08:22:50.737 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847770.4571779, 946cb648-0758-4617-bd3a-142804fd70f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:22:50 np0005603609 nova_compute[221550]: 2026-01-31 08:22:50.738 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] VM Started (Lifecycle Event)#033[00m
Jan 31 03:22:50 np0005603609 nova_compute[221550]: 2026-01-31 08:22:50.983 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:22:50 np0005603609 nova_compute[221550]: 2026-01-31 08:22:50.987 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:22:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:51.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:51 np0005603609 nova_compute[221550]: 2026-01-31 08:22:51.934 221554 DEBUG nova.compute.manager [req-04a2dd21-8ae6-4531-a62b-6a17e900e68a req-5d7580e5-3a99-4fae-b373-c0553217f29c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:51 np0005603609 nova_compute[221550]: 2026-01-31 08:22:51.935 221554 DEBUG oslo_concurrency.lockutils [req-04a2dd21-8ae6-4531-a62b-6a17e900e68a req-5d7580e5-3a99-4fae-b373-c0553217f29c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "946cb648-0758-4617-bd3a-142804fd70f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:51 np0005603609 nova_compute[221550]: 2026-01-31 08:22:51.935 221554 DEBUG oslo_concurrency.lockutils [req-04a2dd21-8ae6-4531-a62b-6a17e900e68a req-5d7580e5-3a99-4fae-b373-c0553217f29c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:51 np0005603609 nova_compute[221550]: 2026-01-31 08:22:51.935 221554 DEBUG oslo_concurrency.lockutils [req-04a2dd21-8ae6-4531-a62b-6a17e900e68a req-5d7580e5-3a99-4fae-b373-c0553217f29c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:51 np0005603609 nova_compute[221550]: 2026-01-31 08:22:51.935 221554 DEBUG nova.compute.manager [req-04a2dd21-8ae6-4531-a62b-6a17e900e68a req-5d7580e5-3a99-4fae-b373-c0553217f29c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] No waiting events found dispatching network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:51 np0005603609 nova_compute[221550]: 2026-01-31 08:22:51.935 221554 WARNING nova.compute.manager [req-04a2dd21-8ae6-4531-a62b-6a17e900e68a req-5d7580e5-3a99-4fae-b373-c0553217f29c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received unexpected event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d for instance with vm_state rescued and task_state None.#033[00m
Jan 31 03:22:51 np0005603609 nova_compute[221550]: 2026-01-31 08:22:51.936 221554 DEBUG nova.compute.manager [req-04a2dd21-8ae6-4531-a62b-6a17e900e68a req-5d7580e5-3a99-4fae-b373-c0553217f29c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:51 np0005603609 nova_compute[221550]: 2026-01-31 08:22:51.936 221554 DEBUG oslo_concurrency.lockutils [req-04a2dd21-8ae6-4531-a62b-6a17e900e68a req-5d7580e5-3a99-4fae-b373-c0553217f29c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "946cb648-0758-4617-bd3a-142804fd70f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:51 np0005603609 nova_compute[221550]: 2026-01-31 08:22:51.936 221554 DEBUG oslo_concurrency.lockutils [req-04a2dd21-8ae6-4531-a62b-6a17e900e68a req-5d7580e5-3a99-4fae-b373-c0553217f29c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:51 np0005603609 nova_compute[221550]: 2026-01-31 08:22:51.936 221554 DEBUG oslo_concurrency.lockutils [req-04a2dd21-8ae6-4531-a62b-6a17e900e68a req-5d7580e5-3a99-4fae-b373-c0553217f29c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:51 np0005603609 nova_compute[221550]: 2026-01-31 08:22:51.936 221554 DEBUG nova.compute.manager [req-04a2dd21-8ae6-4531-a62b-6a17e900e68a req-5d7580e5-3a99-4fae-b373-c0553217f29c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] No waiting events found dispatching network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:51 np0005603609 nova_compute[221550]: 2026-01-31 08:22:51.937 221554 WARNING nova.compute.manager [req-04a2dd21-8ae6-4531-a62b-6a17e900e68a req-5d7580e5-3a99-4fae-b373-c0553217f29c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received unexpected event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d for instance with vm_state rescued and task_state None.#033[00m
Jan 31 03:22:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:51.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e346 e346: 3 total, 3 up, 3 in
Jan 31 03:22:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:22:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2446759486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:22:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:22:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2446759486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:22:53 np0005603609 nova_compute[221550]: 2026-01-31 08:22:53.868 221554 INFO nova.compute.manager [None req-3bd14f64-fd97-49a6-9256-e2d8a4184ed7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Unrescuing#033[00m
Jan 31 03:22:53 np0005603609 nova_compute[221550]: 2026-01-31 08:22:53.868 221554 DEBUG oslo_concurrency.lockutils [None req-3bd14f64-fd97-49a6-9256-e2d8a4184ed7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "refresh_cache-946cb648-0758-4617-bd3a-142804fd70f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:22:53 np0005603609 nova_compute[221550]: 2026-01-31 08:22:53.868 221554 DEBUG oslo_concurrency.lockutils [None req-3bd14f64-fd97-49a6-9256-e2d8a4184ed7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquired lock "refresh_cache-946cb648-0758-4617-bd3a-142804fd70f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:22:53 np0005603609 nova_compute[221550]: 2026-01-31 08:22:53.868 221554 DEBUG nova.network.neutron [None req-3bd14f64-fd97-49a6-9256-e2d8a4184ed7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:22:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:22:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:53.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:22:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:53.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:54 np0005603609 nova_compute[221550]: 2026-01-31 08:22:54.044 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:55 np0005603609 nova_compute[221550]: 2026-01-31 08:22:55.349 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:55.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:22:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:55.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:22:56 np0005603609 nova_compute[221550]: 2026-01-31 08:22:56.858 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:56 np0005603609 nova_compute[221550]: 2026-01-31 08:22:56.859 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:56 np0005603609 nova_compute[221550]: 2026-01-31 08:22:56.984 221554 DEBUG nova.network.neutron [None req-3bd14f64-fd97-49a6-9256-e2d8a4184ed7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Updating instance_info_cache with network_info: [{"id": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "address": "fa:16:3e:0d:0a:0c", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f5a0a6-02", "ovs_interfaceid": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.004 221554 DEBUG oslo_concurrency.lockutils [None req-3bd14f64-fd97-49a6-9256-e2d8a4184ed7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Releasing lock "refresh_cache-946cb648-0758-4617-bd3a-142804fd70f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.005 221554 DEBUG nova.objects.instance [None req-3bd14f64-fd97-49a6-9256-e2d8a4184ed7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'flavor' on Instance uuid 946cb648-0758-4617-bd3a-142804fd70f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:57 np0005603609 kernel: tap36f5a0a6-02 (unregistering): left promiscuous mode
Jan 31 03:22:57 np0005603609 NetworkManager[49064]: <info>  [1769847777.0900] device (tap36f5a0a6-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:22:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:57Z|00657|binding|INFO|Releasing lport 36f5a0a6-029b-4491-8d74-e44ca0e59f7d from this chassis (sb_readonly=0)
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.097 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:57Z|00658|binding|INFO|Setting lport 36f5a0a6-029b-4491-8d74-e44ca0e59f7d down in Southbound
Jan 31 03:22:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:57Z|00659|binding|INFO|Removing iface tap36f5a0a6-02 ovn-installed in OVS
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.105 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.118 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:0a:0c 10.100.0.8'], port_security=['fa:16:3e:0d:0a:0c 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '946cb648-0758-4617-bd3a-142804fd70f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c90ea7f1be5f484bb873548236fadc00', 'neutron:revision_number': '6', 'neutron:security_group_ids': '952b4f08-f5a7-4fc0-ae2c-267f2ba857a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42261fad-d2a1-4da1-823a-75e271c17223, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=36f5a0a6-029b-4491-8d74-e44ca0e59f7d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.120 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 36f5a0a6-029b-4491-8d74-e44ca0e59f7d in datapath 936cead9-bc2f-4c2d-8b4c-6079d2159263 unbound from our chassis#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.122 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 936cead9-bc2f-4c2d-8b4c-6079d2159263, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.123 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe78404-9330-4f11-ac00-c94d89f85020]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.124 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 namespace which is not needed anymore#033[00m
Jan 31 03:22:57 np0005603609 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000095.scope: Deactivated successfully.
Jan 31 03:22:57 np0005603609 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000095.scope: Consumed 7.383s CPU time.
Jan 31 03:22:57 np0005603609 systemd-machined[190912]: Machine qemu-79-instance-00000095 terminated.
Jan 31 03:22:57 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[282931]: [NOTICE]   (282935) : haproxy version is 2.8.14-c23fe91
Jan 31 03:22:57 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[282931]: [NOTICE]   (282935) : path to executable is /usr/sbin/haproxy
Jan 31 03:22:57 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[282931]: [WARNING]  (282935) : Exiting Master process...
Jan 31 03:22:57 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[282931]: [ALERT]    (282935) : Current worker (282937) exited with code 143 (Terminated)
Jan 31 03:22:57 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[282931]: [WARNING]  (282935) : All workers exited. Exiting... (0)
Jan 31 03:22:57 np0005603609 systemd[1]: libpod-02974e8ebc9c872eb821adb702367efe7538ce09f1d4c9dcca9c2dc1ce796f75.scope: Deactivated successfully.
Jan 31 03:22:57 np0005603609 podman[282970]: 2026-01-31 08:22:57.265804485 +0000 UTC m=+0.053263740 container died 02974e8ebc9c872eb821adb702367efe7538ce09f1d4c9dcca9c2dc1ce796f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.274 221554 INFO nova.virt.libvirt.driver [-] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Instance destroyed successfully.#033[00m
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.275 221554 DEBUG nova.objects.instance [None req-3bd14f64-fd97-49a6-9256-e2d8a4184ed7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'numa_topology' on Instance uuid 946cb648-0758-4617-bd3a-142804fd70f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:57 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-02974e8ebc9c872eb821adb702367efe7538ce09f1d4c9dcca9c2dc1ce796f75-userdata-shm.mount: Deactivated successfully.
Jan 31 03:22:57 np0005603609 systemd[1]: var-lib-containers-storage-overlay-9d2f4f46f58866ccd361212779a4e897147790bb47df5d54c7190595fdc8f03a-merged.mount: Deactivated successfully.
Jan 31 03:22:57 np0005603609 podman[282970]: 2026-01-31 08:22:57.307915507 +0000 UTC m=+0.095374762 container cleanup 02974e8ebc9c872eb821adb702367efe7538ce09f1d4c9dcca9c2dc1ce796f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:22:57 np0005603609 systemd[1]: libpod-conmon-02974e8ebc9c872eb821adb702367efe7538ce09f1d4c9dcca9c2dc1ce796f75.scope: Deactivated successfully.
Jan 31 03:22:57 np0005603609 kernel: tap36f5a0a6-02: entered promiscuous mode
Jan 31 03:22:57 np0005603609 systemd-udevd[282950]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:22:57 np0005603609 NetworkManager[49064]: <info>  [1769847777.3554] manager: (tap36f5a0a6-02): new Tun device (/org/freedesktop/NetworkManager/Devices/310)
Jan 31 03:22:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:57Z|00660|binding|INFO|Claiming lport 36f5a0a6-029b-4491-8d74-e44ca0e59f7d for this chassis.
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.357 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:57Z|00661|binding|INFO|36f5a0a6-029b-4491-8d74-e44ca0e59f7d: Claiming fa:16:3e:0d:0a:0c 10.100.0.8
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.365 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:57 np0005603609 NetworkManager[49064]: <info>  [1769847777.3674] device (tap36f5a0a6-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:22:57 np0005603609 NetworkManager[49064]: <info>  [1769847777.3682] device (tap36f5a0a6-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.367 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:0a:0c 10.100.0.8'], port_security=['fa:16:3e:0d:0a:0c 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '946cb648-0758-4617-bd3a-142804fd70f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c90ea7f1be5f484bb873548236fadc00', 'neutron:revision_number': '6', 'neutron:security_group_ids': '952b4f08-f5a7-4fc0-ae2c-267f2ba857a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42261fad-d2a1-4da1-823a-75e271c17223, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=36f5a0a6-029b-4491-8d74-e44ca0e59f7d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:22:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:57Z|00662|binding|INFO|Setting lport 36f5a0a6-029b-4491-8d74-e44ca0e59f7d up in Southbound
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.369 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:57Z|00663|binding|INFO|Setting lport 36f5a0a6-029b-4491-8d74-e44ca0e59f7d ovn-installed in OVS
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.371 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:57 np0005603609 podman[283014]: 2026-01-31 08:22:57.378519172 +0000 UTC m=+0.055268359 container remove 02974e8ebc9c872eb821adb702367efe7538ce09f1d4c9dcca9c2dc1ce796f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.383 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[40bc1b65-1eed-4947-be71-1d9c90af2641]: (4, ('Sat Jan 31 08:22:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 (02974e8ebc9c872eb821adb702367efe7538ce09f1d4c9dcca9c2dc1ce796f75)\n02974e8ebc9c872eb821adb702367efe7538ce09f1d4c9dcca9c2dc1ce796f75\nSat Jan 31 08:22:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 (02974e8ebc9c872eb821adb702367efe7538ce09f1d4c9dcca9c2dc1ce796f75)\n02974e8ebc9c872eb821adb702367efe7538ce09f1d4c9dcca9c2dc1ce796f75\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 systemd-machined[190912]: New machine qemu-80-instance-00000095.
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.385 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[30b8df72-063c-4745-aab7-5f964c486f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.386 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap936cead9-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.388 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:57 np0005603609 kernel: tap936cead9-b0: left promiscuous mode
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.392 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.395 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fa42f7cd-4e8d-4372-9f2f-74a5aadd6de0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 systemd[1]: Started Virtual Machine qemu-80-instance-00000095.
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.413 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[336202d8-47b0-4353-ad0e-55ae7c60b4de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.414 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5e7bd181-0c14-4e91-b8c3-9eda9926cb3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.426 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0470b53c-db5f-486a-b0da-1400e67fb24a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800957, 'reachable_time': 30815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283043, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 systemd[1]: run-netns-ovnmeta\x2d936cead9\x2dbc2f\x2d4c2d\x2d8b4c\x2d6079d2159263.mount: Deactivated successfully.
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.428 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.428 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[a052600a-0132-4af5-8f9b-d3afb6853ffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.430 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 36f5a0a6-029b-4491-8d74-e44ca0e59f7d in datapath 936cead9-bc2f-4c2d-8b4c-6079d2159263 unbound from our chassis#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.432 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 936cead9-bc2f-4c2d-8b4c-6079d2159263#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.440 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[158f3a2d-7338-4278-beb3-fd2b2aee67cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.441 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap936cead9-b1 in ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.443 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap936cead9-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.443 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[135eadf8-803b-4617-9a4f-c8a7589f0944]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.444 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4a0d57ca-10c6-4885-84a9-9787f47e31a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.452 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[a4271a77-46f9-4932-99af-6c25baafb2ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.473 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7b027525-0e57-49ee-88c8-55e456a8ec3d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.492 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[dc575a8f-a68b-4592-9afd-5a02d5f25302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.497 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3937f20f-c9b3-46e9-9b37-7558a9f1ffd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 NetworkManager[49064]: <info>  [1769847777.4982] manager: (tap936cead9-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/311)
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.521 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[e852bd6a-19c5-4f30-84de-c1843cc5bb7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.523 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[051e3c91-9a67-4dbf-8381-f0e32e69d78a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 NetworkManager[49064]: <info>  [1769847777.5386] device (tap936cead9-b0): carrier: link connected
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.540 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a41a24ad-b2b2-41e0-9dfa-9ba06d1c28fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.552 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7e982e93-3d68-48bb-8072-990252986359]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap936cead9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:06:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801723, 'reachable_time': 38833, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283072, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.563 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ce945099-f179-4717-b5ef-88092f8b0549]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:62a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 801723, 'tstamp': 801723}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283073, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.575 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[143d7cd1-ca9d-4de3-b11b-4a303630bb55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap936cead9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:06:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801723, 'reachable_time': 38833, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283074, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.599 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d4541643-1f7e-4728-92e7-e2e504245141]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.633 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ee0da5-5bb5-47ba-9bc8-be3098ed5ef7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.634 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap936cead9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.634 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.635 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap936cead9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:57 np0005603609 kernel: tap936cead9-b0: entered promiscuous mode
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.637 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:57 np0005603609 NetworkManager[49064]: <info>  [1769847777.6393] manager: (tap936cead9-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.639 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.641 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap936cead9-b0, col_values=(('external_ids', {'iface-id': 'fd5187fd-cce9-41da-96d2-ef75fbcbcf0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.642 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:22:57Z|00664|binding|INFO|Releasing lport fd5187fd-cce9-41da-96d2-ef75fbcbcf0f from this chassis (sb_readonly=0)
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.643 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/936cead9-bc2f-4c2d-8b4c-6079d2159263.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/936cead9-bc2f-4c2d-8b4c-6079d2159263.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.644 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[27ac024e-5ecc-4334-a55e-f98bf997e7e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.645 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/936cead9-bc2f-4c2d-8b4c-6079d2159263.pid.haproxy
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:22:57 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:22:57.645 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'env', 'PROCESS_TAG=haproxy-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/936cead9-bc2f-4c2d-8b4c-6079d2159263.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.648 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.813 221554 DEBUG nova.compute.manager [req-21ee59e6-30bf-4ae0-a176-46db2899e4c1 req-648d44c6-aff4-4ba0-84f0-e3e8017153fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received event network-vif-unplugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.814 221554 DEBUG oslo_concurrency.lockutils [req-21ee59e6-30bf-4ae0-a176-46db2899e4c1 req-648d44c6-aff4-4ba0-84f0-e3e8017153fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "946cb648-0758-4617-bd3a-142804fd70f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.814 221554 DEBUG oslo_concurrency.lockutils [req-21ee59e6-30bf-4ae0-a176-46db2899e4c1 req-648d44c6-aff4-4ba0-84f0-e3e8017153fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.814 221554 DEBUG oslo_concurrency.lockutils [req-21ee59e6-30bf-4ae0-a176-46db2899e4c1 req-648d44c6-aff4-4ba0-84f0-e3e8017153fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.815 221554 DEBUG nova.compute.manager [req-21ee59e6-30bf-4ae0-a176-46db2899e4c1 req-648d44c6-aff4-4ba0-84f0-e3e8017153fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] No waiting events found dispatching network-vif-unplugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:57 np0005603609 nova_compute[221550]: 2026-01-31 08:22:57.815 221554 WARNING nova.compute.manager [req-21ee59e6-30bf-4ae0-a176-46db2899e4c1 req-648d44c6-aff4-4ba0-84f0-e3e8017153fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received unexpected event network-vif-unplugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 31 03:22:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:57.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:57 np0005603609 podman[283106]: 2026-01-31 08:22:57.977162958 +0000 UTC m=+0.055910754 container create 0644bd4151732c721cc61ecdc293b7fde01575c1653375c3a61cde6a8f158b56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:22:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:22:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:57.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:22:58 np0005603609 systemd[1]: Started libpod-conmon-0644bd4151732c721cc61ecdc293b7fde01575c1653375c3a61cde6a8f158b56.scope.
Jan 31 03:22:58 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:22:58 np0005603609 podman[283106]: 2026-01-31 08:22:57.953444779 +0000 UTC m=+0.032192505 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:22:58 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81daabdb265024e52c01f762951f71940b609b854cbcf21d064ca00a1f78d007/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:58 np0005603609 podman[283106]: 2026-01-31 08:22:58.057536188 +0000 UTC m=+0.136283924 container init 0644bd4151732c721cc61ecdc293b7fde01575c1653375c3a61cde6a8f158b56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 03:22:58 np0005603609 podman[283106]: 2026-01-31 08:22:58.061689248 +0000 UTC m=+0.140436964 container start 0644bd4151732c721cc61ecdc293b7fde01575c1653375c3a61cde6a8f158b56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:22:58 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[283120]: [NOTICE]   (283124) : New worker (283126) forked
Jan 31 03:22:58 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[283120]: [NOTICE]   (283124) : Loading success.
Jan 31 03:22:58 np0005603609 nova_compute[221550]: 2026-01-31 08:22:58.297 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 946cb648-0758-4617-bd3a-142804fd70f7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:22:58 np0005603609 nova_compute[221550]: 2026-01-31 08:22:58.298 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847778.2975142, 946cb648-0758-4617-bd3a-142804fd70f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:22:58 np0005603609 nova_compute[221550]: 2026-01-31 08:22:58.298 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:22:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:58 np0005603609 podman[283193]: 2026-01-31 08:22:58.380818271 +0000 UTC m=+0.047055641 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:22:58 np0005603609 nova_compute[221550]: 2026-01-31 08:22:58.381 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:22:58 np0005603609 nova_compute[221550]: 2026-01-31 08:22:58.387 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:22:58 np0005603609 nova_compute[221550]: 2026-01-31 08:22:58.435 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:22:58 np0005603609 nova_compute[221550]: 2026-01-31 08:22:58.436 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847778.3000827, 946cb648-0758-4617-bd3a-142804fd70f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:22:58 np0005603609 nova_compute[221550]: 2026-01-31 08:22:58.436 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] VM Started (Lifecycle Event)#033[00m
Jan 31 03:22:58 np0005603609 podman[283177]: 2026-01-31 08:22:58.44658859 +0000 UTC m=+0.115964965 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20260127)
Jan 31 03:22:58 np0005603609 nova_compute[221550]: 2026-01-31 08:22:58.469 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:22:58 np0005603609 nova_compute[221550]: 2026-01-31 08:22:58.472 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:22:58 np0005603609 nova_compute[221550]: 2026-01-31 08:22:58.510 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:22:58 np0005603609 nova_compute[221550]: 2026-01-31 08:22:58.984 221554 DEBUG nova.compute.manager [None req-3bd14f64-fd97-49a6-9256-e2d8a4184ed7 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:22:59 np0005603609 nova_compute[221550]: 2026-01-31 08:22:59.046 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:22:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:59.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:00.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.040 221554 DEBUG nova.compute.manager [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.041 221554 DEBUG oslo_concurrency.lockutils [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "946cb648-0758-4617-bd3a-142804fd70f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.041 221554 DEBUG oslo_concurrency.lockutils [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.041 221554 DEBUG oslo_concurrency.lockutils [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.041 221554 DEBUG nova.compute.manager [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] No waiting events found dispatching network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.041 221554 WARNING nova.compute.manager [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received unexpected event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d for instance with vm_state active and task_state None.#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.041 221554 DEBUG nova.compute.manager [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.041 221554 DEBUG oslo_concurrency.lockutils [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "946cb648-0758-4617-bd3a-142804fd70f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.042 221554 DEBUG oslo_concurrency.lockutils [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.042 221554 DEBUG oslo_concurrency.lockutils [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.042 221554 DEBUG nova.compute.manager [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] No waiting events found dispatching network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.042 221554 WARNING nova.compute.manager [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received unexpected event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d for instance with vm_state active and task_state None.#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.042 221554 DEBUG nova.compute.manager [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.042 221554 DEBUG oslo_concurrency.lockutils [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "946cb648-0758-4617-bd3a-142804fd70f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.042 221554 DEBUG oslo_concurrency.lockutils [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.043 221554 DEBUG oslo_concurrency.lockutils [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.043 221554 DEBUG nova.compute.manager [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] No waiting events found dispatching network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.043 221554 WARNING nova.compute.manager [req-20e94e37-117f-4b8a-aa98-9543dad5d9e8 req-a1a404d3-f5fe-402b-88fc-74fe64199dbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received unexpected event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d for instance with vm_state active and task_state None.#033[00m
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.350 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:00 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:00Z|00665|binding|INFO|Releasing lport 075aefe0-13df-4a17-ae95-485ece950a10 from this chassis (sb_readonly=0)
Jan 31 03:23:00 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:00Z|00666|binding|INFO|Releasing lport 98f103d6-a5bc-4680-a4d5-8ced102dd381 from this chassis (sb_readonly=0)
Jan 31 03:23:00 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:00Z|00667|binding|INFO|Releasing lport fd5187fd-cce9-41da-96d2-ef75fbcbcf0f from this chassis (sb_readonly=0)
Jan 31 03:23:00 np0005603609 nova_compute[221550]: 2026-01-31 08:23:00.768 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:01.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:02.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e347 e347: 3 total, 3 up, 3 in
Jan 31 03:23:02 np0005603609 nova_compute[221550]: 2026-01-31 08:23:02.362 221554 DEBUG oslo_concurrency.lockutils [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "38deb482-bd85-4fdf-b2da-2725bffd8f43" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:02 np0005603609 nova_compute[221550]: 2026-01-31 08:23:02.363 221554 DEBUG oslo_concurrency.lockutils [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:02 np0005603609 nova_compute[221550]: 2026-01-31 08:23:02.363 221554 DEBUG oslo_concurrency.lockutils [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:02 np0005603609 nova_compute[221550]: 2026-01-31 08:23:02.363 221554 DEBUG oslo_concurrency.lockutils [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:02 np0005603609 nova_compute[221550]: 2026-01-31 08:23:02.363 221554 DEBUG oslo_concurrency.lockutils [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:02 np0005603609 nova_compute[221550]: 2026-01-31 08:23:02.365 221554 INFO nova.compute.manager [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Terminating instance#033[00m
Jan 31 03:23:02 np0005603609 nova_compute[221550]: 2026-01-31 08:23:02.366 221554 DEBUG nova.compute.manager [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:23:02 np0005603609 kernel: tap1f25515c-3e (unregistering): left promiscuous mode
Jan 31 03:23:02 np0005603609 NetworkManager[49064]: <info>  [1769847782.4202] device (tap1f25515c-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:23:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:02Z|00668|binding|INFO|Releasing lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a from this chassis (sb_readonly=0)
Jan 31 03:23:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:02Z|00669|binding|INFO|Setting lport 1f25515c-3e82-445e-baad-bc0e3a0d6f1a down in Southbound
Jan 31 03:23:02 np0005603609 nova_compute[221550]: 2026-01-31 08:23:02.463 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:02Z|00670|binding|INFO|Removing iface tap1f25515c-3e ovn-installed in OVS
Jan 31 03:23:02 np0005603609 nova_compute[221550]: 2026-01-31 08:23:02.469 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:02 np0005603609 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Jan 31 03:23:02 np0005603609 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000008f.scope: Consumed 17.674s CPU time.
Jan 31 03:23:02 np0005603609 systemd-machined[190912]: Machine qemu-75-instance-0000008f terminated.
Jan 31 03:23:02 np0005603609 nova_compute[221550]: 2026-01-31 08:23:02.604 221554 INFO nova.virt.libvirt.driver [-] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Instance destroyed successfully.#033[00m
Jan 31 03:23:02 np0005603609 nova_compute[221550]: 2026-01-31 08:23:02.604 221554 DEBUG nova.objects.instance [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'resources' on Instance uuid 38deb482-bd85-4fdf-b2da-2725bffd8f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:23:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:03.028 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:68:cd 10.100.0.8'], port_security=['fa:16:3e:f6:68:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '38deb482-bd85-4fdf-b2da-2725bffd8f43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '8', 'neutron:security_group_ids': '0a8345fd-717b-4084-912f-0c496810f08f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=1f25515c-3e82-445e-baad-bc0e3a0d6f1a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:23:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:03.029 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 1f25515c-3e82-445e-baad-bc0e3a0d6f1a in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 unbound from our chassis#033[00m
Jan 31 03:23:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:03.031 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e03fc320-c87d-42d2-a772-ec94aeb05209, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:23:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:03.032 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5c755a-4f40-4aad-a056-77e295f5c87b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:03.032 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 namespace which is not needed anymore#033[00m
Jan 31 03:23:03 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[280746]: [NOTICE]   (280750) : haproxy version is 2.8.14-c23fe91
Jan 31 03:23:03 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[280746]: [NOTICE]   (280750) : path to executable is /usr/sbin/haproxy
Jan 31 03:23:03 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[280746]: [WARNING]  (280750) : Exiting Master process...
Jan 31 03:23:03 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[280746]: [WARNING]  (280750) : Exiting Master process...
Jan 31 03:23:03 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[280746]: [ALERT]    (280750) : Current worker (280752) exited with code 143 (Terminated)
Jan 31 03:23:03 np0005603609 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[280746]: [WARNING]  (280750) : All workers exited. Exiting... (0)
Jan 31 03:23:03 np0005603609 systemd[1]: libpod-095f49771aad51996f6602620d15bd3958da104d48a515727930d8bc1667164b.scope: Deactivated successfully.
Jan 31 03:23:03 np0005603609 podman[283325]: 2026-01-31 08:23:03.1593449 +0000 UTC m=+0.037976523 container died 095f49771aad51996f6602620d15bd3958da104d48a515727930d8bc1667164b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:23:03 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-095f49771aad51996f6602620d15bd3958da104d48a515727930d8bc1667164b-userdata-shm.mount: Deactivated successfully.
Jan 31 03:23:03 np0005603609 systemd[1]: var-lib-containers-storage-overlay-0e6784633f164870a1833c604116ad4247b565c39921d36be2e13bb2704b3de7-merged.mount: Deactivated successfully.
Jan 31 03:23:03 np0005603609 podman[283325]: 2026-01-31 08:23:03.190868907 +0000 UTC m=+0.069500530 container cleanup 095f49771aad51996f6602620d15bd3958da104d48a515727930d8bc1667164b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:23:03 np0005603609 systemd[1]: libpod-conmon-095f49771aad51996f6602620d15bd3958da104d48a515727930d8bc1667164b.scope: Deactivated successfully.
Jan 31 03:23:03 np0005603609 podman[283401]: 2026-01-31 08:23:03.246649876 +0000 UTC m=+0.040363180 container remove 095f49771aad51996f6602620d15bd3958da104d48a515727930d8bc1667164b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:23:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:03.251 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e37f2eb5-7809-44bb-ba67-d43f22a9df9c]: (4, ('Sat Jan 31 08:23:03 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 (095f49771aad51996f6602620d15bd3958da104d48a515727930d8bc1667164b)\n095f49771aad51996f6602620d15bd3958da104d48a515727930d8bc1667164b\nSat Jan 31 08:23:03 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 (095f49771aad51996f6602620d15bd3958da104d48a515727930d8bc1667164b)\n095f49771aad51996f6602620d15bd3958da104d48a515727930d8bc1667164b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:03.253 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8a127b94-b82d-420c-8060-b2fba2c15b0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:03.254 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03fc320-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:03 np0005603609 nova_compute[221550]: 2026-01-31 08:23:03.256 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:03 np0005603609 kernel: tape03fc320-c0: left promiscuous mode
Jan 31 03:23:03 np0005603609 nova_compute[221550]: 2026-01-31 08:23:03.266 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:03 np0005603609 nova_compute[221550]: 2026-01-31 08:23:03.268 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:03.269 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[aafed36c-ea81-4544-8e8f-5e216d0888a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:03.283 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[25f7f8e2-d44d-4b4a-af4d-662a9d74e86d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:03.285 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[14f93224-ef9c-46d1-a2bc-a92f3a022ddb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:03.296 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d0b49413-c805-48ef-a7f5-a33704e87713]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788978, 'reachable_time': 17427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283427, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:03 np0005603609 systemd[1]: run-netns-ovnmeta\x2de03fc320\x2dc87d\x2d42d2\x2da772\x2dec94aeb05209.mount: Deactivated successfully.
Jan 31 03:23:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:03.301 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:23:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:03.302 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[3996fe22-42f8-4855-a031-4919a0803f70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:23:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:23:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:23:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:03.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:03 np0005603609 nova_compute[221550]: 2026-01-31 08:23:03.950 221554 DEBUG nova.virt.libvirt.vif [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-549789768',display_name='tempest-ServerRescueNegativeTestJSON-server-549789768',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-549789768',id=143,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:20:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4849ff916e1b4e2aa162faaf2c0717a2',ramdisk_id='',reservation_id='r-7j661ndk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1784809431',owner_user_name='tempest-ServerRescueNegativeTestJSON-1784809431-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:20:51Z,user_data=None,user_id='6788b0883cb348719d1222b1c9483be2',uuid=38deb482-bd85-4fdf-b2da-2725bffd8f43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:23:03 np0005603609 nova_compute[221550]: 2026-01-31 08:23:03.951 221554 DEBUG nova.network.os_vif_util [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converting VIF {"id": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "address": "fa:16:3e:f6:68:cd", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f25515c-3e", "ovs_interfaceid": "1f25515c-3e82-445e-baad-bc0e3a0d6f1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:23:03 np0005603609 nova_compute[221550]: 2026-01-31 08:23:03.952 221554 DEBUG nova.network.os_vif_util [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:68:cd,bridge_name='br-int',has_traffic_filtering=True,id=1f25515c-3e82-445e-baad-bc0e3a0d6f1a,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f25515c-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:23:03 np0005603609 nova_compute[221550]: 2026-01-31 08:23:03.952 221554 DEBUG os_vif [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:68:cd,bridge_name='br-int',has_traffic_filtering=True,id=1f25515c-3e82-445e-baad-bc0e3a0d6f1a,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f25515c-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:23:03 np0005603609 nova_compute[221550]: 2026-01-31 08:23:03.954 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:03 np0005603609 nova_compute[221550]: 2026-01-31 08:23:03.955 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f25515c-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:03 np0005603609 nova_compute[221550]: 2026-01-31 08:23:03.958 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:03 np0005603609 nova_compute[221550]: 2026-01-31 08:23:03.960 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:23:03 np0005603609 nova_compute[221550]: 2026-01-31 08:23:03.963 221554 INFO os_vif [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:68:cd,bridge_name='br-int',has_traffic_filtering=True,id=1f25515c-3e82-445e-baad-bc0e3a0d6f1a,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f25515c-3e')#033[00m
Jan 31 03:23:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:04.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:04 np0005603609 nova_compute[221550]: 2026-01-31 08:23:04.462 221554 INFO nova.virt.libvirt.driver [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Deleting instance files /var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43_del#033[00m
Jan 31 03:23:04 np0005603609 nova_compute[221550]: 2026-01-31 08:23:04.463 221554 INFO nova.virt.libvirt.driver [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Deletion of /var/lib/nova/instances/38deb482-bd85-4fdf-b2da-2725bffd8f43_del complete#033[00m
Jan 31 03:23:04 np0005603609 nova_compute[221550]: 2026-01-31 08:23:04.595 221554 DEBUG nova.compute.manager [req-d2ca4a6c-fb4f-4927-b72a-f8c9aafa4312 req-ef6c8e09-b8de-4501-93fb-38fd8edd346a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received event network-vif-unplugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:23:04 np0005603609 nova_compute[221550]: 2026-01-31 08:23:04.596 221554 DEBUG oslo_concurrency.lockutils [req-d2ca4a6c-fb4f-4927-b72a-f8c9aafa4312 req-ef6c8e09-b8de-4501-93fb-38fd8edd346a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:04 np0005603609 nova_compute[221550]: 2026-01-31 08:23:04.596 221554 DEBUG oslo_concurrency.lockutils [req-d2ca4a6c-fb4f-4927-b72a-f8c9aafa4312 req-ef6c8e09-b8de-4501-93fb-38fd8edd346a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:04 np0005603609 nova_compute[221550]: 2026-01-31 08:23:04.596 221554 DEBUG oslo_concurrency.lockutils [req-d2ca4a6c-fb4f-4927-b72a-f8c9aafa4312 req-ef6c8e09-b8de-4501-93fb-38fd8edd346a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:04 np0005603609 nova_compute[221550]: 2026-01-31 08:23:04.596 221554 DEBUG nova.compute.manager [req-d2ca4a6c-fb4f-4927-b72a-f8c9aafa4312 req-ef6c8e09-b8de-4501-93fb-38fd8edd346a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] No waiting events found dispatching network-vif-unplugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:23:04 np0005603609 nova_compute[221550]: 2026-01-31 08:23:04.596 221554 DEBUG nova.compute.manager [req-d2ca4a6c-fb4f-4927-b72a-f8c9aafa4312 req-ef6c8e09-b8de-4501-93fb-38fd8edd346a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received event network-vif-unplugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:23:04 np0005603609 nova_compute[221550]: 2026-01-31 08:23:04.607 221554 INFO nova.compute.manager [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Took 2.24 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:23:04 np0005603609 nova_compute[221550]: 2026-01-31 08:23:04.608 221554 DEBUG oslo.service.loopingcall [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:23:04 np0005603609 nova_compute[221550]: 2026-01-31 08:23:04.608 221554 DEBUG nova.compute.manager [-] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:23:04 np0005603609 nova_compute[221550]: 2026-01-31 08:23:04.608 221554 DEBUG nova.network.neutron [-] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:23:05 np0005603609 nova_compute[221550]: 2026-01-31 08:23:05.353 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:05.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:06.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:06 np0005603609 nova_compute[221550]: 2026-01-31 08:23:06.456 221554 DEBUG nova.network.neutron [-] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:23:06 np0005603609 nova_compute[221550]: 2026-01-31 08:23:06.519 221554 INFO nova.compute.manager [-] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Took 1.91 seconds to deallocate network for instance.#033[00m
Jan 31 03:23:06 np0005603609 nova_compute[221550]: 2026-01-31 08:23:06.607 221554 DEBUG nova.compute.manager [req-64fd573e-9231-42c4-a6d2-bb2d587b5632 req-63cbeedf-b978-419b-bec4-3ad990c04ae8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received event network-vif-deleted-1f25515c-3e82-445e-baad-bc0e3a0d6f1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:23:06 np0005603609 nova_compute[221550]: 2026-01-31 08:23:06.773 221554 DEBUG nova.compute.manager [req-a16da696-a512-4ec3-8ac9-5b249fd24e0a req-f58233b5-db70-4b8b-983f-ccf05bd88f7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:23:06 np0005603609 nova_compute[221550]: 2026-01-31 08:23:06.773 221554 DEBUG oslo_concurrency.lockutils [req-a16da696-a512-4ec3-8ac9-5b249fd24e0a req-f58233b5-db70-4b8b-983f-ccf05bd88f7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:06 np0005603609 nova_compute[221550]: 2026-01-31 08:23:06.773 221554 DEBUG oslo_concurrency.lockutils [req-a16da696-a512-4ec3-8ac9-5b249fd24e0a req-f58233b5-db70-4b8b-983f-ccf05bd88f7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:06 np0005603609 nova_compute[221550]: 2026-01-31 08:23:06.774 221554 DEBUG oslo_concurrency.lockutils [req-a16da696-a512-4ec3-8ac9-5b249fd24e0a req-f58233b5-db70-4b8b-983f-ccf05bd88f7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:06 np0005603609 nova_compute[221550]: 2026-01-31 08:23:06.774 221554 DEBUG nova.compute.manager [req-a16da696-a512-4ec3-8ac9-5b249fd24e0a req-f58233b5-db70-4b8b-983f-ccf05bd88f7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] No waiting events found dispatching network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:23:06 np0005603609 nova_compute[221550]: 2026-01-31 08:23:06.774 221554 WARNING nova.compute.manager [req-a16da696-a512-4ec3-8ac9-5b249fd24e0a req-f58233b5-db70-4b8b-983f-ccf05bd88f7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Received unexpected event network-vif-plugged-1f25515c-3e82-445e-baad-bc0e3a0d6f1a for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:23:06 np0005603609 nova_compute[221550]: 2026-01-31 08:23:06.803 221554 DEBUG oslo_concurrency.lockutils [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:06 np0005603609 nova_compute[221550]: 2026-01-31 08:23:06.804 221554 DEBUG oslo_concurrency.lockutils [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:06 np0005603609 nova_compute[221550]: 2026-01-31 08:23:06.917 221554 DEBUG oslo_concurrency.processutils [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:23:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:23:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4137991365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:23:07 np0005603609 nova_compute[221550]: 2026-01-31 08:23:07.346 221554 DEBUG oslo_concurrency.processutils [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:23:07 np0005603609 nova_compute[221550]: 2026-01-31 08:23:07.354 221554 DEBUG nova.compute.provider_tree [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:23:07 np0005603609 nova_compute[221550]: 2026-01-31 08:23:07.385 221554 DEBUG nova.scheduler.client.report [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:23:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:07.515 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:07.516 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:07.516 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:07 np0005603609 nova_compute[221550]: 2026-01-31 08:23:07.517 221554 DEBUG oslo_concurrency.lockutils [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:07 np0005603609 nova_compute[221550]: 2026-01-31 08:23:07.596 221554 INFO nova.scheduler.client.report [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Deleted allocations for instance 38deb482-bd85-4fdf-b2da-2725bffd8f43#033[00m
Jan 31 03:23:07 np0005603609 nova_compute[221550]: 2026-01-31 08:23:07.771 221554 DEBUG oslo_concurrency.lockutils [None req-8190f5f6-24c0-4749-b6b4-48dcecda147d 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "38deb482-bd85-4fdf-b2da-2725bffd8f43" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:07.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:08.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:08Z|00671|binding|INFO|Releasing lport 98f103d6-a5bc-4680-a4d5-8ced102dd381 from this chassis (sb_readonly=0)
Jan 31 03:23:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:08Z|00672|binding|INFO|Releasing lport fd5187fd-cce9-41da-96d2-ef75fbcbcf0f from this chassis (sb_readonly=0)
Jan 31 03:23:08 np0005603609 nova_compute[221550]: 2026-01-31 08:23:08.812 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:08 np0005603609 nova_compute[221550]: 2026-01-31 08:23:08.958 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:09 np0005603609 nova_compute[221550]: 2026-01-31 08:23:09.468 221554 DEBUG oslo_concurrency.lockutils [None req-44c8902f-b41c-4ccc-976d-63ae32e39749 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:09 np0005603609 nova_compute[221550]: 2026-01-31 08:23:09.469 221554 DEBUG oslo_concurrency.lockutils [None req-44c8902f-b41c-4ccc-976d-63ae32e39749 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:09 np0005603609 nova_compute[221550]: 2026-01-31 08:23:09.493 221554 INFO nova.compute.manager [None req-44c8902f-b41c-4ccc-976d-63ae32e39749 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Detaching volume 5cf392c9-4c4c-4680-9529-6ff296b6467e#033[00m
Jan 31 03:23:09 np0005603609 nova_compute[221550]: 2026-01-31 08:23:09.789 221554 INFO nova.virt.block_device [None req-44c8902f-b41c-4ccc-976d-63ae32e39749 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Attempting to driver detach volume 5cf392c9-4c4c-4680-9529-6ff296b6467e from mountpoint /dev/vdb#033[00m
Jan 31 03:23:09 np0005603609 nova_compute[221550]: 2026-01-31 08:23:09.796 221554 DEBUG nova.virt.libvirt.driver [None req-44c8902f-b41c-4ccc-976d-63ae32e39749 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Attempting to detach device vdb from instance 20cc7040-fd06-49c7-8e68-41cb74e67e9a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:23:09 np0005603609 nova_compute[221550]: 2026-01-31 08:23:09.797 221554 DEBUG nova.virt.libvirt.guest [None req-44c8902f-b41c-4ccc-976d-63ae32e39749 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:23:09 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:23:09 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-5cf392c9-4c4c-4680-9529-6ff296b6467e">
Jan 31 03:23:09 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:23:09 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:23:09 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:23:09 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:23:09 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:23:09 np0005603609 nova_compute[221550]:  <serial>5cf392c9-4c4c-4680-9529-6ff296b6467e</serial>
Jan 31 03:23:09 np0005603609 nova_compute[221550]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 31 03:23:09 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:23:09 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:23:09 np0005603609 nova_compute[221550]: 2026-01-31 08:23:09.839 221554 INFO nova.virt.libvirt.driver [None req-44c8902f-b41c-4ccc-976d-63ae32e39749 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Successfully detached device vdb from instance 20cc7040-fd06-49c7-8e68-41cb74e67e9a from the persistent domain config.#033[00m
Jan 31 03:23:09 np0005603609 nova_compute[221550]: 2026-01-31 08:23:09.839 221554 DEBUG nova.virt.libvirt.driver [None req-44c8902f-b41c-4ccc-976d-63ae32e39749 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 20cc7040-fd06-49c7-8e68-41cb74e67e9a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 03:23:09 np0005603609 nova_compute[221550]: 2026-01-31 08:23:09.839 221554 DEBUG nova.virt.libvirt.guest [None req-44c8902f-b41c-4ccc-976d-63ae32e39749 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:23:09 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:23:09 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-5cf392c9-4c4c-4680-9529-6ff296b6467e">
Jan 31 03:23:09 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:23:09 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:23:09 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:23:09 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:23:09 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:23:09 np0005603609 nova_compute[221550]:  <serial>5cf392c9-4c4c-4680-9529-6ff296b6467e</serial>
Jan 31 03:23:09 np0005603609 nova_compute[221550]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 31 03:23:09 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:23:09 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:23:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:09.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:09 np0005603609 nova_compute[221550]: 2026-01-31 08:23:09.951 221554 DEBUG nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Received event <DeviceRemovedEvent: 1769847789.9508562, 20cc7040-fd06-49c7-8e68-41cb74e67e9a => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 03:23:09 np0005603609 nova_compute[221550]: 2026-01-31 08:23:09.952 221554 DEBUG nova.virt.libvirt.driver [None req-44c8902f-b41c-4ccc-976d-63ae32e39749 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 20cc7040-fd06-49c7-8e68-41cb74e67e9a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 03:23:09 np0005603609 nova_compute[221550]: 2026-01-31 08:23:09.955 221554 INFO nova.virt.libvirt.driver [None req-44c8902f-b41c-4ccc-976d-63ae32e39749 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Successfully detached device vdb from instance 20cc7040-fd06-49c7-8e68-41cb74e67e9a from the live domain config.#033[00m
Jan 31 03:23:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:10.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:10 np0005603609 nova_compute[221550]: 2026-01-31 08:23:10.346 221554 DEBUG nova.objects.instance [None req-44c8902f-b41c-4ccc-976d-63ae32e39749 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'flavor' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:23:10 np0005603609 nova_compute[221550]: 2026-01-31 08:23:10.355 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:10 np0005603609 nova_compute[221550]: 2026-01-31 08:23:10.456 221554 DEBUG oslo_concurrency.lockutils [None req-44c8902f-b41c-4ccc-976d-63ae32e39749 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:11 np0005603609 nova_compute[221550]: 2026-01-31 08:23:11.474 221554 DEBUG oslo_concurrency.lockutils [None req-a19ef140-5540-47b6-8e96-c38616d96bdb 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:11 np0005603609 nova_compute[221550]: 2026-01-31 08:23:11.475 221554 DEBUG oslo_concurrency.lockutils [None req-a19ef140-5540-47b6-8e96-c38616d96bdb 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:11 np0005603609 nova_compute[221550]: 2026-01-31 08:23:11.475 221554 DEBUG nova.compute.manager [None req-a19ef140-5540-47b6-8e96-c38616d96bdb 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:23:11 np0005603609 nova_compute[221550]: 2026-01-31 08:23:11.479 221554 DEBUG nova.compute.manager [None req-a19ef140-5540-47b6-8e96-c38616d96bdb 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 31 03:23:11 np0005603609 nova_compute[221550]: 2026-01-31 08:23:11.480 221554 DEBUG nova.objects.instance [None req-a19ef140-5540-47b6-8e96-c38616d96bdb 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'flavor' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:23:11 np0005603609 nova_compute[221550]: 2026-01-31 08:23:11.522 221554 DEBUG nova.virt.libvirt.driver [None req-a19ef140-5540-47b6-8e96-c38616d96bdb 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:23:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:11.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:12.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:23:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:23:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:13Z|00673|binding|INFO|Releasing lport 98f103d6-a5bc-4680-a4d5-8ced102dd381 from this chassis (sb_readonly=0)
Jan 31 03:23:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:13Z|00674|binding|INFO|Releasing lport fd5187fd-cce9-41da-96d2-ef75fbcbcf0f from this chassis (sb_readonly=0)
Jan 31 03:23:13 np0005603609 nova_compute[221550]: 2026-01-31 08:23:13.474 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:13.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:13 np0005603609 nova_compute[221550]: 2026-01-31 08:23:13.961 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:14.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:14 np0005603609 kernel: tapc66c4603-0e (unregistering): left promiscuous mode
Jan 31 03:23:14 np0005603609 NetworkManager[49064]: <info>  [1769847794.5047] device (tapc66c4603-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:23:14 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:14Z|00675|binding|INFO|Releasing lport c66c4603-0eab-4f51-ad10-8185e33051dd from this chassis (sb_readonly=0)
Jan 31 03:23:14 np0005603609 nova_compute[221550]: 2026-01-31 08:23:14.509 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:14 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:14Z|00676|binding|INFO|Setting lport c66c4603-0eab-4f51-ad10-8185e33051dd down in Southbound
Jan 31 03:23:14 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:14Z|00677|binding|INFO|Removing iface tapc66c4603-0e ovn-installed in OVS
Jan 31 03:23:14 np0005603609 nova_compute[221550]: 2026-01-31 08:23:14.512 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:14 np0005603609 nova_compute[221550]: 2026-01-31 08:23:14.517 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:14.528 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:71:84 10.100.0.4'], port_security=['fa:16:3e:11:71:84 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '20cc7040-fd06-49c7-8e68-41cb74e67e9a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e39ff1d-f815-485f-8f43-698b31333bba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fdf18f1faf4846e2a6e2eab4ac2aec02', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2dffb7ca-42ef-40e9-a7b7-090e002450c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.225', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=917c1732-02f7-4e55-ac45-7e04e3147221, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=c66c4603-0eab-4f51-ad10-8185e33051dd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:23:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:14.530 140058 INFO neutron.agent.ovn.metadata.agent [-] Port c66c4603-0eab-4f51-ad10-8185e33051dd in datapath 1e39ff1d-f815-485f-8f43-698b31333bba unbound from our chassis#033[00m
Jan 31 03:23:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:14.531 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1e39ff1d-f815-485f-8f43-698b31333bba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:23:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:14.533 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a89d5f43-dad8-4010-a9fb-5288649143c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:14.533 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba namespace which is not needed anymore#033[00m
Jan 31 03:23:14 np0005603609 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000096.scope: Deactivated successfully.
Jan 31 03:23:14 np0005603609 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000096.scope: Consumed 14.487s CPU time.
Jan 31 03:23:14 np0005603609 systemd-machined[190912]: Machine qemu-78-instance-00000096 terminated.
Jan 31 03:23:14 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[282363]: [NOTICE]   (282367) : haproxy version is 2.8.14-c23fe91
Jan 31 03:23:14 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[282363]: [NOTICE]   (282367) : path to executable is /usr/sbin/haproxy
Jan 31 03:23:14 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[282363]: [WARNING]  (282367) : Exiting Master process...
Jan 31 03:23:14 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[282363]: [ALERT]    (282367) : Current worker (282369) exited with code 143 (Terminated)
Jan 31 03:23:14 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[282363]: [WARNING]  (282367) : All workers exited. Exiting... (0)
Jan 31 03:23:14 np0005603609 systemd[1]: libpod-092b6039b1926bbcad4d225a5f80693e8c2f7ebe05b688019025d3818bc2a8ca.scope: Deactivated successfully.
Jan 31 03:23:14 np0005603609 podman[283576]: 2026-01-31 08:23:14.684692234 +0000 UTC m=+0.081990130 container died 092b6039b1926bbcad4d225a5f80693e8c2f7ebe05b688019025d3818bc2a8ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:23:14 np0005603609 nova_compute[221550]: 2026-01-31 08:23:14.742 221554 INFO nova.virt.libvirt.driver [None req-a19ef140-5540-47b6-8e96-c38616d96bdb 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:23:14 np0005603609 nova_compute[221550]: 2026-01-31 08:23:14.747 221554 INFO nova.virt.libvirt.driver [-] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Instance destroyed successfully.#033[00m
Jan 31 03:23:14 np0005603609 nova_compute[221550]: 2026-01-31 08:23:14.747 221554 DEBUG nova.objects.instance [None req-a19ef140-5540-47b6-8e96-c38616d96bdb 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'numa_topology' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:23:14 np0005603609 nova_compute[221550]: 2026-01-31 08:23:14.791 221554 DEBUG nova.compute.manager [None req-a19ef140-5540-47b6-8e96-c38616d96bdb 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:23:14 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-092b6039b1926bbcad4d225a5f80693e8c2f7ebe05b688019025d3818bc2a8ca-userdata-shm.mount: Deactivated successfully.
Jan 31 03:23:14 np0005603609 systemd[1]: var-lib-containers-storage-overlay-7ddb7c9e2a5d5085024f10ced2225b0ad9e26478d2c0d03207f3359bdf1ed446-merged.mount: Deactivated successfully.
Jan 31 03:23:14 np0005603609 nova_compute[221550]: 2026-01-31 08:23:14.932 221554 DEBUG oslo_concurrency.lockutils [None req-a19ef140-5540-47b6-8e96-c38616d96bdb 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:14 np0005603609 nova_compute[221550]: 2026-01-31 08:23:14.980 221554 DEBUG nova.compute.manager [req-fec18b12-ff89-431d-8532-7406bb1deb36 req-f7020206-8559-49aa-a4bc-3b11029ff3eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received event network-vif-unplugged-c66c4603-0eab-4f51-ad10-8185e33051dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:23:14 np0005603609 nova_compute[221550]: 2026-01-31 08:23:14.980 221554 DEBUG oslo_concurrency.lockutils [req-fec18b12-ff89-431d-8532-7406bb1deb36 req-f7020206-8559-49aa-a4bc-3b11029ff3eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:14 np0005603609 nova_compute[221550]: 2026-01-31 08:23:14.981 221554 DEBUG oslo_concurrency.lockutils [req-fec18b12-ff89-431d-8532-7406bb1deb36 req-f7020206-8559-49aa-a4bc-3b11029ff3eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:14 np0005603609 nova_compute[221550]: 2026-01-31 08:23:14.981 221554 DEBUG oslo_concurrency.lockutils [req-fec18b12-ff89-431d-8532-7406bb1deb36 req-f7020206-8559-49aa-a4bc-3b11029ff3eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:14 np0005603609 nova_compute[221550]: 2026-01-31 08:23:14.981 221554 DEBUG nova.compute.manager [req-fec18b12-ff89-431d-8532-7406bb1deb36 req-f7020206-8559-49aa-a4bc-3b11029ff3eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] No waiting events found dispatching network-vif-unplugged-c66c4603-0eab-4f51-ad10-8185e33051dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:23:14 np0005603609 nova_compute[221550]: 2026-01-31 08:23:14.981 221554 WARNING nova.compute.manager [req-fec18b12-ff89-431d-8532-7406bb1deb36 req-f7020206-8559-49aa-a4bc-3b11029ff3eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received unexpected event network-vif-unplugged-c66c4603-0eab-4f51-ad10-8185e33051dd for instance with vm_state stopped and task_state None.#033[00m
Jan 31 03:23:15 np0005603609 podman[283576]: 2026-01-31 08:23:15.119766642 +0000 UTC m=+0.517064478 container cleanup 092b6039b1926bbcad4d225a5f80693e8c2f7ebe05b688019025d3818bc2a8ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:23:15 np0005603609 systemd[1]: libpod-conmon-092b6039b1926bbcad4d225a5f80693e8c2f7ebe05b688019025d3818bc2a8ca.scope: Deactivated successfully.
Jan 31 03:23:15 np0005603609 nova_compute[221550]: 2026-01-31 08:23:15.357 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:15 np0005603609 podman[283618]: 2026-01-31 08:23:15.667234938 +0000 UTC m=+0.531369961 container remove 092b6039b1926bbcad4d225a5f80693e8c2f7ebe05b688019025d3818bc2a8ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:23:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:15.671 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[177eb80e-eee2-4eab-819d-7f8be823e3b8]: (4, ('Sat Jan 31 08:23:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba (092b6039b1926bbcad4d225a5f80693e8c2f7ebe05b688019025d3818bc2a8ca)\n092b6039b1926bbcad4d225a5f80693e8c2f7ebe05b688019025d3818bc2a8ca\nSat Jan 31 08:23:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba (092b6039b1926bbcad4d225a5f80693e8c2f7ebe05b688019025d3818bc2a8ca)\n092b6039b1926bbcad4d225a5f80693e8c2f7ebe05b688019025d3818bc2a8ca\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:15.672 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3a7e1779-105f-4ec0-ab2d-95f06185f585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:15.673 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e39ff1d-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:15 np0005603609 nova_compute[221550]: 2026-01-31 08:23:15.675 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:15 np0005603609 kernel: tap1e39ff1d-f0: left promiscuous mode
Jan 31 03:23:15 np0005603609 nova_compute[221550]: 2026-01-31 08:23:15.687 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:15.689 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1620e972-d0e8-483f-8c15-d9e8259e8952]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:15.709 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ebfbb9dc-4842-455c-a07b-d1e33a25f141]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:15.711 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[138be574-65b5-408c-a8a4-8639cb93880d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:15.728 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c1bbc3-aaec-462c-afcc-6f918ac3e9ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799391, 'reachable_time': 19485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283639, 'error': None, 'target': 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:15 np0005603609 systemd[1]: run-netns-ovnmeta\x2d1e39ff1d\x2df815\x2d485f\x2d8f43\x2d698b31333bba.mount: Deactivated successfully.
Jan 31 03:23:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:15.732 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:23:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:15.732 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[3159a5d6-4c29-4f54-841d-7767cc9f382c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:15.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:16.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:17 np0005603609 nova_compute[221550]: 2026-01-31 08:23:17.238 221554 DEBUG nova.compute.manager [req-ba3addbf-3db3-4ae2-bf2c-2e0398493e46 req-7363ddb0-dd23-4e31-893f-2a5aa1341873 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:23:17 np0005603609 nova_compute[221550]: 2026-01-31 08:23:17.239 221554 DEBUG oslo_concurrency.lockutils [req-ba3addbf-3db3-4ae2-bf2c-2e0398493e46 req-7363ddb0-dd23-4e31-893f-2a5aa1341873 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:17 np0005603609 nova_compute[221550]: 2026-01-31 08:23:17.239 221554 DEBUG oslo_concurrency.lockutils [req-ba3addbf-3db3-4ae2-bf2c-2e0398493e46 req-7363ddb0-dd23-4e31-893f-2a5aa1341873 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:17 np0005603609 nova_compute[221550]: 2026-01-31 08:23:17.241 221554 DEBUG oslo_concurrency.lockutils [req-ba3addbf-3db3-4ae2-bf2c-2e0398493e46 req-7363ddb0-dd23-4e31-893f-2a5aa1341873 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:17 np0005603609 nova_compute[221550]: 2026-01-31 08:23:17.241 221554 DEBUG nova.compute.manager [req-ba3addbf-3db3-4ae2-bf2c-2e0398493e46 req-7363ddb0-dd23-4e31-893f-2a5aa1341873 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] No waiting events found dispatching network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:23:17 np0005603609 nova_compute[221550]: 2026-01-31 08:23:17.242 221554 WARNING nova.compute.manager [req-ba3addbf-3db3-4ae2-bf2c-2e0398493e46 req-7363ddb0-dd23-4e31-893f-2a5aa1341873 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received unexpected event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd for instance with vm_state stopped and task_state None.#033[00m
Jan 31 03:23:17 np0005603609 nova_compute[221550]: 2026-01-31 08:23:17.603 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847782.6029384, 38deb482-bd85-4fdf-b2da-2725bffd8f43 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:23:17 np0005603609 nova_compute[221550]: 2026-01-31 08:23:17.604 221554 INFO nova.compute.manager [-] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:23:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:17.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:18.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:19 np0005603609 nova_compute[221550]: 2026-01-31 08:23:19.005 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:19.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:20.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:20 np0005603609 nova_compute[221550]: 2026-01-31 08:23:20.038 221554 DEBUG nova.compute.manager [None req-113de435-f7bd-4a23-bee7-ea25e7a2cbc1 - - - - - -] [instance: 38deb482-bd85-4fdf-b2da-2725bffd8f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:23:20 np0005603609 nova_compute[221550]: 2026-01-31 08:23:20.361 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:20 np0005603609 nova_compute[221550]: 2026-01-31 08:23:20.888 221554 DEBUG nova.objects.instance [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'flavor' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:23:20 np0005603609 nova_compute[221550]: 2026-01-31 08:23:20.931 221554 DEBUG oslo_concurrency.lockutils [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "refresh_cache-20cc7040-fd06-49c7-8e68-41cb74e67e9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:23:20 np0005603609 nova_compute[221550]: 2026-01-31 08:23:20.931 221554 DEBUG oslo_concurrency.lockutils [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquired lock "refresh_cache-20cc7040-fd06-49c7-8e68-41cb74e67e9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:23:20 np0005603609 nova_compute[221550]: 2026-01-31 08:23:20.931 221554 DEBUG nova.network.neutron [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:23:20 np0005603609 nova_compute[221550]: 2026-01-31 08:23:20.932 221554 DEBUG nova.objects.instance [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'info_cache' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:23:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:21.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:22.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.158328) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802158362, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1020, "num_deletes": 251, "total_data_size": 2059275, "memory_usage": 2079224, "flush_reason": "Manual Compaction"}
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802168782, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 863214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62424, "largest_seqno": 63439, "table_properties": {"data_size": 859378, "index_size": 1488, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10386, "raw_average_key_size": 21, "raw_value_size": 851117, "raw_average_value_size": 1729, "num_data_blocks": 66, "num_entries": 492, "num_filter_entries": 492, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847728, "oldest_key_time": 1769847728, "file_creation_time": 1769847802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 10541 microseconds, and 2509 cpu microseconds.
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.168854) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 863214 bytes OK
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.168890) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.172726) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.172785) EVENT_LOG_v1 {"time_micros": 1769847802172772, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.172822) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 2054248, prev total WAL file size 2054961, number of live WAL files 2.
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.177007) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303039' seq:72057594037927935, type:22 .. '6D6772737461740032323630' seq:0, type:0; will stop at (end)
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(842KB)], [123(13MB)]
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802177086, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 14523336, "oldest_snapshot_seqno": -1}
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 8703 keys, 11173778 bytes, temperature: kUnknown
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802325368, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 11173778, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11117997, "index_size": 32949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 227807, "raw_average_key_size": 26, "raw_value_size": 10965252, "raw_average_value_size": 1259, "num_data_blocks": 1270, "num_entries": 8703, "num_filter_entries": 8703, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769847802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.325732) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11173778 bytes
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.328451) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.9 rd, 75.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 13.0 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(29.8) write-amplify(12.9) OK, records in: 9189, records dropped: 486 output_compression: NoCompression
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.328488) EVENT_LOG_v1 {"time_micros": 1769847802328471, "job": 78, "event": "compaction_finished", "compaction_time_micros": 148407, "compaction_time_cpu_micros": 40393, "output_level": 6, "num_output_files": 1, "total_output_size": 11173778, "num_input_records": 9189, "num_output_records": 8703, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802328990, "job": 78, "event": "table_file_deletion", "file_number": 125}
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802331498, "job": 78, "event": "table_file_deletion", "file_number": 123}
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.176878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.331653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.331662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.331667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.331672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.331676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.332160) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802332206, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 265, "num_deletes": 251, "total_data_size": 23485, "memory_usage": 29032, "flush_reason": "Manual Compaction"}
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802335601, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 14621, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63444, "largest_seqno": 63704, "table_properties": {"data_size": 12786, "index_size": 67, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4731, "raw_average_key_size": 18, "raw_value_size": 9304, "raw_average_value_size": 35, "num_data_blocks": 3, "num_entries": 260, "num_filter_entries": 260, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847802, "oldest_key_time": 1769847802, "file_creation_time": 1769847802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 3523 microseconds, and 1225 cpu microseconds.
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.335676) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 14621 bytes OK
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.335705) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.337713) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.337749) EVENT_LOG_v1 {"time_micros": 1769847802337737, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.337777) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 21427, prev total WAL file size 21427, number of live WAL files 2.
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.338338) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(14KB)], [126(10MB)]
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802338385, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 11188399, "oldest_snapshot_seqno": -1}
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 8456 keys, 9195864 bytes, temperature: kUnknown
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802435915, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 9195864, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9143580, "index_size": 30027, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 223371, "raw_average_key_size": 26, "raw_value_size": 8996955, "raw_average_value_size": 1063, "num_data_blocks": 1139, "num_entries": 8456, "num_filter_entries": 8456, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769847802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.436201) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 9195864 bytes
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.440117) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.6 rd, 94.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.7 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(1394.2) write-amplify(628.9) OK, records in: 8963, records dropped: 507 output_compression: NoCompression
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.440172) EVENT_LOG_v1 {"time_micros": 1769847802440151, "job": 80, "event": "compaction_finished", "compaction_time_micros": 97642, "compaction_time_cpu_micros": 23933, "output_level": 6, "num_output_files": 1, "total_output_size": 9195864, "num_input_records": 8963, "num_output_records": 8456, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802440460, "job": 80, "event": "table_file_deletion", "file_number": 128}
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802442549, "job": 80, "event": "table_file_deletion", "file_number": 126}
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.338254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.442637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.442644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.442647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.442650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:22 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:23:22.442653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:23:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:23.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.036 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.004000093s ======
Jan 31 03:23:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:24.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000093s
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.045 221554 DEBUG nova.network.neutron [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Updating instance_info_cache with network_info: [{"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.071 221554 DEBUG oslo_concurrency.lockutils [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Releasing lock "refresh_cache-20cc7040-fd06-49c7-8e68-41cb74e67e9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.123 221554 INFO nova.virt.libvirt.driver [-] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Instance destroyed successfully.#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.124 221554 DEBUG nova.objects.instance [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'numa_topology' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.157 221554 DEBUG nova.objects.instance [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'resources' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.184 221554 DEBUG nova.virt.libvirt.vif [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:21:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-504299522',display_name='tempest-AttachVolumeTestJSON-server-504299522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-504299522',id=150,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBu/9yb3AwKfL+ijJ3Xo9JzrwBYsej/AsT1y29vhp9sWMO5Td0RAhZoszueDKQGZWIcDlz6V7Rc2fL6hsFrBntj+ecoex7fusa36kcOGL0EN/HOgO9QzUew3k1clrSe20A==',key_name='tempest-keypair-2098733157',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:21:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='fdf18f1faf4846e2a6e2eab4ac2aec02',ramdisk_id='',reservation_id='r-n08gz270',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-1821521720',owner_user_name='tempest-AttachVolumeTestJSON-1821521720-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:23:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3ada90dc4b77478cb4b93c63409d8537',uuid=20cc7040-fd06-49c7-8e68-41cb74e67e9a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.185 221554 DEBUG nova.network.os_vif_util [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converting VIF {"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.186 221554 DEBUG nova.network.os_vif_util [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.186 221554 DEBUG os_vif [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.188 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.188 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc66c4603-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.190 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.192 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.195 221554 INFO os_vif [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e')#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.204 221554 DEBUG nova.virt.libvirt.driver [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Start _get_guest_xml network_info=[{"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.208 221554 WARNING nova.virt.libvirt.driver [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.214 221554 DEBUG nova.virt.libvirt.host [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.216 221554 DEBUG nova.virt.libvirt.host [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.221 221554 DEBUG nova.virt.libvirt.host [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.222 221554 DEBUG nova.virt.libvirt.host [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.223 221554 DEBUG nova.virt.libvirt.driver [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.223 221554 DEBUG nova.virt.hardware [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.224 221554 DEBUG nova.virt.hardware [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.224 221554 DEBUG nova.virt.hardware [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.224 221554 DEBUG nova.virt.hardware [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.225 221554 DEBUG nova.virt.hardware [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.225 221554 DEBUG nova.virt.hardware [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.225 221554 DEBUG nova.virt.hardware [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.225 221554 DEBUG nova.virt.hardware [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.226 221554 DEBUG nova.virt.hardware [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.226 221554 DEBUG nova.virt.hardware [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.226 221554 DEBUG nova.virt.hardware [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.226 221554 DEBUG nova.objects.instance [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.375 221554 DEBUG oslo_concurrency.processutils [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:23:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:23:24 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1503404094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:23:24 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:23:24 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.790 221554 DEBUG oslo_concurrency.processutils [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:23:24 np0005603609 nova_compute[221550]: 2026-01-31 08:23:24.831 221554 DEBUG oslo_concurrency.processutils [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:23:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:23:25 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/597961286' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.252 221554 DEBUG oslo_concurrency.processutils [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.254 221554 DEBUG nova.virt.libvirt.vif [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:21:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-504299522',display_name='tempest-AttachVolumeTestJSON-server-504299522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-504299522',id=150,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBu/9yb3AwKfL+ijJ3Xo9JzrwBYsej/AsT1y29vhp9sWMO5Td0RAhZoszueDKQGZWIcDlz6V7Rc2fL6hsFrBntj+ecoex7fusa36kcOGL0EN/HOgO9QzUew3k1clrSe20A==',key_name='tempest-keypair-2098733157',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:21:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='fdf18f1faf4846e2a6e2eab4ac2aec02',ramdisk_id='',reservation_id='r-n08gz270',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-1821521720',owner_user_name='tempest-AttachVolumeTestJSON-1821521720-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:23:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3ada90dc4b77478cb4b93c63409d8537',uuid=20cc7040-fd06-49c7-8e68-41cb74e67e9a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.254 221554 DEBUG nova.network.os_vif_util [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converting VIF {"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.255 221554 DEBUG nova.network.os_vif_util [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.256 221554 DEBUG nova.objects.instance [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.301 221554 DEBUG nova.virt.libvirt.driver [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  <uuid>20cc7040-fd06-49c7-8e68-41cb74e67e9a</uuid>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  <name>instance-00000096</name>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <nova:name>tempest-AttachVolumeTestJSON-server-504299522</nova:name>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:23:24</nova:creationTime>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        <nova:user uuid="3ada90dc4b77478cb4b93c63409d8537">tempest-AttachVolumeTestJSON-1821521720-project-member</nova:user>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        <nova:project uuid="fdf18f1faf4846e2a6e2eab4ac2aec02">tempest-AttachVolumeTestJSON-1821521720</nova:project>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        <nova:port uuid="c66c4603-0eab-4f51-ad10-8185e33051dd">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <entry name="serial">20cc7040-fd06-49c7-8e68-41cb74e67e9a</entry>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <entry name="uuid">20cc7040-fd06-49c7-8e68-41cb74e67e9a</entry>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/20cc7040-fd06-49c7-8e68-41cb74e67e9a_disk.config">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:11:71:84"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <target dev="tapc66c4603-0e"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/20cc7040-fd06-49c7-8e68-41cb74e67e9a/console.log" append="off"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <input type="keyboard" bus="usb"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:23:25 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:23:25 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:23:25 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:23:25 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.303 221554 DEBUG nova.virt.libvirt.driver [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.303 221554 DEBUG nova.virt.libvirt.driver [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.304 221554 DEBUG nova.virt.libvirt.vif [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:21:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-504299522',display_name='tempest-AttachVolumeTestJSON-server-504299522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-504299522',id=150,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBu/9yb3AwKfL+ijJ3Xo9JzrwBYsej/AsT1y29vhp9sWMO5Td0RAhZoszueDKQGZWIcDlz6V7Rc2fL6hsFrBntj+ecoex7fusa36kcOGL0EN/HOgO9QzUew3k1clrSe20A==',key_name='tempest-keypair-2098733157',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:21:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='fdf18f1faf4846e2a6e2eab4ac2aec02',ramdisk_id='',reservation_id='r-n08gz270',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-1821521720',owner_user_name='tempest-AttachVolumeTestJSON-1821521720-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:23:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3ada90dc4b77478cb4b93c63409d8537',uuid=20cc7040-fd06-49c7-8e68-41cb74e67e9a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.304 221554 DEBUG nova.network.os_vif_util [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converting VIF {"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.305 221554 DEBUG nova.network.os_vif_util [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.305 221554 DEBUG os_vif [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.306 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.306 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.307 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.309 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.309 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc66c4603-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.309 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc66c4603-0e, col_values=(('external_ids', {'iface-id': 'c66c4603-0eab-4f51-ad10-8185e33051dd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:71:84', 'vm-uuid': '20cc7040-fd06-49c7-8e68-41cb74e67e9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:25 np0005603609 NetworkManager[49064]: <info>  [1769847805.3118] manager: (tapc66c4603-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/313)
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.312 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.316 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.317 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.318 221554 INFO os_vif [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e')#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.360 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:25 np0005603609 kernel: tapc66c4603-0e: entered promiscuous mode
Jan 31 03:23:25 np0005603609 NetworkManager[49064]: <info>  [1769847805.4763] manager: (tapc66c4603-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/314)
Jan 31 03:23:25 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:25Z|00678|binding|INFO|Claiming lport c66c4603-0eab-4f51-ad10-8185e33051dd for this chassis.
Jan 31 03:23:25 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:25Z|00679|binding|INFO|c66c4603-0eab-4f51-ad10-8185e33051dd: Claiming fa:16:3e:11:71:84 10.100.0.4
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.477 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:25 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:25Z|00680|binding|INFO|Setting lport c66c4603-0eab-4f51-ad10-8185e33051dd ovn-installed in OVS
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.485 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.486 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:25 np0005603609 systemd-udevd[283719]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:23:25 np0005603609 systemd-machined[190912]: New machine qemu-81-instance-00000096.
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.503 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:71:84 10.100.0.4'], port_security=['fa:16:3e:11:71:84 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '20cc7040-fd06-49c7-8e68-41cb74e67e9a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e39ff1d-f815-485f-8f43-698b31333bba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fdf18f1faf4846e2a6e2eab4ac2aec02', 'neutron:revision_number': '7', 'neutron:security_group_ids': '2dffb7ca-42ef-40e9-a7b7-090e002450c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=917c1732-02f7-4e55-ac45-7e04e3147221, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=c66c4603-0eab-4f51-ad10-8185e33051dd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:23:25 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:25Z|00681|binding|INFO|Setting lport c66c4603-0eab-4f51-ad10-8185e33051dd up in Southbound
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.504 140058 INFO neutron.agent.ovn.metadata.agent [-] Port c66c4603-0eab-4f51-ad10-8185e33051dd in datapath 1e39ff1d-f815-485f-8f43-698b31333bba bound to our chassis#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.505 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e39ff1d-f815-485f-8f43-698b31333bba#033[00m
Jan 31 03:23:25 np0005603609 NetworkManager[49064]: <info>  [1769847805.5063] device (tapc66c4603-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:23:25 np0005603609 NetworkManager[49064]: <info>  [1769847805.5067] device (tapc66c4603-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:23:25 np0005603609 systemd[1]: Started Virtual Machine qemu-81-instance-00000096.
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.514 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2264a87b-ba3c-4562-b9d4-09e9b0120617]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.515 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1e39ff1d-f1 in ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.516 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1e39ff1d-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.517 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d326d1cd-4633-4e37-9f62-43a0df4b87f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.517 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fda0d5b8-47d6-4105-ac60-75a6efda739b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.523 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[bc429ee0-d6e1-4e1a-bf8f-e366367a622c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.534 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[74c43260-992b-44d2-a719-a479c75daea8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.555 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[7705f636-b3e0-429b-9abf-6654a43e1320]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.559 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e22b41-48f4-47f9-9b5d-d9d2605417a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:25 np0005603609 NetworkManager[49064]: <info>  [1769847805.5606] manager: (tap1e39ff1d-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/315)
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.578 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d059ae-2d25-456f-9a05-e51fa639a24e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.580 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[61d855cc-4bdf-4c8a-9025-034f94816190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:25 np0005603609 NetworkManager[49064]: <info>  [1769847805.5962] device (tap1e39ff1d-f0): carrier: link connected
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.601 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[0e02e85b-d120-4bdb-98b7-b0fefcefff7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.619 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[13312afa-f712-4142-acbe-cdf9d976b02b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e39ff1d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:45:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804529, 'reachable_time': 44331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283753, 'error': None, 'target': 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.628 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[512bb1d3-d1c4-4d0e-af8d-96548ef8b521]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:4516'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804529, 'tstamp': 804529}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283754, 'error': None, 'target': 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.637 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d884dabb-5b93-4d26-98b3-7fff91ee5bcc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e39ff1d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:45:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804529, 'reachable_time': 44331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283755, 'error': None, 'target': 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.657 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b28924c7-fdc6-451f-8263-c2ccd22849ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.691 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0b50f073-c54f-4071-8de7-d22a3cfb6a79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.692 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e39ff1d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.693 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.693 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e39ff1d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:25 np0005603609 kernel: tap1e39ff1d-f0: entered promiscuous mode
Jan 31 03:23:25 np0005603609 NetworkManager[49064]: <info>  [1769847805.6960] manager: (tap1e39ff1d-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.695 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.700 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e39ff1d-f0, col_values=(('external_ids', {'iface-id': '98f103d6-a5bc-4680-a4d5-8ced102dd381'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.701 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:25 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:25Z|00682|binding|INFO|Releasing lport 98f103d6-a5bc-4680-a4d5-8ced102dd381 from this chassis (sb_readonly=0)
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.702 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e39ff1d-f815-485f-8f43-698b31333bba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e39ff1d-f815-485f-8f43-698b31333bba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.703 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ccb85b03-e7df-49cd-81ed-ba51b3464ee0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.704 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-1e39ff1d-f815-485f-8f43-698b31333bba
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/1e39ff1d-f815-485f-8f43-698b31333bba.pid.haproxy
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 1e39ff1d-f815-485f-8f43-698b31333bba
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:23:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:25.705 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'env', 'PROCESS_TAG=haproxy-1e39ff1d-f815-485f-8f43-698b31333bba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1e39ff1d-f815-485f-8f43-698b31333bba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:23:25 np0005603609 nova_compute[221550]: 2026-01-31 08:23:25.707 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:25.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:26.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.075 221554 DEBUG nova.compute.manager [req-eeb7a03c-6db2-40ba-87fe-ee4bdbfbb16d req-b080a453-6335-4af6-9bc1-43204466c3cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.076 221554 DEBUG oslo_concurrency.lockutils [req-eeb7a03c-6db2-40ba-87fe-ee4bdbfbb16d req-b080a453-6335-4af6-9bc1-43204466c3cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.076 221554 DEBUG oslo_concurrency.lockutils [req-eeb7a03c-6db2-40ba-87fe-ee4bdbfbb16d req-b080a453-6335-4af6-9bc1-43204466c3cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.076 221554 DEBUG oslo_concurrency.lockutils [req-eeb7a03c-6db2-40ba-87fe-ee4bdbfbb16d req-b080a453-6335-4af6-9bc1-43204466c3cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.077 221554 DEBUG nova.compute.manager [req-eeb7a03c-6db2-40ba-87fe-ee4bdbfbb16d req-b080a453-6335-4af6-9bc1-43204466c3cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] No waiting events found dispatching network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.077 221554 WARNING nova.compute.manager [req-eeb7a03c-6db2-40ba-87fe-ee4bdbfbb16d req-b080a453-6335-4af6-9bc1-43204466c3cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received unexpected event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd for instance with vm_state stopped and task_state powering-on.#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.114 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 20cc7040-fd06-49c7-8e68-41cb74e67e9a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.114 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847806.1133215, 20cc7040-fd06-49c7-8e68-41cb74e67e9a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.114 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.116 221554 DEBUG nova.compute.manager [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.119 221554 INFO nova.virt.libvirt.driver [-] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Instance rebooted successfully.#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.119 221554 DEBUG nova.compute.manager [None req-9bd57968-7d3e-4746-9fae-0956ac39dff6 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:23:26 np0005603609 podman[283823]: 2026-01-31 08:23:26.025433576 +0000 UTC m=+0.023128757 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.182 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.185 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.226 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.227 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847806.1164002, 20cc7040-fd06-49c7-8e68-41cb74e67e9a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.227 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] VM Started (Lifecycle Event)#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.295 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.300 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:23:26 np0005603609 podman[283823]: 2026-01-31 08:23:26.391724342 +0000 UTC m=+0.389419533 container create efcafdeb44642c2317324c45c298217aee0a3f0efa5aee6a92d1fe89ec839197 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:23:26 np0005603609 systemd[1]: Started libpod-conmon-efcafdeb44642c2317324c45c298217aee0a3f0efa5aee6a92d1fe89ec839197.scope.
Jan 31 03:23:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e348 e348: 3 total, 3 up, 3 in
Jan 31 03:23:26 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:23:26 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ac71e90df59bf34115d28f86138535af317137ca1473548bcafa54d22c8ec1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:26 np0005603609 podman[283823]: 2026-01-31 08:23:26.633682272 +0000 UTC m=+0.631377493 container init efcafdeb44642c2317324c45c298217aee0a3f0efa5aee6a92d1fe89ec839197 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 03:23:26 np0005603609 podman[283823]: 2026-01-31 08:23:26.638438316 +0000 UTC m=+0.636133507 container start efcafdeb44642c2317324c45c298217aee0a3f0efa5aee6a92d1fe89ec839197 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 03:23:26 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[283845]: [NOTICE]   (283849) : New worker (283851) forked
Jan 31 03:23:26 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[283845]: [NOTICE]   (283849) : Loading success.
Jan 31 03:23:26 np0005603609 nova_compute[221550]: 2026-01-31 08:23:26.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:23:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:27.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:28.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:28 np0005603609 nova_compute[221550]: 2026-01-31 08:23:28.201 221554 DEBUG nova.compute.manager [req-503d34d6-1708-4586-9543-3cb2265c6b15 req-d6985317-7c43-4e48-93ed-dfee33ab4d11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:23:28 np0005603609 nova_compute[221550]: 2026-01-31 08:23:28.202 221554 DEBUG oslo_concurrency.lockutils [req-503d34d6-1708-4586-9543-3cb2265c6b15 req-d6985317-7c43-4e48-93ed-dfee33ab4d11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:28 np0005603609 nova_compute[221550]: 2026-01-31 08:23:28.202 221554 DEBUG oslo_concurrency.lockutils [req-503d34d6-1708-4586-9543-3cb2265c6b15 req-d6985317-7c43-4e48-93ed-dfee33ab4d11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:28 np0005603609 nova_compute[221550]: 2026-01-31 08:23:28.202 221554 DEBUG oslo_concurrency.lockutils [req-503d34d6-1708-4586-9543-3cb2265c6b15 req-d6985317-7c43-4e48-93ed-dfee33ab4d11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:28 np0005603609 nova_compute[221550]: 2026-01-31 08:23:28.202 221554 DEBUG nova.compute.manager [req-503d34d6-1708-4586-9543-3cb2265c6b15 req-d6985317-7c43-4e48-93ed-dfee33ab4d11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] No waiting events found dispatching network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:23:28 np0005603609 nova_compute[221550]: 2026-01-31 08:23:28.203 221554 WARNING nova.compute.manager [req-503d34d6-1708-4586-9543-3cb2265c6b15 req-d6985317-7c43-4e48-93ed-dfee33ab4d11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received unexpected event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd for instance with vm_state active and task_state None.#033[00m
Jan 31 03:23:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:29 np0005603609 podman[283861]: 2026-01-31 08:23:29.185875949 +0000 UTC m=+0.058449195 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:23:29 np0005603609 podman[283860]: 2026-01-31 08:23:29.211867274 +0000 UTC m=+0.086675133 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 03:23:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:29.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:30.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:30 np0005603609 nova_compute[221550]: 2026-01-31 08:23:30.311 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:30 np0005603609 nova_compute[221550]: 2026-01-31 08:23:30.363 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e349 e349: 3 total, 3 up, 3 in
Jan 31 03:23:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e350 e350: 3 total, 3 up, 3 in
Jan 31 03:23:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:31.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:32.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:33.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:23:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:34.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:23:34 np0005603609 nova_compute[221550]: 2026-01-31 08:23:34.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:23:35 np0005603609 nova_compute[221550]: 2026-01-31 08:23:35.123 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:35 np0005603609 nova_compute[221550]: 2026-01-31 08:23:35.313 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:35 np0005603609 nova_compute[221550]: 2026-01-31 08:23:35.365 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:35.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:36.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e351 e351: 3 total, 3 up, 3 in
Jan 31 03:23:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:37.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:38.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:38 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:38Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:71:84 10.100.0.4
Jan 31 03:23:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:38 np0005603609 nova_compute[221550]: 2026-01-31 08:23:38.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:23:38 np0005603609 nova_compute[221550]: 2026-01-31 08:23:38.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:23:39 np0005603609 nova_compute[221550]: 2026-01-31 08:23:39.394 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-946cb648-0758-4617-bd3a-142804fd70f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:23:39 np0005603609 nova_compute[221550]: 2026-01-31 08:23:39.394 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-946cb648-0758-4617-bd3a-142804fd70f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:23:39 np0005603609 nova_compute[221550]: 2026-01-31 08:23:39.395 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:23:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:39.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:40.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:40 np0005603609 nova_compute[221550]: 2026-01-31 08:23:40.316 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:40 np0005603609 nova_compute[221550]: 2026-01-31 08:23:40.368 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:41.493 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:23:41 np0005603609 nova_compute[221550]: 2026-01-31 08:23:41.493 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:41.495 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:23:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:41.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:41 np0005603609 nova_compute[221550]: 2026-01-31 08:23:41.978 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Updating instance_info_cache with network_info: [{"id": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "address": "fa:16:3e:0d:0a:0c", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f5a0a6-02", "ovs_interfaceid": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:23:42 np0005603609 nova_compute[221550]: 2026-01-31 08:23:42.008 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-946cb648-0758-4617-bd3a-142804fd70f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:23:42 np0005603609 nova_compute[221550]: 2026-01-31 08:23:42.009 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:23:42 np0005603609 nova_compute[221550]: 2026-01-31 08:23:42.010 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:23:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:42.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:42 np0005603609 nova_compute[221550]: 2026-01-31 08:23:42.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:23:42 np0005603609 nova_compute[221550]: 2026-01-31 08:23:42.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:23:42 np0005603609 nova_compute[221550]: 2026-01-31 08:23:42.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:23:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:43 np0005603609 nova_compute[221550]: 2026-01-31 08:23:43.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:23:43 np0005603609 nova_compute[221550]: 2026-01-31 08:23:43.702 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:43 np0005603609 nova_compute[221550]: 2026-01-31 08:23:43.703 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:43 np0005603609 nova_compute[221550]: 2026-01-31 08:23:43.703 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:43 np0005603609 nova_compute[221550]: 2026-01-31 08:23:43.703 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:23:43 np0005603609 nova_compute[221550]: 2026-01-31 08:23:43.703 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:23:43 np0005603609 nova_compute[221550]: 2026-01-31 08:23:43.806 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:23:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:43.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:23:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:44.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:23:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1119347368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:23:44 np0005603609 nova_compute[221550]: 2026-01-31 08:23:44.161 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.318 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.369 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.443 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.443 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.448 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.448 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:23:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:45.498 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.626 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.627 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4124MB free_disk=20.83056640625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.627 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.627 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.671 221554 DEBUG oslo_concurrency.lockutils [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.671 221554 DEBUG oslo_concurrency.lockutils [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.671 221554 DEBUG oslo_concurrency.lockutils [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.672 221554 DEBUG oslo_concurrency.lockutils [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.672 221554 DEBUG oslo_concurrency.lockutils [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.673 221554 INFO nova.compute.manager [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Terminating instance#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.674 221554 DEBUG nova.compute.manager [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:23:45 np0005603609 kernel: tapc66c4603-0e (unregistering): left promiscuous mode
Jan 31 03:23:45 np0005603609 NetworkManager[49064]: <info>  [1769847825.7389] device (tapc66c4603-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:23:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:45Z|00683|binding|INFO|Releasing lport c66c4603-0eab-4f51-ad10-8185e33051dd from this chassis (sb_readonly=0)
Jan 31 03:23:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:45Z|00684|binding|INFO|Setting lport c66c4603-0eab-4f51-ad10-8185e33051dd down in Southbound
Jan 31 03:23:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:45Z|00685|binding|INFO|Removing iface tapc66c4603-0e ovn-installed in OVS
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.753 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.755 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:45.766 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:71:84 10.100.0.4'], port_security=['fa:16:3e:11:71:84 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '20cc7040-fd06-49c7-8e68-41cb74e67e9a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e39ff1d-f815-485f-8f43-698b31333bba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fdf18f1faf4846e2a6e2eab4ac2aec02', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2dffb7ca-42ef-40e9-a7b7-090e002450c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.225', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=917c1732-02f7-4e55-ac45-7e04e3147221, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=c66c4603-0eab-4f51-ad10-8185e33051dd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:23:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:45.767 140058 INFO neutron.agent.ovn.metadata.agent [-] Port c66c4603-0eab-4f51-ad10-8185e33051dd in datapath 1e39ff1d-f815-485f-8f43-698b31333bba unbound from our chassis#033[00m
Jan 31 03:23:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:45.768 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1e39ff1d-f815-485f-8f43-698b31333bba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:23:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:45.769 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fc96b238-05e5-452a-80f8-18b1f0db52fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:45.770 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba namespace which is not needed anymore#033[00m
Jan 31 03:23:45 np0005603609 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000096.scope: Deactivated successfully.
Jan 31 03:23:45 np0005603609 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000096.scope: Consumed 12.531s CPU time.
Jan 31 03:23:45 np0005603609 systemd-machined[190912]: Machine qemu-81-instance-00000096 terminated.
Jan 31 03:23:45 np0005603609 kernel: tapc66c4603-0e: entered promiscuous mode
Jan 31 03:23:45 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[283845]: [NOTICE]   (283849) : haproxy version is 2.8.14-c23fe91
Jan 31 03:23:45 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[283845]: [NOTICE]   (283849) : path to executable is /usr/sbin/haproxy
Jan 31 03:23:45 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[283845]: [WARNING]  (283849) : Exiting Master process...
Jan 31 03:23:45 np0005603609 kernel: tapc66c4603-0e (unregistering): left promiscuous mode
Jan 31 03:23:45 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[283845]: [ALERT]    (283849) : Current worker (283851) exited with code 143 (Terminated)
Jan 31 03:23:45 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[283845]: [WARNING]  (283849) : All workers exited. Exiting... (0)
Jan 31 03:23:45 np0005603609 NetworkManager[49064]: <info>  [1769847825.8941] manager: (tapc66c4603-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/317)
Jan 31 03:23:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:45Z|00686|binding|INFO|Claiming lport c66c4603-0eab-4f51-ad10-8185e33051dd for this chassis.
Jan 31 03:23:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:45Z|00687|binding|INFO|c66c4603-0eab-4f51-ad10-8185e33051dd: Claiming fa:16:3e:11:71:84 10.100.0.4
Jan 31 03:23:45 np0005603609 systemd[1]: libpod-efcafdeb44642c2317324c45c298217aee0a3f0efa5aee6a92d1fe89ec839197.scope: Deactivated successfully.
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.896 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:45 np0005603609 podman[283949]: 2026-01-31 08:23:45.902182227 +0000 UTC m=+0.057964103 container died efcafdeb44642c2317324c45c298217aee0a3f0efa5aee6a92d1fe89ec839197 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:23:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:45Z|00688|if_status|INFO|Dropped 1 log messages in last 185 seconds (most recently, 185 seconds ago) due to excessive rate
Jan 31 03:23:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:45Z|00689|if_status|INFO|Not setting lport c66c4603-0eab-4f51-ad10-8185e33051dd down as sb is readonly
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.905 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.908 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.913 221554 INFO nova.virt.libvirt.driver [-] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Instance destroyed successfully.#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.914 221554 DEBUG nova.objects.instance [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'resources' on Instance uuid 20cc7040-fd06-49c7-8e68-41cb74e67e9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:23:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:23:45Z|00690|binding|INFO|Releasing lport c66c4603-0eab-4f51-ad10-8185e33051dd from this chassis (sb_readonly=0)
Jan 31 03:23:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:45.919 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:71:84 10.100.0.4'], port_security=['fa:16:3e:11:71:84 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '20cc7040-fd06-49c7-8e68-41cb74e67e9a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e39ff1d-f815-485f-8f43-698b31333bba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fdf18f1faf4846e2a6e2eab4ac2aec02', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2dffb7ca-42ef-40e9-a7b7-090e002450c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.225', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=917c1732-02f7-4e55-ac45-7e04e3147221, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=c66c4603-0eab-4f51-ad10-8185e33051dd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.923 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:45.931 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:71:84 10.100.0.4'], port_security=['fa:16:3e:11:71:84 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '20cc7040-fd06-49c7-8e68-41cb74e67e9a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e39ff1d-f815-485f-8f43-698b31333bba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fdf18f1faf4846e2a6e2eab4ac2aec02', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2dffb7ca-42ef-40e9-a7b7-090e002450c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.225', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=917c1732-02f7-4e55-ac45-7e04e3147221, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=c66c4603-0eab-4f51-ad10-8185e33051dd) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:23:45 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-efcafdeb44642c2317324c45c298217aee0a3f0efa5aee6a92d1fe89ec839197-userdata-shm.mount: Deactivated successfully.
Jan 31 03:23:45 np0005603609 systemd[1]: var-lib-containers-storage-overlay-63ac71e90df59bf34115d28f86138535af317137ca1473548bcafa54d22c8ec1-merged.mount: Deactivated successfully.
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.957 221554 DEBUG nova.virt.libvirt.vif [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:21:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-504299522',display_name='tempest-AttachVolumeTestJSON-server-504299522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-504299522',id=150,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBu/9yb3AwKfL+ijJ3Xo9JzrwBYsej/AsT1y29vhp9sWMO5Td0RAhZoszueDKQGZWIcDlz6V7Rc2fL6hsFrBntj+ecoex7fusa36kcOGL0EN/HOgO9QzUew3k1clrSe20A==',key_name='tempest-keypair-2098733157',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:21:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fdf18f1faf4846e2a6e2eab4ac2aec02',ramdisk_id='',reservation_id='r-n08gz270',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-1821521720',owner_user_name='tempest-AttachVolumeTestJSON-1821521720-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:23:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3ada90dc4b77478cb4b93c63409d8537',uuid=20cc7040-fd06-49c7-8e68-41cb74e67e9a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.958 221554 DEBUG nova.network.os_vif_util [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converting VIF {"id": "c66c4603-0eab-4f51-ad10-8185e33051dd", "address": "fa:16:3e:11:71:84", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66c4603-0e", "ovs_interfaceid": "c66c4603-0eab-4f51-ad10-8185e33051dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.959 221554 DEBUG nova.network.os_vif_util [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.960 221554 DEBUG os_vif [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:23:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:45.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.961 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.961 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc66c4603-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.963 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.964 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:45 np0005603609 nova_compute[221550]: 2026-01-31 08:23:45.966 221554 INFO os_vif [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:71:84,bridge_name='br-int',has_traffic_filtering=True,id=c66c4603-0eab-4f51-ad10-8185e33051dd,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66c4603-0e')#033[00m
Jan 31 03:23:45 np0005603609 podman[283949]: 2026-01-31 08:23:45.979564356 +0000 UTC m=+0.135346242 container cleanup efcafdeb44642c2317324c45c298217aee0a3f0efa5aee6a92d1fe89ec839197 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:23:45 np0005603609 systemd[1]: libpod-conmon-efcafdeb44642c2317324c45c298217aee0a3f0efa5aee6a92d1fe89ec839197.scope: Deactivated successfully.
Jan 31 03:23:46 np0005603609 podman[284004]: 2026-01-31 08:23:46.047779683 +0000 UTC m=+0.050310129 container remove efcafdeb44642c2317324c45c298217aee0a3f0efa5aee6a92d1fe89ec839197 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:23:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:46.054 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8f015152-d7ca-493f-934b-9b2540e62039]: (4, ('Sat Jan 31 08:23:45 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba (efcafdeb44642c2317324c45c298217aee0a3f0efa5aee6a92d1fe89ec839197)\nefcafdeb44642c2317324c45c298217aee0a3f0efa5aee6a92d1fe89ec839197\nSat Jan 31 08:23:45 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba (efcafdeb44642c2317324c45c298217aee0a3f0efa5aee6a92d1fe89ec839197)\nefcafdeb44642c2317324c45c298217aee0a3f0efa5aee6a92d1fe89ec839197\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:46.056 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a78ccfaa-0940-409f-84b1-31c71f0ef984]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:46.058 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e39ff1d-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:46 np0005603609 nova_compute[221550]: 2026-01-31 08:23:46.060 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:46 np0005603609 kernel: tap1e39ff1d-f0: left promiscuous mode
Jan 31 03:23:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:46.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:46 np0005603609 nova_compute[221550]: 2026-01-31 08:23:46.070 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:46.076 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb84e27-9e92-4858-90f2-41dfdfe8128b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:46.089 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[12ce3fce-9035-4a25-acaf-690bff2a81d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:46.091 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2c01d094-d5ac-4e23-b408-46b33f9b8f54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:46.111 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d1de4476-e6d3-4dc8-a6e9-d0d1dbf24dd4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804525, 'reachable_time': 23634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284022, 'error': None, 'target': 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:46.114 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:23:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:46.114 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[cfca3287-9167-413a-8745-c8edff124b30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:46.115 140058 INFO neutron.agent.ovn.metadata.agent [-] Port c66c4603-0eab-4f51-ad10-8185e33051dd in datapath 1e39ff1d-f815-485f-8f43-698b31333bba unbound from our chassis#033[00m
Jan 31 03:23:46 np0005603609 systemd[1]: run-netns-ovnmeta\x2d1e39ff1d\x2df815\x2d485f\x2d8f43\x2d698b31333bba.mount: Deactivated successfully.
Jan 31 03:23:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:46.116 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1e39ff1d-f815-485f-8f43-698b31333bba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:23:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:46.117 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[06a626cc-62b6-4a95-8a0e-8a829af45174]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:46.118 140058 INFO neutron.agent.ovn.metadata.agent [-] Port c66c4603-0eab-4f51-ad10-8185e33051dd in datapath 1e39ff1d-f815-485f-8f43-698b31333bba unbound from our chassis#033[00m
Jan 31 03:23:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:46.119 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1e39ff1d-f815-485f-8f43-698b31333bba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:23:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:23:46.119 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[41ae1561-3e03-405d-bde7-70d894cb5991]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:46 np0005603609 nova_compute[221550]: 2026-01-31 08:23:46.709 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 946cb648-0758-4617-bd3a-142804fd70f7 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:23:46 np0005603609 nova_compute[221550]: 2026-01-31 08:23:46.709 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 20cc7040-fd06-49c7-8e68-41cb74e67e9a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:23:46 np0005603609 nova_compute[221550]: 2026-01-31 08:23:46.710 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:23:46 np0005603609 nova_compute[221550]: 2026-01-31 08:23:46.710 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:23:47 np0005603609 nova_compute[221550]: 2026-01-31 08:23:47.251 221554 INFO nova.virt.libvirt.driver [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Deleting instance files /var/lib/nova/instances/20cc7040-fd06-49c7-8e68-41cb74e67e9a_del#033[00m
Jan 31 03:23:47 np0005603609 nova_compute[221550]: 2026-01-31 08:23:47.252 221554 INFO nova.virt.libvirt.driver [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Deletion of /var/lib/nova/instances/20cc7040-fd06-49c7-8e68-41cb74e67e9a_del complete#033[00m
Jan 31 03:23:47 np0005603609 nova_compute[221550]: 2026-01-31 08:23:47.281 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:23:47 np0005603609 nova_compute[221550]: 2026-01-31 08:23:47.323 221554 INFO nova.compute.manager [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Took 1.65 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:23:47 np0005603609 nova_compute[221550]: 2026-01-31 08:23:47.324 221554 DEBUG oslo.service.loopingcall [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:23:47 np0005603609 nova_compute[221550]: 2026-01-31 08:23:47.325 221554 DEBUG nova.compute.manager [-] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:23:47 np0005603609 nova_compute[221550]: 2026-01-31 08:23:47.325 221554 DEBUG nova.network.neutron [-] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:23:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:23:47 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/320626493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:23:47 np0005603609 nova_compute[221550]: 2026-01-31 08:23:47.698 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:23:47 np0005603609 nova_compute[221550]: 2026-01-31 08:23:47.702 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:23:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:47.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 03:23:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:48.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 03:23:48 np0005603609 nova_compute[221550]: 2026-01-31 08:23:48.116 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:23:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:48 np0005603609 nova_compute[221550]: 2026-01-31 08:23:48.426 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 03:23:48 np0005603609 nova_compute[221550]: 2026-01-31 08:23:48.426 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:23:49 np0005603609 nova_compute[221550]: 2026-01-31 08:23:49.414 221554 DEBUG nova.network.neutron [-] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:23:49 np0005603609 nova_compute[221550]: 2026-01-31 08:23:49.426 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:23:49 np0005603609 nova_compute[221550]: 2026-01-31 08:23:49.427 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:23:49 np0005603609 nova_compute[221550]: 2026-01-31 08:23:49.448 221554 INFO nova.compute.manager [-] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Took 2.12 seconds to deallocate network for instance.
Jan 31 03:23:49 np0005603609 nova_compute[221550]: 2026-01-31 08:23:49.498 221554 DEBUG oslo_concurrency.lockutils [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:23:49 np0005603609 nova_compute[221550]: 2026-01-31 08:23:49.498 221554 DEBUG oslo_concurrency.lockutils [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:23:49 np0005603609 nova_compute[221550]: 2026-01-31 08:23:49.674 221554 DEBUG oslo_concurrency.processutils [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:23:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:49.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:23:50 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1434908461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:23:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:50.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:50 np0005603609 nova_compute[221550]: 2026-01-31 08:23:50.091 221554 DEBUG oslo_concurrency.processutils [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:23:50 np0005603609 nova_compute[221550]: 2026-01-31 08:23:50.097 221554 DEBUG nova.compute.provider_tree [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:23:50 np0005603609 nova_compute[221550]: 2026-01-31 08:23:50.125 221554 DEBUG nova.scheduler.client.report [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:23:50 np0005603609 nova_compute[221550]: 2026-01-31 08:23:50.157 221554 DEBUG oslo_concurrency.lockutils [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:23:50 np0005603609 nova_compute[221550]: 2026-01-31 08:23:50.215 221554 INFO nova.scheduler.client.report [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Deleted allocations for instance 20cc7040-fd06-49c7-8e68-41cb74e67e9a
Jan 31 03:23:50 np0005603609 nova_compute[221550]: 2026-01-31 08:23:50.322 221554 DEBUG oslo_concurrency.lockutils [None req-3e4d13cf-ad4c-4189-b628-8a75cec0b53a 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:23:50 np0005603609 nova_compute[221550]: 2026-01-31 08:23:50.372 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:23:50 np0005603609 nova_compute[221550]: 2026-01-31 08:23:50.405 221554 DEBUG nova.compute.manager [req-c086e61a-beb3-4113-8797-656cc5843e5c req-46fd4a58-a0e2-4fa6-a176-9dcd38295b37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received event network-vif-unplugged-c66c4603-0eab-4f51-ad10-8185e33051dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:23:50 np0005603609 nova_compute[221550]: 2026-01-31 08:23:50.405 221554 DEBUG oslo_concurrency.lockutils [req-c086e61a-beb3-4113-8797-656cc5843e5c req-46fd4a58-a0e2-4fa6-a176-9dcd38295b37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:23:50 np0005603609 nova_compute[221550]: 2026-01-31 08:23:50.406 221554 DEBUG oslo_concurrency.lockutils [req-c086e61a-beb3-4113-8797-656cc5843e5c req-46fd4a58-a0e2-4fa6-a176-9dcd38295b37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:23:50 np0005603609 nova_compute[221550]: 2026-01-31 08:23:50.406 221554 DEBUG oslo_concurrency.lockutils [req-c086e61a-beb3-4113-8797-656cc5843e5c req-46fd4a58-a0e2-4fa6-a176-9dcd38295b37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:23:50 np0005603609 nova_compute[221550]: 2026-01-31 08:23:50.407 221554 DEBUG nova.compute.manager [req-c086e61a-beb3-4113-8797-656cc5843e5c req-46fd4a58-a0e2-4fa6-a176-9dcd38295b37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] No waiting events found dispatching network-vif-unplugged-c66c4603-0eab-4f51-ad10-8185e33051dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:23:50 np0005603609 nova_compute[221550]: 2026-01-31 08:23:50.407 221554 WARNING nova.compute.manager [req-c086e61a-beb3-4113-8797-656cc5843e5c req-46fd4a58-a0e2-4fa6-a176-9dcd38295b37 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received unexpected event network-vif-unplugged-c66c4603-0eab-4f51-ad10-8185e33051dd for instance with vm_state deleted and task_state None.
Jan 31 03:23:50 np0005603609 nova_compute[221550]: 2026-01-31 08:23:50.560 221554 DEBUG nova.compute.manager [req-5db06562-e6c9-408d-8f8e-ab168e49a72d req-a7e9f492-7124-45c9-8768-9a625393b868 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received event network-vif-deleted-c66c4603-0eab-4f51-ad10-8185e33051dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:23:50 np0005603609 nova_compute[221550]: 2026-01-31 08:23:50.964 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:23:51 np0005603609 nova_compute[221550]: 2026-01-31 08:23:51.115 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:23:51 np0005603609 nova_compute[221550]: 2026-01-31 08:23:51.116 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:23:51 np0005603609 nova_compute[221550]: 2026-01-31 08:23:51.144 221554 DEBUG nova.compute.manager [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 03:23:51 np0005603609 nova_compute[221550]: 2026-01-31 08:23:51.263 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:23:51 np0005603609 nova_compute[221550]: 2026-01-31 08:23:51.264 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:23:51 np0005603609 nova_compute[221550]: 2026-01-31 08:23:51.272 221554 DEBUG nova.virt.hardware [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 03:23:51 np0005603609 nova_compute[221550]: 2026-01-31 08:23:51.272 221554 INFO nova.compute.claims [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Claim successful on node compute-1.ctlplane.example.com
Jan 31 03:23:51 np0005603609 nova_compute[221550]: 2026-01-31 08:23:51.458 221554 DEBUG oslo_concurrency.processutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:23:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:23:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1673176264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:23:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:51.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:51 np0005603609 nova_compute[221550]: 2026-01-31 08:23:51.970 221554 DEBUG oslo_concurrency.processutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:23:51 np0005603609 nova_compute[221550]: 2026-01-31 08:23:51.974 221554 DEBUG nova.compute.provider_tree [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.007 221554 DEBUG nova.scheduler.client.report [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.047 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.048 221554 DEBUG nova.compute.manager [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:23:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:52.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.118 221554 DEBUG nova.compute.manager [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.119 221554 DEBUG nova.network.neutron [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.147 221554 INFO nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.170 221554 DEBUG nova.compute.manager [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.290 221554 DEBUG nova.compute.manager [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.291 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.291 221554 INFO nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Creating image(s)
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.336 221554 DEBUG nova.storage.rbd_utils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.367 221554 DEBUG nova.storage.rbd_utils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.396 221554 DEBUG nova.storage.rbd_utils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.400 221554 DEBUG oslo_concurrency.processutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.466 221554 DEBUG oslo_concurrency.processutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.468 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.469 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.470 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.496 221554 DEBUG nova.storage.rbd_utils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.499 221554 DEBUG oslo_concurrency.processutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.525 221554 DEBUG nova.compute.manager [req-f8e98d37-c119-402f-af99-19d8557f3605 req-20c18105-b40e-4fa2-a7e7-aad145fbe7be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.525 221554 DEBUG oslo_concurrency.lockutils [req-f8e98d37-c119-402f-af99-19d8557f3605 req-20c18105-b40e-4fa2-a7e7-aad145fbe7be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.526 221554 DEBUG oslo_concurrency.lockutils [req-f8e98d37-c119-402f-af99-19d8557f3605 req-20c18105-b40e-4fa2-a7e7-aad145fbe7be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.526 221554 DEBUG oslo_concurrency.lockutils [req-f8e98d37-c119-402f-af99-19d8557f3605 req-20c18105-b40e-4fa2-a7e7-aad145fbe7be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20cc7040-fd06-49c7-8e68-41cb74e67e9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.526 221554 DEBUG nova.compute.manager [req-f8e98d37-c119-402f-af99-19d8557f3605 req-20c18105-b40e-4fa2-a7e7-aad145fbe7be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] No waiting events found dispatching network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.526 221554 WARNING nova.compute.manager [req-f8e98d37-c119-402f-af99-19d8557f3605 req-20c18105-b40e-4fa2-a7e7-aad145fbe7be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Received unexpected event network-vif-plugged-c66c4603-0eab-4f51-ad10-8185e33051dd for instance with vm_state deleted and task_state None.
Jan 31 03:23:52 np0005603609 nova_compute[221550]: 2026-01-31 08:23:52.529 221554 DEBUG nova.policy [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d0e9d918b4041fabd5ded633b4cf404', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9710f0cf77d84353ae13fa47922b085d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:23:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:23:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1351194824' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:23:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:23:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1351194824' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:23:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:53.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:54.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:54 np0005603609 nova_compute[221550]: 2026-01-31 08:23:54.666 221554 DEBUG oslo_concurrency.processutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:23:54 np0005603609 nova_compute[221550]: 2026-01-31 08:23:54.901 221554 DEBUG nova.network.neutron [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Successfully created port: 7d9d74bf-cfe7-4c4d-aaec-f0662642996b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 03:23:55 np0005603609 nova_compute[221550]: 2026-01-31 08:23:55.131 221554 DEBUG nova.storage.rbd_utils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] resizing rbd image 90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 03:23:55 np0005603609 nova_compute[221550]: 2026-01-31 08:23:55.372 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:23:55 np0005603609 nova_compute[221550]: 2026-01-31 08:23:55.762 221554 DEBUG nova.objects.instance [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'migration_context' on Instance uuid 90aa4e13-650f-43f2-8ebe-19a34e0cc605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:23:55 np0005603609 nova_compute[221550]: 2026-01-31 08:23:55.795 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 03:23:55 np0005603609 nova_compute[221550]: 2026-01-31 08:23:55.796 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Ensure instance console log exists: /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 03:23:55 np0005603609 nova_compute[221550]: 2026-01-31 08:23:55.796 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:23:55 np0005603609 nova_compute[221550]: 2026-01-31 08:23:55.797 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:55 np0005603609 nova_compute[221550]: 2026-01-31 08:23:55.797 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:55 np0005603609 nova_compute[221550]: 2026-01-31 08:23:55.966 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:55.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:23:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:56.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:57 np0005603609 nova_compute[221550]: 2026-01-31 08:23:57.610 221554 DEBUG nova.network.neutron [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Successfully updated port: 7d9d74bf-cfe7-4c4d-aaec-f0662642996b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:23:57 np0005603609 nova_compute[221550]: 2026-01-31 08:23:57.647 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:23:57 np0005603609 nova_compute[221550]: 2026-01-31 08:23:57.648 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:23:57 np0005603609 nova_compute[221550]: 2026-01-31 08:23:57.648 221554 DEBUG nova.network.neutron [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:23:57 np0005603609 nova_compute[221550]: 2026-01-31 08:23:57.741 221554 DEBUG nova.compute.manager [req-e8bfd785-f372-46fe-8660-8750171d6891 req-306b52f0-113e-4af2-b614-1131ef0a0dfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-changed-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:23:57 np0005603609 nova_compute[221550]: 2026-01-31 08:23:57.742 221554 DEBUG nova.compute.manager [req-e8bfd785-f372-46fe-8660-8750171d6891 req-306b52f0-113e-4af2-b614-1131ef0a0dfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Refreshing instance network info cache due to event network-changed-7d9d74bf-cfe7-4c4d-aaec-f0662642996b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:23:57 np0005603609 nova_compute[221550]: 2026-01-31 08:23:57.742 221554 DEBUG oslo_concurrency.lockutils [req-e8bfd785-f372-46fe-8660-8750171d6891 req-306b52f0-113e-4af2-b614-1131ef0a0dfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:23:57 np0005603609 nova_compute[221550]: 2026-01-31 08:23:57.859 221554 DEBUG nova.network.neutron [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:23:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:57.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:23:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:58.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:23:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.544 221554 DEBUG nova.network.neutron [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updating instance_info_cache with network_info: [{"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.578 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.578 221554 DEBUG nova.compute.manager [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Instance network_info: |[{"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.579 221554 DEBUG oslo_concurrency.lockutils [req-e8bfd785-f372-46fe-8660-8750171d6891 req-306b52f0-113e-4af2-b614-1131ef0a0dfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.579 221554 DEBUG nova.network.neutron [req-e8bfd785-f372-46fe-8660-8750171d6891 req-306b52f0-113e-4af2-b614-1131ef0a0dfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Refreshing network info cache for port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.582 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Start _get_guest_xml network_info=[{"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.587 221554 WARNING nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.593 221554 DEBUG nova.virt.libvirt.host [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.594 221554 DEBUG nova.virt.libvirt.host [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.599 221554 DEBUG nova.virt.libvirt.host [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.599 221554 DEBUG nova.virt.libvirt.host [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.600 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.600 221554 DEBUG nova.virt.hardware [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.601 221554 DEBUG nova.virt.hardware [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.601 221554 DEBUG nova.virt.hardware [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.601 221554 DEBUG nova.virt.hardware [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.601 221554 DEBUG nova.virt.hardware [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.601 221554 DEBUG nova.virt.hardware [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.602 221554 DEBUG nova.virt.hardware [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.602 221554 DEBUG nova.virt.hardware [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.602 221554 DEBUG nova.virt.hardware [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.602 221554 DEBUG nova.virt.hardware [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.602 221554 DEBUG nova.virt.hardware [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:23:59 np0005603609 nova_compute[221550]: 2026-01-31 08:23:59.605 221554 DEBUG oslo_concurrency.processutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:23:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:23:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:23:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:59.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:24:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:24:00 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/681131750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.085 221554 DEBUG oslo_concurrency.processutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:00.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.113 221554 DEBUG nova.storage.rbd_utils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.117 221554 DEBUG oslo_concurrency.processutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:00 np0005603609 podman[284297]: 2026-01-31 08:24:00.161170425 +0000 UTC m=+0.049098130 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:24:00 np0005603609 podman[284288]: 2026-01-31 08:24:00.240421319 +0000 UTC m=+0.127887353 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.375 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:24:00 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1572303610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.738 221554 DEBUG oslo_concurrency.processutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.741 221554 DEBUG nova.virt.libvirt.vif [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:23:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2050667150',display_name='tempest-TestNetworkAdvancedServerOps-server-2050667150',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2050667150',id=154,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEPErrFuMatngmbPFI9nTUxaXvpce6NcVYOjpFna1bVYBBpAXT5DWY6THfcwtLeuI/NzxTjJ2/S962+d7YMt9k52/1Ce4TFS9tLoM3ovnpGdPHYwJISH1c/E+TlOqwaBJw==',key_name='tempest-TestNetworkAdvancedServerOps-1275181138',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-wq345vcr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:23:52Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=90aa4e13-650f-43f2-8ebe-19a34e0cc605,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.742 221554 DEBUG nova.network.os_vif_util [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.743 221554 DEBUG nova.network.os_vif_util [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.743 221554 DEBUG nova.objects.instance [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'pci_devices' on Instance uuid 90aa4e13-650f-43f2-8ebe-19a34e0cc605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.765 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  <uuid>90aa4e13-650f-43f2-8ebe-19a34e0cc605</uuid>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  <name>instance-0000009a</name>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-2050667150</nova:name>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:23:59</nova:creationTime>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        <nova:user uuid="4d0e9d918b4041fabd5ded633b4cf404">tempest-TestNetworkAdvancedServerOps-483180749-project-member</nova:user>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        <nova:project uuid="9710f0cf77d84353ae13fa47922b085d">tempest-TestNetworkAdvancedServerOps-483180749</nova:project>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        <nova:port uuid="7d9d74bf-cfe7-4c4d-aaec-f0662642996b">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <entry name="serial">90aa4e13-650f-43f2-8ebe-19a34e0cc605</entry>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <entry name="uuid">90aa4e13-650f-43f2-8ebe-19a34e0cc605</entry>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk.config">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:b4:62:8a"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <target dev="tap7d9d74bf-cf"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605/console.log" append="off"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:24:00 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:24:00 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:24:00 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:24:00 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.766 221554 DEBUG nova.compute.manager [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Preparing to wait for external event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.766 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.766 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.766 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.767 221554 DEBUG nova.virt.libvirt.vif [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:23:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2050667150',display_name='tempest-TestNetworkAdvancedServerOps-server-2050667150',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2050667150',id=154,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEPErrFuMatngmbPFI9nTUxaXvpce6NcVYOjpFna1bVYBBpAXT5DWY6THfcwtLeuI/NzxTjJ2/S962+d7YMt9k52/1Ce4TFS9tLoM3ovnpGdPHYwJISH1c/E+TlOqwaBJw==',key_name='tempest-TestNetworkAdvancedServerOps-1275181138',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-wq345vcr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:23:52Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=90aa4e13-650f-43f2-8ebe-19a34e0cc605,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.768 221554 DEBUG nova.network.os_vif_util [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.768 221554 DEBUG nova.network.os_vif_util [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.769 221554 DEBUG os_vif [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.770 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.771 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.771 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.775 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.775 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7d9d74bf-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.775 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7d9d74bf-cf, col_values=(('external_ids', {'iface-id': '7d9d74bf-cfe7-4c4d-aaec-f0662642996b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:62:8a', 'vm-uuid': '90aa4e13-650f-43f2-8ebe-19a34e0cc605'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:00 np0005603609 NetworkManager[49064]: <info>  [1769847840.7784] manager: (tap7d9d74bf-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.779 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.781 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.782 221554 INFO os_vif [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf')#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.913 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847825.9094002, 20cc7040-fd06-49c7-8e68-41cb74e67e9a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.914 221554 INFO nova.compute.manager [-] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.937 221554 DEBUG nova.compute.manager [None req-90ffb208-90b0-4118-8dda-22fb070c66c1 - - - - - -] [instance: 20cc7040-fd06-49c7-8e68-41cb74e67e9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.945 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.945 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.945 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No VIF found with MAC fa:16:3e:b4:62:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.946 221554 INFO nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Using config drive#033[00m
Jan 31 03:24:00 np0005603609 nova_compute[221550]: 2026-01-31 08:24:00.972 221554 DEBUG nova.storage.rbd_utils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:24:01 np0005603609 nova_compute[221550]: 2026-01-31 08:24:01.612 221554 INFO nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Creating config drive at /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605/disk.config#033[00m
Jan 31 03:24:01 np0005603609 nova_compute[221550]: 2026-01-31 08:24:01.619 221554 DEBUG oslo_concurrency.processutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpcqe3u8sv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:01 np0005603609 nova_compute[221550]: 2026-01-31 08:24:01.758 221554 DEBUG oslo_concurrency.processutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpcqe3u8sv" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:01 np0005603609 nova_compute[221550]: 2026-01-31 08:24:01.800 221554 DEBUG nova.storage.rbd_utils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:24:01 np0005603609 nova_compute[221550]: 2026-01-31 08:24:01.806 221554 DEBUG oslo_concurrency.processutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605/disk.config 90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:01.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:02.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.230 221554 DEBUG oslo_concurrency.processutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605/disk.config 90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.231 221554 INFO nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Deleting local config drive /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605/disk.config because it was imported into RBD.#033[00m
Jan 31 03:24:02 np0005603609 NetworkManager[49064]: <info>  [1769847842.2859] manager: (tap7d9d74bf-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/319)
Jan 31 03:24:02 np0005603609 kernel: tap7d9d74bf-cf: entered promiscuous mode
Jan 31 03:24:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:02Z|00691|binding|INFO|Claiming lport 7d9d74bf-cfe7-4c4d-aaec-f0662642996b for this chassis.
Jan 31 03:24:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:02Z|00692|binding|INFO|7d9d74bf-cfe7-4c4d-aaec-f0662642996b: Claiming fa:16:3e:b4:62:8a 10.100.0.3
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.290 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.303 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:62:8a 10.100.0.3'], port_security=['fa:16:3e:b4:62:8a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '90aa4e13-650f-43f2-8ebe-19a34e0cc605', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a584cda0-df11-4171-9687-b79f1d3fe460', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3f15617b-dce2-4914-b18c-70facd7e86fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b34831f-93cf-4037-8766-7bee8dbb9141, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=7d9d74bf-cfe7-4c4d-aaec-f0662642996b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:24:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:02Z|00693|binding|INFO|Setting lport 7d9d74bf-cfe7-4c4d-aaec-f0662642996b up in Southbound
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.306 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b in datapath a584cda0-df11-4171-9687-b79f1d3fe460 bound to our chassis#033[00m
Jan 31 03:24:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:02Z|00694|binding|INFO|Setting lport 7d9d74bf-cfe7-4c4d-aaec-f0662642996b ovn-installed in OVS
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.307 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.310 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a584cda0-df11-4171-9687-b79f1d3fe460#033[00m
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.315 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.321 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ab522c44-25e2-4b45-9c2a-721cf2748cbf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.323 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa584cda0-d1 in ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:24:02 np0005603609 systemd-udevd[284438]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.326 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa584cda0-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.327 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ad12578e-a973-45a1-9c81-c928fcd82299]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.328 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f6584de3-0392-45ba-a4a9-18206cb948ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:02 np0005603609 systemd-machined[190912]: New machine qemu-82-instance-0000009a.
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.337 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[74492a07-4ab6-46c7-805c-114f4cfbc58d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:02 np0005603609 NetworkManager[49064]: <info>  [1769847842.3428] device (tap7d9d74bf-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:24:02 np0005603609 NetworkManager[49064]: <info>  [1769847842.3436] device (tap7d9d74bf-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.350 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[58712f83-cc91-489f-8fde-a5da7bc02b7b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:02 np0005603609 systemd[1]: Started Virtual Machine qemu-82-instance-0000009a.
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.373 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[62907c28-0ee1-472d-9f1f-684f8b59b688]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:02 np0005603609 NetworkManager[49064]: <info>  [1769847842.3784] manager: (tapa584cda0-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/320)
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.377 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cb68a7f7-4da7-4f8c-a4f2-796021d036cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.399 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[674f37e7-89db-4aad-ba47-2af2968b2ae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.401 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[632937fb-baa5-4563-a181-f7b860ffbf89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:02 np0005603609 NetworkManager[49064]: <info>  [1769847842.4123] device (tapa584cda0-d0): carrier: link connected
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.417 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d51c8345-795f-4c64-a807-99c1bede9f06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.429 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[42316378-21d8-46e1-ae34-16558e713407]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa584cda0-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:1f:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 808211, 'reachable_time': 35240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284471, 'error': None, 'target': 'ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.436 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c2917179-d88d-4c90-92af-f56f92904c4b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:1f9c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 808211, 'tstamp': 808211}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284472, 'error': None, 'target': 'ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.443 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[be5b3040-c2c2-45bc-adc4-6fe1a56e0def]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa584cda0-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:1f:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 808211, 'reachable_time': 35240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284473, 'error': None, 'target': 'ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.468 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8258e03c-0e0e-48f4-b085-75f1a05fd8c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.503 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cfd76ce7-5af6-4df3-a972-810024368821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.504 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa584cda0-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.504 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.505 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa584cda0-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.545 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:02 np0005603609 NetworkManager[49064]: <info>  [1769847842.5464] manager: (tapa584cda0-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Jan 31 03:24:02 np0005603609 kernel: tapa584cda0-d0: entered promiscuous mode
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.549 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.551 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa584cda0-d0, col_values=(('external_ids', {'iface-id': '82a1b950-f7d7-4649-a61e-273ceb65ba23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.553 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:02Z|00695|binding|INFO|Releasing lport 82a1b950-f7d7-4649-a61e-273ceb65ba23 from this chassis (sb_readonly=0)
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.556 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.558 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a584cda0-df11-4171-9687-b79f1d3fe460.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a584cda0-df11-4171-9687-b79f1d3fe460.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.559 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a353c363-9b52-49e3-b024-987a6fa891d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.561 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-a584cda0-df11-4171-9687-b79f1d3fe460
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/a584cda0-df11-4171-9687-b79f1d3fe460.pid.haproxy
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID a584cda0-df11-4171-9687-b79f1d3fe460
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.562 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:02.562 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460', 'env', 'PROCESS_TAG=haproxy-a584cda0-df11-4171-9687-b79f1d3fe460', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a584cda0-df11-4171-9687-b79f1d3fe460.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.563 221554 DEBUG nova.network.neutron [req-e8bfd785-f372-46fe-8660-8750171d6891 req-306b52f0-113e-4af2-b614-1131ef0a0dfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updated VIF entry in instance network info cache for port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.564 221554 DEBUG nova.network.neutron [req-e8bfd785-f372-46fe-8660-8750171d6891 req-306b52f0-113e-4af2-b614-1131ef0a0dfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updating instance_info_cache with network_info: [{"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.611 221554 DEBUG oslo_concurrency.lockutils [req-e8bfd785-f372-46fe-8660-8750171d6891 req-306b52f0-113e-4af2-b614-1131ef0a0dfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.759 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847842.75851, 90aa4e13-650f-43f2-8ebe-19a34e0cc605 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.760 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] VM Started (Lifecycle Event)
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.811 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.814 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847842.7587886, 90aa4e13-650f-43f2-8ebe-19a34e0cc605 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.815 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] VM Paused (Lifecycle Event)
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.837 221554 DEBUG nova.compute.manager [req-c796f1f0-de43-4592-be5f-81a2b982f0bd req-64fbf17d-9a86-4c3e-b5b1-9ff4311bb8e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.838 221554 DEBUG oslo_concurrency.lockutils [req-c796f1f0-de43-4592-be5f-81a2b982f0bd req-64fbf17d-9a86-4c3e-b5b1-9ff4311bb8e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.838 221554 DEBUG oslo_concurrency.lockutils [req-c796f1f0-de43-4592-be5f-81a2b982f0bd req-64fbf17d-9a86-4c3e-b5b1-9ff4311bb8e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.838 221554 DEBUG oslo_concurrency.lockutils [req-c796f1f0-de43-4592-be5f-81a2b982f0bd req-64fbf17d-9a86-4c3e-b5b1-9ff4311bb8e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.838 221554 DEBUG nova.compute.manager [req-c796f1f0-de43-4592-be5f-81a2b982f0bd req-64fbf17d-9a86-4c3e-b5b1-9ff4311bb8e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Processing event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.839 221554 DEBUG nova.compute.manager [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.844 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.848 221554 INFO nova.virt.libvirt.driver [-] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Instance spawned successfully.
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.848 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.852 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.855 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847842.8432217, 90aa4e13-650f-43f2-8ebe-19a34e0cc605 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.855 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] VM Resumed (Lifecycle Event)
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.870 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.871 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.871 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.872 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.872 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.872 221554 DEBUG nova.virt.libvirt.driver [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.879 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.881 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:24:02 np0005603609 podman[284547]: 2026-01-31 08:24:02.890648181 +0000 UTC m=+0.047593604 container create 1ecc78b748cafa008deac3df3f2012a4b4cf335d66c6c2aa94ec78828e2cf2f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.912 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:24:02 np0005603609 systemd[1]: Started libpod-conmon-1ecc78b748cafa008deac3df3f2012a4b4cf335d66c6c2aa94ec78828e2cf2f9.scope.
Jan 31 03:24:02 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:24:02 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9175b630175dba05bcb5ace79c337acc47e81d28525171f219a851a4d4522eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:02 np0005603609 podman[284547]: 2026-01-31 08:24:02.867816152 +0000 UTC m=+0.024761615 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:24:02 np0005603609 podman[284547]: 2026-01-31 08:24:02.963296146 +0000 UTC m=+0.120241599 container init 1ecc78b748cafa008deac3df3f2012a4b4cf335d66c6c2aa94ec78828e2cf2f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:24:02 np0005603609 podman[284547]: 2026-01-31 08:24:02.967916866 +0000 UTC m=+0.124862309 container start 1ecc78b748cafa008deac3df3f2012a4b4cf335d66c6c2aa94ec78828e2cf2f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.970 221554 INFO nova.compute.manager [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Took 10.68 seconds to spawn the instance on the hypervisor.
Jan 31 03:24:02 np0005603609 nova_compute[221550]: 2026-01-31 08:24:02.971 221554 DEBUG nova.compute.manager [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:24:02 np0005603609 neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460[284562]: [NOTICE]   (284566) : New worker (284568) forked
Jan 31 03:24:02 np0005603609 neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460[284562]: [NOTICE]   (284566) : Loading success.
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.042 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.044 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.072 221554 INFO nova.compute.manager [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Took 11.85 seconds to build instance.
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.074 221554 DEBUG nova.compute.manager [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.102 221554 DEBUG oslo_concurrency.lockutils [None req-722eea7b-0697-4268-928a-68cb2e31feb9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.173 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.173 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.184 221554 DEBUG nova.virt.hardware [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.185 221554 INFO nova.compute.claims [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Claim successful on node compute-1.ctlplane.example.com
Jan 31 03:24:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.374 221554 DEBUG oslo_concurrency.processutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:24:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:24:03 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3841614421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.829 221554 DEBUG oslo_concurrency.processutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.835 221554 DEBUG nova.compute.provider_tree [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.858 221554 DEBUG nova.scheduler.client.report [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.896 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.897 221554 DEBUG nova.compute.manager [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.978 221554 DEBUG nova.compute.manager [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:24:03 np0005603609 nova_compute[221550]: 2026-01-31 08:24:03.979 221554 DEBUG nova.network.neutron [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:24:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:24:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:03.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.031 221554 INFO nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.056 221554 DEBUG nova.compute.manager [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:24:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:24:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:04.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.215 221554 DEBUG nova.compute.manager [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.217 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.217 221554 INFO nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Creating image(s)
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.238 221554 DEBUG nova.storage.rbd_utils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] rbd image bebd0cd6-4043-4283-aee6-2b2d313ca46f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.262 221554 DEBUG nova.storage.rbd_utils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] rbd image bebd0cd6-4043-4283-aee6-2b2d313ca46f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.289 221554 DEBUG nova.storage.rbd_utils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] rbd image bebd0cd6-4043-4283-aee6-2b2d313ca46f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.292 221554 DEBUG oslo_concurrency.processutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.344 221554 DEBUG oslo_concurrency.processutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.345 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.346 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.346 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.368 221554 DEBUG nova.storage.rbd_utils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] rbd image bebd0cd6-4043-4283-aee6-2b2d313ca46f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.371 221554 DEBUG oslo_concurrency.processutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 bebd0cd6-4043-4283-aee6-2b2d313ca46f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.631 221554 DEBUG oslo_concurrency.processutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 bebd0cd6-4043-4283-aee6-2b2d313ca46f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.699 221554 DEBUG nova.storage.rbd_utils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] resizing rbd image bebd0cd6-4043-4283-aee6-2b2d313ca46f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.737 221554 DEBUG nova.policy [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3ada90dc4b77478cb4b93c63409d8537', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fdf18f1faf4846e2a6e2eab4ac2aec02', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.808 221554 DEBUG nova.objects.instance [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'migration_context' on Instance uuid bebd0cd6-4043-4283-aee6-2b2d313ca46f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.847 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.848 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Ensure instance console log exists: /var/lib/nova/instances/bebd0cd6-4043-4283-aee6-2b2d313ca46f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.848 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.848 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:24:04 np0005603609 nova_compute[221550]: 2026-01-31 08:24:04.848 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:24:05 np0005603609 nova_compute[221550]: 2026-01-31 08:24:05.006 221554 DEBUG nova.compute.manager [req-a2ddabe1-f55f-46be-b6e7-17af66595356 req-687256aa-d5c7-432a-89e4-79631f0c3544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:24:05 np0005603609 nova_compute[221550]: 2026-01-31 08:24:05.007 221554 DEBUG oslo_concurrency.lockutils [req-a2ddabe1-f55f-46be-b6e7-17af66595356 req-687256aa-d5c7-432a-89e4-79631f0c3544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:05 np0005603609 nova_compute[221550]: 2026-01-31 08:24:05.007 221554 DEBUG oslo_concurrency.lockutils [req-a2ddabe1-f55f-46be-b6e7-17af66595356 req-687256aa-d5c7-432a-89e4-79631f0c3544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:05 np0005603609 nova_compute[221550]: 2026-01-31 08:24:05.007 221554 DEBUG oslo_concurrency.lockutils [req-a2ddabe1-f55f-46be-b6e7-17af66595356 req-687256aa-d5c7-432a-89e4-79631f0c3544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:05 np0005603609 nova_compute[221550]: 2026-01-31 08:24:05.007 221554 DEBUG nova.compute.manager [req-a2ddabe1-f55f-46be-b6e7-17af66595356 req-687256aa-d5c7-432a-89e4-79631f0c3544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] No waiting events found dispatching network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:24:05 np0005603609 nova_compute[221550]: 2026-01-31 08:24:05.007 221554 WARNING nova.compute.manager [req-a2ddabe1-f55f-46be-b6e7-17af66595356 req-687256aa-d5c7-432a-89e4-79631f0c3544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received unexpected event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b for instance with vm_state active and task_state None.#033[00m
Jan 31 03:24:05 np0005603609 nova_compute[221550]: 2026-01-31 08:24:05.376 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:05 np0005603609 nova_compute[221550]: 2026-01-31 08:24:05.783 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:05.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:06.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:06 np0005603609 nova_compute[221550]: 2026-01-31 08:24:06.485 221554 DEBUG nova.network.neutron [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Successfully created port: 23692aac-7e72-4966-922e-d30b3648f957 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:24:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:07.516 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:07.517 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:07.518 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:07.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:08.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:08 np0005603609 nova_compute[221550]: 2026-01-31 08:24:08.387 221554 DEBUG nova.network.neutron [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Successfully updated port: 23692aac-7e72-4966-922e-d30b3648f957 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:24:08 np0005603609 nova_compute[221550]: 2026-01-31 08:24:08.426 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "refresh_cache-bebd0cd6-4043-4283-aee6-2b2d313ca46f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:24:08 np0005603609 nova_compute[221550]: 2026-01-31 08:24:08.426 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquired lock "refresh_cache-bebd0cd6-4043-4283-aee6-2b2d313ca46f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:24:08 np0005603609 nova_compute[221550]: 2026-01-31 08:24:08.426 221554 DEBUG nova.network.neutron [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:24:09 np0005603609 nova_compute[221550]: 2026-01-31 08:24:09.169 221554 DEBUG nova.compute.manager [req-e2a9b19b-5dfa-4ec1-a01e-9cdf4ee9528b req-f17a7ff0-c0d5-4e80-9716-1a4ab0e095fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Received event network-changed-23692aac-7e72-4966-922e-d30b3648f957 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:24:09 np0005603609 nova_compute[221550]: 2026-01-31 08:24:09.170 221554 DEBUG nova.compute.manager [req-e2a9b19b-5dfa-4ec1-a01e-9cdf4ee9528b req-f17a7ff0-c0d5-4e80-9716-1a4ab0e095fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Refreshing instance network info cache due to event network-changed-23692aac-7e72-4966-922e-d30b3648f957. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:24:09 np0005603609 nova_compute[221550]: 2026-01-31 08:24:09.171 221554 DEBUG oslo_concurrency.lockutils [req-e2a9b19b-5dfa-4ec1-a01e-9cdf4ee9528b req-f17a7ff0-c0d5-4e80-9716-1a4ab0e095fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-bebd0cd6-4043-4283-aee6-2b2d313ca46f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:24:09 np0005603609 nova_compute[221550]: 2026-01-31 08:24:09.187 221554 DEBUG nova.network.neutron [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:24:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:09.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:10.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.378 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.786 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.911 221554 DEBUG nova.network.neutron [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Updating instance_info_cache with network_info: [{"id": "23692aac-7e72-4966-922e-d30b3648f957", "address": "fa:16:3e:85:2a:cb", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23692aac-7e", "ovs_interfaceid": "23692aac-7e72-4966-922e-d30b3648f957", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.972 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Releasing lock "refresh_cache-bebd0cd6-4043-4283-aee6-2b2d313ca46f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.973 221554 DEBUG nova.compute.manager [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Instance network_info: |[{"id": "23692aac-7e72-4966-922e-d30b3648f957", "address": "fa:16:3e:85:2a:cb", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23692aac-7e", "ovs_interfaceid": "23692aac-7e72-4966-922e-d30b3648f957", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.973 221554 DEBUG oslo_concurrency.lockutils [req-e2a9b19b-5dfa-4ec1-a01e-9cdf4ee9528b req-f17a7ff0-c0d5-4e80-9716-1a4ab0e095fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-bebd0cd6-4043-4283-aee6-2b2d313ca46f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.974 221554 DEBUG nova.network.neutron [req-e2a9b19b-5dfa-4ec1-a01e-9cdf4ee9528b req-f17a7ff0-c0d5-4e80-9716-1a4ab0e095fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Refreshing network info cache for port 23692aac-7e72-4966-922e-d30b3648f957 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.977 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Start _get_guest_xml network_info=[{"id": "23692aac-7e72-4966-922e-d30b3648f957", "address": "fa:16:3e:85:2a:cb", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23692aac-7e", "ovs_interfaceid": "23692aac-7e72-4966-922e-d30b3648f957", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.981 221554 WARNING nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.989 221554 DEBUG nova.virt.libvirt.host [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.990 221554 DEBUG nova.virt.libvirt.host [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.993 221554 DEBUG nova.virt.libvirt.host [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.995 221554 DEBUG nova.virt.libvirt.host [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.996 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.996 221554 DEBUG nova.virt.hardware [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.996 221554 DEBUG nova.virt.hardware [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.997 221554 DEBUG nova.virt.hardware [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.997 221554 DEBUG nova.virt.hardware [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.997 221554 DEBUG nova.virt.hardware [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.997 221554 DEBUG nova.virt.hardware [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.998 221554 DEBUG nova.virt.hardware [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.998 221554 DEBUG nova.virt.hardware [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.998 221554 DEBUG nova.virt.hardware [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.999 221554 DEBUG nova.virt.hardware [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:24:10 np0005603609 nova_compute[221550]: 2026-01-31 08:24:10.999 221554 DEBUG nova.virt.hardware [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.001 221554 DEBUG oslo_concurrency.processutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:24:11 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2899694428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.389 221554 DEBUG nova.compute.manager [req-77f4198f-987a-4c18-a38f-ae89d265877f req-adbe65bd-bef2-44f2-847a-d9912c8c0a0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-changed-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.390 221554 DEBUG nova.compute.manager [req-77f4198f-987a-4c18-a38f-ae89d265877f req-adbe65bd-bef2-44f2-847a-d9912c8c0a0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Refreshing instance network info cache due to event network-changed-7d9d74bf-cfe7-4c4d-aaec-f0662642996b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.390 221554 DEBUG oslo_concurrency.lockutils [req-77f4198f-987a-4c18-a38f-ae89d265877f req-adbe65bd-bef2-44f2-847a-d9912c8c0a0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.390 221554 DEBUG oslo_concurrency.lockutils [req-77f4198f-987a-4c18-a38f-ae89d265877f req-adbe65bd-bef2-44f2-847a-d9912c8c0a0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.391 221554 DEBUG nova.network.neutron [req-77f4198f-987a-4c18-a38f-ae89d265877f req-adbe65bd-bef2-44f2-847a-d9912c8c0a0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Refreshing network info cache for port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.403 221554 DEBUG oslo_concurrency.processutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.428 221554 DEBUG nova.storage.rbd_utils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] rbd image bebd0cd6-4043-4283-aee6-2b2d313ca46f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.431 221554 DEBUG oslo_concurrency.processutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:24:11 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4275877689' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.839 221554 DEBUG oslo_concurrency.processutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.841 221554 DEBUG nova.virt.libvirt.vif [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:24:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-578301611',display_name='tempest-AttachVolumeTestJSON-server-578301611',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-578301611',id=155,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIgah4XlK1iSaivNSQoj7hWwU0Mj9BDD4IUHaCjeTMSKow9fYTWPYldQNBozVeKEC4K2KMm4GY9/cXTJzi2EQ4T6BD7d6q/eJr6zsCGqyjzMi8P41TAXB1ir8LA2Wk0h+Q==',key_name='tempest-keypair-890016362',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fdf18f1faf4846e2a6e2eab4ac2aec02',ramdisk_id='',reservation_id='r-tk0fk5d6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-1821521720',owner_user_name='tempest-AttachVolumeTestJSON-1821521720-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:24:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3ada90dc4b77478cb4b93c63409d8537',uuid=bebd0cd6-4043-4283-aee6-2b2d313ca46f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23692aac-7e72-4966-922e-d30b3648f957", "address": "fa:16:3e:85:2a:cb", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23692aac-7e", "ovs_interfaceid": "23692aac-7e72-4966-922e-d30b3648f957", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.841 221554 DEBUG nova.network.os_vif_util [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converting VIF {"id": "23692aac-7e72-4966-922e-d30b3648f957", "address": "fa:16:3e:85:2a:cb", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23692aac-7e", "ovs_interfaceid": "23692aac-7e72-4966-922e-d30b3648f957", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.842 221554 DEBUG nova.network.os_vif_util [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:2a:cb,bridge_name='br-int',has_traffic_filtering=True,id=23692aac-7e72-4966-922e-d30b3648f957,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23692aac-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.843 221554 DEBUG nova.objects.instance [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'pci_devices' on Instance uuid bebd0cd6-4043-4283-aee6-2b2d313ca46f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.895 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  <uuid>bebd0cd6-4043-4283-aee6-2b2d313ca46f</uuid>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  <name>instance-0000009b</name>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <nova:name>tempest-AttachVolumeTestJSON-server-578301611</nova:name>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:24:10</nova:creationTime>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        <nova:user uuid="3ada90dc4b77478cb4b93c63409d8537">tempest-AttachVolumeTestJSON-1821521720-project-member</nova:user>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        <nova:project uuid="fdf18f1faf4846e2a6e2eab4ac2aec02">tempest-AttachVolumeTestJSON-1821521720</nova:project>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        <nova:port uuid="23692aac-7e72-4966-922e-d30b3648f957">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <entry name="serial">bebd0cd6-4043-4283-aee6-2b2d313ca46f</entry>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <entry name="uuid">bebd0cd6-4043-4283-aee6-2b2d313ca46f</entry>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/bebd0cd6-4043-4283-aee6-2b2d313ca46f_disk">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/bebd0cd6-4043-4283-aee6-2b2d313ca46f_disk.config">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:85:2a:cb"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <target dev="tap23692aac-7e"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/bebd0cd6-4043-4283-aee6-2b2d313ca46f/console.log" append="off"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:24:11 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:24:11 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:24:11 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:24:11 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.896 221554 DEBUG nova.compute.manager [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Preparing to wait for external event network-vif-plugged-23692aac-7e72-4966-922e-d30b3648f957 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.897 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.897 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.897 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.898 221554 DEBUG nova.virt.libvirt.vif [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:24:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-578301611',display_name='tempest-AttachVolumeTestJSON-server-578301611',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-578301611',id=155,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIgah4XlK1iSaivNSQoj7hWwU0Mj9BDD4IUHaCjeTMSKow9fYTWPYldQNBozVeKEC4K2KMm4GY9/cXTJzi2EQ4T6BD7d6q/eJr6zsCGqyjzMi8P41TAXB1ir8LA2Wk0h+Q==',key_name='tempest-keypair-890016362',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fdf18f1faf4846e2a6e2eab4ac2aec02',ramdisk_id='',reservation_id='r-tk0fk5d6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-1821521720',owner_user_name='tempest-AttachVolumeTestJSON-1821521720-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:24:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3ada90dc4b77478cb4b93c63409d8537',uuid=bebd0cd6-4043-4283-aee6-2b2d313ca46f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23692aac-7e72-4966-922e-d30b3648f957", "address": "fa:16:3e:85:2a:cb", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23692aac-7e", "ovs_interfaceid": "23692aac-7e72-4966-922e-d30b3648f957", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.898 221554 DEBUG nova.network.os_vif_util [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converting VIF {"id": "23692aac-7e72-4966-922e-d30b3648f957", "address": "fa:16:3e:85:2a:cb", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23692aac-7e", "ovs_interfaceid": "23692aac-7e72-4966-922e-d30b3648f957", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.898 221554 DEBUG nova.network.os_vif_util [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:2a:cb,bridge_name='br-int',has_traffic_filtering=True,id=23692aac-7e72-4966-922e-d30b3648f957,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23692aac-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.899 221554 DEBUG os_vif [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:2a:cb,bridge_name='br-int',has_traffic_filtering=True,id=23692aac-7e72-4966-922e-d30b3648f957,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23692aac-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.899 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.899 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.900 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.903 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.903 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23692aac-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.903 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap23692aac-7e, col_values=(('external_ids', {'iface-id': '23692aac-7e72-4966-922e-d30b3648f957', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:2a:cb', 'vm-uuid': 'bebd0cd6-4043-4283-aee6-2b2d313ca46f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.906 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:24:11 np0005603609 NetworkManager[49064]: <info>  [1769847851.9084] manager: (tap23692aac-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.911 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.912 221554 INFO os_vif [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:2a:cb,bridge_name='br-int',has_traffic_filtering=True,id=23692aac-7e72-4966-922e-d30b3648f957,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23692aac-7e')#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.990 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.990 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.990 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No VIF found with MAC fa:16:3e:85:2a:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:24:11 np0005603609 nova_compute[221550]: 2026-01-31 08:24:11.991 221554 INFO nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Using config drive#033[00m
Jan 31 03:24:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:11.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:12 np0005603609 nova_compute[221550]: 2026-01-31 08:24:12.020 221554 DEBUG nova.storage.rbd_utils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] rbd image bebd0cd6-4043-4283-aee6-2b2d313ca46f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:24:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:12.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:12 np0005603609 nova_compute[221550]: 2026-01-31 08:24:12.865 221554 INFO nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Creating config drive at /var/lib/nova/instances/bebd0cd6-4043-4283-aee6-2b2d313ca46f/disk.config#033[00m
Jan 31 03:24:12 np0005603609 nova_compute[221550]: 2026-01-31 08:24:12.873 221554 DEBUG oslo_concurrency.processutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bebd0cd6-4043-4283-aee6-2b2d313ca46f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpag9nmqfc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:12 np0005603609 podman[285021]: 2026-01-31 08:24:12.935027372 +0000 UTC m=+0.092522242 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.000 221554 DEBUG oslo_concurrency.processutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bebd0cd6-4043-4283-aee6-2b2d313ca46f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpag9nmqfc" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.029 221554 DEBUG nova.storage.rbd_utils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] rbd image bebd0cd6-4043-4283-aee6-2b2d313ca46f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.032 221554 DEBUG oslo_concurrency.processutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bebd0cd6-4043-4283-aee6-2b2d313ca46f/disk.config bebd0cd6-4043-4283-aee6-2b2d313ca46f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:13 np0005603609 podman[285056]: 2026-01-31 08:24:13.075150038 +0000 UTC m=+0.051880757 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 03:24:13 np0005603609 podman[285021]: 2026-01-31 08:24:13.102650168 +0000 UTC m=+0.260145058 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.215 221554 DEBUG nova.network.neutron [req-e2a9b19b-5dfa-4ec1-a01e-9cdf4ee9528b req-f17a7ff0-c0d5-4e80-9716-1a4ab0e095fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Updated VIF entry in instance network info cache for port 23692aac-7e72-4966-922e-d30b3648f957. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.216 221554 DEBUG nova.network.neutron [req-e2a9b19b-5dfa-4ec1-a01e-9cdf4ee9528b req-f17a7ff0-c0d5-4e80-9716-1a4ab0e095fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Updating instance_info_cache with network_info: [{"id": "23692aac-7e72-4966-922e-d30b3648f957", "address": "fa:16:3e:85:2a:cb", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23692aac-7e", "ovs_interfaceid": "23692aac-7e72-4966-922e-d30b3648f957", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.241 221554 DEBUG oslo_concurrency.processutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bebd0cd6-4043-4283-aee6-2b2d313ca46f/disk.config bebd0cd6-4043-4283-aee6-2b2d313ca46f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.242 221554 INFO nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Deleting local config drive /var/lib/nova/instances/bebd0cd6-4043-4283-aee6-2b2d313ca46f/disk.config because it was imported into RBD.#033[00m
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.246 221554 DEBUG oslo_concurrency.lockutils [req-e2a9b19b-5dfa-4ec1-a01e-9cdf4ee9528b req-f17a7ff0-c0d5-4e80-9716-1a4ab0e095fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-bebd0cd6-4043-4283-aee6-2b2d313ca46f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:24:13 np0005603609 kernel: tap23692aac-7e: entered promiscuous mode
Jan 31 03:24:13 np0005603609 NetworkManager[49064]: <info>  [1769847853.2935] manager: (tap23692aac-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/323)
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.294 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:13Z|00696|binding|INFO|Claiming lport 23692aac-7e72-4966-922e-d30b3648f957 for this chassis.
Jan 31 03:24:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:13Z|00697|binding|INFO|23692aac-7e72-4966-922e-d30b3648f957: Claiming fa:16:3e:85:2a:cb 10.100.0.7
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.297 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.304 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:2a:cb 10.100.0.7'], port_security=['fa:16:3e:85:2a:cb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bebd0cd6-4043-4283-aee6-2b2d313ca46f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e39ff1d-f815-485f-8f43-698b31333bba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fdf18f1faf4846e2a6e2eab4ac2aec02', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c78fb75-185d-4975-afc9-20585c19198d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=917c1732-02f7-4e55-ac45-7e04e3147221, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=23692aac-7e72-4966-922e-d30b3648f957) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:24:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:13Z|00698|binding|INFO|Setting lport 23692aac-7e72-4966-922e-d30b3648f957 ovn-installed in OVS
Jan 31 03:24:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:13Z|00699|binding|INFO|Setting lport 23692aac-7e72-4966-922e-d30b3648f957 up in Southbound
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.306 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 23692aac-7e72-4966-922e-d30b3648f957 in datapath 1e39ff1d-f815-485f-8f43-698b31333bba bound to our chassis#033[00m
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.305 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.307 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e39ff1d-f815-485f-8f43-698b31333bba#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.315 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[efab3ddc-6166-439e-a2fa-f858bf267a22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.316 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1e39ff1d-f1 in ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.318 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1e39ff1d-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.318 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[53578f01-375f-4ddc-b498-043591210a0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.319 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[68ae8d6a-7d2e-4da9-8a13-2e48ef696e40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:13 np0005603609 systemd-udevd[285152]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.328 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[ef00b77b-48b0-4aa3-a0c8-c3dcc34a780d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:13 np0005603609 NetworkManager[49064]: <info>  [1769847853.3322] device (tap23692aac-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:24:13 np0005603609 systemd-machined[190912]: New machine qemu-83-instance-0000009b.
Jan 31 03:24:13 np0005603609 NetworkManager[49064]: <info>  [1769847853.3327] device (tap23692aac-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:24:13 np0005603609 systemd[1]: Started Virtual Machine qemu-83-instance-0000009b.
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.348 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8138fbc9-a56b-420d-967c-06c5eb347397]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.371 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d085e785-8e1c-4120-8cdf-0bb5b6c3b05f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:13 np0005603609 systemd-udevd[285161]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:24:13 np0005603609 NetworkManager[49064]: <info>  [1769847853.3765] manager: (tap1e39ff1d-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/324)
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.375 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f56a8df8-a455-4a45-ab76-c683ef8f2c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.401 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d052dba4-bf1c-46cf-bd96-6b86d5b1917a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.404 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec4cd86-ae99-4302-8073-c1900f12e37c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:13 np0005603609 NetworkManager[49064]: <info>  [1769847853.4238] device (tap1e39ff1d-f0): carrier: link connected
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.426 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[919be046-ed84-4275-ab46-a251f1ace2f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.438 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[419e10ac-35a2-42fa-bbe4-18d45121ad01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e39ff1d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:45:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809312, 'reachable_time': 30277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285210, 'error': None, 'target': 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.451 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b6fe2c81-fe25-4b78-b0b7-f6a037ef8903]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:4516'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 809312, 'tstamp': 809312}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285213, 'error': None, 'target': 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.462 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cfed6649-581a-4601-8b63-6251db554800]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e39ff1d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:45:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809312, 'reachable_time': 30277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285216, 'error': None, 'target': 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.483 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7b36d07c-423f-4d11-a09f-8211f9723dbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.520 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa6c92b-cf97-4e37-a79e-965233363c56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.522 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e39ff1d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.522 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.522 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e39ff1d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:13 np0005603609 kernel: tap1e39ff1d-f0: entered promiscuous mode
Jan 31 03:24:13 np0005603609 NetworkManager[49064]: <info>  [1769847853.5249] manager: (tap1e39ff1d-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.524 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.534 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e39ff1d-f0, col_values=(('external_ids', {'iface-id': '98f103d6-a5bc-4680-a4d5-8ced102dd381'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:13Z|00700|binding|INFO|Releasing lport 98f103d6-a5bc-4680-a4d5-8ced102dd381 from this chassis (sb_readonly=0)
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.536 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.540 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.542 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e39ff1d-f815-485f-8f43-698b31333bba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e39ff1d-f815-485f-8f43-698b31333bba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.542 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[09926014-5936-4e60-9432-0dbaa3b4f13f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.543 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-1e39ff1d-f815-485f-8f43-698b31333bba
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/1e39ff1d-f815-485f-8f43-698b31333bba.pid.haproxy
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 1e39ff1d-f815-485f-8f43-698b31333bba
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 03:24:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:13.544 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'env', 'PROCESS_TAG=haproxy-1e39ff1d-f815-485f-8f43-698b31333bba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1e39ff1d-f815-485f-8f43-698b31333bba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.760 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847853.7600799, bebd0cd6-4043-4283-aee6-2b2d313ca46f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.761 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] VM Started (Lifecycle Event)
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.771 221554 DEBUG nova.network.neutron [req-77f4198f-987a-4c18-a38f-ae89d265877f req-adbe65bd-bef2-44f2-847a-d9912c8c0a0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updated VIF entry in instance network info cache for port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.772 221554 DEBUG nova.network.neutron [req-77f4198f-987a-4c18-a38f-ae89d265877f req-adbe65bd-bef2-44f2-847a-d9912c8c0a0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updating instance_info_cache with network_info: [{"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.816 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.822 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847853.7627606, bebd0cd6-4043-4283-aee6-2b2d313ca46f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.823 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] VM Paused (Lifecycle Event)
Jan 31 03:24:13 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.861 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.866 221554 DEBUG nova.compute.manager [req-a487741e-8caa-4a25-816a-b6d7446968fd req-b9e61a1d-73f6-4e3f-ab32-f839e2b8990d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Received event network-vif-plugged-23692aac-7e72-4966-922e-d30b3648f957 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.867 221554 DEBUG oslo_concurrency.lockutils [req-a487741e-8caa-4a25-816a-b6d7446968fd req-b9e61a1d-73f6-4e3f-ab32-f839e2b8990d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.868 221554 DEBUG oslo_concurrency.lockutils [req-a487741e-8caa-4a25-816a-b6d7446968fd req-b9e61a1d-73f6-4e3f-ab32-f839e2b8990d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.869 221554 DEBUG oslo_concurrency.lockutils [req-a487741e-8caa-4a25-816a-b6d7446968fd req-b9e61a1d-73f6-4e3f-ab32-f839e2b8990d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.869 221554 DEBUG nova.compute.manager [req-a487741e-8caa-4a25-816a-b6d7446968fd req-b9e61a1d-73f6-4e3f-ab32-f839e2b8990d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Processing event network-vif-plugged-23692aac-7e72-4966-922e-d30b3648f957 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.870 221554 DEBUG nova.compute.manager [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.870 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.873 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.880 221554 DEBUG oslo_concurrency.lockutils [req-77f4198f-987a-4c18-a38f-ae89d265877f req-adbe65bd-bef2-44f2-847a-d9912c8c0a0c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.882 221554 INFO nova.virt.libvirt.driver [-] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Instance spawned successfully.
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.883 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.896 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.896 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847853.872932, bebd0cd6-4043-4283-aee6-2b2d313ca46f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.897 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] VM Resumed (Lifecycle Event)
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.917 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.918 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.919 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.919 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.920 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.920 221554 DEBUG nova.virt.libvirt.driver [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.927 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.930 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:24:13 np0005603609 podman[285403]: 2026-01-31 08:24:13.934491303 +0000 UTC m=+0.091151970 container create 51e71ad5db169f31834229bed57c32b4f3c368a952585a8dcfe77c0934b56668 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.958 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:24:13 np0005603609 podman[285403]: 2026-01-31 08:24:13.87856184 +0000 UTC m=+0.035222527 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:24:13 np0005603609 systemd[1]: Started libpod-conmon-51e71ad5db169f31834229bed57c32b4f3c368a952585a8dcfe77c0934b56668.scope.
Jan 31 03:24:13 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.991 221554 INFO nova.compute.manager [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Took 9.77 seconds to spawn the instance on the hypervisor.
Jan 31 03:24:13 np0005603609 nova_compute[221550]: 2026-01-31 08:24:13.991 221554 DEBUG nova.compute.manager [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:24:14 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955d83b45c8631e036d3f0389456f90cf23b52840882b4c7f5c155dfff42c1fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:13.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:14 np0005603609 podman[285403]: 2026-01-31 08:24:14.020809696 +0000 UTC m=+0.177470383 container init 51e71ad5db169f31834229bed57c32b4f3c368a952585a8dcfe77c0934b56668 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:24:14 np0005603609 podman[285403]: 2026-01-31 08:24:14.026743679 +0000 UTC m=+0.183404346 container start 51e71ad5db169f31834229bed57c32b4f3c368a952585a8dcfe77c0934b56668 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:24:14 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[285430]: [NOTICE]   (285434) : New worker (285437) forked
Jan 31 03:24:14 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[285430]: [NOTICE]   (285434) : Loading success.
Jan 31 03:24:14 np0005603609 nova_compute[221550]: 2026-01-31 08:24:14.079 221554 INFO nova.compute.manager [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Took 10.94 seconds to build instance.
Jan 31 03:24:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:14.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:14 np0005603609 nova_compute[221550]: 2026-01-31 08:24:14.106 221554 DEBUG oslo_concurrency.lockutils [None req-4fde2483-4646-4006-96c4-3930df8e6e9b 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:24:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:24:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:24:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:24:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:24:14 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:24:15 np0005603609 nova_compute[221550]: 2026-01-31 08:24:15.381 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:24:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:16.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:16.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:16 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:16Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b4:62:8a 10.100.0.3
Jan 31 03:24:16 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:16Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b4:62:8a 10.100.0.3
Jan 31 03:24:16 np0005603609 nova_compute[221550]: 2026-01-31 08:24:16.333 221554 DEBUG nova.compute.manager [req-2120af2d-003e-474b-86fd-5a908d0c93b3 req-44f0cd47-c011-4dd7-a67b-81da7d6760e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Received event network-vif-plugged-23692aac-7e72-4966-922e-d30b3648f957 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:24:16 np0005603609 nova_compute[221550]: 2026-01-31 08:24:16.333 221554 DEBUG oslo_concurrency.lockutils [req-2120af2d-003e-474b-86fd-5a908d0c93b3 req-44f0cd47-c011-4dd7-a67b-81da7d6760e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:24:16 np0005603609 nova_compute[221550]: 2026-01-31 08:24:16.334 221554 DEBUG oslo_concurrency.lockutils [req-2120af2d-003e-474b-86fd-5a908d0c93b3 req-44f0cd47-c011-4dd7-a67b-81da7d6760e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:24:16 np0005603609 nova_compute[221550]: 2026-01-31 08:24:16.334 221554 DEBUG oslo_concurrency.lockutils [req-2120af2d-003e-474b-86fd-5a908d0c93b3 req-44f0cd47-c011-4dd7-a67b-81da7d6760e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:24:16 np0005603609 nova_compute[221550]: 2026-01-31 08:24:16.334 221554 DEBUG nova.compute.manager [req-2120af2d-003e-474b-86fd-5a908d0c93b3 req-44f0cd47-c011-4dd7-a67b-81da7d6760e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] No waiting events found dispatching network-vif-plugged-23692aac-7e72-4966-922e-d30b3648f957 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:24:16 np0005603609 nova_compute[221550]: 2026-01-31 08:24:16.334 221554 WARNING nova.compute.manager [req-2120af2d-003e-474b-86fd-5a908d0c93b3 req-44f0cd47-c011-4dd7-a67b-81da7d6760e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Received unexpected event network-vif-plugged-23692aac-7e72-4966-922e-d30b3648f957 for instance with vm_state active and task_state None.
Jan 31 03:24:16 np0005603609 nova_compute[221550]: 2026-01-31 08:24:16.905 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:24:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:24:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:18.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:24:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:18.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:18 np0005603609 nova_compute[221550]: 2026-01-31 08:24:18.447 221554 DEBUG nova.compute.manager [req-6ec198f8-bfef-4be7-8c32-bc02ab4b4c1a req-40b4053f-9498-47b6-ac60-6e392eb91e64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Received event network-changed-23692aac-7e72-4966-922e-d30b3648f957 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:24:18 np0005603609 nova_compute[221550]: 2026-01-31 08:24:18.448 221554 DEBUG nova.compute.manager [req-6ec198f8-bfef-4be7-8c32-bc02ab4b4c1a req-40b4053f-9498-47b6-ac60-6e392eb91e64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Refreshing instance network info cache due to event network-changed-23692aac-7e72-4966-922e-d30b3648f957. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:24:18 np0005603609 nova_compute[221550]: 2026-01-31 08:24:18.448 221554 DEBUG oslo_concurrency.lockutils [req-6ec198f8-bfef-4be7-8c32-bc02ab4b4c1a req-40b4053f-9498-47b6-ac60-6e392eb91e64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-bebd0cd6-4043-4283-aee6-2b2d313ca46f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:24:18 np0005603609 nova_compute[221550]: 2026-01-31 08:24:18.450 221554 DEBUG oslo_concurrency.lockutils [req-6ec198f8-bfef-4be7-8c32-bc02ab4b4c1a req-40b4053f-9498-47b6-ac60-6e392eb91e64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-bebd0cd6-4043-4283-aee6-2b2d313ca46f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:24:18 np0005603609 nova_compute[221550]: 2026-01-31 08:24:18.450 221554 DEBUG nova.network.neutron [req-6ec198f8-bfef-4be7-8c32-bc02ab4b4c1a req-40b4053f-9498-47b6-ac60-6e392eb91e64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Refreshing network info cache for port 23692aac-7e72-4966-922e-d30b3648f957 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:24:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:20.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:20.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:20 np0005603609 nova_compute[221550]: 2026-01-31 08:24:20.384 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:24:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:24:21 np0005603609 nova_compute[221550]: 2026-01-31 08:24:21.906 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:21 np0005603609 nova_compute[221550]: 2026-01-31 08:24:21.976 221554 DEBUG nova.network.neutron [req-6ec198f8-bfef-4be7-8c32-bc02ab4b4c1a req-40b4053f-9498-47b6-ac60-6e392eb91e64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Updated VIF entry in instance network info cache for port 23692aac-7e72-4966-922e-d30b3648f957. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:24:21 np0005603609 nova_compute[221550]: 2026-01-31 08:24:21.976 221554 DEBUG nova.network.neutron [req-6ec198f8-bfef-4be7-8c32-bc02ab4b4c1a req-40b4053f-9498-47b6-ac60-6e392eb91e64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Updating instance_info_cache with network_info: [{"id": "23692aac-7e72-4966-922e-d30b3648f957", "address": "fa:16:3e:85:2a:cb", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23692aac-7e", "ovs_interfaceid": "23692aac-7e72-4966-922e-d30b3648f957", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:24:22 np0005603609 nova_compute[221550]: 2026-01-31 08:24:22.006 221554 DEBUG oslo_concurrency.lockutils [req-6ec198f8-bfef-4be7-8c32-bc02ab4b4c1a req-40b4053f-9498-47b6-ac60-6e392eb91e64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-bebd0cd6-4043-4283-aee6-2b2d313ca46f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:24:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:22.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:22.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:23 np0005603609 nova_compute[221550]: 2026-01-31 08:24:23.291 221554 INFO nova.compute.manager [None req-af7a327a-a6f3-45d2-bfaf-3dd6d63f0caa 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Get console output#033[00m
Jan 31 03:24:23 np0005603609 nova_compute[221550]: 2026-01-31 08:24:23.301 221554 INFO oslo.privsep.daemon [None req-af7a327a-a6f3-45d2-bfaf-3dd6d63f0caa 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpvfai2by7/privsep.sock']#033[00m
Jan 31 03:24:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:24:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:24.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:24:24 np0005603609 nova_compute[221550]: 2026-01-31 08:24:24.051 221554 INFO oslo.privsep.daemon [None req-af7a327a-a6f3-45d2-bfaf-3dd6d63f0caa 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 31 03:24:24 np0005603609 nova_compute[221550]: 2026-01-31 08:24:23.939 285517 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 31 03:24:24 np0005603609 nova_compute[221550]: 2026-01-31 08:24:23.941 285517 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 31 03:24:24 np0005603609 nova_compute[221550]: 2026-01-31 08:24:23.943 285517 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Jan 31 03:24:24 np0005603609 nova_compute[221550]: 2026-01-31 08:24:23.943 285517 INFO oslo.privsep.daemon [-] privsep daemon running as pid 285517#033[00m
Jan 31 03:24:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:24.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:24 np0005603609 nova_compute[221550]: 2026-01-31 08:24:24.144 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:24:24 np0005603609 nova_compute[221550]: 2026-01-31 08:24:24.566 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:25 np0005603609 nova_compute[221550]: 2026-01-31 08:24:25.386 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:26.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:26.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:26 np0005603609 nova_compute[221550]: 2026-01-31 08:24:26.905 221554 INFO nova.compute.manager [None req-4cb26e51-86f6-4d45-88ec-1b0fb4f9c7ed 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Get console output#033[00m
Jan 31 03:24:26 np0005603609 nova_compute[221550]: 2026-01-31 08:24:26.908 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:26 np0005603609 nova_compute[221550]: 2026-01-31 08:24:26.910 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:24:27 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:27Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:2a:cb 10.100.0.7
Jan 31 03:24:27 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:27Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:2a:cb 10.100.0.7
Jan 31 03:24:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:28.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:28.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:28 np0005603609 nova_compute[221550]: 2026-01-31 08:24:28.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:28 np0005603609 nova_compute[221550]: 2026-01-31 08:24:28.749 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:30.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:30.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:30 np0005603609 nova_compute[221550]: 2026-01-31 08:24:30.389 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:31 np0005603609 podman[285523]: 2026-01-31 08:24:31.209911092 +0000 UTC m=+0.069334076 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:24:31 np0005603609 podman[285522]: 2026-01-31 08:24:31.245111617 +0000 UTC m=+0.109094201 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:24:31 np0005603609 nova_compute[221550]: 2026-01-31 08:24:31.457 221554 DEBUG oslo_concurrency.lockutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:24:31 np0005603609 nova_compute[221550]: 2026-01-31 08:24:31.457 221554 DEBUG oslo_concurrency.lockutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquired lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:24:31 np0005603609 nova_compute[221550]: 2026-01-31 08:24:31.457 221554 DEBUG nova.network.neutron [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:24:31 np0005603609 nova_compute[221550]: 2026-01-31 08:24:31.910 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:32.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:32.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:33 np0005603609 nova_compute[221550]: 2026-01-31 08:24:33.183 221554 DEBUG nova.network.neutron [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updating instance_info_cache with network_info: [{"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:24:33 np0005603609 nova_compute[221550]: 2026-01-31 08:24:33.204 221554 DEBUG oslo_concurrency.lockutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Releasing lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:24:33 np0005603609 nova_compute[221550]: 2026-01-31 08:24:33.348 221554 DEBUG nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Jan 31 03:24:33 np0005603609 nova_compute[221550]: 2026-01-31 08:24:33.349 221554 DEBUG nova.virt.libvirt.volume.remotefs [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Creating file /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605/7978a8015e874d2bae7adace77a9e4a5.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Jan 31 03:24:33 np0005603609 nova_compute[221550]: 2026-01-31 08:24:33.349 221554 DEBUG oslo_concurrency.processutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605/7978a8015e874d2bae7adace77a9e4a5.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:33 np0005603609 nova_compute[221550]: 2026-01-31 08:24:33.797 221554 DEBUG oslo_concurrency.processutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605/7978a8015e874d2bae7adace77a9e4a5.tmp" returned: 1 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:33 np0005603609 nova_compute[221550]: 2026-01-31 08:24:33.798 221554 DEBUG oslo_concurrency.processutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605/7978a8015e874d2bae7adace77a9e4a5.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 03:24:33 np0005603609 nova_compute[221550]: 2026-01-31 08:24:33.798 221554 DEBUG nova.virt.libvirt.volume.remotefs [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Creating directory /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Jan 31 03:24:33 np0005603609 nova_compute[221550]: 2026-01-31 08:24:33.799 221554 DEBUG oslo_concurrency.processutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:34 np0005603609 nova_compute[221550]: 2026-01-31 08:24:34.003 221554 DEBUG oslo_concurrency.processutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/90aa4e13-650f-43f2-8ebe-19a34e0cc605" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:34 np0005603609 nova_compute[221550]: 2026-01-31 08:24:34.009 221554 DEBUG nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:24:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:34.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:34.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:35 np0005603609 nova_compute[221550]: 2026-01-31 08:24:35.392 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:35 np0005603609 nova_compute[221550]: 2026-01-31 08:24:35.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:35 np0005603609 ceph-osd[79083]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 31 03:24:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:36.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.058 221554 DEBUG oslo_concurrency.lockutils [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.059 221554 DEBUG oslo_concurrency.lockutils [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.078 221554 DEBUG nova.objects.instance [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'flavor' on Instance uuid bebd0cd6-4043-4283-aee6-2b2d313ca46f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:24:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:36.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.147 221554 DEBUG oslo_concurrency.lockutils [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:36 np0005603609 kernel: tap7d9d74bf-cf (unregistering): left promiscuous mode
Jan 31 03:24:36 np0005603609 NetworkManager[49064]: <info>  [1769847876.5577] device (tap7d9d74bf-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:24:36 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:36Z|00701|binding|INFO|Releasing lport 7d9d74bf-cfe7-4c4d-aaec-f0662642996b from this chassis (sb_readonly=0)
Jan 31 03:24:36 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:36Z|00702|binding|INFO|Setting lport 7d9d74bf-cfe7-4c4d-aaec-f0662642996b down in Southbound
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.559 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:36 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:36Z|00703|binding|INFO|Removing iface tap7d9d74bf-cf ovn-installed in OVS
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.562 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.572 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:36.576 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:62:8a 10.100.0.3'], port_security=['fa:16:3e:b4:62:8a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '90aa4e13-650f-43f2-8ebe-19a34e0cc605', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a584cda0-df11-4171-9687-b79f1d3fe460', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3f15617b-dce2-4914-b18c-70facd7e86fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b34831f-93cf-4037-8766-7bee8dbb9141, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=7d9d74bf-cfe7-4c4d-aaec-f0662642996b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:24:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:36.577 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b in datapath a584cda0-df11-4171-9687-b79f1d3fe460 unbound from our chassis#033[00m
Jan 31 03:24:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:36.580 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a584cda0-df11-4171-9687-b79f1d3fe460, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:24:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:36.580 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[05c405a7-8432-48eb-89ff-66daa54f5dfa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:36.581 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460 namespace which is not needed anymore#033[00m
Jan 31 03:24:36 np0005603609 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Jan 31 03:24:36 np0005603609 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d0000009a.scope: Consumed 13.428s CPU time.
Jan 31 03:24:36 np0005603609 systemd-machined[190912]: Machine qemu-82-instance-0000009a terminated.
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.627 221554 DEBUG oslo_concurrency.lockutils [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.627 221554 DEBUG oslo_concurrency.lockutils [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.628 221554 INFO nova.compute.manager [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Attaching volume c7efb5e6-4cbc-4621-8f4f-a1a945d0bb8c to /dev/vdb#033[00m
Jan 31 03:24:36 np0005603609 neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460[284562]: [NOTICE]   (284566) : haproxy version is 2.8.14-c23fe91
Jan 31 03:24:36 np0005603609 neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460[284562]: [NOTICE]   (284566) : path to executable is /usr/sbin/haproxy
Jan 31 03:24:36 np0005603609 neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460[284562]: [WARNING]  (284566) : Exiting Master process...
Jan 31 03:24:36 np0005603609 neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460[284562]: [ALERT]    (284566) : Current worker (284568) exited with code 143 (Terminated)
Jan 31 03:24:36 np0005603609 neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460[284562]: [WARNING]  (284566) : All workers exited. Exiting... (0)
Jan 31 03:24:36 np0005603609 systemd[1]: libpod-1ecc78b748cafa008deac3df3f2012a4b4cf335d66c6c2aa94ec78828e2cf2f9.scope: Deactivated successfully.
Jan 31 03:24:36 np0005603609 podman[285592]: 2026-01-31 08:24:36.71172702 +0000 UTC m=+0.045181166 container died 1ecc78b748cafa008deac3df3f2012a4b4cf335d66c6c2aa94ec78828e2cf2f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 03:24:36 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ecc78b748cafa008deac3df3f2012a4b4cf335d66c6c2aa94ec78828e2cf2f9-userdata-shm.mount: Deactivated successfully.
Jan 31 03:24:36 np0005603609 systemd[1]: var-lib-containers-storage-overlay-e9175b630175dba05bcb5ace79c337acc47e81d28525171f219a851a4d4522eb-merged.mount: Deactivated successfully.
Jan 31 03:24:36 np0005603609 podman[285592]: 2026-01-31 08:24:36.754445887 +0000 UTC m=+0.087900073 container cleanup 1ecc78b748cafa008deac3df3f2012a4b4cf335d66c6c2aa94ec78828e2cf2f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:24:36 np0005603609 systemd[1]: libpod-conmon-1ecc78b748cafa008deac3df3f2012a4b4cf335d66c6c2aa94ec78828e2cf2f9.scope: Deactivated successfully.
Jan 31 03:24:36 np0005603609 podman[285622]: 2026-01-31 08:24:36.829687043 +0000 UTC m=+0.057779758 container remove 1ecc78b748cafa008deac3df3f2012a4b4cf335d66c6c2aa94ec78828e2cf2f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:24:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:36.834 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[adbbdba2-5681-4410-8fa2-82b94e8bffda]: (4, ('Sat Jan 31 08:24:36 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460 (1ecc78b748cafa008deac3df3f2012a4b4cf335d66c6c2aa94ec78828e2cf2f9)\n1ecc78b748cafa008deac3df3f2012a4b4cf335d66c6c2aa94ec78828e2cf2f9\nSat Jan 31 08:24:36 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460 (1ecc78b748cafa008deac3df3f2012a4b4cf335d66c6c2aa94ec78828e2cf2f9)\n1ecc78b748cafa008deac3df3f2012a4b4cf335d66c6c2aa94ec78828e2cf2f9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:36.835 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[28129ac2-49ef-4625-ba9f-ac3915e8d103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:36.836 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa584cda0-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.838 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:36 np0005603609 kernel: tapa584cda0-d0: left promiscuous mode
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.846 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:36.849 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf60de1-85d0-44b0-9a89-b5a2647e659d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:36.875 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bf44b892-7eb0-4823-9076-cb01dc368af7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:36.877 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bb614dc4-ad8a-4141-be00-08892af8c670]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:36.895 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7dbd3f-f5f4-4700-b9d7-60d54df505d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 808206, 'reachable_time': 39807, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285652, 'error': None, 'target': 'ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:36 np0005603609 systemd[1]: run-netns-ovnmeta\x2da584cda0\x2ddf11\x2d4171\x2d9687\x2db79f1d3fe460.mount: Deactivated successfully.
Jan 31 03:24:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:36.901 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a584cda0-df11-4171-9687-b79f1d3fe460 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:24:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:36.901 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[4db6b1a9-69d5-4035-ad57-e4f41333acd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.911 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.930 221554 DEBUG os_brick.utils [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.931 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.945 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.946 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4a5a8f-23bd-423b-963f-82ce838d1eed]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.948 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.957 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.958 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3c231f-958e-487d-bea6-9ea796bb5817]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.960 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.971 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.971 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1d8199-465f-4372-9b8a-cabb15ef6297]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.973 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[7b48bc61-1284-49c2-bd82-4dc2e75bf9a5]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:36 np0005603609 nova_compute[221550]: 2026-01-31 08:24:36.974 221554 DEBUG oslo_concurrency.processutils [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.009 221554 DEBUG oslo_concurrency.processutils [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.013 221554 DEBUG os_brick.initiator.connectors.lightos [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.013 221554 DEBUG os_brick.initiator.connectors.lightos [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.014 221554 DEBUG os_brick.initiator.connectors.lightos [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.014 221554 DEBUG os_brick.utils [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] <== get_connector_properties: return (83ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.015 221554 DEBUG nova.virt.block_device [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Updating existing volume attachment record: 98e91211-70ba-46de-8359-01cc8f145c58 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.030 221554 DEBUG nova.compute.manager [req-358be447-cf41-47eb-9ed2-29c2c193bab6 req-73614ec5-cbcc-401a-9436-cfca1c1d4653 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-vif-unplugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.030 221554 DEBUG oslo_concurrency.lockutils [req-358be447-cf41-47eb-9ed2-29c2c193bab6 req-73614ec5-cbcc-401a-9436-cfca1c1d4653 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.031 221554 DEBUG oslo_concurrency.lockutils [req-358be447-cf41-47eb-9ed2-29c2c193bab6 req-73614ec5-cbcc-401a-9436-cfca1c1d4653 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.031 221554 DEBUG oslo_concurrency.lockutils [req-358be447-cf41-47eb-9ed2-29c2c193bab6 req-73614ec5-cbcc-401a-9436-cfca1c1d4653 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.031 221554 DEBUG nova.compute.manager [req-358be447-cf41-47eb-9ed2-29c2c193bab6 req-73614ec5-cbcc-401a-9436-cfca1c1d4653 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] No waiting events found dispatching network-vif-unplugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.032 221554 WARNING nova.compute.manager [req-358be447-cf41-47eb-9ed2-29c2c193bab6 req-73614ec5-cbcc-401a-9436-cfca1c1d4653 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received unexpected event network-vif-unplugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.035 221554 INFO nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.041 221554 INFO nova.virt.libvirt.driver [-] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Instance destroyed successfully.#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.042 221554 DEBUG nova.virt.libvirt.vif [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:23:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2050667150',display_name='tempest-TestNetworkAdvancedServerOps-server-2050667150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2050667150',id=154,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEPErrFuMatngmbPFI9nTUxaXvpce6NcVYOjpFna1bVYBBpAXT5DWY6THfcwtLeuI/NzxTjJ2/S962+d7YMt9k52/1Ce4TFS9tLoM3ovnpGdPHYwJISH1c/E+TlOqwaBJw==',key_name='tempest-TestNetworkAdvancedServerOps-1275181138',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:24:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-wq345vcr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:24:30Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=90aa4e13-650f-43f2-8ebe-19a34e0cc605,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--863310389", "vif_mac": "fa:16:3e:b4:62:8a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.043 221554 DEBUG nova.network.os_vif_util [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converting VIF {"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--863310389", "vif_mac": "fa:16:3e:b4:62:8a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.043 221554 DEBUG nova.network.os_vif_util [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.044 221554 DEBUG os_vif [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.045 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.046 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d9d74bf-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.065 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.071 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.074 221554 INFO os_vif [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf')#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.079 221554 DEBUG nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.079 221554 DEBUG nova.virt.libvirt.driver [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.479 221554 DEBUG neutronclient.v2_0.client [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.707 221554 DEBUG oslo_concurrency.lockutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.707 221554 DEBUG oslo_concurrency.lockutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:37 np0005603609 nova_compute[221550]: 2026-01-31 08:24:37.707 221554 DEBUG oslo_concurrency.lockutils [None req-3c9ce93b-3653-40b2-84b2-c234da2af10a af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:38.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:24:38 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1950468783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:24:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:38.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:38 np0005603609 nova_compute[221550]: 2026-01-31 08:24:38.445 221554 DEBUG nova.objects.instance [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'flavor' on Instance uuid bebd0cd6-4043-4283-aee6-2b2d313ca46f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:24:38 np0005603609 nova_compute[221550]: 2026-01-31 08:24:38.480 221554 DEBUG nova.virt.libvirt.driver [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Attempting to attach volume c7efb5e6-4cbc-4621-8f4f-a1a945d0bb8c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 03:24:38 np0005603609 nova_compute[221550]: 2026-01-31 08:24:38.483 221554 DEBUG nova.virt.libvirt.guest [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 03:24:38 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:24:38 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-c7efb5e6-4cbc-4621-8f4f-a1a945d0bb8c">
Jan 31 03:24:38 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:24:38 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:24:38 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:24:38 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:24:38 np0005603609 nova_compute[221550]:  <auth username="openstack">
Jan 31 03:24:38 np0005603609 nova_compute[221550]:    <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:24:38 np0005603609 nova_compute[221550]:  </auth>
Jan 31 03:24:38 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:24:38 np0005603609 nova_compute[221550]:  <serial>c7efb5e6-4cbc-4621-8f4f-a1a945d0bb8c</serial>
Jan 31 03:24:38 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:24:38 np0005603609 nova_compute[221550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:24:38 np0005603609 nova_compute[221550]: 2026-01-31 08:24:38.617 221554 DEBUG nova.virt.libvirt.driver [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:24:38 np0005603609 nova_compute[221550]: 2026-01-31 08:24:38.618 221554 DEBUG nova.virt.libvirt.driver [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:24:38 np0005603609 nova_compute[221550]: 2026-01-31 08:24:38.618 221554 DEBUG nova.virt.libvirt.driver [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:24:38 np0005603609 nova_compute[221550]: 2026-01-31 08:24:38.619 221554 DEBUG nova.virt.libvirt.driver [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No VIF found with MAC fa:16:3e:85:2a:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:24:39 np0005603609 nova_compute[221550]: 2026-01-31 08:24:39.085 221554 DEBUG oslo_concurrency.lockutils [None req-de27d02e-6fdc-45c9-a336-17218f0a8a68 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:39 np0005603609 nova_compute[221550]: 2026-01-31 08:24:39.176 221554 DEBUG nova.compute.manager [req-54ba9a52-db2f-410a-807c-2072faa15a4e req-e99e561c-cf3e-4f84-a359-bf0be951abfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:24:39 np0005603609 nova_compute[221550]: 2026-01-31 08:24:39.176 221554 DEBUG oslo_concurrency.lockutils [req-54ba9a52-db2f-410a-807c-2072faa15a4e req-e99e561c-cf3e-4f84-a359-bf0be951abfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:39 np0005603609 nova_compute[221550]: 2026-01-31 08:24:39.177 221554 DEBUG oslo_concurrency.lockutils [req-54ba9a52-db2f-410a-807c-2072faa15a4e req-e99e561c-cf3e-4f84-a359-bf0be951abfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:39 np0005603609 nova_compute[221550]: 2026-01-31 08:24:39.177 221554 DEBUG oslo_concurrency.lockutils [req-54ba9a52-db2f-410a-807c-2072faa15a4e req-e99e561c-cf3e-4f84-a359-bf0be951abfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:39 np0005603609 nova_compute[221550]: 2026-01-31 08:24:39.177 221554 DEBUG nova.compute.manager [req-54ba9a52-db2f-410a-807c-2072faa15a4e req-e99e561c-cf3e-4f84-a359-bf0be951abfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] No waiting events found dispatching network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:24:39 np0005603609 nova_compute[221550]: 2026-01-31 08:24:39.177 221554 WARNING nova.compute.manager [req-54ba9a52-db2f-410a-807c-2072faa15a4e req-e99e561c-cf3e-4f84-a359-bf0be951abfa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received unexpected event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 03:24:39 np0005603609 nova_compute[221550]: 2026-01-31 08:24:39.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:39 np0005603609 nova_compute[221550]: 2026-01-31 08:24:39.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:24:39 np0005603609 nova_compute[221550]: 2026-01-31 08:24:39.699 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:24:39 np0005603609 nova_compute[221550]: 2026-01-31 08:24:39.700 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:40 np0005603609 nova_compute[221550]: 2026-01-31 08:24:40.034 221554 DEBUG nova.compute.manager [req-d8cca14d-cb4c-41db-9eef-41858024d908 req-c3ff7a34-6412-41e0-b9f0-ab64863733e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-changed-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:24:40 np0005603609 nova_compute[221550]: 2026-01-31 08:24:40.034 221554 DEBUG nova.compute.manager [req-d8cca14d-cb4c-41db-9eef-41858024d908 req-c3ff7a34-6412-41e0-b9f0-ab64863733e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Refreshing instance network info cache due to event network-changed-7d9d74bf-cfe7-4c4d-aaec-f0662642996b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:24:40 np0005603609 nova_compute[221550]: 2026-01-31 08:24:40.035 221554 DEBUG oslo_concurrency.lockutils [req-d8cca14d-cb4c-41db-9eef-41858024d908 req-c3ff7a34-6412-41e0-b9f0-ab64863733e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:24:40 np0005603609 nova_compute[221550]: 2026-01-31 08:24:40.035 221554 DEBUG oslo_concurrency.lockutils [req-d8cca14d-cb4c-41db-9eef-41858024d908 req-c3ff7a34-6412-41e0-b9f0-ab64863733e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:24:40 np0005603609 nova_compute[221550]: 2026-01-31 08:24:40.035 221554 DEBUG nova.network.neutron [req-d8cca14d-cb4c-41db-9eef-41858024d908 req-c3ff7a34-6412-41e0-b9f0-ab64863733e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Refreshing network info cache for port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:24:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:40.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:40.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:40 np0005603609 nova_compute[221550]: 2026-01-31 08:24:40.417 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:40 np0005603609 ceph-osd[79083]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 31 03:24:41 np0005603609 nova_compute[221550]: 2026-01-31 08:24:41.664 221554 DEBUG oslo_concurrency.lockutils [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:41 np0005603609 nova_compute[221550]: 2026-01-31 08:24:41.664 221554 DEBUG oslo_concurrency.lockutils [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:41 np0005603609 nova_compute[221550]: 2026-01-31 08:24:41.692 221554 DEBUG nova.objects.instance [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'flavor' on Instance uuid bebd0cd6-4043-4283-aee6-2b2d313ca46f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:24:41 np0005603609 nova_compute[221550]: 2026-01-31 08:24:41.750 221554 DEBUG oslo_concurrency.lockutils [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:24:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:42.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.066 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:42.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:42.648 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.648 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:42.650 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.681 221554 DEBUG oslo_concurrency.lockutils [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.683 221554 DEBUG oslo_concurrency.lockutils [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.683 221554 INFO nova.compute.manager [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Attaching volume e0ddd33b-2a84-4319-a509-f1419e80aa70 to /dev/vdc#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.745 221554 DEBUG nova.network.neutron [req-d8cca14d-cb4c-41db-9eef-41858024d908 req-c3ff7a34-6412-41e0-b9f0-ab64863733e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updated VIF entry in instance network info cache for port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.747 221554 DEBUG nova.network.neutron [req-d8cca14d-cb4c-41db-9eef-41858024d908 req-c3ff7a34-6412-41e0-b9f0-ab64863733e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updating instance_info_cache with network_info: [{"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.768 221554 DEBUG oslo_concurrency.lockutils [req-d8cca14d-cb4c-41db-9eef-41858024d908 req-c3ff7a34-6412-41e0-b9f0-ab64863733e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.932 221554 DEBUG os_brick.utils [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.933 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.942 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.943 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[578844a5-45d6-4306-b982-316a2116a08d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.944 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.950 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.951 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[99a98028-d6e8-4d59-85ac-72245c9ca06b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.952 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.957 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.958 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[06d48aef-4cd8-4d4f-b184-c7194295c2d4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.959 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f4c094-dc32-4601-88ea-f0f0e5b2b8ad]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.959 221554 DEBUG oslo_concurrency.processutils [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.980 221554 DEBUG oslo_concurrency.processutils [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.983 221554 DEBUG os_brick.initiator.connectors.lightos [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.983 221554 DEBUG os_brick.initiator.connectors.lightos [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.984 221554 DEBUG os_brick.initiator.connectors.lightos [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.984 221554 DEBUG os_brick.utils [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] <== get_connector_properties: return (50ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:24:42 np0005603609 nova_compute[221550]: 2026-01-31 08:24:42.984 221554 DEBUG nova.virt.block_device [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Updating existing volume attachment record: 40ee5842-f664-4e19-b852-da2cb237beea _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:24:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e352 e352: 3 total, 3 up, 3 in
Jan 31 03:24:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:43 np0005603609 nova_compute[221550]: 2026-01-31 08:24:43.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:43 np0005603609 nova_compute[221550]: 2026-01-31 08:24:43.717 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:43 np0005603609 nova_compute[221550]: 2026-01-31 08:24:43.717 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:43 np0005603609 nova_compute[221550]: 2026-01-31 08:24:43.718 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:43 np0005603609 nova_compute[221550]: 2026-01-31 08:24:43.718 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:24:43 np0005603609 nova_compute[221550]: 2026-01-31 08:24:43.718 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.017 221554 DEBUG nova.objects.instance [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'flavor' on Instance uuid bebd0cd6-4043-4283-aee6-2b2d313ca46f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:24:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:44.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.075 221554 DEBUG nova.virt.libvirt.driver [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Attempting to attach volume e0ddd33b-2a84-4319-a509-f1419e80aa70 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.079 221554 DEBUG nova.virt.libvirt.guest [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 03:24:44 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:24:44 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-e0ddd33b-2a84-4319-a509-f1419e80aa70">
Jan 31 03:24:44 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:24:44 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:24:44 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:24:44 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:24:44 np0005603609 nova_compute[221550]:  <auth username="openstack">
Jan 31 03:24:44 np0005603609 nova_compute[221550]:    <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:24:44 np0005603609 nova_compute[221550]:  </auth>
Jan 31 03:24:44 np0005603609 nova_compute[221550]:  <target dev="vdc" bus="virtio"/>
Jan 31 03:24:44 np0005603609 nova_compute[221550]:  <serial>e0ddd33b-2a84-4319-a509-f1419e80aa70</serial>
Jan 31 03:24:44 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:24:44 np0005603609 nova_compute[221550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:24:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:24:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/933169485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.119 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:24:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:44.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.338 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.338 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.344 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.344 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.344 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.344 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.345 221554 DEBUG nova.virt.libvirt.driver [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.346 221554 DEBUG nova.virt.libvirt.driver [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.346 221554 DEBUG nova.virt.libvirt.driver [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.346 221554 DEBUG nova.virt.libvirt.driver [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.346 221554 DEBUG nova.virt.libvirt.driver [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] No VIF found with MAC fa:16:3e:85:2a:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.350 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.351 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.518 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.519 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4011MB free_disk=20.8057861328125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.520 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.520 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.718 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Migration for instance 90aa4e13-650f-43f2-8ebe-19a34e0cc605 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.754 221554 DEBUG oslo_concurrency.lockutils [None req-34a0d55f-ceb2-47e7-8558-ee18a5308037 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.763 221554 INFO nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updating resource usage from migration 28009c64-41b3-4fe9-854a-e346c8d0b39b#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.764 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Starting to track outgoing migration 28009c64-41b3-4fe9-854a-e346c8d0b39b with flavor fea01737-128b-41fa-a695-aaaa6e96e4b2 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.831 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 946cb648-0758-4617-bd3a-142804fd70f7 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.832 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance bebd0cd6-4043-4283-aee6-2b2d313ca46f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.832 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Migration 28009c64-41b3-4fe9-854a-e346c8d0b39b is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.832 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.833 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:24:44 np0005603609 nova_compute[221550]: 2026-01-31 08:24:44.961 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:45 np0005603609 nova_compute[221550]: 2026-01-31 08:24:45.420 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:24:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2806860288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:24:45 np0005603609 nova_compute[221550]: 2026-01-31 08:24:45.448 221554 DEBUG nova.compute.manager [req-3c857b05-2f30-449a-942e-dab85dd6c4f0 req-0786e5f6-8062-4ff8-8d32-369da0e19b7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:24:45 np0005603609 nova_compute[221550]: 2026-01-31 08:24:45.449 221554 DEBUG oslo_concurrency.lockutils [req-3c857b05-2f30-449a-942e-dab85dd6c4f0 req-0786e5f6-8062-4ff8-8d32-369da0e19b7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:45 np0005603609 nova_compute[221550]: 2026-01-31 08:24:45.450 221554 DEBUG oslo_concurrency.lockutils [req-3c857b05-2f30-449a-942e-dab85dd6c4f0 req-0786e5f6-8062-4ff8-8d32-369da0e19b7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:45 np0005603609 nova_compute[221550]: 2026-01-31 08:24:45.450 221554 DEBUG oslo_concurrency.lockutils [req-3c857b05-2f30-449a-942e-dab85dd6c4f0 req-0786e5f6-8062-4ff8-8d32-369da0e19b7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:45 np0005603609 nova_compute[221550]: 2026-01-31 08:24:45.451 221554 DEBUG nova.compute.manager [req-3c857b05-2f30-449a-942e-dab85dd6c4f0 req-0786e5f6-8062-4ff8-8d32-369da0e19b7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] No waiting events found dispatching network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:24:45 np0005603609 nova_compute[221550]: 2026-01-31 08:24:45.451 221554 WARNING nova.compute.manager [req-3c857b05-2f30-449a-942e-dab85dd6c4f0 req-0786e5f6-8062-4ff8-8d32-369da0e19b7c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received unexpected event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b for instance with vm_state active and task_state resize_finish.#033[00m
Jan 31 03:24:45 np0005603609 nova_compute[221550]: 2026-01-31 08:24:45.457 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:45 np0005603609 nova_compute[221550]: 2026-01-31 08:24:45.465 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:24:45 np0005603609 nova_compute[221550]: 2026-01-31 08:24:45.542 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:24:45 np0005603609 nova_compute[221550]: 2026-01-31 08:24:45.614 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:24:45 np0005603609 nova_compute[221550]: 2026-01-31 08:24:45.614 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:46.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:46.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.230 221554 DEBUG oslo_concurrency.lockutils [None req-8e503b31-21da-4a1e-9ce3-9389ca86474f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.230 221554 DEBUG oslo_concurrency.lockutils [None req-8e503b31-21da-4a1e-9ce3-9389ca86474f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.250 221554 INFO nova.compute.manager [None req-8e503b31-21da-4a1e-9ce3-9389ca86474f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Detaching volume c7efb5e6-4cbc-4621-8f4f-a1a945d0bb8c#033[00m
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.419 221554 INFO nova.virt.block_device [None req-8e503b31-21da-4a1e-9ce3-9389ca86474f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Attempting to driver detach volume c7efb5e6-4cbc-4621-8f4f-a1a945d0bb8c from mountpoint /dev/vdb#033[00m
Jan 31 03:24:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e353 e353: 3 total, 3 up, 3 in
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.433 221554 DEBUG nova.virt.libvirt.driver [None req-8e503b31-21da-4a1e-9ce3-9389ca86474f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Attempting to detach device vdb from instance bebd0cd6-4043-4283-aee6-2b2d313ca46f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.434 221554 DEBUG nova.virt.libvirt.guest [None req-8e503b31-21da-4a1e-9ce3-9389ca86474f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:24:46 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:24:46 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-c7efb5e6-4cbc-4621-8f4f-a1a945d0bb8c">
Jan 31 03:24:46 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:24:46 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:24:46 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:24:46 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:24:46 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:24:46 np0005603609 nova_compute[221550]:  <serial>c7efb5e6-4cbc-4621-8f4f-a1a945d0bb8c</serial>
Jan 31 03:24:46 np0005603609 nova_compute[221550]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:24:46 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:24:46 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.449 221554 INFO nova.virt.libvirt.driver [None req-8e503b31-21da-4a1e-9ce3-9389ca86474f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Successfully detached device vdb from instance bebd0cd6-4043-4283-aee6-2b2d313ca46f from the persistent domain config.#033[00m
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.451 221554 DEBUG nova.virt.libvirt.driver [None req-8e503b31-21da-4a1e-9ce3-9389ca86474f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance bebd0cd6-4043-4283-aee6-2b2d313ca46f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.452 221554 DEBUG nova.virt.libvirt.guest [None req-8e503b31-21da-4a1e-9ce3-9389ca86474f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:24:46 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:24:46 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-c7efb5e6-4cbc-4621-8f4f-a1a945d0bb8c">
Jan 31 03:24:46 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:24:46 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:24:46 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:24:46 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:24:46 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:24:46 np0005603609 nova_compute[221550]:  <serial>c7efb5e6-4cbc-4621-8f4f-a1a945d0bb8c</serial>
Jan 31 03:24:46 np0005603609 nova_compute[221550]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:24:46 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:24:46 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.523 221554 DEBUG nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Received event <DeviceRemovedEvent: 1769847886.523016, bebd0cd6-4043-4283-aee6-2b2d313ca46f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.525 221554 DEBUG nova.virt.libvirt.driver [None req-8e503b31-21da-4a1e-9ce3-9389ca86474f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance bebd0cd6-4043-4283-aee6-2b2d313ca46f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.528 221554 INFO nova.virt.libvirt.driver [None req-8e503b31-21da-4a1e-9ce3-9389ca86474f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Successfully detached device vdb from instance bebd0cd6-4043-4283-aee6-2b2d313ca46f from the live domain config.#033[00m
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.615 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.616 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.616 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.805 221554 DEBUG nova.objects.instance [None req-8e503b31-21da-4a1e-9ce3-9389ca86474f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'flavor' on Instance uuid bebd0cd6-4043-4283-aee6-2b2d313ca46f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:24:46 np0005603609 nova_compute[221550]: 2026-01-31 08:24:46.898 221554 DEBUG oslo_concurrency.lockutils [None req-8e503b31-21da-4a1e-9ce3-9389ca86474f 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:47 np0005603609 nova_compute[221550]: 2026-01-31 08:24:47.068 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:47 np0005603609 nova_compute[221550]: 2026-01-31 08:24:47.071 221554 DEBUG oslo_concurrency.lockutils [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:47 np0005603609 nova_compute[221550]: 2026-01-31 08:24:47.072 221554 DEBUG oslo_concurrency.lockutils [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:47 np0005603609 nova_compute[221550]: 2026-01-31 08:24:47.072 221554 DEBUG nova.compute.manager [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Going to confirm migration 19 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Jan 31 03:24:47 np0005603609 nova_compute[221550]: 2026-01-31 08:24:47.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:47 np0005603609 nova_compute[221550]: 2026-01-31 08:24:47.773 221554 DEBUG nova.compute.manager [req-bf1791fa-170b-4308-ad7e-a20752132e25 req-3e569f23-76d0-4530-b65f-54b67310bb2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:24:47 np0005603609 nova_compute[221550]: 2026-01-31 08:24:47.774 221554 DEBUG oslo_concurrency.lockutils [req-bf1791fa-170b-4308-ad7e-a20752132e25 req-3e569f23-76d0-4530-b65f-54b67310bb2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:47 np0005603609 nova_compute[221550]: 2026-01-31 08:24:47.774 221554 DEBUG oslo_concurrency.lockutils [req-bf1791fa-170b-4308-ad7e-a20752132e25 req-3e569f23-76d0-4530-b65f-54b67310bb2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:47 np0005603609 nova_compute[221550]: 2026-01-31 08:24:47.775 221554 DEBUG oslo_concurrency.lockutils [req-bf1791fa-170b-4308-ad7e-a20752132e25 req-3e569f23-76d0-4530-b65f-54b67310bb2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:47 np0005603609 nova_compute[221550]: 2026-01-31 08:24:47.775 221554 DEBUG nova.compute.manager [req-bf1791fa-170b-4308-ad7e-a20752132e25 req-3e569f23-76d0-4530-b65f-54b67310bb2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] No waiting events found dispatching network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:24:47 np0005603609 nova_compute[221550]: 2026-01-31 08:24:47.776 221554 WARNING nova.compute.manager [req-bf1791fa-170b-4308-ad7e-a20752132e25 req-3e569f23-76d0-4530-b65f-54b67310bb2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Received unexpected event network-vif-plugged-7d9d74bf-cfe7-4c4d-aaec-f0662642996b for instance with vm_state resized and task_state None.#033[00m
Jan 31 03:24:48 np0005603609 nova_compute[221550]: 2026-01-31 08:24:48.029 221554 DEBUG neutronclient.v2_0.client [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 7d9d74bf-cfe7-4c4d-aaec-f0662642996b for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 31 03:24:48 np0005603609 nova_compute[221550]: 2026-01-31 08:24:48.030 221554 DEBUG oslo_concurrency.lockutils [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:24:48 np0005603609 nova_compute[221550]: 2026-01-31 08:24:48.030 221554 DEBUG oslo_concurrency.lockutils [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:24:48 np0005603609 nova_compute[221550]: 2026-01-31 08:24:48.031 221554 DEBUG nova.network.neutron [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:24:48 np0005603609 nova_compute[221550]: 2026-01-31 08:24:48.031 221554 DEBUG nova.objects.instance [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'info_cache' on Instance uuid 90aa4e13-650f-43f2-8ebe-19a34e0cc605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:24:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:48.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:48.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:48 np0005603609 nova_compute[221550]: 2026-01-31 08:24:48.649 221554 DEBUG oslo_concurrency.lockutils [None req-b7fe6138-ffc0-41ff-b6a7-247e456a6a67 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:48 np0005603609 nova_compute[221550]: 2026-01-31 08:24:48.650 221554 DEBUG oslo_concurrency.lockutils [None req-b7fe6138-ffc0-41ff-b6a7-247e456a6a67 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:48 np0005603609 nova_compute[221550]: 2026-01-31 08:24:48.690 221554 INFO nova.compute.manager [None req-b7fe6138-ffc0-41ff-b6a7-247e456a6a67 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Detaching volume e0ddd33b-2a84-4319-a509-f1419e80aa70#033[00m
Jan 31 03:24:48 np0005603609 nova_compute[221550]: 2026-01-31 08:24:48.943 221554 INFO nova.virt.block_device [None req-b7fe6138-ffc0-41ff-b6a7-247e456a6a67 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Attempting to driver detach volume e0ddd33b-2a84-4319-a509-f1419e80aa70 from mountpoint /dev/vdc#033[00m
Jan 31 03:24:48 np0005603609 nova_compute[221550]: 2026-01-31 08:24:48.953 221554 DEBUG nova.virt.libvirt.driver [None req-b7fe6138-ffc0-41ff-b6a7-247e456a6a67 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Attempting to detach device vdc from instance bebd0cd6-4043-4283-aee6-2b2d313ca46f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:24:48 np0005603609 nova_compute[221550]: 2026-01-31 08:24:48.954 221554 DEBUG nova.virt.libvirt.guest [None req-b7fe6138-ffc0-41ff-b6a7-247e456a6a67 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:24:48 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:24:48 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-e0ddd33b-2a84-4319-a509-f1419e80aa70">
Jan 31 03:24:48 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:24:48 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:24:48 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:24:48 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:24:48 np0005603609 nova_compute[221550]:  <target dev="vdc" bus="virtio"/>
Jan 31 03:24:48 np0005603609 nova_compute[221550]:  <serial>e0ddd33b-2a84-4319-a509-f1419e80aa70</serial>
Jan 31 03:24:48 np0005603609 nova_compute[221550]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 31 03:24:48 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:24:48 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:24:49 np0005603609 nova_compute[221550]: 2026-01-31 08:24:49.082 221554 INFO nova.virt.libvirt.driver [None req-b7fe6138-ffc0-41ff-b6a7-247e456a6a67 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Successfully detached device vdc from instance bebd0cd6-4043-4283-aee6-2b2d313ca46f from the persistent domain config.#033[00m
Jan 31 03:24:49 np0005603609 nova_compute[221550]: 2026-01-31 08:24:49.083 221554 DEBUG nova.virt.libvirt.driver [None req-b7fe6138-ffc0-41ff-b6a7-247e456a6a67 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance bebd0cd6-4043-4283-aee6-2b2d313ca46f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 03:24:49 np0005603609 nova_compute[221550]: 2026-01-31 08:24:49.083 221554 DEBUG nova.virt.libvirt.guest [None req-b7fe6138-ffc0-41ff-b6a7-247e456a6a67 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:24:49 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:24:49 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-e0ddd33b-2a84-4319-a509-f1419e80aa70">
Jan 31 03:24:49 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:24:49 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:24:49 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:24:49 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:24:49 np0005603609 nova_compute[221550]:  <target dev="vdc" bus="virtio"/>
Jan 31 03:24:49 np0005603609 nova_compute[221550]:  <serial>e0ddd33b-2a84-4319-a509-f1419e80aa70</serial>
Jan 31 03:24:49 np0005603609 nova_compute[221550]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 31 03:24:49 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:24:49 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:24:49 np0005603609 nova_compute[221550]: 2026-01-31 08:24:49.186 221554 DEBUG nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Received event <DeviceRemovedEvent: 1769847889.1857767, bebd0cd6-4043-4283-aee6-2b2d313ca46f => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 03:24:49 np0005603609 nova_compute[221550]: 2026-01-31 08:24:49.188 221554 DEBUG nova.virt.libvirt.driver [None req-b7fe6138-ffc0-41ff-b6a7-247e456a6a67 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance bebd0cd6-4043-4283-aee6-2b2d313ca46f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 03:24:49 np0005603609 nova_compute[221550]: 2026-01-31 08:24:49.191 221554 INFO nova.virt.libvirt.driver [None req-b7fe6138-ffc0-41ff-b6a7-247e456a6a67 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Successfully detached device vdc from instance bebd0cd6-4043-4283-aee6-2b2d313ca46f from the live domain config.#033[00m
Jan 31 03:24:49 np0005603609 nova_compute[221550]: 2026-01-31 08:24:49.476 221554 DEBUG nova.objects.instance [None req-b7fe6138-ffc0-41ff-b6a7-247e456a6a67 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'flavor' on Instance uuid bebd0cd6-4043-4283-aee6-2b2d313ca46f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:24:49 np0005603609 nova_compute[221550]: 2026-01-31 08:24:49.556 221554 DEBUG oslo_concurrency.lockutils [None req-b7fe6138-ffc0-41ff-b6a7-247e456a6a67 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:50.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:50.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:50 np0005603609 nova_compute[221550]: 2026-01-31 08:24:50.307 221554 DEBUG nova.network.neutron [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Updating instance_info_cache with network_info: [{"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:24:50 np0005603609 nova_compute[221550]: 2026-01-31 08:24:50.357 221554 DEBUG oslo_concurrency.lockutils [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-90aa4e13-650f-43f2-8ebe-19a34e0cc605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:24:50 np0005603609 nova_compute[221550]: 2026-01-31 08:24:50.358 221554 DEBUG nova.objects.instance [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'migration_context' on Instance uuid 90aa4e13-650f-43f2-8ebe-19a34e0cc605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:24:50 np0005603609 nova_compute[221550]: 2026-01-31 08:24:50.445 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:50 np0005603609 nova_compute[221550]: 2026-01-31 08:24:50.485 221554 DEBUG nova.storage.rbd_utils [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] removing snapshot(nova-resize) on rbd image(90aa4e13-650f-43f2-8ebe-19a34e0cc605_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:24:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:50.653 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e354 e354: 3 total, 3 up, 3 in
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.256 221554 DEBUG nova.virt.libvirt.vif [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:23:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2050667150',display_name='tempest-TestNetworkAdvancedServerOps-server-2050667150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2050667150',id=154,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEPErrFuMatngmbPFI9nTUxaXvpce6NcVYOjpFna1bVYBBpAXT5DWY6THfcwtLeuI/NzxTjJ2/S962+d7YMt9k52/1Ce4TFS9tLoM3ovnpGdPHYwJISH1c/E+TlOqwaBJw==',key_name='tempest-TestNetworkAdvancedServerOps-1275181138',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:24:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-wq345vcr',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:24:45Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=90aa4e13-650f-43f2-8ebe-19a34e0cc605,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.257 221554 DEBUG nova.network.os_vif_util [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "address": "fa:16:3e:b4:62:8a", "network": {"id": "a584cda0-df11-4171-9687-b79f1d3fe460", "bridge": "br-int", "label": "tempest-network-smoke--863310389", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d9d74bf-cf", "ovs_interfaceid": "7d9d74bf-cfe7-4c4d-aaec-f0662642996b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.258 221554 DEBUG nova.network.os_vif_util [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.258 221554 DEBUG os_vif [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.260 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.260 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d9d74bf-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.261 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.263 221554 INFO os_vif [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:62:8a,bridge_name='br-int',has_traffic_filtering=True,id=7d9d74bf-cfe7-4c4d-aaec-f0662642996b,network=Network(a584cda0-df11-4171-9687-b79f1d3fe460),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d9d74bf-cf')#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.263 221554 DEBUG oslo_concurrency.lockutils [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.263 221554 DEBUG oslo_concurrency.lockutils [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.396 221554 DEBUG oslo_concurrency.lockutils [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.397 221554 DEBUG oslo_concurrency.lockutils [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.397 221554 DEBUG oslo_concurrency.lockutils [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.398 221554 DEBUG oslo_concurrency.lockutils [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.398 221554 DEBUG oslo_concurrency.lockutils [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.399 221554 INFO nova.compute.manager [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Terminating instance#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.400 221554 DEBUG nova.compute.manager [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.448 221554 DEBUG oslo_concurrency.processutils [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:51 np0005603609 kernel: tap23692aac-7e (unregistering): left promiscuous mode
Jan 31 03:24:51 np0005603609 NetworkManager[49064]: <info>  [1769847891.4658] device (tap23692aac-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.495 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:51 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:51Z|00704|binding|INFO|Releasing lport 23692aac-7e72-4966-922e-d30b3648f957 from this chassis (sb_readonly=0)
Jan 31 03:24:51 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:51Z|00705|binding|INFO|Setting lport 23692aac-7e72-4966-922e-d30b3648f957 down in Southbound
Jan 31 03:24:51 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:51Z|00706|binding|INFO|Removing iface tap23692aac-7e ovn-installed in OVS
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.497 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.504 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.515 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:2a:cb 10.100.0.7'], port_security=['fa:16:3e:85:2a:cb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bebd0cd6-4043-4283-aee6-2b2d313ca46f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e39ff1d-f815-485f-8f43-698b31333bba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fdf18f1faf4846e2a6e2eab4ac2aec02', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c78fb75-185d-4975-afc9-20585c19198d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=917c1732-02f7-4e55-ac45-7e04e3147221, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=23692aac-7e72-4966-922e-d30b3648f957) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.516 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 23692aac-7e72-4966-922e-d30b3648f957 in datapath 1e39ff1d-f815-485f-8f43-698b31333bba unbound from our chassis#033[00m
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.517 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1e39ff1d-f815-485f-8f43-698b31333bba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.518 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[32daa23c-401d-4565-a39e-7fe83be9e281]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.519 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba namespace which is not needed anymore#033[00m
Jan 31 03:24:51 np0005603609 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d0000009b.scope: Deactivated successfully.
Jan 31 03:24:51 np0005603609 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d0000009b.scope: Consumed 14.309s CPU time.
Jan 31 03:24:51 np0005603609 systemd-machined[190912]: Machine qemu-83-instance-0000009b terminated.
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.629 221554 INFO nova.virt.libvirt.driver [-] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Instance destroyed successfully.#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.630 221554 DEBUG nova.objects.instance [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lazy-loading 'resources' on Instance uuid bebd0cd6-4043-4283-aee6-2b2d313ca46f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:24:51 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[285430]: [NOTICE]   (285434) : haproxy version is 2.8.14-c23fe91
Jan 31 03:24:51 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[285430]: [NOTICE]   (285434) : path to executable is /usr/sbin/haproxy
Jan 31 03:24:51 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[285430]: [ALERT]    (285434) : Current worker (285437) exited with code 143 (Terminated)
Jan 31 03:24:51 np0005603609 neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba[285430]: [WARNING]  (285434) : All workers exited. Exiting... (0)
Jan 31 03:24:51 np0005603609 systemd[1]: libpod-51e71ad5db169f31834229bed57c32b4f3c368a952585a8dcfe77c0934b56668.scope: Deactivated successfully.
Jan 31 03:24:51 np0005603609 podman[285833]: 2026-01-31 08:24:51.656152344 +0000 UTC m=+0.052068763 container died 51e71ad5db169f31834229bed57c32b4f3c368a952585a8dcfe77c0934b56668 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:24:51 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-51e71ad5db169f31834229bed57c32b4f3c368a952585a8dcfe77c0934b56668-userdata-shm.mount: Deactivated successfully.
Jan 31 03:24:51 np0005603609 systemd[1]: var-lib-containers-storage-overlay-955d83b45c8631e036d3f0389456f90cf23b52840882b4c7f5c155dfff42c1fa-merged.mount: Deactivated successfully.
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.676 221554 DEBUG nova.virt.libvirt.vif [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:24:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-578301611',display_name='tempest-AttachVolumeTestJSON-server-578301611',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-578301611',id=155,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIgah4XlK1iSaivNSQoj7hWwU0Mj9BDD4IUHaCjeTMSKow9fYTWPYldQNBozVeKEC4K2KMm4GY9/cXTJzi2EQ4T6BD7d6q/eJr6zsCGqyjzMi8P41TAXB1ir8LA2Wk0h+Q==',key_name='tempest-keypair-890016362',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:24:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fdf18f1faf4846e2a6e2eab4ac2aec02',ramdisk_id='',reservation_id='r-tk0fk5d6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-1821521720',owner_user_name='tempest-AttachVolumeTestJSON-1821521720-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:24:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3ada90dc4b77478cb4b93c63409d8537',uuid=bebd0cd6-4043-4283-aee6-2b2d313ca46f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "23692aac-7e72-4966-922e-d30b3648f957", "address": "fa:16:3e:85:2a:cb", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23692aac-7e", "ovs_interfaceid": "23692aac-7e72-4966-922e-d30b3648f957", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.676 221554 DEBUG nova.network.os_vif_util [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converting VIF {"id": "23692aac-7e72-4966-922e-d30b3648f957", "address": "fa:16:3e:85:2a:cb", "network": {"id": "1e39ff1d-f815-485f-8f43-698b31333bba", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1780713827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fdf18f1faf4846e2a6e2eab4ac2aec02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23692aac-7e", "ovs_interfaceid": "23692aac-7e72-4966-922e-d30b3648f957", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.677 221554 DEBUG nova.network.os_vif_util [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:2a:cb,bridge_name='br-int',has_traffic_filtering=True,id=23692aac-7e72-4966-922e-d30b3648f957,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23692aac-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.678 221554 DEBUG os_vif [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:2a:cb,bridge_name='br-int',has_traffic_filtering=True,id=23692aac-7e72-4966-922e-d30b3648f957,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23692aac-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.680 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.680 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23692aac-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.681 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.682 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.684 221554 INFO os_vif [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:2a:cb,bridge_name='br-int',has_traffic_filtering=True,id=23692aac-7e72-4966-922e-d30b3648f957,network=Network(1e39ff1d-f815-485f-8f43-698b31333bba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23692aac-7e')#033[00m
Jan 31 03:24:51 np0005603609 podman[285833]: 2026-01-31 08:24:51.690051177 +0000 UTC m=+0.085967596 container cleanup 51e71ad5db169f31834229bed57c32b4f3c368a952585a8dcfe77c0934b56668 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:24:51 np0005603609 systemd[1]: libpod-conmon-51e71ad5db169f31834229bed57c32b4f3c368a952585a8dcfe77c0934b56668.scope: Deactivated successfully.
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.735 221554 DEBUG oslo_concurrency.lockutils [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "946cb648-0758-4617-bd3a-142804fd70f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.735 221554 DEBUG oslo_concurrency.lockutils [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.736 221554 DEBUG oslo_concurrency.lockutils [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "946cb648-0758-4617-bd3a-142804fd70f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.736 221554 DEBUG oslo_concurrency.lockutils [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.736 221554 DEBUG oslo_concurrency.lockutils [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.738 221554 INFO nova.compute.manager [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Terminating instance#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.739 221554 DEBUG nova.compute.manager [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:24:51 np0005603609 podman[285879]: 2026-01-31 08:24:51.74514684 +0000 UTC m=+0.035772259 container remove 51e71ad5db169f31834229bed57c32b4f3c368a952585a8dcfe77c0934b56668 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.750 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7a7aef-656f-4d3c-8fbf-727ed9d789b5]: (4, ('Sat Jan 31 08:24:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba (51e71ad5db169f31834229bed57c32b4f3c368a952585a8dcfe77c0934b56668)\n51e71ad5db169f31834229bed57c32b4f3c368a952585a8dcfe77c0934b56668\nSat Jan 31 08:24:51 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba (51e71ad5db169f31834229bed57c32b4f3c368a952585a8dcfe77c0934b56668)\n51e71ad5db169f31834229bed57c32b4f3c368a952585a8dcfe77c0934b56668\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.753 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f21c17-d2a3-48cd-a68f-5bb6593f0cf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.754 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e39ff1d-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:51 np0005603609 kernel: tap1e39ff1d-f0: left promiscuous mode
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.756 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.764 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.768 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[02ee666f-bfd3-49cc-b4af-38b50f0bc154]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:51 np0005603609 kernel: tap36f5a0a6-02 (unregistering): left promiscuous mode
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.779 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c96ed10f-ea31-448d-9b81-64f04b55a727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:51 np0005603609 NetworkManager[49064]: <info>  [1769847891.7814] device (tap36f5a0a6-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.783 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4c7ecf84-b720-464b-8f65-8220b4a19b8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.788 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:51 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:51Z|00707|binding|INFO|Releasing lport 36f5a0a6-029b-4491-8d74-e44ca0e59f7d from this chassis (sb_readonly=0)
Jan 31 03:24:51 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:51Z|00708|binding|INFO|Setting lport 36f5a0a6-029b-4491-8d74-e44ca0e59f7d down in Southbound
Jan 31 03:24:51 np0005603609 ovn_controller[130359]: 2026-01-31T08:24:51Z|00709|binding|INFO|Removing iface tap36f5a0a6-02 ovn-installed in OVS
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.790 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.795 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:0a:0c 10.100.0.8'], port_security=['fa:16:3e:0d:0a:0c 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '946cb648-0758-4617-bd3a-142804fd70f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c90ea7f1be5f484bb873548236fadc00', 'neutron:revision_number': '8', 'neutron:security_group_ids': '952b4f08-f5a7-4fc0-ae2c-267f2ba857a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42261fad-d2a1-4da1-823a-75e271c17223, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=36f5a0a6-029b-4491-8d74-e44ca0e59f7d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.799 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847876.798716, 90aa4e13-650f-43f2-8ebe-19a34e0cc605 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.800 221554 INFO nova.compute.manager [-] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.801 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[90a656cc-ec9c-46f2-8342-f1e77b90ca2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809306, 'reachable_time': 30151, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285910, 'error': None, 'target': 'ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.802 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.803 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1e39ff1d-f815-485f-8f43-698b31333bba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.803 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[d99ae382-e408-464b-be62-915780ea2513]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.804 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 36f5a0a6-029b-4491-8d74-e44ca0e59f7d in datapath 936cead9-bc2f-4c2d-8b4c-6079d2159263 unbound from our chassis#033[00m
Jan 31 03:24:51 np0005603609 systemd[1]: run-netns-ovnmeta\x2d1e39ff1d\x2df815\x2d485f\x2d8f43\x2d698b31333bba.mount: Deactivated successfully.
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.806 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 936cead9-bc2f-4c2d-8b4c-6079d2159263, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.806 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[16e1a3e5-6fae-43d9-b952-b362986d18d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:51.806 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 namespace which is not needed anymore#033[00m
Jan 31 03:24:51 np0005603609 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000095.scope: Deactivated successfully.
Jan 31 03:24:51 np0005603609 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000095.scope: Consumed 1.893s CPU time.
Jan 31 03:24:51 np0005603609 systemd-machined[190912]: Machine qemu-80-instance-00000095 terminated.
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.838 221554 DEBUG nova.compute.manager [None req-10df4f3f-c8a6-4e1e-8965-2dad9d8ca5f3 - - - - - -] [instance: 90aa4e13-650f-43f2-8ebe-19a34e0cc605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:24:51 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[283120]: [NOTICE]   (283124) : haproxy version is 2.8.14-c23fe91
Jan 31 03:24:51 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[283120]: [NOTICE]   (283124) : path to executable is /usr/sbin/haproxy
Jan 31 03:24:51 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[283120]: [WARNING]  (283124) : Exiting Master process...
Jan 31 03:24:51 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[283120]: [WARNING]  (283124) : Exiting Master process...
Jan 31 03:24:51 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[283120]: [ALERT]    (283124) : Current worker (283126) exited with code 143 (Terminated)
Jan 31 03:24:51 np0005603609 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[283120]: [WARNING]  (283124) : All workers exited. Exiting... (0)
Jan 31 03:24:51 np0005603609 systemd[1]: libpod-0644bd4151732c721cc61ecdc293b7fde01575c1653375c3a61cde6a8f158b56.scope: Deactivated successfully.
Jan 31 03:24:51 np0005603609 podman[285927]: 2026-01-31 08:24:51.913328949 +0000 UTC m=+0.043948866 container died 0644bd4151732c721cc61ecdc293b7fde01575c1653375c3a61cde6a8f158b56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:24:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:24:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4119211165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.936 221554 DEBUG oslo_concurrency.processutils [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:51 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0644bd4151732c721cc61ecdc293b7fde01575c1653375c3a61cde6a8f158b56-userdata-shm.mount: Deactivated successfully.
Jan 31 03:24:51 np0005603609 systemd[1]: var-lib-containers-storage-overlay-81daabdb265024e52c01f762951f71940b609b854cbcf21d064ca00a1f78d007-merged.mount: Deactivated successfully.
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.944 221554 DEBUG nova.compute.provider_tree [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:24:51 np0005603609 podman[285927]: 2026-01-31 08:24:51.94918307 +0000 UTC m=+0.079802987 container cleanup 0644bd4151732c721cc61ecdc293b7fde01575c1653375c3a61cde6a8f158b56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:24:51 np0005603609 NetworkManager[49064]: <info>  [1769847891.9525] manager: (tap36f5a0a6-02): new Tun device (/org/freedesktop/NetworkManager/Devices/326)
Jan 31 03:24:51 np0005603609 systemd[1]: libpod-conmon-0644bd4151732c721cc61ecdc293b7fde01575c1653375c3a61cde6a8f158b56.scope: Deactivated successfully.
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.967 221554 DEBUG nova.scheduler.client.report [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.977 221554 INFO nova.virt.libvirt.driver [-] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Instance destroyed successfully.#033[00m
Jan 31 03:24:51 np0005603609 nova_compute[221550]: 2026-01-31 08:24:51.978 221554 DEBUG nova.objects.instance [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'resources' on Instance uuid 946cb648-0758-4617-bd3a-142804fd70f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.004 221554 DEBUG nova.virt.libvirt.vif [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:21:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-2006986215',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-2006986215',id=149,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:22:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c90ea7f1be5f484bb873548236fadc00',ramdisk_id='',reservation_id='r-nvcx2ai9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:22:59Z,user_data=None,user_id='038e2b3b4f174162a3ac6c4870857e60',uuid=946cb648-0758-4617-bd3a-142804fd70f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "address": "fa:16:3e:0d:0a:0c", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f5a0a6-02", "ovs_interfaceid": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.005 221554 DEBUG nova.network.os_vif_util [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converting VIF {"id": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "address": "fa:16:3e:0d:0a:0c", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f5a0a6-02", "ovs_interfaceid": "36f5a0a6-029b-4491-8d74-e44ca0e59f7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:24:52 np0005603609 podman[285966]: 2026-01-31 08:24:52.005256006 +0000 UTC m=+0.040399261 container remove 0644bd4151732c721cc61ecdc293b7fde01575c1653375c3a61cde6a8f158b56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.006 221554 DEBUG nova.network.os_vif_util [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0d:0a:0c,bridge_name='br-int',has_traffic_filtering=True,id=36f5a0a6-029b-4491-8d74-e44ca0e59f7d,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36f5a0a6-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.006 221554 DEBUG os_vif [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:0a:0c,bridge_name='br-int',has_traffic_filtering=True,id=36f5a0a6-029b-4491-8d74-e44ca0e59f7d,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36f5a0a6-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.007 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.008 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36f5a0a6-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.010 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.010 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:52.010 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[544d0a56-5642-437c-994c-08df28b380bd]: (4, ('Sat Jan 31 08:24:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 (0644bd4151732c721cc61ecdc293b7fde01575c1653375c3a61cde6a8f158b56)\n0644bd4151732c721cc61ecdc293b7fde01575c1653375c3a61cde6a8f158b56\nSat Jan 31 08:24:51 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 (0644bd4151732c721cc61ecdc293b7fde01575c1653375c3a61cde6a8f158b56)\n0644bd4151732c721cc61ecdc293b7fde01575c1653375c3a61cde6a8f158b56\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:52.012 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ba44b083-172b-49ff-87a8-d2e725ee31c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:52.013 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap936cead9-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:52 np0005603609 kernel: tap936cead9-b0: left promiscuous mode
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.015 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.017 221554 INFO os_vif [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:0a:0c,bridge_name='br-int',has_traffic_filtering=True,id=36f5a0a6-029b-4491-8d74-e44ca0e59f7d,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36f5a0a6-02')#033[00m
Jan 31 03:24:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:52.023 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[88edb3bf-4a08-4c66-84d6-ef874def38f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.038 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:52.039 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[93487944-8a00-435d-82f9-673612b041dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:52.040 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[adbf2aad-97cd-41f7-8acf-8ed71df8001e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.051 221554 DEBUG oslo_concurrency.lockutils [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:52.054 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3bd2bca2-29fb-4b46-9867-0894d1e51a8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801718, 'reachable_time': 16128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286000, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:52.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:52.056 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:24:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:24:52.056 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[c61a153c-6efa-4c8a-85ff-7372bef4747d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.130 221554 INFO nova.virt.libvirt.driver [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Deleting instance files /var/lib/nova/instances/bebd0cd6-4043-4283-aee6-2b2d313ca46f_del#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.132 221554 INFO nova.virt.libvirt.driver [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Deletion of /var/lib/nova/instances/bebd0cd6-4043-4283-aee6-2b2d313ca46f_del complete#033[00m
Jan 31 03:24:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:24:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:52.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.256 221554 INFO nova.virt.libvirt.driver [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Deleting instance files /var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7_del#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.257 221554 INFO nova.virt.libvirt.driver [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Deletion of /var/lib/nova/instances/946cb648-0758-4617-bd3a-142804fd70f7_del complete#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.558 221554 DEBUG nova.compute.manager [req-bc0dd882-2987-45ef-a3f8-9c14bf239ac7 req-630298a2-6bbe-48b1-98cf-e1a4c8f44044 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received event network-vif-unplugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.559 221554 DEBUG oslo_concurrency.lockutils [req-bc0dd882-2987-45ef-a3f8-9c14bf239ac7 req-630298a2-6bbe-48b1-98cf-e1a4c8f44044 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "946cb648-0758-4617-bd3a-142804fd70f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.559 221554 DEBUG oslo_concurrency.lockutils [req-bc0dd882-2987-45ef-a3f8-9c14bf239ac7 req-630298a2-6bbe-48b1-98cf-e1a4c8f44044 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.559 221554 DEBUG oslo_concurrency.lockutils [req-bc0dd882-2987-45ef-a3f8-9c14bf239ac7 req-630298a2-6bbe-48b1-98cf-e1a4c8f44044 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.560 221554 DEBUG nova.compute.manager [req-bc0dd882-2987-45ef-a3f8-9c14bf239ac7 req-630298a2-6bbe-48b1-98cf-e1a4c8f44044 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] No waiting events found dispatching network-vif-unplugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.560 221554 DEBUG nova.compute.manager [req-bc0dd882-2987-45ef-a3f8-9c14bf239ac7 req-630298a2-6bbe-48b1-98cf-e1a4c8f44044 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received event network-vif-unplugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:24:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e355 e355: 3 total, 3 up, 3 in
Jan 31 03:24:52 np0005603609 systemd[1]: run-netns-ovnmeta\x2d936cead9\x2dbc2f\x2d4c2d\x2d8b4c\x2d6079d2159263.mount: Deactivated successfully.
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.708 221554 INFO nova.compute.manager [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Took 1.31 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.709 221554 DEBUG oslo.service.loopingcall [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.709 221554 DEBUG nova.compute.manager [-] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.709 221554 DEBUG nova.network.neutron [-] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.722 221554 INFO nova.scheduler.client.report [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Deleted allocation for migration 28009c64-41b3-4fe9-854a-e346c8d0b39b#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.727 221554 DEBUG nova.compute.manager [req-5a526360-90c9-4a79-b5d7-89eda613b43e req-6f760262-ee7a-4a9b-bb22-04cf3ee25184 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Received event network-vif-unplugged-23692aac-7e72-4966-922e-d30b3648f957 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.727 221554 DEBUG oslo_concurrency.lockutils [req-5a526360-90c9-4a79-b5d7-89eda613b43e req-6f760262-ee7a-4a9b-bb22-04cf3ee25184 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.727 221554 DEBUG oslo_concurrency.lockutils [req-5a526360-90c9-4a79-b5d7-89eda613b43e req-6f760262-ee7a-4a9b-bb22-04cf3ee25184 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.728 221554 DEBUG oslo_concurrency.lockutils [req-5a526360-90c9-4a79-b5d7-89eda613b43e req-6f760262-ee7a-4a9b-bb22-04cf3ee25184 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.728 221554 DEBUG nova.compute.manager [req-5a526360-90c9-4a79-b5d7-89eda613b43e req-6f760262-ee7a-4a9b-bb22-04cf3ee25184 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] No waiting events found dispatching network-vif-unplugged-23692aac-7e72-4966-922e-d30b3648f957 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.728 221554 DEBUG nova.compute.manager [req-5a526360-90c9-4a79-b5d7-89eda613b43e req-6f760262-ee7a-4a9b-bb22-04cf3ee25184 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Received event network-vif-unplugged-23692aac-7e72-4966-922e-d30b3648f957 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.737 221554 INFO nova.compute.manager [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.738 221554 DEBUG oslo.service.loopingcall [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.738 221554 DEBUG nova.compute.manager [-] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.738 221554 DEBUG nova.network.neutron [-] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:24:52 np0005603609 nova_compute[221550]: 2026-01-31 08:24:52.923 221554 DEBUG oslo_concurrency.lockutils [None req-e7bea908-2641-4d2e-99c0-4c280e1b916a 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "90aa4e13-650f-43f2-8ebe-19a34e0cc605" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 5.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:24:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:54.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:24:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:54.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.181 221554 DEBUG nova.network.neutron [-] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.232 221554 INFO nova.compute.manager [-] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Took 2.49 seconds to deallocate network for instance.#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.314 221554 DEBUG nova.network.neutron [-] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.371 221554 INFO nova.compute.manager [-] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Took 2.66 seconds to deallocate network for instance.#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.405 221554 DEBUG nova.compute.manager [req-82acc409-b685-486a-9c91-50e644276008 req-a12a83f5-57ea-4da5-b7d3-011c277c4da8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Received event network-vif-plugged-23692aac-7e72-4966-922e-d30b3648f957 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.406 221554 DEBUG oslo_concurrency.lockutils [req-82acc409-b685-486a-9c91-50e644276008 req-a12a83f5-57ea-4da5-b7d3-011c277c4da8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.406 221554 DEBUG oslo_concurrency.lockutils [req-82acc409-b685-486a-9c91-50e644276008 req-a12a83f5-57ea-4da5-b7d3-011c277c4da8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.406 221554 DEBUG oslo_concurrency.lockutils [req-82acc409-b685-486a-9c91-50e644276008 req-a12a83f5-57ea-4da5-b7d3-011c277c4da8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.406 221554 DEBUG nova.compute.manager [req-82acc409-b685-486a-9c91-50e644276008 req-a12a83f5-57ea-4da5-b7d3-011c277c4da8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] No waiting events found dispatching network-vif-plugged-23692aac-7e72-4966-922e-d30b3648f957 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.406 221554 WARNING nova.compute.manager [req-82acc409-b685-486a-9c91-50e644276008 req-a12a83f5-57ea-4da5-b7d3-011c277c4da8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Received unexpected event network-vif-plugged-23692aac-7e72-4966-922e-d30b3648f957 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.425 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.457 221554 DEBUG oslo_concurrency.lockutils [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.457 221554 DEBUG oslo_concurrency.lockutils [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.471 221554 DEBUG nova.compute.manager [req-69bb5fe1-f35e-49e7-a973-f9591fdf591e req-1768548f-40e0-482c-b886-afac61695822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.472 221554 DEBUG oslo_concurrency.lockutils [req-69bb5fe1-f35e-49e7-a973-f9591fdf591e req-1768548f-40e0-482c-b886-afac61695822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "946cb648-0758-4617-bd3a-142804fd70f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.472 221554 DEBUG oslo_concurrency.lockutils [req-69bb5fe1-f35e-49e7-a973-f9591fdf591e req-1768548f-40e0-482c-b886-afac61695822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.472 221554 DEBUG oslo_concurrency.lockutils [req-69bb5fe1-f35e-49e7-a973-f9591fdf591e req-1768548f-40e0-482c-b886-afac61695822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.472 221554 DEBUG nova.compute.manager [req-69bb5fe1-f35e-49e7-a973-f9591fdf591e req-1768548f-40e0-482c-b886-afac61695822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] No waiting events found dispatching network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.472 221554 WARNING nova.compute.manager [req-69bb5fe1-f35e-49e7-a973-f9591fdf591e req-1768548f-40e0-482c-b886-afac61695822 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received unexpected event network-vif-plugged-36f5a0a6-029b-4491-8d74-e44ca0e59f7d for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.502 221554 DEBUG nova.compute.manager [req-d2615537-8249-49c0-b575-116eb21254d0 req-7a4bd357-acc7-44e4-a4fe-00c6ab98358c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Received event network-vif-deleted-36f5a0a6-029b-4491-8d74-e44ca0e59f7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.602 221554 INFO nova.compute.manager [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Took 0.37 seconds to detach 1 volumes for instance.#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.607 221554 DEBUG oslo_concurrency.processutils [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:55 np0005603609 nova_compute[221550]: 2026-01-31 08:24:55.663 221554 DEBUG oslo_concurrency.lockutils [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:24:56 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/88557555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:24:56 np0005603609 nova_compute[221550]: 2026-01-31 08:24:56.018 221554 DEBUG oslo_concurrency.processutils [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:56 np0005603609 nova_compute[221550]: 2026-01-31 08:24:56.022 221554 DEBUG nova.compute.provider_tree [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:24:56 np0005603609 nova_compute[221550]: 2026-01-31 08:24:56.042 221554 DEBUG nova.scheduler.client.report [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:24:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:56.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:56 np0005603609 nova_compute[221550]: 2026-01-31 08:24:56.093 221554 DEBUG oslo_concurrency.lockutils [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:56 np0005603609 nova_compute[221550]: 2026-01-31 08:24:56.097 221554 DEBUG oslo_concurrency.lockutils [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:56 np0005603609 nova_compute[221550]: 2026-01-31 08:24:56.136 221554 INFO nova.scheduler.client.report [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Deleted allocations for instance bebd0cd6-4043-4283-aee6-2b2d313ca46f#033[00m
Jan 31 03:24:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:56.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:56 np0005603609 nova_compute[221550]: 2026-01-31 08:24:56.225 221554 DEBUG oslo_concurrency.processutils [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:56 np0005603609 nova_compute[221550]: 2026-01-31 08:24:56.289 221554 DEBUG oslo_concurrency.lockutils [None req-7478696e-17e6-4c67-a39f-01da53eb1361 3ada90dc4b77478cb4b93c63409d8537 fdf18f1faf4846e2a6e2eab4ac2aec02 - - default default] Lock "bebd0cd6-4043-4283-aee6-2b2d313ca46f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:56 np0005603609 nova_compute[221550]: 2026-01-31 08:24:56.683 221554 DEBUG oslo_concurrency.processutils [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:56 np0005603609 nova_compute[221550]: 2026-01-31 08:24:56.689 221554 DEBUG nova.compute.provider_tree [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:24:56 np0005603609 nova_compute[221550]: 2026-01-31 08:24:56.719 221554 DEBUG nova.scheduler.client.report [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:24:56 np0005603609 nova_compute[221550]: 2026-01-31 08:24:56.747 221554 DEBUG oslo_concurrency.lockutils [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:56 np0005603609 nova_compute[221550]: 2026-01-31 08:24:56.784 221554 INFO nova.scheduler.client.report [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Deleted allocations for instance 946cb648-0758-4617-bd3a-142804fd70f7#033[00m
Jan 31 03:24:56 np0005603609 nova_compute[221550]: 2026-01-31 08:24:56.882 221554 DEBUG oslo_concurrency.lockutils [None req-23162a44-f593-42d7-a57a-4b9ce6343068 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "946cb648-0758-4617-bd3a-142804fd70f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:57 np0005603609 nova_compute[221550]: 2026-01-31 08:24:57.009 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:57 np0005603609 nova_compute[221550]: 2026-01-31 08:24:57.626 221554 DEBUG nova.compute.manager [req-bc3c792f-6e13-4c56-b346-b622fd9720fc req-9e578cf0-28af-4958-bc41-d2c3a4a77127 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Received event network-vif-deleted-23692aac-7e72-4966-922e-d30b3648f957 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:24:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e356 e356: 3 total, 3 up, 3 in
Jan 31 03:24:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:58.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:24:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:24:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:58.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:24:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e357 e357: 3 total, 3 up, 3 in
Jan 31 03:25:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:00.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:00.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:00 np0005603609 nova_compute[221550]: 2026-01-31 08:25:00.427 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:00 np0005603609 nova_compute[221550]: 2026-01-31 08:25:00.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:25:01 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1055653894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:25:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:25:01 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1055653894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:25:02 np0005603609 nova_compute[221550]: 2026-01-31 08:25:02.011 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:02.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:02.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:02 np0005603609 podman[286049]: 2026-01-31 08:25:02.193611858 +0000 UTC m=+0.078951238 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:25:02 np0005603609 podman[286050]: 2026-01-31 08:25:02.204893159 +0000 UTC m=+0.084258125 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 31 03:25:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:25:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:04.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:25:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:04.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:05 np0005603609 nova_compute[221550]: 2026-01-31 08:25:05.429 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:06.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:06.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:06 np0005603609 nova_compute[221550]: 2026-01-31 08:25:06.627 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847891.6232293, bebd0cd6-4043-4283-aee6-2b2d313ca46f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:25:06 np0005603609 nova_compute[221550]: 2026-01-31 08:25:06.627 221554 INFO nova.compute.manager [-] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:25:06 np0005603609 nova_compute[221550]: 2026-01-31 08:25:06.967 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847891.9644675, 946cb648-0758-4617-bd3a-142804fd70f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:25:06 np0005603609 nova_compute[221550]: 2026-01-31 08:25:06.968 221554 INFO nova.compute.manager [-] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:25:07 np0005603609 nova_compute[221550]: 2026-01-31 08:25:07.014 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:07 np0005603609 nova_compute[221550]: 2026-01-31 08:25:07.253 221554 DEBUG nova.compute.manager [None req-e4db2009-65e4-480e-b50d-4bb240ddeb69 - - - - - -] [instance: 946cb648-0758-4617-bd3a-142804fd70f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:25:07 np0005603609 nova_compute[221550]: 2026-01-31 08:25:07.256 221554 DEBUG nova.compute.manager [None req-4d234ed4-093f-4a5d-a9d2-09732f9ba2fe - - - - - -] [instance: bebd0cd6-4043-4283-aee6-2b2d313ca46f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:25:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:07.517 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:07.518 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:07.518 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 e358: 3 total, 3 up, 3 in
Jan 31 03:25:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:08.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:25:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:08.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:25:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:25:08 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2344999837' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:25:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:25:08 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2344999837' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:25:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:10.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:10.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:10 np0005603609 nova_compute[221550]: 2026-01-31 08:25:10.430 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:10 np0005603609 nova_compute[221550]: 2026-01-31 08:25:10.832 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:10 np0005603609 nova_compute[221550]: 2026-01-31 08:25:10.832 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:10 np0005603609 nova_compute[221550]: 2026-01-31 08:25:10.979 221554 DEBUG nova.compute.manager [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:25:11 np0005603609 nova_compute[221550]: 2026-01-31 08:25:11.103 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:11 np0005603609 nova_compute[221550]: 2026-01-31 08:25:11.103 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:11 np0005603609 nova_compute[221550]: 2026-01-31 08:25:11.110 221554 DEBUG nova.virt.hardware [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:25:11 np0005603609 nova_compute[221550]: 2026-01-31 08:25:11.110 221554 INFO nova.compute.claims [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:25:11 np0005603609 nova_compute[221550]: 2026-01-31 08:25:11.303 221554 DEBUG oslo_concurrency.processutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:25:11 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1401734021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:25:11 np0005603609 nova_compute[221550]: 2026-01-31 08:25:11.750 221554 DEBUG oslo_concurrency.processutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:11 np0005603609 nova_compute[221550]: 2026-01-31 08:25:11.754 221554 DEBUG nova.compute.provider_tree [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:25:11 np0005603609 nova_compute[221550]: 2026-01-31 08:25:11.848 221554 DEBUG nova.scheduler.client.report [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:25:12 np0005603609 nova_compute[221550]: 2026-01-31 08:25:12.016 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:25:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:12.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:25:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:12.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:12 np0005603609 nova_compute[221550]: 2026-01-31 08:25:12.267 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:12 np0005603609 nova_compute[221550]: 2026-01-31 08:25:12.268 221554 DEBUG nova.compute.manager [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:25:12 np0005603609 nova_compute[221550]: 2026-01-31 08:25:12.532 221554 DEBUG nova.compute.manager [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:25:12 np0005603609 nova_compute[221550]: 2026-01-31 08:25:12.533 221554 DEBUG nova.network.neutron [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:25:12 np0005603609 nova_compute[221550]: 2026-01-31 08:25:12.627 221554 INFO nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:25:12 np0005603609 nova_compute[221550]: 2026-01-31 08:25:12.656 221554 DEBUG nova.compute.manager [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:25:12 np0005603609 nova_compute[221550]: 2026-01-31 08:25:12.877 221554 DEBUG nova.compute.manager [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:25:12 np0005603609 nova_compute[221550]: 2026-01-31 08:25:12.878 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:25:12 np0005603609 nova_compute[221550]: 2026-01-31 08:25:12.879 221554 INFO nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Creating image(s)#033[00m
Jan 31 03:25:12 np0005603609 nova_compute[221550]: 2026-01-31 08:25:12.960 221554 DEBUG nova.storage.rbd_utils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 8bf29da0-2e1a-4e89-bce8-b293a938c742_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:25:13 np0005603609 nova_compute[221550]: 2026-01-31 08:25:13.085 221554 DEBUG nova.storage.rbd_utils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 8bf29da0-2e1a-4e89-bce8-b293a938c742_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:25:13 np0005603609 nova_compute[221550]: 2026-01-31 08:25:13.125 221554 DEBUG nova.storage.rbd_utils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 8bf29da0-2e1a-4e89-bce8-b293a938c742_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:25:13 np0005603609 nova_compute[221550]: 2026-01-31 08:25:13.130 221554 DEBUG oslo_concurrency.processutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:13 np0005603609 nova_compute[221550]: 2026-01-31 08:25:13.160 221554 DEBUG nova.policy [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '85dfa8546d9942648bb4197c8b1947e3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48bbdbdee526499e90da7e971ede68d3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:25:13 np0005603609 nova_compute[221550]: 2026-01-31 08:25:13.208 221554 DEBUG oslo_concurrency.processutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:13 np0005603609 nova_compute[221550]: 2026-01-31 08:25:13.209 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:13 np0005603609 nova_compute[221550]: 2026-01-31 08:25:13.209 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:13 np0005603609 nova_compute[221550]: 2026-01-31 08:25:13.210 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:13 np0005603609 nova_compute[221550]: 2026-01-31 08:25:13.242 221554 DEBUG nova.storage.rbd_utils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 8bf29da0-2e1a-4e89-bce8-b293a938c742_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:25:13 np0005603609 nova_compute[221550]: 2026-01-31 08:25:13.247 221554 DEBUG oslo_concurrency.processutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 8bf29da0-2e1a-4e89-bce8-b293a938c742_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:13 np0005603609 nova_compute[221550]: 2026-01-31 08:25:13.693 221554 DEBUG oslo_concurrency.processutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 8bf29da0-2e1a-4e89-bce8-b293a938c742_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:13 np0005603609 nova_compute[221550]: 2026-01-31 08:25:13.748 221554 DEBUG nova.storage.rbd_utils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] resizing rbd image 8bf29da0-2e1a-4e89-bce8-b293a938c742_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:25:13 np0005603609 nova_compute[221550]: 2026-01-31 08:25:13.936 221554 DEBUG nova.objects.instance [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'migration_context' on Instance uuid 8bf29da0-2e1a-4e89-bce8-b293a938c742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:25:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:25:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:14.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:25:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:14.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:14 np0005603609 nova_compute[221550]: 2026-01-31 08:25:14.553 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:25:14 np0005603609 nova_compute[221550]: 2026-01-31 08:25:14.553 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Ensure instance console log exists: /var/lib/nova/instances/8bf29da0-2e1a-4e89-bce8-b293a938c742/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:25:14 np0005603609 nova_compute[221550]: 2026-01-31 08:25:14.554 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:14 np0005603609 nova_compute[221550]: 2026-01-31 08:25:14.555 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:14 np0005603609 nova_compute[221550]: 2026-01-31 08:25:14.555 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:15 np0005603609 nova_compute[221550]: 2026-01-31 08:25:15.433 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:16.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:25:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:16.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:25:16 np0005603609 nova_compute[221550]: 2026-01-31 08:25:16.714 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:16 np0005603609 nova_compute[221550]: 2026-01-31 08:25:16.733 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:17 np0005603609 nova_compute[221550]: 2026-01-31 08:25:17.018 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:17 np0005603609 nova_compute[221550]: 2026-01-31 08:25:17.613 221554 DEBUG nova.network.neutron [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Successfully created port: 2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:25:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:25:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:18.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:25:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:18.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:20 np0005603609 nova_compute[221550]: 2026-01-31 08:25:20.072 221554 DEBUG nova.network.neutron [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Successfully updated port: 2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:25:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:25:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:20.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:25:20 np0005603609 nova_compute[221550]: 2026-01-31 08:25:20.126 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:25:20 np0005603609 nova_compute[221550]: 2026-01-31 08:25:20.126 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquired lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:25:20 np0005603609 nova_compute[221550]: 2026-01-31 08:25:20.127 221554 DEBUG nova.network.neutron [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:25:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:25:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:20.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:25:20 np0005603609 nova_compute[221550]: 2026-01-31 08:25:20.437 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:22 np0005603609 nova_compute[221550]: 2026-01-31 08:25:22.020 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:25:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:22.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:25:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:25:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:22.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:25:23 np0005603609 nova_compute[221550]: 2026-01-31 08:25:23.313 221554 DEBUG nova.compute.manager [req-af1009f4-e97b-41f5-98f9-a36b80555b3c req-9104320c-d68d-45e2-89c2-4f96decc24ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Received event network-changed-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:25:23 np0005603609 nova_compute[221550]: 2026-01-31 08:25:23.313 221554 DEBUG nova.compute.manager [req-af1009f4-e97b-41f5-98f9-a36b80555b3c req-9104320c-d68d-45e2-89c2-4f96decc24ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Refreshing instance network info cache due to event network-changed-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:25:23 np0005603609 nova_compute[221550]: 2026-01-31 08:25:23.313 221554 DEBUG oslo_concurrency.lockutils [req-af1009f4-e97b-41f5-98f9-a36b80555b3c req-9104320c-d68d-45e2-89c2-4f96decc24ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:25:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:23 np0005603609 nova_compute[221550]: 2026-01-31 08:25:23.403 221554 DEBUG nova.network.neutron [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:25:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:25:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:25:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:25:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:25:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:25:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:24.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:24.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:25 np0005603609 nova_compute[221550]: 2026-01-31 08:25:25.388 221554 DEBUG nova.network.neutron [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updating instance_info_cache with network_info: [{"id": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "address": "fa:16:3e:85:fa:8d", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b86ae61-4c", "ovs_interfaceid": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:25:25 np0005603609 nova_compute[221550]: 2026-01-31 08:25:25.438 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:25:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:26.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:25:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:26.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.293 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Releasing lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.294 221554 DEBUG nova.compute.manager [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Instance network_info: |[{"id": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "address": "fa:16:3e:85:fa:8d", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b86ae61-4c", "ovs_interfaceid": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.294 221554 DEBUG oslo_concurrency.lockutils [req-af1009f4-e97b-41f5-98f9-a36b80555b3c req-9104320c-d68d-45e2-89c2-4f96decc24ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.295 221554 DEBUG nova.network.neutron [req-af1009f4-e97b-41f5-98f9-a36b80555b3c req-9104320c-d68d-45e2-89c2-4f96decc24ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Refreshing network info cache for port 2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.298 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Start _get_guest_xml network_info=[{"id": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "address": "fa:16:3e:85:fa:8d", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b86ae61-4c", "ovs_interfaceid": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.303 221554 WARNING nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.309 221554 DEBUG nova.virt.libvirt.host [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.309 221554 DEBUG nova.virt.libvirt.host [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.313 221554 DEBUG nova.virt.libvirt.host [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.313 221554 DEBUG nova.virt.libvirt.host [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.314 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.315 221554 DEBUG nova.virt.hardware [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.315 221554 DEBUG nova.virt.hardware [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.315 221554 DEBUG nova.virt.hardware [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.316 221554 DEBUG nova.virt.hardware [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.316 221554 DEBUG nova.virt.hardware [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.316 221554 DEBUG nova.virt.hardware [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.317 221554 DEBUG nova.virt.hardware [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.317 221554 DEBUG nova.virt.hardware [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.317 221554 DEBUG nova.virt.hardware [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.317 221554 DEBUG nova.virt.hardware [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.318 221554 DEBUG nova.virt.hardware [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.321 221554 DEBUG oslo_concurrency.processutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:25:26 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/18559313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.791 221554 DEBUG oslo_concurrency.processutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.833 221554 DEBUG nova.storage.rbd_utils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 8bf29da0-2e1a-4e89-bce8-b293a938c742_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:25:26 np0005603609 nova_compute[221550]: 2026-01-31 08:25:26.838 221554 DEBUG oslo_concurrency.processutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.023 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:25:27 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4210798319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.295 221554 DEBUG oslo_concurrency.processutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.297 221554 DEBUG nova.virt.libvirt.vif [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:25:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-0',id=157,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-sg9y73tv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',netw
ork_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:25:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=8bf29da0-2e1a-4e89-bce8-b293a938c742,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "address": "fa:16:3e:85:fa:8d", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b86ae61-4c", "ovs_interfaceid": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.297 221554 DEBUG nova.network.os_vif_util [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "address": "fa:16:3e:85:fa:8d", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b86ae61-4c", "ovs_interfaceid": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.298 221554 DEBUG nova.network.os_vif_util [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:fa:8d,bridge_name='br-int',has_traffic_filtering=True,id=2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b86ae61-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.300 221554 DEBUG nova.objects.instance [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8bf29da0-2e1a-4e89-bce8-b293a938c742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.557 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  <uuid>8bf29da0-2e1a-4e89-bce8-b293a938c742</uuid>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  <name>instance-0000009d</name>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <nova:name>multiattach-server-0</nova:name>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:25:26</nova:creationTime>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        <nova:user uuid="85dfa8546d9942648bb4197c8b1947e3">tempest-AttachVolumeMultiAttachTest-2017021026-project-member</nova:user>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        <nova:project uuid="48bbdbdee526499e90da7e971ede68d3">tempest-AttachVolumeMultiAttachTest-2017021026</nova:project>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        <nova:port uuid="2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <entry name="serial">8bf29da0-2e1a-4e89-bce8-b293a938c742</entry>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <entry name="uuid">8bf29da0-2e1a-4e89-bce8-b293a938c742</entry>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/8bf29da0-2e1a-4e89-bce8-b293a938c742_disk">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/8bf29da0-2e1a-4e89-bce8-b293a938c742_disk.config">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:85:fa:8d"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <target dev="tap2b86ae61-4c"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/8bf29da0-2e1a-4e89-bce8-b293a938c742/console.log" append="off"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:25:27 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:25:27 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:25:27 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:25:27 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.559 221554 DEBUG nova.compute.manager [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Preparing to wait for external event network-vif-plugged-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.560 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.561 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.561 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.563 221554 DEBUG nova.virt.libvirt.vif [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:25:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-0',id=157,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-sg9y73tv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ra
m='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:25:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=8bf29da0-2e1a-4e89-bce8-b293a938c742,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "address": "fa:16:3e:85:fa:8d", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b86ae61-4c", "ovs_interfaceid": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.563 221554 DEBUG nova.network.os_vif_util [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "address": "fa:16:3e:85:fa:8d", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b86ae61-4c", "ovs_interfaceid": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.565 221554 DEBUG nova.network.os_vif_util [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:fa:8d,bridge_name='br-int',has_traffic_filtering=True,id=2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b86ae61-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.565 221554 DEBUG os_vif [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:fa:8d,bridge_name='br-int',has_traffic_filtering=True,id=2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b86ae61-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.566 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.567 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.568 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.573 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.573 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b86ae61-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.574 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b86ae61-4c, col_values=(('external_ids', {'iface-id': '2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:fa:8d', 'vm-uuid': '8bf29da0-2e1a-4e89-bce8-b293a938c742'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.577 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:27 np0005603609 NetworkManager[49064]: <info>  [1769847927.5784] manager: (tap2b86ae61-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.581 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.583 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.583 221554 INFO os_vif [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:fa:8d,bridge_name='br-int',has_traffic_filtering=True,id=2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b86ae61-4c')#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.947 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.948 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.948 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No VIF found with MAC fa:16:3e:85:fa:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.949 221554 INFO nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Using config drive#033[00m
Jan 31 03:25:27 np0005603609 nova_compute[221550]: 2026-01-31 08:25:27.989 221554 DEBUG nova.storage.rbd_utils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 8bf29da0-2e1a-4e89-bce8-b293a938c742_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:25:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:25:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:28.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:25:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:28.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:29 np0005603609 nova_compute[221550]: 2026-01-31 08:25:29.712 221554 INFO nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Creating config drive at /var/lib/nova/instances/8bf29da0-2e1a-4e89-bce8-b293a938c742/disk.config#033[00m
Jan 31 03:25:29 np0005603609 nova_compute[221550]: 2026-01-31 08:25:29.723 221554 DEBUG oslo_concurrency.processutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8bf29da0-2e1a-4e89-bce8-b293a938c742/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpe3fbs8wf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:29 np0005603609 nova_compute[221550]: 2026-01-31 08:25:29.866 221554 DEBUG oslo_concurrency.processutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8bf29da0-2e1a-4e89-bce8-b293a938c742/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpe3fbs8wf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:29 np0005603609 nova_compute[221550]: 2026-01-31 08:25:29.912 221554 DEBUG nova.storage.rbd_utils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 8bf29da0-2e1a-4e89-bce8-b293a938c742_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:25:29 np0005603609 nova_compute[221550]: 2026-01-31 08:25:29.916 221554 DEBUG oslo_concurrency.processutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8bf29da0-2e1a-4e89-bce8-b293a938c742/disk.config 8bf29da0-2e1a-4e89-bce8-b293a938c742_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:25:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:30.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:25:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:25:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:30.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:25:30 np0005603609 nova_compute[221550]: 2026-01-31 08:25:30.440 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:30 np0005603609 nova_compute[221550]: 2026-01-31 08:25:30.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:25:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:25:30 np0005603609 nova_compute[221550]: 2026-01-31 08:25:30.851 221554 DEBUG oslo_concurrency.processutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8bf29da0-2e1a-4e89-bce8-b293a938c742/disk.config 8bf29da0-2e1a-4e89-bce8-b293a938c742_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.935s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:30 np0005603609 nova_compute[221550]: 2026-01-31 08:25:30.852 221554 INFO nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Deleting local config drive /var/lib/nova/instances/8bf29da0-2e1a-4e89-bce8-b293a938c742/disk.config because it was imported into RBD.#033[00m
Jan 31 03:25:30 np0005603609 kernel: tap2b86ae61-4c: entered promiscuous mode
Jan 31 03:25:30 np0005603609 NetworkManager[49064]: <info>  [1769847930.9010] manager: (tap2b86ae61-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/328)
Jan 31 03:25:30 np0005603609 ovn_controller[130359]: 2026-01-31T08:25:30Z|00710|binding|INFO|Claiming lport 2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 for this chassis.
Jan 31 03:25:30 np0005603609 nova_compute[221550]: 2026-01-31 08:25:30.900 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:30 np0005603609 ovn_controller[130359]: 2026-01-31T08:25:30Z|00711|binding|INFO|2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3: Claiming fa:16:3e:85:fa:8d 10.100.0.6
Jan 31 03:25:30 np0005603609 nova_compute[221550]: 2026-01-31 08:25:30.909 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:30 np0005603609 systemd-machined[190912]: New machine qemu-84-instance-0000009d.
Jan 31 03:25:30 np0005603609 ovn_controller[130359]: 2026-01-31T08:25:30Z|00712|binding|INFO|Setting lport 2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 ovn-installed in OVS
Jan 31 03:25:30 np0005603609 systemd[1]: Started Virtual Machine qemu-84-instance-0000009d.
Jan 31 03:25:30 np0005603609 nova_compute[221550]: 2026-01-31 08:25:30.943 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:30 np0005603609 systemd-udevd[286601]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:25:30 np0005603609 NetworkManager[49064]: <info>  [1769847930.9717] device (tap2b86ae61-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:25:30 np0005603609 NetworkManager[49064]: <info>  [1769847930.9726] device (tap2b86ae61-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:25:31 np0005603609 ovn_controller[130359]: 2026-01-31T08:25:31Z|00713|binding|INFO|Setting lport 2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 up in Southbound
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.078 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:fa:8d 10.100.0.6'], port_security=['fa:16:3e:85:fa:8d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8bf29da0-2e1a-4e89-bce8-b293a938c742', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbdbdee526499e90da7e971ede68d3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8d5863f6-4aa0-486a-96ed-eb36f7d4a61d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379aaecc-3dde-4f00-82cf-dc8bd8367d4b, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.080 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 in datapath 26ad6a8f-33d5-432e-83d3-63a9d2f165ad bound to our chassis#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.082 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26ad6a8f-33d5-432e-83d3-63a9d2f165ad#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.090 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4b00f9b5-e203-43b7-9682-3a31887c8262]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.091 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26ad6a8f-31 in ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.094 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26ad6a8f-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.094 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6b46b682-5b67-49da-9872-22113188267c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.096 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7da802df-b6e9-4120-a0cf-99d789c8bbff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.105 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[01e6365f-974f-40f2-94ee-08cfb7073ee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.118 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[915ab342-2fed-4fba-a698-ac7cb962e0d9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.148 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[0c4f294a-9246-4386-bd1c-a119150c45fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.153 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[efc640b4-cfef-4cdd-9e2a-46472a026601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:31 np0005603609 NetworkManager[49064]: <info>  [1769847931.1551] manager: (tap26ad6a8f-30): new Veth device (/org/freedesktop/NetworkManager/Devices/329)
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.177 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[24a1b713-bc60-4731-a04d-ef9a3e5922b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.181 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f2254d43-2419-400c-a9e6-ced569ed316a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:31 np0005603609 NetworkManager[49064]: <info>  [1769847931.1999] device (tap26ad6a8f-30): carrier: link connected
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.204 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[6d561767-4fe0-4f28-9c29-ac0f6f45a13d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.214 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[996f7817-572e-4ebd-aef4-a00dbe2603f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26ad6a8f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:60:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817090, 'reachable_time': 21863, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286634, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.223 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7f926f9d-f210-4dd3-ae11-48a678c01088]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:605d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817090, 'tstamp': 817090}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286635, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.234 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f26451f5-9bc4-4c08-98e2-15ad5f5b21bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26ad6a8f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:60:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817090, 'reachable_time': 21863, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286636, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.262 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c26c2f48-9eae-4257-b3ee-873d96828fbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.306 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d36eec70-7685-4cc9-8088-c9bafba01f4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.308 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26ad6a8f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.308 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.309 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26ad6a8f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:25:31 np0005603609 nova_compute[221550]: 2026-01-31 08:25:31.372 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:31 np0005603609 NetworkManager[49064]: <info>  [1769847931.3733] manager: (tap26ad6a8f-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Jan 31 03:25:31 np0005603609 kernel: tap26ad6a8f-30: entered promiscuous mode
Jan 31 03:25:31 np0005603609 nova_compute[221550]: 2026-01-31 08:25:31.375 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.376 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26ad6a8f-30, col_values=(('external_ids', {'iface-id': '0b9d56f1-a803-44f1-b709-3bfbc71e0f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:25:31 np0005603609 nova_compute[221550]: 2026-01-31 08:25:31.378 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:31 np0005603609 ovn_controller[130359]: 2026-01-31T08:25:31Z|00714|binding|INFO|Releasing lport 0b9d56f1-a803-44f1-b709-3bfbc71e0f57 from this chassis (sb_readonly=1)
Jan 31 03:25:31 np0005603609 nova_compute[221550]: 2026-01-31 08:25:31.378 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.381 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26ad6a8f-33d5-432e-83d3-63a9d2f165ad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26ad6a8f-33d5-432e-83d3-63a9d2f165ad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.382 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf7f528-fd01-4137-8b6a-9571d08a4e12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:31 np0005603609 nova_compute[221550]: 2026-01-31 08:25:31.383 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.383 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/26ad6a8f-33d5-432e-83d3-63a9d2f165ad.pid.haproxy
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:25:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:31.386 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'env', 'PROCESS_TAG=haproxy-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26ad6a8f-33d5-432e-83d3-63a9d2f165ad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:25:31 np0005603609 nova_compute[221550]: 2026-01-31 08:25:31.623 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847931.621848, 8bf29da0-2e1a-4e89-bce8-b293a938c742 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:25:31 np0005603609 nova_compute[221550]: 2026-01-31 08:25:31.623 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] VM Started (Lifecycle Event)#033[00m
Jan 31 03:25:31 np0005603609 nova_compute[221550]: 2026-01-31 08:25:31.634 221554 DEBUG nova.network.neutron [req-af1009f4-e97b-41f5-98f9-a36b80555b3c req-9104320c-d68d-45e2-89c2-4f96decc24ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updated VIF entry in instance network info cache for port 2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:25:31 np0005603609 nova_compute[221550]: 2026-01-31 08:25:31.635 221554 DEBUG nova.network.neutron [req-af1009f4-e97b-41f5-98f9-a36b80555b3c req-9104320c-d68d-45e2-89c2-4f96decc24ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updating instance_info_cache with network_info: [{"id": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "address": "fa:16:3e:85:fa:8d", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b86ae61-4c", "ovs_interfaceid": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:25:31 np0005603609 nova_compute[221550]: 2026-01-31 08:25:31.737 221554 DEBUG oslo_concurrency.lockutils [req-af1009f4-e97b-41f5-98f9-a36b80555b3c req-9104320c-d68d-45e2-89c2-4f96decc24ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:25:31 np0005603609 nova_compute[221550]: 2026-01-31 08:25:31.744 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:25:31 np0005603609 nova_compute[221550]: 2026-01-31 08:25:31.749 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847931.622223, 8bf29da0-2e1a-4e89-bce8-b293a938c742 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:25:31 np0005603609 nova_compute[221550]: 2026-01-31 08:25:31.749 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:25:31 np0005603609 nova_compute[221550]: 2026-01-31 08:25:31.802 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:25:31 np0005603609 nova_compute[221550]: 2026-01-31 08:25:31.805 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:25:31 np0005603609 podman[286710]: 2026-01-31 08:25:31.712293241 +0000 UTC m=+0.032568293 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:25:31 np0005603609 nova_compute[221550]: 2026-01-31 08:25:31.876 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:25:32 np0005603609 podman[286710]: 2026-01-31 08:25:32.05915996 +0000 UTC m=+0.379434902 container create 8259dcf8c146c34f6db6bb389dcbbb5c7a50ba1f643290a1c3e7d2e0ea089d37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:25:32 np0005603609 systemd[1]: Started libpod-conmon-8259dcf8c146c34f6db6bb389dcbbb5c7a50ba1f643290a1c3e7d2e0ea089d37.scope.
Jan 31 03:25:32 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:25:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:32.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:32 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1302cd6881f7a463c3ea1971cf11568d54399ff56afcd5de2f77193f073bd78/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:32 np0005603609 podman[286710]: 2026-01-31 08:25:32.192143634 +0000 UTC m=+0.512418606 container init 8259dcf8c146c34f6db6bb389dcbbb5c7a50ba1f643290a1c3e7d2e0ea089d37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:25:32 np0005603609 podman[286710]: 2026-01-31 08:25:32.198124437 +0000 UTC m=+0.518399389 container start 8259dcf8c146c34f6db6bb389dcbbb5c7a50ba1f643290a1c3e7d2e0ea089d37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 03:25:32 np0005603609 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[286725]: [NOTICE]   (286730) : New worker (286732) forked
Jan 31 03:25:32 np0005603609 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[286725]: [NOTICE]   (286730) : Loading success.
Jan 31 03:25:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:32.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.246 221554 DEBUG nova.compute.manager [req-f54a041e-5881-4be1-a665-9887abd091be req-63822e6b-6737-4c3e-8a12-222ed70b44f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Received event network-vif-plugged-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.247 221554 DEBUG oslo_concurrency.lockutils [req-f54a041e-5881-4be1-a665-9887abd091be req-63822e6b-6737-4c3e-8a12-222ed70b44f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.247 221554 DEBUG oslo_concurrency.lockutils [req-f54a041e-5881-4be1-a665-9887abd091be req-63822e6b-6737-4c3e-8a12-222ed70b44f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.247 221554 DEBUG oslo_concurrency.lockutils [req-f54a041e-5881-4be1-a665-9887abd091be req-63822e6b-6737-4c3e-8a12-222ed70b44f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.248 221554 DEBUG nova.compute.manager [req-f54a041e-5881-4be1-a665-9887abd091be req-63822e6b-6737-4c3e-8a12-222ed70b44f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Processing event network-vif-plugged-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.248 221554 DEBUG nova.compute.manager [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.251 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847932.2514884, 8bf29da0-2e1a-4e89-bce8-b293a938c742 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.251 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.253 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.256 221554 INFO nova.virt.libvirt.driver [-] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Instance spawned successfully.#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.257 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.289 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.294 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.299 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.299 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.300 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.300 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.301 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.301 221554 DEBUG nova.virt.libvirt.driver [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.350 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.416 221554 INFO nova.compute.manager [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Took 19.54 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.417 221554 DEBUG nova.compute.manager [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.521 221554 INFO nova.compute.manager [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Took 21.45 seconds to build instance.#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.565 221554 DEBUG oslo_concurrency.lockutils [None req-952916cd-2972-4f97-abd3-48eb2fb1a957 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:32 np0005603609 nova_compute[221550]: 2026-01-31 08:25:32.607 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:33 np0005603609 podman[286741]: 2026-01-31 08:25:33.213159122 +0000 UTC m=+0.089220473 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:25:33 np0005603609 podman[286742]: 2026-01-31 08:25:33.217106587 +0000 UTC m=+0.085162006 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 03:25:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:34.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:34.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:34 np0005603609 nova_compute[221550]: 2026-01-31 08:25:34.907 221554 DEBUG nova.compute.manager [req-93afe0a5-e121-4e89-90bc-b80694da70e7 req-f1b33291-8cef-41cf-89e9-16a641dc6ce4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Received event network-vif-plugged-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:25:34 np0005603609 nova_compute[221550]: 2026-01-31 08:25:34.907 221554 DEBUG oslo_concurrency.lockutils [req-93afe0a5-e121-4e89-90bc-b80694da70e7 req-f1b33291-8cef-41cf-89e9-16a641dc6ce4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:34 np0005603609 nova_compute[221550]: 2026-01-31 08:25:34.908 221554 DEBUG oslo_concurrency.lockutils [req-93afe0a5-e121-4e89-90bc-b80694da70e7 req-f1b33291-8cef-41cf-89e9-16a641dc6ce4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:34 np0005603609 nova_compute[221550]: 2026-01-31 08:25:34.908 221554 DEBUG oslo_concurrency.lockutils [req-93afe0a5-e121-4e89-90bc-b80694da70e7 req-f1b33291-8cef-41cf-89e9-16a641dc6ce4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:34 np0005603609 nova_compute[221550]: 2026-01-31 08:25:34.909 221554 DEBUG nova.compute.manager [req-93afe0a5-e121-4e89-90bc-b80694da70e7 req-f1b33291-8cef-41cf-89e9-16a641dc6ce4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] No waiting events found dispatching network-vif-plugged-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:25:34 np0005603609 nova_compute[221550]: 2026-01-31 08:25:34.909 221554 WARNING nova.compute.manager [req-93afe0a5-e121-4e89-90bc-b80694da70e7 req-f1b33291-8cef-41cf-89e9-16a641dc6ce4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Received unexpected event network-vif-plugged-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:25:35 np0005603609 nova_compute[221550]: 2026-01-31 08:25:35.442 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:25:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:36.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:25:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:36.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:37 np0005603609 nova_compute[221550]: 2026-01-31 08:25:37.609 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:37 np0005603609 nova_compute[221550]: 2026-01-31 08:25:37.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:25:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:38.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:25:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:38.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:38 np0005603609 ovn_controller[130359]: 2026-01-31T08:25:38Z|00715|binding|INFO|Releasing lport 0b9d56f1-a803-44f1-b709-3bfbc71e0f57 from this chassis (sb_readonly=0)
Jan 31 03:25:38 np0005603609 NetworkManager[49064]: <info>  [1769847938.9531] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Jan 31 03:25:38 np0005603609 NetworkManager[49064]: <info>  [1769847938.9537] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Jan 31 03:25:38 np0005603609 nova_compute[221550]: 2026-01-31 08:25:38.955 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:38 np0005603609 nova_compute[221550]: 2026-01-31 08:25:38.959 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:38 np0005603609 ovn_controller[130359]: 2026-01-31T08:25:38Z|00716|binding|INFO|Releasing lport 0b9d56f1-a803-44f1-b709-3bfbc71e0f57 from this chassis (sb_readonly=0)
Jan 31 03:25:38 np0005603609 nova_compute[221550]: 2026-01-31 08:25:38.963 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:39 np0005603609 nova_compute[221550]: 2026-01-31 08:25:39.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.010 221554 DEBUG nova.compute.manager [req-e729fd8f-ffaf-46e2-ac1b-c4e7c84ef1e7 req-346b8303-3492-4864-8ef1-8946a0dcb113 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Received event network-changed-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.011 221554 DEBUG nova.compute.manager [req-e729fd8f-ffaf-46e2-ac1b-c4e7c84ef1e7 req-346b8303-3492-4864-8ef1-8946a0dcb113 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Refreshing instance network info cache due to event network-changed-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.012 221554 DEBUG oslo_concurrency.lockutils [req-e729fd8f-ffaf-46e2-ac1b-c4e7c84ef1e7 req-346b8303-3492-4864-8ef1-8946a0dcb113 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.013 221554 DEBUG oslo_concurrency.lockutils [req-e729fd8f-ffaf-46e2-ac1b-c4e7c84ef1e7 req-346b8303-3492-4864-8ef1-8946a0dcb113 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.013 221554 DEBUG nova.network.neutron [req-e729fd8f-ffaf-46e2-ac1b-c4e7c84ef1e7 req-346b8303-3492-4864-8ef1-8946a0dcb113 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Refreshing network info cache for port 2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.055 221554 DEBUG oslo_concurrency.lockutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.056 221554 DEBUG oslo_concurrency.lockutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.105 221554 DEBUG nova.compute.manager [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:25:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:40.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.190 221554 DEBUG oslo_concurrency.lockutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.190 221554 DEBUG oslo_concurrency.lockutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.199 221554 DEBUG nova.virt.hardware [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.200 221554 INFO nova.compute.claims [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:25:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:40.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.444 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.649 221554 DEBUG oslo_concurrency.processutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.681 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.682 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:25:40 np0005603609 nova_compute[221550]: 2026-01-31 08:25:40.682 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:25:41 np0005603609 nova_compute[221550]: 2026-01-31 08:25:41.143 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:25:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:25:41 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2644805062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:25:41 np0005603609 nova_compute[221550]: 2026-01-31 08:25:41.450 221554 DEBUG oslo_concurrency.processutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.801s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:41 np0005603609 nova_compute[221550]: 2026-01-31 08:25:41.456 221554 DEBUG nova.compute.provider_tree [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:25:41 np0005603609 nova_compute[221550]: 2026-01-31 08:25:41.535 221554 DEBUG nova.scheduler.client.report [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:25:41 np0005603609 nova_compute[221550]: 2026-01-31 08:25:41.902 221554 DEBUG oslo_concurrency.lockutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:41 np0005603609 nova_compute[221550]: 2026-01-31 08:25:41.903 221554 DEBUG nova.compute.manager [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:25:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:25:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:42.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:25:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:42.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:42 np0005603609 nova_compute[221550]: 2026-01-31 08:25:42.611 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:42 np0005603609 nova_compute[221550]: 2026-01-31 08:25:42.643 221554 DEBUG nova.compute.manager [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Jan 31 03:25:42 np0005603609 nova_compute[221550]: 2026-01-31 08:25:42.678 221554 INFO nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:25:42 np0005603609 nova_compute[221550]: 2026-01-31 08:25:42.722 221554 DEBUG nova.compute.manager [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:25:42 np0005603609 nova_compute[221550]: 2026-01-31 08:25:42.828 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:25:42 np0005603609 nova_compute[221550]: 2026-01-31 08:25:42.839 221554 DEBUG nova.compute.manager [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:25:42 np0005603609 nova_compute[221550]: 2026-01-31 08:25:42.840 221554 DEBUG nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:25:42 np0005603609 nova_compute[221550]: 2026-01-31 08:25:42.841 221554 INFO nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Creating image(s)#033[00m
Jan 31 03:25:42 np0005603609 nova_compute[221550]: 2026-01-31 08:25:42.873 221554 DEBUG nova.storage.rbd_utils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:25:42 np0005603609 nova_compute[221550]: 2026-01-31 08:25:42.918 221554 DEBUG nova.storage.rbd_utils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:25:42 np0005603609 nova_compute[221550]: 2026-01-31 08:25:42.959 221554 DEBUG nova.storage.rbd_utils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:25:42 np0005603609 nova_compute[221550]: 2026-01-31 08:25:42.965 221554 DEBUG oslo_concurrency.processutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:43 np0005603609 nova_compute[221550]: 2026-01-31 08:25:43.040 221554 DEBUG oslo_concurrency.processutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:43 np0005603609 nova_compute[221550]: 2026-01-31 08:25:43.041 221554 DEBUG oslo_concurrency.lockutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:43 np0005603609 nova_compute[221550]: 2026-01-31 08:25:43.041 221554 DEBUG oslo_concurrency.lockutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:43 np0005603609 nova_compute[221550]: 2026-01-31 08:25:43.041 221554 DEBUG oslo_concurrency.lockutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:43 np0005603609 nova_compute[221550]: 2026-01-31 08:25:43.069 221554 DEBUG nova.storage.rbd_utils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:25:43 np0005603609 nova_compute[221550]: 2026-01-31 08:25:43.073 221554 DEBUG oslo_concurrency.processutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:43 np0005603609 nova_compute[221550]: 2026-01-31 08:25:43.573 221554 DEBUG oslo_concurrency.processutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:43 np0005603609 nova_compute[221550]: 2026-01-31 08:25:43.661 221554 DEBUG nova.storage.rbd_utils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] resizing rbd image d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:25:43 np0005603609 nova_compute[221550]: 2026-01-31 08:25:43.889 221554 DEBUG nova.objects.instance [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lazy-loading 'migration_context' on Instance uuid d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.081 221554 DEBUG nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.082 221554 DEBUG nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Ensure instance console log exists: /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.083 221554 DEBUG oslo_concurrency.lockutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.083 221554 DEBUG oslo_concurrency.lockutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.083 221554 DEBUG oslo_concurrency.lockutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.085 221554 DEBUG nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.091 221554 WARNING nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.095 221554 DEBUG nova.virt.libvirt.host [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.096 221554 DEBUG nova.virt.libvirt.host [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.098 221554 DEBUG nova.virt.libvirt.host [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.098 221554 DEBUG nova.virt.libvirt.host [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.099 221554 DEBUG nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.100 221554 DEBUG nova.virt.hardware [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.100 221554 DEBUG nova.virt.hardware [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.100 221554 DEBUG nova.virt.hardware [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.100 221554 DEBUG nova.virt.hardware [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.101 221554 DEBUG nova.virt.hardware [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.101 221554 DEBUG nova.virt.hardware [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.101 221554 DEBUG nova.virt.hardware [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.101 221554 DEBUG nova.virt.hardware [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.102 221554 DEBUG nova.virt.hardware [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.102 221554 DEBUG nova.virt.hardware [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.102 221554 DEBUG nova.virt.hardware [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.104 221554 DEBUG oslo_concurrency.processutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:44.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:44.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:25:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2405147875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.554 221554 DEBUG oslo_concurrency.processutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.582 221554 DEBUG nova.storage.rbd_utils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.586 221554 DEBUG oslo_concurrency.processutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:25:44 np0005603609 nova_compute[221550]: 2026-01-31 08:25:44.754 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:25:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:44.755 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:25:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:44.757 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 03:25:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:25:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2694520024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:25:45 np0005603609 nova_compute[221550]: 2026-01-31 08:25:45.011 221554 DEBUG oslo_concurrency.processutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:25:45 np0005603609 nova_compute[221550]: 2026-01-31 08:25:45.013 221554 DEBUG nova.objects.instance [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lazy-loading 'pci_devices' on Instance uuid d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:25:45 np0005603609 nova_compute[221550]: 2026-01-31 08:25:45.037 221554 DEBUG nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  <uuid>d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5</uuid>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  <name>instance-0000009f</name>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerShowV247Test-server-87063752</nova:name>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:25:44</nova:creationTime>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:25:45 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:        <nova:user uuid="3ebb4f01dd6e420b91e5f2282ecfd49d">tempest-ServerShowV247Test-1312169099-project-member</nova:user>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:        <nova:project uuid="03f29bb9f284401a8ff5c6431219974b">tempest-ServerShowV247Test-1312169099</nova:project>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <entry name="serial">d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5</entry>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <entry name="uuid">d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5</entry>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk">
Jan 31 03:25:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:25:45 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk.config">
Jan 31 03:25:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:25:45 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5/console.log" append="off"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:25:45 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:25:45 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:25:45 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:25:45 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 03:25:45 np0005603609 nova_compute[221550]: 2026-01-31 08:25:45.094 221554 DEBUG nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 03:25:45 np0005603609 nova_compute[221550]: 2026-01-31 08:25:45.095 221554 DEBUG nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 03:25:45 np0005603609 nova_compute[221550]: 2026-01-31 08:25:45.095 221554 INFO nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Using config drive
Jan 31 03:25:45 np0005603609 nova_compute[221550]: 2026-01-31 08:25:45.120 221554 DEBUG nova.storage.rbd_utils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:25:45 np0005603609 nova_compute[221550]: 2026-01-31 08:25:45.446 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:25:45 np0005603609 nova_compute[221550]: 2026-01-31 08:25:45.645 221554 INFO nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Creating config drive at /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5/disk.config
Jan 31 03:25:45 np0005603609 nova_compute[221550]: 2026-01-31 08:25:45.651 221554 DEBUG oslo_concurrency.processutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpytqm0y4m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:25:45 np0005603609 nova_compute[221550]: 2026-01-31 08:25:45.781 221554 DEBUG oslo_concurrency.processutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpytqm0y4m" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:25:45 np0005603609 nova_compute[221550]: 2026-01-31 08:25:45.820 221554 DEBUG nova.storage.rbd_utils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:25:45 np0005603609 nova_compute[221550]: 2026-01-31 08:25:45.826 221554 DEBUG oslo_concurrency.processutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5/disk.config d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.035 221554 DEBUG oslo_concurrency.processutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5/disk.config d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.036 221554 INFO nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Deleting local config drive /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5/disk.config because it was imported into RBD.
Jan 31 03:25:46 np0005603609 systemd-machined[190912]: New machine qemu-85-instance-0000009f.
Jan 31 03:25:46 np0005603609 systemd[1]: Started Virtual Machine qemu-85-instance-0000009f.
Jan 31 03:25:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:46.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:46.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.265 221554 DEBUG nova.network.neutron [req-e729fd8f-ffaf-46e2-ac1b-c4e7c84ef1e7 req-346b8303-3492-4864-8ef1-8946a0dcb113 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updated VIF entry in instance network info cache for port 2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.267 221554 DEBUG nova.network.neutron [req-e729fd8f-ffaf-46e2-ac1b-c4e7c84ef1e7 req-346b8303-3492-4864-8ef1-8946a0dcb113 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updating instance_info_cache with network_info: [{"id": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "address": "fa:16:3e:85:fa:8d", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b86ae61-4c", "ovs_interfaceid": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.300 221554 DEBUG oslo_concurrency.lockutils [req-e729fd8f-ffaf-46e2-ac1b-c4e7c84ef1e7 req-346b8303-3492-4864-8ef1-8946a0dcb113 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.301 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.301 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.301 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8bf29da0-2e1a-4e89-bce8-b293a938c742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:25:46 np0005603609 ovn_controller[130359]: 2026-01-31T08:25:46Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:fa:8d 10.100.0.6
Jan 31 03:25:46 np0005603609 ovn_controller[130359]: 2026-01-31T08:25:46Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:fa:8d 10.100.0.6
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.564 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847946.5646017, d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.565 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] VM Resumed (Lifecycle Event)
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.567 221554 DEBUG nova.compute.manager [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.567 221554 DEBUG nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.572 221554 INFO nova.virt.libvirt.driver [-] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Instance spawned successfully.
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.572 221554 DEBUG nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.605 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.608 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.620 221554 DEBUG nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.621 221554 DEBUG nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.621 221554 DEBUG nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.621 221554 DEBUG nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.622 221554 DEBUG nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.622 221554 DEBUG nova.virt.libvirt.driver [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.687 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.688 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847946.5665812, d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.688 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] VM Started (Lifecycle Event)
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.720 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.723 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.758 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.807 221554 INFO nova.compute.manager [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Took 3.97 seconds to spawn the instance on the hypervisor.
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.808 221554 DEBUG nova.compute.manager [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:25:46 np0005603609 nova_compute[221550]: 2026-01-31 08:25:46.913 221554 INFO nova.compute.manager [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Took 6.76 seconds to build instance.
Jan 31 03:25:47 np0005603609 nova_compute[221550]: 2026-01-31 08:25:47.392 221554 DEBUG oslo_concurrency.lockutils [None req-df019b81-b157-470e-b6a3-fe24983ff50a 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:25:47 np0005603609 nova_compute[221550]: 2026-01-31 08:25:47.615 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:25:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:25:47.758 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:25:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:48.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:48.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.139 221554 INFO nova.compute.manager [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Rebuilding instance#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.569 221554 DEBUG nova.objects.instance [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lazy-loading 'trusted_certs' on Instance uuid d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.588 221554 DEBUG nova.compute.manager [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.750 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updating instance_info_cache with network_info: [{"id": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "address": "fa:16:3e:85:fa:8d", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b86ae61-4c", "ovs_interfaceid": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.809 221554 DEBUG nova.objects.instance [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lazy-loading 'pci_requests' on Instance uuid d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.810 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.811 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.811 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.811 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.812 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.812 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.812 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.812 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.971 221554 DEBUG nova.objects.instance [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lazy-loading 'pci_devices' on Instance uuid d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.977 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.977 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.977 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.978 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.978 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:49 np0005603609 nova_compute[221550]: 2026-01-31 08:25:49.996 221554 DEBUG nova.objects.instance [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lazy-loading 'resources' on Instance uuid d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.011 221554 DEBUG nova.objects.instance [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lazy-loading 'migration_context' on Instance uuid d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.025 221554 DEBUG nova.objects.instance [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.027 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:25:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:50.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:50.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:25:50 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3045328600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.436 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.475 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.621 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.622 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.625 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.625 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.764 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.765 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3958MB free_disk=20.90155792236328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.765 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.765 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.895 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.986 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 8bf29da0-2e1a-4e89-bce8-b293a938c742 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.987 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.987 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:25:50 np0005603609 nova_compute[221550]: 2026-01-31 08:25:50.987 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:25:51 np0005603609 nova_compute[221550]: 2026-01-31 08:25:51.095 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:25:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2226192119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:25:51 np0005603609 nova_compute[221550]: 2026-01-31 08:25:51.503 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:51 np0005603609 nova_compute[221550]: 2026-01-31 08:25:51.509 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:25:51 np0005603609 nova_compute[221550]: 2026-01-31 08:25:51.540 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:25:51 np0005603609 nova_compute[221550]: 2026-01-31 08:25:51.587 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:25:51 np0005603609 nova_compute[221550]: 2026-01-31 08:25:51.587 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:52.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:52.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:52 np0005603609 nova_compute[221550]: 2026-01-31 08:25:52.618 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:25:52.687775) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952687852, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 2039, "num_deletes": 262, "total_data_size": 4385333, "memory_usage": 4455488, "flush_reason": "Manual Compaction"}
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952720892, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 2866418, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63709, "largest_seqno": 65743, "table_properties": {"data_size": 2858202, "index_size": 4901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18167, "raw_average_key_size": 20, "raw_value_size": 2841245, "raw_average_value_size": 3203, "num_data_blocks": 213, "num_entries": 887, "num_filter_entries": 887, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847804, "oldest_key_time": 1769847804, "file_creation_time": 1769847952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 33239 microseconds, and 8913 cpu microseconds.
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:25:52.721015) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 2866418 bytes OK
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:25:52.721051) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:25:52.723051) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:25:52.723073) EVENT_LOG_v1 {"time_micros": 1769847952723065, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:25:52.723098) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 4376138, prev total WAL file size 4376138, number of live WAL files 2.
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:25:52.724288) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323631' seq:72057594037927935, type:22 .. '6C6F676D0032353134' seq:0, type:0; will stop at (end)
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(2799KB)], [129(8980KB)]
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952724322, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 12062282, "oldest_snapshot_seqno": -1}
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 8802 keys, 11909707 bytes, temperature: kUnknown
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952868338, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 11909707, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11852323, "index_size": 34295, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 231733, "raw_average_key_size": 26, "raw_value_size": 11696882, "raw_average_value_size": 1328, "num_data_blocks": 1318, "num_entries": 8802, "num_filter_entries": 8802, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769847952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:25:52.869308) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11909707 bytes
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:25:52.892007) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 83.7 rd, 82.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 8.8 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(8.4) write-amplify(4.2) OK, records in: 9343, records dropped: 541 output_compression: NoCompression
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:25:52.892046) EVENT_LOG_v1 {"time_micros": 1769847952892032, "job": 82, "event": "compaction_finished", "compaction_time_micros": 144196, "compaction_time_cpu_micros": 39718, "output_level": 6, "num_output_files": 1, "total_output_size": 11909707, "num_input_records": 9343, "num_output_records": 8802, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952892433, "job": 82, "event": "table_file_deletion", "file_number": 131}
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952893291, "job": 82, "event": "table_file_deletion", "file_number": 129}
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:25:52.724190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:25:52.893340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:25:52.893346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:25:52.893349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:25:52.893350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:25:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:25:52.893352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:25:53 np0005603609 nova_compute[221550]: 2026-01-31 08:25:53.084 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:54.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:54.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:55 np0005603609 nova_compute[221550]: 2026-01-31 08:25:55.478 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:25:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:56.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:25:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:56.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:56 np0005603609 nova_compute[221550]: 2026-01-31 08:25:56.755 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:57 np0005603609 nova_compute[221550]: 2026-01-31 08:25:57.622 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:25:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:58.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:25:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:25:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:58.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:59 np0005603609 nova_compute[221550]: 2026-01-31 08:25:59.561 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:00 np0005603609 nova_compute[221550]: 2026-01-31 08:26:00.071 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:26:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:00.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:26:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:00.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:26:00 np0005603609 nova_compute[221550]: 2026-01-31 08:26:00.480 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:02.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:02.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:02 np0005603609 nova_compute[221550]: 2026-01-31 08:26:02.624 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:02 np0005603609 nova_compute[221550]: 2026-01-31 08:26:02.807 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:03 np0005603609 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d0000009f.scope: Deactivated successfully.
Jan 31 03:26:03 np0005603609 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d0000009f.scope: Consumed 12.997s CPU time.
Jan 31 03:26:03 np0005603609 systemd-machined[190912]: Machine qemu-85-instance-0000009f terminated.
Jan 31 03:26:03 np0005603609 podman[287202]: 2026-01-31 08:26:03.734947021 +0000 UTC m=+0.065708199 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:26:03 np0005603609 podman[287201]: 2026-01-31 08:26:03.75862719 +0000 UTC m=+0.089697055 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 03:26:04 np0005603609 nova_compute[221550]: 2026-01-31 08:26:04.088 221554 INFO nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Instance shutdown successfully after 14 seconds.#033[00m
Jan 31 03:26:04 np0005603609 nova_compute[221550]: 2026-01-31 08:26:04.096 221554 INFO nova.virt.libvirt.driver [-] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Instance destroyed successfully.#033[00m
Jan 31 03:26:04 np0005603609 nova_compute[221550]: 2026-01-31 08:26:04.103 221554 INFO nova.virt.libvirt.driver [-] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Instance destroyed successfully.#033[00m
Jan 31 03:26:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:04.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:04.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:04 np0005603609 nova_compute[221550]: 2026-01-31 08:26:04.681 221554 INFO nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Deleting instance files /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_del#033[00m
Jan 31 03:26:04 np0005603609 nova_compute[221550]: 2026-01-31 08:26:04.682 221554 INFO nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Deletion of /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_del complete#033[00m
Jan 31 03:26:05 np0005603609 nova_compute[221550]: 2026-01-31 08:26:05.483 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:05 np0005603609 nova_compute[221550]: 2026-01-31 08:26:05.847 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:26:05 np0005603609 nova_compute[221550]: 2026-01-31 08:26:05.848 221554 INFO nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Creating image(s)#033[00m
Jan 31 03:26:05 np0005603609 nova_compute[221550]: 2026-01-31 08:26:05.882 221554 DEBUG nova.storage.rbd_utils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:26:05 np0005603609 nova_compute[221550]: 2026-01-31 08:26:05.908 221554 DEBUG nova.storage.rbd_utils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:26:05 np0005603609 nova_compute[221550]: 2026-01-31 08:26:05.932 221554 DEBUG nova.storage.rbd_utils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:26:05 np0005603609 nova_compute[221550]: 2026-01-31 08:26:05.936 221554 DEBUG oslo_concurrency.processutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:05 np0005603609 nova_compute[221550]: 2026-01-31 08:26:05.994 221554 DEBUG oslo_concurrency.processutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:05 np0005603609 nova_compute[221550]: 2026-01-31 08:26:05.994 221554 DEBUG oslo_concurrency.lockutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:05 np0005603609 nova_compute[221550]: 2026-01-31 08:26:05.995 221554 DEBUG oslo_concurrency.lockutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:05 np0005603609 nova_compute[221550]: 2026-01-31 08:26:05.995 221554 DEBUG oslo_concurrency.lockutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.022 221554 DEBUG nova.storage.rbd_utils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.026 221554 DEBUG oslo_concurrency.processutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:06.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:06.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.343 221554 DEBUG oslo_concurrency.processutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.412 221554 DEBUG nova.storage.rbd_utils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] resizing rbd image d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.546 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.546 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Ensure instance console log exists: /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.547 221554 DEBUG oslo_concurrency.lockutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.547 221554 DEBUG oslo_concurrency.lockutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.548 221554 DEBUG oslo_concurrency.lockutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.549 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.554 221554 WARNING nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.561 221554 DEBUG nova.virt.libvirt.host [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.562 221554 DEBUG nova.virt.libvirt.host [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.565 221554 DEBUG nova.virt.libvirt.host [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.566 221554 DEBUG nova.virt.libvirt.host [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.567 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.568 221554 DEBUG nova.virt.hardware [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.568 221554 DEBUG nova.virt.hardware [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.569 221554 DEBUG nova.virt.hardware [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.569 221554 DEBUG nova.virt.hardware [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.569 221554 DEBUG nova.virt.hardware [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.570 221554 DEBUG nova.virt.hardware [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.570 221554 DEBUG nova.virt.hardware [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.570 221554 DEBUG nova.virt.hardware [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.571 221554 DEBUG nova.virt.hardware [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.571 221554 DEBUG nova.virt.hardware [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.571 221554 DEBUG nova.virt.hardware [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.571 221554 DEBUG nova.objects.instance [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lazy-loading 'vcpu_model' on Instance uuid d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:06 np0005603609 nova_compute[221550]: 2026-01-31 08:26:06.613 221554 DEBUG oslo_concurrency.processutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:26:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1500801380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:26:07 np0005603609 nova_compute[221550]: 2026-01-31 08:26:07.065 221554 DEBUG oslo_concurrency.processutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:07 np0005603609 nova_compute[221550]: 2026-01-31 08:26:07.094 221554 DEBUG nova.storage.rbd_utils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:26:07 np0005603609 nova_compute[221550]: 2026-01-31 08:26:07.099 221554 DEBUG oslo_concurrency.processutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:26:07.518 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:26:07.518 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:26:07.519 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:26:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2069233769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:26:07 np0005603609 nova_compute[221550]: 2026-01-31 08:26:07.586 221554 DEBUG oslo_concurrency.processutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:07 np0005603609 nova_compute[221550]: 2026-01-31 08:26:07.590 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  <uuid>d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5</uuid>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  <name>instance-0000009f</name>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerShowV247Test-server-87063752</nova:name>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:26:06</nova:creationTime>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:26:07 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:        <nova:user uuid="3ebb4f01dd6e420b91e5f2282ecfd49d">tempest-ServerShowV247Test-1312169099-project-member</nova:user>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:        <nova:project uuid="03f29bb9f284401a8ff5c6431219974b">tempest-ServerShowV247Test-1312169099</nova:project>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="40cf2ff3-f7ff-4843-b4ab-b7dcc843006f"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <entry name="serial">d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5</entry>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <entry name="uuid">d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5</entry>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk">
Jan 31 03:26:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:26:07 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk.config">
Jan 31 03:26:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:26:07 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5/console.log" append="off"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:26:07 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:26:07 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:26:07 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:26:07 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:26:07 np0005603609 nova_compute[221550]: 2026-01-31 08:26:07.626 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:07 np0005603609 nova_compute[221550]: 2026-01-31 08:26:07.755 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:26:07 np0005603609 nova_compute[221550]: 2026-01-31 08:26:07.755 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:26:07 np0005603609 nova_compute[221550]: 2026-01-31 08:26:07.756 221554 INFO nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Using config drive#033[00m
Jan 31 03:26:07 np0005603609 nova_compute[221550]: 2026-01-31 08:26:07.786 221554 DEBUG nova.storage.rbd_utils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:26:07 np0005603609 nova_compute[221550]: 2026-01-31 08:26:07.814 221554 DEBUG nova.objects.instance [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lazy-loading 'ec2_ids' on Instance uuid d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:07 np0005603609 nova_compute[221550]: 2026-01-31 08:26:07.861 221554 DEBUG nova.objects.instance [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lazy-loading 'keypairs' on Instance uuid d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:08 np0005603609 nova_compute[221550]: 2026-01-31 08:26:08.079 221554 INFO nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Creating config drive at /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5/disk.config#033[00m
Jan 31 03:26:08 np0005603609 nova_compute[221550]: 2026-01-31 08:26:08.083 221554 DEBUG oslo_concurrency.processutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp7yr40q0j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:26:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:08.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:26:08 np0005603609 nova_compute[221550]: 2026-01-31 08:26:08.212 221554 DEBUG oslo_concurrency.processutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp7yr40q0j" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:08 np0005603609 nova_compute[221550]: 2026-01-31 08:26:08.258 221554 DEBUG nova.storage.rbd_utils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] rbd image d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:26:08 np0005603609 nova_compute[221550]: 2026-01-31 08:26:08.264 221554 DEBUG oslo_concurrency.processutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5/disk.config d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:08.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:08 np0005603609 nova_compute[221550]: 2026-01-31 08:26:08.461 221554 DEBUG oslo_concurrency.processutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5/disk.config d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:08 np0005603609 nova_compute[221550]: 2026-01-31 08:26:08.463 221554 INFO nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Deleting local config drive /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5/disk.config because it was imported into RBD.#033[00m
Jan 31 03:26:08 np0005603609 systemd-machined[190912]: New machine qemu-86-instance-0000009f.
Jan 31 03:26:08 np0005603609 systemd[1]: Started Virtual Machine qemu-86-instance-0000009f.
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.073 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.074 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847969.0728939, d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.074 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.076 221554 DEBUG nova.compute.manager [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.077 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.079 221554 INFO nova.virt.libvirt.driver [-] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Instance spawned successfully.#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.080 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.109 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.114 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.117 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.117 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.118 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.118 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.118 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.119 221554 DEBUG nova.virt.libvirt.driver [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.172 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.173 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769847969.073744, d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.173 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] VM Started (Lifecycle Event)#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.198 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.201 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.233 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.310 221554 DEBUG nova.compute.manager [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.398 221554 DEBUG oslo_concurrency.lockutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.398 221554 DEBUG oslo_concurrency.lockutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.399 221554 DEBUG nova.objects.instance [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 03:26:09 np0005603609 nova_compute[221550]: 2026-01-31 08:26:09.592 221554 DEBUG oslo_concurrency.lockutils [None req-74788837-c552-4b2e-b349-63d1d2199900 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:10.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:10.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:10 np0005603609 nova_compute[221550]: 2026-01-31 08:26:10.524 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:11 np0005603609 nova_compute[221550]: 2026-01-31 08:26:11.665 221554 DEBUG oslo_concurrency.lockutils [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:11 np0005603609 nova_compute[221550]: 2026-01-31 08:26:11.666 221554 DEBUG oslo_concurrency.lockutils [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:11 np0005603609 nova_compute[221550]: 2026-01-31 08:26:11.666 221554 DEBUG oslo_concurrency.lockutils [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:11 np0005603609 nova_compute[221550]: 2026-01-31 08:26:11.666 221554 DEBUG oslo_concurrency.lockutils [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:11 np0005603609 nova_compute[221550]: 2026-01-31 08:26:11.666 221554 DEBUG oslo_concurrency.lockutils [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:11 np0005603609 nova_compute[221550]: 2026-01-31 08:26:11.667 221554 INFO nova.compute.manager [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Terminating instance#033[00m
Jan 31 03:26:11 np0005603609 nova_compute[221550]: 2026-01-31 08:26:11.668 221554 DEBUG oslo_concurrency.lockutils [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "refresh_cache-d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:26:11 np0005603609 nova_compute[221550]: 2026-01-31 08:26:11.668 221554 DEBUG oslo_concurrency.lockutils [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquired lock "refresh_cache-d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:26:11 np0005603609 nova_compute[221550]: 2026-01-31 08:26:11.668 221554 DEBUG nova.network.neutron [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:26:12 np0005603609 nova_compute[221550]: 2026-01-31 08:26:12.022 221554 DEBUG nova.network.neutron [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:26:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:12.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:26:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:12.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:26:12 np0005603609 nova_compute[221550]: 2026-01-31 08:26:12.629 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:12 np0005603609 nova_compute[221550]: 2026-01-31 08:26:12.851 221554 DEBUG nova.network.neutron [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:26:12 np0005603609 nova_compute[221550]: 2026-01-31 08:26:12.920 221554 DEBUG oslo_concurrency.lockutils [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Releasing lock "refresh_cache-d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:26:12 np0005603609 nova_compute[221550]: 2026-01-31 08:26:12.921 221554 DEBUG nova.compute.manager [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:26:12 np0005603609 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000009f.scope: Deactivated successfully.
Jan 31 03:26:12 np0005603609 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000009f.scope: Consumed 4.513s CPU time.
Jan 31 03:26:12 np0005603609 systemd-machined[190912]: Machine qemu-86-instance-0000009f terminated.
Jan 31 03:26:13 np0005603609 nova_compute[221550]: 2026-01-31 08:26:13.137 221554 INFO nova.virt.libvirt.driver [-] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Instance destroyed successfully.#033[00m
Jan 31 03:26:13 np0005603609 nova_compute[221550]: 2026-01-31 08:26:13.138 221554 DEBUG nova.objects.instance [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lazy-loading 'resources' on Instance uuid d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:26:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:14.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:26:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:14.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:15 np0005603609 nova_compute[221550]: 2026-01-31 08:26:15.319 221554 INFO nova.virt.libvirt.driver [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Deleting instance files /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_del#033[00m
Jan 31 03:26:15 np0005603609 nova_compute[221550]: 2026-01-31 08:26:15.320 221554 INFO nova.virt.libvirt.driver [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Deletion of /var/lib/nova/instances/d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5_del complete#033[00m
Jan 31 03:26:15 np0005603609 nova_compute[221550]: 2026-01-31 08:26:15.462 221554 INFO nova.compute.manager [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Took 2.54 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:26:15 np0005603609 nova_compute[221550]: 2026-01-31 08:26:15.463 221554 DEBUG oslo.service.loopingcall [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:26:15 np0005603609 nova_compute[221550]: 2026-01-31 08:26:15.463 221554 DEBUG nova.compute.manager [-] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:26:15 np0005603609 nova_compute[221550]: 2026-01-31 08:26:15.463 221554 DEBUG nova.network.neutron [-] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:26:15 np0005603609 nova_compute[221550]: 2026-01-31 08:26:15.526 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:16 np0005603609 nova_compute[221550]: 2026-01-31 08:26:16.003 221554 DEBUG nova.network.neutron [-] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:26:16 np0005603609 nova_compute[221550]: 2026-01-31 08:26:16.026 221554 DEBUG nova.network.neutron [-] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:26:16 np0005603609 nova_compute[221550]: 2026-01-31 08:26:16.101 221554 INFO nova.compute.manager [-] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Took 0.64 seconds to deallocate network for instance.#033[00m
Jan 31 03:26:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:26:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:16.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:26:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:16.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:16 np0005603609 nova_compute[221550]: 2026-01-31 08:26:16.299 221554 DEBUG oslo_concurrency.lockutils [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:16 np0005603609 nova_compute[221550]: 2026-01-31 08:26:16.300 221554 DEBUG oslo_concurrency.lockutils [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:16 np0005603609 nova_compute[221550]: 2026-01-31 08:26:16.509 221554 DEBUG nova.compute.manager [req-c94368fa-1a1e-4995-a46a-c57b9a3230fb req-64f8182f-9424-4cb6-bca4-fd06d0a0b194 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Received event network-changed-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:26:16 np0005603609 nova_compute[221550]: 2026-01-31 08:26:16.510 221554 DEBUG nova.compute.manager [req-c94368fa-1a1e-4995-a46a-c57b9a3230fb req-64f8182f-9424-4cb6-bca4-fd06d0a0b194 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Refreshing instance network info cache due to event network-changed-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:26:16 np0005603609 nova_compute[221550]: 2026-01-31 08:26:16.510 221554 DEBUG oslo_concurrency.lockutils [req-c94368fa-1a1e-4995-a46a-c57b9a3230fb req-64f8182f-9424-4cb6-bca4-fd06d0a0b194 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:26:16 np0005603609 nova_compute[221550]: 2026-01-31 08:26:16.510 221554 DEBUG oslo_concurrency.lockutils [req-c94368fa-1a1e-4995-a46a-c57b9a3230fb req-64f8182f-9424-4cb6-bca4-fd06d0a0b194 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:26:16 np0005603609 nova_compute[221550]: 2026-01-31 08:26:16.511 221554 DEBUG nova.network.neutron [req-c94368fa-1a1e-4995-a46a-c57b9a3230fb req-64f8182f-9424-4cb6-bca4-fd06d0a0b194 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Refreshing network info cache for port 2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:26:16 np0005603609 nova_compute[221550]: 2026-01-31 08:26:16.551 221554 DEBUG oslo_concurrency.processutils [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:26:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3581095025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:26:16 np0005603609 nova_compute[221550]: 2026-01-31 08:26:16.980 221554 DEBUG oslo_concurrency.processutils [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:16 np0005603609 nova_compute[221550]: 2026-01-31 08:26:16.989 221554 DEBUG nova.compute.provider_tree [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:26:17 np0005603609 nova_compute[221550]: 2026-01-31 08:26:17.014 221554 DEBUG nova.scheduler.client.report [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:26:17 np0005603609 nova_compute[221550]: 2026-01-31 08:26:17.062 221554 DEBUG oslo_concurrency.lockutils [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:17 np0005603609 nova_compute[221550]: 2026-01-31 08:26:17.144 221554 INFO nova.scheduler.client.report [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Deleted allocations for instance d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5#033[00m
Jan 31 03:26:17 np0005603609 nova_compute[221550]: 2026-01-31 08:26:17.338 221554 DEBUG oslo_concurrency.lockutils [None req-c6289a56-c10e-4e51-9f2d-ba97c6cad69b 3ebb4f01dd6e420b91e5f2282ecfd49d 03f29bb9f284401a8ff5c6431219974b - - default default] Lock "d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:17 np0005603609 nova_compute[221550]: 2026-01-31 08:26:17.666 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:18.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:18.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:20.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:20.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:20 np0005603609 nova_compute[221550]: 2026-01-31 08:26:20.528 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:22.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:22.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:22 np0005603609 nova_compute[221550]: 2026-01-31 08:26:22.638 221554 DEBUG nova.network.neutron [req-c94368fa-1a1e-4995-a46a-c57b9a3230fb req-64f8182f-9424-4cb6-bca4-fd06d0a0b194 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updated VIF entry in instance network info cache for port 2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:26:22 np0005603609 nova_compute[221550]: 2026-01-31 08:26:22.639 221554 DEBUG nova.network.neutron [req-c94368fa-1a1e-4995-a46a-c57b9a3230fb req-64f8182f-9424-4cb6-bca4-fd06d0a0b194 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updating instance_info_cache with network_info: [{"id": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "address": "fa:16:3e:85:fa:8d", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b86ae61-4c", "ovs_interfaceid": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:26:22 np0005603609 nova_compute[221550]: 2026-01-31 08:26:22.670 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:22 np0005603609 nova_compute[221550]: 2026-01-31 08:26:22.885 221554 DEBUG oslo_concurrency.lockutils [req-c94368fa-1a1e-4995-a46a-c57b9a3230fb req-64f8182f-9424-4cb6-bca4-fd06d0a0b194 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:26:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:24.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:26:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:24.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:26:25 np0005603609 nova_compute[221550]: 2026-01-31 08:26:25.530 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:26.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:26.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:26 np0005603609 nova_compute[221550]: 2026-01-31 08:26:26.730 221554 DEBUG oslo_concurrency.lockutils [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:26 np0005603609 nova_compute[221550]: 2026-01-31 08:26:26.731 221554 DEBUG oslo_concurrency.lockutils [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:26 np0005603609 nova_compute[221550]: 2026-01-31 08:26:26.859 221554 DEBUG nova.objects.instance [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'flavor' on Instance uuid 8bf29da0-2e1a-4e89-bce8-b293a938c742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.077 221554 DEBUG oslo_concurrency.lockutils [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.617 221554 DEBUG oslo_concurrency.lockutils [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.617 221554 DEBUG oslo_concurrency.lockutils [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.618 221554 INFO nova.compute.manager [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Attaching volume 4e49c5a3-4191-4757-9b9f-0465fe16a7f1 to /dev/vdb#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.672 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.812 221554 DEBUG os_brick.utils [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.813 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.821 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.821 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[50bdab37-a360-47d5-bad7-50c9eed1a37b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.822 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.828 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.828 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[f89b9b13-cdb1-49d4-ab21-7d1e4d6aca6a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.831 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.837 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.837 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[65f8b138-bf5c-41af-86f1-acebe0b62ae3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.838 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[98cb6207-5d78-4494-9ab9-0082fbbc423d]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.839 221554 DEBUG oslo_concurrency.processutils [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.864 221554 DEBUG oslo_concurrency.processutils [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.869 221554 DEBUG os_brick.initiator.connectors.lightos [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.869 221554 DEBUG os_brick.initiator.connectors.lightos [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.870 221554 DEBUG os_brick.initiator.connectors.lightos [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.870 221554 DEBUG os_brick.utils [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] <== get_connector_properties: return (57ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:26:27 np0005603609 nova_compute[221550]: 2026-01-31 08:26:27.871 221554 DEBUG nova.virt.block_device [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updating existing volume attachment record: e7959411-8aea-4bb3-83f7-22e949a10669 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:26:28 np0005603609 nova_compute[221550]: 2026-01-31 08:26:28.134 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847973.1340108, d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:26:28 np0005603609 nova_compute[221550]: 2026-01-31 08:26:28.135 221554 INFO nova.compute.manager [-] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:26:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:28.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:28 np0005603609 nova_compute[221550]: 2026-01-31 08:26:28.224 221554 DEBUG nova.compute.manager [None req-cae48aea-2a7a-498a-93e6-63899b991f86 - - - - - -] [instance: d7b6a5b4-0ded-4ba6-8c8f-54bf16942af5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:26:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:28.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:26:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 49K writes, 194K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s#012Cumulative WAL: 49K writes, 18K syncs, 2.75 writes per sync, written: 0.19 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8528 writes, 32K keys, 8528 commit groups, 1.0 writes per commit group, ingest: 33.37 MB, 0.06 MB/s#012Interval WAL: 8528 writes, 3294 syncs, 2.59 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:26:29 np0005603609 nova_compute[221550]: 2026-01-31 08:26:29.985 221554 DEBUG nova.objects.instance [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'flavor' on Instance uuid 8bf29da0-2e1a-4e89-bce8-b293a938c742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:30 np0005603609 nova_compute[221550]: 2026-01-31 08:26:30.030 221554 DEBUG nova.virt.libvirt.driver [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Attempting to attach volume 4e49c5a3-4191-4757-9b9f-0465fe16a7f1 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 03:26:30 np0005603609 nova_compute[221550]: 2026-01-31 08:26:30.033 221554 DEBUG nova.virt.libvirt.guest [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 03:26:30 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:26:30 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-4e49c5a3-4191-4757-9b9f-0465fe16a7f1">
Jan 31 03:26:30 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:26:30 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:26:30 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:26:30 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:26:30 np0005603609 nova_compute[221550]:  <auth username="openstack">
Jan 31 03:26:30 np0005603609 nova_compute[221550]:    <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:26:30 np0005603609 nova_compute[221550]:  </auth>
Jan 31 03:26:30 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:26:30 np0005603609 nova_compute[221550]:  <serial>4e49c5a3-4191-4757-9b9f-0465fe16a7f1</serial>
Jan 31 03:26:30 np0005603609 nova_compute[221550]:  <shareable/>
Jan 31 03:26:30 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:26:30 np0005603609 nova_compute[221550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:26:30 np0005603609 nova_compute[221550]: 2026-01-31 08:26:30.175 221554 DEBUG nova.virt.libvirt.driver [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:26:30 np0005603609 nova_compute[221550]: 2026-01-31 08:26:30.175 221554 DEBUG nova.virt.libvirt.driver [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:26:30 np0005603609 nova_compute[221550]: 2026-01-31 08:26:30.176 221554 DEBUG nova.virt.libvirt.driver [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:26:30 np0005603609 nova_compute[221550]: 2026-01-31 08:26:30.176 221554 DEBUG nova.virt.libvirt.driver [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No VIF found with MAC fa:16:3e:85:fa:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:26:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:30.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:30.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:30 np0005603609 nova_compute[221550]: 2026-01-31 08:26:30.531 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:30 np0005603609 nova_compute[221550]: 2026-01-31 08:26:30.714 221554 DEBUG oslo_concurrency.lockutils [None req-f478f416-7121-43c9-ab1c-393af978124d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:26:31.567641) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991567701, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 661, "num_deletes": 251, "total_data_size": 979588, "memory_usage": 991448, "flush_reason": "Manual Compaction"}
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991579811, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 644796, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65748, "largest_seqno": 66404, "table_properties": {"data_size": 641646, "index_size": 1057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7682, "raw_average_key_size": 19, "raw_value_size": 635209, "raw_average_value_size": 1596, "num_data_blocks": 47, "num_entries": 398, "num_filter_entries": 398, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847953, "oldest_key_time": 1769847953, "file_creation_time": 1769847991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 12203 microseconds, and 2392 cpu microseconds.
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:26:31.579850) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 644796 bytes OK
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:26:31.579870) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:26:31.581649) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:26:31.581662) EVENT_LOG_v1 {"time_micros": 1769847991581658, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:26:31.581678) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 975982, prev total WAL file size 975982, number of live WAL files 2.
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:26:31.582217) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(629KB)], [132(11MB)]
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991582268, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 12554503, "oldest_snapshot_seqno": -1}
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 8688 keys, 10652124 bytes, temperature: kUnknown
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991798107, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 10652124, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10596698, "index_size": 32593, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 230085, "raw_average_key_size": 26, "raw_value_size": 10444624, "raw_average_value_size": 1202, "num_data_blocks": 1238, "num_entries": 8688, "num_filter_entries": 8688, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769847991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:26:31.798370) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 10652124 bytes
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:26:31.800586) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.2 rd, 49.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 11.4 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(36.0) write-amplify(16.5) OK, records in: 9200, records dropped: 512 output_compression: NoCompression
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:26:31.800604) EVENT_LOG_v1 {"time_micros": 1769847991800595, "job": 84, "event": "compaction_finished", "compaction_time_micros": 215882, "compaction_time_cpu_micros": 19359, "output_level": 6, "num_output_files": 1, "total_output_size": 10652124, "num_input_records": 9200, "num_output_records": 8688, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991800758, "job": 84, "event": "table_file_deletion", "file_number": 134}
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991801975, "job": 84, "event": "table_file_deletion", "file_number": 132}
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:26:31.582134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:26:31.802010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:26:31.802014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:26:31.802015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:26:31.802017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:26:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:26:31.802018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:26:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:32.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:26:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:32.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:26:32 np0005603609 nova_compute[221550]: 2026-01-31 08:26:32.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:32 np0005603609 nova_compute[221550]: 2026-01-31 08:26:32.674 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:34 np0005603609 podman[287817]: 2026-01-31 08:26:34.172680243 +0000 UTC m=+0.051090338 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:26:34 np0005603609 podman[287816]: 2026-01-31 08:26:34.213672108 +0000 UTC m=+0.097509213 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:26:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:34.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:34.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:34 np0005603609 nova_compute[221550]: 2026-01-31 08:26:34.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:34 np0005603609 nova_compute[221550]: 2026-01-31 08:26:34.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:26:35 np0005603609 nova_compute[221550]: 2026-01-31 08:26:35.564 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:36.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:36.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:37 np0005603609 nova_compute[221550]: 2026-01-31 08:26:37.676 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:37 np0005603609 nova_compute[221550]: 2026-01-31 08:26:37.697 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 03:26:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:26:38 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:38.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:26:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:26:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:38.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:26:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:26:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:26:38 np0005603609 nova_compute[221550]: 2026-01-31 08:26:38.919 221554 DEBUG oslo_concurrency.lockutils [None req-9c7c1b5d-8fae-4cca-bceb-dce887e711f8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:38 np0005603609 nova_compute[221550]: 2026-01-31 08:26:38.920 221554 DEBUG oslo_concurrency.lockutils [None req-9c7c1b5d-8fae-4cca-bceb-dce887e711f8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:38 np0005603609 nova_compute[221550]: 2026-01-31 08:26:38.953 221554 INFO nova.compute.manager [None req-9c7c1b5d-8fae-4cca-bceb-dce887e711f8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Detaching volume 4e49c5a3-4191-4757-9b9f-0465fe16a7f1#033[00m
Jan 31 03:26:39 np0005603609 nova_compute[221550]: 2026-01-31 08:26:39.155 221554 INFO nova.virt.block_device [None req-9c7c1b5d-8fae-4cca-bceb-dce887e711f8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Attempting to driver detach volume 4e49c5a3-4191-4757-9b9f-0465fe16a7f1 from mountpoint /dev/vdb#033[00m
Jan 31 03:26:39 np0005603609 nova_compute[221550]: 2026-01-31 08:26:39.165 221554 DEBUG nova.virt.libvirt.driver [None req-9c7c1b5d-8fae-4cca-bceb-dce887e711f8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Attempting to detach device vdb from instance 8bf29da0-2e1a-4e89-bce8-b293a938c742 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:26:39 np0005603609 nova_compute[221550]: 2026-01-31 08:26:39.166 221554 DEBUG nova.virt.libvirt.guest [None req-9c7c1b5d-8fae-4cca-bceb-dce887e711f8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:26:39 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:26:39 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-4e49c5a3-4191-4757-9b9f-0465fe16a7f1">
Jan 31 03:26:39 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:26:39 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:26:39 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:26:39 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:26:39 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:26:39 np0005603609 nova_compute[221550]:  <serial>4e49c5a3-4191-4757-9b9f-0465fe16a7f1</serial>
Jan 31 03:26:39 np0005603609 nova_compute[221550]:  <shareable/>
Jan 31 03:26:39 np0005603609 nova_compute[221550]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:26:39 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:26:39 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:26:39 np0005603609 nova_compute[221550]: 2026-01-31 08:26:39.172 221554 INFO nova.virt.libvirt.driver [None req-9c7c1b5d-8fae-4cca-bceb-dce887e711f8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully detached device vdb from instance 8bf29da0-2e1a-4e89-bce8-b293a938c742 from the persistent domain config.#033[00m
Jan 31 03:26:39 np0005603609 nova_compute[221550]: 2026-01-31 08:26:39.173 221554 DEBUG nova.virt.libvirt.driver [None req-9c7c1b5d-8fae-4cca-bceb-dce887e711f8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 8bf29da0-2e1a-4e89-bce8-b293a938c742 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 03:26:39 np0005603609 nova_compute[221550]: 2026-01-31 08:26:39.173 221554 DEBUG nova.virt.libvirt.guest [None req-9c7c1b5d-8fae-4cca-bceb-dce887e711f8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:26:39 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:26:39 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-4e49c5a3-4191-4757-9b9f-0465fe16a7f1">
Jan 31 03:26:39 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:26:39 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:26:39 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:26:39 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:26:39 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:26:39 np0005603609 nova_compute[221550]:  <serial>4e49c5a3-4191-4757-9b9f-0465fe16a7f1</serial>
Jan 31 03:26:39 np0005603609 nova_compute[221550]:  <shareable/>
Jan 31 03:26:39 np0005603609 nova_compute[221550]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:26:39 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:26:39 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:26:39 np0005603609 nova_compute[221550]: 2026-01-31 08:26:39.272 221554 DEBUG nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Received event <DeviceRemovedEvent: 1769847999.2717419, 8bf29da0-2e1a-4e89-bce8-b293a938c742 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 03:26:39 np0005603609 nova_compute[221550]: 2026-01-31 08:26:39.273 221554 DEBUG nova.virt.libvirt.driver [None req-9c7c1b5d-8fae-4cca-bceb-dce887e711f8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 8bf29da0-2e1a-4e89-bce8-b293a938c742 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 03:26:39 np0005603609 nova_compute[221550]: 2026-01-31 08:26:39.277 221554 INFO nova.virt.libvirt.driver [None req-9c7c1b5d-8fae-4cca-bceb-dce887e711f8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully detached device vdb from instance 8bf29da0-2e1a-4e89-bce8-b293a938c742 from the live domain config.#033[00m
Jan 31 03:26:39 np0005603609 nova_compute[221550]: 2026-01-31 08:26:39.882 221554 DEBUG nova.objects.instance [None req-9c7c1b5d-8fae-4cca-bceb-dce887e711f8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'flavor' on Instance uuid 8bf29da0-2e1a-4e89-bce8-b293a938c742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:40 np0005603609 nova_compute[221550]: 2026-01-31 08:26:40.210 221554 DEBUG oslo_concurrency.lockutils [None req-9c7c1b5d-8fae-4cca-bceb-dce887e711f8 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 03:26:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:26:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:26:40 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:40.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:26:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:40.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:26:40 np0005603609 nova_compute[221550]: 2026-01-31 08:26:40.566 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:40 np0005603609 nova_compute[221550]: 2026-01-31 08:26:40.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 03:26:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:42.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:42 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:42.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:42 np0005603609 nova_compute[221550]: 2026-01-31 08:26:42.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:42 np0005603609 nova_compute[221550]: 2026-01-31 08:26:42.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:26:42 np0005603609 nova_compute[221550]: 2026-01-31 08:26:42.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:26:42 np0005603609 nova_compute[221550]: 2026-01-31 08:26:42.678 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:42 np0005603609 nova_compute[221550]: 2026-01-31 08:26:42.901 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:26:42 np0005603609 nova_compute[221550]: 2026-01-31 08:26:42.902 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:26:42 np0005603609 nova_compute[221550]: 2026-01-31 08:26:42.902 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:26:42 np0005603609 nova_compute[221550]: 2026-01-31 08:26:42.902 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8bf29da0-2e1a-4e89-bce8-b293a938c742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:44.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:44.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:44 np0005603609 nova_compute[221550]: 2026-01-31 08:26:44.538 221554 DEBUG oslo_concurrency.lockutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Acquiring lock "1953310a-9489-4772-bf88-a90d20b0a43c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:44 np0005603609 nova_compute[221550]: 2026-01-31 08:26:44.538 221554 DEBUG oslo_concurrency.lockutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "1953310a-9489-4772-bf88-a90d20b0a43c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:44 np0005603609 nova_compute[221550]: 2026-01-31 08:26:44.616 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updating instance_info_cache with network_info: [{"id": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "address": "fa:16:3e:85:fa:8d", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b86ae61-4c", "ovs_interfaceid": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:26:44 np0005603609 nova_compute[221550]: 2026-01-31 08:26:44.633 221554 DEBUG nova.compute.manager [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:26:44 np0005603609 nova_compute[221550]: 2026-01-31 08:26:44.650 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:26:44 np0005603609 nova_compute[221550]: 2026-01-31 08:26:44.651 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:26:44 np0005603609 nova_compute[221550]: 2026-01-31 08:26:44.899 221554 DEBUG oslo_concurrency.lockutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:44 np0005603609 nova_compute[221550]: 2026-01-31 08:26:44.899 221554 DEBUG oslo_concurrency.lockutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:44 np0005603609 nova_compute[221550]: 2026-01-31 08:26:44.910 221554 DEBUG nova.virt.hardware [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:26:44 np0005603609 nova_compute[221550]: 2026-01-31 08:26:44.911 221554 INFO nova.compute.claims [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.102 221554 DEBUG oslo_concurrency.processutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:26:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/848783938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.513 221554 DEBUG oslo_concurrency.processutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.518 221554 DEBUG nova.compute.provider_tree [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.640 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.670 221554 DEBUG nova.scheduler.client.report [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.700 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.731 221554 DEBUG oslo_concurrency.lockutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.732 221554 DEBUG nova.compute.manager [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.735 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.735 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.735 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.736 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.849 221554 DEBUG nova.compute.manager [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.916 221554 INFO nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:26:45 np0005603609 nova_compute[221550]: 2026-01-31 08:26:45.969 221554 DEBUG nova.compute.manager [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:26:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:26:46 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1898698978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.140 221554 DEBUG nova.compute.manager [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.141 221554 DEBUG nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.141 221554 INFO nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Creating image(s)#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.164 221554 DEBUG nova.storage.rbd_utils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] rbd image 1953310a-9489-4772-bf88-a90d20b0a43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.188 221554 DEBUG nova.storage.rbd_utils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] rbd image 1953310a-9489-4772-bf88-a90d20b0a43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.214 221554 DEBUG nova.storage.rbd_utils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] rbd image 1953310a-9489-4772-bf88-a90d20b0a43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.219 221554 DEBUG oslo_concurrency.processutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.240 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.277 221554 DEBUG oslo_concurrency.processutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.278 221554 DEBUG oslo_concurrency.lockutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.279 221554 DEBUG oslo_concurrency.lockutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.279 221554 DEBUG oslo_concurrency.lockutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.305 221554 DEBUG nova.storage.rbd_utils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] rbd image 1953310a-9489-4772-bf88-a90d20b0a43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.309 221554 DEBUG oslo_concurrency.processutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1953310a-9489-4772-bf88-a90d20b0a43c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.407 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.409 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:26:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:46.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:46.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.564 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.565 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4105MB free_disk=20.830669403076172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.565 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.566 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.655 221554 DEBUG oslo_concurrency.processutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1953310a-9489-4772-bf88-a90d20b0a43c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.730 221554 DEBUG nova.storage.rbd_utils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] resizing rbd image 1953310a-9489-4772-bf88-a90d20b0a43c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.769 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 8bf29da0-2e1a-4e89-bce8-b293a938c742 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.770 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 1953310a-9489-4772-bf88-a90d20b0a43c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.770 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.770 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.841 221554 DEBUG nova.objects.instance [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lazy-loading 'migration_context' on Instance uuid 1953310a-9489-4772-bf88-a90d20b0a43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.871 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.897 221554 DEBUG nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.898 221554 DEBUG nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Ensure instance console log exists: /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.899 221554 DEBUG oslo_concurrency.lockutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.899 221554 DEBUG oslo_concurrency.lockutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.899 221554 DEBUG oslo_concurrency.lockutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.900 221554 DEBUG nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.905 221554 WARNING nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.909 221554 DEBUG nova.virt.libvirt.host [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.910 221554 DEBUG nova.virt.libvirt.host [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.913 221554 DEBUG nova.virt.libvirt.host [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.913 221554 DEBUG nova.virt.libvirt.host [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.914 221554 DEBUG nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.915 221554 DEBUG nova.virt.hardware [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.915 221554 DEBUG nova.virt.hardware [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.915 221554 DEBUG nova.virt.hardware [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.916 221554 DEBUG nova.virt.hardware [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.916 221554 DEBUG nova.virt.hardware [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.916 221554 DEBUG nova.virt.hardware [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.916 221554 DEBUG nova.virt.hardware [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.917 221554 DEBUG nova.virt.hardware [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.917 221554 DEBUG nova.virt.hardware [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.917 221554 DEBUG nova.virt.hardware [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.918 221554 DEBUG nova.virt.hardware [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:26:46 np0005603609 nova_compute[221550]: 2026-01-31 08:26:46.921 221554 DEBUG oslo_concurrency.processutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:26:47 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2720197295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:26:47 np0005603609 nova_compute[221550]: 2026-01-31 08:26:47.279 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:47 np0005603609 nova_compute[221550]: 2026-01-31 08:26:47.284 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:26:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:26:47 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1176362589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:26:47 np0005603609 nova_compute[221550]: 2026-01-31 08:26:47.334 221554 DEBUG oslo_concurrency.processutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:47 np0005603609 nova_compute[221550]: 2026-01-31 08:26:47.359 221554 DEBUG nova.storage.rbd_utils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] rbd image 1953310a-9489-4772-bf88-a90d20b0a43c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:26:47 np0005603609 nova_compute[221550]: 2026-01-31 08:26:47.363 221554 DEBUG oslo_concurrency.processutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:47 np0005603609 nova_compute[221550]: 2026-01-31 08:26:47.532 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:26:47 np0005603609 nova_compute[221550]: 2026-01-31 08:26:47.592 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:26:47 np0005603609 nova_compute[221550]: 2026-01-31 08:26:47.593 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:47 np0005603609 nova_compute[221550]: 2026-01-31 08:26:47.681 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:26:47 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1160455600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:26:47 np0005603609 nova_compute[221550]: 2026-01-31 08:26:47.775 221554 DEBUG oslo_concurrency.processutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:47 np0005603609 nova_compute[221550]: 2026-01-31 08:26:47.777 221554 DEBUG nova.objects.instance [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1953310a-9489-4772-bf88-a90d20b0a43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:47 np0005603609 nova_compute[221550]: 2026-01-31 08:26:47.883 221554 DEBUG nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  <uuid>1953310a-9489-4772-bf88-a90d20b0a43c</uuid>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  <name>instance-000000a3</name>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerShowV254Test-server-369254529</nova:name>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:26:46</nova:creationTime>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:26:47 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:        <nova:user uuid="c3fccb843f7e4945a32e8bd3ea398cbe">tempest-ServerShowV254Test-582177530-project-member</nova:user>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:        <nova:project uuid="8c8f9946318d4a17b0d659dea7815cb7">tempest-ServerShowV254Test-582177530</nova:project>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <entry name="serial">1953310a-9489-4772-bf88-a90d20b0a43c</entry>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <entry name="uuid">1953310a-9489-4772-bf88-a90d20b0a43c</entry>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/1953310a-9489-4772-bf88-a90d20b0a43c_disk">
Jan 31 03:26:47 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:26:47 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/1953310a-9489-4772-bf88-a90d20b0a43c_disk.config">
Jan 31 03:26:47 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:26:47 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c/console.log" append="off"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:26:47 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:26:47 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:26:47 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:26:47 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:26:47 np0005603609 nova_compute[221550]: 2026-01-31 08:26:47.941 221554 DEBUG nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:26:47 np0005603609 nova_compute[221550]: 2026-01-31 08:26:47.941 221554 DEBUG nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:26:47 np0005603609 nova_compute[221550]: 2026-01-31 08:26:47.942 221554 INFO nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Using config drive#033[00m
Jan 31 03:26:47 np0005603609 nova_compute[221550]: 2026-01-31 08:26:47.969 221554 DEBUG nova.storage.rbd_utils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] rbd image 1953310a-9489-4772-bf88-a90d20b0a43c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:26:48 np0005603609 nova_compute[221550]: 2026-01-31 08:26:48.198 221554 INFO nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Creating config drive at /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c/disk.config#033[00m
Jan 31 03:26:48 np0005603609 nova_compute[221550]: 2026-01-31 08:26:48.204 221554 DEBUG oslo_concurrency.processutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpiobc64cb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:48 np0005603609 nova_compute[221550]: 2026-01-31 08:26:48.337 221554 DEBUG oslo_concurrency.processutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpiobc64cb" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:48 np0005603609 nova_compute[221550]: 2026-01-31 08:26:48.383 221554 DEBUG nova.storage.rbd_utils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] rbd image 1953310a-9489-4772-bf88-a90d20b0a43c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:26:48 np0005603609 nova_compute[221550]: 2026-01-31 08:26:48.387 221554 DEBUG oslo_concurrency.processutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c/disk.config 1953310a-9489-4772-bf88-a90d20b0a43c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:48.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:48.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:48 np0005603609 nova_compute[221550]: 2026-01-31 08:26:48.605 221554 DEBUG oslo_concurrency.processutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c/disk.config 1953310a-9489-4772-bf88-a90d20b0a43c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:48 np0005603609 nova_compute[221550]: 2026-01-31 08:26:48.607 221554 INFO nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Deleting local config drive /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c/disk.config because it was imported into RBD.#033[00m
Jan 31 03:26:48 np0005603609 systemd-machined[190912]: New machine qemu-87-instance-000000a3.
Jan 31 03:26:48 np0005603609 systemd[1]: Started Virtual Machine qemu-87-instance-000000a3.
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.119 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848009.1194077, 1953310a-9489-4772-bf88-a90d20b0a43c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.120 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.122 221554 DEBUG nova.compute.manager [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.122 221554 DEBUG nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.124 221554 INFO nova.virt.libvirt.driver [-] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Instance spawned successfully.#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.124 221554 DEBUG nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.159 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.162 221554 DEBUG nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.163 221554 DEBUG nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.164 221554 DEBUG nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.165 221554 DEBUG nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.165 221554 DEBUG nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.166 221554 DEBUG nova.virt.libvirt.driver [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.173 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.224 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.225 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848009.1219969, 1953310a-9489-4772-bf88-a90d20b0a43c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.225 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] VM Started (Lifecycle Event)#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.280 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.286 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.305 221554 INFO nova.compute.manager [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Took 3.16 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.306 221554 DEBUG nova.compute.manager [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.321 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.382 221554 INFO nova.compute.manager [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Took 4.52 seconds to build instance.#033[00m
Jan 31 03:26:49 np0005603609 nova_compute[221550]: 2026-01-31 08:26:49.413 221554 DEBUG oslo_concurrency.lockutils [None req-f6022639-f224-48b7-93ba-4d4348efeb8e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "1953310a-9489-4772-bf88-a90d20b0a43c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:50.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:50.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:50 np0005603609 nova_compute[221550]: 2026-01-31 08:26:50.569 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:50 np0005603609 nova_compute[221550]: 2026-01-31 08:26:50.594 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:50 np0005603609 nova_compute[221550]: 2026-01-31 08:26:50.594 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:50 np0005603609 nova_compute[221550]: 2026-01-31 08:26:50.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:51 np0005603609 nova_compute[221550]: 2026-01-31 08:26:51.469 221554 INFO nova.compute.manager [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Rebuilding instance#033[00m
Jan 31 03:26:51 np0005603609 nova_compute[221550]: 2026-01-31 08:26:51.997 221554 DEBUG nova.objects.instance [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1953310a-9489-4772-bf88-a90d20b0a43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:52 np0005603609 nova_compute[221550]: 2026-01-31 08:26:52.291 221554 DEBUG nova.compute.manager [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:26:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:52.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:52.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:52 np0005603609 nova_compute[221550]: 2026-01-31 08:26:52.683 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:52 np0005603609 nova_compute[221550]: 2026-01-31 08:26:52.876 221554 DEBUG nova.objects.instance [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 1953310a-9489-4772-bf88-a90d20b0a43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:52 np0005603609 nova_compute[221550]: 2026-01-31 08:26:52.961 221554 DEBUG nova.objects.instance [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1953310a-9489-4772-bf88-a90d20b0a43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:53 np0005603609 nova_compute[221550]: 2026-01-31 08:26:53.036 221554 DEBUG nova.objects.instance [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lazy-loading 'resources' on Instance uuid 1953310a-9489-4772-bf88-a90d20b0a43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:53 np0005603609 nova_compute[221550]: 2026-01-31 08:26:53.058 221554 DEBUG nova.objects.instance [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lazy-loading 'migration_context' on Instance uuid 1953310a-9489-4772-bf88-a90d20b0a43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:53 np0005603609 nova_compute[221550]: 2026-01-31 08:26:53.075 221554 DEBUG nova.objects.instance [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 03:26:53 np0005603609 nova_compute[221550]: 2026-01-31 08:26:53.079 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:26:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:26:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2557482008' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:26:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:26:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2557482008' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:26:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:54.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:54.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:54 np0005603609 nova_compute[221550]: 2026-01-31 08:26:54.603 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:54 np0005603609 nova_compute[221550]: 2026-01-31 08:26:54.657 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Triggering sync for uuid 8bf29da0-2e1a-4e89-bce8-b293a938c742 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:26:54 np0005603609 nova_compute[221550]: 2026-01-31 08:26:54.657 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Triggering sync for uuid 1953310a-9489-4772-bf88-a90d20b0a43c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:26:54 np0005603609 nova_compute[221550]: 2026-01-31 08:26:54.657 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:54 np0005603609 nova_compute[221550]: 2026-01-31 08:26:54.658 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:54 np0005603609 nova_compute[221550]: 2026-01-31 08:26:54.658 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "1953310a-9489-4772-bf88-a90d20b0a43c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:54 np0005603609 nova_compute[221550]: 2026-01-31 08:26:54.659 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "1953310a-9489-4772-bf88-a90d20b0a43c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:54 np0005603609 nova_compute[221550]: 2026-01-31 08:26:54.659 221554 INFO nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] During sync_power_state the instance has a pending task (rebuilding). Skip.#033[00m
Jan 31 03:26:54 np0005603609 nova_compute[221550]: 2026-01-31 08:26:54.660 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "1953310a-9489-4772-bf88-a90d20b0a43c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:54 np0005603609 nova_compute[221550]: 2026-01-31 08:26:54.729 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:55 np0005603609 nova_compute[221550]: 2026-01-31 08:26:55.571 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:26:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:56.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:26:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:56.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:57 np0005603609 nova_compute[221550]: 2026-01-31 08:26:57.713 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:58.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:26:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:26:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:58.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:27:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:00.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:00.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:00 np0005603609 nova_compute[221550]: 2026-01-31 08:27:00.573 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:00 np0005603609 nova_compute[221550]: 2026-01-31 08:27:00.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:00 np0005603609 nova_compute[221550]: 2026-01-31 08:27:00.687 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:00 np0005603609 nova_compute[221550]: 2026-01-31 08:27:00.688 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:27:00 np0005603609 nova_compute[221550]: 2026-01-31 08:27:00.705 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:27:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:02.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:02.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:27:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.3 total, 600.0 interval#012Cumulative writes: 13K writes, 66K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1749 writes, 9005 keys, 1749 commit groups, 1.0 writes per commit group, ingest: 16.64 MB, 0.03 MB/s#012Interval WAL: 1749 writes, 1749 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     47.3      1.71              0.18        42    0.041       0      0       0.0       0.0#012  L6      1/0   10.16 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0     75.5     64.3      6.30              0.92        41    0.154    281K    22K       0.0       0.0#012 Sum      1/0   10.16 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     59.4     60.6      8.01              1.10        83    0.097    281K    22K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4     72.6     72.7      1.40              0.26        16    0.087     72K   4169       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     75.5     64.3      6.30              0.92        41    0.154    281K    22K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     48.2      1.68              0.18        41    0.041       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.3 total, 600.0 interval#012Flush(GB): cumulative 0.079, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.47 GB write, 0.10 MB/s write, 0.46 GB read, 0.10 MB/s read, 8.0 seconds#012Interval compaction: 0.10 GB write, 0.17 MB/s write, 0.10 GB read, 0.17 MB/s read, 1.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619ad8ed1f0#2 capacity: 304.00 MB usage: 51.87 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.0003 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3020,49.81 MB,16.3847%) FilterBlock(83,788.36 KB,0.253251%) IndexBlock(83,1.29 MB,0.425524%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 03:27:02 np0005603609 nova_compute[221550]: 2026-01-31 08:27:02.715 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:03 np0005603609 nova_compute[221550]: 2026-01-31 08:27:03.119 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:27:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:04.504 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:27:04 np0005603609 nova_compute[221550]: 2026-01-31 08:27:04.504 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:04.505 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:27:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:04.505 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:27:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:27:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:04.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:27:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:04.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:05 np0005603609 podman[288329]: 2026-01-31 08:27:05.16076036 +0000 UTC m=+0.046891368 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:27:05 np0005603609 podman[288328]: 2026-01-31 08:27:05.182611494 +0000 UTC m=+0.068687480 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:27:05 np0005603609 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000a3.scope: Deactivated successfully.
Jan 31 03:27:05 np0005603609 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000a3.scope: Consumed 12.647s CPU time.
Jan 31 03:27:05 np0005603609 systemd-machined[190912]: Machine qemu-87-instance-000000a3 terminated.
Jan 31 03:27:05 np0005603609 nova_compute[221550]: 2026-01-31 08:27:05.575 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:06 np0005603609 nova_compute[221550]: 2026-01-31 08:27:06.133 221554 INFO nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Instance shutdown successfully after 13 seconds.#033[00m
Jan 31 03:27:06 np0005603609 nova_compute[221550]: 2026-01-31 08:27:06.139 221554 INFO nova.virt.libvirt.driver [-] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Instance destroyed successfully.#033[00m
Jan 31 03:27:06 np0005603609 nova_compute[221550]: 2026-01-31 08:27:06.147 221554 INFO nova.virt.libvirt.driver [-] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Instance destroyed successfully.#033[00m
Jan 31 03:27:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:27:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:06.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:27:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:06.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:06 np0005603609 nova_compute[221550]: 2026-01-31 08:27:06.632 221554 INFO nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Deleting instance files /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c_del
Jan 31 03:27:06 np0005603609 nova_compute[221550]: 2026-01-31 08:27:06.633 221554 INFO nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Deletion of /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c_del complete
Jan 31 03:27:06 np0005603609 nova_compute[221550]: 2026-01-31 08:27:06.839 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:27:06 np0005603609 nova_compute[221550]: 2026-01-31 08:27:06.839 221554 INFO nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Creating image(s)
Jan 31 03:27:06 np0005603609 nova_compute[221550]: 2026-01-31 08:27:06.862 221554 DEBUG nova.storage.rbd_utils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] rbd image 1953310a-9489-4772-bf88-a90d20b0a43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:27:06 np0005603609 nova_compute[221550]: 2026-01-31 08:27:06.886 221554 DEBUG nova.storage.rbd_utils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] rbd image 1953310a-9489-4772-bf88-a90d20b0a43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:27:06 np0005603609 nova_compute[221550]: 2026-01-31 08:27:06.915 221554 DEBUG nova.storage.rbd_utils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] rbd image 1953310a-9489-4772-bf88-a90d20b0a43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:27:06 np0005603609 nova_compute[221550]: 2026-01-31 08:27:06.918 221554 DEBUG oslo_concurrency.processutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:27:06 np0005603609 nova_compute[221550]: 2026-01-31 08:27:06.972 221554 DEBUG oslo_concurrency.processutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:27:06 np0005603609 nova_compute[221550]: 2026-01-31 08:27:06.973 221554 DEBUG oslo_concurrency.lockutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Acquiring lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:27:06 np0005603609 nova_compute[221550]: 2026-01-31 08:27:06.973 221554 DEBUG oslo_concurrency.lockutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:27:06 np0005603609 nova_compute[221550]: 2026-01-31 08:27:06.974 221554 DEBUG oslo_concurrency.lockutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:27:06 np0005603609 nova_compute[221550]: 2026-01-31 08:27:06.997 221554 DEBUG nova.storage.rbd_utils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] rbd image 1953310a-9489-4772-bf88-a90d20b0a43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.000 221554 DEBUG oslo_concurrency.processutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 1953310a-9489-4772-bf88-a90d20b0a43c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.285 221554 DEBUG oslo_concurrency.processutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 1953310a-9489-4772-bf88-a90d20b0a43c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.360 221554 DEBUG nova.storage.rbd_utils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] resizing rbd image 1953310a-9489-4772-bf88-a90d20b0a43c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.472 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.472 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Ensure instance console log exists: /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.473 221554 DEBUG oslo_concurrency.lockutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.473 221554 DEBUG oslo_concurrency.lockutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.473 221554 DEBUG oslo_concurrency.lockutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.475 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.480 221554 WARNING nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.499 221554 DEBUG nova.virt.libvirt.host [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.499 221554 DEBUG nova.virt.libvirt.host [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.506 221554 DEBUG nova.virt.libvirt.host [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.507 221554 DEBUG nova.virt.libvirt.host [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.508 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.508 221554 DEBUG nova.virt.hardware [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.508 221554 DEBUG nova.virt.hardware [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.509 221554 DEBUG nova.virt.hardware [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.509 221554 DEBUG nova.virt.hardware [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.509 221554 DEBUG nova.virt.hardware [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.509 221554 DEBUG nova.virt.hardware [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.509 221554 DEBUG nova.virt.hardware [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.510 221554 DEBUG nova.virt.hardware [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.510 221554 DEBUG nova.virt.hardware [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.510 221554 DEBUG nova.virt.hardware [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.510 221554 DEBUG nova.virt.hardware [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.511 221554 DEBUG nova.objects.instance [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1953310a-9489-4772-bf88-a90d20b0a43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:27:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:07.519 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:27:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:07.519 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:27:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:07.519 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.568 221554 DEBUG oslo_concurrency.processutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:27:07 np0005603609 nova_compute[221550]: 2026-01-31 08:27:07.771 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:27:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:27:08 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2764240598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:27:08 np0005603609 nova_compute[221550]: 2026-01-31 08:27:08.066 221554 DEBUG oslo_concurrency.processutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:27:08 np0005603609 nova_compute[221550]: 2026-01-31 08:27:08.098 221554 DEBUG nova.storage.rbd_utils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] rbd image 1953310a-9489-4772-bf88-a90d20b0a43c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:27:08 np0005603609 nova_compute[221550]: 2026-01-31 08:27:08.107 221554 DEBUG oslo_concurrency.processutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:27:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:27:08 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1378778972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:27:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:08 np0005603609 nova_compute[221550]: 2026-01-31 08:27:08.508 221554 DEBUG oslo_concurrency.processutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:27:08 np0005603609 nova_compute[221550]: 2026-01-31 08:27:08.513 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  <uuid>1953310a-9489-4772-bf88-a90d20b0a43c</uuid>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  <name>instance-000000a3</name>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServerShowV254Test-server-369254529</nova:name>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:27:07</nova:creationTime>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:27:08 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:        <nova:user uuid="c3fccb843f7e4945a32e8bd3ea398cbe">tempest-ServerShowV254Test-582177530-project-member</nova:user>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:        <nova:project uuid="8c8f9946318d4a17b0d659dea7815cb7">tempest-ServerShowV254Test-582177530</nova:project>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="40cf2ff3-f7ff-4843-b4ab-b7dcc843006f"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <nova:ports/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <entry name="serial">1953310a-9489-4772-bf88-a90d20b0a43c</entry>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <entry name="uuid">1953310a-9489-4772-bf88-a90d20b0a43c</entry>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/1953310a-9489-4772-bf88-a90d20b0a43c_disk">
Jan 31 03:27:08 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:27:08 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/1953310a-9489-4772-bf88-a90d20b0a43c_disk.config">
Jan 31 03:27:08 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:27:08 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c/console.log" append="off"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:27:08 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:27:08 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:27:08 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:27:08 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 03:27:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:08.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:27:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:08.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:27:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e359 e359: 3 total, 3 up, 3 in
Jan 31 03:27:08 np0005603609 nova_compute[221550]: 2026-01-31 08:27:08.624 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 03:27:08 np0005603609 nova_compute[221550]: 2026-01-31 08:27:08.624 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 03:27:08 np0005603609 nova_compute[221550]: 2026-01-31 08:27:08.625 221554 INFO nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Using config drive
Jan 31 03:27:08 np0005603609 nova_compute[221550]: 2026-01-31 08:27:08.661 221554 DEBUG nova.storage.rbd_utils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] rbd image 1953310a-9489-4772-bf88-a90d20b0a43c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:27:08 np0005603609 nova_compute[221550]: 2026-01-31 08:27:08.686 221554 DEBUG nova.objects.instance [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1953310a-9489-4772-bf88-a90d20b0a43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:27:09 np0005603609 nova_compute[221550]: 2026-01-31 08:27:09.087 221554 INFO nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Creating config drive at /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c/disk.config
Jan 31 03:27:09 np0005603609 nova_compute[221550]: 2026-01-31 08:27:09.091 221554 DEBUG oslo_concurrency.processutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpykljs6bt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:27:09 np0005603609 nova_compute[221550]: 2026-01-31 08:27:09.225 221554 DEBUG oslo_concurrency.processutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpykljs6bt" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:27:09 np0005603609 nova_compute[221550]: 2026-01-31 08:27:09.261 221554 DEBUG nova.storage.rbd_utils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] rbd image 1953310a-9489-4772-bf88-a90d20b0a43c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:27:09 np0005603609 nova_compute[221550]: 2026-01-31 08:27:09.267 221554 DEBUG oslo_concurrency.processutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c/disk.config 1953310a-9489-4772-bf88-a90d20b0a43c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:27:09 np0005603609 nova_compute[221550]: 2026-01-31 08:27:09.437 221554 DEBUG oslo_concurrency.processutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c/disk.config 1953310a-9489-4772-bf88-a90d20b0a43c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:27:09 np0005603609 nova_compute[221550]: 2026-01-31 08:27:09.438 221554 INFO nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Deleting local config drive /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c/disk.config because it was imported into RBD.
Jan 31 03:27:09 np0005603609 systemd-machined[190912]: New machine qemu-88-instance-000000a3.
Jan 31 03:27:09 np0005603609 systemd[1]: Started Virtual Machine qemu-88-instance-000000a3.
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.230 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 1953310a-9489-4772-bf88-a90d20b0a43c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.233 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848030.2293878, 1953310a-9489-4772-bf88-a90d20b0a43c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.233 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] VM Resumed (Lifecycle Event)
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.238 221554 DEBUG nova.compute.manager [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.238 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.242 221554 INFO nova.virt.libvirt.driver [-] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Instance spawned successfully.
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.243 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.263 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.266 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.276 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.276 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.277 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.277 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.277 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.278 221554 DEBUG nova.virt.libvirt.driver [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.321 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.321 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848030.2308495, 1953310a-9489-4772-bf88-a90d20b0a43c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.321 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] VM Started (Lifecycle Event)
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.374 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.379 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.393 221554 DEBUG nova.compute.manager [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.409 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.485 221554 DEBUG oslo_concurrency.lockutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.486 221554 DEBUG oslo_concurrency.lockutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.486 221554 DEBUG nova.objects.instance [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 03:27:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:10.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:27:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:10.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.577 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:27:10 np0005603609 nova_compute[221550]: 2026-01-31 08:27:10.989 221554 DEBUG oslo_concurrency.lockutils [None req-06e2114c-a3a1-4cdd-b88e-6d3858d2f4c2 c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:27:12 np0005603609 nova_compute[221550]: 2026-01-31 08:27:12.146 221554 DEBUG oslo_concurrency.lockutils [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Acquiring lock "1953310a-9489-4772-bf88-a90d20b0a43c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:27:12 np0005603609 nova_compute[221550]: 2026-01-31 08:27:12.147 221554 DEBUG oslo_concurrency.lockutils [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "1953310a-9489-4772-bf88-a90d20b0a43c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:27:12 np0005603609 nova_compute[221550]: 2026-01-31 08:27:12.147 221554 DEBUG oslo_concurrency.lockutils [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Acquiring lock "1953310a-9489-4772-bf88-a90d20b0a43c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:27:12 np0005603609 nova_compute[221550]: 2026-01-31 08:27:12.147 221554 DEBUG oslo_concurrency.lockutils [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "1953310a-9489-4772-bf88-a90d20b0a43c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:27:12 np0005603609 nova_compute[221550]: 2026-01-31 08:27:12.148 221554 DEBUG oslo_concurrency.lockutils [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "1953310a-9489-4772-bf88-a90d20b0a43c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:27:12 np0005603609 nova_compute[221550]: 2026-01-31 08:27:12.149 221554 INFO nova.compute.manager [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Terminating instance
Jan 31 03:27:12 np0005603609 nova_compute[221550]: 2026-01-31 08:27:12.150 221554 DEBUG oslo_concurrency.lockutils [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Acquiring lock "refresh_cache-1953310a-9489-4772-bf88-a90d20b0a43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:27:12 np0005603609 nova_compute[221550]: 2026-01-31 08:27:12.150 221554 DEBUG oslo_concurrency.lockutils [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Acquired lock "refresh_cache-1953310a-9489-4772-bf88-a90d20b0a43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:27:12 np0005603609 nova_compute[221550]: 2026-01-31 08:27:12.150 221554 DEBUG nova.network.neutron [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 03:27:12 np0005603609 nova_compute[221550]: 2026-01-31 08:27:12.384 221554 DEBUG nova.network.neutron [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 03:27:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:27:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:12.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:27:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:12.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:12 np0005603609 nova_compute[221550]: 2026-01-31 08:27:12.773 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:27:12 np0005603609 nova_compute[221550]: 2026-01-31 08:27:12.774 221554 DEBUG nova.network.neutron [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:27:12 np0005603609 nova_compute[221550]: 2026-01-31 08:27:12.793 221554 DEBUG oslo_concurrency.lockutils [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Releasing lock "refresh_cache-1953310a-9489-4772-bf88-a90d20b0a43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:27:12 np0005603609 nova_compute[221550]: 2026-01-31 08:27:12.794 221554 DEBUG nova.compute.manager [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 03:27:12 np0005603609 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000a3.scope: Deactivated successfully.
Jan 31 03:27:12 np0005603609 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000a3.scope: Consumed 3.313s CPU time.
Jan 31 03:27:12 np0005603609 systemd-machined[190912]: Machine qemu-88-instance-000000a3 terminated.
Jan 31 03:27:13 np0005603609 nova_compute[221550]: 2026-01-31 08:27:13.008 221554 INFO nova.virt.libvirt.driver [-] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Instance destroyed successfully.
Jan 31 03:27:13 np0005603609 nova_compute[221550]: 2026-01-31 08:27:13.008 221554 DEBUG nova.objects.instance [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lazy-loading 'resources' on Instance uuid 1953310a-9489-4772-bf88-a90d20b0a43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:27:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:13 np0005603609 nova_compute[221550]: 2026-01-31 08:27:13.524 221554 INFO nova.virt.libvirt.driver [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Deleting instance files /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c_del
Jan 31 03:27:13 np0005603609 nova_compute[221550]: 2026-01-31 08:27:13.525 221554 INFO nova.virt.libvirt.driver [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Deletion of /var/lib/nova/instances/1953310a-9489-4772-bf88-a90d20b0a43c_del complete
Jan 31 03:27:13 np0005603609 nova_compute[221550]: 2026-01-31 08:27:13.631 221554 INFO nova.compute.manager [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Took 0.84 seconds to destroy the instance on the hypervisor.
Jan 31 03:27:13 np0005603609 nova_compute[221550]: 2026-01-31 08:27:13.631 221554 DEBUG oslo.service.loopingcall [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 03:27:13 np0005603609 nova_compute[221550]: 2026-01-31 08:27:13.632 221554 DEBUG nova.compute.manager [-] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 03:27:13 np0005603609 nova_compute[221550]: 2026-01-31 08:27:13.632 221554 DEBUG nova.network.neutron [-] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 03:27:13 np0005603609 nova_compute[221550]: 2026-01-31 08:27:13.885 221554 DEBUG nova.network.neutron [-] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 03:27:13 np0005603609 nova_compute[221550]: 2026-01-31 08:27:13.907 221554 DEBUG nova.network.neutron [-] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:27:13 np0005603609 nova_compute[221550]: 2026-01-31 08:27:13.930 221554 INFO nova.compute.manager [-] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Took 0.30 seconds to deallocate network for instance.
Jan 31 03:27:14 np0005603609 nova_compute[221550]: 2026-01-31 08:27:14.035 221554 DEBUG oslo_concurrency.lockutils [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:27:14 np0005603609 nova_compute[221550]: 2026-01-31 08:27:14.035 221554 DEBUG oslo_concurrency.lockutils [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:27:14 np0005603609 nova_compute[221550]: 2026-01-31 08:27:14.127 221554 DEBUG oslo_concurrency.processutils [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:27:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:27:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:14.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:27:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:27:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1067723279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:27:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:27:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:14.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:27:14 np0005603609 nova_compute[221550]: 2026-01-31 08:27:14.571 221554 DEBUG oslo_concurrency.processutils [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:27:14 np0005603609 nova_compute[221550]: 2026-01-31 08:27:14.576 221554 DEBUG nova.compute.provider_tree [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:27:14 np0005603609 nova_compute[221550]: 2026-01-31 08:27:14.618 221554 DEBUG nova.scheduler.client.report [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:27:14 np0005603609 nova_compute[221550]: 2026-01-31 08:27:14.711 221554 DEBUG oslo_concurrency.lockutils [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:27:14 np0005603609 nova_compute[221550]: 2026-01-31 08:27:14.863 221554 INFO nova.scheduler.client.report [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Deleted allocations for instance 1953310a-9489-4772-bf88-a90d20b0a43c
Jan 31 03:27:15 np0005603609 nova_compute[221550]: 2026-01-31 08:27:15.011 221554 DEBUG oslo_concurrency.lockutils [None req-2adf4a5f-bf5b-4c1e-93e7-4ed84f46579e c3fccb843f7e4945a32e8bd3ea398cbe 8c8f9946318d4a17b0d659dea7815cb7 - - default default] Lock "1953310a-9489-4772-bf88-a90d20b0a43c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:27:15 np0005603609 nova_compute[221550]: 2026-01-31 08:27:15.579 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:27:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:27:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:16.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:27:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:16.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:17 np0005603609 nova_compute[221550]: 2026-01-31 08:27:17.774 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:27:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:18.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:18.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:19 np0005603609 nova_compute[221550]: 2026-01-31 08:27:19.640 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:27:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:20.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:20.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:20 np0005603609 nova_compute[221550]: 2026-01-31 08:27:20.581 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:27:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:27:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:22.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:27:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:22.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:22 np0005603609 nova_compute[221550]: 2026-01-31 08:27:22.776 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:27:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:27:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:24.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:27:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:27:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:24.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:27:25 np0005603609 nova_compute[221550]: 2026-01-31 08:27:25.584 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:27:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:26.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:26.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:27 np0005603609 nova_compute[221550]: 2026-01-31 08:27:27.830 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:27:27 np0005603609 nova_compute[221550]: 2026-01-31 08:27:27.875 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "45129920-90ba-447c-9248-9aab48f34863" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:27:27 np0005603609 nova_compute[221550]: 2026-01-31 08:27:27.876 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "45129920-90ba-447c-9248-9aab48f34863" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:27:27 np0005603609 nova_compute[221550]: 2026-01-31 08:27:27.937 221554 DEBUG nova.compute.manager [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.007 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848033.0070367, 1953310a-9489-4772-bf88-a90d20b0a43c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.007 221554 INFO nova.compute.manager [-] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] VM Stopped (Lifecycle Event)
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.037 221554 DEBUG nova.compute.manager [None req-934ce612-ee2b-4e8d-994d-a5bf372bc344 - - - - - -] [instance: 1953310a-9489-4772-bf88-a90d20b0a43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.043 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.043 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.049 221554 DEBUG nova.virt.hardware [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.049 221554 INFO nova.compute.claims [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Claim successful on node compute-1.ctlplane.example.com
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.288 221554 DEBUG oslo_concurrency.processutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:27:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:27:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:28.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:27:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:28.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:27:28 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/35819088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.710 221554 DEBUG oslo_concurrency.processutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.715 221554 DEBUG nova.compute.provider_tree [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.778 221554 DEBUG nova.scheduler.client.report [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.869 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.870 221554 DEBUG nova.compute.manager [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.938 221554 DEBUG nova.compute.manager [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.939 221554 DEBUG nova.network.neutron [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.967 221554 INFO nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:27:28 np0005603609 nova_compute[221550]: 2026-01-31 08:27:28.993 221554 DEBUG nova.compute.manager [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:27:29 np0005603609 nova_compute[221550]: 2026-01-31 08:27:29.258 221554 DEBUG nova.compute.manager [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 03:27:29 np0005603609 nova_compute[221550]: 2026-01-31 08:27:29.261 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:27:29 np0005603609 nova_compute[221550]: 2026-01-31 08:27:29.261 221554 INFO nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Creating image(s)
Jan 31 03:27:29 np0005603609 nova_compute[221550]: 2026-01-31 08:27:29.288 221554 DEBUG nova.storage.rbd_utils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image 45129920-90ba-447c-9248-9aab48f34863_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:27:29 np0005603609 nova_compute[221550]: 2026-01-31 08:27:29.311 221554 DEBUG nova.storage.rbd_utils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image 45129920-90ba-447c-9248-9aab48f34863_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:27:29 np0005603609 nova_compute[221550]: 2026-01-31 08:27:29.335 221554 DEBUG nova.storage.rbd_utils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image 45129920-90ba-447c-9248-9aab48f34863_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:27:29 np0005603609 nova_compute[221550]: 2026-01-31 08:27:29.339 221554 DEBUG oslo_concurrency.processutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:27:29 np0005603609 nova_compute[221550]: 2026-01-31 08:27:29.388 221554 DEBUG oslo_concurrency.processutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:27:29 np0005603609 nova_compute[221550]: 2026-01-31 08:27:29.389 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:27:29 np0005603609 nova_compute[221550]: 2026-01-31 08:27:29.390 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:27:29 np0005603609 nova_compute[221550]: 2026-01-31 08:27:29.391 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:27:29 np0005603609 nova_compute[221550]: 2026-01-31 08:27:29.419 221554 DEBUG nova.storage.rbd_utils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image 45129920-90ba-447c-9248-9aab48f34863_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:27:29 np0005603609 nova_compute[221550]: 2026-01-31 08:27:29.423 221554 DEBUG oslo_concurrency.processutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 45129920-90ba-447c-9248-9aab48f34863_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:27:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e360 e360: 3 total, 3 up, 3 in
Jan 31 03:27:29 np0005603609 nova_compute[221550]: 2026-01-31 08:27:29.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:27:29 np0005603609 nova_compute[221550]: 2026-01-31 08:27:29.903 221554 DEBUG nova.policy [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '48d684de9ba340f48e249b4cce857bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '418d5319c640455ab23850c0b0f24f92', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:27:30 np0005603609 nova_compute[221550]: 2026-01-31 08:27:30.001 221554 DEBUG oslo_concurrency.processutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 45129920-90ba-447c-9248-9aab48f34863_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:27:30 np0005603609 nova_compute[221550]: 2026-01-31 08:27:30.077 221554 DEBUG nova.storage.rbd_utils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] resizing rbd image 45129920-90ba-447c-9248-9aab48f34863_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 03:27:30 np0005603609 nova_compute[221550]: 2026-01-31 08:27:30.200 221554 DEBUG nova.objects.instance [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'migration_context' on Instance uuid 45129920-90ba-447c-9248-9aab48f34863 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:27:30 np0005603609 nova_compute[221550]: 2026-01-31 08:27:30.227 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 03:27:30 np0005603609 nova_compute[221550]: 2026-01-31 08:27:30.228 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Ensure instance console log exists: /var/lib/nova/instances/45129920-90ba-447c-9248-9aab48f34863/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 03:27:30 np0005603609 nova_compute[221550]: 2026-01-31 08:27:30.228 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:27:30 np0005603609 nova_compute[221550]: 2026-01-31 08:27:30.228 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:27:30 np0005603609 nova_compute[221550]: 2026-01-31 08:27:30.229 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:27:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:30.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:30.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:30 np0005603609 nova_compute[221550]: 2026-01-31 08:27:30.585 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:27:32 np0005603609 nova_compute[221550]: 2026-01-31 08:27:32.449 221554 DEBUG nova.network.neutron [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Successfully created port: 25a0d5fc-40fd-4968-b00e-927b52f03c49 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 03:27:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:32.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:32.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:32 np0005603609 nova_compute[221550]: 2026-01-31 08:27:32.663 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:32 np0005603609 nova_compute[221550]: 2026-01-31 08:27:32.832 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:34 np0005603609 nova_compute[221550]: 2026-01-31 08:27:34.568 221554 DEBUG nova.network.neutron [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Successfully updated port: 25a0d5fc-40fd-4968-b00e-927b52f03c49 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:27:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:34.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:34.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:34 np0005603609 nova_compute[221550]: 2026-01-31 08:27:34.670 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "refresh_cache-45129920-90ba-447c-9248-9aab48f34863" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:27:34 np0005603609 nova_compute[221550]: 2026-01-31 08:27:34.670 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquired lock "refresh_cache-45129920-90ba-447c-9248-9aab48f34863" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:27:34 np0005603609 nova_compute[221550]: 2026-01-31 08:27:34.670 221554 DEBUG nova.network.neutron [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:27:34 np0005603609 nova_compute[221550]: 2026-01-31 08:27:34.694 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:34 np0005603609 nova_compute[221550]: 2026-01-31 08:27:34.701 221554 DEBUG nova.compute.manager [req-e7628692-e576-4d6f-8720-956bf6cdf455 req-cfaed0a3-2bee-45dd-bd34-0107500a1497 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Received event network-changed-25a0d5fc-40fd-4968-b00e-927b52f03c49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:27:34 np0005603609 nova_compute[221550]: 2026-01-31 08:27:34.702 221554 DEBUG nova.compute.manager [req-e7628692-e576-4d6f-8720-956bf6cdf455 req-cfaed0a3-2bee-45dd-bd34-0107500a1497 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Refreshing instance network info cache due to event network-changed-25a0d5fc-40fd-4968-b00e-927b52f03c49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:27:34 np0005603609 nova_compute[221550]: 2026-01-31 08:27:34.702 221554 DEBUG oslo_concurrency.lockutils [req-e7628692-e576-4d6f-8720-956bf6cdf455 req-cfaed0a3-2bee-45dd-bd34-0107500a1497 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-45129920-90ba-447c-9248-9aab48f34863" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:27:35 np0005603609 nova_compute[221550]: 2026-01-31 08:27:35.038 221554 DEBUG nova.network.neutron [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:27:35 np0005603609 nova_compute[221550]: 2026-01-31 08:27:35.588 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:36 np0005603609 podman[288971]: 2026-01-31 08:27:36.190841267 +0000 UTC m=+0.064955591 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:27:36 np0005603609 podman[288972]: 2026-01-31 08:27:36.195652602 +0000 UTC m=+0.071793685 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.537 221554 DEBUG nova.network.neutron [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Updating instance_info_cache with network_info: [{"id": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "address": "fa:16:3e:aa:91:6d", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25a0d5fc-40", "ovs_interfaceid": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.573 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Releasing lock "refresh_cache-45129920-90ba-447c-9248-9aab48f34863" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.573 221554 DEBUG nova.compute.manager [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Instance network_info: |[{"id": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "address": "fa:16:3e:aa:91:6d", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25a0d5fc-40", "ovs_interfaceid": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.573 221554 DEBUG oslo_concurrency.lockutils [req-e7628692-e576-4d6f-8720-956bf6cdf455 req-cfaed0a3-2bee-45dd-bd34-0107500a1497 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-45129920-90ba-447c-9248-9aab48f34863" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.573 221554 DEBUG nova.network.neutron [req-e7628692-e576-4d6f-8720-956bf6cdf455 req-cfaed0a3-2bee-45dd-bd34-0107500a1497 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Refreshing network info cache for port 25a0d5fc-40fd-4968-b00e-927b52f03c49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.575 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Start _get_guest_xml network_info=[{"id": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "address": "fa:16:3e:aa:91:6d", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25a0d5fc-40", "ovs_interfaceid": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:27:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.580 221554 WARNING nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:27:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:36.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:36.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.595 221554 DEBUG nova.virt.libvirt.host [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.596 221554 DEBUG nova.virt.libvirt.host [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.599 221554 DEBUG nova.virt.libvirt.host [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.600 221554 DEBUG nova.virt.libvirt.host [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.602 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.602 221554 DEBUG nova.virt.hardware [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.603 221554 DEBUG nova.virt.hardware [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.603 221554 DEBUG nova.virt.hardware [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.603 221554 DEBUG nova.virt.hardware [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.604 221554 DEBUG nova.virt.hardware [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.604 221554 DEBUG nova.virt.hardware [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.604 221554 DEBUG nova.virt.hardware [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.605 221554 DEBUG nova.virt.hardware [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.605 221554 DEBUG nova.virt.hardware [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.605 221554 DEBUG nova.virt.hardware [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.606 221554 DEBUG nova.virt.hardware [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:27:36 np0005603609 nova_compute[221550]: 2026-01-31 08:27:36.611 221554 DEBUG oslo_concurrency.processutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:27:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:27:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/953468057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.055 221554 DEBUG oslo_concurrency.processutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.080 221554 DEBUG nova.storage.rbd_utils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image 45129920-90ba-447c-9248-9aab48f34863_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.085 221554 DEBUG oslo_concurrency.processutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:27:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:27:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1320643255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.512 221554 DEBUG oslo_concurrency.processutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.514 221554 DEBUG nova.virt.libvirt.vif [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:27:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-2114237403',display_name='tempest-AttachVolumeNegativeTest-server-2114237403',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-2114237403',id=166,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK17LKHy2ovd//Jqy0J2nwL60uQ9q4IySU1Ezf+7WaNwMWYjskeukYZwUKTjOAdK1+n81ISTjlIwKtHUebZBUbDI4o6AovzGtFFGwJogTo+E0dcJguncmjfdCozMj+dKnA==',key_name='tempest-keypair-876032082',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='418d5319c640455ab23850c0b0f24f92',ramdisk_id='',reservation_id='r-97k154mv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-562353674',owner_user_name='tempest-AttachVolumeNegativeTest-562353674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:27:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48d684de9ba340f48e249b4cce857bfa',uuid=45129920-90ba-447c-9248-9aab48f34863,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "address": "fa:16:3e:aa:91:6d", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25a0d5fc-40", "ovs_interfaceid": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.515 221554 DEBUG nova.network.os_vif_util [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converting VIF {"id": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "address": "fa:16:3e:aa:91:6d", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25a0d5fc-40", "ovs_interfaceid": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.515 221554 DEBUG nova.network.os_vif_util [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:91:6d,bridge_name='br-int',has_traffic_filtering=True,id=25a0d5fc-40fd-4968-b00e-927b52f03c49,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25a0d5fc-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.517 221554 DEBUG nova.objects.instance [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45129920-90ba-447c-9248-9aab48f34863 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.567 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  <uuid>45129920-90ba-447c-9248-9aab48f34863</uuid>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  <name>instance-000000a6</name>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <nova:name>tempest-AttachVolumeNegativeTest-server-2114237403</nova:name>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:27:36</nova:creationTime>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        <nova:user uuid="48d684de9ba340f48e249b4cce857bfa">tempest-AttachVolumeNegativeTest-562353674-project-member</nova:user>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        <nova:project uuid="418d5319c640455ab23850c0b0f24f92">tempest-AttachVolumeNegativeTest-562353674</nova:project>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        <nova:port uuid="25a0d5fc-40fd-4968-b00e-927b52f03c49">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <entry name="serial">45129920-90ba-447c-9248-9aab48f34863</entry>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <entry name="uuid">45129920-90ba-447c-9248-9aab48f34863</entry>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/45129920-90ba-447c-9248-9aab48f34863_disk">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/45129920-90ba-447c-9248-9aab48f34863_disk.config">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:aa:91:6d"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <target dev="tap25a0d5fc-40"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/45129920-90ba-447c-9248-9aab48f34863/console.log" append="off"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:27:37 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:27:37 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:27:37 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:27:37 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.568 221554 DEBUG nova.compute.manager [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Preparing to wait for external event network-vif-plugged-25a0d5fc-40fd-4968-b00e-927b52f03c49 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.569 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "45129920-90ba-447c-9248-9aab48f34863-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.569 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "45129920-90ba-447c-9248-9aab48f34863-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.569 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "45129920-90ba-447c-9248-9aab48f34863-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.570 221554 DEBUG nova.virt.libvirt.vif [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:27:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-2114237403',display_name='tempest-AttachVolumeNegativeTest-server-2114237403',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-2114237403',id=166,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK17LKHy2ovd//Jqy0J2nwL60uQ9q4IySU1Ezf+7WaNwMWYjskeukYZwUKTjOAdK1+n81ISTjlIwKtHUebZBUbDI4o6AovzGtFFGwJogTo+E0dcJguncmjfdCozMj+dKnA==',key_name='tempest-keypair-876032082',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='418d5319c640455ab23850c0b0f24f92',ramdisk_id='',reservation_id='r-97k154mv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-562353674',owner_user_name='tempest-AttachVolumeNegativeTest-562353674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:27:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48d684de9ba340f48e249b4cce857bfa',uuid=45129920-90ba-447c-9248-9aab48f34863,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "address": "fa:16:3e:aa:91:6d", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25a0d5fc-40", "ovs_interfaceid": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.571 221554 DEBUG nova.network.os_vif_util [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converting VIF {"id": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "address": "fa:16:3e:aa:91:6d", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25a0d5fc-40", "ovs_interfaceid": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.571 221554 DEBUG nova.network.os_vif_util [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:91:6d,bridge_name='br-int',has_traffic_filtering=True,id=25a0d5fc-40fd-4968-b00e-927b52f03c49,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25a0d5fc-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.572 221554 DEBUG os_vif [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:91:6d,bridge_name='br-int',has_traffic_filtering=True,id=25a0d5fc-40fd-4968-b00e-927b52f03c49,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25a0d5fc-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.573 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.573 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.574 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.577 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.577 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25a0d5fc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.578 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25a0d5fc-40, col_values=(('external_ids', {'iface-id': '25a0d5fc-40fd-4968-b00e-927b52f03c49', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:91:6d', 'vm-uuid': '45129920-90ba-447c-9248-9aab48f34863'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.579 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:37 np0005603609 NetworkManager[49064]: <info>  [1769848057.5801] manager: (tap25a0d5fc-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.582 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.585 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.587 221554 INFO os_vif [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:91:6d,bridge_name='br-int',has_traffic_filtering=True,id=25a0d5fc-40fd-4968-b00e-927b52f03c49,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25a0d5fc-40')#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.744 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.745 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.745 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No VIF found with MAC fa:16:3e:aa:91:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:27:37 np0005603609 nova_compute[221550]: 2026-01-31 08:27:37.746 221554 INFO nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Using config drive#033[00m
Jan 31 03:27:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e361 e361: 3 total, 3 up, 3 in
Jan 31 03:27:38 np0005603609 nova_compute[221550]: 2026-01-31 08:27:38.137 221554 DEBUG nova.storage.rbd_utils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image 45129920-90ba-447c-9248-9aab48f34863_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:27:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:38.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:27:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:38.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:27:38 np0005603609 nova_compute[221550]: 2026-01-31 08:27:38.759 221554 INFO nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Creating config drive at /var/lib/nova/instances/45129920-90ba-447c-9248-9aab48f34863/disk.config#033[00m
Jan 31 03:27:38 np0005603609 nova_compute[221550]: 2026-01-31 08:27:38.764 221554 DEBUG oslo_concurrency.processutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/45129920-90ba-447c-9248-9aab48f34863/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpv7yb989e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:27:38 np0005603609 nova_compute[221550]: 2026-01-31 08:27:38.895 221554 DEBUG oslo_concurrency.processutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/45129920-90ba-447c-9248-9aab48f34863/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpv7yb989e" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:27:38 np0005603609 nova_compute[221550]: 2026-01-31 08:27:38.922 221554 DEBUG nova.storage.rbd_utils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image 45129920-90ba-447c-9248-9aab48f34863_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:27:38 np0005603609 nova_compute[221550]: 2026-01-31 08:27:38.925 221554 DEBUG oslo_concurrency.processutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/45129920-90ba-447c-9248-9aab48f34863/disk.config 45129920-90ba-447c-9248-9aab48f34863_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:27:39 np0005603609 nova_compute[221550]: 2026-01-31 08:27:39.074 221554 DEBUG nova.network.neutron [req-e7628692-e576-4d6f-8720-956bf6cdf455 req-cfaed0a3-2bee-45dd-bd34-0107500a1497 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Updated VIF entry in instance network info cache for port 25a0d5fc-40fd-4968-b00e-927b52f03c49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:27:39 np0005603609 nova_compute[221550]: 2026-01-31 08:27:39.076 221554 DEBUG nova.network.neutron [req-e7628692-e576-4d6f-8720-956bf6cdf455 req-cfaed0a3-2bee-45dd-bd34-0107500a1497 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Updating instance_info_cache with network_info: [{"id": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "address": "fa:16:3e:aa:91:6d", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25a0d5fc-40", "ovs_interfaceid": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:27:39 np0005603609 nova_compute[221550]: 2026-01-31 08:27:39.105 221554 DEBUG oslo_concurrency.lockutils [req-e7628692-e576-4d6f-8720-956bf6cdf455 req-cfaed0a3-2bee-45dd-bd34-0107500a1497 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-45129920-90ba-447c-9248-9aab48f34863" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:27:39 np0005603609 nova_compute[221550]: 2026-01-31 08:27:39.845 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:40 np0005603609 nova_compute[221550]: 2026-01-31 08:27:40.190 221554 DEBUG oslo_concurrency.processutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/45129920-90ba-447c-9248-9aab48f34863/disk.config 45129920-90ba-447c-9248-9aab48f34863_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:27:40 np0005603609 nova_compute[221550]: 2026-01-31 08:27:40.191 221554 INFO nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Deleting local config drive /var/lib/nova/instances/45129920-90ba-447c-9248-9aab48f34863/disk.config because it was imported into RBD.#033[00m
Jan 31 03:27:40 np0005603609 kernel: tap25a0d5fc-40: entered promiscuous mode
Jan 31 03:27:40 np0005603609 NetworkManager[49064]: <info>  [1769848060.2289] manager: (tap25a0d5fc-40): new Tun device (/org/freedesktop/NetworkManager/Devices/334)
Jan 31 03:27:40 np0005603609 nova_compute[221550]: 2026-01-31 08:27:40.229 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:27:40Z|00717|binding|INFO|Claiming lport 25a0d5fc-40fd-4968-b00e-927b52f03c49 for this chassis.
Jan 31 03:27:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:27:40Z|00718|binding|INFO|25a0d5fc-40fd-4968-b00e-927b52f03c49: Claiming fa:16:3e:aa:91:6d 10.100.0.10
Jan 31 03:27:40 np0005603609 nova_compute[221550]: 2026-01-31 08:27:40.237 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.238 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:91:6d 10.100.0.10'], port_security=['fa:16:3e:aa:91:6d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '45129920-90ba-447c-9248-9aab48f34863', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e26a2af1-a850-4885-977e-596b6be13fb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '418d5319c640455ab23850c0b0f24f92', 'neutron:revision_number': '2', 'neutron:security_group_ids': '58565303-90e4-4587-bbe2-2f31600320f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=068708bd-dd36-4d03-9d65-912eb9981ecc, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=25a0d5fc-40fd-4968-b00e-927b52f03c49) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:27:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:27:40Z|00719|binding|INFO|Setting lport 25a0d5fc-40fd-4968-b00e-927b52f03c49 ovn-installed in OVS
Jan 31 03:27:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:27:40Z|00720|binding|INFO|Setting lport 25a0d5fc-40fd-4968-b00e-927b52f03c49 up in Southbound
Jan 31 03:27:40 np0005603609 nova_compute[221550]: 2026-01-31 08:27:40.239 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.240 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 25a0d5fc-40fd-4968-b00e-927b52f03c49 in datapath e26a2af1-a850-4885-977e-596b6be13fb8 bound to our chassis#033[00m
Jan 31 03:27:40 np0005603609 nova_compute[221550]: 2026-01-31 08:27:40.240 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.242 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e26a2af1-a850-4885-977e-596b6be13fb8#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.251 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f10586-9760-465a-8d57-2b7ec30063df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.252 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape26a2af1-a1 in ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.254 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape26a2af1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.254 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9a262092-9051-42d2-afcc-e0d27a86de16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.255 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4bdbcd2e-3302-4ff4-9f61-30f3fb0463f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:40 np0005603609 systemd-machined[190912]: New machine qemu-89-instance-000000a6.
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.263 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[5e6e8074-52e8-40c2-ad76-0b9eab4bdca4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:40 np0005603609 systemd[1]: Started Virtual Machine qemu-89-instance-000000a6.
Jan 31 03:27:40 np0005603609 systemd-udevd[289287]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.283 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b8e0ac-e10a-420d-8be2-68654715590e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:40 np0005603609 NetworkManager[49064]: <info>  [1769848060.2920] device (tap25a0d5fc-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:27:40 np0005603609 NetworkManager[49064]: <info>  [1769848060.2924] device (tap25a0d5fc-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.307 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b988cd22-e00a-4405-a2ce-bc08abd6a401]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.311 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[63f1a59e-6999-40a5-b418-97859b63efc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:40 np0005603609 NetworkManager[49064]: <info>  [1769848060.3127] manager: (tape26a2af1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/335)
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.331 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[58e73094-f35e-481c-8bab-8d86ae6076d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.337 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[5492ab45-2afe-41bf-b3c8-0d8edb2d14b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:40 np0005603609 NetworkManager[49064]: <info>  [1769848060.3524] device (tape26a2af1-a0): carrier: link connected
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.354 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[510908b3-d80a-47cd-9647-38c89b8a4f63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.367 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f8fd63b1-ed31-425e-835d-d48da864499b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape26a2af1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:7d:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 830005, 'reachable_time': 25927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289317, 'error': None, 'target': 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.378 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ccaa2a3d-dda6-40eb-ab14-e01034d1bc76]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9c:7dba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 830005, 'tstamp': 830005}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289318, 'error': None, 'target': 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.390 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e6189f49-07da-45ef-8d21-6aecba81079d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape26a2af1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:7d:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 830005, 'reachable_time': 25927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289319, 'error': None, 'target': 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.414 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[26edddc1-2c7e-4290-968c-42d8e0272277]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:27:40 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2333100646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.453 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c532e679-d213-4cf1-a536-0f91433c3067]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.454 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape26a2af1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.454 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.455 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape26a2af1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:27:40 np0005603609 nova_compute[221550]: 2026-01-31 08:27:40.456 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:40 np0005603609 NetworkManager[49064]: <info>  [1769848060.4571] manager: (tape26a2af1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Jan 31 03:27:40 np0005603609 kernel: tape26a2af1-a0: entered promiscuous mode
Jan 31 03:27:40 np0005603609 nova_compute[221550]: 2026-01-31 08:27:40.458 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.462 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape26a2af1-a0, col_values=(('external_ids', {'iface-id': '003d1f0e-744f-4244-8c9f-3a9be6033652'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:27:40 np0005603609 nova_compute[221550]: 2026-01-31 08:27:40.463 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:27:40Z|00721|binding|INFO|Releasing lport 003d1f0e-744f-4244-8c9f-3a9be6033652 from this chassis (sb_readonly=0)
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.466 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e26a2af1-a850-4885-977e-596b6be13fb8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e26a2af1-a850-4885-977e-596b6be13fb8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.467 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa48c4d-c389-4f49-875d-83a7ccc66ef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:40 np0005603609 nova_compute[221550]: 2026-01-31 08:27:40.468 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.469 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-e26a2af1-a850-4885-977e-596b6be13fb8
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/e26a2af1-a850-4885-977e-596b6be13fb8.pid.haproxy
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID e26a2af1-a850-4885-977e-596b6be13fb8
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:27:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:27:40.470 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'env', 'PROCESS_TAG=haproxy-e26a2af1-a850-4885-977e-596b6be13fb8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e26a2af1-a850-4885-977e-596b6be13fb8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:27:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:27:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:40.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:27:40 np0005603609 nova_compute[221550]: 2026-01-31 08:27:40.589 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:40.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:40 np0005603609 podman[289351]: 2026-01-31 08:27:40.787708574 +0000 UTC m=+0.043379593 container create cd374c9286385a5922b903a061d3a5e01066e8379b9a4927eb265ccd1413e3d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:27:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:27:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:27:40 np0005603609 nova_compute[221550]: 2026-01-31 08:27:40.820 221554 DEBUG nova.compute.manager [req-936e5cfb-b2ac-45c3-9c72-8f0ad9765dea req-539f508f-0048-4e3f-8b68-a2a6da84808b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Received event network-vif-plugged-25a0d5fc-40fd-4968-b00e-927b52f03c49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:27:40 np0005603609 nova_compute[221550]: 2026-01-31 08:27:40.820 221554 DEBUG oslo_concurrency.lockutils [req-936e5cfb-b2ac-45c3-9c72-8f0ad9765dea req-539f508f-0048-4e3f-8b68-a2a6da84808b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "45129920-90ba-447c-9248-9aab48f34863-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:40 np0005603609 nova_compute[221550]: 2026-01-31 08:27:40.820 221554 DEBUG oslo_concurrency.lockutils [req-936e5cfb-b2ac-45c3-9c72-8f0ad9765dea req-539f508f-0048-4e3f-8b68-a2a6da84808b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "45129920-90ba-447c-9248-9aab48f34863-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:40 np0005603609 nova_compute[221550]: 2026-01-31 08:27:40.821 221554 DEBUG oslo_concurrency.lockutils [req-936e5cfb-b2ac-45c3-9c72-8f0ad9765dea req-539f508f-0048-4e3f-8b68-a2a6da84808b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "45129920-90ba-447c-9248-9aab48f34863-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:40 np0005603609 nova_compute[221550]: 2026-01-31 08:27:40.821 221554 DEBUG nova.compute.manager [req-936e5cfb-b2ac-45c3-9c72-8f0ad9765dea req-539f508f-0048-4e3f-8b68-a2a6da84808b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Processing event network-vif-plugged-25a0d5fc-40fd-4968-b00e-927b52f03c49 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:27:40 np0005603609 systemd[1]: Started libpod-conmon-cd374c9286385a5922b903a061d3a5e01066e8379b9a4927eb265ccd1413e3d4.scope.
Jan 31 03:27:40 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:27:40 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89928bec84fc7c1274516f0682ccb0a96f064a7937acad04d7f91fa9335536c9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:40 np0005603609 podman[289351]: 2026-01-31 08:27:40.852562161 +0000 UTC m=+0.108233190 container init cd374c9286385a5922b903a061d3a5e01066e8379b9a4927eb265ccd1413e3d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 03:27:40 np0005603609 podman[289351]: 2026-01-31 08:27:40.765081351 +0000 UTC m=+0.020752400 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:27:40 np0005603609 podman[289351]: 2026-01-31 08:27:40.861067015 +0000 UTC m=+0.116738044 container start cd374c9286385a5922b903a061d3a5e01066e8379b9a4927eb265ccd1413e3d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 03:27:40 np0005603609 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[289367]: [NOTICE]   (289371) : New worker (289373) forked
Jan 31 03:27:40 np0005603609 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[289367]: [NOTICE]   (289371) : Loading success.
Jan 31 03:27:41 np0005603609 nova_compute[221550]: 2026-01-31 08:27:41.370 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848061.3700814, 45129920-90ba-447c-9248-9aab48f34863 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:27:41 np0005603609 nova_compute[221550]: 2026-01-31 08:27:41.371 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 45129920-90ba-447c-9248-9aab48f34863] VM Started (Lifecycle Event)#033[00m
Jan 31 03:27:41 np0005603609 nova_compute[221550]: 2026-01-31 08:27:41.374 221554 DEBUG nova.compute.manager [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:27:41 np0005603609 nova_compute[221550]: 2026-01-31 08:27:41.378 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:27:41 np0005603609 nova_compute[221550]: 2026-01-31 08:27:41.381 221554 INFO nova.virt.libvirt.driver [-] [instance: 45129920-90ba-447c-9248-9aab48f34863] Instance spawned successfully.#033[00m
Jan 31 03:27:41 np0005603609 nova_compute[221550]: 2026-01-31 08:27:41.381 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:27:41 np0005603609 nova_compute[221550]: 2026-01-31 08:27:41.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:41 np0005603609 nova_compute[221550]: 2026-01-31 08:27:41.949 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 45129920-90ba-447c-9248-9aab48f34863] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:27:41 np0005603609 nova_compute[221550]: 2026-01-31 08:27:41.951 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:27:41 np0005603609 nova_compute[221550]: 2026-01-31 08:27:41.951 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:27:41 np0005603609 nova_compute[221550]: 2026-01-31 08:27:41.952 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:27:41 np0005603609 nova_compute[221550]: 2026-01-31 08:27:41.952 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:27:41 np0005603609 nova_compute[221550]: 2026-01-31 08:27:41.952 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:27:41 np0005603609 nova_compute[221550]: 2026-01-31 08:27:41.953 221554 DEBUG nova.virt.libvirt.driver [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:27:41 np0005603609 nova_compute[221550]: 2026-01-31 08:27:41.958 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 45129920-90ba-447c-9248-9aab48f34863] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:27:42 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:27:42 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:27:42 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.096 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 45129920-90ba-447c-9248-9aab48f34863] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.098 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848061.3705595, 45129920-90ba-447c-9248-9aab48f34863 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.098 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 45129920-90ba-447c-9248-9aab48f34863] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.139 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 45129920-90ba-447c-9248-9aab48f34863] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.142 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848061.3772159, 45129920-90ba-447c-9248-9aab48f34863 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.142 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 45129920-90ba-447c-9248-9aab48f34863] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.157 221554 INFO nova.compute.manager [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Took 12.90 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.158 221554 DEBUG nova.compute.manager [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.226 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 45129920-90ba-447c-9248-9aab48f34863] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.229 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 45129920-90ba-447c-9248-9aab48f34863] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.287 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 45129920-90ba-447c-9248-9aab48f34863] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.328 221554 INFO nova.compute.manager [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Took 14.32 seconds to build instance.#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.386 221554 DEBUG oslo_concurrency.lockutils [None req-a079a6f2-6a01-40a8-a561-8fd8fc86ce83 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "45129920-90ba-447c-9248-9aab48f34863" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.580 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:42.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:42.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.662 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.956 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.957 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.957 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:27:42 np0005603609 nova_compute[221550]: 2026-01-31 08:27:42.957 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8bf29da0-2e1a-4e89-bce8-b293a938c742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:27:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:44 np0005603609 nova_compute[221550]: 2026-01-31 08:27:44.091 221554 DEBUG nova.compute.manager [req-38c82b78-91e4-4245-9019-68c22d9899da req-e65c486e-904c-46a5-ab95-928fe0c00a7a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Received event network-vif-plugged-25a0d5fc-40fd-4968-b00e-927b52f03c49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:27:44 np0005603609 nova_compute[221550]: 2026-01-31 08:27:44.092 221554 DEBUG oslo_concurrency.lockutils [req-38c82b78-91e4-4245-9019-68c22d9899da req-e65c486e-904c-46a5-ab95-928fe0c00a7a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "45129920-90ba-447c-9248-9aab48f34863-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:44 np0005603609 nova_compute[221550]: 2026-01-31 08:27:44.092 221554 DEBUG oslo_concurrency.lockutils [req-38c82b78-91e4-4245-9019-68c22d9899da req-e65c486e-904c-46a5-ab95-928fe0c00a7a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "45129920-90ba-447c-9248-9aab48f34863-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:44 np0005603609 nova_compute[221550]: 2026-01-31 08:27:44.093 221554 DEBUG oslo_concurrency.lockutils [req-38c82b78-91e4-4245-9019-68c22d9899da req-e65c486e-904c-46a5-ab95-928fe0c00a7a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "45129920-90ba-447c-9248-9aab48f34863-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:44 np0005603609 nova_compute[221550]: 2026-01-31 08:27:44.093 221554 DEBUG nova.compute.manager [req-38c82b78-91e4-4245-9019-68c22d9899da req-e65c486e-904c-46a5-ab95-928fe0c00a7a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] No waiting events found dispatching network-vif-plugged-25a0d5fc-40fd-4968-b00e-927b52f03c49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:27:44 np0005603609 nova_compute[221550]: 2026-01-31 08:27:44.093 221554 WARNING nova.compute.manager [req-38c82b78-91e4-4245-9019-68c22d9899da req-e65c486e-904c-46a5-ab95-928fe0c00a7a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Received unexpected event network-vif-plugged-25a0d5fc-40fd-4968-b00e-927b52f03c49 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:27:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:27:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:44.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:27:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:44.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:45 np0005603609 nova_compute[221550]: 2026-01-31 08:27:45.593 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:46.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:46.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:47 np0005603609 nova_compute[221550]: 2026-01-31 08:27:47.582 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:48 np0005603609 nova_compute[221550]: 2026-01-31 08:27:48.385 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updating instance_info_cache with network_info: [{"id": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "address": "fa:16:3e:85:fa:8d", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b86ae61-4c", "ovs_interfaceid": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:27:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:48.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:48.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:48 np0005603609 nova_compute[221550]: 2026-01-31 08:27:48.620 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:27:48 np0005603609 nova_compute[221550]: 2026-01-31 08:27:48.621 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:27:48 np0005603609 nova_compute[221550]: 2026-01-31 08:27:48.621 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:48 np0005603609 nova_compute[221550]: 2026-01-31 08:27:48.621 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:27:48 np0005603609 nova_compute[221550]: 2026-01-31 08:27:48.622 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:49 np0005603609 nova_compute[221550]: 2026-01-31 08:27:49.253 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:49 np0005603609 nova_compute[221550]: 2026-01-31 08:27:49.254 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:49 np0005603609 nova_compute[221550]: 2026-01-31 08:27:49.254 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:49 np0005603609 nova_compute[221550]: 2026-01-31 08:27:49.254 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:27:49 np0005603609 nova_compute[221550]: 2026-01-31 08:27:49.255 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:27:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:27:49 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3567843601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:27:49 np0005603609 nova_compute[221550]: 2026-01-31 08:27:49.738 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:27:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:50.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:50.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:50 np0005603609 nova_compute[221550]: 2026-01-31 08:27:50.640 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:50 np0005603609 nova_compute[221550]: 2026-01-31 08:27:50.665 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:27:50 np0005603609 nova_compute[221550]: 2026-01-31 08:27:50.666 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:27:50 np0005603609 nova_compute[221550]: 2026-01-31 08:27:50.669 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:27:50 np0005603609 nova_compute[221550]: 2026-01-31 08:27:50.669 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:27:50 np0005603609 nova_compute[221550]: 2026-01-31 08:27:50.817 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:27:50 np0005603609 nova_compute[221550]: 2026-01-31 08:27:50.818 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3828MB free_disk=20.693660736083984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:27:50 np0005603609 nova_compute[221550]: 2026-01-31 08:27:50.819 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:50 np0005603609 nova_compute[221550]: 2026-01-31 08:27:50.819 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:50 np0005603609 nova_compute[221550]: 2026-01-31 08:27:50.843 221554 DEBUG nova.compute.manager [req-fc744f55-6d56-4745-bf75-27ed45307941 req-ebee2774-a7c8-4221-946d-f2d1cbf44acc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Received event network-changed-25a0d5fc-40fd-4968-b00e-927b52f03c49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:27:50 np0005603609 nova_compute[221550]: 2026-01-31 08:27:50.843 221554 DEBUG nova.compute.manager [req-fc744f55-6d56-4745-bf75-27ed45307941 req-ebee2774-a7c8-4221-946d-f2d1cbf44acc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Refreshing instance network info cache due to event network-changed-25a0d5fc-40fd-4968-b00e-927b52f03c49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:27:50 np0005603609 nova_compute[221550]: 2026-01-31 08:27:50.843 221554 DEBUG oslo_concurrency.lockutils [req-fc744f55-6d56-4745-bf75-27ed45307941 req-ebee2774-a7c8-4221-946d-f2d1cbf44acc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-45129920-90ba-447c-9248-9aab48f34863" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:27:50 np0005603609 nova_compute[221550]: 2026-01-31 08:27:50.844 221554 DEBUG oslo_concurrency.lockutils [req-fc744f55-6d56-4745-bf75-27ed45307941 req-ebee2774-a7c8-4221-946d-f2d1cbf44acc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-45129920-90ba-447c-9248-9aab48f34863" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:27:50 np0005603609 nova_compute[221550]: 2026-01-31 08:27:50.844 221554 DEBUG nova.network.neutron [req-fc744f55-6d56-4745-bf75-27ed45307941 req-ebee2774-a7c8-4221-946d-f2d1cbf44acc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Refreshing network info cache for port 25a0d5fc-40fd-4968-b00e-927b52f03c49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:27:50 np0005603609 nova_compute[221550]: 2026-01-31 08:27:50.847 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:50 np0005603609 nova_compute[221550]: 2026-01-31 08:27:50.847 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:51 np0005603609 nova_compute[221550]: 2026-01-31 08:27:51.368 221554 DEBUG nova.compute.manager [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:27:51 np0005603609 nova_compute[221550]: 2026-01-31 08:27:51.530 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 8bf29da0-2e1a-4e89-bce8-b293a938c742 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:27:51 np0005603609 nova_compute[221550]: 2026-01-31 08:27:51.530 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 45129920-90ba-447c-9248-9aab48f34863 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:27:51 np0005603609 nova_compute[221550]: 2026-01-31 08:27:51.735 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 20d5fd84-de90-46b0-816e-0f378fd7d0c7 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Jan 31 03:27:51 np0005603609 nova_compute[221550]: 2026-01-31 08:27:51.735 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:27:51 np0005603609 nova_compute[221550]: 2026-01-31 08:27:51.736 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:27:51 np0005603609 nova_compute[221550]: 2026-01-31 08:27:51.752 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:27:51 np0005603609 nova_compute[221550]: 2026-01-31 08:27:51.777 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:27:51 np0005603609 nova_compute[221550]: 2026-01-31 08:27:51.778 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:27:51 np0005603609 nova_compute[221550]: 2026-01-31 08:27:51.793 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:27:51 np0005603609 nova_compute[221550]: 2026-01-31 08:27:51.819 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:27:51 np0005603609 nova_compute[221550]: 2026-01-31 08:27:51.856 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:51 np0005603609 nova_compute[221550]: 2026-01-31 08:27:51.932 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:27:52 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 03:27:52 np0005603609 nova_compute[221550]: 2026-01-31 08:27:52.584 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:27:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:52.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:27:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:52.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:54.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:54.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:55 np0005603609 nova_compute[221550]: 2026-01-31 08:27:55.640 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:56 np0005603609 nova_compute[221550]: 2026-01-31 08:27:56.329 221554 DEBUG nova.network.neutron [req-fc744f55-6d56-4745-bf75-27ed45307941 req-ebee2774-a7c8-4221-946d-f2d1cbf44acc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Updated VIF entry in instance network info cache for port 25a0d5fc-40fd-4968-b00e-927b52f03c49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:27:56 np0005603609 nova_compute[221550]: 2026-01-31 08:27:56.329 221554 DEBUG nova.network.neutron [req-fc744f55-6d56-4745-bf75-27ed45307941 req-ebee2774-a7c8-4221-946d-f2d1cbf44acc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Updating instance_info_cache with network_info: [{"id": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "address": "fa:16:3e:aa:91:6d", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25a0d5fc-40", "ovs_interfaceid": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:27:56 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 03:27:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:56.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:56.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:56 np0005603609 nova_compute[221550]: 2026-01-31 08:27:56.697 221554 DEBUG oslo_concurrency.lockutils [req-fc744f55-6d56-4745-bf75-27ed45307941 req-ebee2774-a7c8-4221-946d-f2d1cbf44acc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-45129920-90ba-447c-9248-9aab48f34863" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:27:57 np0005603609 nova_compute[221550]: 2026-01-31 08:27:57.587 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:58.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:27:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:58.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:00 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 03:28:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:00.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:00.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:00 np0005603609 nova_compute[221550]: 2026-01-31 08:28:00.642 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).paxos(paxos active c 5774..6368) lease_timeout -- calling new election
Jan 31 03:28:00 np0005603609 ceph-mon[81667]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 31 03:28:00 np0005603609 ceph-mon[81667]: paxos.2).electionLogic(46) init, last seen epoch 46
Jan 31 03:28:01 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 5.571057796s
Jan 31 03:28:01 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.571347237s, txc = 0x55f202411200
Jan 31 03:28:01 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 5.571057796s
Jan 31 03:28:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 03:28:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 03:28:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:28:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:28:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:28:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 03:28:02 np0005603609 nova_compute[221550]: 2026-01-31 08:28:02.590 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:02.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:28:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:02.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:28:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Jan 31 03:28:02 np0005603609 nova_compute[221550]: 2026-01-31 08:28:02.937 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 11.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:02 np0005603609 nova_compute[221550]: 2026-01-31 08:28:02.941 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:28:03 np0005603609 nova_compute[221550]: 2026-01-31 08:28:03.170 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:28:03 np0005603609 nova_compute[221550]: 2026-01-31 08:28:03.590 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:28:03 np0005603609 nova_compute[221550]: 2026-01-31 08:28:03.591 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 12.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:03 np0005603609 nova_compute[221550]: 2026-01-31 08:28:03.591 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 11.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:03 np0005603609 nova_compute[221550]: 2026-01-31 08:28:03.605 221554 DEBUG nova.virt.hardware [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:28:03 np0005603609 nova_compute[221550]: 2026-01-31 08:28:03.605 221554 INFO nova.compute.claims [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:28:03 np0005603609 nova_compute[221550]: 2026-01-31 08:28:03.629 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:03 np0005603609 nova_compute[221550]: 2026-01-31 08:28:03.629 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:03 np0005603609 nova_compute[221550]: 2026-01-31 08:28:03.630 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 03:28:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:28:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:04.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:28:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:28:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:04.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:28:04 np0005603609 nova_compute[221550]: 2026-01-31 08:28:04.955 221554 DEBUG oslo_concurrency.processutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:28:05 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2965806856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:28:05 np0005603609 nova_compute[221550]: 2026-01-31 08:28:05.367 221554 DEBUG oslo_concurrency.processutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:05 np0005603609 nova_compute[221550]: 2026-01-31 08:28:05.374 221554 DEBUG nova.compute.provider_tree [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:28:05 np0005603609 nova_compute[221550]: 2026-01-31 08:28:05.644 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:05 np0005603609 ceph-mon[81667]: mon.compute-2 calling monitor election
Jan 31 03:28:05 np0005603609 ceph-mon[81667]: mon.compute-1 calling monitor election
Jan 31 03:28:05 np0005603609 ceph-mon[81667]: mon.compute-0 calling monitor election
Jan 31 03:28:05 np0005603609 ceph-mon[81667]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 03:28:05 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 03:28:05 np0005603609 nova_compute[221550]: 2026-01-31 08:28:05.762 221554 DEBUG nova.scheduler.client.report [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:28:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:06 np0005603609 nova_compute[221550]: 2026-01-31 08:28:06.204 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:06 np0005603609 nova_compute[221550]: 2026-01-31 08:28:06.206 221554 DEBUG nova.compute.manager [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:28:06 np0005603609 nova_compute[221550]: 2026-01-31 08:28:06.329 221554 DEBUG nova.compute.manager [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:28:06 np0005603609 nova_compute[221550]: 2026-01-31 08:28:06.330 221554 DEBUG nova.network.neutron [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:28:06 np0005603609 nova_compute[221550]: 2026-01-31 08:28:06.426 221554 INFO nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:28:06 np0005603609 nova_compute[221550]: 2026-01-31 08:28:06.555 221554 DEBUG nova.compute.manager [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:28:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:06.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:06 np0005603609 nova_compute[221550]: 2026-01-31 08:28:06.632 221554 DEBUG nova.policy [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eac51187531841e2891fc5d3c5f84123', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '722ab2e9dd674709953be812d4c88493', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:28:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:06.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:06 np0005603609 nova_compute[221550]: 2026-01-31 08:28:06.946 221554 DEBUG nova.compute.manager [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:28:06 np0005603609 nova_compute[221550]: 2026-01-31 08:28:06.948 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:28:06 np0005603609 nova_compute[221550]: 2026-01-31 08:28:06.948 221554 INFO nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Creating image(s)#033[00m
Jan 31 03:28:06 np0005603609 nova_compute[221550]: 2026-01-31 08:28:06.982 221554 DEBUG nova.storage.rbd_utils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:07 np0005603609 podman[289527]: 2026-01-31 08:28:07.158711359 +0000 UTC m=+0.042351198 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:28:07 np0005603609 podman[289526]: 2026-01-31 08:28:07.190712048 +0000 UTC m=+0.074014459 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 03:28:07 np0005603609 nova_compute[221550]: 2026-01-31 08:28:07.301 221554 DEBUG nova.storage.rbd_utils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:07 np0005603609 nova_compute[221550]: 2026-01-31 08:28:07.325 221554 DEBUG nova.storage.rbd_utils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:07 np0005603609 nova_compute[221550]: 2026-01-31 08:28:07.328 221554 DEBUG oslo_concurrency.processutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:07 np0005603609 nova_compute[221550]: 2026-01-31 08:28:07.381 221554 DEBUG oslo_concurrency.processutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:07 np0005603609 nova_compute[221550]: 2026-01-31 08:28:07.382 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:07 np0005603609 nova_compute[221550]: 2026-01-31 08:28:07.383 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:07 np0005603609 nova_compute[221550]: 2026-01-31 08:28:07.383 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:07 np0005603609 nova_compute[221550]: 2026-01-31 08:28:07.408 221554 DEBUG nova.storage.rbd_utils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:07 np0005603609 nova_compute[221550]: 2026-01-31 08:28:07.411 221554 DEBUG oslo_concurrency.processutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:07.520 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:07.520 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:07.521 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:07 np0005603609 nova_compute[221550]: 2026-01-31 08:28:07.631 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:28:08Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:91:6d 10.100.0.10
Jan 31 03:28:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:28:08Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:91:6d 10.100.0.10
Jan 31 03:28:08 np0005603609 nova_compute[221550]: 2026-01-31 08:28:08.616 221554 DEBUG nova.network.neutron [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Successfully created port: 54f55019-4a06-4846-ad7a-5d67daa3e094 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:28:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:28:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:08.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:28:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:28:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:08.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:28:08 np0005603609 nova_compute[221550]: 2026-01-31 08:28:08.680 221554 DEBUG oslo_concurrency.processutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:08 np0005603609 nova_compute[221550]: 2026-01-31 08:28:08.771 221554 DEBUG nova.storage.rbd_utils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] resizing rbd image 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:28:08 np0005603609 nova_compute[221550]: 2026-01-31 08:28:08.956 221554 DEBUG nova.objects.instance [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'migration_context' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:28:08 np0005603609 nova_compute[221550]: 2026-01-31 08:28:08.974 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:28:08 np0005603609 nova_compute[221550]: 2026-01-31 08:28:08.975 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Ensure instance console log exists: /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:28:08 np0005603609 nova_compute[221550]: 2026-01-31 08:28:08.975 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:08 np0005603609 nova_compute[221550]: 2026-01-31 08:28:08.976 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:08 np0005603609 nova_compute[221550]: 2026-01-31 08:28:08.976 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:28:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:28:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:28:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:10.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:28:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:10.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:10 np0005603609 nova_compute[221550]: 2026-01-31 08:28:10.649 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:11.017 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:28:11 np0005603609 nova_compute[221550]: 2026-01-31 08:28:11.017 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:11.018 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:28:11 np0005603609 nova_compute[221550]: 2026-01-31 08:28:11.433 221554 DEBUG nova.network.neutron [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Successfully updated port: 54f55019-4a06-4846-ad7a-5d67daa3e094 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:28:11 np0005603609 nova_compute[221550]: 2026-01-31 08:28:11.460 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:28:11 np0005603609 nova_compute[221550]: 2026-01-31 08:28:11.460 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquired lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:28:11 np0005603609 nova_compute[221550]: 2026-01-31 08:28:11.460 221554 DEBUG nova.network.neutron [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:28:11 np0005603609 nova_compute[221550]: 2026-01-31 08:28:11.580 221554 DEBUG nova.compute.manager [req-d9220435-3f28-45ee-a505-255a8eb750d1 req-913a1096-a187-4d2a-bcc8-69cadbebfbf9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-changed-54f55019-4a06-4846-ad7a-5d67daa3e094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:28:11 np0005603609 nova_compute[221550]: 2026-01-31 08:28:11.580 221554 DEBUG nova.compute.manager [req-d9220435-3f28-45ee-a505-255a8eb750d1 req-913a1096-a187-4d2a-bcc8-69cadbebfbf9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Refreshing instance network info cache due to event network-changed-54f55019-4a06-4846-ad7a-5d67daa3e094. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:28:11 np0005603609 nova_compute[221550]: 2026-01-31 08:28:11.581 221554 DEBUG oslo_concurrency.lockutils [req-d9220435-3f28-45ee-a505-255a8eb750d1 req-913a1096-a187-4d2a-bcc8-69cadbebfbf9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:28:11 np0005603609 nova_compute[221550]: 2026-01-31 08:28:11.743 221554 DEBUG nova.network.neutron [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:28:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:12.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:12 np0005603609 nova_compute[221550]: 2026-01-31 08:28:12.633 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:12.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.229 221554 DEBUG nova.network.neutron [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updating instance_info_cache with network_info: [{"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.272 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Releasing lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.273 221554 DEBUG nova.compute.manager [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Instance network_info: |[{"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.273 221554 DEBUG oslo_concurrency.lockutils [req-d9220435-3f28-45ee-a505-255a8eb750d1 req-913a1096-a187-4d2a-bcc8-69cadbebfbf9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.273 221554 DEBUG nova.network.neutron [req-d9220435-3f28-45ee-a505-255a8eb750d1 req-913a1096-a187-4d2a-bcc8-69cadbebfbf9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Refreshing network info cache for port 54f55019-4a06-4846-ad7a-5d67daa3e094 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.276 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Start _get_guest_xml network_info=[{"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.282 221554 WARNING nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.292 221554 DEBUG nova.virt.libvirt.host [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.293 221554 DEBUG nova.virt.libvirt.host [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.297 221554 DEBUG nova.virt.libvirt.host [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.297 221554 DEBUG nova.virt.libvirt.host [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.298 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.299 221554 DEBUG nova.virt.hardware [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.299 221554 DEBUG nova.virt.hardware [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.299 221554 DEBUG nova.virt.hardware [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.300 221554 DEBUG nova.virt.hardware [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.300 221554 DEBUG nova.virt.hardware [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.300 221554 DEBUG nova.virt.hardware [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.300 221554 DEBUG nova.virt.hardware [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.300 221554 DEBUG nova.virt.hardware [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.300 221554 DEBUG nova.virt.hardware [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.301 221554 DEBUG nova.virt.hardware [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.301 221554 DEBUG nova.virt.hardware [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.304 221554 DEBUG oslo_concurrency.processutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:28:13 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/960312472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.740 221554 DEBUG oslo_concurrency.processutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.780 221554 DEBUG nova.storage.rbd_utils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:13 np0005603609 nova_compute[221550]: 2026-01-31 08:28:13.783 221554 DEBUG oslo_concurrency.processutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:28:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3856339751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:28:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e362 e362: 3 total, 3 up, 3 in
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.233 221554 DEBUG oslo_concurrency.processutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.235 221554 DEBUG nova.virt.libvirt.vif [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:27:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-711445957',display_name='tempest-ServersNegativeTestJSON-server-711445957',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-711445957',id=167,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='722ab2e9dd674709953be812d4c88493',ramdisk_id='',reservation_id='r-48kv7dus',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1810683467',owner_user_name='tempest-ServersNegati
veTestJSON-1810683467-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:28:06Z,user_data=None,user_id='eac51187531841e2891fc5d3c5f84123',uuid=20d5fd84-de90-46b0-816e-0f378fd7d0c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.235 221554 DEBUG nova.network.os_vif_util [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converting VIF {"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.236 221554 DEBUG nova.network.os_vif_util [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.238 221554 DEBUG nova.objects.instance [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.340 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  <uuid>20d5fd84-de90-46b0-816e-0f378fd7d0c7</uuid>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  <name>instance-000000a7</name>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServersNegativeTestJSON-server-711445957</nova:name>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:28:13</nova:creationTime>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        <nova:user uuid="eac51187531841e2891fc5d3c5f84123">tempest-ServersNegativeTestJSON-1810683467-project-member</nova:user>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        <nova:project uuid="722ab2e9dd674709953be812d4c88493">tempest-ServersNegativeTestJSON-1810683467</nova:project>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        <nova:port uuid="54f55019-4a06-4846-ad7a-5d67daa3e094">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <entry name="serial">20d5fd84-de90-46b0-816e-0f378fd7d0c7</entry>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <entry name="uuid">20d5fd84-de90-46b0-816e-0f378fd7d0c7</entry>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk.config">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:31:38:16"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <target dev="tap54f55019-4a"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7/console.log" append="off"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:28:14 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:28:14 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:28:14 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:28:14 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.342 221554 DEBUG nova.compute.manager [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Preparing to wait for external event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.342 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.343 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.343 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.344 221554 DEBUG nova.virt.libvirt.vif [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:27:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-711445957',display_name='tempest-ServersNegativeTestJSON-server-711445957',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-711445957',id=167,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='722ab2e9dd674709953be812d4c88493',ramdisk_id='',reservation_id='r-48kv7dus',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1810683467',owner_user_name='tempest-ServersNegativeTestJSON-1810683467-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:28:06Z,user_data=None,user_id='eac51187531841e2891fc5d3c5f84123',uuid=20d5fd84-de90-46b0-816e-0f378fd7d0c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.344 221554 DEBUG nova.network.os_vif_util [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converting VIF {"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.345 221554 DEBUG nova.network.os_vif_util [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.345 221554 DEBUG os_vif [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.349 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.349 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.350 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.354 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.354 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54f55019-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.355 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54f55019-4a, col_values=(('external_ids', {'iface-id': '54f55019-4a06-4846-ad7a-5d67daa3e094', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:31:38:16', 'vm-uuid': '20d5fd84-de90-46b0-816e-0f378fd7d0c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.356 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:14 np0005603609 NetworkManager[49064]: <info>  [1769848094.3578] manager: (tap54f55019-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.358 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.362 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.363 221554 INFO os_vif [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a')#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.608 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.609 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.609 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] No VIF found with MAC fa:16:3e:31:38:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.610 221554 INFO nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Using config drive#033[00m
Jan 31 03:28:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:14.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:14 np0005603609 nova_compute[221550]: 2026-01-31 08:28:14.636 221554 DEBUG nova.storage.rbd_utils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:14.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:15.020 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:28:15 np0005603609 nova_compute[221550]: 2026-01-31 08:28:15.650 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:16 np0005603609 nova_compute[221550]: 2026-01-31 08:28:16.351 221554 INFO nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Creating config drive at /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7/disk.config#033[00m
Jan 31 03:28:16 np0005603609 nova_compute[221550]: 2026-01-31 08:28:16.355 221554 DEBUG oslo_concurrency.processutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpd7mgsnfr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:16 np0005603609 nova_compute[221550]: 2026-01-31 08:28:16.489 221554 DEBUG oslo_concurrency.processutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpd7mgsnfr" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:16 np0005603609 nova_compute[221550]: 2026-01-31 08:28:16.518 221554 DEBUG nova.storage.rbd_utils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:16 np0005603609 nova_compute[221550]: 2026-01-31 08:28:16.521 221554 DEBUG oslo_concurrency.processutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7/disk.config 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:16.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:16.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:17 np0005603609 nova_compute[221550]: 2026-01-31 08:28:17.716 221554 DEBUG oslo_concurrency.processutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7/disk.config 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:17 np0005603609 nova_compute[221550]: 2026-01-31 08:28:17.716 221554 INFO nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Deleting local config drive /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7/disk.config because it was imported into RBD.#033[00m
Jan 31 03:28:17 np0005603609 kernel: tap54f55019-4a: entered promiscuous mode
Jan 31 03:28:17 np0005603609 NetworkManager[49064]: <info>  [1769848097.7617] manager: (tap54f55019-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/338)
Jan 31 03:28:17 np0005603609 nova_compute[221550]: 2026-01-31 08:28:17.762 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:28:17Z|00722|binding|INFO|Claiming lport 54f55019-4a06-4846-ad7a-5d67daa3e094 for this chassis.
Jan 31 03:28:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:28:17Z|00723|binding|INFO|54f55019-4a06-4846-ad7a-5d67daa3e094: Claiming fa:16:3e:31:38:16 10.100.0.12
Jan 31 03:28:17 np0005603609 nova_compute[221550]: 2026-01-31 08:28:17.769 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.771 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:38:16 10.100.0.12'], port_security=['fa:16:3e:31:38:16 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '20d5fd84-de90-46b0-816e-0f378fd7d0c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '722ab2e9dd674709953be812d4c88493', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9520bea1-c264-4093-b5f2-d0e6aba13129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2519b9f-dcfa-4a01-b5ee-7b195f240dfa, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=54f55019-4a06-4846-ad7a-5d67daa3e094) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:28:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:28:17Z|00724|binding|INFO|Setting lport 54f55019-4a06-4846-ad7a-5d67daa3e094 ovn-installed in OVS
Jan 31 03:28:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:28:17Z|00725|binding|INFO|Setting lport 54f55019-4a06-4846-ad7a-5d67daa3e094 up in Southbound
Jan 31 03:28:17 np0005603609 nova_compute[221550]: 2026-01-31 08:28:17.773 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.772 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 54f55019-4a06-4846-ad7a-5d67daa3e094 in datapath e14a8b1b-1e10-44db-9858-e5a0ae5f2476 bound to our chassis#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.774 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e14a8b1b-1e10-44db-9858-e5a0ae5f2476#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.783 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b061502c-ddc3-4f13-81ef-d1927cd92a1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.784 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape14a8b1b-11 in ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:28:17 np0005603609 systemd-udevd[289893]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.787 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape14a8b1b-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.788 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[374acf8e-453d-404a-826f-80d298413a5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:17 np0005603609 systemd-machined[190912]: New machine qemu-90-instance-000000a7.
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.789 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[380f0e0f-9160-46e1-87df-d3ecea526ad3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.797 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[23dac743-9830-4ba5-a4b8-cc21e19d4506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:17 np0005603609 NetworkManager[49064]: <info>  [1769848097.8002] device (tap54f55019-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:28:17 np0005603609 systemd[1]: Started Virtual Machine qemu-90-instance-000000a7.
Jan 31 03:28:17 np0005603609 NetworkManager[49064]: <info>  [1769848097.8006] device (tap54f55019-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.809 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1b077234-d48c-41ab-ac27-2295439495b6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.828 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[145cd3eb-a181-4085-ac27-c74888b8b178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.831 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c43e7159-497c-4176-8aaa-6515bce98028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:17 np0005603609 systemd-udevd[289897]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:28:17 np0005603609 NetworkManager[49064]: <info>  [1769848097.8348] manager: (tape14a8b1b-10): new Veth device (/org/freedesktop/NetworkManager/Devices/339)
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.851 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[5f312b1c-3afc-4ad3-b4dc-f9509934b9c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.854 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b038d080-44af-4d23-8048-2f2f3546d1b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:17 np0005603609 NetworkManager[49064]: <info>  [1769848097.8680] device (tape14a8b1b-10): carrier: link connected
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.871 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[ab5bd263-381a-486f-81e4-9dc15f31a36f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.882 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d934c6-23d8-4ee8-ab35-a8e36c08a94f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape14a8b1b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:d3:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833756, 'reachable_time': 32915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289925, 'error': None, 'target': 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.894 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d83b76f2-065c-4aa0-a720-30174e19f07d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:d359'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 833756, 'tstamp': 833756}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289926, 'error': None, 'target': 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.905 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0f304a07-ab69-40a2-8c54-61f1887b2389]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape14a8b1b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:d3:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833756, 'reachable_time': 32915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289927, 'error': None, 'target': 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.926 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[97032fc8-e8f6-4196-bc8f-2d5585a20834]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.966 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[931eebdd-7663-4f04-bcea-e2621d671d37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.968 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape14a8b1b-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.968 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.968 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape14a8b1b-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:28:17 np0005603609 NetworkManager[49064]: <info>  [1769848097.9709] manager: (tape14a8b1b-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/340)
Jan 31 03:28:17 np0005603609 nova_compute[221550]: 2026-01-31 08:28:17.970 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:17 np0005603609 kernel: tape14a8b1b-10: entered promiscuous mode
Jan 31 03:28:17 np0005603609 nova_compute[221550]: 2026-01-31 08:28:17.973 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.976 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape14a8b1b-10, col_values=(('external_ids', {'iface-id': '6550f58c-c002-4c91-85a2-53b4c1c807b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:28:17 np0005603609 nova_compute[221550]: 2026-01-31 08:28:17.977 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:28:17Z|00726|binding|INFO|Releasing lport 6550f58c-c002-4c91-85a2-53b4c1c807b7 from this chassis (sb_readonly=0)
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.979 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e14a8b1b-1e10-44db-9858-e5a0ae5f2476.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e14a8b1b-1e10-44db-9858-e5a0ae5f2476.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.980 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[23e9011d-9b5d-4d1a-bd33-1b3f9bd7bd93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.981 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-e14a8b1b-1e10-44db-9858-e5a0ae5f2476
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/e14a8b1b-1e10-44db-9858-e5a0ae5f2476.pid.haproxy
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID e14a8b1b-1e10-44db-9858-e5a0ae5f2476
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:28:17 np0005603609 nova_compute[221550]: 2026-01-31 08:28:17.982 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:17.983 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'env', 'PROCESS_TAG=haproxy-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e14a8b1b-1e10-44db-9858-e5a0ae5f2476.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:28:18 np0005603609 nova_compute[221550]: 2026-01-31 08:28:18.004 221554 DEBUG nova.network.neutron [req-d9220435-3f28-45ee-a505-255a8eb750d1 req-913a1096-a187-4d2a-bcc8-69cadbebfbf9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updated VIF entry in instance network info cache for port 54f55019-4a06-4846-ad7a-5d67daa3e094. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:28:18 np0005603609 nova_compute[221550]: 2026-01-31 08:28:18.005 221554 DEBUG nova.network.neutron [req-d9220435-3f28-45ee-a505-255a8eb750d1 req-913a1096-a187-4d2a-bcc8-69cadbebfbf9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updating instance_info_cache with network_info: [{"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:28:18 np0005603609 nova_compute[221550]: 2026-01-31 08:28:18.029 221554 DEBUG oslo_concurrency.lockutils [req-d9220435-3f28-45ee-a505-255a8eb750d1 req-913a1096-a187-4d2a-bcc8-69cadbebfbf9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:28:18 np0005603609 podman[289966]: 2026-01-31 08:28:18.288432862 +0000 UTC m=+0.019729424 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:28:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:18.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:18 np0005603609 podman[289966]: 2026-01-31 08:28:18.642372782 +0000 UTC m=+0.373669314 container create eb65e349e968dca389ff5178672733478199c432ee5a2835ffd8e129b30e047a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:28:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:18.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:18 np0005603609 systemd[1]: Started libpod-conmon-eb65e349e968dca389ff5178672733478199c432ee5a2835ffd8e129b30e047a.scope.
Jan 31 03:28:18 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:28:18 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0454effab9d4bd52bcad53ff17348235ee8ae6105715b86e8b64dfe082b9ff19/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:28:18 np0005603609 podman[289966]: 2026-01-31 08:28:18.854094806 +0000 UTC m=+0.585391358 container init eb65e349e968dca389ff5178672733478199c432ee5a2835ffd8e129b30e047a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:28:18 np0005603609 podman[289966]: 2026-01-31 08:28:18.858774989 +0000 UTC m=+0.590071521 container start eb65e349e968dca389ff5178672733478199c432ee5a2835ffd8e129b30e047a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:28:18 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[289991]: [NOTICE]   (290011) : New worker (290015) forked
Jan 31 03:28:18 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[289991]: [NOTICE]   (290011) : Loading success.
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.030 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848099.0294256, 20d5fd84-de90-46b0-816e-0f378fd7d0c7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.030 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] VM Started (Lifecycle Event)#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.064 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.068 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848099.0299208, 20d5fd84-de90-46b0-816e-0f378fd7d0c7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.068 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.098 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.101 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.123 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.397 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.612 221554 DEBUG nova.compute.manager [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.612 221554 DEBUG oslo_concurrency.lockutils [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.612 221554 DEBUG oslo_concurrency.lockutils [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.613 221554 DEBUG oslo_concurrency.lockutils [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.613 221554 DEBUG nova.compute.manager [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Processing event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.613 221554 DEBUG nova.compute.manager [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.613 221554 DEBUG oslo_concurrency.lockutils [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.614 221554 DEBUG oslo_concurrency.lockutils [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.614 221554 DEBUG oslo_concurrency.lockutils [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.614 221554 DEBUG nova.compute.manager [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] No waiting events found dispatching network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.614 221554 WARNING nova.compute.manager [req-d4be654c-3637-4c6a-8bb1-4ab443225105 req-ba91c07b-ec33-4494-ab5b-55ea83c34288 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received unexpected event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 for instance with vm_state building and task_state spawning.#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.615 221554 DEBUG nova.compute.manager [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.617 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848099.6174903, 20d5fd84-de90-46b0-816e-0f378fd7d0c7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.618 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.620 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.622 221554 INFO nova.virt.libvirt.driver [-] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Instance spawned successfully.
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.623 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.653 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.658 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.658 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.659 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.659 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.660 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.660 221554 DEBUG nova.virt.libvirt.driver [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.665 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.728 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.770 221554 INFO nova.compute.manager [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Took 12.82 seconds to spawn the instance on the hypervisor.
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.771 221554 DEBUG nova.compute.manager [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.849 221554 INFO nova.compute.manager [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Took 28.04 seconds to build instance.
Jan 31 03:28:19 np0005603609 nova_compute[221550]: 2026-01-31 08:28:19.874 221554 DEBUG oslo_concurrency.lockutils [None req-44788741-8d6e-410e-997e-4523b5e65e8f eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 29.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:28:20 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 31 03:28:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:20.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:20 np0005603609 nova_compute[221550]: 2026-01-31 08:28:20.654 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:20.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:20 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 31 03:28:20 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 31 03:28:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:21 np0005603609 ovn_controller[130359]: 2026-01-31T08:28:21Z|00727|binding|INFO|Releasing lport 0b9d56f1-a803-44f1-b709-3bfbc71e0f57 from this chassis (sb_readonly=0)
Jan 31 03:28:21 np0005603609 ovn_controller[130359]: 2026-01-31T08:28:21Z|00728|binding|INFO|Releasing lport 6550f58c-c002-4c91-85a2-53b4c1c807b7 from this chassis (sb_readonly=0)
Jan 31 03:28:21 np0005603609 ovn_controller[130359]: 2026-01-31T08:28:21Z|00729|binding|INFO|Releasing lport 003d1f0e-744f-4244-8c9f-3a9be6033652 from this chassis (sb_readonly=0)
Jan 31 03:28:21 np0005603609 nova_compute[221550]: 2026-01-31 08:28:21.120 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:21 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 31 03:28:21 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 31 03:28:21 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 31 03:28:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:22.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:22.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.192 221554 DEBUG oslo_concurrency.lockutils [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "45129920-90ba-447c-9248-9aab48f34863" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.192 221554 DEBUG oslo_concurrency.lockutils [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "45129920-90ba-447c-9248-9aab48f34863" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.193 221554 DEBUG oslo_concurrency.lockutils [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "45129920-90ba-447c-9248-9aab48f34863-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.193 221554 DEBUG oslo_concurrency.lockutils [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "45129920-90ba-447c-9248-9aab48f34863-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.193 221554 DEBUG oslo_concurrency.lockutils [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "45129920-90ba-447c-9248-9aab48f34863-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.194 221554 INFO nova.compute.manager [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Terminating instance
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.195 221554 DEBUG nova.compute.manager [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 03:28:23 np0005603609 kernel: tap25a0d5fc-40 (unregistering): left promiscuous mode
Jan 31 03:28:23 np0005603609 NetworkManager[49064]: <info>  [1769848103.4733] device (tap25a0d5fc-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:28:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:28:23Z|00730|binding|INFO|Releasing lport 25a0d5fc-40fd-4968-b00e-927b52f03c49 from this chassis (sb_readonly=0)
Jan 31 03:28:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:28:23Z|00731|binding|INFO|Setting lport 25a0d5fc-40fd-4968-b00e-927b52f03c49 down in Southbound
Jan 31 03:28:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:28:23Z|00732|binding|INFO|Removing iface tap25a0d5fc-40 ovn-installed in OVS
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.482 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:23.500 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:91:6d 10.100.0.10'], port_security=['fa:16:3e:aa:91:6d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '45129920-90ba-447c-9248-9aab48f34863', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e26a2af1-a850-4885-977e-596b6be13fb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '418d5319c640455ab23850c0b0f24f92', 'neutron:revision_number': '4', 'neutron:security_group_ids': '58565303-90e4-4587-bbe2-2f31600320f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=068708bd-dd36-4d03-9d65-912eb9981ecc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=25a0d5fc-40fd-4968-b00e-927b52f03c49) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:28:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:23.501 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 25a0d5fc-40fd-4968-b00e-927b52f03c49 in datapath e26a2af1-a850-4885-977e-596b6be13fb8 unbound from our chassis
Jan 31 03:28:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:23.504 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e26a2af1-a850-4885-977e-596b6be13fb8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 03:28:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:23.505 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d1775fc5-b142-4a6e-b451-a28007b12d58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:28:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:23.505 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 namespace which is not needed anymore
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.506 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:23 np0005603609 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000a6.scope: Deactivated successfully.
Jan 31 03:28:23 np0005603609 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000a6.scope: Consumed 14.410s CPU time.
Jan 31 03:28:23 np0005603609 systemd-machined[190912]: Machine qemu-89-instance-000000a6 terminated.
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.624 221554 INFO nova.virt.libvirt.driver [-] [instance: 45129920-90ba-447c-9248-9aab48f34863] Instance destroyed successfully.
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.625 221554 DEBUG nova.objects.instance [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'resources' on Instance uuid 45129920-90ba-447c-9248-9aab48f34863 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.655 221554 DEBUG nova.virt.libvirt.vif [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:27:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-2114237403',display_name='tempest-AttachVolumeNegativeTest-server-2114237403',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-2114237403',id=166,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK17LKHy2ovd//Jqy0J2nwL60uQ9q4IySU1Ezf+7WaNwMWYjskeukYZwUKTjOAdK1+n81ISTjlIwKtHUebZBUbDI4o6AovzGtFFGwJogTo+E0dcJguncmjfdCozMj+dKnA==',key_name='tempest-keypair-876032082',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:27:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='418d5319c640455ab23850c0b0f24f92',ramdisk_id='',reservation_id='r-97k154mv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-562353674',owner_user_name='tempest-AttachVolumeNegativeTest-562353674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:27:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48d684de9ba340f48e249b4cce857bfa',uuid=45129920-90ba-447c-9248-9aab48f34863,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "address": "fa:16:3e:aa:91:6d", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25a0d5fc-40", "ovs_interfaceid": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.656 221554 DEBUG nova.network.os_vif_util [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converting VIF {"id": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "address": "fa:16:3e:aa:91:6d", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25a0d5fc-40", "ovs_interfaceid": "25a0d5fc-40fd-4968-b00e-927b52f03c49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.656 221554 DEBUG nova.network.os_vif_util [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:91:6d,bridge_name='br-int',has_traffic_filtering=True,id=25a0d5fc-40fd-4968-b00e-927b52f03c49,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25a0d5fc-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.657 221554 DEBUG os_vif [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:91:6d,bridge_name='br-int',has_traffic_filtering=True,id=25a0d5fc-40fd-4968-b00e-927b52f03c49,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25a0d5fc-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.658 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.659 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25a0d5fc-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.660 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.662 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:23 np0005603609 nova_compute[221550]: 2026-01-31 08:28:23.665 221554 INFO os_vif [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:91:6d,bridge_name='br-int',has_traffic_filtering=True,id=25a0d5fc-40fd-4968-b00e-927b52f03c49,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25a0d5fc-40')
Jan 31 03:28:23 np0005603609 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[289367]: [NOTICE]   (289371) : haproxy version is 2.8.14-c23fe91
Jan 31 03:28:23 np0005603609 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[289367]: [NOTICE]   (289371) : path to executable is /usr/sbin/haproxy
Jan 31 03:28:23 np0005603609 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[289367]: [WARNING]  (289371) : Exiting Master process...
Jan 31 03:28:23 np0005603609 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[289367]: [ALERT]    (289371) : Current worker (289373) exited with code 143 (Terminated)
Jan 31 03:28:23 np0005603609 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[289367]: [WARNING]  (289371) : All workers exited. Exiting... (0)
Jan 31 03:28:23 np0005603609 systemd[1]: libpod-cd374c9286385a5922b903a061d3a5e01066e8379b9a4927eb265ccd1413e3d4.scope: Deactivated successfully.
Jan 31 03:28:23 np0005603609 podman[290054]: 2026-01-31 08:28:23.749114143 +0000 UTC m=+0.173254481 container died cd374c9286385a5922b903a061d3a5e01066e8379b9a4927eb265ccd1413e3d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:28:24 np0005603609 nova_compute[221550]: 2026-01-31 08:28:24.008 221554 DEBUG nova.compute.manager [req-8ed12914-25b7-4880-a871-a45f4db45529 req-2e69c68e-9cca-4a2f-b6e5-cd3a01a62b79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Received event network-vif-unplugged-25a0d5fc-40fd-4968-b00e-927b52f03c49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:28:24 np0005603609 nova_compute[221550]: 2026-01-31 08:28:24.008 221554 DEBUG oslo_concurrency.lockutils [req-8ed12914-25b7-4880-a871-a45f4db45529 req-2e69c68e-9cca-4a2f-b6e5-cd3a01a62b79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "45129920-90ba-447c-9248-9aab48f34863-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:28:24 np0005603609 nova_compute[221550]: 2026-01-31 08:28:24.008 221554 DEBUG oslo_concurrency.lockutils [req-8ed12914-25b7-4880-a871-a45f4db45529 req-2e69c68e-9cca-4a2f-b6e5-cd3a01a62b79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "45129920-90ba-447c-9248-9aab48f34863-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:28:24 np0005603609 nova_compute[221550]: 2026-01-31 08:28:24.009 221554 DEBUG oslo_concurrency.lockutils [req-8ed12914-25b7-4880-a871-a45f4db45529 req-2e69c68e-9cca-4a2f-b6e5-cd3a01a62b79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "45129920-90ba-447c-9248-9aab48f34863-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:28:24 np0005603609 nova_compute[221550]: 2026-01-31 08:28:24.009 221554 DEBUG nova.compute.manager [req-8ed12914-25b7-4880-a871-a45f4db45529 req-2e69c68e-9cca-4a2f-b6e5-cd3a01a62b79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] No waiting events found dispatching network-vif-unplugged-25a0d5fc-40fd-4968-b00e-927b52f03c49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:28:24 np0005603609 nova_compute[221550]: 2026-01-31 08:28:24.009 221554 DEBUG nova.compute.manager [req-8ed12914-25b7-4880-a871-a45f4db45529 req-2e69c68e-9cca-4a2f-b6e5-cd3a01a62b79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Received event network-vif-unplugged-25a0d5fc-40fd-4968-b00e-927b52f03c49 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 03:28:24 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd374c9286385a5922b903a061d3a5e01066e8379b9a4927eb265ccd1413e3d4-userdata-shm.mount: Deactivated successfully.
Jan 31 03:28:24 np0005603609 systemd[1]: var-lib-containers-storage-overlay-89928bec84fc7c1274516f0682ccb0a96f064a7937acad04d7f91fa9335536c9-merged.mount: Deactivated successfully.
Jan 31 03:28:24 np0005603609 podman[290054]: 2026-01-31 08:28:24.501227365 +0000 UTC m=+0.925367703 container cleanup cd374c9286385a5922b903a061d3a5e01066e8379b9a4927eb265ccd1413e3d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:28:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:24.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:24.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:25 np0005603609 podman[290111]: 2026-01-31 08:28:25.074517951 +0000 UTC m=+0.556008732 container remove cd374c9286385a5922b903a061d3a5e01066e8379b9a4927eb265ccd1413e3d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 03:28:25 np0005603609 systemd[1]: libpod-conmon-cd374c9286385a5922b903a061d3a5e01066e8379b9a4927eb265ccd1413e3d4.scope: Deactivated successfully.
Jan 31 03:28:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:25.080 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1d6c8341-e029-4ff7-8e23-f397e07ae2a7]: (4, ('Sat Jan 31 08:28:23 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 (cd374c9286385a5922b903a061d3a5e01066e8379b9a4927eb265ccd1413e3d4)\ncd374c9286385a5922b903a061d3a5e01066e8379b9a4927eb265ccd1413e3d4\nSat Jan 31 08:28:24 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 (cd374c9286385a5922b903a061d3a5e01066e8379b9a4927eb265ccd1413e3d4)\ncd374c9286385a5922b903a061d3a5e01066e8379b9a4927eb265ccd1413e3d4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:28:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:25.083 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[79718cf2-8ead-4483-ba37-419f0650c6bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:28:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:25.085 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape26a2af1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:28:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e363 e363: 3 total, 3 up, 3 in
Jan 31 03:28:25 np0005603609 nova_compute[221550]: 2026-01-31 08:28:25.120 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:25 np0005603609 kernel: tape26a2af1-a0: left promiscuous mode
Jan 31 03:28:25 np0005603609 nova_compute[221550]: 2026-01-31 08:28:25.125 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:25.129 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[990ecd00-2d29-4209-8b89-238fe701d657]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:25 np0005603609 nova_compute[221550]: 2026-01-31 08:28:25.131 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:25.150 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[677b8e33-b62e-4e4f-a7ee-946e530987f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:25.152 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e107c1ad-85ff-47eb-8529-95931b603d4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:25.171 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e25778f8-43f7-400c-92de-9db6eeffd4ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 830000, 'reachable_time': 27426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290127, 'error': None, 'target': 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:25 np0005603609 systemd[1]: run-netns-ovnmeta\x2de26a2af1\x2da850\x2d4885\x2d977e\x2d596b6be13fb8.mount: Deactivated successfully.
Jan 31 03:28:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:25.176 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:28:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:28:25.176 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[a0cb1553-3472-4d7f-96d9-9b62960e42ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:25 np0005603609 nova_compute[221550]: 2026-01-31 08:28:25.655 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:26 np0005603609 nova_compute[221550]: 2026-01-31 08:28:26.180 221554 DEBUG nova.compute.manager [req-b005fa0e-4c96-4ec2-80a4-98a645127b9d req-42a54455-bd0d-47e5-ae73-ab9e0cf11764 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Received event network-vif-plugged-25a0d5fc-40fd-4968-b00e-927b52f03c49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:28:26 np0005603609 nova_compute[221550]: 2026-01-31 08:28:26.180 221554 DEBUG oslo_concurrency.lockutils [req-b005fa0e-4c96-4ec2-80a4-98a645127b9d req-42a54455-bd0d-47e5-ae73-ab9e0cf11764 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "45129920-90ba-447c-9248-9aab48f34863-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:26 np0005603609 nova_compute[221550]: 2026-01-31 08:28:26.180 221554 DEBUG oslo_concurrency.lockutils [req-b005fa0e-4c96-4ec2-80a4-98a645127b9d req-42a54455-bd0d-47e5-ae73-ab9e0cf11764 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "45129920-90ba-447c-9248-9aab48f34863-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:26 np0005603609 nova_compute[221550]: 2026-01-31 08:28:26.181 221554 DEBUG oslo_concurrency.lockutils [req-b005fa0e-4c96-4ec2-80a4-98a645127b9d req-42a54455-bd0d-47e5-ae73-ab9e0cf11764 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "45129920-90ba-447c-9248-9aab48f34863-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:26 np0005603609 nova_compute[221550]: 2026-01-31 08:28:26.181 221554 DEBUG nova.compute.manager [req-b005fa0e-4c96-4ec2-80a4-98a645127b9d req-42a54455-bd0d-47e5-ae73-ab9e0cf11764 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] No waiting events found dispatching network-vif-plugged-25a0d5fc-40fd-4968-b00e-927b52f03c49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:28:26 np0005603609 nova_compute[221550]: 2026-01-31 08:28:26.181 221554 WARNING nova.compute.manager [req-b005fa0e-4c96-4ec2-80a4-98a645127b9d req-42a54455-bd0d-47e5-ae73-ab9e0cf11764 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Received unexpected event network-vif-plugged-25a0d5fc-40fd-4968-b00e-927b52f03c49 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:28:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:26.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:26.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:28:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:28.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:28:28 np0005603609 nova_compute[221550]: 2026-01-31 08:28:28.662 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:28.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:28:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:30.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:28:30 np0005603609 nova_compute[221550]: 2026-01-31 08:28:30.656 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:30.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:30 np0005603609 nova_compute[221550]: 2026-01-31 08:28:30.789 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:32.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:28:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:32.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:28:33 np0005603609 nova_compute[221550]: 2026-01-31 08:28:33.665 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:33 np0005603609 nova_compute[221550]: 2026-01-31 08:28:33.744 221554 INFO nova.virt.libvirt.driver [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Deleting instance files /var/lib/nova/instances/45129920-90ba-447c-9248-9aab48f34863_del#033[00m
Jan 31 03:28:33 np0005603609 nova_compute[221550]: 2026-01-31 08:28:33.745 221554 INFO nova.virt.libvirt.driver [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Deletion of /var/lib/nova/instances/45129920-90ba-447c-9248-9aab48f34863_del complete#033[00m
Jan 31 03:28:33 np0005603609 nova_compute[221550]: 2026-01-31 08:28:33.892 221554 INFO nova.compute.manager [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Took 10.70 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:28:33 np0005603609 nova_compute[221550]: 2026-01-31 08:28:33.892 221554 DEBUG oslo.service.loopingcall [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:28:33 np0005603609 nova_compute[221550]: 2026-01-31 08:28:33.892 221554 DEBUG nova.compute.manager [-] [instance: 45129920-90ba-447c-9248-9aab48f34863] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:28:33 np0005603609 nova_compute[221550]: 2026-01-31 08:28:33.892 221554 DEBUG nova.network.neutron [-] [instance: 45129920-90ba-447c-9248-9aab48f34863] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:28:34 np0005603609 ovn_controller[130359]: 2026-01-31T08:28:34Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:31:38:16 10.100.0.12
Jan 31 03:28:34 np0005603609 ovn_controller[130359]: 2026-01-31T08:28:34Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:31:38:16 10.100.0.12
Jan 31 03:28:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:34.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:34 np0005603609 nova_compute[221550]: 2026-01-31 08:28:34.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:28:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:34.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:28:35 np0005603609 nova_compute[221550]: 2026-01-31 08:28:35.673 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:28:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:36.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:28:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:36.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:37 np0005603609 nova_compute[221550]: 2026-01-31 08:28:37.358 221554 DEBUG nova.network.neutron [-] [instance: 45129920-90ba-447c-9248-9aab48f34863] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:28:37 np0005603609 nova_compute[221550]: 2026-01-31 08:28:37.455 221554 INFO nova.compute.manager [-] [instance: 45129920-90ba-447c-9248-9aab48f34863] Took 3.56 seconds to deallocate network for instance.#033[00m
Jan 31 03:28:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e364 e364: 3 total, 3 up, 3 in
Jan 31 03:28:37 np0005603609 nova_compute[221550]: 2026-01-31 08:28:37.576 221554 DEBUG oslo_concurrency.lockutils [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:37 np0005603609 nova_compute[221550]: 2026-01-31 08:28:37.577 221554 DEBUG oslo_concurrency.lockutils [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:37 np0005603609 nova_compute[221550]: 2026-01-31 08:28:37.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:37 np0005603609 nova_compute[221550]: 2026-01-31 08:28:37.703 221554 DEBUG oslo_concurrency.processutils [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:38 np0005603609 podman[290149]: 2026-01-31 08:28:38.191686673 +0000 UTC m=+0.073544078 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 03:28:38 np0005603609 podman[290151]: 2026-01-31 08:28:38.196486758 +0000 UTC m=+0.074383227 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:28:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:28:38 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3289421549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:28:38 np0005603609 nova_compute[221550]: 2026-01-31 08:28:38.250 221554 DEBUG oslo_concurrency.processutils [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:38 np0005603609 nova_compute[221550]: 2026-01-31 08:28:38.256 221554 DEBUG nova.compute.provider_tree [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:28:38 np0005603609 nova_compute[221550]: 2026-01-31 08:28:38.293 221554 DEBUG nova.scheduler.client.report [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:28:38 np0005603609 nova_compute[221550]: 2026-01-31 08:28:38.453 221554 DEBUG oslo_concurrency.lockutils [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:38 np0005603609 nova_compute[221550]: 2026-01-31 08:28:38.591 221554 INFO nova.scheduler.client.report [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Deleted allocations for instance 45129920-90ba-447c-9248-9aab48f34863#033[00m
Jan 31 03:28:38 np0005603609 nova_compute[221550]: 2026-01-31 08:28:38.622 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848103.621499, 45129920-90ba-447c-9248-9aab48f34863 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:28:38 np0005603609 nova_compute[221550]: 2026-01-31 08:28:38.623 221554 INFO nova.compute.manager [-] [instance: 45129920-90ba-447c-9248-9aab48f34863] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:28:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:38.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:38 np0005603609 nova_compute[221550]: 2026-01-31 08:28:38.667 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:38.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:38 np0005603609 nova_compute[221550]: 2026-01-31 08:28:38.738 221554 DEBUG nova.compute.manager [req-8f569638-ac4e-4bea-b2e0-b54f0a056d59 req-1db091ac-7fd8-44f6-b248-9585efa2fdbb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 45129920-90ba-447c-9248-9aab48f34863] Received event network-vif-deleted-25a0d5fc-40fd-4968-b00e-927b52f03c49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:28:38 np0005603609 nova_compute[221550]: 2026-01-31 08:28:38.778 221554 DEBUG nova.compute.manager [None req-df4be3ee-3e15-4c2a-a11b-c4cf965dd178 - - - - - -] [instance: 45129920-90ba-447c-9248-9aab48f34863] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:28:38 np0005603609 nova_compute[221550]: 2026-01-31 08:28:38.826 221554 DEBUG oslo_concurrency.lockutils [None req-6749cf4b-4f74-4860-a782-ddd03c984ed0 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "45129920-90ba-447c-9248-9aab48f34863" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 15.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:40.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:40 np0005603609 nova_compute[221550]: 2026-01-31 08:28:40.674 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:40.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:41 np0005603609 nova_compute[221550]: 2026-01-31 08:28:41.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:42 np0005603609 nova_compute[221550]: 2026-01-31 08:28:42.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:42 np0005603609 nova_compute[221550]: 2026-01-31 08:28:42.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:28:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:28:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:42.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:28:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:42.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:42 np0005603609 nova_compute[221550]: 2026-01-31 08:28:42.862 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:28:43 np0005603609 nova_compute[221550]: 2026-01-31 08:28:43.700 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:28:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:44.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:28:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:28:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:44.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:28:45 np0005603609 nova_compute[221550]: 2026-01-31 08:28:45.677 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:46 np0005603609 nova_compute[221550]: 2026-01-31 08:28:46.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:46 np0005603609 nova_compute[221550]: 2026-01-31 08:28:46.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:28:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:46.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:46.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:47 np0005603609 nova_compute[221550]: 2026-01-31 08:28:47.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:47 np0005603609 nova_compute[221550]: 2026-01-31 08:28:47.699 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:47 np0005603609 nova_compute[221550]: 2026-01-31 08:28:47.751 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:47 np0005603609 nova_compute[221550]: 2026-01-31 08:28:47.752 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:47 np0005603609 nova_compute[221550]: 2026-01-31 08:28:47.752 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:47 np0005603609 nova_compute[221550]: 2026-01-31 08:28:47.753 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:28:47 np0005603609 nova_compute[221550]: 2026-01-31 08:28:47.753 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:28:48 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1814181840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:28:48 np0005603609 nova_compute[221550]: 2026-01-31 08:28:48.522 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.769s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:28:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:48.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:28:48 np0005603609 nova_compute[221550]: 2026-01-31 08:28:48.703 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:28:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:48.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:28:48 np0005603609 nova_compute[221550]: 2026-01-31 08:28:48.947 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:28:48 np0005603609 nova_compute[221550]: 2026-01-31 08:28:48.947 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:28:48 np0005603609 nova_compute[221550]: 2026-01-31 08:28:48.950 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:28:48 np0005603609 nova_compute[221550]: 2026-01-31 08:28:48.950 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:28:49 np0005603609 nova_compute[221550]: 2026-01-31 08:28:49.085 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:28:49 np0005603609 nova_compute[221550]: 2026-01-31 08:28:49.085 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3878MB free_disk=20.6937255859375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:28:49 np0005603609 nova_compute[221550]: 2026-01-31 08:28:49.086 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:49 np0005603609 nova_compute[221550]: 2026-01-31 08:28:49.086 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:49 np0005603609 nova_compute[221550]: 2026-01-31 08:28:49.410 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 8bf29da0-2e1a-4e89-bce8-b293a938c742 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:28:49 np0005603609 nova_compute[221550]: 2026-01-31 08:28:49.410 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 20d5fd84-de90-46b0-816e-0f378fd7d0c7 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:28:49 np0005603609 nova_compute[221550]: 2026-01-31 08:28:49.411 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:28:49 np0005603609 nova_compute[221550]: 2026-01-31 08:28:49.411 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:28:49 np0005603609 nova_compute[221550]: 2026-01-31 08:28:49.579 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:28:49 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2216114932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:28:50 np0005603609 nova_compute[221550]: 2026-01-31 08:28:50.000 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:50 np0005603609 nova_compute[221550]: 2026-01-31 08:28:50.004 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:28:50 np0005603609 nova_compute[221550]: 2026-01-31 08:28:50.092 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:28:50 np0005603609 nova_compute[221550]: 2026-01-31 08:28:50.137 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:28:50 np0005603609 nova_compute[221550]: 2026-01-31 08:28:50.138 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:50 np0005603609 nova_compute[221550]: 2026-01-31 08:28:50.679 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:50.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:50.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:52.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:52.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:53 np0005603609 nova_compute[221550]: 2026-01-31 08:28:53.138 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:53 np0005603609 nova_compute[221550]: 2026-01-31 08:28:53.139 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:53 np0005603609 nova_compute[221550]: 2026-01-31 08:28:53.139 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:28:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3992570297' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:28:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:28:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3992570297' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:28:53 np0005603609 nova_compute[221550]: 2026-01-31 08:28:53.704 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:54.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:54.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:55 np0005603609 nova_compute[221550]: 2026-01-31 08:28:55.681 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:28:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:56.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:28:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:28:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:56.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:28:57 np0005603609 nova_compute[221550]: 2026-01-31 08:28:57.134 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:57 np0005603609 nova_compute[221550]: 2026-01-31 08:28:57.134 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:57 np0005603609 nova_compute[221550]: 2026-01-31 08:28:57.264 221554 DEBUG nova.compute.manager [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:28:57 np0005603609 nova_compute[221550]: 2026-01-31 08:28:57.505 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:57 np0005603609 nova_compute[221550]: 2026-01-31 08:28:57.505 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:57 np0005603609 nova_compute[221550]: 2026-01-31 08:28:57.511 221554 DEBUG nova.virt.hardware [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:28:57 np0005603609 nova_compute[221550]: 2026-01-31 08:28:57.511 221554 INFO nova.compute.claims [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:28:57 np0005603609 nova_compute[221550]: 2026-01-31 08:28:57.880 221554 DEBUG oslo_concurrency.processutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:28:58 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3570348620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:28:58 np0005603609 nova_compute[221550]: 2026-01-31 08:28:58.308 221554 DEBUG oslo_concurrency.processutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:28:58 np0005603609 nova_compute[221550]: 2026-01-31 08:28:58.313 221554 DEBUG nova.compute.provider_tree [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:28:58 np0005603609 nova_compute[221550]: 2026-01-31 08:28:58.347 221554 DEBUG nova.scheduler.client.report [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:28:58 np0005603609 nova_compute[221550]: 2026-01-31 08:28:58.434 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:28:58 np0005603609 nova_compute[221550]: 2026-01-31 08:28:58.435 221554 DEBUG nova.compute.manager [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:28:58 np0005603609 nova_compute[221550]: 2026-01-31 08:28:58.626 221554 DEBUG nova.compute.manager [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:28:58 np0005603609 nova_compute[221550]: 2026-01-31 08:28:58.626 221554 DEBUG nova.network.neutron [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:28:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:58.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:28:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:58.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:58 np0005603609 nova_compute[221550]: 2026-01-31 08:28:58.745 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:58 np0005603609 nova_compute[221550]: 2026-01-31 08:28:58.813 221554 INFO nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:28:58 np0005603609 nova_compute[221550]: 2026-01-31 08:28:58.894 221554 DEBUG nova.compute.manager [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:28:59 np0005603609 nova_compute[221550]: 2026-01-31 08:28:59.231 221554 DEBUG nova.compute.manager [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 03:28:59 np0005603609 nova_compute[221550]: 2026-01-31 08:28:59.232 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:28:59 np0005603609 nova_compute[221550]: 2026-01-31 08:28:59.232 221554 INFO nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Creating image(s)
Jan 31 03:28:59 np0005603609 nova_compute[221550]: 2026-01-31 08:28:59.259 221554 DEBUG nova.storage.rbd_utils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 2b24a8d0-ad95-4460-acf1-0acb658330aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:28:59 np0005603609 nova_compute[221550]: 2026-01-31 08:28:59.285 221554 DEBUG nova.storage.rbd_utils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 2b24a8d0-ad95-4460-acf1-0acb658330aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:28:59 np0005603609 nova_compute[221550]: 2026-01-31 08:28:59.310 221554 DEBUG nova.storage.rbd_utils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 2b24a8d0-ad95-4460-acf1-0acb658330aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:28:59 np0005603609 nova_compute[221550]: 2026-01-31 08:28:59.313 221554 DEBUG oslo_concurrency.processutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:28:59 np0005603609 nova_compute[221550]: 2026-01-31 08:28:59.363 221554 DEBUG oslo_concurrency.processutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:28:59 np0005603609 nova_compute[221550]: 2026-01-31 08:28:59.364 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:28:59 np0005603609 nova_compute[221550]: 2026-01-31 08:28:59.364 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:28:59 np0005603609 nova_compute[221550]: 2026-01-31 08:28:59.364 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:28:59 np0005603609 nova_compute[221550]: 2026-01-31 08:28:59.390 221554 DEBUG nova.storage.rbd_utils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 2b24a8d0-ad95-4460-acf1-0acb658330aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:28:59 np0005603609 nova_compute[221550]: 2026-01-31 08:28:59.393 221554 DEBUG oslo_concurrency.processutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 2b24a8d0-ad95-4460-acf1-0acb658330aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:28:59 np0005603609 nova_compute[221550]: 2026-01-31 08:28:59.781 221554 DEBUG nova.policy [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d0e9d918b4041fabd5ded633b4cf404', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9710f0cf77d84353ae13fa47922b085d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:28:59 np0005603609 nova_compute[221550]: 2026-01-31 08:28:59.859 221554 DEBUG oslo_concurrency.processutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 2b24a8d0-ad95-4460-acf1-0acb658330aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:28:59 np0005603609 nova_compute[221550]: 2026-01-31 08:28:59.925 221554 DEBUG nova.storage.rbd_utils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] resizing rbd image 2b24a8d0-ad95-4460-acf1-0acb658330aa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 03:29:00 np0005603609 nova_compute[221550]: 2026-01-31 08:29:00.082 221554 DEBUG nova.objects.instance [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'migration_context' on Instance uuid 2b24a8d0-ad95-4460-acf1-0acb658330aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:29:00 np0005603609 nova_compute[221550]: 2026-01-31 08:29:00.138 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 03:29:00 np0005603609 nova_compute[221550]: 2026-01-31 08:29:00.139 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Ensure instance console log exists: /var/lib/nova/instances/2b24a8d0-ad95-4460-acf1-0acb658330aa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 03:29:00 np0005603609 nova_compute[221550]: 2026-01-31 08:29:00.139 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:29:00 np0005603609 nova_compute[221550]: 2026-01-31 08:29:00.140 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:29:00 np0005603609 nova_compute[221550]: 2026-01-31 08:29:00.140 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:29:00 np0005603609 nova_compute[221550]: 2026-01-31 08:29:00.684 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:29:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:29:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:00.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:29:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:00.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:02.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:02.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:03 np0005603609 nova_compute[221550]: 2026-01-31 08:29:03.748 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:29:04 np0005603609 nova_compute[221550]: 2026-01-31 08:29:04.359 221554 DEBUG nova.network.neutron [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Successfully created port: 324249e6-6299-4722-a570-3439880bde1f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 03:29:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:29:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:04.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:29:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:04.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:05 np0005603609 nova_compute[221550]: 2026-01-31 08:29:05.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:29:05 np0005603609 nova_compute[221550]: 2026-01-31 08:29:05.686 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:29:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:06.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:29:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:06.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:29:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:07.521 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:29:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:07.522 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:29:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:07.523 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:29:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:08.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:08.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:08 np0005603609 nova_compute[221550]: 2026-01-31 08:29:08.751 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:29:09 np0005603609 podman[290433]: 2026-01-31 08:29:09.161523882 +0000 UTC m=+0.040646617 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:29:09 np0005603609 podman[290432]: 2026-01-31 08:29:09.209691168 +0000 UTC m=+0.088549927 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 03:29:10 np0005603609 nova_compute[221550]: 2026-01-31 08:29:10.376 221554 DEBUG nova.network.neutron [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Successfully updated port: 324249e6-6299-4722-a570-3439880bde1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 03:29:10 np0005603609 nova_compute[221550]: 2026-01-31 08:29:10.510 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:29:10 np0005603609 nova_compute[221550]: 2026-01-31 08:29:10.511 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:29:10 np0005603609 nova_compute[221550]: 2026-01-31 08:29:10.511 221554 DEBUG nova.network.neutron [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 03:29:10 np0005603609 nova_compute[221550]: 2026-01-31 08:29:10.688 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:29:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:10.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:29:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:10.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:29:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:29:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:29:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:29:10 np0005603609 nova_compute[221550]: 2026-01-31 08:29:10.981 221554 DEBUG nova.network.neutron [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 03:29:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:11 np0005603609 nova_compute[221550]: 2026-01-31 08:29:11.649 221554 DEBUG nova.compute.manager [req-c14aaba0-38b2-4325-9c90-213d123a0e79 req-084f31d1-2268-4293-b478-f847a04a7c07 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-changed-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:29:11 np0005603609 nova_compute[221550]: 2026-01-31 08:29:11.649 221554 DEBUG nova.compute.manager [req-c14aaba0-38b2-4325-9c90-213d123a0e79 req-084f31d1-2268-4293-b478-f847a04a7c07 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Refreshing instance network info cache due to event network-changed-324249e6-6299-4722-a570-3439880bde1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:29:11 np0005603609 nova_compute[221550]: 2026-01-31 08:29:11.650 221554 DEBUG oslo_concurrency.lockutils [req-c14aaba0-38b2-4325-9c90-213d123a0e79 req-084f31d1-2268-4293-b478-f847a04a7c07 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:29:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:11.720 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:29:11 np0005603609 nova_compute[221550]: 2026-01-31 08:29:11.720 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:29:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:11.721 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 03:29:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:29:12 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/503516173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:29:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:12.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:12.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:12 np0005603609 nova_compute[221550]: 2026-01-31 08:29:12.950 221554 DEBUG nova.network.neutron [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Updating instance_info_cache with network_info: [{"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.446 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.447 221554 DEBUG nova.compute.manager [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Instance network_info: |[{"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.447 221554 DEBUG oslo_concurrency.lockutils [req-c14aaba0-38b2-4325-9c90-213d123a0e79 req-084f31d1-2268-4293-b478-f847a04a7c07 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.448 221554 DEBUG nova.network.neutron [req-c14aaba0-38b2-4325-9c90-213d123a0e79 req-084f31d1-2268-4293-b478-f847a04a7c07 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Refreshing network info cache for port 324249e6-6299-4722-a570-3439880bde1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.454 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Start _get_guest_xml network_info=[{"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.460 221554 WARNING nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.483 221554 DEBUG nova.virt.libvirt.host [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.483 221554 DEBUG nova.virt.libvirt.host [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.488 221554 DEBUG nova.virt.libvirt.host [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.488 221554 DEBUG nova.virt.libvirt.host [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.490 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.490 221554 DEBUG nova.virt.hardware [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.490 221554 DEBUG nova.virt.hardware [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.491 221554 DEBUG nova.virt.hardware [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.491 221554 DEBUG nova.virt.hardware [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.491 221554 DEBUG nova.virt.hardware [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.492 221554 DEBUG nova.virt.hardware [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.492 221554 DEBUG nova.virt.hardware [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.492 221554 DEBUG nova.virt.hardware [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.493 221554 DEBUG nova.virt.hardware [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.493 221554 DEBUG nova.virt.hardware [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.493 221554 DEBUG nova.virt.hardware [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.497 221554 DEBUG oslo_concurrency.processutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.755 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:29:13 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4283237309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.912 221554 DEBUG oslo_concurrency.processutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.940 221554 DEBUG nova.storage.rbd_utils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 2b24a8d0-ad95-4460-acf1-0acb658330aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:29:13 np0005603609 nova_compute[221550]: 2026-01-31 08:29:13.946 221554 DEBUG oslo_concurrency.processutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e365 e365: 3 total, 3 up, 3 in
Jan 31 03:29:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:29:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2624916612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.456 221554 DEBUG oslo_concurrency.processutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.458 221554 DEBUG nova.virt.libvirt.vif [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:28:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2146916992',display_name='tempest-TestNetworkAdvancedServerOps-server-2146916992',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2146916992',id=169,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK7T1FRM2hnvWHsS2vC2H2YEYiw3bA7e7xk8qp1+JMKzlq7jPy/PzZqxW2VC1KQOHQDuHMYvJPGgKGcZZL8ySdxPXOSaobgGLmos6mi9nlU6Dv+erNyHG2EMArHdJN1ryg==',key_name='tempest-TestNetworkAdvancedServerOps-694728097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-ah4zs948',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:28:58Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=2b24a8d0-ad95-4460-acf1-0acb658330aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.459 221554 DEBUG nova.network.os_vif_util [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.460 221554 DEBUG nova.network.os_vif_util [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:84:a5,bridge_name='br-int',has_traffic_filtering=True,id=324249e6-6299-4722-a570-3439880bde1f,network=Network(5c8cd691-2e19-4c04-b061-2e5304161623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap324249e6-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.461 221554 DEBUG nova.objects.instance [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b24a8d0-ad95-4460-acf1-0acb658330aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.507 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  <uuid>2b24a8d0-ad95-4460-acf1-0acb658330aa</uuid>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  <name>instance-000000a9</name>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-2146916992</nova:name>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:29:13</nova:creationTime>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        <nova:user uuid="4d0e9d918b4041fabd5ded633b4cf404">tempest-TestNetworkAdvancedServerOps-483180749-project-member</nova:user>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        <nova:project uuid="9710f0cf77d84353ae13fa47922b085d">tempest-TestNetworkAdvancedServerOps-483180749</nova:project>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        <nova:port uuid="324249e6-6299-4722-a570-3439880bde1f">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <entry name="serial">2b24a8d0-ad95-4460-acf1-0acb658330aa</entry>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <entry name="uuid">2b24a8d0-ad95-4460-acf1-0acb658330aa</entry>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/2b24a8d0-ad95-4460-acf1-0acb658330aa_disk">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/2b24a8d0-ad95-4460-acf1-0acb658330aa_disk.config">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:73:84:a5"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <target dev="tap324249e6-62"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/2b24a8d0-ad95-4460-acf1-0acb658330aa/console.log" append="off"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:29:14 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:29:14 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:29:14 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:29:14 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.509 221554 DEBUG nova.compute.manager [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Preparing to wait for external event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.509 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.509 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.510 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.511 221554 DEBUG nova.virt.libvirt.vif [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:28:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2146916992',display_name='tempest-TestNetworkAdvancedServerOps-server-2146916992',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2146916992',id=169,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK7T1FRM2hnvWHsS2vC2H2YEYiw3bA7e7xk8qp1+JMKzlq7jPy/PzZqxW2VC1KQOHQDuHMYvJPGgKGcZZL8ySdxPXOSaobgGLmos6mi9nlU6Dv+erNyHG2EMArHdJN1ryg==',key_name='tempest-TestNetworkAdvancedServerOps-694728097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-ah4zs948',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:28:58Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=2b24a8d0-ad95-4460-acf1-0acb658330aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.511 221554 DEBUG nova.network.os_vif_util [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.512 221554 DEBUG nova.network.os_vif_util [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:84:a5,bridge_name='br-int',has_traffic_filtering=True,id=324249e6-6299-4722-a570-3439880bde1f,network=Network(5c8cd691-2e19-4c04-b061-2e5304161623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap324249e6-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.512 221554 DEBUG os_vif [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:84:a5,bridge_name='br-int',has_traffic_filtering=True,id=324249e6-6299-4722-a570-3439880bde1f,network=Network(5c8cd691-2e19-4c04-b061-2e5304161623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap324249e6-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.513 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.514 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.514 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.519 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.519 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap324249e6-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.520 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap324249e6-62, col_values=(('external_ids', {'iface-id': '324249e6-6299-4722-a570-3439880bde1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:84:a5', 'vm-uuid': '2b24a8d0-ad95-4460-acf1-0acb658330aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:29:14 np0005603609 NetworkManager[49064]: <info>  [1769848154.5572] manager: (tap324249e6-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.556 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.562 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.564 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.565 221554 INFO os_vif [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:84:a5,bridge_name='br-int',has_traffic_filtering=True,id=324249e6-6299-4722-a570-3439880bde1f,network=Network(5c8cd691-2e19-4c04-b061-2e5304161623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap324249e6-62')#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.649 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.649 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.649 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No VIF found with MAC fa:16:3e:73:84:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.650 221554 INFO nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Using config drive#033[00m
Jan 31 03:29:14 np0005603609 nova_compute[221550]: 2026-01-31 08:29:14.678 221554 DEBUG nova.storage.rbd_utils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 2b24a8d0-ad95-4460-acf1-0acb658330aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:29:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:14.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:29:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:14.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:29:15 np0005603609 nova_compute[221550]: 2026-01-31 08:29:15.360 221554 INFO nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Creating config drive at /var/lib/nova/instances/2b24a8d0-ad95-4460-acf1-0acb658330aa/disk.config#033[00m
Jan 31 03:29:15 np0005603609 nova_compute[221550]: 2026-01-31 08:29:15.364 221554 DEBUG oslo_concurrency.processutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b24a8d0-ad95-4460-acf1-0acb658330aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpetxd_dcd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:15 np0005603609 nova_compute[221550]: 2026-01-31 08:29:15.408 221554 DEBUG nova.network.neutron [req-c14aaba0-38b2-4325-9c90-213d123a0e79 req-084f31d1-2268-4293-b478-f847a04a7c07 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Updated VIF entry in instance network info cache for port 324249e6-6299-4722-a570-3439880bde1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:29:15 np0005603609 nova_compute[221550]: 2026-01-31 08:29:15.409 221554 DEBUG nova.network.neutron [req-c14aaba0-38b2-4325-9c90-213d123a0e79 req-084f31d1-2268-4293-b478-f847a04a7c07 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Updating instance_info_cache with network_info: [{"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:29:15 np0005603609 nova_compute[221550]: 2026-01-31 08:29:15.452 221554 DEBUG oslo_concurrency.lockutils [req-c14aaba0-38b2-4325-9c90-213d123a0e79 req-084f31d1-2268-4293-b478-f847a04a7c07 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:29:15 np0005603609 nova_compute[221550]: 2026-01-31 08:29:15.501 221554 DEBUG oslo_concurrency.processutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b24a8d0-ad95-4460-acf1-0acb658330aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpetxd_dcd" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:15 np0005603609 nova_compute[221550]: 2026-01-31 08:29:15.524 221554 DEBUG nova.storage.rbd_utils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 2b24a8d0-ad95-4460-acf1-0acb658330aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:29:15 np0005603609 nova_compute[221550]: 2026-01-31 08:29:15.527 221554 DEBUG oslo_concurrency.processutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b24a8d0-ad95-4460-acf1-0acb658330aa/disk.config 2b24a8d0-ad95-4460-acf1-0acb658330aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:15 np0005603609 nova_compute[221550]: 2026-01-31 08:29:15.690 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:15 np0005603609 nova_compute[221550]: 2026-01-31 08:29:15.876 221554 DEBUG oslo_concurrency.processutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b24a8d0-ad95-4460-acf1-0acb658330aa/disk.config 2b24a8d0-ad95-4460-acf1-0acb658330aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:15 np0005603609 nova_compute[221550]: 2026-01-31 08:29:15.877 221554 INFO nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Deleting local config drive /var/lib/nova/instances/2b24a8d0-ad95-4460-acf1-0acb658330aa/disk.config because it was imported into RBD.#033[00m
Jan 31 03:29:15 np0005603609 kernel: tap324249e6-62: entered promiscuous mode
Jan 31 03:29:15 np0005603609 NetworkManager[49064]: <info>  [1769848155.9333] manager: (tap324249e6-62): new Tun device (/org/freedesktop/NetworkManager/Devices/342)
Jan 31 03:29:15 np0005603609 ovn_controller[130359]: 2026-01-31T08:29:15Z|00733|binding|INFO|Claiming lport 324249e6-6299-4722-a570-3439880bde1f for this chassis.
Jan 31 03:29:15 np0005603609 ovn_controller[130359]: 2026-01-31T08:29:15Z|00734|binding|INFO|324249e6-6299-4722-a570-3439880bde1f: Claiming fa:16:3e:73:84:a5 10.100.0.7
Jan 31 03:29:15 np0005603609 nova_compute[221550]: 2026-01-31 08:29:15.934 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:15 np0005603609 ovn_controller[130359]: 2026-01-31T08:29:15Z|00735|binding|INFO|Setting lport 324249e6-6299-4722-a570-3439880bde1f ovn-installed in OVS
Jan 31 03:29:15 np0005603609 nova_compute[221550]: 2026-01-31 08:29:15.940 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:15 np0005603609 nova_compute[221550]: 2026-01-31 08:29:15.943 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:15 np0005603609 systemd-udevd[290743]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:29:15 np0005603609 systemd-machined[190912]: New machine qemu-91-instance-000000a9.
Jan 31 03:29:15 np0005603609 NetworkManager[49064]: <info>  [1769848155.9786] device (tap324249e6-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:29:15 np0005603609 NetworkManager[49064]: <info>  [1769848155.9792] device (tap324249e6-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:29:15 np0005603609 systemd[1]: Started Virtual Machine qemu-91-instance-000000a9.
Jan 31 03:29:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.048 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:84:a5 10.100.0.7'], port_security=['fa:16:3e:73:84:a5 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2b24a8d0-ad95-4460-acf1-0acb658330aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c8cd691-2e19-4c04-b061-2e5304161623', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0e11440e-2192-492f-a554-7817b0fd324e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0bee9a73-d2c9-492b-9a5b-302cf0b7a5b8, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=324249e6-6299-4722-a570-3439880bde1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:29:16 np0005603609 ovn_controller[130359]: 2026-01-31T08:29:16Z|00736|binding|INFO|Setting lport 324249e6-6299-4722-a570-3439880bde1f up in Southbound
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.049 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 324249e6-6299-4722-a570-3439880bde1f in datapath 5c8cd691-2e19-4c04-b061-2e5304161623 bound to our chassis#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.051 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c8cd691-2e19-4c04-b061-2e5304161623#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.061 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[23af9ec5-0dac-4513-b6e0-97db72048b4b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.063 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c8cd691-21 in ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.065 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c8cd691-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.065 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[dc0d3516-691b-4cd5-994a-2d33bdf57378]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.066 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0837882b-6694-4c07-9d79-9d465eb06312]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.076 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[7926fb03-27bb-495a-8460-55ef790e8068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.090 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c5830288-41f4-4283-86c3-9ea6f167f9c2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.121 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[8981e0a8-c34a-4166-a823-cf310ab4c2ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:29:16 np0005603609 NetworkManager[49064]: <info>  [1769848156.1362] manager: (tap5c8cd691-20): new Veth device (/org/freedesktop/NetworkManager/Devices/343)
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.135 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[acbff774-a7cb-46e1-9db6-2cdba0f4e7e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:29:16 np0005603609 systemd-udevd[290746]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.165 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[16feb380-9613-4199-ad41-2f7e8d15cb21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.169 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[93ecd57f-3d76-4ca2-ac58-c53756c173d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:29:16 np0005603609 NetworkManager[49064]: <info>  [1769848156.1957] device (tap5c8cd691-20): carrier: link connected
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.203 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a57235ac-26e0-4476-805a-1b26f6e6cb38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.310 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[35eb3c7c-21ea-4c78-b2e9-43788b35e401]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c8cd691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:0e:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 839589, 'reachable_time': 39358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290796, 'error': None, 'target': 'ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.323 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0b453931-2dff-4ea0-9837-36787264d81f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:ed3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 839589, 'tstamp': 839589}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290817, 'error': None, 'target': 'ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.337 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb528f2-ffb0-4252-8d9a-6a0348f974fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c8cd691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:0e:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 839589, 'reachable_time': 39358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290818, 'error': None, 'target': 'ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.362 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1479fa9f-f89f-4e31-8707-d7cad01be8f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.410 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[34ea763f-17d8-4995-9bc5-247fd34a51ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.412 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c8cd691-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.414 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.415 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c8cd691-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:29:16 np0005603609 nova_compute[221550]: 2026-01-31 08:29:16.415 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848156.4151537, 2b24a8d0-ad95-4460-acf1-0acb658330aa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:29:16 np0005603609 nova_compute[221550]: 2026-01-31 08:29:16.416 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] VM Started (Lifecycle Event)#033[00m
Jan 31 03:29:16 np0005603609 NetworkManager[49064]: <info>  [1769848156.4173] manager: (tap5c8cd691-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/344)
Jan 31 03:29:16 np0005603609 kernel: tap5c8cd691-20: entered promiscuous mode
Jan 31 03:29:16 np0005603609 nova_compute[221550]: 2026-01-31 08:29:16.418 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.419 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c8cd691-20, col_values=(('external_ids', {'iface-id': '0e00cbca-3de8-4f0d-92ce-93192811a08b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:29:16 np0005603609 nova_compute[221550]: 2026-01-31 08:29:16.420 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:16 np0005603609 ovn_controller[130359]: 2026-01-31T08:29:16Z|00737|binding|INFO|Releasing lport 0e00cbca-3de8-4f0d-92ce-93192811a08b from this chassis (sb_readonly=0)
Jan 31 03:29:16 np0005603609 nova_compute[221550]: 2026-01-31 08:29:16.426 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.426 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c8cd691-2e19-4c04-b061-2e5304161623.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c8cd691-2e19-4c04-b061-2e5304161623.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.427 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e377b4d6-5261-4796-bdd6-8a5d85334db1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.428 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-5c8cd691-2e19-4c04-b061-2e5304161623
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/5c8cd691-2e19-4c04-b061-2e5304161623.pid.haproxy
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 5c8cd691-2e19-4c04-b061-2e5304161623
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:29:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:16.428 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623', 'env', 'PROCESS_TAG=haproxy-5c8cd691-2e19-4c04-b061-2e5304161623', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c8cd691-2e19-4c04-b061-2e5304161623.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:29:16 np0005603609 nova_compute[221550]: 2026-01-31 08:29:16.526 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:29:16 np0005603609 nova_compute[221550]: 2026-01-31 08:29:16.530 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848156.4153125, 2b24a8d0-ad95-4460-acf1-0acb658330aa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:29:16 np0005603609 nova_compute[221550]: 2026-01-31 08:29:16.530 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:29:16 np0005603609 nova_compute[221550]: 2026-01-31 08:29:16.629 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:29:16 np0005603609 nova_compute[221550]: 2026-01-31 08:29:16.633 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:29:16 np0005603609 nova_compute[221550]: 2026-01-31 08:29:16.697 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:29:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:16.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:16.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:16 np0005603609 podman[290856]: 2026-01-31 08:29:16.793401992 +0000 UTC m=+0.100057543 container create 1bf2f552d5e051eb693733445cd301652c498f1f9112f72b97cead42c18ee2b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:29:16 np0005603609 podman[290856]: 2026-01-31 08:29:16.710353528 +0000 UTC m=+0.017009099 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:29:16 np0005603609 systemd[1]: Started libpod-conmon-1bf2f552d5e051eb693733445cd301652c498f1f9112f72b97cead42c18ee2b8.scope.
Jan 31 03:29:16 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:29:16 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1060f7e99d35037030208d4d3a9c7a3f434c5c06a2fe06dccb6753558ff96325/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:16 np0005603609 podman[290856]: 2026-01-31 08:29:16.90492415 +0000 UTC m=+0.211579721 container init 1bf2f552d5e051eb693733445cd301652c498f1f9112f72b97cead42c18ee2b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:29:16 np0005603609 podman[290856]: 2026-01-31 08:29:16.91282224 +0000 UTC m=+0.219477831 container start 1bf2f552d5e051eb693733445cd301652c498f1f9112f72b97cead42c18ee2b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 03:29:16 np0005603609 neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623[290871]: [NOTICE]   (290875) : New worker (290877) forked
Jan 31 03:29:16 np0005603609 neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623[290871]: [NOTICE]   (290875) : Loading success.
Jan 31 03:29:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:29:17.723 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.168 221554 DEBUG nova.compute.manager [req-2ae95608-dd9b-4e00-a3e6-d8424320e666 req-67b61950-3ea9-48ba-be95-11affed584e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.169 221554 DEBUG oslo_concurrency.lockutils [req-2ae95608-dd9b-4e00-a3e6-d8424320e666 req-67b61950-3ea9-48ba-be95-11affed584e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.169 221554 DEBUG oslo_concurrency.lockutils [req-2ae95608-dd9b-4e00-a3e6-d8424320e666 req-67b61950-3ea9-48ba-be95-11affed584e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.169 221554 DEBUG oslo_concurrency.lockutils [req-2ae95608-dd9b-4e00-a3e6-d8424320e666 req-67b61950-3ea9-48ba-be95-11affed584e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.169 221554 DEBUG nova.compute.manager [req-2ae95608-dd9b-4e00-a3e6-d8424320e666 req-67b61950-3ea9-48ba-be95-11affed584e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Processing event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.170 221554 DEBUG nova.compute.manager [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.173 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848158.1734784, 2b24a8d0-ad95-4460-acf1-0acb658330aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.174 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.175 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.177 221554 INFO nova.virt.libvirt.driver [-] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Instance spawned successfully.#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.178 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.360 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.367 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.368 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.369 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.370 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.371 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.371 221554 DEBUG nova.virt.libvirt.driver [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.380 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:29:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:18.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:29:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:18.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:29:18 np0005603609 nova_compute[221550]: 2026-01-31 08:29:18.851 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:29:19 np0005603609 nova_compute[221550]: 2026-01-31 08:29:19.322 221554 INFO nova.compute.manager [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Took 20.09 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:29:19 np0005603609 nova_compute[221550]: 2026-01-31 08:29:19.323 221554 DEBUG nova.compute.manager [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:29:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:29:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:29:19 np0005603609 nova_compute[221550]: 2026-01-31 08:29:19.560 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:19 np0005603609 nova_compute[221550]: 2026-01-31 08:29:19.844 221554 INFO nova.compute.manager [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Took 22.37 seconds to build instance.#033[00m
Jan 31 03:29:20 np0005603609 nova_compute[221550]: 2026-01-31 08:29:20.161 221554 DEBUG oslo_concurrency.lockutils [None req-48b46db7-c038-4e75-a4fb-2001a410690c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:20 np0005603609 nova_compute[221550]: 2026-01-31 08:29:20.691 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:29:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:20.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:29:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:20.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:20 np0005603609 nova_compute[221550]: 2026-01-31 08:29:20.880 221554 DEBUG nova.compute.manager [req-2561c283-26f2-4bff-82fe-90f175f130d8 req-e4b2b7af-5786-40d3-ba4f-91842d5a0d71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:29:20 np0005603609 nova_compute[221550]: 2026-01-31 08:29:20.882 221554 DEBUG oslo_concurrency.lockutils [req-2561c283-26f2-4bff-82fe-90f175f130d8 req-e4b2b7af-5786-40d3-ba4f-91842d5a0d71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:20 np0005603609 nova_compute[221550]: 2026-01-31 08:29:20.882 221554 DEBUG oslo_concurrency.lockutils [req-2561c283-26f2-4bff-82fe-90f175f130d8 req-e4b2b7af-5786-40d3-ba4f-91842d5a0d71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:20 np0005603609 nova_compute[221550]: 2026-01-31 08:29:20.882 221554 DEBUG oslo_concurrency.lockutils [req-2561c283-26f2-4bff-82fe-90f175f130d8 req-e4b2b7af-5786-40d3-ba4f-91842d5a0d71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:20 np0005603609 nova_compute[221550]: 2026-01-31 08:29:20.883 221554 DEBUG nova.compute.manager [req-2561c283-26f2-4bff-82fe-90f175f130d8 req-e4b2b7af-5786-40d3-ba4f-91842d5a0d71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] No waiting events found dispatching network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:29:20 np0005603609 nova_compute[221550]: 2026-01-31 08:29:20.883 221554 WARNING nova.compute.manager [req-2561c283-26f2-4bff-82fe-90f175f130d8 req-e4b2b7af-5786-40d3-ba4f-91842d5a0d71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received unexpected event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f for instance with vm_state active and task_state None.#033[00m
Jan 31 03:29:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:29:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:22.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:29:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:29:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:22.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:29:24 np0005603609 nova_compute[221550]: 2026-01-31 08:29:24.564 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:24.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:24.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:25 np0005603609 nova_compute[221550]: 2026-01-31 08:29:25.694 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:29:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:26.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:29:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:26.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:29:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:28.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:29:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:28.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:29 np0005603609 nova_compute[221550]: 2026-01-31 08:29:29.611 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e366 e366: 3 total, 3 up, 3 in
Jan 31 03:29:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:30.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:30 np0005603609 nova_compute[221550]: 2026-01-31 08:29:30.748 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:30 np0005603609 nova_compute[221550]: 2026-01-31 08:29:30.757 221554 DEBUG nova.compute.manager [req-959f533f-c20c-4b2a-ba84-e500f0506373 req-7d521d8b-2ee1-4f18-b1f9-09e6b84e6682 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-changed-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:29:30 np0005603609 nova_compute[221550]: 2026-01-31 08:29:30.758 221554 DEBUG nova.compute.manager [req-959f533f-c20c-4b2a-ba84-e500f0506373 req-7d521d8b-2ee1-4f18-b1f9-09e6b84e6682 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Refreshing instance network info cache due to event network-changed-324249e6-6299-4722-a570-3439880bde1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:29:30 np0005603609 nova_compute[221550]: 2026-01-31 08:29:30.758 221554 DEBUG oslo_concurrency.lockutils [req-959f533f-c20c-4b2a-ba84-e500f0506373 req-7d521d8b-2ee1-4f18-b1f9-09e6b84e6682 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:29:30 np0005603609 nova_compute[221550]: 2026-01-31 08:29:30.759 221554 DEBUG oslo_concurrency.lockutils [req-959f533f-c20c-4b2a-ba84-e500f0506373 req-7d521d8b-2ee1-4f18-b1f9-09e6b84e6682 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:29:30 np0005603609 nova_compute[221550]: 2026-01-31 08:29:30.759 221554 DEBUG nova.network.neutron [req-959f533f-c20c-4b2a-ba84-e500f0506373 req-7d521d8b-2ee1-4f18-b1f9-09e6b84e6682 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Refreshing network info cache for port 324249e6-6299-4722-a570-3439880bde1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:29:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:29:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:30.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:29:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:32 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Jan 31 03:29:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:32.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:32.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:29:33Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:73:84:a5 10.100.0.7
Jan 31 03:29:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:29:33Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:84:a5 10.100.0.7
Jan 31 03:29:33 np0005603609 nova_compute[221550]: 2026-01-31 08:29:33.432 221554 DEBUG nova.network.neutron [req-959f533f-c20c-4b2a-ba84-e500f0506373 req-7d521d8b-2ee1-4f18-b1f9-09e6b84e6682 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Updated VIF entry in instance network info cache for port 324249e6-6299-4722-a570-3439880bde1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:29:33 np0005603609 nova_compute[221550]: 2026-01-31 08:29:33.432 221554 DEBUG nova.network.neutron [req-959f533f-c20c-4b2a-ba84-e500f0506373 req-7d521d8b-2ee1-4f18-b1f9-09e6b84e6682 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Updating instance_info_cache with network_info: [{"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:29:33 np0005603609 nova_compute[221550]: 2026-01-31 08:29:33.675 221554 DEBUG oslo_concurrency.lockutils [req-959f533f-c20c-4b2a-ba84-e500f0506373 req-7d521d8b-2ee1-4f18-b1f9-09e6b84e6682 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:29:34 np0005603609 nova_compute[221550]: 2026-01-31 08:29:34.650 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:34.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:29:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:34.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:29:35 np0005603609 nova_compute[221550]: 2026-01-31 08:29:35.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:35 np0005603609 nova_compute[221550]: 2026-01-31 08:29:35.788 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:36.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:36.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e367 e367: 3 total, 3 up, 3 in
Jan 31 03:29:38 np0005603609 nova_compute[221550]: 2026-01-31 08:29:38.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:38.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:38.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:39 np0005603609 nova_compute[221550]: 2026-01-31 08:29:39.691 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:40 np0005603609 podman[290940]: 2026-01-31 08:29:40.178463523 +0000 UTC m=+0.051534277 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 03:29:40 np0005603609 podman[290939]: 2026-01-31 08:29:40.215527744 +0000 UTC m=+0.095415472 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:29:40 np0005603609 nova_compute[221550]: 2026-01-31 08:29:40.609 221554 INFO nova.compute.manager [None req-7a489606-c982-49b3-a847-c9a3a6d96664 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Get console output#033[00m
Jan 31 03:29:40 np0005603609 nova_compute[221550]: 2026-01-31 08:29:40.614 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:29:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:40.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:40.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:40 np0005603609 nova_compute[221550]: 2026-01-31 08:29:40.822 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:41 np0005603609 nova_compute[221550]: 2026-01-31 08:29:41.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:42 np0005603609 nova_compute[221550]: 2026-01-31 08:29:42.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:42 np0005603609 nova_compute[221550]: 2026-01-31 08:29:42.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:29:42 np0005603609 nova_compute[221550]: 2026-01-31 08:29:42.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:29:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:42.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:42.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:44 np0005603609 nova_compute[221550]: 2026-01-31 08:29:44.031 221554 INFO nova.compute.manager [None req-cc96299a-92f8-4930-9f48-29d177101996 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Get console output#033[00m
Jan 31 03:29:44 np0005603609 nova_compute[221550]: 2026-01-31 08:29:44.040 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:29:44 np0005603609 nova_compute[221550]: 2026-01-31 08:29:44.270 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:29:44 np0005603609 nova_compute[221550]: 2026-01-31 08:29:44.271 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:29:44 np0005603609 nova_compute[221550]: 2026-01-31 08:29:44.271 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:29:44 np0005603609 nova_compute[221550]: 2026-01-31 08:29:44.271 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8bf29da0-2e1a-4e89-bce8-b293a938c742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:29:44 np0005603609 nova_compute[221550]: 2026-01-31 08:29:44.739 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:44.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:29:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:44.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:29:45 np0005603609 nova_compute[221550]: 2026-01-31 08:29:45.829 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:46.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:29:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:46.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:29:48 np0005603609 nova_compute[221550]: 2026-01-31 08:29:48.204 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updating instance_info_cache with network_info: [{"id": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "address": "fa:16:3e:85:fa:8d", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b86ae61-4c", "ovs_interfaceid": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:29:48 np0005603609 nova_compute[221550]: 2026-01-31 08:29:48.340 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-8bf29da0-2e1a-4e89-bce8-b293a938c742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:29:48 np0005603609 nova_compute[221550]: 2026-01-31 08:29:48.341 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:29:48 np0005603609 nova_compute[221550]: 2026-01-31 08:29:48.341 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:48 np0005603609 nova_compute[221550]: 2026-01-31 08:29:48.341 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:29:48 np0005603609 nova_compute[221550]: 2026-01-31 08:29:48.342 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:48 np0005603609 nova_compute[221550]: 2026-01-31 08:29:48.553 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:48 np0005603609 nova_compute[221550]: 2026-01-31 08:29:48.553 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:48 np0005603609 nova_compute[221550]: 2026-01-31 08:29:48.554 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:48 np0005603609 nova_compute[221550]: 2026-01-31 08:29:48.554 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:29:48 np0005603609 nova_compute[221550]: 2026-01-31 08:29:48.555 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:48.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:29:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:48.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:29:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:29:48 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3395964674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:29:48 np0005603609 nova_compute[221550]: 2026-01-31 08:29:48.997 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:49 np0005603609 nova_compute[221550]: 2026-01-31 08:29:49.243 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:29:49 np0005603609 nova_compute[221550]: 2026-01-31 08:29:49.244 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:29:49 np0005603609 nova_compute[221550]: 2026-01-31 08:29:49.248 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000a9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:29:49 np0005603609 nova_compute[221550]: 2026-01-31 08:29:49.249 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000a9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:29:49 np0005603609 nova_compute[221550]: 2026-01-31 08:29:49.252 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:29:49 np0005603609 nova_compute[221550]: 2026-01-31 08:29:49.253 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:29:49 np0005603609 nova_compute[221550]: 2026-01-31 08:29:49.419 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:29:49 np0005603609 nova_compute[221550]: 2026-01-31 08:29:49.420 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3704MB free_disk=20.693622589111328GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:29:49 np0005603609 nova_compute[221550]: 2026-01-31 08:29:49.420 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:49 np0005603609 nova_compute[221550]: 2026-01-31 08:29:49.420 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:49 np0005603609 nova_compute[221550]: 2026-01-31 08:29:49.742 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:49 np0005603609 nova_compute[221550]: 2026-01-31 08:29:49.748 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 8bf29da0-2e1a-4e89-bce8-b293a938c742 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:29:49 np0005603609 nova_compute[221550]: 2026-01-31 08:29:49.749 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 20d5fd84-de90-46b0-816e-0f378fd7d0c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:29:50 np0005603609 nova_compute[221550]: 2026-01-31 08:29:50.232 221554 INFO nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 5dda00e8-f066-4d14-a228-a65e64dbc5de has allocations against this compute host but is not found in the database.#033[00m
Jan 31 03:29:50 np0005603609 nova_compute[221550]: 2026-01-31 08:29:50.233 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:29:50 np0005603609 nova_compute[221550]: 2026-01-31 08:29:50.233 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:29:50 np0005603609 nova_compute[221550]: 2026-01-31 08:29:50.333 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:29:50 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2309520996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:29:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:50 np0005603609 nova_compute[221550]: 2026-01-31 08:29:50.765 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:50.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:50 np0005603609 nova_compute[221550]: 2026-01-31 08:29:50.770 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:29:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:29:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:50.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:29:50 np0005603609 nova_compute[221550]: 2026-01-31 08:29:50.831 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:51 np0005603609 nova_compute[221550]: 2026-01-31 08:29:51.092 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:29:51 np0005603609 nova_compute[221550]: 2026-01-31 08:29:51.499 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:29:51 np0005603609 nova_compute[221550]: 2026-01-31 08:29:51.500 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:51 np0005603609 nova_compute[221550]: 2026-01-31 08:29:51.921 221554 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Check if temp file /var/lib/nova/instances/tmpda3il2h9 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Jan 31 03:29:51 np0005603609 nova_compute[221550]: 2026-01-31 08:29:51.921 221554 DEBUG nova.compute.manager [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpda3il2h9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2b24a8d0-ad95-4460-acf1-0acb658330aa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Jan 31 03:29:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:29:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:52.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:29:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:29:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:52.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:29:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:29:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1075930529' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:29:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:29:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1075930529' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:29:54 np0005603609 nova_compute[221550]: 2026-01-31 08:29:54.744 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:29:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:54.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:29:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:29:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:54.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:29:54 np0005603609 nova_compute[221550]: 2026-01-31 08:29:54.817 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:54 np0005603609 nova_compute[221550]: 2026-01-31 08:29:54.818 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:54 np0005603609 nova_compute[221550]: 2026-01-31 08:29:54.818 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:55 np0005603609 nova_compute[221550]: 2026-01-31 08:29:55.834 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:56.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:56.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:29:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:58.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:29:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:29:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:58.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:59 np0005603609 nova_compute[221550]: 2026-01-31 08:29:59.749 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:00.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:00 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 03:30:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:00.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:00 np0005603609 nova_compute[221550]: 2026-01-31 08:30:00.835 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:01.923 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:30:01 np0005603609 nova_compute[221550]: 2026-01-31 08:30:01.924 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:01.925 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:30:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:01.926 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:30:02 np0005603609 nova_compute[221550]: 2026-01-31 08:30:02.276 221554 DEBUG nova.compute.manager [req-8ca8341f-d478-4377-9445-13e5ab4c2ba8 req-79410afe-a63d-459f-abda-68a78a1ee61e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-unplugged-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:30:02 np0005603609 nova_compute[221550]: 2026-01-31 08:30:02.277 221554 DEBUG oslo_concurrency.lockutils [req-8ca8341f-d478-4377-9445-13e5ab4c2ba8 req-79410afe-a63d-459f-abda-68a78a1ee61e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:02 np0005603609 nova_compute[221550]: 2026-01-31 08:30:02.277 221554 DEBUG oslo_concurrency.lockutils [req-8ca8341f-d478-4377-9445-13e5ab4c2ba8 req-79410afe-a63d-459f-abda-68a78a1ee61e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:02 np0005603609 nova_compute[221550]: 2026-01-31 08:30:02.277 221554 DEBUG oslo_concurrency.lockutils [req-8ca8341f-d478-4377-9445-13e5ab4c2ba8 req-79410afe-a63d-459f-abda-68a78a1ee61e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:02 np0005603609 nova_compute[221550]: 2026-01-31 08:30:02.277 221554 DEBUG nova.compute.manager [req-8ca8341f-d478-4377-9445-13e5ab4c2ba8 req-79410afe-a63d-459f-abda-68a78a1ee61e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] No waiting events found dispatching network-vif-unplugged-324249e6-6299-4722-a570-3439880bde1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:30:02 np0005603609 nova_compute[221550]: 2026-01-31 08:30:02.278 221554 DEBUG nova.compute.manager [req-8ca8341f-d478-4377-9445-13e5ab4c2ba8 req-79410afe-a63d-459f-abda-68a78a1ee61e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-unplugged-324249e6-6299-4722-a570-3439880bde1f for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:30:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:30:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:02.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:30:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:02.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:03 np0005603609 nova_compute[221550]: 2026-01-31 08:30:03.521 221554 INFO nova.compute.manager [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Took 8.82 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.#033[00m
Jan 31 03:30:03 np0005603609 nova_compute[221550]: 2026-01-31 08:30:03.521 221554 DEBUG nova.compute.manager [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:30:03 np0005603609 nova_compute[221550]: 2026-01-31 08:30:03.707 221554 DEBUG nova.compute.manager [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpda3il2h9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2b24a8d0-ad95-4460-acf1-0acb658330aa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(5dda00e8-f066-4d14-a228-a65e64dbc5de),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Jan 31 03:30:03 np0005603609 nova_compute[221550]: 2026-01-31 08:30:03.710 221554 DEBUG nova.objects.instance [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lazy-loading 'migration_context' on Instance uuid 2b24a8d0-ad95-4460-acf1-0acb658330aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:30:03 np0005603609 nova_compute[221550]: 2026-01-31 08:30:03.711 221554 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Jan 31 03:30:03 np0005603609 nova_compute[221550]: 2026-01-31 08:30:03.713 221554 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Jan 31 03:30:03 np0005603609 nova_compute[221550]: 2026-01-31 08:30:03.713 221554 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Jan 31 03:30:03 np0005603609 nova_compute[221550]: 2026-01-31 08:30:03.748 221554 DEBUG nova.virt.libvirt.vif [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:28:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2146916992',display_name='tempest-TestNetworkAdvancedServerOps-server-2146916992',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2146916992',id=169,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK7T1FRM2hnvWHsS2vC2H2YEYiw3bA7e7xk8qp1+JMKzlq7jPy/PzZqxW2VC1KQOHQDuHMYvJPGgKGcZZL8ySdxPXOSaobgGLmos6mi9nlU6Dv+erNyHG2EMArHdJN1ryg==',key_name='tempest-TestNetworkAdvancedServerOps-694728097',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:29:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-ah4zs948',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:29:19Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=2b24a8d0-ad95-4460-acf1-0acb658330aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:30:03 np0005603609 nova_compute[221550]: 2026-01-31 08:30:03.749 221554 DEBUG nova.network.os_vif_util [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converting VIF {"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:30:03 np0005603609 nova_compute[221550]: 2026-01-31 08:30:03.750 221554 DEBUG nova.network.os_vif_util [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:73:84:a5,bridge_name='br-int',has_traffic_filtering=True,id=324249e6-6299-4722-a570-3439880bde1f,network=Network(5c8cd691-2e19-4c04-b061-2e5304161623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap324249e6-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:30:03 np0005603609 nova_compute[221550]: 2026-01-31 08:30:03.750 221554 DEBUG nova.virt.libvirt.migration [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Updating guest XML with vif config: <interface type="ethernet">
Jan 31 03:30:03 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:73:84:a5"/>
Jan 31 03:30:03 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 03:30:03 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:30:03 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 03:30:03 np0005603609 nova_compute[221550]:  <target dev="tap324249e6-62"/>
Jan 31 03:30:03 np0005603609 nova_compute[221550]: </interface>
Jan 31 03:30:03 np0005603609 nova_compute[221550]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Jan 31 03:30:03 np0005603609 nova_compute[221550]: 2026-01-31 08:30:03.751 221554 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Jan 31 03:30:04 np0005603609 systemd[1]: Starting dnf makecache...
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.215 221554 DEBUG nova.virt.libvirt.migration [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.216 221554 INFO nova.virt.libvirt.migration [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.307 221554 INFO nova.compute.manager [None req-ea737bfd-10ad-4b83-ac0d-cc8d5410769d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Pausing#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.308 221554 DEBUG nova.objects.instance [None req-ea737bfd-10ad-4b83-ac0d-cc8d5410769d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'flavor' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.443 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848204.443669, 20d5fd84-de90-46b0-816e-0f378fd7d0c7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.444 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.446 221554 DEBUG nova.compute.manager [None req-ea737bfd-10ad-4b83-ac0d-cc8d5410769d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.455 221554 DEBUG nova.compute.manager [req-160139ea-3d46-4bd3-9783-b16c51884eb5 req-cc14396e-d6fe-45d3-b0f6-803e824b5804 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.456 221554 DEBUG oslo_concurrency.lockutils [req-160139ea-3d46-4bd3-9783-b16c51884eb5 req-cc14396e-d6fe-45d3-b0f6-803e824b5804 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.456 221554 DEBUG oslo_concurrency.lockutils [req-160139ea-3d46-4bd3-9783-b16c51884eb5 req-cc14396e-d6fe-45d3-b0f6-803e824b5804 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.456 221554 DEBUG oslo_concurrency.lockutils [req-160139ea-3d46-4bd3-9783-b16c51884eb5 req-cc14396e-d6fe-45d3-b0f6-803e824b5804 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.457 221554 DEBUG nova.compute.manager [req-160139ea-3d46-4bd3-9783-b16c51884eb5 req-cc14396e-d6fe-45d3-b0f6-803e824b5804 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] No waiting events found dispatching network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.457 221554 WARNING nova.compute.manager [req-160139ea-3d46-4bd3-9783-b16c51884eb5 req-cc14396e-d6fe-45d3-b0f6-803e824b5804 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received unexpected event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f for instance with vm_state active and task_state migrating.#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.457 221554 DEBUG nova.compute.manager [req-160139ea-3d46-4bd3-9783-b16c51884eb5 req-cc14396e-d6fe-45d3-b0f6-803e824b5804 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-changed-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.457 221554 DEBUG nova.compute.manager [req-160139ea-3d46-4bd3-9783-b16c51884eb5 req-cc14396e-d6fe-45d3-b0f6-803e824b5804 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Refreshing instance network info cache due to event network-changed-324249e6-6299-4722-a570-3439880bde1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.458 221554 DEBUG oslo_concurrency.lockutils [req-160139ea-3d46-4bd3-9783-b16c51884eb5 req-cc14396e-d6fe-45d3-b0f6-803e824b5804 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.458 221554 DEBUG oslo_concurrency.lockutils [req-160139ea-3d46-4bd3-9783-b16c51884eb5 req-cc14396e-d6fe-45d3-b0f6-803e824b5804 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.458 221554 DEBUG nova.network.neutron [req-160139ea-3d46-4bd3-9783-b16c51884eb5 req-cc14396e-d6fe-45d3-b0f6-803e824b5804 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Refreshing network info cache for port 324249e6-6299-4722-a570-3439880bde1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.461 221554 INFO nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Jan 31 03:30:04 np0005603609 dnf[291032]: Metadata cache refreshed recently.
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.486 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.489 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:30:04 np0005603609 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 31 03:30:04 np0005603609 systemd[1]: Finished dnf makecache.
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.541 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.751 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:04.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:04.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.964 221554 DEBUG nova.virt.libvirt.migration [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 31 03:30:04 np0005603609 nova_compute[221550]: 2026-01-31 08:30:04.964 221554 DEBUG nova.virt.libvirt.migration [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.372 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848205.3723288, 2b24a8d0-ad95-4460-acf1-0acb658330aa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.373 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.433 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.438 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.468 221554 DEBUG nova.virt.libvirt.migration [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.469 221554 DEBUG nova.virt.libvirt.migration [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 31 03:30:05 np0005603609 kernel: tap324249e6-62 (unregistering): left promiscuous mode
Jan 31 03:30:05 np0005603609 NetworkManager[49064]: <info>  [1769848205.5445] device (tap324249e6-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:30:05 np0005603609 ovn_controller[130359]: 2026-01-31T08:30:05Z|00738|binding|INFO|Releasing lport 324249e6-6299-4722-a570-3439880bde1f from this chassis (sb_readonly=0)
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.551 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:05 np0005603609 ovn_controller[130359]: 2026-01-31T08:30:05Z|00739|binding|INFO|Setting lport 324249e6-6299-4722-a570-3439880bde1f down in Southbound
Jan 31 03:30:05 np0005603609 ovn_controller[130359]: 2026-01-31T08:30:05Z|00740|binding|INFO|Removing iface tap324249e6-62 ovn-installed in OVS
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.554 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.562 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:05 np0005603609 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000a9.scope: Deactivated successfully.
Jan 31 03:30:05 np0005603609 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000a9.scope: Consumed 15.227s CPU time.
Jan 31 03:30:05 np0005603609 systemd-machined[190912]: Machine qemu-91-instance-000000a9 terminated.
Jan 31 03:30:05 np0005603609 ovn_controller[130359]: 2026-01-31T08:30:05Z|00741|binding|INFO|Releasing lport 0b9d56f1-a803-44f1-b709-3bfbc71e0f57 from this chassis (sb_readonly=0)
Jan 31 03:30:05 np0005603609 ovn_controller[130359]: 2026-01-31T08:30:05Z|00742|binding|INFO|Releasing lport 6550f58c-c002-4c91-85a2-53b4c1c807b7 from this chassis (sb_readonly=0)
Jan 31 03:30:05 np0005603609 ovn_controller[130359]: 2026-01-31T08:30:05Z|00743|binding|INFO|Releasing lport 0e00cbca-3de8-4f0d-92ce-93192811a08b from this chassis (sb_readonly=0)
Jan 31 03:30:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:05.622 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:84:a5 10.100.0.7'], port_security=['fa:16:3e:73:84:a5 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '5c307474-e9ec-4d19-9f52-463eb0ff26d1'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2b24a8d0-ad95-4460-acf1-0acb658330aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c8cd691-2e19-4c04-b061-2e5304161623', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '8', 'neutron:security_group_ids': '0e11440e-2192-492f-a554-7817b0fd324e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0bee9a73-d2c9-492b-9a5b-302cf0b7a5b8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=324249e6-6299-4722-a570-3439880bde1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.625 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Jan 31 03:30:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:05.624 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 324249e6-6299-4722-a570-3439880bde1f in datapath 5c8cd691-2e19-4c04-b061-2e5304161623 unbound from our chassis#033[00m
Jan 31 03:30:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:05.627 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c8cd691-2e19-4c04-b061-2e5304161623, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:30:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:05.629 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc8c314-e238-4936-8591-353b7610a198]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:05.630 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623 namespace which is not needed anymore#033[00m
Jan 31 03:30:05 np0005603609 virtqemud[221292]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/2b24a8d0-ad95-4460-acf1-0acb658330aa_disk: No such file or directory
Jan 31 03:30:05 np0005603609 virtqemud[221292]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/2b24a8d0-ad95-4460-acf1-0acb658330aa_disk: No such file or directory
Jan 31 03:30:05 np0005603609 kernel: tap324249e6-62: entered promiscuous mode
Jan 31 03:30:05 np0005603609 kernel: tap324249e6-62 (unregistering): left promiscuous mode
Jan 31 03:30:05 np0005603609 NetworkManager[49064]: <info>  [1769848205.7085] manager: (tap324249e6-62): new Tun device (/org/freedesktop/NetworkManager/Devices/345)
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.711 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.725 221554 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.725 221554 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.725 221554 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Jan 31 03:30:05 np0005603609 neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623[290871]: [NOTICE]   (290875) : haproxy version is 2.8.14-c23fe91
Jan 31 03:30:05 np0005603609 neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623[290871]: [NOTICE]   (290875) : path to executable is /usr/sbin/haproxy
Jan 31 03:30:05 np0005603609 neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623[290871]: [WARNING]  (290875) : Exiting Master process...
Jan 31 03:30:05 np0005603609 neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623[290871]: [WARNING]  (290875) : Exiting Master process...
Jan 31 03:30:05 np0005603609 neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623[290871]: [ALERT]    (290875) : Current worker (290877) exited with code 143 (Terminated)
Jan 31 03:30:05 np0005603609 neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623[290871]: [WARNING]  (290875) : All workers exited. Exiting... (0)
Jan 31 03:30:05 np0005603609 systemd[1]: libpod-1bf2f552d5e051eb693733445cd301652c498f1f9112f72b97cead42c18ee2b8.scope: Deactivated successfully.
Jan 31 03:30:05 np0005603609 podman[291062]: 2026-01-31 08:30:05.756666172 +0000 UTC m=+0.042789839 container died 1bf2f552d5e051eb693733445cd301652c498f1f9112f72b97cead42c18ee2b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:30:05 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1bf2f552d5e051eb693733445cd301652c498f1f9112f72b97cead42c18ee2b8-userdata-shm.mount: Deactivated successfully.
Jan 31 03:30:05 np0005603609 systemd[1]: var-lib-containers-storage-overlay-1060f7e99d35037030208d4d3a9c7a3f434c5c06a2fe06dccb6753558ff96325-merged.mount: Deactivated successfully.
Jan 31 03:30:05 np0005603609 podman[291062]: 2026-01-31 08:30:05.811072988 +0000 UTC m=+0.097196655 container cleanup 1bf2f552d5e051eb693733445cd301652c498f1f9112f72b97cead42c18ee2b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:30:05 np0005603609 systemd[1]: libpod-conmon-1bf2f552d5e051eb693733445cd301652c498f1f9112f72b97cead42c18ee2b8.scope: Deactivated successfully.
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.838 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:05 np0005603609 podman[291103]: 2026-01-31 08:30:05.87362879 +0000 UTC m=+0.042924081 container remove 1bf2f552d5e051eb693733445cd301652c498f1f9112f72b97cead42c18ee2b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:30:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:05.879 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[eef6d393-a1b0-4198-8e78-439fa8b68253]: (4, ('Sat Jan 31 08:30:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623 (1bf2f552d5e051eb693733445cd301652c498f1f9112f72b97cead42c18ee2b8)\n1bf2f552d5e051eb693733445cd301652c498f1f9112f72b97cead42c18ee2b8\nSat Jan 31 08:30:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623 (1bf2f552d5e051eb693733445cd301652c498f1f9112f72b97cead42c18ee2b8)\n1bf2f552d5e051eb693733445cd301652c498f1f9112f72b97cead42c18ee2b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:05.881 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[848a3d7e-0a77-49a7-8b59-9fe0519c9a1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:05.882 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c8cd691-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.884 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:05 np0005603609 kernel: tap5c8cd691-20: left promiscuous mode
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.892 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:05.895 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d40600ad-0864-4ed7-819d-ddf4b64a9be4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:05.913 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9149e59c-7692-409f-b088-ee4b14d76f01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:05.914 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[00a52487-99fa-43d2-87c8-81d020736306]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:05.926 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c19061b1-ddcd-4e33-b937-c23b4c6e2a79]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 839581, 'reachable_time': 24642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291122, 'error': None, 'target': 'ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:05.928 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:30:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:05.928 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[35d77995-7fe8-4f09-a9bc-c84e48c49941]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:05 np0005603609 systemd[1]: run-netns-ovnmeta\x2d5c8cd691\x2d2e19\x2d4c04\x2db061\x2d2e5304161623.mount: Deactivated successfully.
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.971 221554 DEBUG nova.virt.libvirt.guest [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '2b24a8d0-ad95-4460-acf1-0acb658330aa' (instance-000000a9) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.972 221554 INFO nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Migration operation has completed#033[00m
Jan 31 03:30:05 np0005603609 nova_compute[221550]: 2026-01-31 08:30:05.972 221554 INFO nova.compute.manager [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] _post_live_migration() is started..#033[00m
Jan 31 03:30:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:06 np0005603609 nova_compute[221550]: 2026-01-31 08:30:06.391 221554 DEBUG nova.compute.manager [req-3ac8a193-72bb-46ce-b84e-4de77c789a44 req-6f1121ba-805e-48e0-92d8-8a9e9a568608 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-unplugged-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:30:06 np0005603609 nova_compute[221550]: 2026-01-31 08:30:06.391 221554 DEBUG oslo_concurrency.lockutils [req-3ac8a193-72bb-46ce-b84e-4de77c789a44 req-6f1121ba-805e-48e0-92d8-8a9e9a568608 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:06 np0005603609 nova_compute[221550]: 2026-01-31 08:30:06.392 221554 DEBUG oslo_concurrency.lockutils [req-3ac8a193-72bb-46ce-b84e-4de77c789a44 req-6f1121ba-805e-48e0-92d8-8a9e9a568608 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:06 np0005603609 nova_compute[221550]: 2026-01-31 08:30:06.392 221554 DEBUG oslo_concurrency.lockutils [req-3ac8a193-72bb-46ce-b84e-4de77c789a44 req-6f1121ba-805e-48e0-92d8-8a9e9a568608 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:06 np0005603609 nova_compute[221550]: 2026-01-31 08:30:06.392 221554 DEBUG nova.compute.manager [req-3ac8a193-72bb-46ce-b84e-4de77c789a44 req-6f1121ba-805e-48e0-92d8-8a9e9a568608 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] No waiting events found dispatching network-vif-unplugged-324249e6-6299-4722-a570-3439880bde1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:30:06 np0005603609 nova_compute[221550]: 2026-01-31 08:30:06.392 221554 DEBUG nova.compute.manager [req-3ac8a193-72bb-46ce-b84e-4de77c789a44 req-6f1121ba-805e-48e0-92d8-8a9e9a568608 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-unplugged-324249e6-6299-4722-a570-3439880bde1f for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:30:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:06.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:06.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:07 np0005603609 nova_compute[221550]: 2026-01-31 08:30:07.162 221554 DEBUG nova.network.neutron [req-160139ea-3d46-4bd3-9783-b16c51884eb5 req-cc14396e-d6fe-45d3-b0f6-803e824b5804 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Updated VIF entry in instance network info cache for port 324249e6-6299-4722-a570-3439880bde1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:30:07 np0005603609 nova_compute[221550]: 2026-01-31 08:30:07.163 221554 DEBUG nova.network.neutron [req-160139ea-3d46-4bd3-9783-b16c51884eb5 req-cc14396e-d6fe-45d3-b0f6-803e824b5804 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Updating instance_info_cache with network_info: [{"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:30:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:07.522 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:07.522 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:07.523 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:08 np0005603609 nova_compute[221550]: 2026-01-31 08:30:08.005 221554 DEBUG nova.compute.manager [req-4d43f52f-3b6a-44b1-a92b-5cf57a528979 req-4047173a-db93-40d7-b9c8-8886de6ff3e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-unplugged-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:30:08 np0005603609 nova_compute[221550]: 2026-01-31 08:30:08.005 221554 DEBUG oslo_concurrency.lockutils [req-4d43f52f-3b6a-44b1-a92b-5cf57a528979 req-4047173a-db93-40d7-b9c8-8886de6ff3e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:08 np0005603609 nova_compute[221550]: 2026-01-31 08:30:08.006 221554 DEBUG oslo_concurrency.lockutils [req-4d43f52f-3b6a-44b1-a92b-5cf57a528979 req-4047173a-db93-40d7-b9c8-8886de6ff3e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:08 np0005603609 nova_compute[221550]: 2026-01-31 08:30:08.006 221554 DEBUG oslo_concurrency.lockutils [req-4d43f52f-3b6a-44b1-a92b-5cf57a528979 req-4047173a-db93-40d7-b9c8-8886de6ff3e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:08 np0005603609 nova_compute[221550]: 2026-01-31 08:30:08.006 221554 DEBUG nova.compute.manager [req-4d43f52f-3b6a-44b1-a92b-5cf57a528979 req-4047173a-db93-40d7-b9c8-8886de6ff3e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] No waiting events found dispatching network-vif-unplugged-324249e6-6299-4722-a570-3439880bde1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:30:08 np0005603609 nova_compute[221550]: 2026-01-31 08:30:08.006 221554 DEBUG nova.compute.manager [req-4d43f52f-3b6a-44b1-a92b-5cf57a528979 req-4047173a-db93-40d7-b9c8-8886de6ff3e6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-unplugged-324249e6-6299-4722-a570-3439880bde1f for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:30:08 np0005603609 nova_compute[221550]: 2026-01-31 08:30:08.044 221554 DEBUG oslo_concurrency.lockutils [req-160139ea-3d46-4bd3-9783-b16c51884eb5 req-cc14396e-d6fe-45d3-b0f6-803e824b5804 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:30:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:08.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:08.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:08 np0005603609 nova_compute[221550]: 2026-01-31 08:30:08.954 221554 DEBUG nova.compute.manager [req-d8bff3cd-d565-4fb5-90b7-d9f465628fb3 req-802fb4bc-c5cc-40f2-b592-85e361ed69b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:30:08 np0005603609 nova_compute[221550]: 2026-01-31 08:30:08.954 221554 DEBUG oslo_concurrency.lockutils [req-d8bff3cd-d565-4fb5-90b7-d9f465628fb3 req-802fb4bc-c5cc-40f2-b592-85e361ed69b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:08 np0005603609 nova_compute[221550]: 2026-01-31 08:30:08.955 221554 DEBUG oslo_concurrency.lockutils [req-d8bff3cd-d565-4fb5-90b7-d9f465628fb3 req-802fb4bc-c5cc-40f2-b592-85e361ed69b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:08 np0005603609 nova_compute[221550]: 2026-01-31 08:30:08.955 221554 DEBUG oslo_concurrency.lockutils [req-d8bff3cd-d565-4fb5-90b7-d9f465628fb3 req-802fb4bc-c5cc-40f2-b592-85e361ed69b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:08 np0005603609 nova_compute[221550]: 2026-01-31 08:30:08.955 221554 DEBUG nova.compute.manager [req-d8bff3cd-d565-4fb5-90b7-d9f465628fb3 req-802fb4bc-c5cc-40f2-b592-85e361ed69b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] No waiting events found dispatching network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:30:08 np0005603609 nova_compute[221550]: 2026-01-31 08:30:08.955 221554 WARNING nova.compute.manager [req-d8bff3cd-d565-4fb5-90b7-d9f465628fb3 req-802fb4bc-c5cc-40f2-b592-85e361ed69b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received unexpected event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f for instance with vm_state active and task_state migrating.#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.122 221554 INFO nova.compute.manager [None req-dfe50c3d-41e8-4b91-a0ce-b655f6f4d408 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Unpausing#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.123 221554 DEBUG nova.objects.instance [None req-dfe50c3d-41e8-4b91-a0ce-b655f6f4d408 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'flavor' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.338 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848209.337621, 20d5fd84-de90-46b0-816e-0f378fd7d0c7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.338 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:30:09 np0005603609 virtqemud[221292]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.343 221554 DEBUG nova.virt.libvirt.guest [None req-dfe50c3d-41e8-4b91-a0ce-b655f6f4d408 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.343 221554 DEBUG nova.compute.manager [None req-dfe50c3d-41e8-4b91-a0ce-b655f6f4d408 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.402 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.409 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.579 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.630 221554 DEBUG nova.network.neutron [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Activated binding for port 324249e6-6299-4722-a570-3439880bde1f and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.631 221554 DEBUG nova.compute.manager [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.632 221554 DEBUG nova.virt.libvirt.vif [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:28:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2146916992',display_name='tempest-TestNetworkAdvancedServerOps-server-2146916992',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2146916992',id=169,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK7T1FRM2hnvWHsS2vC2H2YEYiw3bA7e7xk8qp1+JMKzlq7jPy/PzZqxW2VC1KQOHQDuHMYvJPGgKGcZZL8ySdxPXOSaobgGLmos6mi9nlU6Dv+erNyHG2EMArHdJN1ryg==',key_name='tempest-TestNetworkAdvancedServerOps-694728097',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:29:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-ah4zs948',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:29:46Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=2b24a8d0-ad95-4460-acf1-0acb658330aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.632 221554 DEBUG nova.network.os_vif_util [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converting VIF {"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.633 221554 DEBUG nova.network.os_vif_util [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:73:84:a5,bridge_name='br-int',has_traffic_filtering=True,id=324249e6-6299-4722-a570-3439880bde1f,network=Network(5c8cd691-2e19-4c04-b061-2e5304161623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap324249e6-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.634 221554 DEBUG os_vif [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:84:a5,bridge_name='br-int',has_traffic_filtering=True,id=324249e6-6299-4722-a570-3439880bde1f,network=Network(5c8cd691-2e19-4c04-b061-2e5304161623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap324249e6-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.636 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.636 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap324249e6-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.678 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.682 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.685 221554 INFO os_vif [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:84:a5,bridge_name='br-int',has_traffic_filtering=True,id=324249e6-6299-4722-a570-3439880bde1f,network=Network(5c8cd691-2e19-4c04-b061-2e5304161623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap324249e6-62')#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.686 221554 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.686 221554 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.687 221554 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.687 221554 DEBUG nova.compute.manager [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.688 221554 INFO nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Deleting instance files /var/lib/nova/instances/2b24a8d0-ad95-4460-acf1-0acb658330aa_del#033[00m
Jan 31 03:30:09 np0005603609 nova_compute[221550]: 2026-01-31 08:30:09.688 221554 INFO nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Deletion of /var/lib/nova/instances/2b24a8d0-ad95-4460-acf1-0acb658330aa_del complete#033[00m
Jan 31 03:30:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:10.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:10.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:10 np0005603609 nova_compute[221550]: 2026-01-31 08:30:10.839 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:11 np0005603609 podman[291123]: 2026-01-31 08:30:11.178638852 +0000 UTC m=+0.064177202 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 03:30:11 np0005603609 podman[291124]: 2026-01-31 08:30:11.178594811 +0000 UTC m=+0.064254124 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 03:30:11 np0005603609 nova_compute[221550]: 2026-01-31 08:30:11.413 221554 DEBUG nova.compute.manager [req-373cdde7-8199-46ff-8ffe-abd657c1b99e req-95de348e-4d05-465b-82b6-25e42a6b0f66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:30:11 np0005603609 nova_compute[221550]: 2026-01-31 08:30:11.413 221554 DEBUG oslo_concurrency.lockutils [req-373cdde7-8199-46ff-8ffe-abd657c1b99e req-95de348e-4d05-465b-82b6-25e42a6b0f66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:11 np0005603609 nova_compute[221550]: 2026-01-31 08:30:11.413 221554 DEBUG oslo_concurrency.lockutils [req-373cdde7-8199-46ff-8ffe-abd657c1b99e req-95de348e-4d05-465b-82b6-25e42a6b0f66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:11 np0005603609 nova_compute[221550]: 2026-01-31 08:30:11.414 221554 DEBUG oslo_concurrency.lockutils [req-373cdde7-8199-46ff-8ffe-abd657c1b99e req-95de348e-4d05-465b-82b6-25e42a6b0f66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:11 np0005603609 nova_compute[221550]: 2026-01-31 08:30:11.414 221554 DEBUG nova.compute.manager [req-373cdde7-8199-46ff-8ffe-abd657c1b99e req-95de348e-4d05-465b-82b6-25e42a6b0f66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] No waiting events found dispatching network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:30:11 np0005603609 nova_compute[221550]: 2026-01-31 08:30:11.414 221554 WARNING nova.compute.manager [req-373cdde7-8199-46ff-8ffe-abd657c1b99e req-95de348e-4d05-465b-82b6-25e42a6b0f66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received unexpected event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f for instance with vm_state active and task_state migrating.#033[00m
Jan 31 03:30:11 np0005603609 nova_compute[221550]: 2026-01-31 08:30:11.414 221554 DEBUG nova.compute.manager [req-373cdde7-8199-46ff-8ffe-abd657c1b99e req-95de348e-4d05-465b-82b6-25e42a6b0f66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:30:11 np0005603609 nova_compute[221550]: 2026-01-31 08:30:11.414 221554 DEBUG oslo_concurrency.lockutils [req-373cdde7-8199-46ff-8ffe-abd657c1b99e req-95de348e-4d05-465b-82b6-25e42a6b0f66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:11 np0005603609 nova_compute[221550]: 2026-01-31 08:30:11.415 221554 DEBUG oslo_concurrency.lockutils [req-373cdde7-8199-46ff-8ffe-abd657c1b99e req-95de348e-4d05-465b-82b6-25e42a6b0f66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:11 np0005603609 nova_compute[221550]: 2026-01-31 08:30:11.415 221554 DEBUG oslo_concurrency.lockutils [req-373cdde7-8199-46ff-8ffe-abd657c1b99e req-95de348e-4d05-465b-82b6-25e42a6b0f66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:11 np0005603609 nova_compute[221550]: 2026-01-31 08:30:11.415 221554 DEBUG nova.compute.manager [req-373cdde7-8199-46ff-8ffe-abd657c1b99e req-95de348e-4d05-465b-82b6-25e42a6b0f66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] No waiting events found dispatching network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:30:11 np0005603609 nova_compute[221550]: 2026-01-31 08:30:11.415 221554 WARNING nova.compute.manager [req-373cdde7-8199-46ff-8ffe-abd657c1b99e req-95de348e-4d05-465b-82b6-25e42a6b0f66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received unexpected event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f for instance with vm_state active and task_state migrating.#033[00m
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:30:12.038947) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848212039064, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2457, "num_deletes": 254, "total_data_size": 5684391, "memory_usage": 5756888, "flush_reason": "Manual Compaction"}
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848212233312, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 3723986, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66409, "largest_seqno": 68861, "table_properties": {"data_size": 3714067, "index_size": 6220, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21444, "raw_average_key_size": 20, "raw_value_size": 3693969, "raw_average_value_size": 3596, "num_data_blocks": 270, "num_entries": 1027, "num_filter_entries": 1027, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847992, "oldest_key_time": 1769847992, "file_creation_time": 1769848212, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 194377 microseconds, and 10405 cpu microseconds.
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:30:12.233368) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 3723986 bytes OK
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:30:12.233390) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:30:12.411741) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:30:12.411781) EVENT_LOG_v1 {"time_micros": 1769848212411773, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:30:12.411804) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 5673610, prev total WAL file size 5673610, number of live WAL files 2.
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:30:12.413050) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(3636KB)], [135(10MB)]
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848212413110, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 14376110, "oldest_snapshot_seqno": -1}
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 9187 keys, 12495658 bytes, temperature: kUnknown
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848212614468, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 12495658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12435457, "index_size": 36152, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 241573, "raw_average_key_size": 26, "raw_value_size": 12273066, "raw_average_value_size": 1335, "num_data_blocks": 1384, "num_entries": 9187, "num_filter_entries": 9187, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769848212, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:30:12.614782) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12495658 bytes
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:30:12.625311) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.4 rd, 62.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.2 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(7.2) write-amplify(3.4) OK, records in: 9715, records dropped: 528 output_compression: NoCompression
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:30:12.625342) EVENT_LOG_v1 {"time_micros": 1769848212625329, "job": 86, "event": "compaction_finished", "compaction_time_micros": 201449, "compaction_time_cpu_micros": 38428, "output_level": 6, "num_output_files": 1, "total_output_size": 12495658, "num_input_records": 9715, "num_output_records": 9187, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848212625900, "job": 86, "event": "table_file_deletion", "file_number": 137}
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848212627608, "job": 86, "event": "table_file_deletion", "file_number": 135}
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:30:12.412845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:30:12.627684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:30:12.627689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:30:12.627692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:30:12.627694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:30:12 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:30:12.627696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:30:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:12.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:30:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:12.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:30:13 np0005603609 nova_compute[221550]: 2026-01-31 08:30:13.575 221554 DEBUG nova.compute.manager [req-82525db1-1f9a-49dc-baab-fd24144fbc1d req-8768656f-2719-44c4-8662-1f229b5335be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:30:13 np0005603609 nova_compute[221550]: 2026-01-31 08:30:13.575 221554 DEBUG oslo_concurrency.lockutils [req-82525db1-1f9a-49dc-baab-fd24144fbc1d req-8768656f-2719-44c4-8662-1f229b5335be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:13 np0005603609 nova_compute[221550]: 2026-01-31 08:30:13.576 221554 DEBUG oslo_concurrency.lockutils [req-82525db1-1f9a-49dc-baab-fd24144fbc1d req-8768656f-2719-44c4-8662-1f229b5335be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:13 np0005603609 nova_compute[221550]: 2026-01-31 08:30:13.576 221554 DEBUG oslo_concurrency.lockutils [req-82525db1-1f9a-49dc-baab-fd24144fbc1d req-8768656f-2719-44c4-8662-1f229b5335be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:13 np0005603609 nova_compute[221550]: 2026-01-31 08:30:13.577 221554 DEBUG nova.compute.manager [req-82525db1-1f9a-49dc-baab-fd24144fbc1d req-8768656f-2719-44c4-8662-1f229b5335be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] No waiting events found dispatching network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:30:13 np0005603609 nova_compute[221550]: 2026-01-31 08:30:13.577 221554 WARNING nova.compute.manager [req-82525db1-1f9a-49dc-baab-fd24144fbc1d req-8768656f-2719-44c4-8662-1f229b5335be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received unexpected event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f for instance with vm_state active and task_state migrating.#033[00m
Jan 31 03:30:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:30:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4019013427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:30:14 np0005603609 nova_compute[221550]: 2026-01-31 08:30:14.717 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:14.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:14.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:15 np0005603609 nova_compute[221550]: 2026-01-31 08:30:15.842 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:16.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:16.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:17 np0005603609 nova_compute[221550]: 2026-01-31 08:30:17.243 221554 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:17 np0005603609 nova_compute[221550]: 2026-01-31 08:30:17.244 221554 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:17 np0005603609 nova_compute[221550]: 2026-01-31 08:30:17.244 221554 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:17 np0005603609 nova_compute[221550]: 2026-01-31 08:30:17.391 221554 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:17 np0005603609 nova_compute[221550]: 2026-01-31 08:30:17.392 221554 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:17 np0005603609 nova_compute[221550]: 2026-01-31 08:30:17.392 221554 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:17 np0005603609 nova_compute[221550]: 2026-01-31 08:30:17.392 221554 DEBUG nova.compute.resource_tracker [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:30:17 np0005603609 nova_compute[221550]: 2026-01-31 08:30:17.393 221554 DEBUG oslo_concurrency.processutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:30:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:30:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/647630110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:30:17 np0005603609 nova_compute[221550]: 2026-01-31 08:30:17.811 221554 DEBUG oslo_concurrency.processutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:30:18 np0005603609 nova_compute[221550]: 2026-01-31 08:30:18.187 221554 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:30:18 np0005603609 nova_compute[221550]: 2026-01-31 08:30:18.187 221554 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:30:18 np0005603609 nova_compute[221550]: 2026-01-31 08:30:18.190 221554 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:30:18 np0005603609 nova_compute[221550]: 2026-01-31 08:30:18.190 221554 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:30:18 np0005603609 nova_compute[221550]: 2026-01-31 08:30:18.326 221554 WARNING nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:30:18 np0005603609 nova_compute[221550]: 2026-01-31 08:30:18.327 221554 DEBUG nova.compute.resource_tracker [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3837MB free_disk=20.668907165527344GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:30:18 np0005603609 nova_compute[221550]: 2026-01-31 08:30:18.327 221554 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:18 np0005603609 nova_compute[221550]: 2026-01-31 08:30:18.327 221554 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:18.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:18.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:18 np0005603609 nova_compute[221550]: 2026-01-31 08:30:18.867 221554 DEBUG nova.compute.resource_tracker [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Migration for instance 2b24a8d0-ad95-4460-acf1-0acb658330aa refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 31 03:30:19 np0005603609 nova_compute[221550]: 2026-01-31 08:30:19.144 221554 DEBUG nova.compute.resource_tracker [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Jan 31 03:30:19 np0005603609 nova_compute[221550]: 2026-01-31 08:30:19.191 221554 DEBUG nova.compute.resource_tracker [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Instance 8bf29da0-2e1a-4e89-bce8-b293a938c742 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:30:19 np0005603609 nova_compute[221550]: 2026-01-31 08:30:19.191 221554 DEBUG nova.compute.resource_tracker [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Instance 20d5fd84-de90-46b0-816e-0f378fd7d0c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:30:19 np0005603609 nova_compute[221550]: 2026-01-31 08:30:19.192 221554 DEBUG nova.compute.resource_tracker [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Migration 5dda00e8-f066-4d14-a228-a65e64dbc5de is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 03:30:19 np0005603609 nova_compute[221550]: 2026-01-31 08:30:19.192 221554 DEBUG nova.compute.resource_tracker [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:30:19 np0005603609 nova_compute[221550]: 2026-01-31 08:30:19.192 221554 DEBUG nova.compute.resource_tracker [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:30:19 np0005603609 nova_compute[221550]: 2026-01-31 08:30:19.322 221554 DEBUG oslo_concurrency.processutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:30:19 np0005603609 nova_compute[221550]: 2026-01-31 08:30:19.722 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:30:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4186738958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:30:19 np0005603609 nova_compute[221550]: 2026-01-31 08:30:19.758 221554 DEBUG oslo_concurrency.processutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:30:19 np0005603609 nova_compute[221550]: 2026-01-31 08:30:19.764 221554 DEBUG nova.compute.provider_tree [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:30:20 np0005603609 nova_compute[221550]: 2026-01-31 08:30:20.106 221554 DEBUG nova.scheduler.client.report [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:30:20 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:30:20 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:30:20 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:30:20 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:30:20 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:30:20 np0005603609 nova_compute[221550]: 2026-01-31 08:30:20.432 221554 DEBUG nova.compute.resource_tracker [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:30:20 np0005603609 nova_compute[221550]: 2026-01-31 08:30:20.432 221554 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:20 np0005603609 nova_compute[221550]: 2026-01-31 08:30:20.438 221554 INFO nova.compute.manager [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Migrating instance to compute-0.ctlplane.example.com finished successfully.#033[00m
Jan 31 03:30:20 np0005603609 nova_compute[221550]: 2026-01-31 08:30:20.723 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848205.72214, 2b24a8d0-ad95-4460-acf1-0acb658330aa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:30:20 np0005603609 nova_compute[221550]: 2026-01-31 08:30:20.723 221554 INFO nova.compute.manager [-] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:30:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:30:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:20.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:30:20 np0005603609 nova_compute[221550]: 2026-01-31 08:30:20.843 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:20.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:20 np0005603609 nova_compute[221550]: 2026-01-31 08:30:20.899 221554 DEBUG nova.compute.manager [None req-f4146d7c-d2cf-47fc-83ca-c8b3783b05a6 - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:30:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:21 np0005603609 nova_compute[221550]: 2026-01-31 08:30:21.084 221554 INFO nova.scheduler.client.report [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Deleted allocation for migration 5dda00e8-f066-4d14-a228-a65e64dbc5de#033[00m
Jan 31 03:30:21 np0005603609 nova_compute[221550]: 2026-01-31 08:30:21.085 221554 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Jan 31 03:30:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:30:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:22.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:30:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:22.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:24 np0005603609 nova_compute[221550]: 2026-01-31 08:30:24.725 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:24.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:24.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:25 np0005603609 nova_compute[221550]: 2026-01-31 08:30:25.845 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:26.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:26.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:27 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:30:27 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:30:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:30:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:28.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:30:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:28.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:29 np0005603609 nova_compute[221550]: 2026-01-31 08:30:29.728 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e368 e368: 3 total, 3 up, 3 in
Jan 31 03:30:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:30:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:30.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:30:30 np0005603609 nova_compute[221550]: 2026-01-31 08:30:30.848 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:30.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:32.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:32.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e369 e369: 3 total, 3 up, 3 in
Jan 31 03:30:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:30:33Z|00744|binding|INFO|Releasing lport 0b9d56f1-a803-44f1-b709-3bfbc71e0f57 from this chassis (sb_readonly=0)
Jan 31 03:30:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:30:33Z|00745|binding|INFO|Releasing lport 6550f58c-c002-4c91-85a2-53b4c1c807b7 from this chassis (sb_readonly=0)
Jan 31 03:30:33 np0005603609 nova_compute[221550]: 2026-01-31 08:30:33.989 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:34 np0005603609 nova_compute[221550]: 2026-01-31 08:30:34.731 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:34.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:30:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:34.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:30:35 np0005603609 nova_compute[221550]: 2026-01-31 08:30:35.851 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:36.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:36.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:37 np0005603609 nova_compute[221550]: 2026-01-31 08:30:37.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:38.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:38.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:39 np0005603609 nova_compute[221550]: 2026-01-31 08:30:39.132 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:39.132 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:30:39 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:39.133 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:30:39 np0005603609 nova_compute[221550]: 2026-01-31 08:30:39.780 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:40 np0005603609 nova_compute[221550]: 2026-01-31 08:30:40.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:40.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:30:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:40.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:30:40 np0005603609 nova_compute[221550]: 2026-01-31 08:30:40.884 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:41.135 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:30:42 np0005603609 podman[291400]: 2026-01-31 08:30:42.165948813 +0000 UTC m=+0.047765138 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 03:30:42 np0005603609 podman[291399]: 2026-01-31 08:30:42.190857291 +0000 UTC m=+0.073738562 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Jan 31 03:30:42 np0005603609 nova_compute[221550]: 2026-01-31 08:30:42.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:42.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:30:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:42.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:30:43 np0005603609 nova_compute[221550]: 2026-01-31 08:30:43.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:43 np0005603609 nova_compute[221550]: 2026-01-31 08:30:43.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:30:44 np0005603609 nova_compute[221550]: 2026-01-31 08:30:44.151 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:30:44 np0005603609 nova_compute[221550]: 2026-01-31 08:30:44.152 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:30:44 np0005603609 nova_compute[221550]: 2026-01-31 08:30:44.152 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:30:44 np0005603609 nova_compute[221550]: 2026-01-31 08:30:44.783 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:30:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:44.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:30:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:30:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:44.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:30:45 np0005603609 nova_compute[221550]: 2026-01-31 08:30:45.886 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:46.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:30:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:46.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:30:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e370 e370: 3 total, 3 up, 3 in
Jan 31 03:30:48 np0005603609 nova_compute[221550]: 2026-01-31 08:30:48.491 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updating instance_info_cache with network_info: [{"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:30:48 np0005603609 nova_compute[221550]: 2026-01-31 08:30:48.746 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:30:48 np0005603609 nova_compute[221550]: 2026-01-31 08:30:48.747 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:30:48 np0005603609 nova_compute[221550]: 2026-01-31 08:30:48.747 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:48 np0005603609 nova_compute[221550]: 2026-01-31 08:30:48.748 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:30:48 np0005603609 nova_compute[221550]: 2026-01-31 08:30:48.748 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:30:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:48.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:30:48 np0005603609 nova_compute[221550]: 2026-01-31 08:30:48.862 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:48 np0005603609 nova_compute[221550]: 2026-01-31 08:30:48.863 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:48 np0005603609 nova_compute[221550]: 2026-01-31 08:30:48.863 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:48 np0005603609 nova_compute[221550]: 2026-01-31 08:30:48.863 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:30:48 np0005603609 nova_compute[221550]: 2026-01-31 08:30:48.864 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:30:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:48.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:30:49 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/573318407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:30:49 np0005603609 nova_compute[221550]: 2026-01-31 08:30:49.296 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:30:49 np0005603609 nova_compute[221550]: 2026-01-31 08:30:49.420 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:30:49 np0005603609 nova_compute[221550]: 2026-01-31 08:30:49.420 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:30:49 np0005603609 nova_compute[221550]: 2026-01-31 08:30:49.425 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:30:49 np0005603609 nova_compute[221550]: 2026-01-31 08:30:49.425 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:30:49 np0005603609 nova_compute[221550]: 2026-01-31 08:30:49.583 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:30:49 np0005603609 nova_compute[221550]: 2026-01-31 08:30:49.585 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3871MB free_disk=20.73931884765625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:30:49 np0005603609 nova_compute[221550]: 2026-01-31 08:30:49.585 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:49 np0005603609 nova_compute[221550]: 2026-01-31 08:30:49.585 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:49 np0005603609 nova_compute[221550]: 2026-01-31 08:30:49.785 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 8bf29da0-2e1a-4e89-bce8-b293a938c742 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:30:49 np0005603609 nova_compute[221550]: 2026-01-31 08:30:49.785 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 20d5fd84-de90-46b0-816e-0f378fd7d0c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:30:49 np0005603609 nova_compute[221550]: 2026-01-31 08:30:49.785 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:30:49 np0005603609 nova_compute[221550]: 2026-01-31 08:30:49.786 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:30:49 np0005603609 nova_compute[221550]: 2026-01-31 08:30:49.788 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:49 np0005603609 nova_compute[221550]: 2026-01-31 08:30:49.914 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:30:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:30:50 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3089461330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:30:50 np0005603609 nova_compute[221550]: 2026-01-31 08:30:50.327 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:30:50 np0005603609 nova_compute[221550]: 2026-01-31 08:30:50.334 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:30:50 np0005603609 nova_compute[221550]: 2026-01-31 08:30:50.380 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:30:50 np0005603609 nova_compute[221550]: 2026-01-31 08:30:50.462 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:30:50 np0005603609 nova_compute[221550]: 2026-01-31 08:30:50.462 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:50 np0005603609 nova_compute[221550]: 2026-01-31 08:30:50.469 221554 DEBUG oslo_concurrency.lockutils [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:50 np0005603609 nova_compute[221550]: 2026-01-31 08:30:50.469 221554 DEBUG oslo_concurrency.lockutils [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:50 np0005603609 nova_compute[221550]: 2026-01-31 08:30:50.469 221554 INFO nova.compute.manager [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Shelving#033[00m
Jan 31 03:30:50 np0005603609 nova_compute[221550]: 2026-01-31 08:30:50.534 221554 DEBUG nova.virt.libvirt.driver [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:30:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:50.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:50.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:50 np0005603609 nova_compute[221550]: 2026-01-31 08:30:50.931 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:51 np0005603609 nova_compute[221550]: 2026-01-31 08:30:51.133 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:52.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:52.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:53 np0005603609 kernel: tap54f55019-4a (unregistering): left promiscuous mode
Jan 31 03:30:53 np0005603609 NetworkManager[49064]: <info>  [1769848253.2474] device (tap54f55019-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:30:53 np0005603609 ovn_controller[130359]: 2026-01-31T08:30:53Z|00746|binding|INFO|Releasing lport 54f55019-4a06-4846-ad7a-5d67daa3e094 from this chassis (sb_readonly=0)
Jan 31 03:30:53 np0005603609 ovn_controller[130359]: 2026-01-31T08:30:53Z|00747|binding|INFO|Setting lport 54f55019-4a06-4846-ad7a-5d67daa3e094 down in Southbound
Jan 31 03:30:53 np0005603609 nova_compute[221550]: 2026-01-31 08:30:53.296 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:53 np0005603609 ovn_controller[130359]: 2026-01-31T08:30:53Z|00748|binding|INFO|Removing iface tap54f55019-4a ovn-installed in OVS
Jan 31 03:30:53 np0005603609 nova_compute[221550]: 2026-01-31 08:30:53.303 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:53.309 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:38:16 10.100.0.12'], port_security=['fa:16:3e:31:38:16 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '20d5fd84-de90-46b0-816e-0f378fd7d0c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '722ab2e9dd674709953be812d4c88493', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9520bea1-c264-4093-b5f2-d0e6aba13129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2519b9f-dcfa-4a01-b5ee-7b195f240dfa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=54f55019-4a06-4846-ad7a-5d67daa3e094) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:30:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:53.310 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 54f55019-4a06-4846-ad7a-5d67daa3e094 in datapath e14a8b1b-1e10-44db-9858-e5a0ae5f2476 unbound from our chassis#033[00m
Jan 31 03:30:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:53.312 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e14a8b1b-1e10-44db-9858-e5a0ae5f2476, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:30:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:53.313 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b8a99cd3-9bb6-452e-97dd-04e2b3909d6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:53.314 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 namespace which is not needed anymore#033[00m
Jan 31 03:30:53 np0005603609 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000a7.scope: Deactivated successfully.
Jan 31 03:30:53 np0005603609 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000a7.scope: Consumed 19.361s CPU time.
Jan 31 03:30:53 np0005603609 systemd-machined[190912]: Machine qemu-90-instance-000000a7 terminated.
Jan 31 03:30:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:30:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1006507322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:30:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:30:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1006507322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:30:53 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[289991]: [NOTICE]   (290011) : haproxy version is 2.8.14-c23fe91
Jan 31 03:30:53 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[289991]: [NOTICE]   (290011) : path to executable is /usr/sbin/haproxy
Jan 31 03:30:53 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[289991]: [WARNING]  (290011) : Exiting Master process...
Jan 31 03:30:53 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[289991]: [WARNING]  (290011) : Exiting Master process...
Jan 31 03:30:53 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[289991]: [ALERT]    (290011) : Current worker (290015) exited with code 143 (Terminated)
Jan 31 03:30:53 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[289991]: [WARNING]  (290011) : All workers exited. Exiting... (0)
Jan 31 03:30:53 np0005603609 systemd[1]: libpod-eb65e349e968dca389ff5178672733478199c432ee5a2835ffd8e129b30e047a.scope: Deactivated successfully.
Jan 31 03:30:53 np0005603609 podman[291511]: 2026-01-31 08:30:53.499532318 +0000 UTC m=+0.109746329 container died eb65e349e968dca389ff5178672733478199c432ee5a2835ffd8e129b30e047a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:30:53 np0005603609 nova_compute[221550]: 2026-01-31 08:30:53.550 221554 INFO nova.virt.libvirt.driver [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:30:53 np0005603609 nova_compute[221550]: 2026-01-31 08:30:53.555 221554 INFO nova.virt.libvirt.driver [-] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Instance destroyed successfully.#033[00m
Jan 31 03:30:53 np0005603609 nova_compute[221550]: 2026-01-31 08:30:53.556 221554 DEBUG nova.objects.instance [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'numa_topology' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:30:53 np0005603609 nova_compute[221550]: 2026-01-31 08:30:53.970 221554 DEBUG nova.compute.manager [req-553c03bb-53dd-4c73-9b7a-ec7cf6ff125b req-67cbdf5a-1617-4910-8c35-3773cc569eef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-vif-unplugged-54f55019-4a06-4846-ad7a-5d67daa3e094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:30:53 np0005603609 nova_compute[221550]: 2026-01-31 08:30:53.971 221554 DEBUG oslo_concurrency.lockutils [req-553c03bb-53dd-4c73-9b7a-ec7cf6ff125b req-67cbdf5a-1617-4910-8c35-3773cc569eef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:53 np0005603609 nova_compute[221550]: 2026-01-31 08:30:53.971 221554 DEBUG oslo_concurrency.lockutils [req-553c03bb-53dd-4c73-9b7a-ec7cf6ff125b req-67cbdf5a-1617-4910-8c35-3773cc569eef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:53 np0005603609 nova_compute[221550]: 2026-01-31 08:30:53.971 221554 DEBUG oslo_concurrency.lockutils [req-553c03bb-53dd-4c73-9b7a-ec7cf6ff125b req-67cbdf5a-1617-4910-8c35-3773cc569eef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:53 np0005603609 nova_compute[221550]: 2026-01-31 08:30:53.972 221554 DEBUG nova.compute.manager [req-553c03bb-53dd-4c73-9b7a-ec7cf6ff125b req-67cbdf5a-1617-4910-8c35-3773cc569eef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] No waiting events found dispatching network-vif-unplugged-54f55019-4a06-4846-ad7a-5d67daa3e094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:30:53 np0005603609 nova_compute[221550]: 2026-01-31 08:30:53.972 221554 WARNING nova.compute.manager [req-553c03bb-53dd-4c73-9b7a-ec7cf6ff125b req-67cbdf5a-1617-4910-8c35-3773cc569eef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received unexpected event network-vif-unplugged-54f55019-4a06-4846-ad7a-5d67daa3e094 for instance with vm_state active and task_state shelving.#033[00m
Jan 31 03:30:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e371 e371: 3 total, 3 up, 3 in
Jan 31 03:30:54 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb65e349e968dca389ff5178672733478199c432ee5a2835ffd8e129b30e047a-userdata-shm.mount: Deactivated successfully.
Jan 31 03:30:54 np0005603609 systemd[1]: var-lib-containers-storage-overlay-0454effab9d4bd52bcad53ff17348235ee8ae6105715b86e8b64dfe082b9ff19-merged.mount: Deactivated successfully.
Jan 31 03:30:54 np0005603609 podman[291511]: 2026-01-31 08:30:54.241533162 +0000 UTC m=+0.851747173 container cleanup eb65e349e968dca389ff5178672733478199c432ee5a2835ffd8e129b30e047a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:30:54 np0005603609 systemd[1]: libpod-conmon-eb65e349e968dca389ff5178672733478199c432ee5a2835ffd8e129b30e047a.scope: Deactivated successfully.
Jan 31 03:30:54 np0005603609 nova_compute[221550]: 2026-01-31 08:30:54.309 221554 INFO nova.virt.libvirt.driver [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Beginning cold snapshot process#033[00m
Jan 31 03:30:54 np0005603609 podman[291551]: 2026-01-31 08:30:54.536508528 +0000 UTC m=+0.271700593 container remove eb65e349e968dca389ff5178672733478199c432ee5a2835ffd8e129b30e047a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:30:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:54.542 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[74bda154-eba0-4e0f-868f-27f338b5d972]: (4, ('Sat Jan 31 08:30:53 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 (eb65e349e968dca389ff5178672733478199c432ee5a2835ffd8e129b30e047a)\neb65e349e968dca389ff5178672733478199c432ee5a2835ffd8e129b30e047a\nSat Jan 31 08:30:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 (eb65e349e968dca389ff5178672733478199c432ee5a2835ffd8e129b30e047a)\neb65e349e968dca389ff5178672733478199c432ee5a2835ffd8e129b30e047a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:54.545 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc55e37-a449-4027-8ffd-0e70d7440518]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:54.546 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape14a8b1b-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:30:54 np0005603609 kernel: tape14a8b1b-10: left promiscuous mode
Jan 31 03:30:54 np0005603609 nova_compute[221550]: 2026-01-31 08:30:54.549 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:54 np0005603609 nova_compute[221550]: 2026-01-31 08:30:54.558 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:54.562 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8f21ee44-309b-4deb-b34d-2c8c885b3c6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:54.578 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[99fedd9b-61a9-45b6-9d2a-6cac2218ed8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:54.582 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[471d9360-d505-4aaa-a3bc-9ee61898ce22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:54.598 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb153ce-c130-4c95-ba6c-c7b02d93415a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833752, 'reachable_time': 29912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291586, 'error': None, 'target': 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:54 np0005603609 systemd[1]: run-netns-ovnmeta\x2de14a8b1b\x2d1e10\x2d44db\x2d9858\x2de5a0ae5f2476.mount: Deactivated successfully.
Jan 31 03:30:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:54.604 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:30:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:30:54.604 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[e84729fe-d4fd-474b-b368-4da4093e3a60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:54 np0005603609 nova_compute[221550]: 2026-01-31 08:30:54.638 221554 DEBUG nova.virt.libvirt.imagebackend [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:30:54 np0005603609 nova_compute[221550]: 2026-01-31 08:30:54.790 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:54.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:54.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:55 np0005603609 nova_compute[221550]: 2026-01-31 08:30:55.222 221554 DEBUG nova.storage.rbd_utils [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] creating snapshot(1a8c026640a74c38bfd678a318f2de42) on rbd image(20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:30:55 np0005603609 nova_compute[221550]: 2026-01-31 08:30:55.494 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:55 np0005603609 nova_compute[221550]: 2026-01-31 08:30:55.495 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:55 np0005603609 nova_compute[221550]: 2026-01-31 08:30:55.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:55 np0005603609 nova_compute[221550]: 2026-01-31 08:30:55.933 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:56 np0005603609 nova_compute[221550]: 2026-01-31 08:30:56.196 221554 DEBUG nova.compute.manager [req-6b6d6ef0-eded-43d0-9c2f-edcb634ca015 req-93e9f8cd-9448-4f80-86c6-b0cc484e0284 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:30:56 np0005603609 nova_compute[221550]: 2026-01-31 08:30:56.197 221554 DEBUG oslo_concurrency.lockutils [req-6b6d6ef0-eded-43d0-9c2f-edcb634ca015 req-93e9f8cd-9448-4f80-86c6-b0cc484e0284 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:56 np0005603609 nova_compute[221550]: 2026-01-31 08:30:56.197 221554 DEBUG oslo_concurrency.lockutils [req-6b6d6ef0-eded-43d0-9c2f-edcb634ca015 req-93e9f8cd-9448-4f80-86c6-b0cc484e0284 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:56 np0005603609 nova_compute[221550]: 2026-01-31 08:30:56.197 221554 DEBUG oslo_concurrency.lockutils [req-6b6d6ef0-eded-43d0-9c2f-edcb634ca015 req-93e9f8cd-9448-4f80-86c6-b0cc484e0284 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:56 np0005603609 nova_compute[221550]: 2026-01-31 08:30:56.197 221554 DEBUG nova.compute.manager [req-6b6d6ef0-eded-43d0-9c2f-edcb634ca015 req-93e9f8cd-9448-4f80-86c6-b0cc484e0284 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] No waiting events found dispatching network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:30:56 np0005603609 nova_compute[221550]: 2026-01-31 08:30:56.197 221554 WARNING nova.compute.manager [req-6b6d6ef0-eded-43d0-9c2f-edcb634ca015 req-93e9f8cd-9448-4f80-86c6-b0cc484e0284 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received unexpected event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Jan 31 03:30:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e372 e372: 3 total, 3 up, 3 in
Jan 31 03:30:56 np0005603609 nova_compute[221550]: 2026-01-31 08:30:56.770 221554 DEBUG nova.storage.rbd_utils [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] cloning vms/20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk@1a8c026640a74c38bfd678a318f2de42 to images/1ba15ea8-5b3a-4a00-83dd-a3e208812118 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:30:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:56.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:30:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:56.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:30:56 np0005603609 nova_compute[221550]: 2026-01-31 08:30:56.981 221554 DEBUG nova.storage.rbd_utils [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] flattening images/1ba15ea8-5b3a-4a00-83dd-a3e208812118 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:30:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e373 e373: 3 total, 3 up, 3 in
Jan 31 03:30:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:58.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:58 np0005603609 nova_compute[221550]: 2026-01-31 08:30:58.933 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:30:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:58.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:59 np0005603609 nova_compute[221550]: 2026-01-31 08:30:59.791 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:59 np0005603609 nova_compute[221550]: 2026-01-31 08:30:59.803 221554 DEBUG nova.storage.rbd_utils [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] removing snapshot(1a8c026640a74c38bfd678a318f2de42) on rbd image(20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:31:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:00.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:00.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e374 e374: 3 total, 3 up, 3 in
Jan 31 03:31:00 np0005603609 nova_compute[221550]: 2026-01-31 08:31:00.971 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:00 np0005603609 nova_compute[221550]: 2026-01-31 08:31:00.993 221554 DEBUG nova.storage.rbd_utils [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] creating snapshot(snap) on rbd image(1ba15ea8-5b3a-4a00-83dd-a3e208812118) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:31:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:02.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:31:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:02.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:31:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e375 e375: 3 total, 3 up, 3 in
Jan 31 03:31:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:31:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2377867101' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:31:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:31:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2377867101' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:31:04 np0005603609 nova_compute[221550]: 2026-01-31 08:31:04.793 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:04.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:04.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:05 np0005603609 nova_compute[221550]: 2026-01-31 08:31:05.973 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:06.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:06.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:07 np0005603609 nova_compute[221550]: 2026-01-31 08:31:07.306 221554 INFO nova.virt.libvirt.driver [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Snapshot image upload complete#033[00m
Jan 31 03:31:07 np0005603609 nova_compute[221550]: 2026-01-31 08:31:07.308 221554 DEBUG nova.compute.manager [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:31:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:07.523 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:07.523 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:07.524 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e376 e376: 3 total, 3 up, 3 in
Jan 31 03:31:08 np0005603609 nova_compute[221550]: 2026-01-31 08:31:08.235 221554 INFO nova.compute.manager [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Shelve offloading#033[00m
Jan 31 03:31:08 np0005603609 nova_compute[221550]: 2026-01-31 08:31:08.242 221554 INFO nova.virt.libvirt.driver [-] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Instance destroyed successfully.#033[00m
Jan 31 03:31:08 np0005603609 nova_compute[221550]: 2026-01-31 08:31:08.242 221554 DEBUG nova.compute.manager [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:31:08 np0005603609 nova_compute[221550]: 2026-01-31 08:31:08.245 221554 DEBUG oslo_concurrency.lockutils [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:31:08 np0005603609 nova_compute[221550]: 2026-01-31 08:31:08.245 221554 DEBUG oslo_concurrency.lockutils [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquired lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:31:08 np0005603609 nova_compute[221550]: 2026-01-31 08:31:08.245 221554 DEBUG nova.network.neutron [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:31:08 np0005603609 nova_compute[221550]: 2026-01-31 08:31:08.523 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848253.5222535, 20d5fd84-de90-46b0-816e-0f378fd7d0c7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:31:08 np0005603609 nova_compute[221550]: 2026-01-31 08:31:08.524 221554 INFO nova.compute.manager [-] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:31:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:08.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:08.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:09 np0005603609 nova_compute[221550]: 2026-01-31 08:31:09.294 221554 DEBUG nova.compute.manager [None req-6242889b-f1e7-4da5-a41f-88862862777c - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:31:09 np0005603609 nova_compute[221550]: 2026-01-31 08:31:09.302 221554 DEBUG nova.compute.manager [None req-6242889b-f1e7-4da5-a41f-88862862777c - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:31:09 np0005603609 nova_compute[221550]: 2026-01-31 08:31:09.795 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:09 np0005603609 nova_compute[221550]: 2026-01-31 08:31:09.960 221554 INFO nova.compute.manager [None req-6242889b-f1e7-4da5-a41f-88862862777c - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
Jan 31 03:31:10 np0005603609 nova_compute[221550]: 2026-01-31 08:31:10.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:10.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:10.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:11 np0005603609 nova_compute[221550]: 2026-01-31 08:31:11.015 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:11 np0005603609 nova_compute[221550]: 2026-01-31 08:31:11.316 221554 DEBUG nova.network.neutron [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updating instance_info_cache with network_info: [{"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:31:11 np0005603609 nova_compute[221550]: 2026-01-31 08:31:11.791 221554 DEBUG oslo_concurrency.lockutils [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Releasing lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:31:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:12.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:12.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:13 np0005603609 podman[291714]: 2026-01-31 08:31:13.182448823 +0000 UTC m=+0.063040388 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 03:31:13 np0005603609 podman[291713]: 2026-01-31 08:31:13.199747632 +0000 UTC m=+0.079493276 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 03:31:14 np0005603609 nova_compute[221550]: 2026-01-31 08:31:14.797 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:14.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:14.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:16 np0005603609 nova_compute[221550]: 2026-01-31 08:31:16.016 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:16 np0005603609 nova_compute[221550]: 2026-01-31 08:31:16.654 221554 INFO nova.virt.libvirt.driver [-] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Instance destroyed successfully.#033[00m
Jan 31 03:31:16 np0005603609 nova_compute[221550]: 2026-01-31 08:31:16.655 221554 DEBUG nova.objects.instance [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'resources' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:31:16 np0005603609 nova_compute[221550]: 2026-01-31 08:31:16.717 221554 DEBUG nova.virt.libvirt.vif [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:27:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-711445957',display_name='tempest-ServersNegativeTestJSON-server-711445957',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-711445957',id=167,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:28:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='722ab2e9dd674709953be812d4c88493',ramdisk_id='',reservation_id='r-48kv7dus',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1810683467',owner_user_name='tempest-ServersNegativeTestJSON-1810683467-project-member',shelved_at='2026-01-31T08:31:07.307846',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='1ba15ea8-5b3a-4a00-83dd-a3e208812118'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:30:54Z,user_data=None,user_id='eac51187531841e2891fc5d3c5f84123',uuid=20d5fd84-de90-46b0-816e-0f378fd7d0c7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:31:16 np0005603609 nova_compute[221550]: 2026-01-31 08:31:16.718 221554 DEBUG nova.network.os_vif_util [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converting VIF {"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:31:16 np0005603609 nova_compute[221550]: 2026-01-31 08:31:16.719 221554 DEBUG nova.network.os_vif_util [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:31:16 np0005603609 nova_compute[221550]: 2026-01-31 08:31:16.720 221554 DEBUG os_vif [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:31:16 np0005603609 nova_compute[221550]: 2026-01-31 08:31:16.721 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:16 np0005603609 nova_compute[221550]: 2026-01-31 08:31:16.722 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54f55019-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:16 np0005603609 nova_compute[221550]: 2026-01-31 08:31:16.723 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:16 np0005603609 nova_compute[221550]: 2026-01-31 08:31:16.725 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:16 np0005603609 nova_compute[221550]: 2026-01-31 08:31:16.728 221554 INFO os_vif [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a')#033[00m
Jan 31 03:31:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:16.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:16.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:18.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:18.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:19 np0005603609 nova_compute[221550]: 2026-01-31 08:31:19.005 221554 DEBUG nova.compute.manager [req-b3b23612-3218-4df2-bd8d-e866cb90fa35 req-30eeaa3a-1776-4773-b232-420d9e0bf9ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-changed-54f55019-4a06-4846-ad7a-5d67daa3e094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:31:19 np0005603609 nova_compute[221550]: 2026-01-31 08:31:19.005 221554 DEBUG nova.compute.manager [req-b3b23612-3218-4df2-bd8d-e866cb90fa35 req-30eeaa3a-1776-4773-b232-420d9e0bf9ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Refreshing instance network info cache due to event network-changed-54f55019-4a06-4846-ad7a-5d67daa3e094. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:31:19 np0005603609 nova_compute[221550]: 2026-01-31 08:31:19.005 221554 DEBUG oslo_concurrency.lockutils [req-b3b23612-3218-4df2-bd8d-e866cb90fa35 req-30eeaa3a-1776-4773-b232-420d9e0bf9ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:31:19 np0005603609 nova_compute[221550]: 2026-01-31 08:31:19.006 221554 DEBUG oslo_concurrency.lockutils [req-b3b23612-3218-4df2-bd8d-e866cb90fa35 req-30eeaa3a-1776-4773-b232-420d9e0bf9ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:31:19 np0005603609 nova_compute[221550]: 2026-01-31 08:31:19.006 221554 DEBUG nova.network.neutron [req-b3b23612-3218-4df2-bd8d-e866cb90fa35 req-30eeaa3a-1776-4773-b232-420d9e0bf9ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Refreshing network info cache for port 54f55019-4a06-4846-ad7a-5d67daa3e094 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:31:19 np0005603609 nova_compute[221550]: 2026-01-31 08:31:19.224 221554 INFO nova.virt.libvirt.driver [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Deleting instance files /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7_del#033[00m
Jan 31 03:31:19 np0005603609 nova_compute[221550]: 2026-01-31 08:31:19.225 221554 INFO nova.virt.libvirt.driver [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Deletion of /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7_del complete#033[00m
Jan 31 03:31:20 np0005603609 nova_compute[221550]: 2026-01-31 08:31:20.201 221554 INFO nova.scheduler.client.report [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Deleted allocations for instance 20d5fd84-de90-46b0-816e-0f378fd7d0c7#033[00m
Jan 31 03:31:20 np0005603609 nova_compute[221550]: 2026-01-31 08:31:20.892 221554 DEBUG oslo_concurrency.lockutils [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:20 np0005603609 nova_compute[221550]: 2026-01-31 08:31:20.892 221554 DEBUG oslo_concurrency.lockutils [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:20.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:20 np0005603609 nova_compute[221550]: 2026-01-31 08:31:20.939 221554 DEBUG oslo_concurrency.processutils [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:20.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:21 np0005603609 nova_compute[221550]: 2026-01-31 08:31:21.019 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:31:21 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3503097286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:31:21 np0005603609 nova_compute[221550]: 2026-01-31 08:31:21.376 221554 DEBUG oslo_concurrency.processutils [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:21 np0005603609 nova_compute[221550]: 2026-01-31 08:31:21.382 221554 DEBUG nova.compute.provider_tree [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:31:21 np0005603609 nova_compute[221550]: 2026-01-31 08:31:21.612 221554 DEBUG nova.scheduler.client.report [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:31:21 np0005603609 nova_compute[221550]: 2026-01-31 08:31:21.725 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:22 np0005603609 nova_compute[221550]: 2026-01-31 08:31:22.132 221554 DEBUG oslo_concurrency.lockutils [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:22 np0005603609 nova_compute[221550]: 2026-01-31 08:31:22.367 221554 DEBUG oslo_concurrency.lockutils [None req-2d85c1fe-af61-45f0-bfbb-2052675739e2 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 31.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:22 np0005603609 nova_compute[221550]: 2026-01-31 08:31:22.629 221554 DEBUG nova.network.neutron [req-b3b23612-3218-4df2-bd8d-e866cb90fa35 req-30eeaa3a-1776-4773-b232-420d9e0bf9ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updated VIF entry in instance network info cache for port 54f55019-4a06-4846-ad7a-5d67daa3e094. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:31:22 np0005603609 nova_compute[221550]: 2026-01-31 08:31:22.631 221554 DEBUG nova.network.neutron [req-b3b23612-3218-4df2-bd8d-e866cb90fa35 req-30eeaa3a-1776-4773-b232-420d9e0bf9ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updating instance_info_cache with network_info: [{"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": null, "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap54f55019-4a", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:31:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:31:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:22.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:31:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:22.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:23 np0005603609 nova_compute[221550]: 2026-01-31 08:31:23.212 221554 DEBUG oslo_concurrency.lockutils [req-b3b23612-3218-4df2-bd8d-e866cb90fa35 req-30eeaa3a-1776-4773-b232-420d9e0bf9ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:31:23 np0005603609 nova_compute[221550]: 2026-01-31 08:31:23.341 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:23.341 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:31:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:23.342 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:31:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:24.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:24.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:26 np0005603609 nova_compute[221550]: 2026-01-31 08:31:26.020 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:31:26 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3568431585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:31:26 np0005603609 nova_compute[221550]: 2026-01-31 08:31:26.768 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:26.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:26.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:31:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:31:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:31:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:28.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:31:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:28.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:31:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:29.344 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:31:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:31:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:31:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:31:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:31:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:31:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:30.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:30.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:31 np0005603609 nova_compute[221550]: 2026-01-31 08:31:31.022 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:31 np0005603609 nova_compute[221550]: 2026-01-31 08:31:31.770 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:32.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:32 np0005603609 nova_compute[221550]: 2026-01-31 08:31:32.945 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:32 np0005603609 nova_compute[221550]: 2026-01-31 08:31:32.947 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:32 np0005603609 nova_compute[221550]: 2026-01-31 08:31:32.947 221554 INFO nova.compute.manager [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Unshelving#033[00m
Jan 31 03:31:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:31:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:32.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:31:33 np0005603609 nova_compute[221550]: 2026-01-31 08:31:33.101 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:33 np0005603609 nova_compute[221550]: 2026-01-31 08:31:33.102 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:33 np0005603609 nova_compute[221550]: 2026-01-31 08:31:33.106 221554 DEBUG nova.objects.instance [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'pci_requests' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:31:33 np0005603609 nova_compute[221550]: 2026-01-31 08:31:33.129 221554 DEBUG nova.objects.instance [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'numa_topology' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:31:33 np0005603609 nova_compute[221550]: 2026-01-31 08:31:33.160 221554 DEBUG nova.virt.hardware [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:31:33 np0005603609 nova_compute[221550]: 2026-01-31 08:31:33.161 221554 INFO nova.compute.claims [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:31:33 np0005603609 nova_compute[221550]: 2026-01-31 08:31:33.380 221554 DEBUG oslo_concurrency.processutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:31:33 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/245584937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:31:33 np0005603609 nova_compute[221550]: 2026-01-31 08:31:33.894 221554 DEBUG oslo_concurrency.processutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:33 np0005603609 nova_compute[221550]: 2026-01-31 08:31:33.901 221554 DEBUG nova.compute.provider_tree [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:31:33 np0005603609 nova_compute[221550]: 2026-01-31 08:31:33.950 221554 DEBUG nova.scheduler.client.report [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:31:34 np0005603609 nova_compute[221550]: 2026-01-31 08:31:34.011 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:34 np0005603609 nova_compute[221550]: 2026-01-31 08:31:34.300 221554 INFO nova.network.neutron [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updating port 54f55019-4a06-4846-ad7a-5d67daa3e094 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 31 03:31:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:34.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:34.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:36 np0005603609 nova_compute[221550]: 2026-01-31 08:31:36.026 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:36 np0005603609 nova_compute[221550]: 2026-01-31 08:31:36.315 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:31:36 np0005603609 nova_compute[221550]: 2026-01-31 08:31:36.315 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquired lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:31:36 np0005603609 nova_compute[221550]: 2026-01-31 08:31:36.316 221554 DEBUG nova.network.neutron [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:31:36 np0005603609 nova_compute[221550]: 2026-01-31 08:31:36.773 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:36.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:31:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:37.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:31:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:31:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:31:37 np0005603609 nova_compute[221550]: 2026-01-31 08:31:37.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:38.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:39.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:39 np0005603609 nova_compute[221550]: 2026-01-31 08:31:39.029 221554 DEBUG nova.compute.manager [req-2b86b6a8-8460-40b4-8a2e-421b37ecc88b req-0bd223cd-a829-4227-a14c-22730cf3584f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-changed-54f55019-4a06-4846-ad7a-5d67daa3e094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:31:39 np0005603609 nova_compute[221550]: 2026-01-31 08:31:39.030 221554 DEBUG nova.compute.manager [req-2b86b6a8-8460-40b4-8a2e-421b37ecc88b req-0bd223cd-a829-4227-a14c-22730cf3584f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Refreshing instance network info cache due to event network-changed-54f55019-4a06-4846-ad7a-5d67daa3e094. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:31:39 np0005603609 nova_compute[221550]: 2026-01-31 08:31:39.030 221554 DEBUG oslo_concurrency.lockutils [req-2b86b6a8-8460-40b4-8a2e-421b37ecc88b req-0bd223cd-a829-4227-a14c-22730cf3584f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:31:39 np0005603609 nova_compute[221550]: 2026-01-31 08:31:39.317 221554 DEBUG nova.network.neutron [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updating instance_info_cache with network_info: [{"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:31:39 np0005603609 nova_compute[221550]: 2026-01-31 08:31:39.546 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Releasing lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:31:39 np0005603609 nova_compute[221550]: 2026-01-31 08:31:39.548 221554 DEBUG nova.virt.libvirt.driver [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:31:39 np0005603609 nova_compute[221550]: 2026-01-31 08:31:39.548 221554 INFO nova.virt.libvirt.driver [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Creating image(s)#033[00m
Jan 31 03:31:39 np0005603609 nova_compute[221550]: 2026-01-31 08:31:39.573 221554 DEBUG nova.storage.rbd_utils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:31:39 np0005603609 nova_compute[221550]: 2026-01-31 08:31:39.578 221554 DEBUG nova.objects.instance [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:31:39 np0005603609 nova_compute[221550]: 2026-01-31 08:31:39.579 221554 DEBUG oslo_concurrency.lockutils [req-2b86b6a8-8460-40b4-8a2e-421b37ecc88b req-0bd223cd-a829-4227-a14c-22730cf3584f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:31:39 np0005603609 nova_compute[221550]: 2026-01-31 08:31:39.580 221554 DEBUG nova.network.neutron [req-2b86b6a8-8460-40b4-8a2e-421b37ecc88b req-0bd223cd-a829-4227-a14c-22730cf3584f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Refreshing network info cache for port 54f55019-4a06-4846-ad7a-5d67daa3e094 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:31:39 np0005603609 nova_compute[221550]: 2026-01-31 08:31:39.698 221554 DEBUG nova.storage.rbd_utils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:31:39 np0005603609 nova_compute[221550]: 2026-01-31 08:31:39.762 221554 DEBUG nova.storage.rbd_utils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:31:39 np0005603609 nova_compute[221550]: 2026-01-31 08:31:39.765 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "44e600bd165e6f3906d74a3af3930d76d54415d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:39 np0005603609 nova_compute[221550]: 2026-01-31 08:31:39.766 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "44e600bd165e6f3906d74a3af3930d76d54415d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.281 221554 DEBUG nova.virt.libvirt.imagebackend [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/1ba15ea8-5b3a-4a00-83dd-a3e208812118/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/1ba15ea8-5b3a-4a00-83dd-a3e208812118/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.332 221554 DEBUG nova.virt.libvirt.imagebackend [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Selected location: {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/1ba15ea8-5b3a-4a00-83dd-a3e208812118/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.332 221554 DEBUG nova.storage.rbd_utils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] cloning images/1ba15ea8-5b3a-4a00-83dd-a3e208812118@snap to None/20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.467 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "44e600bd165e6f3906d74a3af3930d76d54415d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.634 221554 DEBUG nova.objects.instance [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'migration_context' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.668 221554 DEBUG oslo_concurrency.lockutils [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.669 221554 DEBUG oslo_concurrency.lockutils [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.669 221554 DEBUG oslo_concurrency.lockutils [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.669 221554 DEBUG oslo_concurrency.lockutils [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.669 221554 DEBUG oslo_concurrency.lockutils [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.670 221554 INFO nova.compute.manager [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Terminating instance#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.671 221554 DEBUG nova.compute.manager [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:31:40 np0005603609 kernel: tap2b86ae61-4c (unregistering): left promiscuous mode
Jan 31 03:31:40 np0005603609 NetworkManager[49064]: <info>  [1769848300.7339] device (tap2b86ae61-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.736 221554 DEBUG nova.storage.rbd_utils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] flattening vms/20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:31:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:31:40Z|00749|binding|INFO|Releasing lport 2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 from this chassis (sb_readonly=0)
Jan 31 03:31:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:31:40Z|00750|binding|INFO|Setting lport 2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 down in Southbound
Jan 31 03:31:40 np0005603609 ovn_controller[130359]: 2026-01-31T08:31:40Z|00751|binding|INFO|Removing iface tap2b86ae61-4c ovn-installed in OVS
Jan 31 03:31:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:40.749 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:fa:8d 10.100.0.6'], port_security=['fa:16:3e:85:fa:8d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8bf29da0-2e1a-4e89-bce8-b293a938c742', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbdbdee526499e90da7e971ede68d3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8d5863f6-4aa0-486a-96ed-eb36f7d4a61d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379aaecc-3dde-4f00-82cf-dc8bd8367d4b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:31:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:40.750 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 in datapath 26ad6a8f-33d5-432e-83d3-63a9d2f165ad unbound from our chassis#033[00m
Jan 31 03:31:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:40.751 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26ad6a8f-33d5-432e-83d3-63a9d2f165ad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:31:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:40.754 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[137119df-7914-4a6b-ab40-c0a4e408efe5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:40 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:40.756 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad namespace which is not needed anymore#033[00m
Jan 31 03:31:40 np0005603609 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d0000009d.scope: Deactivated successfully.
Jan 31 03:31:40 np0005603609 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d0000009d.scope: Consumed 27.065s CPU time.
Jan 31 03:31:40 np0005603609 systemd-machined[190912]: Machine qemu-84-instance-0000009d terminated.
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.818 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:40 np0005603609 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[286725]: [NOTICE]   (286730) : haproxy version is 2.8.14-c23fe91
Jan 31 03:31:40 np0005603609 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[286725]: [NOTICE]   (286730) : path to executable is /usr/sbin/haproxy
Jan 31 03:31:40 np0005603609 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[286725]: [ALERT]    (286730) : Current worker (286732) exited with code 143 (Terminated)
Jan 31 03:31:40 np0005603609 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[286725]: [WARNING]  (286730) : All workers exited. Exiting... (0)
Jan 31 03:31:40 np0005603609 systemd[1]: libpod-8259dcf8c146c34f6db6bb389dcbbb5c7a50ba1f643290a1c3e7d2e0ea089d37.scope: Deactivated successfully.
Jan 31 03:31:40 np0005603609 conmon[286725]: conmon 8259dcf8c146c34f6db6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8259dcf8c146c34f6db6bb389dcbbb5c7a50ba1f643290a1c3e7d2e0ea089d37.scope/container/memory.events
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.909 221554 INFO nova.virt.libvirt.driver [-] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Instance destroyed successfully.#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.910 221554 DEBUG nova.objects.instance [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'resources' on Instance uuid 8bf29da0-2e1a-4e89-bce8-b293a938c742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:31:40 np0005603609 podman[292362]: 2026-01-31 08:31:40.911612419 +0000 UTC m=+0.066333819 container died 8259dcf8c146c34f6db6bb389dcbbb5c7a50ba1f643290a1c3e7d2e0ea089d37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:31:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.926 221554 DEBUG nova.virt.libvirt.vif [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:25:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-0',id=157,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:25:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-sg9y73tv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',
image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:25:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=8bf29da0-2e1a-4e89-bce8-b293a938c742,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "address": "fa:16:3e:85:fa:8d", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b86ae61-4c", "ovs_interfaceid": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.927 221554 DEBUG nova.network.os_vif_util [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "address": "fa:16:3e:85:fa:8d", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b86ae61-4c", "ovs_interfaceid": "2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:31:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:31:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:40.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.927 221554 DEBUG nova.network.os_vif_util [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:fa:8d,bridge_name='br-int',has_traffic_filtering=True,id=2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b86ae61-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.928 221554 DEBUG os_vif [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:fa:8d,bridge_name='br-int',has_traffic_filtering=True,id=2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b86ae61-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.930 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.930 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b86ae61-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.957 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.959 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:40 np0005603609 nova_compute[221550]: 2026-01-31 08:31:40.962 221554 INFO os_vif [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:fa:8d,bridge_name='br-int',has_traffic_filtering=True,id=2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b86ae61-4c')#033[00m
Jan 31 03:31:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:41.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:41 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8259dcf8c146c34f6db6bb389dcbbb5c7a50ba1f643290a1c3e7d2e0ea089d37-userdata-shm.mount: Deactivated successfully.
Jan 31 03:31:41 np0005603609 systemd[1]: var-lib-containers-storage-overlay-d1302cd6881f7a463c3ea1971cf11568d54399ff56afcd5de2f77193f073bd78-merged.mount: Deactivated successfully.
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.029 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:41 np0005603609 podman[292362]: 2026-01-31 08:31:41.065400825 +0000 UTC m=+0.220122225 container cleanup 8259dcf8c146c34f6db6bb389dcbbb5c7a50ba1f643290a1c3e7d2e0ea089d37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:31:41 np0005603609 podman[292419]: 2026-01-31 08:31:41.161518792 +0000 UTC m=+0.079044835 container remove 8259dcf8c146c34f6db6bb389dcbbb5c7a50ba1f643290a1c3e7d2e0ea089d37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:41.168 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[02ba4f89-8d62-4987-acd5-7a7b186adb19]: (4, ('Sat Jan 31 08:31:40 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad (8259dcf8c146c34f6db6bb389dcbbb5c7a50ba1f643290a1c3e7d2e0ea089d37)\n8259dcf8c146c34f6db6bb389dcbbb5c7a50ba1f643290a1c3e7d2e0ea089d37\nSat Jan 31 08:31:41 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad (8259dcf8c146c34f6db6bb389dcbbb5c7a50ba1f643290a1c3e7d2e0ea089d37)\n8259dcf8c146c34f6db6bb389dcbbb5c7a50ba1f643290a1c3e7d2e0ea089d37\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:41 np0005603609 systemd[1]: libpod-conmon-8259dcf8c146c34f6db6bb389dcbbb5c7a50ba1f643290a1c3e7d2e0ea089d37.scope: Deactivated successfully.
Jan 31 03:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:41.170 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7b541590-664a-42ef-981a-fa212790a630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:41.171 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26ad6a8f-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.173 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:41 np0005603609 kernel: tap26ad6a8f-30: left promiscuous mode
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.178 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:41.180 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[af0444ac-a08d-4815-8515-27d11fead126]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:41.194 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5815b64d-4efb-456d-846a-ab401d5ce1a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:41.195 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0014cbc3-2009-4693-9e11-cd103880621f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:41.209 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[429ec66e-0940-4abd-aa41-86d013da8d8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817084, 'reachable_time': 26806, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292434, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:41.211 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:31:41 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:41.211 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[60a4378e-6c1a-4610-b8ae-f75cfeae5fb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:41 np0005603609 systemd[1]: run-netns-ovnmeta\x2d26ad6a8f\x2d33d5\x2d432e\x2d83d3\x2d63a9d2f165ad.mount: Deactivated successfully.
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.224 221554 DEBUG nova.virt.libvirt.driver [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Image rbd:vms/20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.225 221554 DEBUG nova.virt.libvirt.driver [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.225 221554 DEBUG nova.virt.libvirt.driver [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Ensure instance console log exists: /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.226 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.226 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.227 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.229 221554 DEBUG nova.virt.libvirt.driver [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Start _get_guest_xml network_info=[{"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T08:30:50Z,direct_url=<?>,disk_format='raw',id=1ba15ea8-5b3a-4a00-83dd-a3e208812118,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-711445957-shelved',owner='722ab2e9dd674709953be812d4c88493',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T08:31:06Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.234 221554 WARNING nova.virt.libvirt.driver [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.241 221554 DEBUG nova.virt.libvirt.host [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.243 221554 DEBUG nova.virt.libvirt.host [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.247 221554 DEBUG nova.virt.libvirt.host [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.247 221554 DEBUG nova.virt.libvirt.host [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.249 221554 DEBUG nova.virt.libvirt.driver [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.249 221554 DEBUG nova.virt.hardware [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T08:30:50Z,direct_url=<?>,disk_format='raw',id=1ba15ea8-5b3a-4a00-83dd-a3e208812118,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-711445957-shelved',owner='722ab2e9dd674709953be812d4c88493',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T08:31:06Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.250 221554 DEBUG nova.virt.hardware [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.250 221554 DEBUG nova.virt.hardware [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.250 221554 DEBUG nova.virt.hardware [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.250 221554 DEBUG nova.virt.hardware [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.250 221554 DEBUG nova.virt.hardware [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.251 221554 DEBUG nova.virt.hardware [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.251 221554 DEBUG nova.virt.hardware [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.251 221554 DEBUG nova.virt.hardware [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.251 221554 DEBUG nova.virt.hardware [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.251 221554 DEBUG nova.virt.hardware [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.252 221554 DEBUG nova.objects.instance [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.273 221554 DEBUG oslo_concurrency.processutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:31:41 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/632567154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.902 221554 DEBUG oslo_concurrency.processutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.925 221554 DEBUG nova.storage.rbd_utils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:31:41 np0005603609 nova_compute[221550]: 2026-01-31 08:31:41.929 221554 DEBUG oslo_concurrency.processutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.314 221554 DEBUG nova.compute.manager [req-b1eb9246-8e88-4940-89d8-95d96fc7cf68 req-1c17ef9c-13da-4bde-9c9e-a35b05120d46 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Received event network-vif-unplugged-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.315 221554 DEBUG oslo_concurrency.lockutils [req-b1eb9246-8e88-4940-89d8-95d96fc7cf68 req-1c17ef9c-13da-4bde-9c9e-a35b05120d46 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.315 221554 DEBUG oslo_concurrency.lockutils [req-b1eb9246-8e88-4940-89d8-95d96fc7cf68 req-1c17ef9c-13da-4bde-9c9e-a35b05120d46 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.315 221554 DEBUG oslo_concurrency.lockutils [req-b1eb9246-8e88-4940-89d8-95d96fc7cf68 req-1c17ef9c-13da-4bde-9c9e-a35b05120d46 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.315 221554 DEBUG nova.compute.manager [req-b1eb9246-8e88-4940-89d8-95d96fc7cf68 req-1c17ef9c-13da-4bde-9c9e-a35b05120d46 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] No waiting events found dispatching network-vif-unplugged-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.316 221554 DEBUG nova.compute.manager [req-b1eb9246-8e88-4940-89d8-95d96fc7cf68 req-1c17ef9c-13da-4bde-9c9e-a35b05120d46 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Received event network-vif-unplugged-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:31:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:31:42 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/185363447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.344 221554 DEBUG oslo_concurrency.processutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.346 221554 DEBUG nova.virt.libvirt.vif [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:27:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-711445957',display_name='tempest-ServersNegativeTestJSON-server-711445957',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-711445957',id=167,image_ref='1ba15ea8-5b3a-4a00-83dd-a3e208812118',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:28:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='722ab2e9dd674709953be812d4c88493',ramdisk_id='',reservation_id='r-48kv7dus',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1810683467',owner_user_name='tempest-ServersNegativeTestJSON-1810683467-project-member',shelved_at='2026-01-31T08:31:07.307846',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='1ba15ea8-5b3a-4a00-83dd-a3e208812118'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:31:33Z,user_data=None,user_id='eac51187531841e2891fc5d3c5f84123',uuid=20d5fd84-de90-46b0-816e-0f378fd7d0c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.346 221554 DEBUG nova.network.os_vif_util [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converting VIF {"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.346 221554 DEBUG nova.network.os_vif_util [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.348 221554 DEBUG nova.objects.instance [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.370 221554 DEBUG nova.virt.libvirt.driver [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  <uuid>20d5fd84-de90-46b0-816e-0f378fd7d0c7</uuid>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  <name>instance-000000a7</name>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <nova:name>tempest-ServersNegativeTestJSON-server-711445957</nova:name>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:31:41</nova:creationTime>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        <nova:user uuid="eac51187531841e2891fc5d3c5f84123">tempest-ServersNegativeTestJSON-1810683467-project-member</nova:user>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        <nova:project uuid="722ab2e9dd674709953be812d4c88493">tempest-ServersNegativeTestJSON-1810683467</nova:project>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="1ba15ea8-5b3a-4a00-83dd-a3e208812118"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        <nova:port uuid="54f55019-4a06-4846-ad7a-5d67daa3e094">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <entry name="serial">20d5fd84-de90-46b0-816e-0f378fd7d0c7</entry>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <entry name="uuid">20d5fd84-de90-46b0-816e-0f378fd7d0c7</entry>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk.config">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:31:38:16"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <target dev="tap54f55019-4a"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7/console.log" append="off"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <input type="keyboard" bus="usb"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:31:42 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:31:42 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:31:42 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:31:42 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.371 221554 DEBUG nova.compute.manager [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Preparing to wait for external event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.371 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.371 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.371 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.372 221554 DEBUG nova.virt.libvirt.vif [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:27:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-711445957',display_name='tempest-ServersNegativeTestJSON-server-711445957',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-711445957',id=167,image_ref='1ba15ea8-5b3a-4a00-83dd-a3e208812118',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:28:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='722ab2e9dd674709953be812d4c88493',ramdisk_id='',reservation_id='r-48kv7dus',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1810683467',owner_user_name='tempest-ServersNegativeTestJSON-1810683467-project-member',shelved_at='2026-01-31T08:31:07.307846',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='1ba15ea8-5b3a-4a00-83dd-a3e208812118'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:31:33Z,user_data=None,user_id='eac51187531841e2891fc5d3c5f84123',uuid=20d5fd84-de90-46b0-816e-0f378fd7d0c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.373 221554 DEBUG nova.network.os_vif_util [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converting VIF {"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.373 221554 DEBUG nova.network.os_vif_util [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.374 221554 DEBUG os_vif [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.374 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.375 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.375 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.378 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.378 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54f55019-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.379 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54f55019-4a, col_values=(('external_ids', {'iface-id': '54f55019-4a06-4846-ad7a-5d67daa3e094', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:31:38:16', 'vm-uuid': '20d5fd84-de90-46b0-816e-0f378fd7d0c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:42 np0005603609 NetworkManager[49064]: <info>  [1769848302.3814] manager: (tap54f55019-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.383 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.384 221554 INFO os_vif [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a')#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.441 221554 DEBUG nova.virt.libvirt.driver [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.442 221554 DEBUG nova.virt.libvirt.driver [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.442 221554 DEBUG nova.virt.libvirt.driver [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] No VIF found with MAC fa:16:3e:31:38:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.442 221554 INFO nova.virt.libvirt.driver [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Using config drive#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.469 221554 DEBUG nova.storage.rbd_utils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.532 221554 DEBUG nova.objects.instance [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.623 221554 INFO nova.virt.libvirt.driver [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Deleting instance files /var/lib/nova/instances/8bf29da0-2e1a-4e89-bce8-b293a938c742_del#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.624 221554 INFO nova.virt.libvirt.driver [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Deletion of /var/lib/nova/instances/8bf29da0-2e1a-4e89-bce8-b293a938c742_del complete#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.645 221554 DEBUG nova.objects.instance [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'keypairs' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.842 221554 INFO nova.compute.manager [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Took 2.17 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.843 221554 DEBUG oslo.service.loopingcall [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.844 221554 DEBUG nova.compute.manager [-] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:31:42 np0005603609 nova_compute[221550]: 2026-01-31 08:31:42.844 221554 DEBUG nova.network.neutron [-] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:31:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:42.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:43.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:43 np0005603609 nova_compute[221550]: 2026-01-31 08:31:43.309 221554 DEBUG nova.network.neutron [req-2b86b6a8-8460-40b4-8a2e-421b37ecc88b req-0bd223cd-a829-4227-a14c-22730cf3584f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updated VIF entry in instance network info cache for port 54f55019-4a06-4846-ad7a-5d67daa3e094. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:31:43 np0005603609 nova_compute[221550]: 2026-01-31 08:31:43.309 221554 DEBUG nova.network.neutron [req-2b86b6a8-8460-40b4-8a2e-421b37ecc88b req-0bd223cd-a829-4227-a14c-22730cf3584f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updating instance_info_cache with network_info: [{"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:31:43 np0005603609 nova_compute[221550]: 2026-01-31 08:31:43.375 221554 DEBUG oslo_concurrency.lockutils [req-2b86b6a8-8460-40b4-8a2e-421b37ecc88b req-0bd223cd-a829-4227-a14c-22730cf3584f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:31:43 np0005603609 nova_compute[221550]: 2026-01-31 08:31:43.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:43 np0005603609 nova_compute[221550]: 2026-01-31 08:31:43.684 221554 INFO nova.virt.libvirt.driver [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Creating config drive at /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7/disk.config#033[00m
Jan 31 03:31:43 np0005603609 nova_compute[221550]: 2026-01-31 08:31:43.688 221554 DEBUG oslo_concurrency.processutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpli17rwo8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:43 np0005603609 nova_compute[221550]: 2026-01-31 08:31:43.814 221554 DEBUG oslo_concurrency.processutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpli17rwo8" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:43 np0005603609 nova_compute[221550]: 2026-01-31 08:31:43.838 221554 DEBUG nova.storage.rbd_utils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:31:43 np0005603609 nova_compute[221550]: 2026-01-31 08:31:43.841 221554 DEBUG oslo_concurrency.processutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7/disk.config 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:43 np0005603609 nova_compute[221550]: 2026-01-31 08:31:43.997 221554 DEBUG oslo_concurrency.processutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7/disk.config 20d5fd84-de90-46b0-816e-0f378fd7d0c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:43 np0005603609 nova_compute[221550]: 2026-01-31 08:31:43.998 221554 INFO nova.virt.libvirt.driver [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Deleting local config drive /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7/disk.config because it was imported into RBD.#033[00m
Jan 31 03:31:44 np0005603609 kernel: tap54f55019-4a: entered promiscuous mode
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.042 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:44 np0005603609 ovn_controller[130359]: 2026-01-31T08:31:44Z|00752|binding|INFO|Claiming lport 54f55019-4a06-4846-ad7a-5d67daa3e094 for this chassis.
Jan 31 03:31:44 np0005603609 ovn_controller[130359]: 2026-01-31T08:31:44Z|00753|binding|INFO|54f55019-4a06-4846-ad7a-5d67daa3e094: Claiming fa:16:3e:31:38:16 10.100.0.12
Jan 31 03:31:44 np0005603609 NetworkManager[49064]: <info>  [1769848304.0444] manager: (tap54f55019-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/347)
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.051 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:44 np0005603609 ovn_controller[130359]: 2026-01-31T08:31:44Z|00754|binding|INFO|Setting lport 54f55019-4a06-4846-ad7a-5d67daa3e094 ovn-installed in OVS
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.054 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:44 np0005603609 systemd-machined[190912]: New machine qemu-92-instance-000000a7.
Jan 31 03:31:44 np0005603609 systemd[1]: Started Virtual Machine qemu-92-instance-000000a7.
Jan 31 03:31:44 np0005603609 systemd-udevd[292592]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:31:44 np0005603609 NetworkManager[49064]: <info>  [1769848304.0983] device (tap54f55019-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:31:44 np0005603609 NetworkManager[49064]: <info>  [1769848304.0990] device (tap54f55019-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:31:44 np0005603609 podman[292570]: 2026-01-31 08:31:44.124443578 +0000 UTC m=+0.058418825 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 31 03:31:44 np0005603609 podman[292569]: 2026-01-31 08:31:44.14348624 +0000 UTC m=+0.076307410 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 31 03:31:44 np0005603609 ovn_controller[130359]: 2026-01-31T08:31:44Z|00755|binding|INFO|Setting lport 54f55019-4a06-4846-ad7a-5d67daa3e094 up in Southbound
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.148 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:38:16 10.100.0.12'], port_security=['fa:16:3e:31:38:16 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '20d5fd84-de90-46b0-816e-0f378fd7d0c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '722ab2e9dd674709953be812d4c88493', 'neutron:revision_number': '7', 'neutron:security_group_ids': '9520bea1-c264-4093-b5f2-d0e6aba13129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2519b9f-dcfa-4a01-b5ee-7b195f240dfa, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=54f55019-4a06-4846-ad7a-5d67daa3e094) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.150 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 54f55019-4a06-4846-ad7a-5d67daa3e094 in datapath e14a8b1b-1e10-44db-9858-e5a0ae5f2476 bound to our chassis#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.151 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e14a8b1b-1e10-44db-9858-e5a0ae5f2476#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.160 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f4de316c-9aa3-4e73-86da-ee2ae2ed2f5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.161 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape14a8b1b-11 in ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.163 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape14a8b1b-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.163 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9916226f-a0a9-4f71-a0fa-3c7e028a3fb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.164 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e52caa3e-9f5c-413a-9fe7-0ae7608e9d7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.171 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[f13d6a53-5e35-4df3-a11e-3d89aff71641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.189 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb6a565-19c7-4039-b952-b7e9bba776a8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.213 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[5e281791-f09c-4df2-89fd-a3110027eafe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.219 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe9de9b-abac-46b8-adfa-40590b25083e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:44 np0005603609 NetworkManager[49064]: <info>  [1769848304.2203] manager: (tape14a8b1b-10): new Veth device (/org/freedesktop/NetworkManager/Devices/348)
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.244 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[78884b73-c31d-4d26-b3dc-53a4f86fcbe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.247 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7c0d51-66af-462d-a7a0-69e3058e50e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:44 np0005603609 NetworkManager[49064]: <info>  [1769848304.2755] device (tape14a8b1b-10): carrier: link connected
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.279 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c701bf66-ecc2-49a4-bb3f-652ae71a49bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.292 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d0233d71-4be5-4a3f-a0d2-44c92769588e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape14a8b1b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:d3:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 854397, 'reachable_time': 23812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292650, 'error': None, 'target': 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.305 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8929496b-7351-4930-b871-33a0b2e9ae42]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:d359'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 854397, 'tstamp': 854397}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292651, 'error': None, 'target': 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.319 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b07f8a96-b432-4625-909b-4e30de312190]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape14a8b1b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:d3:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 854397, 'reachable_time': 23812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292652, 'error': None, 'target': 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.340 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[dea14d55-0371-45e2-ae22-f18be57f8690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.390 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7ed776-696d-4ef7-8164-5468c2570eda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.391 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape14a8b1b-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.392 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.392 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape14a8b1b-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.394 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:44 np0005603609 NetworkManager[49064]: <info>  [1769848304.3947] manager: (tape14a8b1b-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Jan 31 03:31:44 np0005603609 kernel: tape14a8b1b-10: entered promiscuous mode
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.396 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.397 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape14a8b1b-10, col_values=(('external_ids', {'iface-id': '6550f58c-c002-4c91-85a2-53b4c1c807b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.398 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:44 np0005603609 ovn_controller[130359]: 2026-01-31T08:31:44Z|00756|binding|INFO|Releasing lport 6550f58c-c002-4c91-85a2-53b4c1c807b7 from this chassis (sb_readonly=0)
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.399 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.399 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e14a8b1b-1e10-44db-9858-e5a0ae5f2476.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e14a8b1b-1e10-44db-9858-e5a0ae5f2476.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.400 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d85034d5-fdbf-4af3-9d8e-bc7b04812d20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.401 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-e14a8b1b-1e10-44db-9858-e5a0ae5f2476
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/e14a8b1b-1e10-44db-9858-e5a0ae5f2476.pid.haproxy
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID e14a8b1b-1e10-44db-9858-e5a0ae5f2476
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:31:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:31:44.401 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'env', 'PROCESS_TAG=haproxy-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e14a8b1b-1e10-44db-9858-e5a0ae5f2476.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.403 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.502 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848304.5016835, 20d5fd84-de90-46b0-816e-0f378fd7d0c7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.502 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] VM Started (Lifecycle Event)#033[00m
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.680 221554 DEBUG nova.compute.manager [req-d2c81a6f-82d1-4591-826d-9eea603cda50 req-21011050-fd91-4899-a77e-5a2f86d8c606 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Received event network-vif-plugged-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.680 221554 DEBUG oslo_concurrency.lockutils [req-d2c81a6f-82d1-4591-826d-9eea603cda50 req-21011050-fd91-4899-a77e-5a2f86d8c606 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.681 221554 DEBUG oslo_concurrency.lockutils [req-d2c81a6f-82d1-4591-826d-9eea603cda50 req-21011050-fd91-4899-a77e-5a2f86d8c606 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.681 221554 DEBUG oslo_concurrency.lockutils [req-d2c81a6f-82d1-4591-826d-9eea603cda50 req-21011050-fd91-4899-a77e-5a2f86d8c606 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.681 221554 DEBUG nova.compute.manager [req-d2c81a6f-82d1-4591-826d-9eea603cda50 req-21011050-fd91-4899-a77e-5a2f86d8c606 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] No waiting events found dispatching network-vif-plugged-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.681 221554 WARNING nova.compute.manager [req-d2c81a6f-82d1-4591-826d-9eea603cda50 req-21011050-fd91-4899-a77e-5a2f86d8c606 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Received unexpected event network-vif-plugged-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.711 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.717 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848304.5024023, 20d5fd84-de90-46b0-816e-0f378fd7d0c7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:31:44 np0005603609 nova_compute[221550]: 2026-01-31 08:31:44.717 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:31:44 np0005603609 podman[292726]: 2026-01-31 08:31:44.729261679 +0000 UTC m=+0.041890805 container create fed3ec61b60710c6fd685262516e18ef8284a5c005d0161f0b834d19fafc839e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:31:44 np0005603609 systemd[1]: Started libpod-conmon-fed3ec61b60710c6fd685262516e18ef8284a5c005d0161f0b834d19fafc839e.scope.
Jan 31 03:31:44 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:31:44 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5f6a9f79358ac73e6414d50c20bf448bc896d35a206afc620fc982c8f60584b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:31:44 np0005603609 podman[292726]: 2026-01-31 08:31:44.80315688 +0000 UTC m=+0.115786006 container init fed3ec61b60710c6fd685262516e18ef8284a5c005d0161f0b834d19fafc839e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:31:44 np0005603609 podman[292726]: 2026-01-31 08:31:44.70825075 +0000 UTC m=+0.020879896 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:31:44 np0005603609 podman[292726]: 2026-01-31 08:31:44.807689879 +0000 UTC m=+0.120319005 container start fed3ec61b60710c6fd685262516e18ef8284a5c005d0161f0b834d19fafc839e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:31:44 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[292741]: [NOTICE]   (292745) : New worker (292747) forked
Jan 31 03:31:44 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[292741]: [NOTICE]   (292745) : Loading success.
Jan 31 03:31:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:31:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:44.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:31:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:45.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:45 np0005603609 nova_compute[221550]: 2026-01-31 08:31:45.221 221554 DEBUG nova.compute.manager [req-0e8900cd-1c29-49e9-86c4-be39324cf3eb req-335b38b9-6c9b-4cb1-bc4b-2e14f26c477e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Received event network-vif-deleted-2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:31:45 np0005603609 nova_compute[221550]: 2026-01-31 08:31:45.222 221554 INFO nova.compute.manager [req-0e8900cd-1c29-49e9-86c4-be39324cf3eb req-335b38b9-6c9b-4cb1-bc4b-2e14f26c477e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Neutron deleted interface 2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:31:45 np0005603609 nova_compute[221550]: 2026-01-31 08:31:45.222 221554 DEBUG nova.network.neutron [req-0e8900cd-1c29-49e9-86c4-be39324cf3eb req-335b38b9-6c9b-4cb1-bc4b-2e14f26c477e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:31:45 np0005603609 nova_compute[221550]: 2026-01-31 08:31:45.243 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:31:45 np0005603609 nova_compute[221550]: 2026-01-31 08:31:45.247 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:31:45 np0005603609 nova_compute[221550]: 2026-01-31 08:31:45.249 221554 DEBUG nova.network.neutron [-] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:31:45 np0005603609 nova_compute[221550]: 2026-01-31 08:31:45.513 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:31:45 np0005603609 nova_compute[221550]: 2026-01-31 08:31:45.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:45 np0005603609 nova_compute[221550]: 2026-01-31 08:31:45.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:31:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:46 np0005603609 nova_compute[221550]: 2026-01-31 08:31:46.030 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:46 np0005603609 nova_compute[221550]: 2026-01-31 08:31:46.074 221554 INFO nova.compute.manager [-] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Took 3.23 seconds to deallocate network for instance.#033[00m
Jan 31 03:31:46 np0005603609 nova_compute[221550]: 2026-01-31 08:31:46.079 221554 DEBUG nova.compute.manager [req-0e8900cd-1c29-49e9-86c4-be39324cf3eb req-335b38b9-6c9b-4cb1-bc4b-2e14f26c477e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Detach interface failed, port_id=2b86ae61-4c6b-4b56-9fd5-ad18734c6ba3, reason: Instance 8bf29da0-2e1a-4e89-bce8-b293a938c742 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:31:46 np0005603609 nova_compute[221550]: 2026-01-31 08:31:46.368 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:31:46 np0005603609 nova_compute[221550]: 2026-01-31 08:31:46.369 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:46 np0005603609 nova_compute[221550]: 2026-01-31 08:31:46.370 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:31:46 np0005603609 nova_compute[221550]: 2026-01-31 08:31:46.614 221554 DEBUG oslo_concurrency.lockutils [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:46 np0005603609 nova_compute[221550]: 2026-01-31 08:31:46.615 221554 DEBUG oslo_concurrency.lockutils [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:46 np0005603609 nova_compute[221550]: 2026-01-31 08:31:46.731 221554 DEBUG oslo_concurrency.processutils [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:31:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:46.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:31:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:47.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:31:47 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/881127251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.178 221554 DEBUG oslo_concurrency.processutils [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.185 221554 DEBUG nova.compute.provider_tree [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.382 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.582 221554 DEBUG nova.scheduler.client.report [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.763 221554 DEBUG nova.compute.manager [req-1d8be6e7-4070-498f-a090-e970978594d5 req-555a32a3-b4f7-4d34-8810-b7094e4cdeaa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.764 221554 DEBUG oslo_concurrency.lockutils [req-1d8be6e7-4070-498f-a090-e970978594d5 req-555a32a3-b4f7-4d34-8810-b7094e4cdeaa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.764 221554 DEBUG oslo_concurrency.lockutils [req-1d8be6e7-4070-498f-a090-e970978594d5 req-555a32a3-b4f7-4d34-8810-b7094e4cdeaa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.764 221554 DEBUG oslo_concurrency.lockutils [req-1d8be6e7-4070-498f-a090-e970978594d5 req-555a32a3-b4f7-4d34-8810-b7094e4cdeaa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.764 221554 DEBUG nova.compute.manager [req-1d8be6e7-4070-498f-a090-e970978594d5 req-555a32a3-b4f7-4d34-8810-b7094e4cdeaa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Processing event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.764 221554 DEBUG nova.compute.manager [req-1d8be6e7-4070-498f-a090-e970978594d5 req-555a32a3-b4f7-4d34-8810-b7094e4cdeaa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.765 221554 DEBUG oslo_concurrency.lockutils [req-1d8be6e7-4070-498f-a090-e970978594d5 req-555a32a3-b4f7-4d34-8810-b7094e4cdeaa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.765 221554 DEBUG oslo_concurrency.lockutils [req-1d8be6e7-4070-498f-a090-e970978594d5 req-555a32a3-b4f7-4d34-8810-b7094e4cdeaa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.765 221554 DEBUG oslo_concurrency.lockutils [req-1d8be6e7-4070-498f-a090-e970978594d5 req-555a32a3-b4f7-4d34-8810-b7094e4cdeaa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.765 221554 DEBUG nova.compute.manager [req-1d8be6e7-4070-498f-a090-e970978594d5 req-555a32a3-b4f7-4d34-8810-b7094e4cdeaa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] No waiting events found dispatching network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.766 221554 WARNING nova.compute.manager [req-1d8be6e7-4070-498f-a090-e970978594d5 req-555a32a3-b4f7-4d34-8810-b7094e4cdeaa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received unexpected event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 for instance with vm_state shelved_offloaded and task_state spawning.#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.766 221554 DEBUG nova.compute.manager [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.769 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848307.7693787, 20d5fd84-de90-46b0-816e-0f378fd7d0c7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.769 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.771 221554 DEBUG nova.virt.libvirt.driver [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:31:47 np0005603609 nova_compute[221550]: 2026-01-31 08:31:47.773 221554 INFO nova.virt.libvirt.driver [-] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Instance spawned successfully.#033[00m
Jan 31 03:31:48 np0005603609 nova_compute[221550]: 2026-01-31 08:31:48.054 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:48 np0005603609 nova_compute[221550]: 2026-01-31 08:31:48.055 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:31:48 np0005603609 nova_compute[221550]: 2026-01-31 08:31:48.055 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:48 np0005603609 nova_compute[221550]: 2026-01-31 08:31:48.145 221554 DEBUG oslo_concurrency.lockutils [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:48 np0005603609 nova_compute[221550]: 2026-01-31 08:31:48.328 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:31:48 np0005603609 nova_compute[221550]: 2026-01-31 08:31:48.332 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:31:48 np0005603609 nova_compute[221550]: 2026-01-31 08:31:48.417 221554 INFO nova.scheduler.client.report [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Deleted allocations for instance 8bf29da0-2e1a-4e89-bce8-b293a938c742#033[00m
Jan 31 03:31:48 np0005603609 nova_compute[221550]: 2026-01-31 08:31:48.589 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:48 np0005603609 nova_compute[221550]: 2026-01-31 08:31:48.589 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:48 np0005603609 nova_compute[221550]: 2026-01-31 08:31:48.590 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:48 np0005603609 nova_compute[221550]: 2026-01-31 08:31:48.590 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:31:48 np0005603609 nova_compute[221550]: 2026-01-31 08:31:48.590 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:48.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:31:49 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/602892621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:31:49 np0005603609 nova_compute[221550]: 2026-01-31 08:31:49.010 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:31:49 np0005603609 nova_compute[221550]: 2026-01-31 08:31:49.025 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:49.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:49 np0005603609 nova_compute[221550]: 2026-01-31 08:31:49.597 221554 DEBUG oslo_concurrency.lockutils [None req-40187e67-f813-42ee-8b9b-8f2e8bbf05be 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "8bf29da0-2e1a-4e89-bce8-b293a938c742" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:49 np0005603609 nova_compute[221550]: 2026-01-31 08:31:49.703 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:31:49 np0005603609 nova_compute[221550]: 2026-01-31 08:31:49.704 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:31:49 np0005603609 nova_compute[221550]: 2026-01-31 08:31:49.845 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:31:49 np0005603609 nova_compute[221550]: 2026-01-31 08:31:49.846 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4164MB free_disk=20.912765502929688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:31:49 np0005603609 nova_compute[221550]: 2026-01-31 08:31:49.846 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:49 np0005603609 nova_compute[221550]: 2026-01-31 08:31:49.846 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:50 np0005603609 nova_compute[221550]: 2026-01-31 08:31:50.559 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 20d5fd84-de90-46b0-816e-0f378fd7d0c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:31:50 np0005603609 nova_compute[221550]: 2026-01-31 08:31:50.560 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:31:50 np0005603609 nova_compute[221550]: 2026-01-31 08:31:50.560 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:31:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e377 e377: 3 total, 3 up, 3 in
Jan 31 03:31:50 np0005603609 nova_compute[221550]: 2026-01-31 08:31:50.636 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:50.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:51 np0005603609 nova_compute[221550]: 2026-01-31 08:31:51.033 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:51.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:31:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3824279903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:31:51 np0005603609 nova_compute[221550]: 2026-01-31 08:31:51.079 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:51 np0005603609 nova_compute[221550]: 2026-01-31 08:31:51.083 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:31:51 np0005603609 nova_compute[221550]: 2026-01-31 08:31:51.182 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:31:51 np0005603609 nova_compute[221550]: 2026-01-31 08:31:51.673 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:31:51 np0005603609 nova_compute[221550]: 2026-01-31 08:31:51.673 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:52 np0005603609 nova_compute[221550]: 2026-01-31 08:31:52.386 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:52.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:31:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:53.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:31:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:54.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:55.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:55 np0005603609 nova_compute[221550]: 2026-01-31 08:31:55.909 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848300.9077394, 8bf29da0-2e1a-4e89-bce8-b293a938c742 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:31:55 np0005603609 nova_compute[221550]: 2026-01-31 08:31:55.909 221554 INFO nova.compute.manager [-] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:31:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:56 np0005603609 nova_compute[221550]: 2026-01-31 08:31:56.035 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:56 np0005603609 nova_compute[221550]: 2026-01-31 08:31:56.253 221554 DEBUG nova.compute.manager [None req-71718fb5-fddf-4028-8dff-75fb1b9c6eea - - - - - -] [instance: 8bf29da0-2e1a-4e89-bce8-b293a938c742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:31:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:56.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:57.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:57 np0005603609 nova_compute[221550]: 2026-01-31 08:31:57.273 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:57 np0005603609 nova_compute[221550]: 2026-01-31 08:31:57.274 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:57 np0005603609 nova_compute[221550]: 2026-01-31 08:31:57.389 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:57 np0005603609 nova_compute[221550]: 2026-01-31 08:31:57.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:58.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:31:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:59.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:59 np0005603609 nova_compute[221550]: 2026-01-31 08:31:59.877 221554 DEBUG nova.compute.manager [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:32:00 np0005603609 nova_compute[221550]: 2026-01-31 08:32:00.369 221554 DEBUG oslo_concurrency.lockutils [None req-0c9a8d19-17c6-42c3-ba86-15e86358100d eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 27.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:00 np0005603609 nova_compute[221550]: 2026-01-31 08:32:00.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:00 np0005603609 nova_compute[221550]: 2026-01-31 08:32:00.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:32:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:00.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:01 np0005603609 nova_compute[221550]: 2026-01-31 08:32:01.035 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:32:01 np0005603609 nova_compute[221550]: 2026-01-31 08:32:01.037 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:32:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:01.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:32:01 np0005603609 ovn_controller[130359]: 2026-01-31T08:32:01Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:31:38:16 10.100.0.12
Jan 31 03:32:02 np0005603609 nova_compute[221550]: 2026-01-31 08:32:02.391 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:02.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:32:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:03.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 e378: 3 total, 3 up, 3 in
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:03.361675) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323362244, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1587, "num_deletes": 255, "total_data_size": 3415203, "memory_usage": 3459704, "flush_reason": "Manual Compaction"}
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323380241, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 1478122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68866, "largest_seqno": 70448, "table_properties": {"data_size": 1472581, "index_size": 2744, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14755, "raw_average_key_size": 21, "raw_value_size": 1460436, "raw_average_value_size": 2138, "num_data_blocks": 119, "num_entries": 683, "num_filter_entries": 683, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848212, "oldest_key_time": 1769848212, "file_creation_time": 1769848323, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 18599 microseconds, and 3843 cpu microseconds.
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:03.380286) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 1478122 bytes OK
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:03.380302) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:03.382514) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:03.382530) EVENT_LOG_v1 {"time_micros": 1769848323382524, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:03.382549) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 3407811, prev total WAL file size 3407811, number of live WAL files 2.
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:03.383721) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323539' seq:72057594037927935, type:22 .. '6D6772737461740032353131' seq:0, type:0; will stop at (end)
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(1443KB)], [138(11MB)]
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323383747, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 13973780, "oldest_snapshot_seqno": -1}
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 9389 keys, 10836769 bytes, temperature: kUnknown
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323512491, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 10836769, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10778133, "index_size": 34075, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23493, "raw_key_size": 246187, "raw_average_key_size": 26, "raw_value_size": 10615209, "raw_average_value_size": 1130, "num_data_blocks": 1299, "num_entries": 9389, "num_filter_entries": 9389, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769848323, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:03.512731) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10836769 bytes
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:03.555001) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.5 rd, 84.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 11.9 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(16.8) write-amplify(7.3) OK, records in: 9870, records dropped: 481 output_compression: NoCompression
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:03.555037) EVENT_LOG_v1 {"time_micros": 1769848323555024, "job": 88, "event": "compaction_finished", "compaction_time_micros": 128810, "compaction_time_cpu_micros": 19612, "output_level": 6, "num_output_files": 1, "total_output_size": 10836769, "num_input_records": 9870, "num_output_records": 9389, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323555374, "job": 88, "event": "table_file_deletion", "file_number": 140}
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323556515, "job": 88, "event": "table_file_deletion", "file_number": 138}
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:03.383614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:03.556579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:03.556586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:03.556588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:03.556590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:03.556591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:04 np0005603609 nova_compute[221550]: 2026-01-31 08:32:04.752 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:04.751 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:32:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:04.753 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:32:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:04.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:32:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:05.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:32:06 np0005603609 nova_compute[221550]: 2026-01-31 08:32:06.037 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:32:06 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3019601616' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:32:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:32:06 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3019601616' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:32:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:06.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:32:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:07.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:32:07 np0005603609 nova_compute[221550]: 2026-01-31 08:32:07.393 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:07.524 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:07.525 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:07.525 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:08.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:09.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:10.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:11 np0005603609 nova_compute[221550]: 2026-01-31 08:32:11.039 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:11.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:12 np0005603609 nova_compute[221550]: 2026-01-31 08:32:12.396 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:12.756 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:12.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:13.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:14 np0005603609 nova_compute[221550]: 2026-01-31 08:32:14.406 221554 DEBUG nova.objects.instance [None req-5f87789c-02e7-4415-9858-41a64d4e0aec eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:32:14 np0005603609 nova_compute[221550]: 2026-01-31 08:32:14.502 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848334.5021868, 20d5fd84-de90-46b0-816e-0f378fd7d0c7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:32:14 np0005603609 nova_compute[221550]: 2026-01-31 08:32:14.502 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:32:14 np0005603609 nova_compute[221550]: 2026-01-31 08:32:14.654 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:32:14 np0005603609 nova_compute[221550]: 2026-01-31 08:32:14.658 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:32:14 np0005603609 nova_compute[221550]: 2026-01-31 08:32:14.909 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 31 03:32:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:32:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:14.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:32:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:32:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:15.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:32:15 np0005603609 podman[292827]: 2026-01-31 08:32:15.160760062 +0000 UTC m=+0.043643378 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:32:15 np0005603609 podman[292826]: 2026-01-31 08:32:15.183838 +0000 UTC m=+0.066745947 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:32:15 np0005603609 kernel: tap54f55019-4a (unregistering): left promiscuous mode
Jan 31 03:32:15 np0005603609 NetworkManager[49064]: <info>  [1769848335.9279] device (tap54f55019-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:32:15 np0005603609 nova_compute[221550]: 2026-01-31 08:32:15.935 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:15 np0005603609 ovn_controller[130359]: 2026-01-31T08:32:15Z|00757|binding|INFO|Releasing lport 54f55019-4a06-4846-ad7a-5d67daa3e094 from this chassis (sb_readonly=0)
Jan 31 03:32:15 np0005603609 ovn_controller[130359]: 2026-01-31T08:32:15Z|00758|binding|INFO|Setting lport 54f55019-4a06-4846-ad7a-5d67daa3e094 down in Southbound
Jan 31 03:32:15 np0005603609 ovn_controller[130359]: 2026-01-31T08:32:15Z|00759|binding|INFO|Removing iface tap54f55019-4a ovn-installed in OVS
Jan 31 03:32:15 np0005603609 nova_compute[221550]: 2026-01-31 08:32:15.938 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:15 np0005603609 nova_compute[221550]: 2026-01-31 08:32:15.948 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:15 np0005603609 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000a7.scope: Deactivated successfully.
Jan 31 03:32:15 np0005603609 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000a7.scope: Consumed 13.695s CPU time.
Jan 31 03:32:15 np0005603609 systemd-machined[190912]: Machine qemu-92-instance-000000a7 terminated.
Jan 31 03:32:16 np0005603609 nova_compute[221550]: 2026-01-31 08:32:16.041 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:16 np0005603609 nova_compute[221550]: 2026-01-31 08:32:16.148 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:16 np0005603609 nova_compute[221550]: 2026-01-31 08:32:16.151 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:16 np0005603609 nova_compute[221550]: 2026-01-31 08:32:16.167 221554 DEBUG nova.compute.manager [None req-5f87789c-02e7-4415-9858-41a64d4e0aec eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:32:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:16.330 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:38:16 10.100.0.12'], port_security=['fa:16:3e:31:38:16 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '20d5fd84-de90-46b0-816e-0f378fd7d0c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '722ab2e9dd674709953be812d4c88493', 'neutron:revision_number': '9', 'neutron:security_group_ids': '9520bea1-c264-4093-b5f2-d0e6aba13129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2519b9f-dcfa-4a01-b5ee-7b195f240dfa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=54f55019-4a06-4846-ad7a-5d67daa3e094) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:32:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:16.331 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 54f55019-4a06-4846-ad7a-5d67daa3e094 in datapath e14a8b1b-1e10-44db-9858-e5a0ae5f2476 unbound from our chassis#033[00m
Jan 31 03:32:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:16.332 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e14a8b1b-1e10-44db-9858-e5a0ae5f2476, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:32:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:16.334 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c09cec7c-7ba5-4df9-ae34-0d9a4b13670a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:16.335 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 namespace which is not needed anymore#033[00m
Jan 31 03:32:16 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[292741]: [NOTICE]   (292745) : haproxy version is 2.8.14-c23fe91
Jan 31 03:32:16 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[292741]: [NOTICE]   (292745) : path to executable is /usr/sbin/haproxy
Jan 31 03:32:16 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[292741]: [WARNING]  (292745) : Exiting Master process...
Jan 31 03:32:16 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[292741]: [ALERT]    (292745) : Current worker (292747) exited with code 143 (Terminated)
Jan 31 03:32:16 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[292741]: [WARNING]  (292745) : All workers exited. Exiting... (0)
Jan 31 03:32:16 np0005603609 systemd[1]: libpod-fed3ec61b60710c6fd685262516e18ef8284a5c005d0161f0b834d19fafc839e.scope: Deactivated successfully.
Jan 31 03:32:16 np0005603609 podman[292904]: 2026-01-31 08:32:16.589832849 +0000 UTC m=+0.181150620 container died fed3ec61b60710c6fd685262516e18ef8284a5c005d0161f0b834d19fafc839e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:32:16 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fed3ec61b60710c6fd685262516e18ef8284a5c005d0161f0b834d19fafc839e-userdata-shm.mount: Deactivated successfully.
Jan 31 03:32:16 np0005603609 systemd[1]: var-lib-containers-storage-overlay-a5f6a9f79358ac73e6414d50c20bf448bc896d35a206afc620fc982c8f60584b-merged.mount: Deactivated successfully.
Jan 31 03:32:16 np0005603609 podman[292904]: 2026-01-31 08:32:16.786581185 +0000 UTC m=+0.377898916 container cleanup fed3ec61b60710c6fd685262516e18ef8284a5c005d0161f0b834d19fafc839e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:32:16 np0005603609 systemd[1]: libpod-conmon-fed3ec61b60710c6fd685262516e18ef8284a5c005d0161f0b834d19fafc839e.scope: Deactivated successfully.
Jan 31 03:32:16 np0005603609 podman[292936]: 2026-01-31 08:32:16.926082594 +0000 UTC m=+0.121616207 container remove fed3ec61b60710c6fd685262516e18ef8284a5c005d0161f0b834d19fafc839e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:32:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:16.929 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5232d994-99f4-4aee-bd83-d185aefbb5b7]: (4, ('Sat Jan 31 08:32:16 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 (fed3ec61b60710c6fd685262516e18ef8284a5c005d0161f0b834d19fafc839e)\nfed3ec61b60710c6fd685262516e18ef8284a5c005d0161f0b834d19fafc839e\nSat Jan 31 08:32:16 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 (fed3ec61b60710c6fd685262516e18ef8284a5c005d0161f0b834d19fafc839e)\nfed3ec61b60710c6fd685262516e18ef8284a5c005d0161f0b834d19fafc839e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:16.931 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[21362fb0-e7f9-45f1-9d2d-d902727c5dba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:16.931 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape14a8b1b-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:16 np0005603609 nova_compute[221550]: 2026-01-31 08:32:16.933 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:16 np0005603609 kernel: tape14a8b1b-10: left promiscuous mode
Jan 31 03:32:16 np0005603609 nova_compute[221550]: 2026-01-31 08:32:16.940 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:16.943 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c54dc98a-1aa4-466e-93a9-401f1ac774e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:16.958 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b5704791-4d82-44c8-8383-5886b853219b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:16.959 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e1baad70-d2cd-413c-a456-39fded33fea0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:16.971 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e38dc0ec-8e07-4d72-bae7-650cf00cae1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 854390, 'reachable_time': 20024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292954, 'error': None, 'target': 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:16.973 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:32:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:16.973 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[0ebd9bb0-2d16-4878-a3b2-da3e31e8c5d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:16 np0005603609 systemd[1]: run-netns-ovnmeta\x2de14a8b1b\x2d1e10\x2d44db\x2d9858\x2de5a0ae5f2476.mount: Deactivated successfully.
Jan 31 03:32:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:16.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:17 np0005603609 nova_compute[221550]: 2026-01-31 08:32:17.027 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:32:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:17.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:32:17 np0005603609 nova_compute[221550]: 2026-01-31 08:32:17.087 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:17 np0005603609 nova_compute[221550]: 2026-01-31 08:32:17.367 221554 DEBUG nova.compute.manager [req-b7ea436c-1c06-4b3e-82b1-b950805e9338 req-507edb31-172e-40ae-98a7-808f05f5eb12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-vif-unplugged-54f55019-4a06-4846-ad7a-5d67daa3e094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:32:17 np0005603609 nova_compute[221550]: 2026-01-31 08:32:17.368 221554 DEBUG oslo_concurrency.lockutils [req-b7ea436c-1c06-4b3e-82b1-b950805e9338 req-507edb31-172e-40ae-98a7-808f05f5eb12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:17 np0005603609 nova_compute[221550]: 2026-01-31 08:32:17.368 221554 DEBUG oslo_concurrency.lockutils [req-b7ea436c-1c06-4b3e-82b1-b950805e9338 req-507edb31-172e-40ae-98a7-808f05f5eb12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:17 np0005603609 nova_compute[221550]: 2026-01-31 08:32:17.368 221554 DEBUG oslo_concurrency.lockutils [req-b7ea436c-1c06-4b3e-82b1-b950805e9338 req-507edb31-172e-40ae-98a7-808f05f5eb12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:17 np0005603609 nova_compute[221550]: 2026-01-31 08:32:17.368 221554 DEBUG nova.compute.manager [req-b7ea436c-1c06-4b3e-82b1-b950805e9338 req-507edb31-172e-40ae-98a7-808f05f5eb12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] No waiting events found dispatching network-vif-unplugged-54f55019-4a06-4846-ad7a-5d67daa3e094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:32:17 np0005603609 nova_compute[221550]: 2026-01-31 08:32:17.368 221554 WARNING nova.compute.manager [req-b7ea436c-1c06-4b3e-82b1-b950805e9338 req-507edb31-172e-40ae-98a7-808f05f5eb12 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received unexpected event network-vif-unplugged-54f55019-4a06-4846-ad7a-5d67daa3e094 for instance with vm_state suspended and task_state None.#033[00m
Jan 31 03:32:17 np0005603609 nova_compute[221550]: 2026-01-31 08:32:17.398 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:32:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/216193617' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:32:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:32:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/216193617' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:32:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:32:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:18.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:32:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:19.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:19 np0005603609 nova_compute[221550]: 2026-01-31 08:32:19.738 221554 DEBUG nova.compute.manager [req-b63eabd5-b4e5-470d-8245-2f12b4603574 req-2fd576ef-b326-466a-bc32-5d3827324a85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:32:19 np0005603609 nova_compute[221550]: 2026-01-31 08:32:19.738 221554 DEBUG oslo_concurrency.lockutils [req-b63eabd5-b4e5-470d-8245-2f12b4603574 req-2fd576ef-b326-466a-bc32-5d3827324a85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:19 np0005603609 nova_compute[221550]: 2026-01-31 08:32:19.739 221554 DEBUG oslo_concurrency.lockutils [req-b63eabd5-b4e5-470d-8245-2f12b4603574 req-2fd576ef-b326-466a-bc32-5d3827324a85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:19 np0005603609 nova_compute[221550]: 2026-01-31 08:32:19.739 221554 DEBUG oslo_concurrency.lockutils [req-b63eabd5-b4e5-470d-8245-2f12b4603574 req-2fd576ef-b326-466a-bc32-5d3827324a85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:19 np0005603609 nova_compute[221550]: 2026-01-31 08:32:19.739 221554 DEBUG nova.compute.manager [req-b63eabd5-b4e5-470d-8245-2f12b4603574 req-2fd576ef-b326-466a-bc32-5d3827324a85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] No waiting events found dispatching network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:32:19 np0005603609 nova_compute[221550]: 2026-01-31 08:32:19.739 221554 WARNING nova.compute.manager [req-b63eabd5-b4e5-470d-8245-2f12b4603574 req-2fd576ef-b326-466a-bc32-5d3827324a85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received unexpected event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 for instance with vm_state suspended and task_state None.#033[00m
Jan 31 03:32:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:20.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:21 np0005603609 nova_compute[221550]: 2026-01-31 08:32:21.043 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:32:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:21.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:32:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:21 np0005603609 nova_compute[221550]: 2026-01-31 08:32:21.786 221554 INFO nova.compute.manager [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Resuming#033[00m
Jan 31 03:32:21 np0005603609 nova_compute[221550]: 2026-01-31 08:32:21.787 221554 DEBUG nova.objects.instance [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'flavor' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:32:21 np0005603609 nova_compute[221550]: 2026-01-31 08:32:21.998 221554 DEBUG oslo_concurrency.lockutils [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:32:21 np0005603609 nova_compute[221550]: 2026-01-31 08:32:21.998 221554 DEBUG oslo_concurrency.lockutils [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquired lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:32:21 np0005603609 nova_compute[221550]: 2026-01-31 08:32:21.999 221554 DEBUG nova.network.neutron [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:32:22 np0005603609 nova_compute[221550]: 2026-01-31 08:32:22.400 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:22.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:23.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:24.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:25.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:25 np0005603609 nova_compute[221550]: 2026-01-31 08:32:25.662 221554 DEBUG nova.network.neutron [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updating instance_info_cache with network_info: [{"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:32:25 np0005603609 nova_compute[221550]: 2026-01-31 08:32:25.839 221554 DEBUG oslo_concurrency.lockutils [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Releasing lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:32:25 np0005603609 nova_compute[221550]: 2026-01-31 08:32:25.847 221554 DEBUG nova.virt.libvirt.vif [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:27:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-711445957',display_name='tempest-ServersNegativeTestJSON-server-711445957',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-711445957',id=167,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:31:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='722ab2e9dd674709953be812d4c88493',ramdisk_id='',reservation_id='r-48kv7dus',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-1810683467',owner_user_name='tempest-ServersNegativeTestJSON-1810683467-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:32:16Z,user_data=None,user_id='eac51187531841e2891fc5d3c5f84123',uuid=20d5fd84-de90-46b0-816e-0f378fd7d0c7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:32:25 np0005603609 nova_compute[221550]: 2026-01-31 08:32:25.848 221554 DEBUG nova.network.os_vif_util [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converting VIF {"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:32:25 np0005603609 nova_compute[221550]: 2026-01-31 08:32:25.848 221554 DEBUG nova.network.os_vif_util [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:32:25 np0005603609 nova_compute[221550]: 2026-01-31 08:32:25.849 221554 DEBUG os_vif [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:32:25 np0005603609 nova_compute[221550]: 2026-01-31 08:32:25.849 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:25 np0005603609 nova_compute[221550]: 2026-01-31 08:32:25.850 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:25 np0005603609 nova_compute[221550]: 2026-01-31 08:32:25.850 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:32:25 np0005603609 nova_compute[221550]: 2026-01-31 08:32:25.853 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:25 np0005603609 nova_compute[221550]: 2026-01-31 08:32:25.854 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54f55019-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:25 np0005603609 nova_compute[221550]: 2026-01-31 08:32:25.854 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54f55019-4a, col_values=(('external_ids', {'iface-id': '54f55019-4a06-4846-ad7a-5d67daa3e094', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:31:38:16', 'vm-uuid': '20d5fd84-de90-46b0-816e-0f378fd7d0c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:25 np0005603609 nova_compute[221550]: 2026-01-31 08:32:25.855 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:32:25 np0005603609 nova_compute[221550]: 2026-01-31 08:32:25.855 221554 INFO os_vif [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a')#033[00m
Jan 31 03:32:25 np0005603609 nova_compute[221550]: 2026-01-31 08:32:25.925 221554 DEBUG nova.objects.instance [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'numa_topology' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:32:26 np0005603609 nova_compute[221550]: 2026-01-31 08:32:26.045 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:26 np0005603609 kernel: tap54f55019-4a: entered promiscuous mode
Jan 31 03:32:26 np0005603609 NetworkManager[49064]: <info>  [1769848346.3622] manager: (tap54f55019-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/350)
Jan 31 03:32:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:32:26Z|00760|binding|INFO|Claiming lport 54f55019-4a06-4846-ad7a-5d67daa3e094 for this chassis.
Jan 31 03:32:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:32:26Z|00761|binding|INFO|54f55019-4a06-4846-ad7a-5d67daa3e094: Claiming fa:16:3e:31:38:16 10.100.0.12
Jan 31 03:32:26 np0005603609 nova_compute[221550]: 2026-01-31 08:32:26.360 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:26 np0005603609 nova_compute[221550]: 2026-01-31 08:32:26.365 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:26 np0005603609 nova_compute[221550]: 2026-01-31 08:32:26.369 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:26 np0005603609 systemd-udevd[292970]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:32:26 np0005603609 NetworkManager[49064]: <info>  [1769848346.3953] device (tap54f55019-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:32:26 np0005603609 NetworkManager[49064]: <info>  [1769848346.3960] device (tap54f55019-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:32:26 np0005603609 systemd-machined[190912]: New machine qemu-93-instance-000000a7.
Jan 31 03:32:26 np0005603609 nova_compute[221550]: 2026-01-31 08:32:26.400 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:32:26Z|00762|binding|INFO|Setting lport 54f55019-4a06-4846-ad7a-5d67daa3e094 ovn-installed in OVS
Jan 31 03:32:26 np0005603609 nova_compute[221550]: 2026-01-31 08:32:26.405 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:26 np0005603609 systemd[1]: Started Virtual Machine qemu-93-instance-000000a7.
Jan 31 03:32:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:32:26Z|00763|binding|INFO|Setting lport 54f55019-4a06-4846-ad7a-5d67daa3e094 up in Southbound
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.438 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:38:16 10.100.0.12'], port_security=['fa:16:3e:31:38:16 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '20d5fd84-de90-46b0-816e-0f378fd7d0c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '722ab2e9dd674709953be812d4c88493', 'neutron:revision_number': '10', 'neutron:security_group_ids': '9520bea1-c264-4093-b5f2-d0e6aba13129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2519b9f-dcfa-4a01-b5ee-7b195f240dfa, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=54f55019-4a06-4846-ad7a-5d67daa3e094) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.439 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 54f55019-4a06-4846-ad7a-5d67daa3e094 in datapath e14a8b1b-1e10-44db-9858-e5a0ae5f2476 bound to our chassis#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.440 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e14a8b1b-1e10-44db-9858-e5a0ae5f2476#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.451 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f759fa1d-6f43-45fb-9fb9-b10147a3f805]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.451 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape14a8b1b-11 in ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.453 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape14a8b1b-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.453 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5f6d957b-ae21-45e2-9df6-5b840f461d0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.454 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0a91dd74-0f08-476d-b3cf-e00e427701ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.463 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[19a6dd38-80aa-4531-a10e-461e2d9d0230]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.472 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3747d1-9247-476b-a1f1-a5bad6919bb7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.492 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[158f6aad-709e-41b8-81c4-36dd98f9a3f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.495 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[631aa91d-e66d-4bf0-b973-a22d29edba3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:26 np0005603609 NetworkManager[49064]: <info>  [1769848346.4964] manager: (tape14a8b1b-10): new Veth device (/org/freedesktop/NetworkManager/Devices/351)
Jan 31 03:32:26 np0005603609 systemd-udevd[292975]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.518 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[4da8fc51-29ba-49e1-9bd0-2acdbfdad509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.520 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4bc75f-48eb-4da2-a27b-ca4f9f9f8edd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:26 np0005603609 NetworkManager[49064]: <info>  [1769848346.5344] device (tape14a8b1b-10): carrier: link connected
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.539 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[474e3926-93df-4828-888e-70bdf0cfe31c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.553 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8e8b1af9-b9ba-4e01-9fb0-a740248f08ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape14a8b1b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:d3:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858623, 'reachable_time': 36678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293006, 'error': None, 'target': 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.565 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd75768-6bb3-43f7-bb23-eb9fb7b98dd3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:d359'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858623, 'tstamp': 858623}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293007, 'error': None, 'target': 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.579 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9826bf9b-f83b-42f0-ac10-6a2a33d81974]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape14a8b1b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:d3:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858623, 'reachable_time': 36678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293008, 'error': None, 'target': 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.603 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf23167-c841-44be-8db1-b06c739e76ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.642 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[52306a1b-a7d4-4b27-a9d7-671232739f73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.643 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape14a8b1b-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.644 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.644 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape14a8b1b-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:26 np0005603609 NetworkManager[49064]: <info>  [1769848346.6466] manager: (tape14a8b1b-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Jan 31 03:32:26 np0005603609 nova_compute[221550]: 2026-01-31 08:32:26.646 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:26 np0005603609 kernel: tape14a8b1b-10: entered promiscuous mode
Jan 31 03:32:26 np0005603609 nova_compute[221550]: 2026-01-31 08:32:26.649 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.651 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape14a8b1b-10, col_values=(('external_ids', {'iface-id': '6550f58c-c002-4c91-85a2-53b4c1c807b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:26 np0005603609 nova_compute[221550]: 2026-01-31 08:32:26.652 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:26 np0005603609 nova_compute[221550]: 2026-01-31 08:32:26.652 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:32:26Z|00764|binding|INFO|Releasing lport 6550f58c-c002-4c91-85a2-53b4c1c807b7 from this chassis (sb_readonly=1)
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.655 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e14a8b1b-1e10-44db-9858-e5a0ae5f2476.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e14a8b1b-1e10-44db-9858-e5a0ae5f2476.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.655 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec44e73-2d97-4f65-8ddd-3bca5d6b6e4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.656 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-e14a8b1b-1e10-44db-9858-e5a0ae5f2476
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/e14a8b1b-1e10-44db-9858-e5a0ae5f2476.pid.haproxy
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID e14a8b1b-1e10-44db-9858-e5a0ae5f2476
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:32:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:32:26.657 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'env', 'PROCESS_TAG=haproxy-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e14a8b1b-1e10-44db-9858-e5a0ae5f2476.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:32:26 np0005603609 nova_compute[221550]: 2026-01-31 08:32:26.659 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:26.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:27 np0005603609 podman[293081]: 2026-01-31 08:32:26.939315706 +0000 UTC m=+0.015743722 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:32:27 np0005603609 nova_compute[221550]: 2026-01-31 08:32:27.039 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 20d5fd84-de90-46b0-816e-0f378fd7d0c7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:32:27 np0005603609 nova_compute[221550]: 2026-01-31 08:32:27.040 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848347.039657, 20d5fd84-de90-46b0-816e-0f378fd7d0c7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:32:27 np0005603609 nova_compute[221550]: 2026-01-31 08:32:27.040 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] VM Started (Lifecycle Event)#033[00m
Jan 31 03:32:27 np0005603609 nova_compute[221550]: 2026-01-31 08:32:27.067 221554 DEBUG nova.compute.manager [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:32:27 np0005603609 nova_compute[221550]: 2026-01-31 08:32:27.067 221554 DEBUG nova.objects.instance [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:32:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:27.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:27 np0005603609 nova_compute[221550]: 2026-01-31 08:32:27.109 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:32:27 np0005603609 nova_compute[221550]: 2026-01-31 08:32:27.114 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:32:27 np0005603609 podman[293081]: 2026-01-31 08:32:27.132359422 +0000 UTC m=+0.208787428 container create 856449b5e5f06a7ddce9534050f92c4f38ebc5e5ef905a6795d8e4a72d33362d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:32:27 np0005603609 nova_compute[221550]: 2026-01-31 08:32:27.168 221554 INFO nova.virt.libvirt.driver [-] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Instance running successfully.#033[00m
Jan 31 03:32:27 np0005603609 virtqemud[221292]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:32:27 np0005603609 nova_compute[221550]: 2026-01-31 08:32:27.170 221554 DEBUG nova.virt.libvirt.guest [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 03:32:27 np0005603609 nova_compute[221550]: 2026-01-31 08:32:27.171 221554 DEBUG nova.compute.manager [None req-5b4b1f47-72a0-4234-8b8a-e974c134fc4b eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:32:27 np0005603609 systemd[1]: Started libpod-conmon-856449b5e5f06a7ddce9534050f92c4f38ebc5e5ef905a6795d8e4a72d33362d.scope.
Jan 31 03:32:27 np0005603609 nova_compute[221550]: 2026-01-31 08:32:27.206 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Jan 31 03:32:27 np0005603609 nova_compute[221550]: 2026-01-31 08:32:27.207 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848347.042004, 20d5fd84-de90-46b0-816e-0f378fd7d0c7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:32:27 np0005603609 nova_compute[221550]: 2026-01-31 08:32:27.207 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:32:27 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:32:27 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d227b51a9debc9940ec0c82147dfa0531808f0dd0b7bb04c59a811471cbaf1c9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:27 np0005603609 podman[293081]: 2026-01-31 08:32:27.289095419 +0000 UTC m=+0.365523435 container init 856449b5e5f06a7ddce9534050f92c4f38ebc5e5ef905a6795d8e4a72d33362d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:32:27 np0005603609 podman[293081]: 2026-01-31 08:32:27.297676407 +0000 UTC m=+0.374104443 container start 856449b5e5f06a7ddce9534050f92c4f38ebc5e5ef905a6795d8e4a72d33362d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:32:27 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[293097]: [NOTICE]   (293101) : New worker (293103) forked
Jan 31 03:32:27 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[293097]: [NOTICE]   (293101) : Loading success.
Jan 31 03:32:27 np0005603609 nova_compute[221550]: 2026-01-31 08:32:27.402 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:27 np0005603609 nova_compute[221550]: 2026-01-31 08:32:27.552 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:32:27 np0005603609 nova_compute[221550]: 2026-01-31 08:32:27.556 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:32:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:32:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:28.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:32:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:29.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:30.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:31 np0005603609 nova_compute[221550]: 2026-01-31 08:32:31.085 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:32:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:31.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:32:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:31 np0005603609 nova_compute[221550]: 2026-01-31 08:32:31.949 221554 DEBUG nova.compute.manager [req-96bc8c08-d20b-47c9-868a-ac79a2eaac25 req-6d8cc79a-04f8-4fe6-b1df-033f3924eaee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:32:31 np0005603609 nova_compute[221550]: 2026-01-31 08:32:31.950 221554 DEBUG oslo_concurrency.lockutils [req-96bc8c08-d20b-47c9-868a-ac79a2eaac25 req-6d8cc79a-04f8-4fe6-b1df-033f3924eaee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:31 np0005603609 nova_compute[221550]: 2026-01-31 08:32:31.950 221554 DEBUG oslo_concurrency.lockutils [req-96bc8c08-d20b-47c9-868a-ac79a2eaac25 req-6d8cc79a-04f8-4fe6-b1df-033f3924eaee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:31 np0005603609 nova_compute[221550]: 2026-01-31 08:32:31.951 221554 DEBUG oslo_concurrency.lockutils [req-96bc8c08-d20b-47c9-868a-ac79a2eaac25 req-6d8cc79a-04f8-4fe6-b1df-033f3924eaee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:31 np0005603609 nova_compute[221550]: 2026-01-31 08:32:31.951 221554 DEBUG nova.compute.manager [req-96bc8c08-d20b-47c9-868a-ac79a2eaac25 req-6d8cc79a-04f8-4fe6-b1df-033f3924eaee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] No waiting events found dispatching network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:32:31 np0005603609 nova_compute[221550]: 2026-01-31 08:32:31.952 221554 WARNING nova.compute.manager [req-96bc8c08-d20b-47c9-868a-ac79a2eaac25 req-6d8cc79a-04f8-4fe6-b1df-033f3924eaee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received unexpected event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:32:32 np0005603609 nova_compute[221550]: 2026-01-31 08:32:32.404 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:33.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:33.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:34 np0005603609 nova_compute[221550]: 2026-01-31 08:32:34.266 221554 DEBUG nova.compute.manager [req-372de587-09b8-4f86-9c26-2f5a9acade3d req-419a4bd7-a2b6-4b4c-ab27-1071f045133e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:32:34 np0005603609 nova_compute[221550]: 2026-01-31 08:32:34.266 221554 DEBUG oslo_concurrency.lockutils [req-372de587-09b8-4f86-9c26-2f5a9acade3d req-419a4bd7-a2b6-4b4c-ab27-1071f045133e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:34 np0005603609 nova_compute[221550]: 2026-01-31 08:32:34.267 221554 DEBUG oslo_concurrency.lockutils [req-372de587-09b8-4f86-9c26-2f5a9acade3d req-419a4bd7-a2b6-4b4c-ab27-1071f045133e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:34 np0005603609 nova_compute[221550]: 2026-01-31 08:32:34.267 221554 DEBUG oslo_concurrency.lockutils [req-372de587-09b8-4f86-9c26-2f5a9acade3d req-419a4bd7-a2b6-4b4c-ab27-1071f045133e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:34 np0005603609 nova_compute[221550]: 2026-01-31 08:32:34.267 221554 DEBUG nova.compute.manager [req-372de587-09b8-4f86-9c26-2f5a9acade3d req-419a4bd7-a2b6-4b4c-ab27-1071f045133e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] No waiting events found dispatching network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:32:34 np0005603609 nova_compute[221550]: 2026-01-31 08:32:34.267 221554 WARNING nova.compute.manager [req-372de587-09b8-4f86-9c26-2f5a9acade3d req-419a4bd7-a2b6-4b4c-ab27-1071f045133e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received unexpected event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:32:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:35.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:35.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:36 np0005603609 nova_compute[221550]: 2026-01-31 08:32:36.087 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:37.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:37.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:37 np0005603609 nova_compute[221550]: 2026-01-31 08:32:37.407 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:37 np0005603609 nova_compute[221550]: 2026-01-31 08:32:37.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:37 np0005603609 nova_compute[221550]: 2026-01-31 08:32:37.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:38.032905) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358032979, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 570, "num_deletes": 255, "total_data_size": 858070, "memory_usage": 869224, "flush_reason": "Manual Compaction"}
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358038687, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 565752, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70453, "largest_seqno": 71018, "table_properties": {"data_size": 562822, "index_size": 901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6727, "raw_average_key_size": 18, "raw_value_size": 556979, "raw_average_value_size": 1525, "num_data_blocks": 40, "num_entries": 365, "num_filter_entries": 365, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848324, "oldest_key_time": 1769848324, "file_creation_time": 1769848358, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 5818 microseconds, and 1713 cpu microseconds.
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:38.038731) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 565752 bytes OK
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:38.038755) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:38.040806) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:38.040819) EVENT_LOG_v1 {"time_micros": 1769848358040815, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:38.040837) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 854821, prev total WAL file size 854821, number of live WAL files 2.
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:38.041313) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353133' seq:72057594037927935, type:22 .. '6C6F676D0032373634' seq:0, type:0; will stop at (end)
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(552KB)], [141(10MB)]
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358041337, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 11402521, "oldest_snapshot_seqno": -1}
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 9234 keys, 11267953 bytes, temperature: kUnknown
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358402920, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 11267953, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11209467, "index_size": 34309, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23109, "raw_key_size": 243901, "raw_average_key_size": 26, "raw_value_size": 11048401, "raw_average_value_size": 1196, "num_data_blocks": 1306, "num_entries": 9234, "num_filter_entries": 9234, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769848358, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:38.403190) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11267953 bytes
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:38.433545) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 31.5 rd, 31.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.3 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(40.1) write-amplify(19.9) OK, records in: 9754, records dropped: 520 output_compression: NoCompression
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:38.433578) EVENT_LOG_v1 {"time_micros": 1769848358433564, "job": 90, "event": "compaction_finished", "compaction_time_micros": 361681, "compaction_time_cpu_micros": 22023, "output_level": 6, "num_output_files": 1, "total_output_size": 11267953, "num_input_records": 9754, "num_output_records": 9234, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358433789, "job": 90, "event": "table_file_deletion", "file_number": 143}
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358435156, "job": 90, "event": "table_file_deletion", "file_number": 141}
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:38.041209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:38.435255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:38.435261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:38.435265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:38.435268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:32:38.435271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:32:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:32:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:39.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:32:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:39.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:32:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:32:39 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2808292028' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:32:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:32:39 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2808292028' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:32:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:41.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:41 np0005603609 nova_compute[221550]: 2026-01-31 08:32:41.089 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:41.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:42 np0005603609 nova_compute[221550]: 2026-01-31 08:32:42.409 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:43.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:43.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:43 np0005603609 nova_compute[221550]: 2026-01-31 08:32:43.763 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:43 np0005603609 nova_compute[221550]: 2026-01-31 08:32:43.764 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:32:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:32:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:45.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:32:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:45.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:32:46 np0005603609 nova_compute[221550]: 2026-01-31 08:32:46.091 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:46 np0005603609 podman[293294]: 2026-01-31 08:32:46.173918268 +0000 UTC m=+0.048447385 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 03:32:46 np0005603609 podman[293293]: 2026-01-31 08:32:46.213775013 +0000 UTC m=+0.091888776 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 31 03:32:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:47.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:47.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:47 np0005603609 nova_compute[221550]: 2026-01-31 08:32:47.411 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:47 np0005603609 nova_compute[221550]: 2026-01-31 08:32:47.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:47 np0005603609 nova_compute[221550]: 2026-01-31 08:32:47.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:32:47 np0005603609 nova_compute[221550]: 2026-01-31 08:32:47.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:32:48 np0005603609 nova_compute[221550]: 2026-01-31 08:32:48.159 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:32:48 np0005603609 nova_compute[221550]: 2026-01-31 08:32:48.160 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:32:48 np0005603609 nova_compute[221550]: 2026-01-31 08:32:48.160 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:32:48 np0005603609 nova_compute[221550]: 2026-01-31 08:32:48.160 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:32:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:49.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:32:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:49.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:32:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:32:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:51.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:32:51 np0005603609 nova_compute[221550]: 2026-01-31 08:32:51.094 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:51.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:51 np0005603609 nova_compute[221550]: 2026-01-31 08:32:51.439 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updating instance_info_cache with network_info: [{"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:32:51 np0005603609 nova_compute[221550]: 2026-01-31 08:32:51.872 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-20d5fd84-de90-46b0-816e-0f378fd7d0c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:32:51 np0005603609 nova_compute[221550]: 2026-01-31 08:32:51.873 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:32:51 np0005603609 nova_compute[221550]: 2026-01-31 08:32:51.873 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:51 np0005603609 nova_compute[221550]: 2026-01-31 08:32:51.873 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:32:51 np0005603609 nova_compute[221550]: 2026-01-31 08:32:51.874 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:52 np0005603609 nova_compute[221550]: 2026-01-31 08:32:52.119 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:52 np0005603609 nova_compute[221550]: 2026-01-31 08:32:52.120 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:52 np0005603609 nova_compute[221550]: 2026-01-31 08:32:52.120 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:52 np0005603609 nova_compute[221550]: 2026-01-31 08:32:52.120 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:32:52 np0005603609 nova_compute[221550]: 2026-01-31 08:32:52.121 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:52 np0005603609 nova_compute[221550]: 2026-01-31 08:32:52.413 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:32:52 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/723900355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:32:52 np0005603609 nova_compute[221550]: 2026-01-31 08:32:52.533 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:32:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:53.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:32:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:53.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:54 np0005603609 nova_compute[221550]: 2026-01-31 08:32:54.621 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:32:54 np0005603609 nova_compute[221550]: 2026-01-31 08:32:54.621 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:32:54 np0005603609 nova_compute[221550]: 2026-01-31 08:32:54.776 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:32:54 np0005603609 nova_compute[221550]: 2026-01-31 08:32:54.777 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4048MB free_disk=20.94268798828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:32:54 np0005603609 nova_compute[221550]: 2026-01-31 08:32:54.777 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:54 np0005603609 nova_compute[221550]: 2026-01-31 08:32:54.777 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:55.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:55.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:56 np0005603609 nova_compute[221550]: 2026-01-31 08:32:56.096 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:57.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:57.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:57 np0005603609 nova_compute[221550]: 2026-01-31 08:32:57.417 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:59.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:32:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:59.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:01.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:01 np0005603609 nova_compute[221550]: 2026-01-31 08:33:01.097 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:01.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:02 np0005603609 nova_compute[221550]: 2026-01-31 08:33:02.163 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 20d5fd84-de90-46b0-816e-0f378fd7d0c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:33:02 np0005603609 nova_compute[221550]: 2026-01-31 08:33:02.165 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:33:02 np0005603609 nova_compute[221550]: 2026-01-31 08:33:02.165 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:33:02 np0005603609 nova_compute[221550]: 2026-01-31 08:33:02.191 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:33:02 np0005603609 nova_compute[221550]: 2026-01-31 08:33:02.216 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:33:02 np0005603609 nova_compute[221550]: 2026-01-31 08:33:02.217 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:33:02 np0005603609 nova_compute[221550]: 2026-01-31 08:33:02.418 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:02 np0005603609 nova_compute[221550]: 2026-01-31 08:33:02.799 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:33:02 np0005603609 nova_compute[221550]: 2026-01-31 08:33:02.843 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:33:02 np0005603609 nova_compute[221550]: 2026-01-31 08:33:02.885 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:03.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:03.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:33:03 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4132899862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:33:03 np0005603609 nova_compute[221550]: 2026-01-31 08:33:03.302 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:03 np0005603609 nova_compute[221550]: 2026-01-31 08:33:03.310 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:33:04 np0005603609 nova_compute[221550]: 2026-01-31 08:33:04.077 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:33:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:33:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:05.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:33:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:05.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:05 np0005603609 nova_compute[221550]: 2026-01-31 08:33:05.576 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:33:05 np0005603609 nova_compute[221550]: 2026-01-31 08:33:05.577 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 10.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:06 np0005603609 nova_compute[221550]: 2026-01-31 08:33:06.137 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:07.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:07.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:07 np0005603609 nova_compute[221550]: 2026-01-31 08:33:07.421 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:07.525 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:07.526 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:07.526 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:08 np0005603609 nova_compute[221550]: 2026-01-31 08:33:08.092 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:08.094 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:33:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:08.095 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:33:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:09.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:33:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:09.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:33:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:11.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:11 np0005603609 nova_compute[221550]: 2026-01-31 08:33:11.141 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:11.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:11 np0005603609 nova_compute[221550]: 2026-01-31 08:33:11.363 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:11 np0005603609 nova_compute[221550]: 2026-01-31 08:33:11.364 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:11 np0005603609 nova_compute[221550]: 2026-01-31 08:33:11.364 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:12 np0005603609 nova_compute[221550]: 2026-01-31 08:33:12.424 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:13.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:13.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:33:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:15.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:33:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:15.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:15 np0005603609 nova_compute[221550]: 2026-01-31 08:33:15.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:16 np0005603609 nova_compute[221550]: 2026-01-31 08:33:16.142 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:33:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:17.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:33:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:17.097 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:17 np0005603609 podman[293384]: 2026-01-31 08:33:17.158130275 +0000 UTC m=+0.042557731 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:33:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:17.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:17 np0005603609 podman[293383]: 2026-01-31 08:33:17.184614177 +0000 UTC m=+0.069143176 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:33:17 np0005603609 nova_compute[221550]: 2026-01-31 08:33:17.425 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:33:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:19.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:33:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:19.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.370 221554 DEBUG oslo_concurrency.lockutils [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.371 221554 DEBUG oslo_concurrency.lockutils [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.371 221554 DEBUG oslo_concurrency.lockutils [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.372 221554 DEBUG oslo_concurrency.lockutils [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.372 221554 DEBUG oslo_concurrency.lockutils [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.374 221554 INFO nova.compute.manager [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Terminating instance#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.377 221554 DEBUG nova.compute.manager [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:33:20 np0005603609 kernel: tap54f55019-4a (unregistering): left promiscuous mode
Jan 31 03:33:20 np0005603609 NetworkManager[49064]: <info>  [1769848400.4405] device (tap54f55019-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:33:20 np0005603609 ovn_controller[130359]: 2026-01-31T08:33:20Z|00765|binding|INFO|Releasing lport 54f55019-4a06-4846-ad7a-5d67daa3e094 from this chassis (sb_readonly=0)
Jan 31 03:33:20 np0005603609 ovn_controller[130359]: 2026-01-31T08:33:20Z|00766|binding|INFO|Setting lport 54f55019-4a06-4846-ad7a-5d67daa3e094 down in Southbound
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.446 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:20 np0005603609 ovn_controller[130359]: 2026-01-31T08:33:20Z|00767|binding|INFO|Removing iface tap54f55019-4a ovn-installed in OVS
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.448 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.456 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:20 np0005603609 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000a7.scope: Deactivated successfully.
Jan 31 03:33:20 np0005603609 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000a7.scope: Consumed 2.805s CPU time.
Jan 31 03:33:20 np0005603609 systemd-machined[190912]: Machine qemu-93-instance-000000a7 terminated.
Jan 31 03:33:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:20.577 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:38:16 10.100.0.12'], port_security=['fa:16:3e:31:38:16 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '20d5fd84-de90-46b0-816e-0f378fd7d0c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '722ab2e9dd674709953be812d4c88493', 'neutron:revision_number': '11', 'neutron:security_group_ids': '9520bea1-c264-4093-b5f2-d0e6aba13129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2519b9f-dcfa-4a01-b5ee-7b195f240dfa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=54f55019-4a06-4846-ad7a-5d67daa3e094) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:33:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:20.579 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 54f55019-4a06-4846-ad7a-5d67daa3e094 in datapath e14a8b1b-1e10-44db-9858-e5a0ae5f2476 unbound from our chassis#033[00m
Jan 31 03:33:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:20.580 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e14a8b1b-1e10-44db-9858-e5a0ae5f2476, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:33:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:20.582 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c2731267-76ba-4ea1-a1f6-73210176a064]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:20.583 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 namespace which is not needed anymore#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.598 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.603 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.613 221554 INFO nova.virt.libvirt.driver [-] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Instance destroyed successfully.#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.614 221554 DEBUG nova.objects.instance [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'resources' on Instance uuid 20d5fd84-de90-46b0-816e-0f378fd7d0c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:33:20 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[293097]: [NOTICE]   (293101) : haproxy version is 2.8.14-c23fe91
Jan 31 03:33:20 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[293097]: [NOTICE]   (293101) : path to executable is /usr/sbin/haproxy
Jan 31 03:33:20 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[293097]: [WARNING]  (293101) : Exiting Master process...
Jan 31 03:33:20 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[293097]: [ALERT]    (293101) : Current worker (293103) exited with code 143 (Terminated)
Jan 31 03:33:20 np0005603609 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[293097]: [WARNING]  (293101) : All workers exited. Exiting... (0)
Jan 31 03:33:20 np0005603609 systemd[1]: libpod-856449b5e5f06a7ddce9534050f92c4f38ebc5e5ef905a6795d8e4a72d33362d.scope: Deactivated successfully.
Jan 31 03:33:20 np0005603609 podman[293462]: 2026-01-31 08:33:20.695138228 +0000 UTC m=+0.044707623 container died 856449b5e5f06a7ddce9534050f92c4f38ebc5e5ef905a6795d8e4a72d33362d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:33:20 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-856449b5e5f06a7ddce9534050f92c4f38ebc5e5ef905a6795d8e4a72d33362d-userdata-shm.mount: Deactivated successfully.
Jan 31 03:33:20 np0005603609 systemd[1]: var-lib-containers-storage-overlay-d227b51a9debc9940ec0c82147dfa0531808f0dd0b7bb04c59a811471cbaf1c9-merged.mount: Deactivated successfully.
Jan 31 03:33:20 np0005603609 podman[293462]: 2026-01-31 08:33:20.728524917 +0000 UTC m=+0.078094302 container cleanup 856449b5e5f06a7ddce9534050f92c4f38ebc5e5ef905a6795d8e4a72d33362d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:33:20 np0005603609 systemd[1]: libpod-conmon-856449b5e5f06a7ddce9534050f92c4f38ebc5e5ef905a6795d8e4a72d33362d.scope: Deactivated successfully.
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.764 221554 DEBUG nova.virt.libvirt.vif [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:27:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-711445957',display_name='tempest-ServersNegativeTestJSON-server-711445957',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-711445957',id=167,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:31:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='722ab2e9dd674709953be812d4c88493',ramdisk_id='',reservation_id='r-48kv7dus',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1810683467',owner_user_name='tempest-ServersNegativeTestJSON-1810683467-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:32:27Z,user_data=None,user_id='eac51187531841e2891fc5d3c5f84123',uuid=20d5fd84-de90-46b0-816e-0f378fd7d0c7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.764 221554 DEBUG nova.network.os_vif_util [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converting VIF {"id": "54f55019-4a06-4846-ad7a-5d67daa3e094", "address": "fa:16:3e:31:38:16", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f55019-4a", "ovs_interfaceid": "54f55019-4a06-4846-ad7a-5d67daa3e094", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.765 221554 DEBUG nova.network.os_vif_util [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.765 221554 DEBUG os_vif [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.767 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.768 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54f55019-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.769 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.770 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.773 221554 INFO os_vif [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:31:38:16,bridge_name='br-int',has_traffic_filtering=True,id=54f55019-4a06-4846-ad7a-5d67daa3e094,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f55019-4a')#033[00m
Jan 31 03:33:20 np0005603609 podman[293489]: 2026-01-31 08:33:20.793775778 +0000 UTC m=+0.048366024 container remove 856449b5e5f06a7ddce9534050f92c4f38ebc5e5ef905a6795d8e4a72d33362d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:33:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:20.798 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[67af9310-58b9-418b-9793-e7d4fff3b949]: (4, ('Sat Jan 31 08:33:20 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 (856449b5e5f06a7ddce9534050f92c4f38ebc5e5ef905a6795d8e4a72d33362d)\n856449b5e5f06a7ddce9534050f92c4f38ebc5e5ef905a6795d8e4a72d33362d\nSat Jan 31 08:33:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 (856449b5e5f06a7ddce9534050f92c4f38ebc5e5ef905a6795d8e4a72d33362d)\n856449b5e5f06a7ddce9534050f92c4f38ebc5e5ef905a6795d8e4a72d33362d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:20.800 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6789d3-8c29-4692-b45f-be478b1824f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:20.801 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape14a8b1b-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.803 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:20 np0005603609 kernel: tape14a8b1b-10: left promiscuous mode
Jan 31 03:33:20 np0005603609 nova_compute[221550]: 2026-01-31 08:33:20.808 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:20.810 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[453b14b0-b659-4284-8086-3dd40c956c95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:20.823 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d04854bb-888b-46b9-8213-e3e8d906efaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:20.824 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3168d185-9d95-4b86-baf9-e04508c428c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:20.843 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[432e5ade-7661-405f-8824-5fd17121a87e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858618, 'reachable_time': 24140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293521, 'error': None, 'target': 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:20.845 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:33:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:20.845 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[9054d268-f146-4e91-b9cc-d45248372d64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:20 np0005603609 systemd[1]: run-netns-ovnmeta\x2de14a8b1b\x2d1e10\x2d44db\x2d9858\x2de5a0ae5f2476.mount: Deactivated successfully.
Jan 31 03:33:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:33:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:21.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:33:21 np0005603609 nova_compute[221550]: 2026-01-31 08:33:21.144 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:21.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:21 np0005603609 nova_compute[221550]: 2026-01-31 08:33:21.371 221554 INFO nova.virt.libvirt.driver [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Deleting instance files /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7_del#033[00m
Jan 31 03:33:21 np0005603609 nova_compute[221550]: 2026-01-31 08:33:21.371 221554 INFO nova.virt.libvirt.driver [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Deletion of /var/lib/nova/instances/20d5fd84-de90-46b0-816e-0f378fd7d0c7_del complete#033[00m
Jan 31 03:33:21 np0005603609 nova_compute[221550]: 2026-01-31 08:33:21.629 221554 INFO nova.compute.manager [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Took 1.25 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:33:21 np0005603609 nova_compute[221550]: 2026-01-31 08:33:21.629 221554 DEBUG oslo.service.loopingcall [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:33:21 np0005603609 nova_compute[221550]: 2026-01-31 08:33:21.629 221554 DEBUG nova.compute.manager [-] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:33:21 np0005603609 nova_compute[221550]: 2026-01-31 08:33:21.630 221554 DEBUG nova.network.neutron [-] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:33:21 np0005603609 nova_compute[221550]: 2026-01-31 08:33:21.664 221554 DEBUG nova.compute.manager [req-bc4869b1-7402-4ed2-b6e4-521392f04adc req-55dd51fc-92af-4031-aeaa-109114948e4c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-vif-unplugged-54f55019-4a06-4846-ad7a-5d67daa3e094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:33:21 np0005603609 nova_compute[221550]: 2026-01-31 08:33:21.664 221554 DEBUG oslo_concurrency.lockutils [req-bc4869b1-7402-4ed2-b6e4-521392f04adc req-55dd51fc-92af-4031-aeaa-109114948e4c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:21 np0005603609 nova_compute[221550]: 2026-01-31 08:33:21.665 221554 DEBUG oslo_concurrency.lockutils [req-bc4869b1-7402-4ed2-b6e4-521392f04adc req-55dd51fc-92af-4031-aeaa-109114948e4c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:21 np0005603609 nova_compute[221550]: 2026-01-31 08:33:21.665 221554 DEBUG oslo_concurrency.lockutils [req-bc4869b1-7402-4ed2-b6e4-521392f04adc req-55dd51fc-92af-4031-aeaa-109114948e4c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:21 np0005603609 nova_compute[221550]: 2026-01-31 08:33:21.665 221554 DEBUG nova.compute.manager [req-bc4869b1-7402-4ed2-b6e4-521392f04adc req-55dd51fc-92af-4031-aeaa-109114948e4c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] No waiting events found dispatching network-vif-unplugged-54f55019-4a06-4846-ad7a-5d67daa3e094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:33:21 np0005603609 nova_compute[221550]: 2026-01-31 08:33:21.666 221554 DEBUG nova.compute.manager [req-bc4869b1-7402-4ed2-b6e4-521392f04adc req-55dd51fc-92af-4031-aeaa-109114948e4c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-vif-unplugged-54f55019-4a06-4846-ad7a-5d67daa3e094 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:33:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:33:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1846240337' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:33:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:33:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1846240337' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:33:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:33:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:23.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:33:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:33:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:23.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:33:23 np0005603609 nova_compute[221550]: 2026-01-31 08:33:23.944 221554 DEBUG nova.compute.manager [req-034eff73-95ff-4968-b0a4-571e16e69613 req-9ca7b00e-9525-4678-b907-6d367a54b715 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:33:23 np0005603609 nova_compute[221550]: 2026-01-31 08:33:23.944 221554 DEBUG oslo_concurrency.lockutils [req-034eff73-95ff-4968-b0a4-571e16e69613 req-9ca7b00e-9525-4678-b907-6d367a54b715 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:33:23 np0005603609 nova_compute[221550]: 2026-01-31 08:33:23.945 221554 DEBUG oslo_concurrency.lockutils [req-034eff73-95ff-4968-b0a4-571e16e69613 req-9ca7b00e-9525-4678-b907-6d367a54b715 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:33:23 np0005603609 nova_compute[221550]: 2026-01-31 08:33:23.945 221554 DEBUG oslo_concurrency.lockutils [req-034eff73-95ff-4968-b0a4-571e16e69613 req-9ca7b00e-9525-4678-b907-6d367a54b715 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:33:23 np0005603609 nova_compute[221550]: 2026-01-31 08:33:23.945 221554 DEBUG nova.compute.manager [req-034eff73-95ff-4968-b0a4-571e16e69613 req-9ca7b00e-9525-4678-b907-6d367a54b715 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] No waiting events found dispatching network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:33:23 np0005603609 nova_compute[221550]: 2026-01-31 08:33:23.946 221554 WARNING nova.compute.manager [req-034eff73-95ff-4968-b0a4-571e16e69613 req-9ca7b00e-9525-4678-b907-6d367a54b715 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received unexpected event network-vif-plugged-54f55019-4a06-4846-ad7a-5d67daa3e094 for instance with vm_state active and task_state deleting.
Jan 31 03:33:24 np0005603609 nova_compute[221550]: 2026-01-31 08:33:24.740 221554 DEBUG nova.network.neutron [-] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:33:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:33:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:25.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:33:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:33:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:25.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:33:25 np0005603609 nova_compute[221550]: 2026-01-31 08:33:25.264 221554 DEBUG nova.compute.manager [req-cb757dc0-7284-40d3-aff4-b7a6238fb9dd req-ae9122aa-08d6-4e8d-886c-a504e37e3811 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Received event network-vif-deleted-54f55019-4a06-4846-ad7a-5d67daa3e094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:33:25 np0005603609 nova_compute[221550]: 2026-01-31 08:33:25.264 221554 INFO nova.compute.manager [req-cb757dc0-7284-40d3-aff4-b7a6238fb9dd req-ae9122aa-08d6-4e8d-886c-a504e37e3811 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Neutron deleted interface 54f55019-4a06-4846-ad7a-5d67daa3e094; detaching it from the instance and deleting it from the info cache
Jan 31 03:33:25 np0005603609 nova_compute[221550]: 2026-01-31 08:33:25.264 221554 DEBUG nova.network.neutron [req-cb757dc0-7284-40d3-aff4-b7a6238fb9dd req-ae9122aa-08d6-4e8d-886c-a504e37e3811 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:33:25 np0005603609 nova_compute[221550]: 2026-01-31 08:33:25.327 221554 INFO nova.compute.manager [-] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Took 3.70 seconds to deallocate network for instance.
Jan 31 03:33:25 np0005603609 nova_compute[221550]: 2026-01-31 08:33:25.337 221554 DEBUG nova.compute.manager [req-cb757dc0-7284-40d3-aff4-b7a6238fb9dd req-ae9122aa-08d6-4e8d-886c-a504e37e3811 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Detach interface failed, port_id=54f55019-4a06-4846-ad7a-5d67daa3e094, reason: Instance 20d5fd84-de90-46b0-816e-0f378fd7d0c7 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 03:33:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:33:25 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1196754735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:33:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:33:25 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1196754735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:33:25 np0005603609 nova_compute[221550]: 2026-01-31 08:33:25.585 221554 DEBUG oslo_concurrency.lockutils [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:33:25 np0005603609 nova_compute[221550]: 2026-01-31 08:33:25.586 221554 DEBUG oslo_concurrency.lockutils [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:33:25 np0005603609 nova_compute[221550]: 2026-01-31 08:33:25.664 221554 DEBUG oslo_concurrency.processutils [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:33:25 np0005603609 nova_compute[221550]: 2026-01-31 08:33:25.770 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:33:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:33:26 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1780901387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:33:26 np0005603609 nova_compute[221550]: 2026-01-31 08:33:26.075 221554 DEBUG oslo_concurrency.processutils [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:33:26 np0005603609 nova_compute[221550]: 2026-01-31 08:33:26.079 221554 DEBUG nova.compute.provider_tree [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:33:26 np0005603609 nova_compute[221550]: 2026-01-31 08:33:26.146 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:33:26 np0005603609 nova_compute[221550]: 2026-01-31 08:33:26.153 221554 DEBUG nova.scheduler.client.report [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:33:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:26 np0005603609 nova_compute[221550]: 2026-01-31 08:33:26.303 221554 DEBUG oslo_concurrency.lockutils [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:33:26 np0005603609 nova_compute[221550]: 2026-01-31 08:33:26.572 221554 INFO nova.scheduler.client.report [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Deleted allocations for instance 20d5fd84-de90-46b0-816e-0f378fd7d0c7
Jan 31 03:33:26 np0005603609 nova_compute[221550]: 2026-01-31 08:33:26.759 221554 DEBUG oslo_concurrency.lockutils [None req-9c8fc4c2-e33f-4017-baec-8d00589a4201 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "20d5fd84-de90-46b0-816e-0f378fd7d0c7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:33:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:27.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:27.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:29.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:29.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:30 np0005603609 nova_compute[221550]: 2026-01-31 08:33:30.773 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:33:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:33:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:31.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:33:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:33:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:31.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:33:31 np0005603609 nova_compute[221550]: 2026-01-31 08:33:31.214 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:33:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:33:32.851783) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412851915, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 798, "num_deletes": 251, "total_data_size": 1497543, "memory_usage": 1528112, "flush_reason": "Manual Compaction"}
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412869895, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 987800, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71023, "largest_seqno": 71816, "table_properties": {"data_size": 983972, "index_size": 1607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8809, "raw_average_key_size": 19, "raw_value_size": 976286, "raw_average_value_size": 2179, "num_data_blocks": 71, "num_entries": 448, "num_filter_entries": 448, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848358, "oldest_key_time": 1769848358, "file_creation_time": 1769848412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 18128 microseconds, and 3847 cpu microseconds.
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:33:32.869937) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 987800 bytes OK
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:33:32.869975) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:33:32.872637) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:33:32.872649) EVENT_LOG_v1 {"time_micros": 1769848412872644, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:33:32.872663) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1493374, prev total WAL file size 1493374, number of live WAL files 2.
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:33:32.873068) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(964KB)], [144(10MB)]
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412873107, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 12255753, "oldest_snapshot_seqno": -1}
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 9168 keys, 10449564 bytes, temperature: kUnknown
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412988342, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 10449564, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10392182, "index_size": 33333, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 243299, "raw_average_key_size": 26, "raw_value_size": 10233098, "raw_average_value_size": 1116, "num_data_blocks": 1259, "num_entries": 9168, "num_filter_entries": 9168, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769848412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:33:32.988648) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 10449564 bytes
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:33:32.990663) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.3 rd, 90.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.7 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(23.0) write-amplify(10.6) OK, records in: 9682, records dropped: 514 output_compression: NoCompression
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:33:32.990685) EVENT_LOG_v1 {"time_micros": 1769848412990675, "job": 92, "event": "compaction_finished", "compaction_time_micros": 115314, "compaction_time_cpu_micros": 26916, "output_level": 6, "num_output_files": 1, "total_output_size": 10449564, "num_input_records": 9682, "num_output_records": 9168, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412990945, "job": 92, "event": "table_file_deletion", "file_number": 146}
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412992385, "job": 92, "event": "table_file_deletion", "file_number": 144}
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:33:32.872981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:33:32.992415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:33:32.992420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:33:32.992422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:33:32.992424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:33:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:33:32.992426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:33:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:33.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:33.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:35 np0005603609 nova_compute[221550]: 2026-01-31 08:33:35.091 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:33:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:35.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:33:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:35.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:33:35 np0005603609 nova_compute[221550]: 2026-01-31 08:33:35.610 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848400.609329, 20d5fd84-de90-46b0-816e-0f378fd7d0c7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:33:35 np0005603609 nova_compute[221550]: 2026-01-31 08:33:35.611 221554 INFO nova.compute.manager [-] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] VM Stopped (Lifecycle Event)
Jan 31 03:33:35 np0005603609 nova_compute[221550]: 2026-01-31 08:33:35.672 221554 DEBUG nova.compute.manager [None req-0e3a4f68-8551-46ab-8a98-41c5bd93f060 - - - - - -] [instance: 20d5fd84-de90-46b0-816e-0f378fd7d0c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:33:35 np0005603609 nova_compute[221550]: 2026-01-31 08:33:35.776 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:33:36 np0005603609 nova_compute[221550]: 2026-01-31 08:33:36.215 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:33:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:37.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:33:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:37.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:33:38 np0005603609 nova_compute[221550]: 2026-01-31 08:33:38.822 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:33:38 np0005603609 nova_compute[221550]: 2026-01-31 08:33:38.823 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:33:38 np0005603609 nova_compute[221550]: 2026-01-31 08:33:38.889 221554 DEBUG nova.compute.manager [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 03:33:38 np0005603609 nova_compute[221550]: 2026-01-31 08:33:38.989 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:33:38 np0005603609 nova_compute[221550]: 2026-01-31 08:33:38.990 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:33:38 np0005603609 nova_compute[221550]: 2026-01-31 08:33:38.999 221554 DEBUG nova.virt.hardware [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 03:33:39 np0005603609 nova_compute[221550]: 2026-01-31 08:33:38.999 221554 INFO nova.compute.claims [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Claim successful on node compute-1.ctlplane.example.com
Jan 31 03:33:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:39.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:39 np0005603609 nova_compute[221550]: 2026-01-31 08:33:39.189 221554 DEBUG oslo_concurrency.processutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:33:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:39.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:33:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:33:39 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2748956884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:33:39 np0005603609 nova_compute[221550]: 2026-01-31 08:33:39.574 221554 DEBUG oslo_concurrency.processutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:39 np0005603609 nova_compute[221550]: 2026-01-31 08:33:39.579 221554 DEBUG nova.compute.provider_tree [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:33:39 np0005603609 nova_compute[221550]: 2026-01-31 08:33:39.635 221554 DEBUG nova.scheduler.client.report [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:33:39 np0005603609 nova_compute[221550]: 2026-01-31 08:33:39.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:39 np0005603609 nova_compute[221550]: 2026-01-31 08:33:39.678 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:39 np0005603609 nova_compute[221550]: 2026-01-31 08:33:39.678 221554 DEBUG nova.compute.manager [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:33:39 np0005603609 nova_compute[221550]: 2026-01-31 08:33:39.774 221554 DEBUG nova.compute.manager [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:33:39 np0005603609 nova_compute[221550]: 2026-01-31 08:33:39.774 221554 DEBUG nova.network.neutron [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:33:39 np0005603609 nova_compute[221550]: 2026-01-31 08:33:39.818 221554 INFO nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:33:39 np0005603609 nova_compute[221550]: 2026-01-31 08:33:39.873 221554 DEBUG nova.compute.manager [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.080 221554 DEBUG nova.compute.manager [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.081 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.081 221554 INFO nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Creating image(s)#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.104 221554 DEBUG nova.storage.rbd_utils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.129 221554 DEBUG nova.storage.rbd_utils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.154 221554 DEBUG nova.storage.rbd_utils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.159 221554 DEBUG oslo_concurrency.processutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.210 221554 DEBUG oslo_concurrency.processutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.212 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.212 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.213 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.240 221554 DEBUG nova.storage.rbd_utils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.244 221554 DEBUG oslo_concurrency.processutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.515 221554 DEBUG oslo_concurrency.processutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.567 221554 DEBUG nova.storage.rbd_utils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] resizing rbd image 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.657 221554 DEBUG nova.objects.instance [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'migration_context' on Instance uuid 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:33:40 np0005603609 nova_compute[221550]: 2026-01-31 08:33:40.778 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:41.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:41 np0005603609 nova_compute[221550]: 2026-01-31 08:33:41.161 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:33:41 np0005603609 nova_compute[221550]: 2026-01-31 08:33:41.161 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Ensure instance console log exists: /var/lib/nova/instances/4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:33:41 np0005603609 nova_compute[221550]: 2026-01-31 08:33:41.162 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:41 np0005603609 nova_compute[221550]: 2026-01-31 08:33:41.162 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:41 np0005603609 nova_compute[221550]: 2026-01-31 08:33:41.162 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:41 np0005603609 nova_compute[221550]: 2026-01-31 08:33:41.217 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:41.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:41 np0005603609 nova_compute[221550]: 2026-01-31 08:33:41.913 221554 DEBUG nova.policy [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d0e9d918b4041fabd5ded633b4cf404', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9710f0cf77d84353ae13fa47922b085d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:33:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:43.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:33:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:43.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:33:43 np0005603609 nova_compute[221550]: 2026-01-31 08:33:43.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:44 np0005603609 nova_compute[221550]: 2026-01-31 08:33:44.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:44 np0005603609 nova_compute[221550]: 2026-01-31 08:33:44.721 221554 DEBUG nova.network.neutron [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Successfully created port: fcbad7c1-624e-4776-8e78-ae63868c5ec4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:33:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:33:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:45.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:33:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:33:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:45.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:33:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:33:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:33:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:33:45 np0005603609 nova_compute[221550]: 2026-01-31 08:33:45.780 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:46 np0005603609 nova_compute[221550]: 2026-01-31 08:33:46.268 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:46 np0005603609 nova_compute[221550]: 2026-01-31 08:33:46.732 221554 DEBUG nova.network.neutron [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Successfully updated port: fcbad7c1-624e-4776-8e78-ae63868c5ec4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:33:46 np0005603609 nova_compute[221550]: 2026-01-31 08:33:46.865 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:33:46 np0005603609 nova_compute[221550]: 2026-01-31 08:33:46.865 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:33:46 np0005603609 nova_compute[221550]: 2026-01-31 08:33:46.866 221554 DEBUG nova.network.neutron [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:33:47 np0005603609 nova_compute[221550]: 2026-01-31 08:33:47.013 221554 DEBUG nova.compute.manager [req-a049ad06-9e29-449a-aaab-32af7ff8a8c5 req-6be88216-1c3f-48ef-b4bc-6e8fb8b5299c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received event network-changed-fcbad7c1-624e-4776-8e78-ae63868c5ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:33:47 np0005603609 nova_compute[221550]: 2026-01-31 08:33:47.013 221554 DEBUG nova.compute.manager [req-a049ad06-9e29-449a-aaab-32af7ff8a8c5 req-6be88216-1c3f-48ef-b4bc-6e8fb8b5299c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Refreshing instance network info cache due to event network-changed-fcbad7c1-624e-4776-8e78-ae63868c5ec4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:33:47 np0005603609 nova_compute[221550]: 2026-01-31 08:33:47.013 221554 DEBUG oslo_concurrency.lockutils [req-a049ad06-9e29-449a-aaab-32af7ff8a8c5 req-6be88216-1c3f-48ef-b4bc-6e8fb8b5299c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:33:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:47.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:47 np0005603609 nova_compute[221550]: 2026-01-31 08:33:47.177 221554 DEBUG nova.network.neutron [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:33:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:33:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:47.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:33:48 np0005603609 podman[293865]: 2026-01-31 08:33:48.163720789 +0000 UTC m=+0.046182045 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:33:48 np0005603609 podman[293864]: 2026-01-31 08:33:48.185634553 +0000 UTC m=+0.071622825 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Jan 31 03:33:48 np0005603609 nova_compute[221550]: 2026-01-31 08:33:48.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:48 np0005603609 nova_compute[221550]: 2026-01-31 08:33:48.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:33:48 np0005603609 nova_compute[221550]: 2026-01-31 08:33:48.662 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:33:48 np0005603609 nova_compute[221550]: 2026-01-31 08:33:48.702 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:33:48 np0005603609 nova_compute[221550]: 2026-01-31 08:33:48.703 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:33:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:49.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:33:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:49.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.391 221554 DEBUG nova.network.neutron [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Updating instance_info_cache with network_info: [{"id": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "address": "fa:16:3e:3a:bd:05", "network": {"id": "81287ae0-230a-4745-9d4d-f12a02a04d02", "bridge": "br-int", "label": "tempest-network-smoke--608182900", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcbad7c1-62", "ovs_interfaceid": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.484 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.484 221554 DEBUG nova.compute.manager [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Instance network_info: |[{"id": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "address": "fa:16:3e:3a:bd:05", "network": {"id": "81287ae0-230a-4745-9d4d-f12a02a04d02", "bridge": "br-int", "label": "tempest-network-smoke--608182900", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcbad7c1-62", "ovs_interfaceid": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.485 221554 DEBUG oslo_concurrency.lockutils [req-a049ad06-9e29-449a-aaab-32af7ff8a8c5 req-6be88216-1c3f-48ef-b4bc-6e8fb8b5299c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.485 221554 DEBUG nova.network.neutron [req-a049ad06-9e29-449a-aaab-32af7ff8a8c5 req-6be88216-1c3f-48ef-b4bc-6e8fb8b5299c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Refreshing network info cache for port fcbad7c1-624e-4776-8e78-ae63868c5ec4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.490 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Start _get_guest_xml network_info=[{"id": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "address": "fa:16:3e:3a:bd:05", "network": {"id": "81287ae0-230a-4745-9d4d-f12a02a04d02", "bridge": "br-int", "label": "tempest-network-smoke--608182900", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcbad7c1-62", "ovs_interfaceid": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.495 221554 WARNING nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.504 221554 DEBUG nova.virt.libvirt.host [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.505 221554 DEBUG nova.virt.libvirt.host [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.509 221554 DEBUG nova.virt.libvirt.host [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.510 221554 DEBUG nova.virt.libvirt.host [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.512 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.512 221554 DEBUG nova.virt.hardware [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.512 221554 DEBUG nova.virt.hardware [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.513 221554 DEBUG nova.virt.hardware [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.513 221554 DEBUG nova.virt.hardware [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.513 221554 DEBUG nova.virt.hardware [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.513 221554 DEBUG nova.virt.hardware [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.514 221554 DEBUG nova.virt.hardware [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.514 221554 DEBUG nova.virt.hardware [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.514 221554 DEBUG nova.virt.hardware [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.515 221554 DEBUG nova.virt.hardware [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.516 221554 DEBUG nova.virt.hardware [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.519 221554 DEBUG oslo_concurrency.processutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:49 np0005603609 nova_compute[221550]: 2026-01-31 08:33:49.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:33:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:33:50 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/522710267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.027 221554 DEBUG oslo_concurrency.processutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.054 221554 DEBUG nova.storage.rbd_utils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.058 221554 DEBUG oslo_concurrency.processutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:33:50 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3827577837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.465 221554 DEBUG oslo_concurrency.processutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.467 221554 DEBUG nova.virt.libvirt.vif [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:33:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1300355043',display_name='tempest-TestNetworkAdvancedServerOps-server-1300355043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1300355043',id=174,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCmbDmPDaurVYZqmnL0+j7llg8wQN2SUlST66gsq2LIihnID4aWDxrYDMI7+BqXpoedcRI57zeHuiAACSHAfyCXMEU86mXXr9C3v/uMY2IApjIK5OhnhGcW/b+OEEa2kWw==',key_name='tempest-TestNetworkAdvancedServerOps-2013984758',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-s0237wpe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:33:39Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "address": "fa:16:3e:3a:bd:05", "network": {"id": "81287ae0-230a-4745-9d4d-f12a02a04d02", "bridge": "br-int", "label": "tempest-network-smoke--608182900", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcbad7c1-62", "ovs_interfaceid": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.467 221554 DEBUG nova.network.os_vif_util [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "address": "fa:16:3e:3a:bd:05", "network": {"id": "81287ae0-230a-4745-9d4d-f12a02a04d02", "bridge": "br-int", "label": "tempest-network-smoke--608182900", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcbad7c1-62", "ovs_interfaceid": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.468 221554 DEBUG nova.network.os_vif_util [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:bd:05,bridge_name='br-int',has_traffic_filtering=True,id=fcbad7c1-624e-4776-8e78-ae63868c5ec4,network=Network(81287ae0-230a-4745-9d4d-f12a02a04d02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcbad7c1-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.469 221554 DEBUG nova.objects.instance [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.500 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  <uuid>4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6</uuid>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  <name>instance-000000ae</name>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1300355043</nova:name>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:33:49</nova:creationTime>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        <nova:user uuid="4d0e9d918b4041fabd5ded633b4cf404">tempest-TestNetworkAdvancedServerOps-483180749-project-member</nova:user>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        <nova:project uuid="9710f0cf77d84353ae13fa47922b085d">tempest-TestNetworkAdvancedServerOps-483180749</nova:project>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        <nova:port uuid="fcbad7c1-624e-4776-8e78-ae63868c5ec4">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <entry name="serial">4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6</entry>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <entry name="uuid">4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6</entry>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6_disk">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6_disk.config">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:3a:bd:05"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <target dev="tapfcbad7c1-62"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6/console.log" append="off"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:33:50 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:33:50 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:33:50 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:33:50 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.501 221554 DEBUG nova.compute.manager [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Preparing to wait for external event network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.502 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.502 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.502 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.503 221554 DEBUG nova.virt.libvirt.vif [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:33:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1300355043',display_name='tempest-TestNetworkAdvancedServerOps-server-1300355043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1300355043',id=174,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCmbDmPDaurVYZqmnL0+j7llg8wQN2SUlST66gsq2LIihnID4aWDxrYDMI7+BqXpoedcRI57zeHuiAACSHAfyCXMEU86mXXr9C3v/uMY2IApjIK5OhnhGcW/b+OEEa2kWw==',key_name='tempest-TestNetworkAdvancedServerOps-2013984758',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-s0237wpe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:33:39Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "address": "fa:16:3e:3a:bd:05", "network": {"id": "81287ae0-230a-4745-9d4d-f12a02a04d02", "bridge": "br-int", "label": "tempest-network-smoke--608182900", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcbad7c1-62", "ovs_interfaceid": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.503 221554 DEBUG nova.network.os_vif_util [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "address": "fa:16:3e:3a:bd:05", "network": {"id": "81287ae0-230a-4745-9d4d-f12a02a04d02", "bridge": "br-int", "label": "tempest-network-smoke--608182900", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcbad7c1-62", "ovs_interfaceid": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.504 221554 DEBUG nova.network.os_vif_util [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:bd:05,bridge_name='br-int',has_traffic_filtering=True,id=fcbad7c1-624e-4776-8e78-ae63868c5ec4,network=Network(81287ae0-230a-4745-9d4d-f12a02a04d02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcbad7c1-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.504 221554 DEBUG os_vif [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:bd:05,bridge_name='br-int',has_traffic_filtering=True,id=fcbad7c1-624e-4776-8e78-ae63868c5ec4,network=Network(81287ae0-230a-4745-9d4d-f12a02a04d02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcbad7c1-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.504 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.505 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.505 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.507 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.507 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfcbad7c1-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.508 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfcbad7c1-62, col_values=(('external_ids', {'iface-id': 'fcbad7c1-624e-4776-8e78-ae63868c5ec4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:bd:05', 'vm-uuid': '4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:50 np0005603609 NetworkManager[49064]: <info>  [1769848430.5100] manager: (tapfcbad7c1-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.509 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.512 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.515 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.517 221554 INFO os_vif [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:bd:05,bridge_name='br-int',has_traffic_filtering=True,id=fcbad7c1-624e-4776-8e78-ae63868c5ec4,network=Network(81287ae0-230a-4745-9d4d-f12a02a04d02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcbad7c1-62')#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.691 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.691 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.692 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No VIF found with MAC fa:16:3e:3a:bd:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.692 221554 INFO nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Using config drive#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.717 221554 DEBUG nova.storage.rbd_utils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.727 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.728 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.728 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.728 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:33:50 np0005603609 nova_compute[221550]: 2026-01-31 08:33:50.728 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:33:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:51.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:33:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:33:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2128890528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:33:51 np0005603609 nova_compute[221550]: 2026-01-31 08:33:51.180 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:51.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:51 np0005603609 nova_compute[221550]: 2026-01-31 08:33:51.270 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:51 np0005603609 nova_compute[221550]: 2026-01-31 08:33:51.287 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000ae as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:33:51 np0005603609 nova_compute[221550]: 2026-01-31 08:33:51.287 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000ae as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:33:51 np0005603609 nova_compute[221550]: 2026-01-31 08:33:51.411 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:33:51 np0005603609 nova_compute[221550]: 2026-01-31 08:33:51.413 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4204MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:33:51 np0005603609 nova_compute[221550]: 2026-01-31 08:33:51.413 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:51 np0005603609 nova_compute[221550]: 2026-01-31 08:33:51.413 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:51 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:33:51 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:33:51 np0005603609 nova_compute[221550]: 2026-01-31 08:33:51.961 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:33:51 np0005603609 nova_compute[221550]: 2026-01-31 08:33:51.962 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:33:51 np0005603609 nova_compute[221550]: 2026-01-31 08:33:51.963 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.294 221554 INFO nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Creating config drive at /var/lib/nova/instances/4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6/disk.config#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.300 221554 DEBUG oslo_concurrency.processutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp3cbcuxh8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.402 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.423 221554 DEBUG oslo_concurrency.processutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp3cbcuxh8" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.454 221554 DEBUG nova.storage.rbd_utils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.458 221554 DEBUG oslo_concurrency.processutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6/disk.config 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.642 221554 DEBUG oslo_concurrency.processutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6/disk.config 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.643 221554 INFO nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Deleting local config drive /var/lib/nova/instances/4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6/disk.config because it was imported into RBD.#033[00m
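The three log entries above show Nova's config-drive flow on an RBD-backed node: build the ISO locally with `mkisofs`, `rbd import` it into the `vms` pool, then delete the local copy. A minimal sketch of the two command lines as they appear in the log; `build_mkisofs_cmd` and `build_rbd_import_cmd` are illustrative helper names, not Nova APIs:

```python
# Hypothetical helpers that assemble the exact command lines visible in the
# log above; the real code lives in nova.virt.libvirt and oslo.concurrency.

def build_mkisofs_cmd(iso_path, publisher, staging_dir):
    """mkisofs invocation for a 'config-2' config drive (as logged by Nova)."""
    return [
        "/usr/bin/mkisofs", "-o", iso_path,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", publisher,
        "-quiet", "-J", "-r", "-V", "config-2",
        staging_dir,
    ]

def build_rbd_import_cmd(pool, local_path, image_name, client_id, conf):
    """rbd import invocation that copies the finished ISO into Ceph."""
    return [
        "rbd", "import", "--pool", pool, local_path, image_name,
        "--image-format=2", "--id", client_id, "--conf", conf,
    ]
```

Once the import returns 0, the local `disk.config` is redundant, which is why the INFO line above reports it being deleted.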
Jan 31 03:33:52 np0005603609 kernel: tapfcbad7c1-62: entered promiscuous mode
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.695 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:33:52Z|00768|binding|INFO|Claiming lport fcbad7c1-624e-4776-8e78-ae63868c5ec4 for this chassis.
Jan 31 03:33:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:33:52Z|00769|binding|INFO|fcbad7c1-624e-4776-8e78-ae63868c5ec4: Claiming fa:16:3e:3a:bd:05 10.100.0.9
Jan 31 03:33:52 np0005603609 NetworkManager[49064]: <info>  [1769848432.6993] manager: (tapfcbad7c1-62): new Tun device (/org/freedesktop/NetworkManager/Devices/354)
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.711 221554 DEBUG nova.network.neutron [req-a049ad06-9e29-449a-aaab-32af7ff8a8c5 req-6be88216-1c3f-48ef-b4bc-6e8fb8b5299c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Updated VIF entry in instance network info cache for port fcbad7c1-624e-4776-8e78-ae63868c5ec4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.713 221554 DEBUG nova.network.neutron [req-a049ad06-9e29-449a-aaab-32af7ff8a8c5 req-6be88216-1c3f-48ef-b4bc-6e8fb8b5299c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Updating instance_info_cache with network_info: [{"id": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "address": "fa:16:3e:3a:bd:05", "network": {"id": "81287ae0-230a-4745-9d4d-f12a02a04d02", "bridge": "br-int", "label": "tempest-network-smoke--608182900", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcbad7c1-62", "ovs_interfaceid": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
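The `instance_info_cache` payload logged above is dense; when debugging, the fields that usually matter are the port ID, MAC, fixed IP, and `active` flag. A small sketch of pulling those out of one VIF entry (the `vif` dict below is a trimmed copy of the logged payload, and `summarize_vif` is a hypothetical helper, not a Nova function):

```python
def summarize_vif(vif):
    """Extract the operator-relevant fields from one network_info entry."""
    subnet = vif["network"]["subnets"][0]
    return {
        "port_id": vif["id"],
        "mac": vif["address"],
        "fixed_ip": subnet["ips"][0]["address"],
        "cidr": subnet["cidr"],
        "devname": vif["devname"],
        "active": vif["active"],
    }

# Trimmed-down copy of the cache entry from the log line above.
vif = {
    "id": "fcbad7c1-624e-4776-8e78-ae63868c5ec4",
    "address": "fa:16:3e:3a:bd:05",
    "devname": "tapfcbad7c1-62",
    "active": False,
    "network": {"subnets": [{"cidr": "10.100.0.0/28",
                             "ips": [{"address": "10.100.0.9"}]}]},
}
```

Note `active` is still `false` at this point; it flips once OVN reports the port up and Neutron sends `network-vif-plugged` (received at 08:33:53 below).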
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.716 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:bd:05 10.100.0.9'], port_security=['fa:16:3e:3a:bd:05 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81287ae0-230a-4745-9d4d-f12a02a04d02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '06c0a46d-73c1-48f8-970d-bf276115204e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef683f19-a225-4b3e-868c-bb553f5f4aa2, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=fcbad7c1-624e-4776-8e78-ae63868c5ec4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.717 140058 INFO neutron.agent.ovn.metadata.agent [-] Port fcbad7c1-624e-4776-8e78-ae63868c5ec4 in datapath 81287ae0-230a-4745-9d4d-f12a02a04d02 bound to our chassis#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.718 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81287ae0-230a-4745-9d4d-f12a02a04d02#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.727 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:52 np0005603609 systemd-machined[190912]: New machine qemu-94-instance-000000ae.
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.730 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1c0a56-6123-4c51-a70b-000a8da793ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.731 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81287ae0-21 in ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:33:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:33:52Z|00770|binding|INFO|Setting lport fcbad7c1-624e-4776-8e78-ae63868c5ec4 ovn-installed in OVS
Jan 31 03:33:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:33:52Z|00771|binding|INFO|Setting lport fcbad7c1-624e-4776-8e78-ae63868c5ec4 up in Southbound
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.733 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81287ae0-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.733 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.734 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[efaf9d30-e442-44fa-bdd6-083023917e8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.735 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0b443251-defe-4b2a-bae1-ec8a5adffc85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:52 np0005603609 systemd[1]: Started Virtual Machine qemu-94-instance-000000ae.
Jan 31 03:33:52 np0005603609 systemd-udevd[294138]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.745 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[b36fe7ad-a0bc-422e-b3c3-cc8a40bfa531]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.745 221554 DEBUG oslo_concurrency.lockutils [req-a049ad06-9e29-449a-aaab-32af7ff8a8c5 req-6be88216-1c3f-48ef-b4bc-6e8fb8b5299c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:33:52 np0005603609 NetworkManager[49064]: <info>  [1769848432.7553] device (tapfcbad7c1-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:33:52 np0005603609 NetworkManager[49064]: <info>  [1769848432.7561] device (tapfcbad7c1-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.771 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ebb3502b-3783-4eeb-9645-77e2ee3b4632]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.792 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[95b291bb-1a5c-476d-b57e-161e53198410]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:52 np0005603609 NetworkManager[49064]: <info>  [1769848432.7993] manager: (tap81287ae0-20): new Veth device (/org/freedesktop/NetworkManager/Devices/355)
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.799 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[48b54528-feec-4886-a1cb-6153a2cf2cc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:33:52 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3946079666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.829 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c4471191-b757-4761-b7d9-4ec7259ff00f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.832 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b6557d-792c-4eb0-96ac-4a9e8440459b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.841 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.846 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:33:52 np0005603609 NetworkManager[49064]: <info>  [1769848432.8492] device (tap81287ae0-20): carrier: link connected
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.852 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[9b4b6215-0233-47f1-9ae9-c71ba7ba2bf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.863 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[962674a3-0915-4968-8ad2-46bf00251462]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81287ae0-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:97:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 867254, 'reachable_time': 31402, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294171, 'error': None, 'target': 'ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.873 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[447888bc-04ed-43a2-8ec1-6ce3b80ab7a9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe45:9728'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 867254, 'tstamp': 867254}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294172, 'error': None, 'target': 'ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.886 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[665e85d6-e19e-42a9-bc19-395029752777]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81287ae0-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:97:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 867254, 'reachable_time': 31402, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294173, 'error': None, 'target': 'ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.905 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
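The inventory dict in the line above determines how much capacity placement can hand out per resource class: `(total - reserved) * allocation_ratio`. A quick check of that arithmetic against the logged values (helper name is illustrative):

```python
def effective_capacity(inv):
    """Allocatable capacity placement derives from one inventory record."""
    return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

# Values copied from the inventory data in the log line above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}
```

So this 8-vCPU host can overcommit to 32 vCPUs of allocations, while disk is undercommitted (ratio 0.9) to leave headroom on the Ceph-backed pool.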
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.905 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[27254396-573c-4d55-ab7b-61ea47ebf8b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.945 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe82c15-235f-45fa-b273-fe993c66f92e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.946 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81287ae0-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.946 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.947 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81287ae0-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.948 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:52 np0005603609 NetworkManager[49064]: <info>  [1769848432.9489] manager: (tap81287ae0-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Jan 31 03:33:52 np0005603609 kernel: tap81287ae0-20: entered promiscuous mode
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.950 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.953 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81287ae0-20, col_values=(('external_ids', {'iface-id': '2b2712a7-0919-431b-ae5b-c38d96bf7480'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:33:52Z|00772|binding|INFO|Releasing lport 2b2712a7-0919-431b-ae5b-c38d96bf7480 from this chassis (sb_readonly=0)
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.954 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.959 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.960 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81287ae0-230a-4745-9d4d-f12a02a04d02.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81287ae0-230a-4745-9d4d-f12a02a04d02.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
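The "Unable to access ...pid.haproxy" DEBUG above is expected on first provisioning: the agent probes for an existing haproxy pid file before spawning one, and ENOENT simply means no proxy is running yet. A sketch of that tolerant read, assuming behavior like `neutron.agent.linux.utils.get_value_from_file` (the function below is a simplified stand-in, not the actual implementation):

```python
import os
import tempfile

def get_value_from_file(path, converter=str):
    """Return the converted file contents, or None if the file is missing
    (the agent logs a DEBUG line like the one above in that case)."""
    try:
        with open(path) as f:
            return converter(f.read().strip())
    except FileNotFoundError:
        return None

# Missing pid file -> None, no exception raised to the caller.
missing = get_value_from_file("/var/lib/neutron/external/pids/does-not-exist.pid")

# Once haproxy daemonizes it writes its pid; subsequent reads succeed.
fd, pid_path = tempfile.mkstemp(suffix=".pid.haproxy")
with os.fdopen(fd, "w") as f:
    f.write("294205\n")
pid = get_value_from_file(pid_path, int)
os.remove(pid_path)
```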
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.961 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0f523e71-9275-426b-9242-9b84aed4d845]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.962 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-81287ae0-230a-4745-9d4d-f12a02a04d02
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/81287ae0-230a-4745-9d4d-f12a02a04d02.pid.haproxy
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 81287ae0-230a-4745-9d4d-f12a02a04d02
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
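The multi-line dump above is the haproxy config the metadata agent renders per network before launching the proxy in the `ovnmeta-<network>` namespace. A minimal re-creation of that render, assuming a template keyed only on the network ID (the real template lives in `neutron.agent.ovn.metadata.driver`; the `defaults` section is omitted here for brevity and the function name is illustrative):

```python
# Illustrative per-network render of the proxy config logged above.
HAPROXY_TEMPLATE = """\
global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-{network_id}
    user        root
    group       root
    maxconn     1024
    pidfile     {pidfile}
    daemon

listen listener
    bind 169.254.169.254:80
    server metadata {socket_path}
    http-request add-header X-OVN-Network-ID {network_id}
"""

def render_haproxy_cfg(network_id,
                       pid_dir="/var/lib/neutron/external/pids",
                       socket_path="/var/lib/neutron/metadata_proxy"):
    """Fill the template for one network; every proxy gets its own pidfile."""
    return HAPROXY_TEMPLATE.format(
        network_id=network_id,
        pidfile=f"{pid_dir}/{network_id}.pid.haproxy",
        socket_path=socket_path,
    )

cfg = render_haproxy_cfg("81287ae0-230a-4745-9d4d-f12a02a04d02")
```

Binding on 169.254.169.254:80 inside the namespace and forwarding to the agent's unix socket is what lets guests reach the metadata service without routable connectivity; the `X-OVN-Network-ID` header tells the agent which network the request came from.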
Jan 31 03:33:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:52.962 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02', 'env', 'PROCESS_TAG=haproxy-81287ae0-230a-4745-9d4d-f12a02a04d02', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81287ae0-230a-4745-9d4d-f12a02a04d02.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.962 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:33:52 np0005603609 nova_compute[221550]: 2026-01-31 08:33:52.963 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:33:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:53.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:33:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:53.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:53 np0005603609 podman[294205]: 2026-01-31 08:33:53.280386139 +0000 UTC m=+0.045645812 container create 861659c061d4da8bd37eb2a113d7d11e62eb6339079905cd3487bb6d11946efa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.294 221554 DEBUG nova.compute.manager [req-64e35caf-d00b-4f40-b086-68db0040ee38 req-aa554c5e-ce0f-42cc-97f2-4818056e88f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received event network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.295 221554 DEBUG oslo_concurrency.lockutils [req-64e35caf-d00b-4f40-b086-68db0040ee38 req-aa554c5e-ce0f-42cc-97f2-4818056e88f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.296 221554 DEBUG oslo_concurrency.lockutils [req-64e35caf-d00b-4f40-b086-68db0040ee38 req-aa554c5e-ce0f-42cc-97f2-4818056e88f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.296 221554 DEBUG oslo_concurrency.lockutils [req-64e35caf-d00b-4f40-b086-68db0040ee38 req-aa554c5e-ce0f-42cc-97f2-4818056e88f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.299 221554 DEBUG nova.compute.manager [req-64e35caf-d00b-4f40-b086-68db0040ee38 req-aa554c5e-ce0f-42cc-97f2-4818056e88f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Processing event network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:33:53 np0005603609 systemd[1]: Started libpod-conmon-861659c061d4da8bd37eb2a113d7d11e62eb6339079905cd3487bb6d11946efa.scope.
Jan 31 03:33:53 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:33:53 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11d684656562158cd66a6d0293d22137b9cadf3231bdae24d4afb63f4c4f6abc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:53 np0005603609 podman[294205]: 2026-01-31 08:33:53.347335068 +0000 UTC m=+0.112594761 container init 861659c061d4da8bd37eb2a113d7d11e62eb6339079905cd3487bb6d11946efa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:33:53 np0005603609 podman[294205]: 2026-01-31 08:33:53.254245023 +0000 UTC m=+0.019504716 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:33:53 np0005603609 podman[294205]: 2026-01-31 08:33:53.352089214 +0000 UTC m=+0.117348877 container start 861659c061d4da8bd37eb2a113d7d11e62eb6339079905cd3487bb6d11946efa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:33:53 np0005603609 neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02[294220]: [NOTICE]   (294224) : New worker (294226) forked
Jan 31 03:33:53 np0005603609 neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02[294220]: [NOTICE]   (294224) : Loading success.
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.671 221554 DEBUG nova.compute.manager [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.672 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848433.67091, 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.672 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] VM Started (Lifecycle Event)
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.676 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.680 221554 INFO nova.virt.libvirt.driver [-] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Instance spawned successfully.
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.680 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.905 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.911 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.963 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.967 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.968 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.969 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.970 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.971 221554 DEBUG nova.virt.libvirt.driver [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.978 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.978 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848433.671216, 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:33:53 np0005603609 nova_compute[221550]: 2026-01-31 08:33:53.979 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] VM Paused (Lifecycle Event)
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:33:54.018 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:33:54 np0005603609 nova_compute[221550]: 2026-01-31 08:33:54.021 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848433.6756423, 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:33:54 np0005603609 nova_compute[221550]: 2026-01-31 08:33:54.021 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] VM Resumed (Lifecycle Event)
Jan 31 03:33:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.048 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:33:54 np0005603609 nova_compute[221550]: 2026-01-31 08:33:54.052 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:33:54 np0005603609 nova_compute[221550]: 2026-01-31 08:33:54.069 221554 INFO nova.compute.manager [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Took 13.99 seconds to spawn the instance on the hypervisor.
Jan 31 03:33:54 np0005603609 nova_compute[221550]: 2026-01-31 08:33:54.069 221554 DEBUG nova.compute.manager [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:33:54 np0005603609 nova_compute[221550]: 2026-01-31 08:33:54.105 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:33:54 np0005603609 nova_compute[221550]: 2026-01-31 08:33:54.157 221554 INFO nova.compute.manager [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Took 15.20 seconds to build instance.
Jan 31 03:33:54 np0005603609 nova_compute[221550]: 2026-01-31 08:33:54.267 221554 DEBUG oslo_concurrency.lockutils [None req-6fb27e58-bf83-477c-b5ea-73b93c2cfd38 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:33:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:33:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:55.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:33:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:55.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:55 np0005603609 nova_compute[221550]: 2026-01-31 08:33:55.510 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:33:55 np0005603609 nova_compute[221550]: 2026-01-31 08:33:55.648 221554 DEBUG nova.compute.manager [req-10950a9a-de08-4669-b4f0-b593782020eb req-2935102b-5e52-44bb-93c5-763521e3615b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received event network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:33:55 np0005603609 nova_compute[221550]: 2026-01-31 08:33:55.649 221554 DEBUG oslo_concurrency.lockutils [req-10950a9a-de08-4669-b4f0-b593782020eb req-2935102b-5e52-44bb-93c5-763521e3615b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:33:55 np0005603609 nova_compute[221550]: 2026-01-31 08:33:55.649 221554 DEBUG oslo_concurrency.lockutils [req-10950a9a-de08-4669-b4f0-b593782020eb req-2935102b-5e52-44bb-93c5-763521e3615b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:33:55 np0005603609 nova_compute[221550]: 2026-01-31 08:33:55.650 221554 DEBUG oslo_concurrency.lockutils [req-10950a9a-de08-4669-b4f0-b593782020eb req-2935102b-5e52-44bb-93c5-763521e3615b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:33:55 np0005603609 nova_compute[221550]: 2026-01-31 08:33:55.650 221554 DEBUG nova.compute.manager [req-10950a9a-de08-4669-b4f0-b593782020eb req-2935102b-5e52-44bb-93c5-763521e3615b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] No waiting events found dispatching network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:33:55 np0005603609 nova_compute[221550]: 2026-01-31 08:33:55.650 221554 WARNING nova.compute.manager [req-10950a9a-de08-4669-b4f0-b593782020eb req-2935102b-5e52-44bb-93c5-763521e3615b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received unexpected event network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 for instance with vm_state active and task_state None.
Jan 31 03:33:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:56.046 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:33:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:33:56.047 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 03:33:56 np0005603609 nova_compute[221550]: 2026-01-31 08:33:56.049 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:33:56 np0005603609 nova_compute[221550]: 2026-01-31 08:33:56.272 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:33:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:33:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:57.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:33:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:57.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:33:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:59.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:33:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:33:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:59.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:59 np0005603609 nova_compute[221550]: 2026-01-31 08:33:59.959 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:33:59 np0005603609 nova_compute[221550]: 2026-01-31 08:33:59.961 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:33:59 np0005603609 nova_compute[221550]: 2026-01-31 08:33:59.961 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:34:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:00.049 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:34:00 np0005603609 nova_compute[221550]: 2026-01-31 08:34:00.363 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:00 np0005603609 NetworkManager[49064]: <info>  [1769848440.3636] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Jan 31 03:34:00 np0005603609 NetworkManager[49064]: <info>  [1769848440.3644] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/358)
Jan 31 03:34:00 np0005603609 nova_compute[221550]: 2026-01-31 08:34:00.404 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:00 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:00Z|00773|binding|INFO|Releasing lport 2b2712a7-0919-431b-ae5b-c38d96bf7480 from this chassis (sb_readonly=0)
Jan 31 03:34:00 np0005603609 nova_compute[221550]: 2026-01-31 08:34:00.419 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:00 np0005603609 nova_compute[221550]: 2026-01-31 08:34:00.513 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:00 np0005603609 nova_compute[221550]: 2026-01-31 08:34:00.528 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:01.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:01.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:01 np0005603609 nova_compute[221550]: 2026-01-31 08:34:01.275 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:01 np0005603609 nova_compute[221550]: 2026-01-31 08:34:01.693 221554 DEBUG nova.compute.manager [req-6ad478d0-6261-4a83-9bcd-01a396143899 req-d5758f16-4175-4125-a44c-87687bb98181 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received event network-changed-fcbad7c1-624e-4776-8e78-ae63868c5ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:34:01 np0005603609 nova_compute[221550]: 2026-01-31 08:34:01.693 221554 DEBUG nova.compute.manager [req-6ad478d0-6261-4a83-9bcd-01a396143899 req-d5758f16-4175-4125-a44c-87687bb98181 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Refreshing instance network info cache due to event network-changed-fcbad7c1-624e-4776-8e78-ae63868c5ec4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:34:01 np0005603609 nova_compute[221550]: 2026-01-31 08:34:01.694 221554 DEBUG oslo_concurrency.lockutils [req-6ad478d0-6261-4a83-9bcd-01a396143899 req-d5758f16-4175-4125-a44c-87687bb98181 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:34:01 np0005603609 nova_compute[221550]: 2026-01-31 08:34:01.694 221554 DEBUG oslo_concurrency.lockutils [req-6ad478d0-6261-4a83-9bcd-01a396143899 req-d5758f16-4175-4125-a44c-87687bb98181 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:34:01 np0005603609 nova_compute[221550]: 2026-01-31 08:34:01.694 221554 DEBUG nova.network.neutron [req-6ad478d0-6261-4a83-9bcd-01a396143899 req-d5758f16-4175-4125-a44c-87687bb98181 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Refreshing network info cache for port fcbad7c1-624e-4776-8e78-ae63868c5ec4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:34:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:34:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:03.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:34:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:03.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:04 np0005603609 nova_compute[221550]: 2026-01-31 08:34:04.211 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:04 np0005603609 nova_compute[221550]: 2026-01-31 08:34:04.615 221554 DEBUG nova.network.neutron [req-6ad478d0-6261-4a83-9bcd-01a396143899 req-d5758f16-4175-4125-a44c-87687bb98181 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Updated VIF entry in instance network info cache for port fcbad7c1-624e-4776-8e78-ae63868c5ec4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 03:34:04 np0005603609 nova_compute[221550]: 2026-01-31 08:34:04.615 221554 DEBUG nova.network.neutron [req-6ad478d0-6261-4a83-9bcd-01a396143899 req-d5758f16-4175-4125-a44c-87687bb98181 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Updating instance_info_cache with network_info: [{"id": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "address": "fa:16:3e:3a:bd:05", "network": {"id": "81287ae0-230a-4745-9d4d-f12a02a04d02", "bridge": "br-int", "label": "tempest-network-smoke--608182900", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcbad7c1-62", "ovs_interfaceid": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:34:04 np0005603609 nova_compute[221550]: 2026-01-31 08:34:04.683 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "a3943c42-01b0-4548-99b8-1fbb3e72306d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:34:04 np0005603609 nova_compute[221550]: 2026-01-31 08:34:04.683 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "a3943c42-01b0-4548-99b8-1fbb3e72306d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:05 np0005603609 nova_compute[221550]: 2026-01-31 08:34:05.084 221554 DEBUG oslo_concurrency.lockutils [req-6ad478d0-6261-4a83-9bcd-01a396143899 req-d5758f16-4175-4125-a44c-87687bb98181 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:34:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:05.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:05.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:05 np0005603609 nova_compute[221550]: 2026-01-31 08:34:05.301 221554 DEBUG nova.compute.manager [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:34:05 np0005603609 nova_compute[221550]: 2026-01-31 08:34:05.516 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:06 np0005603609 nova_compute[221550]: 2026-01-31 08:34:06.278 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:06 np0005603609 nova_compute[221550]: 2026-01-31 08:34:06.521 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:06 np0005603609 nova_compute[221550]: 2026-01-31 08:34:06.522 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:06 np0005603609 nova_compute[221550]: 2026-01-31 08:34:06.530 221554 DEBUG nova.virt.hardware [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:34:06 np0005603609 nova_compute[221550]: 2026-01-31 08:34:06.530 221554 INFO nova.compute.claims [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:34:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:06Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:bd:05 10.100.0.9
Jan 31 03:34:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:06Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:bd:05 10.100.0.9
Jan 31 03:34:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:07.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:07.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:07 np0005603609 nova_compute[221550]: 2026-01-31 08:34:07.524 221554 DEBUG oslo_concurrency.processutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:07.526 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:07.526 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:07.527 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:34:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1097566403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:34:07 np0005603609 nova_compute[221550]: 2026-01-31 08:34:07.979 221554 DEBUG oslo_concurrency.processutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:07 np0005603609 nova_compute[221550]: 2026-01-31 08:34:07.984 221554 DEBUG nova.compute.provider_tree [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:34:08 np0005603609 nova_compute[221550]: 2026-01-31 08:34:08.142 221554 DEBUG nova.scheduler.client.report [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:34:08 np0005603609 nova_compute[221550]: 2026-01-31 08:34:08.577 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:08 np0005603609 nova_compute[221550]: 2026-01-31 08:34:08.578 221554 DEBUG nova.compute.manager [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:34:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:09.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:09 np0005603609 nova_compute[221550]: 2026-01-31 08:34:09.166 221554 DEBUG nova.compute.manager [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:34:09 np0005603609 nova_compute[221550]: 2026-01-31 08:34:09.167 221554 DEBUG nova.network.neutron [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:34:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:09.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:09 np0005603609 nova_compute[221550]: 2026-01-31 08:34:09.694 221554 INFO nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:34:09 np0005603609 nova_compute[221550]: 2026-01-31 08:34:09.827 221554 DEBUG nova.compute.manager [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:34:10 np0005603609 nova_compute[221550]: 2026-01-31 08:34:10.558 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.018 221554 DEBUG nova.compute.manager [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.019 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.019 221554 INFO nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Creating image(s)#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.052 221554 DEBUG nova.storage.rbd_utils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image a3943c42-01b0-4548-99b8-1fbb3e72306d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.099 221554 DEBUG nova.storage.rbd_utils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image a3943c42-01b0-4548-99b8-1fbb3e72306d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.137 221554 DEBUG nova.storage.rbd_utils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image a3943c42-01b0-4548-99b8-1fbb3e72306d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:34:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:34:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:11.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.143 221554 DEBUG oslo_concurrency.processutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.199 221554 DEBUG oslo_concurrency.processutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.200 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.201 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.201 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.231 221554 DEBUG nova.storage.rbd_utils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image a3943c42-01b0-4548-99b8-1fbb3e72306d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.235 221554 DEBUG oslo_concurrency.processutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 a3943c42-01b0-4548-99b8-1fbb3e72306d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:34:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:11.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:34:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.280 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.511 221554 DEBUG oslo_concurrency.processutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 a3943c42-01b0-4548-99b8-1fbb3e72306d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.585 221554 DEBUG nova.storage.rbd_utils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] resizing rbd image a3943c42-01b0-4548-99b8-1fbb3e72306d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.722 221554 DEBUG nova.objects.instance [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lazy-loading 'migration_context' on Instance uuid a3943c42-01b0-4548-99b8-1fbb3e72306d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.824 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.824 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Ensure instance console log exists: /var/lib/nova/instances/a3943c42-01b0-4548-99b8-1fbb3e72306d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.825 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.825 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:11 np0005603609 nova_compute[221550]: 2026-01-31 08:34:11.826 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:12 np0005603609 nova_compute[221550]: 2026-01-31 08:34:12.027 221554 DEBUG nova.network.neutron [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Successfully created port: 97668e1e-b6e0-47c7-b18b-e64ba12e23b2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:34:12 np0005603609 nova_compute[221550]: 2026-01-31 08:34:12.351 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:34:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:13.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:34:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:13.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:34:13 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4035119726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:34:13 np0005603609 nova_compute[221550]: 2026-01-31 08:34:13.694 221554 INFO nova.compute.manager [None req-7a212ab8-5788-4b97-8c4d-04a48f8ae2ce 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Get console output#033[00m
Jan 31 03:34:13 np0005603609 nova_compute[221550]: 2026-01-31 08:34:13.702 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:34:14 np0005603609 nova_compute[221550]: 2026-01-31 08:34:14.320 221554 DEBUG nova.network.neutron [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Successfully updated port: 97668e1e-b6e0-47c7-b18b-e64ba12e23b2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:34:14 np0005603609 nova_compute[221550]: 2026-01-31 08:34:14.677 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "refresh_cache-a3943c42-01b0-4548-99b8-1fbb3e72306d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:34:14 np0005603609 nova_compute[221550]: 2026-01-31 08:34:14.677 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquired lock "refresh_cache-a3943c42-01b0-4548-99b8-1fbb3e72306d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:34:14 np0005603609 nova_compute[221550]: 2026-01-31 08:34:14.678 221554 DEBUG nova.network.neutron [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:34:14 np0005603609 nova_compute[221550]: 2026-01-31 08:34:14.833 221554 DEBUG oslo_concurrency.lockutils [None req-9475d52a-ea40-4ea2-bcaa-6756279f22b8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:14 np0005603609 nova_compute[221550]: 2026-01-31 08:34:14.835 221554 DEBUG oslo_concurrency.lockutils [None req-9475d52a-ea40-4ea2-bcaa-6756279f22b8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:14 np0005603609 nova_compute[221550]: 2026-01-31 08:34:14.836 221554 INFO nova.compute.manager [None req-9475d52a-ea40-4ea2-bcaa-6756279f22b8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Rebooting instance#033[00m
Jan 31 03:34:14 np0005603609 nova_compute[221550]: 2026-01-31 08:34:14.854 221554 DEBUG nova.compute.manager [req-83edb083-b3a9-43b3-ba10-a2ce33f43c4b req-04214913-648f-42a3-a563-a731730e51ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Received event network-changed-97668e1e-b6e0-47c7-b18b-e64ba12e23b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:14 np0005603609 nova_compute[221550]: 2026-01-31 08:34:14.854 221554 DEBUG nova.compute.manager [req-83edb083-b3a9-43b3-ba10-a2ce33f43c4b req-04214913-648f-42a3-a563-a731730e51ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Refreshing instance network info cache due to event network-changed-97668e1e-b6e0-47c7-b18b-e64ba12e23b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:34:14 np0005603609 nova_compute[221550]: 2026-01-31 08:34:14.855 221554 DEBUG oslo_concurrency.lockutils [req-83edb083-b3a9-43b3-ba10-a2ce33f43c4b req-04214913-648f-42a3-a563-a731730e51ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-a3943c42-01b0-4548-99b8-1fbb3e72306d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:34:14 np0005603609 nova_compute[221550]: 2026-01-31 08:34:14.907 221554 DEBUG oslo_concurrency.lockutils [None req-9475d52a-ea40-4ea2-bcaa-6756279f22b8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:34:14 np0005603609 nova_compute[221550]: 2026-01-31 08:34:14.908 221554 DEBUG oslo_concurrency.lockutils [None req-9475d52a-ea40-4ea2-bcaa-6756279f22b8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:34:14 np0005603609 nova_compute[221550]: 2026-01-31 08:34:14.909 221554 DEBUG nova.network.neutron [None req-9475d52a-ea40-4ea2-bcaa-6756279f22b8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:34:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:15.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:15 np0005603609 nova_compute[221550]: 2026-01-31 08:34:15.236 221554 DEBUG nova.network.neutron [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:34:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:15.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:15 np0005603609 nova_compute[221550]: 2026-01-31 08:34:15.560 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:16 np0005603609 nova_compute[221550]: 2026-01-31 08:34:16.323 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:34:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:17.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:34:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:17.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.204 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.531 221554 DEBUG nova.network.neutron [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Updating instance_info_cache with network_info: [{"id": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "address": "fa:16:3e:65:ab:76", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97668e1e-b6", "ovs_interfaceid": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.742 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Releasing lock "refresh_cache-a3943c42-01b0-4548-99b8-1fbb3e72306d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.743 221554 DEBUG nova.compute.manager [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Instance network_info: |[{"id": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "address": "fa:16:3e:65:ab:76", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97668e1e-b6", "ovs_interfaceid": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.744 221554 DEBUG oslo_concurrency.lockutils [req-83edb083-b3a9-43b3-ba10-a2ce33f43c4b req-04214913-648f-42a3-a563-a731730e51ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-a3943c42-01b0-4548-99b8-1fbb3e72306d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.744 221554 DEBUG nova.network.neutron [req-83edb083-b3a9-43b3-ba10-a2ce33f43c4b req-04214913-648f-42a3-a563-a731730e51ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Refreshing network info cache for port 97668e1e-b6e0-47c7-b18b-e64ba12e23b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.748 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Start _get_guest_xml network_info=[{"id": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "address": "fa:16:3e:65:ab:76", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97668e1e-b6", "ovs_interfaceid": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.753 221554 WARNING nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.762 221554 DEBUG nova.virt.libvirt.host [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.765 221554 DEBUG nova.virt.libvirt.host [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.772 221554 DEBUG nova.virt.libvirt.host [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.773 221554 DEBUG nova.virt.libvirt.host [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.776 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.777 221554 DEBUG nova.virt.hardware [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.778 221554 DEBUG nova.virt.hardware [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.779 221554 DEBUG nova.virt.hardware [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.780 221554 DEBUG nova.virt.hardware [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.780 221554 DEBUG nova.virt.hardware [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.781 221554 DEBUG nova.virt.hardware [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.781 221554 DEBUG nova.virt.hardware [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.782 221554 DEBUG nova.virt.hardware [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.782 221554 DEBUG nova.virt.hardware [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.783 221554 DEBUG nova.virt.hardware [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.783 221554 DEBUG nova.virt.hardware [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:34:18 np0005603609 nova_compute[221550]: 2026-01-31 08:34:18.790 221554 DEBUG oslo_concurrency.processutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:19.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:19 np0005603609 podman[294489]: 2026-01-31 08:34:19.186160041 +0000 UTC m=+0.062568624 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:34:19 np0005603609 podman[294490]: 2026-01-31 08:34:19.202669842 +0000 UTC m=+0.072880964 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:34:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:34:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/148716107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.250 221554 DEBUG oslo_concurrency.processutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.274 221554 DEBUG nova.storage.rbd_utils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image a3943c42-01b0-4548-99b8-1fbb3e72306d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.277 221554 DEBUG oslo_concurrency.processutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:19.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:34:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3577396168' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.674 221554 DEBUG oslo_concurrency.processutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.675 221554 DEBUG nova.virt.libvirt.vif [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:34:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-332242089',display_name='tempest-TestServerMultinode-server-332242089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-332242089',id=177,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d69b70b0e4e340758cc43d45c1113d2f',ramdisk_id='',reservation_id='r-ylj1hqbj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-2117392928',owner_user_name='tempest-TestServerMultinode-211
7392928-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:34:10Z,user_data=None,user_id='f601ac628957410b995fa67e240e4871',uuid=a3943c42-01b0-4548-99b8-1fbb3e72306d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "address": "fa:16:3e:65:ab:76", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97668e1e-b6", "ovs_interfaceid": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.676 221554 DEBUG nova.network.os_vif_util [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converting VIF {"id": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "address": "fa:16:3e:65:ab:76", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97668e1e-b6", "ovs_interfaceid": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.676 221554 DEBUG nova.network.os_vif_util [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:ab:76,bridge_name='br-int',has_traffic_filtering=True,id=97668e1e-b6e0-47c7-b18b-e64ba12e23b2,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97668e1e-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.677 221554 DEBUG nova.objects.instance [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lazy-loading 'pci_devices' on Instance uuid a3943c42-01b0-4548-99b8-1fbb3e72306d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.720 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  <uuid>a3943c42-01b0-4548-99b8-1fbb3e72306d</uuid>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  <name>instance-000000b1</name>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestServerMultinode-server-332242089</nova:name>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:34:18</nova:creationTime>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        <nova:user uuid="f601ac628957410b995fa67e240e4871">tempest-TestServerMultinode-2117392928-project-admin</nova:user>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        <nova:project uuid="d69b70b0e4e340758cc43d45c1113d2f">tempest-TestServerMultinode-2117392928</nova:project>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        <nova:port uuid="97668e1e-b6e0-47c7-b18b-e64ba12e23b2">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <entry name="serial">a3943c42-01b0-4548-99b8-1fbb3e72306d</entry>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <entry name="uuid">a3943c42-01b0-4548-99b8-1fbb3e72306d</entry>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/a3943c42-01b0-4548-99b8-1fbb3e72306d_disk">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/a3943c42-01b0-4548-99b8-1fbb3e72306d_disk.config">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:65:ab:76"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <target dev="tap97668e1e-b6"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/a3943c42-01b0-4548-99b8-1fbb3e72306d/console.log" append="off"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:34:19 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:34:19 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:34:19 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:34:19 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.721 221554 DEBUG nova.compute.manager [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Preparing to wait for external event network-vif-plugged-97668e1e-b6e0-47c7-b18b-e64ba12e23b2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.721 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.721 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.722 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.722 221554 DEBUG nova.virt.libvirt.vif [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:34:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-332242089',display_name='tempest-TestServerMultinode-server-332242089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-332242089',id=177,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d69b70b0e4e340758cc43d45c1113d2f',ramdisk_id='',reservation_id='r-ylj1hqbj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-2117392928',owner_user_name='tempest-TestServerMultinode-2117392928-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:34:10Z,user_data=None,user_id='f601ac628957410b995fa67e240e4871',uuid=a3943c42-01b0-4548-99b8-1fbb3e72306d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "address": "fa:16:3e:65:ab:76", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97668e1e-b6", "ovs_interfaceid": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.722 221554 DEBUG nova.network.os_vif_util [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converting VIF {"id": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "address": "fa:16:3e:65:ab:76", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97668e1e-b6", "ovs_interfaceid": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.723 221554 DEBUG nova.network.os_vif_util [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:ab:76,bridge_name='br-int',has_traffic_filtering=True,id=97668e1e-b6e0-47c7-b18b-e64ba12e23b2,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97668e1e-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.723 221554 DEBUG os_vif [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:ab:76,bridge_name='br-int',has_traffic_filtering=True,id=97668e1e-b6e0-47c7-b18b-e64ba12e23b2,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97668e1e-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.724 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.724 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.724 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.727 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.727 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97668e1e-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.728 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97668e1e-b6, col_values=(('external_ids', {'iface-id': '97668e1e-b6e0-47c7-b18b-e64ba12e23b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:ab:76', 'vm-uuid': 'a3943c42-01b0-4548-99b8-1fbb3e72306d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:19 np0005603609 NetworkManager[49064]: <info>  [1769848459.7297] manager: (tap97668e1e-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/359)
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.731 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.736 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:19 np0005603609 nova_compute[221550]: 2026-01-31 08:34:19.737 221554 INFO os_vif [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:ab:76,bridge_name='br-int',has_traffic_filtering=True,id=97668e1e-b6e0-47c7-b18b-e64ba12e23b2,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97668e1e-b6')#033[00m
Jan 31 03:34:20 np0005603609 nova_compute[221550]: 2026-01-31 08:34:20.374 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:34:20 np0005603609 nova_compute[221550]: 2026-01-31 08:34:20.374 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:34:20 np0005603609 nova_compute[221550]: 2026-01-31 08:34:20.374 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] No VIF found with MAC fa:16:3e:65:ab:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:34:20 np0005603609 nova_compute[221550]: 2026-01-31 08:34:20.375 221554 INFO nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Using config drive#033[00m
Jan 31 03:34:20 np0005603609 nova_compute[221550]: 2026-01-31 08:34:20.395 221554 DEBUG nova.storage.rbd_utils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image a3943c42-01b0-4548-99b8-1fbb3e72306d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:34:21 np0005603609 nova_compute[221550]: 2026-01-31 08:34:21.001 221554 DEBUG nova.network.neutron [None req-9475d52a-ea40-4ea2-bcaa-6756279f22b8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Updating instance_info_cache with network_info: [{"id": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "address": "fa:16:3e:3a:bd:05", "network": {"id": "81287ae0-230a-4745-9d4d-f12a02a04d02", "bridge": "br-int", "label": "tempest-network-smoke--608182900", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcbad7c1-62", "ovs_interfaceid": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:34:21 np0005603609 nova_compute[221550]: 2026-01-31 08:34:21.102 221554 DEBUG oslo_concurrency.lockutils [None req-9475d52a-ea40-4ea2-bcaa-6756279f22b8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:34:21 np0005603609 nova_compute[221550]: 2026-01-31 08:34:21.103 221554 DEBUG nova.compute.manager [None req-9475d52a-ea40-4ea2-bcaa-6756279f22b8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:21.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:21.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:21 np0005603609 nova_compute[221550]: 2026-01-31 08:34:21.325 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:21 np0005603609 nova_compute[221550]: 2026-01-31 08:34:21.627 221554 INFO nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Creating config drive at /var/lib/nova/instances/a3943c42-01b0-4548-99b8-1fbb3e72306d/disk.config#033[00m
Jan 31 03:34:21 np0005603609 nova_compute[221550]: 2026-01-31 08:34:21.632 221554 DEBUG oslo_concurrency.processutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a3943c42-01b0-4548-99b8-1fbb3e72306d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmppged907s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:21 np0005603609 nova_compute[221550]: 2026-01-31 08:34:21.761 221554 DEBUG oslo_concurrency.processutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a3943c42-01b0-4548-99b8-1fbb3e72306d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmppged907s" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:34:21 np0005603609 nova_compute[221550]: 2026-01-31 08:34:21.797 221554 DEBUG nova.storage.rbd_utils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image a3943c42-01b0-4548-99b8-1fbb3e72306d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:34:21 np0005603609 nova_compute[221550]: 2026-01-31 08:34:21.800 221554 DEBUG oslo_concurrency.processutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a3943c42-01b0-4548-99b8-1fbb3e72306d/disk.config a3943c42-01b0-4548-99b8-1fbb3e72306d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:34:21 np0005603609 nova_compute[221550]: 2026-01-31 08:34:21.985 221554 DEBUG oslo_concurrency.processutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a3943c42-01b0-4548-99b8-1fbb3e72306d/disk.config a3943c42-01b0-4548-99b8-1fbb3e72306d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:34:21 np0005603609 nova_compute[221550]: 2026-01-31 08:34:21.986 221554 INFO nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Deleting local config drive /var/lib/nova/instances/a3943c42-01b0-4548-99b8-1fbb3e72306d/disk.config because it was imported into RBD.
Jan 31 03:34:22 np0005603609 kernel: tap97668e1e-b6: entered promiscuous mode
Jan 31 03:34:22 np0005603609 NetworkManager[49064]: <info>  [1769848462.0480] manager: (tap97668e1e-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/360)
Jan 31 03:34:22 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:22Z|00774|binding|INFO|Claiming lport 97668e1e-b6e0-47c7-b18b-e64ba12e23b2 for this chassis.
Jan 31 03:34:22 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:22Z|00775|binding|INFO|97668e1e-b6e0-47c7-b18b-e64ba12e23b2: Claiming fa:16:3e:65:ab:76 10.100.0.3
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.049 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:22 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:22Z|00776|binding|INFO|Setting lport 97668e1e-b6e0-47c7-b18b-e64ba12e23b2 ovn-installed in OVS
Jan 31 03:34:22 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:22Z|00777|binding|INFO|Setting lport 97668e1e-b6e0-47c7-b18b-e64ba12e23b2 up in Southbound
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.063 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.065 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:ab:76 10.100.0.3'], port_security=['fa:16:3e:65:ab:76 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'a3943c42-01b0-4548-99b8-1fbb3e72306d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd69b70b0e4e340758cc43d45c1113d2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7793481-993b-4688-8da1-03fa2a12b295', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f6c418e-e1df-4f83-9060-d1845d483973, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=97668e1e-b6e0-47c7-b18b-e64ba12e23b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.067 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 97668e1e-b6e0-47c7-b18b-e64ba12e23b2 in datapath 786b4c20-d3c9-4eba-b2c7-0e7b9805b52d bound to our chassis
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.067 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.069 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 786b4c20-d3c9-4eba-b2c7-0e7b9805b52d
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.077 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7adb9759-f380-45ee-86b3-a36953614f45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.078 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap786b4c20-d1 in ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.080 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap786b4c20-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.080 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1867737b-8977-4c2e-a000-2c8136323a5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.081 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0668b8c5-df6c-499e-8d8b-5841580b10e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:34:22 np0005603609 systemd-udevd[294647]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:34:22 np0005603609 systemd-machined[190912]: New machine qemu-95-instance-000000b1.
Jan 31 03:34:22 np0005603609 NetworkManager[49064]: <info>  [1769848462.0935] device (tap97668e1e-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:34:22 np0005603609 NetworkManager[49064]: <info>  [1769848462.0949] device (tap97668e1e-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.094 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[1f9e8d4a-32cc-4c28-81bd-3b63e69d9226]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:34:22 np0005603609 systemd[1]: Started Virtual Machine qemu-95-instance-000000b1.
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.118 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[83869517-4291-4379-82da-13aa1aa2d0a7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.146 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[376e3ed4-bb36-457c-8b96-577cf02c80b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:34:22 np0005603609 NetworkManager[49064]: <info>  [1769848462.1526] manager: (tap786b4c20-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/361)
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.151 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7e12cff8-b948-47b4-a13f-26637d347c6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.182 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[431bdbd3-a6d8-4999-b3d6-5a17afe21340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.184 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[6e27e606-8bfe-4477-bf16-c398938f4784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:34:22 np0005603609 NetworkManager[49064]: <info>  [1769848462.1981] device (tap786b4c20-d0): carrier: link connected
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.202 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a1af9d-c383-47b7-bb18-821dadd6b637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.221 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3309007f-dc34-436e-ae6e-898a85764d99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap786b4c20-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:d1:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870189, 'reachable_time': 28584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294680, 'error': None, 'target': 'ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.234 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3de81480-9052-4b62-a4b9-a1074ff4a8be]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:d113'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870189, 'tstamp': 870189}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294681, 'error': None, 'target': 'ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.251 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[088dd0de-4ca6-49d0-990f-ed09bea499e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap786b4c20-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:d1:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870189, 'reachable_time': 28584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294682, 'error': None, 'target': 'ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.274 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a8848bbf-f8f8-4ae1-b27f-d63f17a58019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.336 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8e3dcdbf-c453-4058-a1de-fd19d856aeda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.337 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap786b4c20-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.337 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.338 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap786b4c20-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:34:22 np0005603609 NetworkManager[49064]: <info>  [1769848462.3401] manager: (tap786b4c20-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Jan 31 03:34:22 np0005603609 kernel: tap786b4c20-d0: entered promiscuous mode
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.339 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.342 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap786b4c20-d0, col_values=(('external_ids', {'iface-id': '7b0df60c-a238-46c0-acd9-976f981537f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:34:22 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:22Z|00778|binding|INFO|Releasing lport 7b0df60c-a238-46c0-acd9-976f981537f3 from this chassis (sb_readonly=0)
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.343 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.344 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/786b4c20-d3c9-4eba-b2c7-0e7b9805b52d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/786b4c20-d3c9-4eba-b2c7-0e7b9805b52d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.345 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4c52d557-5d59-45ae-830b-d9cc65becd3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.345 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/786b4c20-d3c9-4eba-b2c7-0e7b9805b52d.pid.haproxy
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 786b4c20-d3c9-4eba-b2c7-0e7b9805b52d
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 03:34:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:22.346 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'env', 'PROCESS_TAG=haproxy-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/786b4c20-d3c9-4eba-b2c7-0e7b9805b52d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.349 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.454 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848462.4540246, a3943c42-01b0-4548-99b8-1fbb3e72306d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.455 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] VM Started (Lifecycle Event)
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.515 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.520 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848462.4553723, a3943c42-01b0-4548-99b8-1fbb3e72306d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.520 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] VM Paused (Lifecycle Event)
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.561 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.565 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.648 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:34:22 np0005603609 podman[294756]: 2026-01-31 08:34:22.680812097 +0000 UTC m=+0.047431405 container create 0fc1e1b5b83d23453d35a551f5918de92943ebb96b5560ede7b8379b4a2d9f21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.691 221554 DEBUG nova.compute.manager [req-0f706c02-3773-4e77-8f71-8dcde76930cf req-895fbd64-b2b4-4105-9272-8005eecd6f57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Received event network-vif-plugged-97668e1e-b6e0-47c7-b18b-e64ba12e23b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.692 221554 DEBUG oslo_concurrency.lockutils [req-0f706c02-3773-4e77-8f71-8dcde76930cf req-895fbd64-b2b4-4105-9272-8005eecd6f57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.693 221554 DEBUG oslo_concurrency.lockutils [req-0f706c02-3773-4e77-8f71-8dcde76930cf req-895fbd64-b2b4-4105-9272-8005eecd6f57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.693 221554 DEBUG oslo_concurrency.lockutils [req-0f706c02-3773-4e77-8f71-8dcde76930cf req-895fbd64-b2b4-4105-9272-8005eecd6f57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.693 221554 DEBUG nova.compute.manager [req-0f706c02-3773-4e77-8f71-8dcde76930cf req-895fbd64-b2b4-4105-9272-8005eecd6f57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Processing event network-vif-plugged-97668e1e-b6e0-47c7-b18b-e64ba12e23b2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.694 221554 DEBUG nova.compute.manager [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.698 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848462.6977541, a3943c42-01b0-4548-99b8-1fbb3e72306d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.699 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] VM Resumed (Lifecycle Event)
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.701 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.704 221554 INFO nova.virt.libvirt.driver [-] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Instance spawned successfully.
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.704 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:34:22 np0005603609 systemd[1]: Started libpod-conmon-0fc1e1b5b83d23453d35a551f5918de92943ebb96b5560ede7b8379b4a2d9f21.scope.
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.747 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:22 np0005603609 podman[294756]: 2026-01-31 08:34:22.655411129 +0000 UTC m=+0.022030427 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.753 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:34:22 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.757 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.759 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.760 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.761 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.761 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.762 221554 DEBUG nova.virt.libvirt.driver [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:22 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a51e12a9c610ee078faee9831b5b0ee255c08e2d8eb7673072ed2a3d98923b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:22 np0005603609 podman[294756]: 2026-01-31 08:34:22.774866176 +0000 UTC m=+0.141485534 container init 0fc1e1b5b83d23453d35a551f5918de92943ebb96b5560ede7b8379b4a2d9f21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:34:22 np0005603609 podman[294756]: 2026-01-31 08:34:22.780568885 +0000 UTC m=+0.147188183 container start 0fc1e1b5b83d23453d35a551f5918de92943ebb96b5560ede7b8379b4a2d9f21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:34:22 np0005603609 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[294771]: [NOTICE]   (294775) : New worker (294777) forked
Jan 31 03:34:22 np0005603609 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[294771]: [NOTICE]   (294775) : Loading success.
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.805 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.939 221554 INFO nova.compute.manager [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Took 11.92 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:34:22 np0005603609 nova_compute[221550]: 2026-01-31 08:34:22.940 221554 DEBUG nova.compute.manager [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:23.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:23 np0005603609 nova_compute[221550]: 2026-01-31 08:34:23.174 221554 INFO nova.compute.manager [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Took 17.41 seconds to build instance.#033[00m
Jan 31 03:34:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:23.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:23 np0005603609 nova_compute[221550]: 2026-01-31 08:34:23.342 221554 DEBUG oslo_concurrency.lockutils [None req-df88704c-54c2-4561-8b0d-145582eec8e4 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "a3943c42-01b0-4548-99b8-1fbb3e72306d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:23 np0005603609 nova_compute[221550]: 2026-01-31 08:34:23.352 221554 DEBUG nova.network.neutron [req-83edb083-b3a9-43b3-ba10-a2ce33f43c4b req-04214913-648f-42a3-a563-a731730e51ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Updated VIF entry in instance network info cache for port 97668e1e-b6e0-47c7-b18b-e64ba12e23b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:34:23 np0005603609 nova_compute[221550]: 2026-01-31 08:34:23.352 221554 DEBUG nova.network.neutron [req-83edb083-b3a9-43b3-ba10-a2ce33f43c4b req-04214913-648f-42a3-a563-a731730e51ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Updating instance_info_cache with network_info: [{"id": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "address": "fa:16:3e:65:ab:76", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97668e1e-b6", "ovs_interfaceid": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:34:23 np0005603609 kernel: tapfcbad7c1-62 (unregistering): left promiscuous mode
Jan 31 03:34:23 np0005603609 NetworkManager[49064]: <info>  [1769848463.5546] device (tapfcbad7c1-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:34:23 np0005603609 nova_compute[221550]: 2026-01-31 08:34:23.561 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:23Z|00779|binding|INFO|Releasing lport fcbad7c1-624e-4776-8e78-ae63868c5ec4 from this chassis (sb_readonly=0)
Jan 31 03:34:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:23Z|00780|binding|INFO|Setting lport fcbad7c1-624e-4776-8e78-ae63868c5ec4 down in Southbound
Jan 31 03:34:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:23Z|00781|binding|INFO|Removing iface tapfcbad7c1-62 ovn-installed in OVS
Jan 31 03:34:23 np0005603609 nova_compute[221550]: 2026-01-31 08:34:23.565 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:23 np0005603609 nova_compute[221550]: 2026-01-31 08:34:23.567 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:23 np0005603609 nova_compute[221550]: 2026-01-31 08:34:23.608 221554 DEBUG oslo_concurrency.lockutils [req-83edb083-b3a9-43b3-ba10-a2ce33f43c4b req-04214913-648f-42a3-a563-a731730e51ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-a3943c42-01b0-4548-99b8-1fbb3e72306d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:34:23 np0005603609 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000ae.scope: Deactivated successfully.
Jan 31 03:34:23 np0005603609 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000ae.scope: Consumed 13.959s CPU time.
Jan 31 03:34:23 np0005603609 systemd-machined[190912]: Machine qemu-94-instance-000000ae terminated.
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.044 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:bd:05 10.100.0.9'], port_security=['fa:16:3e:3a:bd:05 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81287ae0-230a-4745-9d4d-f12a02a04d02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '06c0a46d-73c1-48f8-970d-bf276115204e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef683f19-a225-4b3e-868c-bb553f5f4aa2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=fcbad7c1-624e-4776-8e78-ae63868c5ec4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.047 140058 INFO neutron.agent.ovn.metadata.agent [-] Port fcbad7c1-624e-4776-8e78-ae63868c5ec4 in datapath 81287ae0-230a-4745-9d4d-f12a02a04d02 unbound from our chassis#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.049 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81287ae0-230a-4745-9d4d-f12a02a04d02, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.050 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[10b1379e-d160-4c44-b399-ccd92fe9c714]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.051 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02 namespace which is not needed anymore#033[00m
Jan 31 03:34:24 np0005603609 neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02[294220]: [NOTICE]   (294224) : haproxy version is 2.8.14-c23fe91
Jan 31 03:34:24 np0005603609 neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02[294220]: [NOTICE]   (294224) : path to executable is /usr/sbin/haproxy
Jan 31 03:34:24 np0005603609 neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02[294220]: [WARNING]  (294224) : Exiting Master process...
Jan 31 03:34:24 np0005603609 neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02[294220]: [WARNING]  (294224) : Exiting Master process...
Jan 31 03:34:24 np0005603609 neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02[294220]: [ALERT]    (294224) : Current worker (294226) exited with code 143 (Terminated)
Jan 31 03:34:24 np0005603609 neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02[294220]: [WARNING]  (294224) : All workers exited. Exiting... (0)
Jan 31 03:34:24 np0005603609 systemd[1]: libpod-861659c061d4da8bd37eb2a113d7d11e62eb6339079905cd3487bb6d11946efa.scope: Deactivated successfully.
Jan 31 03:34:24 np0005603609 podman[294818]: 2026-01-31 08:34:24.14827355 +0000 UTC m=+0.038923089 container died 861659c061d4da8bd37eb2a113d7d11e62eb6339079905cd3487bb6d11946efa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 03:34:24 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-861659c061d4da8bd37eb2a113d7d11e62eb6339079905cd3487bb6d11946efa-userdata-shm.mount: Deactivated successfully.
Jan 31 03:34:24 np0005603609 systemd[1]: var-lib-containers-storage-overlay-11d684656562158cd66a6d0293d22137b9cadf3231bdae24d4afb63f4c4f6abc-merged.mount: Deactivated successfully.
Jan 31 03:34:24 np0005603609 podman[294818]: 2026-01-31 08:34:24.185047074 +0000 UTC m=+0.075696613 container cleanup 861659c061d4da8bd37eb2a113d7d11e62eb6339079905cd3487bb6d11946efa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:34:24 np0005603609 systemd[1]: libpod-conmon-861659c061d4da8bd37eb2a113d7d11e62eb6339079905cd3487bb6d11946efa.scope: Deactivated successfully.
Jan 31 03:34:24 np0005603609 podman[294848]: 2026-01-31 08:34:24.240804481 +0000 UTC m=+0.039169784 container remove 861659c061d4da8bd37eb2a113d7d11e62eb6339079905cd3487bb6d11946efa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.246 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[892d7938-d023-489c-a109-4f8ac6a0ee7b]: (4, ('Sat Jan 31 08:34:24 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02 (861659c061d4da8bd37eb2a113d7d11e62eb6339079905cd3487bb6d11946efa)\n861659c061d4da8bd37eb2a113d7d11e62eb6339079905cd3487bb6d11946efa\nSat Jan 31 08:34:24 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02 (861659c061d4da8bd37eb2a113d7d11e62eb6339079905cd3487bb6d11946efa)\n861659c061d4da8bd37eb2a113d7d11e62eb6339079905cd3487bb6d11946efa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.248 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9d64a23b-7570-4688-a055-94b1e03006a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.249 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81287ae0-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.252 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:24 np0005603609 kernel: tap81287ae0-20: left promiscuous mode
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.260 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.264 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[513e5568-db74-4667-bb18-749219fe5a47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.286 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ac2bc8-4e8a-46c5-8f23-bb66ee8c9e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.288 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[db56b8e1-bc86-432e-a42f-75c69dc79063]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.304 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[157205d8-b685-4996-895b-24f738032856]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 867249, 'reachable_time': 22389, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294867, 'error': None, 'target': 'ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 systemd[1]: run-netns-ovnmeta\x2d81287ae0\x2d230a\x2d4745\x2d9d4d\x2df12a02a04d02.mount: Deactivated successfully.
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.306 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.306 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[4499cc47-7e8c-4ff5-bd0f-f8d95ee63cfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.324 221554 INFO nova.virt.libvirt.driver [None req-9475d52a-ea40-4ea2-bcaa-6756279f22b8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Instance shutdown successfully.#033[00m
Jan 31 03:34:24 np0005603609 kernel: tapfcbad7c1-62: entered promiscuous mode
Jan 31 03:34:24 np0005603609 systemd-udevd[294668]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:34:24 np0005603609 NetworkManager[49064]: <info>  [1769848464.3908] manager: (tapfcbad7c1-62): new Tun device (/org/freedesktop/NetworkManager/Devices/363)
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.391 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:24 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:24Z|00782|binding|INFO|Claiming lport fcbad7c1-624e-4776-8e78-ae63868c5ec4 for this chassis.
Jan 31 03:34:24 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:24Z|00783|binding|INFO|fcbad7c1-624e-4776-8e78-ae63868c5ec4: Claiming fa:16:3e:3a:bd:05 10.100.0.9
Jan 31 03:34:24 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:24Z|00784|binding|INFO|Setting lport fcbad7c1-624e-4776-8e78-ae63868c5ec4 ovn-installed in OVS
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.400 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:24 np0005603609 NetworkManager[49064]: <info>  [1769848464.4048] device (tapfcbad7c1-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:34:24 np0005603609 NetworkManager[49064]: <info>  [1769848464.4057] device (tapfcbad7c1-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:34:24 np0005603609 systemd-machined[190912]: New machine qemu-96-instance-000000ae.
Jan 31 03:34:24 np0005603609 systemd[1]: Started Virtual Machine qemu-96-instance-000000ae.
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.729 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:24 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:24Z|00785|binding|INFO|Setting lport fcbad7c1-624e-4776-8e78-ae63868c5ec4 up in Southbound
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.796 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:bd:05 10.100.0.9'], port_security=['fa:16:3e:3a:bd:05 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81287ae0-230a-4745-9d4d-f12a02a04d02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '06c0a46d-73c1-48f8-970d-bf276115204e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef683f19-a225-4b3e-868c-bb553f5f4aa2, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=fcbad7c1-624e-4776-8e78-ae63868c5ec4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.798 140058 INFO neutron.agent.ovn.metadata.agent [-] Port fcbad7c1-624e-4776-8e78-ae63868c5ec4 in datapath 81287ae0-230a-4745-9d4d-f12a02a04d02 bound to our chassis#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.799 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81287ae0-230a-4745-9d4d-f12a02a04d02#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.810 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a544a3c9-3484-459d-9131-3c547abe66b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.811 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81287ae0-21 in ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.813 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81287ae0-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.813 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[83784a83-6053-4e19-896b-3b48f0eaa3d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.814 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[27a01a89-c94e-4e7f-9689-0dacf00345c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.822 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[561d3615-0e1b-4834-98c4-02eda9852381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.843 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5852027d-0167-4d97-99a1-fb816b9285fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.864 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[417ff6ec-d712-482e-b86c-00f394a119d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 NetworkManager[49064]: <info>  [1769848464.8703] manager: (tap81287ae0-20): new Veth device (/org/freedesktop/NetworkManager/Devices/364)
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.871 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5c9dcf56-6fd6-4112-951f-d593f7e26a71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.890 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[16e7d45b-bc4c-4360-80be-7c7d77d7c97a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.893 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[2784ba69-09e7-4645-b978-39a1c8d89dc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 NetworkManager[49064]: <info>  [1769848464.9120] device (tap81287ae0-20): carrier: link connected
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.914 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[20a3fe0e-7a4c-4f04-b772-e23b7e7f5280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.928 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b11b73ae-6f97-4cf9-8652-64428b2f006a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81287ae0-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:97:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870461, 'reachable_time': 36671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294939, 'error': None, 'target': 'ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.936 221554 DEBUG nova.compute.manager [req-f941ce8e-5977-4fdc-86b7-64d445ffcb64 req-af4abc1a-9b39-4bd1-9407-72d70a39a948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Received event network-vif-plugged-97668e1e-b6e0-47c7-b18b-e64ba12e23b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.936 221554 DEBUG oslo_concurrency.lockutils [req-f941ce8e-5977-4fdc-86b7-64d445ffcb64 req-af4abc1a-9b39-4bd1-9407-72d70a39a948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.937 221554 DEBUG oslo_concurrency.lockutils [req-f941ce8e-5977-4fdc-86b7-64d445ffcb64 req-af4abc1a-9b39-4bd1-9407-72d70a39a948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.937 221554 DEBUG oslo_concurrency.lockutils [req-f941ce8e-5977-4fdc-86b7-64d445ffcb64 req-af4abc1a-9b39-4bd1-9407-72d70a39a948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.937 221554 DEBUG nova.compute.manager [req-f941ce8e-5977-4fdc-86b7-64d445ffcb64 req-af4abc1a-9b39-4bd1-9407-72d70a39a948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] No waiting events found dispatching network-vif-plugged-97668e1e-b6e0-47c7-b18b-e64ba12e23b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.937 221554 WARNING nova.compute.manager [req-f941ce8e-5977-4fdc-86b7-64d445ffcb64 req-af4abc1a-9b39-4bd1-9407-72d70a39a948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Received unexpected event network-vif-plugged-97668e1e-b6e0-47c7-b18b-e64ba12e23b2 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.940 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0921db-efb8-497a-8ba3-3a2df4e3efc6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe45:9728'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870461, 'tstamp': 870461}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294940, 'error': None, 'target': 'ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.954 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[137c7e24-61c9-4abd-b332-39ec6df19f69]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81287ae0-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:97:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870461, 'reachable_time': 36671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294942, 'error': None, 'target': 'ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.971 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.971 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848464.9709907, 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.971 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.978 221554 INFO nova.virt.libvirt.driver [-] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Instance running successfully.#033[00m
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.978 221554 INFO nova.virt.libvirt.driver [None req-9475d52a-ea40-4ea2-bcaa-6756279f22b8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Instance soft rebooted successfully.#033[00m
Jan 31 03:34:24 np0005603609 nova_compute[221550]: 2026-01-31 08:34:24.979 221554 DEBUG nova.compute.manager [None req-9475d52a-ea40-4ea2-bcaa-6756279f22b8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:24.983 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[db66dbe9-e041-4497-adcc-a58e4625c12b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:25.030 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[de3790cb-22a5-4263-aad2-752d35765de2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:25.031 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81287ae0-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:25.032 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:25.032 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81287ae0-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:25 np0005603609 nova_compute[221550]: 2026-01-31 08:34:25.034 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:25 np0005603609 kernel: tap81287ae0-20: entered promiscuous mode
Jan 31 03:34:25 np0005603609 NetworkManager[49064]: <info>  [1769848465.0348] manager: (tap81287ae0-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:25.037 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81287ae0-20, col_values=(('external_ids', {'iface-id': '2b2712a7-0919-431b-ae5b-c38d96bf7480'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:25 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:25Z|00786|binding|INFO|Releasing lport 2b2712a7-0919-431b-ae5b-c38d96bf7480 from this chassis (sb_readonly=0)
Jan 31 03:34:25 np0005603609 nova_compute[221550]: 2026-01-31 08:34:25.039 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:25 np0005603609 nova_compute[221550]: 2026-01-31 08:34:25.049 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:25.050 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81287ae0-230a-4745-9d4d-f12a02a04d02.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81287ae0-230a-4745-9d4d-f12a02a04d02.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:34:25 np0005603609 nova_compute[221550]: 2026-01-31 08:34:25.051 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:25.051 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[03f1f3ca-a4c9-4bb1-9426-5fe0312c69b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:25.052 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-81287ae0-230a-4745-9d4d-f12a02a04d02
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/81287ae0-230a-4745-9d4d-f12a02a04d02.pid.haproxy
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 81287ae0-230a-4745-9d4d-f12a02a04d02
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:34:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:25.052 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02', 'env', 'PROCESS_TAG=haproxy-81287ae0-230a-4745-9d4d-f12a02a04d02', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81287ae0-230a-4745-9d4d-f12a02a04d02.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:34:25 np0005603609 nova_compute[221550]: 2026-01-31 08:34:25.055 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:34:25 np0005603609 nova_compute[221550]: 2026-01-31 08:34:25.140 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] During sync_power_state the instance has a pending task (reboot_started). Skip.#033[00m
Jan 31 03:34:25 np0005603609 nova_compute[221550]: 2026-01-31 08:34:25.141 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848464.9710965, 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:34:25 np0005603609 nova_compute[221550]: 2026-01-31 08:34:25.141 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] VM Started (Lifecycle Event)#033[00m
Jan 31 03:34:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:25.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:25 np0005603609 nova_compute[221550]: 2026-01-31 08:34:25.164 221554 DEBUG oslo_concurrency.lockutils [None req-9475d52a-ea40-4ea2-bcaa-6756279f22b8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 10.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:25 np0005603609 nova_compute[221550]: 2026-01-31 08:34:25.180 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:25 np0005603609 nova_compute[221550]: 2026-01-31 08:34:25.184 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:34:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:25.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:25 np0005603609 podman[294973]: 2026-01-31 08:34:25.368038144 +0000 UTC m=+0.040108268 container create 29ba62c76f5587a7bf5dff06fad4a4ca5669b7d316aa939d9fdbfa774fdc2bae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 03:34:25 np0005603609 systemd[1]: Started libpod-conmon-29ba62c76f5587a7bf5dff06fad4a4ca5669b7d316aa939d9fdbfa774fdc2bae.scope.
Jan 31 03:34:25 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:34:25 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba9bd031c98c0f2c6f8cceb63b0bb91a3fb67fc43e191e957292dfd2c93b6d1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:25 np0005603609 podman[294973]: 2026-01-31 08:34:25.348437016 +0000 UTC m=+0.020507170 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:34:25 np0005603609 podman[294973]: 2026-01-31 08:34:25.464270755 +0000 UTC m=+0.136340899 container init 29ba62c76f5587a7bf5dff06fad4a4ca5669b7d316aa939d9fdbfa774fdc2bae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 03:34:25 np0005603609 podman[294973]: 2026-01-31 08:34:25.471580153 +0000 UTC m=+0.143650267 container start 29ba62c76f5587a7bf5dff06fad4a4ca5669b7d316aa939d9fdbfa774fdc2bae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:34:25 np0005603609 neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02[294986]: [NOTICE]   (294990) : New worker (294992) forked
Jan 31 03:34:25 np0005603609 neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02[294986]: [NOTICE]   (294990) : Loading success.
Jan 31 03:34:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:26 np0005603609 nova_compute[221550]: 2026-01-31 08:34:26.327 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:27.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.289 221554 DEBUG oslo_concurrency.lockutils [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "a3943c42-01b0-4548-99b8-1fbb3e72306d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.290 221554 DEBUG oslo_concurrency.lockutils [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "a3943c42-01b0-4548-99b8-1fbb3e72306d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.291 221554 DEBUG oslo_concurrency.lockutils [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.292 221554 DEBUG oslo_concurrency.lockutils [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.292 221554 DEBUG oslo_concurrency.lockutils [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.294 221554 INFO nova.compute.manager [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Terminating instance#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.296 221554 DEBUG nova.compute.manager [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:34:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:34:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:27.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:34:27 np0005603609 kernel: tap97668e1e-b6 (unregistering): left promiscuous mode
Jan 31 03:34:27 np0005603609 NetworkManager[49064]: <info>  [1769848467.3454] device (tap97668e1e-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:34:27 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:27Z|00787|binding|INFO|Releasing lport 97668e1e-b6e0-47c7-b18b-e64ba12e23b2 from this chassis (sb_readonly=0)
Jan 31 03:34:27 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:27Z|00788|binding|INFO|Setting lport 97668e1e-b6e0-47c7-b18b-e64ba12e23b2 down in Southbound
Jan 31 03:34:27 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:27Z|00789|binding|INFO|Removing iface tap97668e1e-b6 ovn-installed in OVS
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.370 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.372 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.381 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:27.384 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:ab:76 10.100.0.3'], port_security=['fa:16:3e:65:ab:76 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'a3943c42-01b0-4548-99b8-1fbb3e72306d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd69b70b0e4e340758cc43d45c1113d2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7793481-993b-4688-8da1-03fa2a12b295', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f6c418e-e1df-4f83-9060-d1845d483973, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=97668e1e-b6e0-47c7-b18b-e64ba12e23b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:34:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:27.385 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 97668e1e-b6e0-47c7-b18b-e64ba12e23b2 in datapath 786b4c20-d3c9-4eba-b2c7-0e7b9805b52d unbound from our chassis#033[00m
Jan 31 03:34:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:27.387 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:34:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:27.389 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[298fe4e1-7405-4c1f-a974-f559dedf0b95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:27.390 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d namespace which is not needed anymore#033[00m
Jan 31 03:34:27 np0005603609 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000b1.scope: Deactivated successfully.
Jan 31 03:34:27 np0005603609 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000b1.scope: Consumed 5.025s CPU time.
Jan 31 03:34:27 np0005603609 systemd-machined[190912]: Machine qemu-95-instance-000000b1 terminated.
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.411 221554 DEBUG nova.compute.manager [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received event network-vif-unplugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.411 221554 DEBUG oslo_concurrency.lockutils [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.412 221554 DEBUG oslo_concurrency.lockutils [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.412 221554 DEBUG oslo_concurrency.lockutils [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.412 221554 DEBUG nova.compute.manager [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] No waiting events found dispatching network-vif-unplugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.412 221554 WARNING nova.compute.manager [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received unexpected event network-vif-unplugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.412 221554 DEBUG nova.compute.manager [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received event network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.412 221554 DEBUG oslo_concurrency.lockutils [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.412 221554 DEBUG oslo_concurrency.lockutils [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.413 221554 DEBUG oslo_concurrency.lockutils [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.413 221554 DEBUG nova.compute.manager [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] No waiting events found dispatching network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.413 221554 WARNING nova.compute.manager [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received unexpected event network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.413 221554 DEBUG nova.compute.manager [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received event network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.413 221554 DEBUG oslo_concurrency.lockutils [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.413 221554 DEBUG oslo_concurrency.lockutils [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.414 221554 DEBUG oslo_concurrency.lockutils [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.414 221554 DEBUG nova.compute.manager [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] No waiting events found dispatching network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.414 221554 WARNING nova.compute.manager [req-b81da8f4-2da3-4d81-baa9-99169341ee3a req-3ad609a8-e19a-4b0b-8ef9-3db188766853 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received unexpected event network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:34:27 np0005603609 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[294771]: [NOTICE]   (294775) : haproxy version is 2.8.14-c23fe91
Jan 31 03:34:27 np0005603609 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[294771]: [NOTICE]   (294775) : path to executable is /usr/sbin/haproxy
Jan 31 03:34:27 np0005603609 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[294771]: [ALERT]    (294775) : Current worker (294777) exited with code 143 (Terminated)
Jan 31 03:34:27 np0005603609 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[294771]: [WARNING]  (294775) : All workers exited. Exiting... (0)
Jan 31 03:34:27 np0005603609 systemd[1]: libpod-0fc1e1b5b83d23453d35a551f5918de92943ebb96b5560ede7b8379b4a2d9f21.scope: Deactivated successfully.
Jan 31 03:34:27 np0005603609 podman[295023]: 2026-01-31 08:34:27.494023391 +0000 UTC m=+0.039059482 container died 0fc1e1b5b83d23453d35a551f5918de92943ebb96b5560ede7b8379b4a2d9f21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.513 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:27 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0fc1e1b5b83d23453d35a551f5918de92943ebb96b5560ede7b8379b4a2d9f21-userdata-shm.mount: Deactivated successfully.
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.519 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:27 np0005603609 systemd[1]: var-lib-containers-storage-overlay-b3a51e12a9c610ee078faee9831b5b0ee255c08e2d8eb7673072ed2a3d98923b-merged.mount: Deactivated successfully.
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.529 221554 INFO nova.virt.libvirt.driver [-] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Instance destroyed successfully.#033[00m
Jan 31 03:34:27 np0005603609 podman[295023]: 2026-01-31 08:34:27.530224132 +0000 UTC m=+0.075260233 container cleanup 0fc1e1b5b83d23453d35a551f5918de92943ebb96b5560ede7b8379b4a2d9f21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.530 221554 DEBUG nova.objects.instance [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lazy-loading 'resources' on Instance uuid a3943c42-01b0-4548-99b8-1fbb3e72306d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:34:27 np0005603609 systemd[1]: libpod-conmon-0fc1e1b5b83d23453d35a551f5918de92943ebb96b5560ede7b8379b4a2d9f21.scope: Deactivated successfully.
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.574 221554 DEBUG nova.virt.libvirt.vif [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:34:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-332242089',display_name='tempest-TestServerMultinode-server-332242089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-332242089',id=177,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:34:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d69b70b0e4e340758cc43d45c1113d2f',ramdisk_id='',reservation_id='r-ylj1hqbj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-2117392928',owner_user_name='tempest-TestServerMultinode-2117392928-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:34:23Z,user_data=None,user_id='f601ac628957410b995fa67e240e4871',uuid=a3943c42-01b0-4548-99b8-1fbb3e72306d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "address": "fa:16:3e:65:ab:76", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97668e1e-b6", "ovs_interfaceid": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.574 221554 DEBUG nova.network.os_vif_util [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converting VIF {"id": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "address": "fa:16:3e:65:ab:76", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97668e1e-b6", "ovs_interfaceid": "97668e1e-b6e0-47c7-b18b-e64ba12e23b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.575 221554 DEBUG nova.network.os_vif_util [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:ab:76,bridge_name='br-int',has_traffic_filtering=True,id=97668e1e-b6e0-47c7-b18b-e64ba12e23b2,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97668e1e-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.575 221554 DEBUG os_vif [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:ab:76,bridge_name='br-int',has_traffic_filtering=True,id=97668e1e-b6e0-47c7-b18b-e64ba12e23b2,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97668e1e-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.578 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.578 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97668e1e-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.580 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.583 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.585 221554 INFO os_vif [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:ab:76,bridge_name='br-int',has_traffic_filtering=True,id=97668e1e-b6e0-47c7-b18b-e64ba12e23b2,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97668e1e-b6')#033[00m
Jan 31 03:34:27 np0005603609 podman[295057]: 2026-01-31 08:34:27.597729445 +0000 UTC m=+0.051395341 container remove 0fc1e1b5b83d23453d35a551f5918de92943ebb96b5560ede7b8379b4a2d9f21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:34:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:27.601 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[48f286b0-abfa-4719-81bd-9b96c37cbf75]: (4, ('Sat Jan 31 08:34:27 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d (0fc1e1b5b83d23453d35a551f5918de92943ebb96b5560ede7b8379b4a2d9f21)\n0fc1e1b5b83d23453d35a551f5918de92943ebb96b5560ede7b8379b4a2d9f21\nSat Jan 31 08:34:27 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d (0fc1e1b5b83d23453d35a551f5918de92943ebb96b5560ede7b8379b4a2d9f21)\n0fc1e1b5b83d23453d35a551f5918de92943ebb96b5560ede7b8379b4a2d9f21\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:27.603 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b7db1810-f773-4896-bef8-4642ec78cb51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:27.604 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap786b4c20-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:27 np0005603609 kernel: tap786b4c20-d0: left promiscuous mode
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.608 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:27 np0005603609 nova_compute[221550]: 2026-01-31 08:34:27.610 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:27.614 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[26aa8139-cb47-4c97-8967-01983bb4fe4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:27.635 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[be7f32ee-8cb6-4d59-b60d-6eb587f08a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:27.636 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc5893e-dfb8-4c48-87b4-12f8ac3ff5bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:27.647 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[72e08b2e-6e35-497c-87e0-020a4dfe781d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870184, 'reachable_time': 35696, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295091, 'error': None, 'target': 'ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:27 np0005603609 systemd[1]: run-netns-ovnmeta\x2d786b4c20\x2dd3c9\x2d4eba\x2db2c7\x2d0e7b9805b52d.mount: Deactivated successfully.
Jan 31 03:34:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:27.651 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:34:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:27.651 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[a38ed482-6160-44ba-8dc4-d3203f1c9e86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:28 np0005603609 nova_compute[221550]: 2026-01-31 08:34:28.052 221554 INFO nova.virt.libvirt.driver [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Deleting instance files /var/lib/nova/instances/a3943c42-01b0-4548-99b8-1fbb3e72306d_del#033[00m
Jan 31 03:34:28 np0005603609 nova_compute[221550]: 2026-01-31 08:34:28.053 221554 INFO nova.virt.libvirt.driver [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Deletion of /var/lib/nova/instances/a3943c42-01b0-4548-99b8-1fbb3e72306d_del complete#033[00m
Jan 31 03:34:28 np0005603609 nova_compute[221550]: 2026-01-31 08:34:28.215 221554 INFO nova.compute.manager [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Took 0.92 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:34:28 np0005603609 nova_compute[221550]: 2026-01-31 08:34:28.215 221554 DEBUG oslo.service.loopingcall [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:34:28 np0005603609 nova_compute[221550]: 2026-01-31 08:34:28.216 221554 DEBUG nova.compute.manager [-] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:34:28 np0005603609 nova_compute[221550]: 2026-01-31 08:34:28.216 221554 DEBUG nova.network.neutron [-] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:34:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:29.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:34:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:29.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.639 221554 DEBUG nova.compute.manager [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received event network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.640 221554 DEBUG oslo_concurrency.lockutils [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.640 221554 DEBUG oslo_concurrency.lockutils [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.641 221554 DEBUG oslo_concurrency.lockutils [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.641 221554 DEBUG nova.compute.manager [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] No waiting events found dispatching network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.641 221554 WARNING nova.compute.manager [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received unexpected event network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.642 221554 DEBUG nova.compute.manager [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Received event network-vif-unplugged-97668e1e-b6e0-47c7-b18b-e64ba12e23b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.642 221554 DEBUG oslo_concurrency.lockutils [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.642 221554 DEBUG oslo_concurrency.lockutils [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.643 221554 DEBUG oslo_concurrency.lockutils [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.643 221554 DEBUG nova.compute.manager [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] No waiting events found dispatching network-vif-unplugged-97668e1e-b6e0-47c7-b18b-e64ba12e23b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.644 221554 DEBUG nova.compute.manager [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Received event network-vif-unplugged-97668e1e-b6e0-47c7-b18b-e64ba12e23b2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.644 221554 DEBUG nova.compute.manager [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Received event network-vif-plugged-97668e1e-b6e0-47c7-b18b-e64ba12e23b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.644 221554 DEBUG oslo_concurrency.lockutils [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.645 221554 DEBUG oslo_concurrency.lockutils [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.645 221554 DEBUG oslo_concurrency.lockutils [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a3943c42-01b0-4548-99b8-1fbb3e72306d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.645 221554 DEBUG nova.compute.manager [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] No waiting events found dispatching network-vif-plugged-97668e1e-b6e0-47c7-b18b-e64ba12e23b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:34:29 np0005603609 nova_compute[221550]: 2026-01-31 08:34:29.646 221554 WARNING nova.compute.manager [req-c7d7fbe8-5c82-4dc0-a9e9-55eb88d5eba6 req-f2b905b3-e814-4084-a3a2-0419aa493c0a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Received unexpected event network-vif-plugged-97668e1e-b6e0-47c7-b18b-e64ba12e23b2 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:34:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:31.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:34:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:31.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:34:31 np0005603609 nova_compute[221550]: 2026-01-31 08:34:31.328 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:32 np0005603609 nova_compute[221550]: 2026-01-31 08:34:32.582 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:33 np0005603609 nova_compute[221550]: 2026-01-31 08:34:33.063 221554 DEBUG nova.network.neutron [-] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:34:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:33.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:33.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:33 np0005603609 nova_compute[221550]: 2026-01-31 08:34:33.859 221554 INFO nova.compute.manager [-] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Took 5.64 seconds to deallocate network for instance.#033[00m
Jan 31 03:34:34 np0005603609 nova_compute[221550]: 2026-01-31 08:34:34.400 221554 DEBUG oslo_concurrency.lockutils [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:34 np0005603609 nova_compute[221550]: 2026-01-31 08:34:34.401 221554 DEBUG oslo_concurrency.lockutils [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:34 np0005603609 nova_compute[221550]: 2026-01-31 08:34:34.454 221554 DEBUG nova.compute.manager [req-c6309577-8698-4723-a307-951931109108 req-528bc12f-bc29-41d9-b25d-49cb59540155 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Received event network-vif-deleted-97668e1e-b6e0-47c7-b18b-e64ba12e23b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:34 np0005603609 nova_compute[221550]: 2026-01-31 08:34:34.521 221554 DEBUG oslo_concurrency.processutils [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:34:34 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/206312902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:34:35 np0005603609 nova_compute[221550]: 2026-01-31 08:34:35.009 221554 DEBUG oslo_concurrency.processutils [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:35 np0005603609 nova_compute[221550]: 2026-01-31 08:34:35.014 221554 DEBUG nova.compute.provider_tree [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:34:35 np0005603609 nova_compute[221550]: 2026-01-31 08:34:35.171 221554 DEBUG nova.scheduler.client.report [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:34:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:34:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:35.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:34:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:35.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:35 np0005603609 nova_compute[221550]: 2026-01-31 08:34:35.360 221554 DEBUG oslo_concurrency.lockutils [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:35 np0005603609 nova_compute[221550]: 2026-01-31 08:34:35.606 221554 INFO nova.scheduler.client.report [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Deleted allocations for instance a3943c42-01b0-4548-99b8-1fbb3e72306d#033[00m
Jan 31 03:34:36 np0005603609 nova_compute[221550]: 2026-01-31 08:34:36.253 221554 DEBUG oslo_concurrency.lockutils [None req-0e7560e1-dd1f-437e-97d4-583200bc5d1d f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "a3943c42-01b0-4548-99b8-1fbb3e72306d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:36 np0005603609 nova_compute[221550]: 2026-01-31 08:34:36.329 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:34:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:37.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:34:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:37.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:37 np0005603609 nova_compute[221550]: 2026-01-31 08:34:37.584 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:37 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:37Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:bd:05 10.100.0.9
Jan 31 03:34:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:39.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:39.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:39 np0005603609 nova_compute[221550]: 2026-01-31 08:34:39.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:41.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:41.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:41 np0005603609 nova_compute[221550]: 2026-01-31 08:34:41.333 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:42 np0005603609 nova_compute[221550]: 2026-01-31 08:34:42.525 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848467.5240412, a3943c42-01b0-4548-99b8-1fbb3e72306d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:34:42 np0005603609 nova_compute[221550]: 2026-01-31 08:34:42.526 221554 INFO nova.compute.manager [-] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:34:42 np0005603609 nova_compute[221550]: 2026-01-31 08:34:42.586 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:43 np0005603609 nova_compute[221550]: 2026-01-31 08:34:43.165 221554 DEBUG nova.compute.manager [None req-4e015a7a-dd3f-4d41-b314-fccdad140d92 - - - - - -] [instance: a3943c42-01b0-4548-99b8-1fbb3e72306d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:34:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:43.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:34:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:43.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:43.647 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:34:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:43.648 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:34:43 np0005603609 nova_compute[221550]: 2026-01-31 08:34:43.648 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:45.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:45.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:45 np0005603609 nova_compute[221550]: 2026-01-31 08:34:45.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:46 np0005603609 nova_compute[221550]: 2026-01-31 08:34:46.337 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:46 np0005603609 nova_compute[221550]: 2026-01-31 08:34:46.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:34:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:47.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:34:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:47.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:47 np0005603609 nova_compute[221550]: 2026-01-31 08:34:47.589 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:47.650 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:48 np0005603609 nova_compute[221550]: 2026-01-31 08:34:48.690 221554 INFO nova.compute.manager [None req-1a851988-b309-4f63-af9a-23d9a2c914e6 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Get console output#033[00m
Jan 31 03:34:48 np0005603609 nova_compute[221550]: 2026-01-31 08:34:48.697 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:34:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:34:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:49.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:34:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:49.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:49 np0005603609 nova_compute[221550]: 2026-01-31 08:34:49.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:49 np0005603609 nova_compute[221550]: 2026-01-31 08:34:49.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:34:49 np0005603609 nova_compute[221550]: 2026-01-31 08:34:49.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:34:50 np0005603609 podman[295118]: 2026-01-31 08:34:50.165690989 +0000 UTC m=+0.052576121 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 03:34:50 np0005603609 podman[295117]: 2026-01-31 08:34:50.188368691 +0000 UTC m=+0.075300354 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 03:34:50 np0005603609 nova_compute[221550]: 2026-01-31 08:34:50.265 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:34:50 np0005603609 nova_compute[221550]: 2026-01-31 08:34:50.265 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:34:50 np0005603609 nova_compute[221550]: 2026-01-31 08:34:50.265 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:34:50 np0005603609 nova_compute[221550]: 2026-01-31 08:34:50.265 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:34:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:51.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:34:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:51.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:34:51 np0005603609 nova_compute[221550]: 2026-01-31 08:34:51.339 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:51 np0005603609 podman[295335]: 2026-01-31 08:34:51.672080378 +0000 UTC m=+0.061343983 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 03:34:51 np0005603609 podman[295335]: 2026-01-31 08:34:51.777467024 +0000 UTC m=+0.166730629 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:34:51 np0005603609 nova_compute[221550]: 2026-01-31 08:34:51.939 221554 DEBUG oslo_concurrency.lockutils [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:51 np0005603609 nova_compute[221550]: 2026-01-31 08:34:51.940 221554 DEBUG oslo_concurrency.lockutils [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:51 np0005603609 nova_compute[221550]: 2026-01-31 08:34:51.940 221554 DEBUG oslo_concurrency.lockutils [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:51 np0005603609 nova_compute[221550]: 2026-01-31 08:34:51.940 221554 DEBUG oslo_concurrency.lockutils [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:51 np0005603609 nova_compute[221550]: 2026-01-31 08:34:51.941 221554 DEBUG oslo_concurrency.lockutils [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:51 np0005603609 nova_compute[221550]: 2026-01-31 08:34:51.942 221554 INFO nova.compute.manager [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Terminating instance#033[00m
Jan 31 03:34:51 np0005603609 nova_compute[221550]: 2026-01-31 08:34:51.942 221554 DEBUG nova.compute.manager [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:34:52 np0005603609 kernel: tapfcbad7c1-62 (unregistering): left promiscuous mode
Jan 31 03:34:52 np0005603609 NetworkManager[49064]: <info>  [1769848492.0070] device (tapfcbad7c1-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:34:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:52Z|00790|binding|INFO|Releasing lport fcbad7c1-624e-4776-8e78-ae63868c5ec4 from this chassis (sb_readonly=0)
Jan 31 03:34:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:52Z|00791|binding|INFO|Setting lport fcbad7c1-624e-4776-8e78-ae63868c5ec4 down in Southbound
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.012 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:34:52Z|00792|binding|INFO|Removing iface tapfcbad7c1-62 ovn-installed in OVS
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.014 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.025 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:52 np0005603609 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000ae.scope: Deactivated successfully.
Jan 31 03:34:52 np0005603609 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000ae.scope: Consumed 13.455s CPU time.
Jan 31 03:34:52 np0005603609 systemd-machined[190912]: Machine qemu-96-instance-000000ae terminated.
Jan 31 03:34:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:52.095 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:bd:05 10.100.0.9'], port_security=['fa:16:3e:3a:bd:05 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81287ae0-230a-4745-9d4d-f12a02a04d02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '06c0a46d-73c1-48f8-970d-bf276115204e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef683f19-a225-4b3e-868c-bb553f5f4aa2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=fcbad7c1-624e-4776-8e78-ae63868c5ec4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:34:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:52.096 140058 INFO neutron.agent.ovn.metadata.agent [-] Port fcbad7c1-624e-4776-8e78-ae63868c5ec4 in datapath 81287ae0-230a-4745-9d4d-f12a02a04d02 unbound from our chassis#033[00m
Jan 31 03:34:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:52.097 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81287ae0-230a-4745-9d4d-f12a02a04d02, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:34:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:52.098 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1017390d-2ec7-43d2-b577-e9ffbea4b38a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:52.099 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02 namespace which is not needed anymore#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.155 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.160 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.168 221554 INFO nova.virt.libvirt.driver [-] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Instance destroyed successfully.#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.169 221554 DEBUG nova.objects.instance [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'resources' on Instance uuid 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.173 221554 DEBUG nova.compute.manager [req-3688a9f0-93bc-4ba2-96b8-d4a133ae0b1e req-7814e1de-8f7e-4dc2-be46-7ccf9eb68de0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received event network-changed-fcbad7c1-624e-4776-8e78-ae63868c5ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.174 221554 DEBUG nova.compute.manager [req-3688a9f0-93bc-4ba2-96b8-d4a133ae0b1e req-7814e1de-8f7e-4dc2-be46-7ccf9eb68de0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Refreshing instance network info cache due to event network-changed-fcbad7c1-624e-4776-8e78-ae63868c5ec4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.174 221554 DEBUG oslo_concurrency.lockutils [req-3688a9f0-93bc-4ba2-96b8-d4a133ae0b1e req-7814e1de-8f7e-4dc2-be46-7ccf9eb68de0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:34:52 np0005603609 neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02[294986]: [NOTICE]   (294990) : haproxy version is 2.8.14-c23fe91
Jan 31 03:34:52 np0005603609 neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02[294986]: [NOTICE]   (294990) : path to executable is /usr/sbin/haproxy
Jan 31 03:34:52 np0005603609 neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02[294986]: [WARNING]  (294990) : Exiting Master process...
Jan 31 03:34:52 np0005603609 neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02[294986]: [ALERT]    (294990) : Current worker (294992) exited with code 143 (Terminated)
Jan 31 03:34:52 np0005603609 neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02[294986]: [WARNING]  (294990) : All workers exited. Exiting... (0)
Jan 31 03:34:52 np0005603609 systemd[1]: libpod-29ba62c76f5587a7bf5dff06fad4a4ca5669b7d316aa939d9fdbfa774fdc2bae.scope: Deactivated successfully.
Jan 31 03:34:52 np0005603609 podman[295486]: 2026-01-31 08:34:52.208481593 +0000 UTC m=+0.042101327 container died 29ba62c76f5587a7bf5dff06fad4a4ca5669b7d316aa939d9fdbfa774fdc2bae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:34:52 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-29ba62c76f5587a7bf5dff06fad4a4ca5669b7d316aa939d9fdbfa774fdc2bae-userdata-shm.mount: Deactivated successfully.
Jan 31 03:34:52 np0005603609 systemd[1]: var-lib-containers-storage-overlay-7ba9bd031c98c0f2c6f8cceb63b0bb91a3fb67fc43e191e957292dfd2c93b6d1-merged.mount: Deactivated successfully.
Jan 31 03:34:52 np0005603609 podman[295486]: 2026-01-31 08:34:52.240264106 +0000 UTC m=+0.073883820 container cleanup 29ba62c76f5587a7bf5dff06fad4a4ca5669b7d316aa939d9fdbfa774fdc2bae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.243 221554 DEBUG nova.virt.libvirt.vif [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:33:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1300355043',display_name='tempest-TestNetworkAdvancedServerOps-server-1300355043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1300355043',id=174,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCmbDmPDaurVYZqmnL0+j7llg8wQN2SUlST66gsq2LIihnID4aWDxrYDMI7+BqXpoedcRI57zeHuiAACSHAfyCXMEU86mXXr9C3v/uMY2IApjIK5OhnhGcW/b+OEEa2kWw==',key_name='tempest-TestNetworkAdvancedServerOps-2013984758',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:33:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-s0237wpe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:34:25Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "address": "fa:16:3e:3a:bd:05", "network": {"id": "81287ae0-230a-4745-9d4d-f12a02a04d02", "bridge": "br-int", "label": "tempest-network-smoke--608182900", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcbad7c1-62", "ovs_interfaceid": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.244 221554 DEBUG nova.network.os_vif_util [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "address": "fa:16:3e:3a:bd:05", "network": {"id": "81287ae0-230a-4745-9d4d-f12a02a04d02", "bridge": "br-int", "label": "tempest-network-smoke--608182900", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcbad7c1-62", "ovs_interfaceid": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:34:52 np0005603609 systemd[1]: libpod-conmon-29ba62c76f5587a7bf5dff06fad4a4ca5669b7d316aa939d9fdbfa774fdc2bae.scope: Deactivated successfully.
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.245 221554 DEBUG nova.network.os_vif_util [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:bd:05,bridge_name='br-int',has_traffic_filtering=True,id=fcbad7c1-624e-4776-8e78-ae63868c5ec4,network=Network(81287ae0-230a-4745-9d4d-f12a02a04d02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcbad7c1-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.246 221554 DEBUG os_vif [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:bd:05,bridge_name='br-int',has_traffic_filtering=True,id=fcbad7c1-624e-4776-8e78-ae63868c5ec4,network=Network(81287ae0-230a-4745-9d4d-f12a02a04d02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcbad7c1-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.248 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.248 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfcbad7c1-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.270 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.273 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.274 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.276 221554 INFO os_vif [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:bd:05,bridge_name='br-int',has_traffic_filtering=True,id=fcbad7c1-624e-4776-8e78-ae63868c5ec4,network=Network(81287ae0-230a-4745-9d4d-f12a02a04d02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcbad7c1-62')#033[00m
Jan 31 03:34:52 np0005603609 podman[295569]: 2026-01-31 08:34:52.307789209 +0000 UTC m=+0.048505211 container remove 29ba62c76f5587a7bf5dff06fad4a4ca5669b7d316aa939d9fdbfa774fdc2bae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 03:34:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:52.312 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0f848875-fda1-4315-b787-b9452d9497f9]: (4, ('Sat Jan 31 08:34:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02 (29ba62c76f5587a7bf5dff06fad4a4ca5669b7d316aa939d9fdbfa774fdc2bae)\n29ba62c76f5587a7bf5dff06fad4a4ca5669b7d316aa939d9fdbfa774fdc2bae\nSat Jan 31 08:34:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02 (29ba62c76f5587a7bf5dff06fad4a4ca5669b7d316aa939d9fdbfa774fdc2bae)\n29ba62c76f5587a7bf5dff06fad4a4ca5669b7d316aa939d9fdbfa774fdc2bae\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:52.314 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1669fdf7-58e6-4d5e-babf-f410e3f99c88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:52.316 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81287ae0-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.318 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:52 np0005603609 kernel: tap81287ae0-20: left promiscuous mode
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.326 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:52.329 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[71a9c6ff-abcf-4dc0-bd1c-d87ac99194cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:52.346 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[adc082cd-2111-4a5e-b22b-6b9102593695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:52.347 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3c4862cf-e019-46bb-8ce5-943a5722876d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:52.360 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c78c3dbb-b922-496a-b97f-55801c98fbf7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870456, 'reachable_time': 33083, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295632, 'error': None, 'target': 'ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:52 np0005603609 systemd[1]: run-netns-ovnmeta\x2d81287ae0\x2d230a\x2d4745\x2d9d4d\x2df12a02a04d02.mount: Deactivated successfully.
Jan 31 03:34:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:52.364 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81287ae0-230a-4745-9d4d-f12a02a04d02 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:34:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:34:52.364 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[e34f4637-eed5-4fa6-8fdf-7e48bbca7d38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.692 221554 INFO nova.virt.libvirt.driver [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Deleting instance files /var/lib/nova/instances/4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6_del#033[00m
Jan 31 03:34:52 np0005603609 nova_compute[221550]: 2026-01-31 08:34:52.692 221554 INFO nova.virt.libvirt.driver [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Deletion of /var/lib/nova/instances/4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6_del complete#033[00m
Jan 31 03:34:53 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:34:53 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:34:53 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:34:53 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:34:53 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:34:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:34:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:53.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:34:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:53.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:34:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1367228748' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:34:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:34:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1367228748' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.133 221554 INFO nova.compute.manager [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Took 2.19 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.134 221554 DEBUG oslo.service.loopingcall [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.134 221554 DEBUG nova.compute.manager [-] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.135 221554 DEBUG nova.network.neutron [-] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.356 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Updating instance_info_cache with network_info: [{"id": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "address": "fa:16:3e:3a:bd:05", "network": {"id": "81287ae0-230a-4745-9d4d-f12a02a04d02", "bridge": "br-int", "label": "tempest-network-smoke--608182900", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcbad7c1-62", "ovs_interfaceid": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.578 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.578 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.578 221554 DEBUG oslo_concurrency.lockutils [req-3688a9f0-93bc-4ba2-96b8-d4a133ae0b1e req-7814e1de-8f7e-4dc2-be46-7ccf9eb68de0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.578 221554 DEBUG nova.network.neutron [req-3688a9f0-93bc-4ba2-96b8-d4a133ae0b1e req-7814e1de-8f7e-4dc2-be46-7ccf9eb68de0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Refreshing network info cache for port fcbad7c1-624e-4776-8e78-ae63868c5ec4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.579 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.580 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.580 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.645 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.645 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.645 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.645 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:34:54 np0005603609 nova_compute[221550]: 2026-01-31 08:34:54.646 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:34:55 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3869672271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:34:55 np0005603609 nova_compute[221550]: 2026-01-31 08:34:55.098 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:55.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:55 np0005603609 nova_compute[221550]: 2026-01-31 08:34:55.219 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:34:55 np0005603609 nova_compute[221550]: 2026-01-31 08:34:55.221 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4218MB free_disk=20.870807647705078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:34:55 np0005603609 nova_compute[221550]: 2026-01-31 08:34:55.221 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:55 np0005603609 nova_compute[221550]: 2026-01-31 08:34:55.221 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:55.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:55 np0005603609 nova_compute[221550]: 2026-01-31 08:34:55.703 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:34:55 np0005603609 nova_compute[221550]: 2026-01-31 08:34:55.703 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:34:55 np0005603609 nova_compute[221550]: 2026-01-31 08:34:55.704 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:34:55 np0005603609 nova_compute[221550]: 2026-01-31 08:34:55.774 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:34:56 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2292615389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:34:56 np0005603609 nova_compute[221550]: 2026-01-31 08:34:56.196 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:56 np0005603609 nova_compute[221550]: 2026-01-31 08:34:56.201 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:34:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:56 np0005603609 nova_compute[221550]: 2026-01-31 08:34:56.341 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:56 np0005603609 nova_compute[221550]: 2026-01-31 08:34:56.816 221554 DEBUG nova.network.neutron [-] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:34:56 np0005603609 nova_compute[221550]: 2026-01-31 08:34:56.824 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:34:56 np0005603609 nova_compute[221550]: 2026-01-31 08:34:56.834 221554 DEBUG nova.compute.manager [req-70ec3e9c-6e60-4570-8a4e-dcacf3d77eb8 req-1f70358e-43e4-411b-b8d4-bc8e0613d9c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received event network-vif-unplugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:56 np0005603609 nova_compute[221550]: 2026-01-31 08:34:56.834 221554 DEBUG oslo_concurrency.lockutils [req-70ec3e9c-6e60-4570-8a4e-dcacf3d77eb8 req-1f70358e-43e4-411b-b8d4-bc8e0613d9c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:56 np0005603609 nova_compute[221550]: 2026-01-31 08:34:56.834 221554 DEBUG oslo_concurrency.lockutils [req-70ec3e9c-6e60-4570-8a4e-dcacf3d77eb8 req-1f70358e-43e4-411b-b8d4-bc8e0613d9c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:56 np0005603609 nova_compute[221550]: 2026-01-31 08:34:56.835 221554 DEBUG oslo_concurrency.lockutils [req-70ec3e9c-6e60-4570-8a4e-dcacf3d77eb8 req-1f70358e-43e4-411b-b8d4-bc8e0613d9c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:56 np0005603609 nova_compute[221550]: 2026-01-31 08:34:56.835 221554 DEBUG nova.compute.manager [req-70ec3e9c-6e60-4570-8a4e-dcacf3d77eb8 req-1f70358e-43e4-411b-b8d4-bc8e0613d9c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] No waiting events found dispatching network-vif-unplugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:34:56 np0005603609 nova_compute[221550]: 2026-01-31 08:34:56.835 221554 DEBUG nova.compute.manager [req-70ec3e9c-6e60-4570-8a4e-dcacf3d77eb8 req-1f70358e-43e4-411b-b8d4-bc8e0613d9c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received event network-vif-unplugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:34:56 np0005603609 nova_compute[221550]: 2026-01-31 08:34:56.917 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:34:56 np0005603609 nova_compute[221550]: 2026-01-31 08:34:56.917 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:56 np0005603609 nova_compute[221550]: 2026-01-31 08:34:56.968 221554 INFO nova.compute.manager [-] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Took 2.83 seconds to deallocate network for instance.#033[00m
Jan 31 03:34:57 np0005603609 nova_compute[221550]: 2026-01-31 08:34:57.128 221554 DEBUG oslo_concurrency.lockutils [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:57 np0005603609 nova_compute[221550]: 2026-01-31 08:34:57.129 221554 DEBUG oslo_concurrency.lockutils [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:57.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:57 np0005603609 nova_compute[221550]: 2026-01-31 08:34:57.202 221554 DEBUG oslo_concurrency.processutils [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:57 np0005603609 nova_compute[221550]: 2026-01-31 08:34:57.270 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:57.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:57 np0005603609 nova_compute[221550]: 2026-01-31 08:34:57.646 221554 DEBUG oslo_concurrency.processutils [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:57 np0005603609 nova_compute[221550]: 2026-01-31 08:34:57.652 221554 DEBUG nova.compute.provider_tree [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:34:57 np0005603609 nova_compute[221550]: 2026-01-31 08:34:57.726 221554 DEBUG nova.scheduler.client.report [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:34:57 np0005603609 nova_compute[221550]: 2026-01-31 08:34:57.873 221554 DEBUG oslo_concurrency.lockutils [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:57 np0005603609 nova_compute[221550]: 2026-01-31 08:34:57.944 221554 INFO nova.scheduler.client.report [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Deleted allocations for instance 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6#033[00m
Jan 31 03:34:58 np0005603609 nova_compute[221550]: 2026-01-31 08:34:58.156 221554 DEBUG nova.compute.manager [req-40e05c64-d3eb-4b6d-acf3-cb148a43e8e0 req-5dca0314-6bcf-4a59-870b-912537240846 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received event network-vif-deleted-fcbad7c1-624e-4776-8e78-ae63868c5ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:58 np0005603609 nova_compute[221550]: 2026-01-31 08:34:58.161 221554 DEBUG oslo_concurrency.lockutils [None req-6b5122d1-a2d8-4b3f-a5c3-93da24cf620c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:58 np0005603609 nova_compute[221550]: 2026-01-31 08:34:58.907 221554 DEBUG nova.network.neutron [req-3688a9f0-93bc-4ba2-96b8-d4a133ae0b1e req-7814e1de-8f7e-4dc2-be46-7ccf9eb68de0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Updated VIF entry in instance network info cache for port fcbad7c1-624e-4776-8e78-ae63868c5ec4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:34:58 np0005603609 nova_compute[221550]: 2026-01-31 08:34:58.907 221554 DEBUG nova.network.neutron [req-3688a9f0-93bc-4ba2-96b8-d4a133ae0b1e req-7814e1de-8f7e-4dc2-be46-7ccf9eb68de0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Updating instance_info_cache with network_info: [{"id": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "address": "fa:16:3e:3a:bd:05", "network": {"id": "81287ae0-230a-4745-9d4d-f12a02a04d02", "bridge": "br-int", "label": "tempest-network-smoke--608182900", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcbad7c1-62", "ovs_interfaceid": "fcbad7c1-624e-4776-8e78-ae63868c5ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:34:59 np0005603609 nova_compute[221550]: 2026-01-31 08:34:59.029 221554 DEBUG oslo_concurrency.lockutils [req-3688a9f0-93bc-4ba2-96b8-d4a133ae0b1e req-7814e1de-8f7e-4dc2-be46-7ccf9eb68de0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:34:59 np0005603609 nova_compute[221550]: 2026-01-31 08:34:59.103 221554 DEBUG nova.compute.manager [req-7ca8405a-6abc-4708-9ca7-221e2834f37c req-8bc87148-fdaa-4785-a758-30aa10277d6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received event network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:59 np0005603609 nova_compute[221550]: 2026-01-31 08:34:59.103 221554 DEBUG oslo_concurrency.lockutils [req-7ca8405a-6abc-4708-9ca7-221e2834f37c req-8bc87148-fdaa-4785-a758-30aa10277d6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:59 np0005603609 nova_compute[221550]: 2026-01-31 08:34:59.103 221554 DEBUG oslo_concurrency.lockutils [req-7ca8405a-6abc-4708-9ca7-221e2834f37c req-8bc87148-fdaa-4785-a758-30aa10277d6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:59 np0005603609 nova_compute[221550]: 2026-01-31 08:34:59.104 221554 DEBUG oslo_concurrency.lockutils [req-7ca8405a-6abc-4708-9ca7-221e2834f37c req-8bc87148-fdaa-4785-a758-30aa10277d6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:59 np0005603609 nova_compute[221550]: 2026-01-31 08:34:59.104 221554 DEBUG nova.compute.manager [req-7ca8405a-6abc-4708-9ca7-221e2834f37c req-8bc87148-fdaa-4785-a758-30aa10277d6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] No waiting events found dispatching network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:34:59 np0005603609 nova_compute[221550]: 2026-01-31 08:34:59.104 221554 WARNING nova.compute.manager [req-7ca8405a-6abc-4708-9ca7-221e2834f37c req-8bc87148-fdaa-4785-a758-30aa10277d6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Received unexpected event network-vif-plugged-fcbad7c1-624e-4776-8e78-ae63868c5ec4 for instance with vm_state deleted and task_state None.#033[00m
Jan 31 03:34:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:59.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:34:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:59.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:35:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:35:00 np0005603609 nova_compute[221550]: 2026-01-31 08:35:00.998 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:00 np0005603609 nova_compute[221550]: 2026-01-31 08:35:00.998 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:01.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:01 np0005603609 nova_compute[221550]: 2026-01-31 08:35:01.342 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:35:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:01.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:35:01 np0005603609 nova_compute[221550]: 2026-01-31 08:35:01.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:02 np0005603609 nova_compute[221550]: 2026-01-31 08:35:02.274 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:03.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:03.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:35:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:05.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:35:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:05.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:06 np0005603609 nova_compute[221550]: 2026-01-31 08:35:06.344 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:07 np0005603609 nova_compute[221550]: 2026-01-31 08:35:07.169 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848492.1675296, 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:35:07 np0005603609 nova_compute[221550]: 2026-01-31 08:35:07.169 221554 INFO nova.compute.manager [-] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:35:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:35:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:07.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:35:07 np0005603609 nova_compute[221550]: 2026-01-31 08:35:07.276 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:35:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:07.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:35:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:35:07.527 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:35:07.528 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:35:07.528 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:08 np0005603609 nova_compute[221550]: 2026-01-31 08:35:08.196 221554 DEBUG nova.compute.manager [None req-c1514ed7-4b12-4e22-8c7f-95eea3ff49a8 - - - - - -] [instance: 4d69c7e6-f9f5-45ce-866a-e4f9eafbbcf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:35:08 np0005603609 nova_compute[221550]: 2026-01-31 08:35:08.840 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:08 np0005603609 nova_compute[221550]: 2026-01-31 08:35:08.961 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:09.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:09.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:11.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:11 np0005603609 nova_compute[221550]: 2026-01-31 08:35:11.346 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:35:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:11.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:35:12 np0005603609 nova_compute[221550]: 2026-01-31 08:35:12.277 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:13.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:13.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:15.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:15.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:16 np0005603609 nova_compute[221550]: 2026-01-31 08:35:16.373 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:17.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:17 np0005603609 nova_compute[221550]: 2026-01-31 08:35:17.280 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:17.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:19.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:19.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:20 np0005603609 nova_compute[221550]: 2026-01-31 08:35:20.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:35:21 np0005603609 podman[295808]: 2026-01-31 08:35:21.175754636 +0000 UTC m=+0.050953091 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:35:21 np0005603609 podman[295807]: 2026-01-31 08:35:21.198652074 +0000 UTC m=+0.075735024 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:35:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:21.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:21 np0005603609 nova_compute[221550]: 2026-01-31 08:35:21.375 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:35:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:21.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:22 np0005603609 nova_compute[221550]: 2026-01-31 08:35:22.282 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:35:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:35:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:23.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:35:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:23.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:25.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:25.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:26 np0005603609 nova_compute[221550]: 2026-01-31 08:35:26.378 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:35:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:27.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:27 np0005603609 nova_compute[221550]: 2026-01-31 08:35:27.284 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:35:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:27.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:29.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:35:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:29.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:35:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:35:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:31.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:35:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:31 np0005603609 nova_compute[221550]: 2026-01-31 08:35:31.381 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:35:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:35:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:31.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:35:32 np0005603609 nova_compute[221550]: 2026-01-31 08:35:32.286 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:35:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:35:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:33.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:35:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:33.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:35:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:35.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:35:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:35.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:36 np0005603609 nova_compute[221550]: 2026-01-31 08:35:36.383 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:35:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:37.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:37 np0005603609 nova_compute[221550]: 2026-01-31 08:35:37.289 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:35:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:35:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:37.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:35:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:39.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:35:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:39.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:35:40 np0005603609 nova_compute[221550]: 2026-01-31 08:35:40.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:35:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:41.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:41 np0005603609 nova_compute[221550]: 2026-01-31 08:35:41.385 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:35:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:35:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:41.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:35:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:42 np0005603609 nova_compute[221550]: 2026-01-31 08:35:42.290 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:35:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:43.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:43.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:45.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:35:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:45.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:35:46 np0005603609 nova_compute[221550]: 2026-01-31 08:35:46.387 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:35:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:35:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:47.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:35:47 np0005603609 nova_compute[221550]: 2026-01-31 08:35:47.293 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:35:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:47.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:47 np0005603609 nova_compute[221550]: 2026-01-31 08:35:47.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:35:47 np0005603609 nova_compute[221550]: 2026-01-31 08:35:47.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:35:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:35:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:49.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:35:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:49.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:50 np0005603609 nova_compute[221550]: 2026-01-31 08:35:50.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:35:50 np0005603609 nova_compute[221550]: 2026-01-31 08:35:50.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 03:35:50 np0005603609 nova_compute[221550]: 2026-01-31 08:35:50.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 03:35:51 np0005603609 nova_compute[221550]: 2026-01-31 08:35:51.074 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 03:35:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:51.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:51 np0005603609 nova_compute[221550]: 2026-01-31 08:35:51.389 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:35:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:51.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:51 np0005603609 nova_compute[221550]: 2026-01-31 08:35:51.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:35:51 np0005603609 nova_compute[221550]: 2026-01-31 08:35:51.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 03:35:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:52 np0005603609 podman[295854]: 2026-01-31 08:35:52.164673951 +0000 UTC m=+0.043100640 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:35:52 np0005603609 podman[295853]: 2026-01-31 08:35:52.19176674 +0000 UTC m=+0.064517691 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 03:35:52 np0005603609 nova_compute[221550]: 2026-01-31 08:35:52.294 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:35:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:53.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:35:53.357 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:35:53 np0005603609 nova_compute[221550]: 2026-01-31 08:35:53.357 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:35:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:35:53.358 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 03:35:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:35:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/947235518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:35:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:35:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/947235518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:35:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:53.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:53 np0005603609 nova_compute[221550]: 2026-01-31 08:35:53.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:53 np0005603609 nova_compute[221550]: 2026-01-31 08:35:53.707 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:53 np0005603609 nova_compute[221550]: 2026-01-31 08:35:53.708 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:53 np0005603609 nova_compute[221550]: 2026-01-31 08:35:53.709 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:53 np0005603609 nova_compute[221550]: 2026-01-31 08:35:53.709 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:35:53 np0005603609 nova_compute[221550]: 2026-01-31 08:35:53.709 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:35:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:35:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1126290724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:35:54 np0005603609 nova_compute[221550]: 2026-01-31 08:35:54.157 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:35:54 np0005603609 nova_compute[221550]: 2026-01-31 08:35:54.290 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:35:54 np0005603609 nova_compute[221550]: 2026-01-31 08:35:54.291 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4246MB free_disk=20.897106170654297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:35:54 np0005603609 nova_compute[221550]: 2026-01-31 08:35:54.292 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:54 np0005603609 nova_compute[221550]: 2026-01-31 08:35:54.292 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:54 np0005603609 nova_compute[221550]: 2026-01-31 08:35:54.412 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:35:54 np0005603609 nova_compute[221550]: 2026-01-31 08:35:54.413 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:35:54 np0005603609 nova_compute[221550]: 2026-01-31 08:35:54.488 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:35:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:35:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1326395793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:35:54 np0005603609 nova_compute[221550]: 2026-01-31 08:35:54.896 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:35:54 np0005603609 nova_compute[221550]: 2026-01-31 08:35:54.901 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:35:54 np0005603609 nova_compute[221550]: 2026-01-31 08:35:54.925 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:35:54 np0005603609 nova_compute[221550]: 2026-01-31 08:35:54.957 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:35:54 np0005603609 nova_compute[221550]: 2026-01-31 08:35:54.957 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:35:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:55.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:35:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:55.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:56 np0005603609 nova_compute[221550]: 2026-01-31 08:35:56.392 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:57.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:57 np0005603609 nova_compute[221550]: 2026-01-31 08:35:57.297 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:57.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:59.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:35:59.359 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:35:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:35:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:59.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:59 np0005603609 nova_compute[221550]: 2026-01-31 08:35:59.958 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:59 np0005603609 nova_compute[221550]: 2026-01-31 08:35:59.958 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:01.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:01 np0005603609 nova_compute[221550]: 2026-01-31 08:36:01.430 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:01.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:01 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:36:01 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:36:02 np0005603609 nova_compute[221550]: 2026-01-31 08:36:02.299 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:36:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:36:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:36:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:36:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:03.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:36:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:03.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:03 np0005603609 nova_compute[221550]: 2026-01-31 08:36:03.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:05.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:05.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:06 np0005603609 nova_compute[221550]: 2026-01-31 08:36:06.432 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:36:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:07.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:36:07 np0005603609 nova_compute[221550]: 2026-01-31 08:36:07.301 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:07.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:07.528 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:07.528 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:07.528 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:36:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:09.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:36:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:09.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:36:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:36:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:11.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:11 np0005603609 nova_compute[221550]: 2026-01-31 08:36:11.433 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:11.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:12 np0005603609 nova_compute[221550]: 2026-01-31 08:36:12.304 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:13.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:13.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:15.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:15.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:16 np0005603609 nova_compute[221550]: 2026-01-31 08:36:16.437 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:17 np0005603609 nova_compute[221550]: 2026-01-31 08:36:17.305 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:17.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:17.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:19.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:19.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:36:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:21.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:36:21 np0005603609 nova_compute[221550]: 2026-01-31 08:36:21.439 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:21.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:22 np0005603609 nova_compute[221550]: 2026-01-31 08:36:22.307 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:23 np0005603609 nova_compute[221550]: 2026-01-31 08:36:23.017 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:23 np0005603609 nova_compute[221550]: 2026-01-31 08:36:23.017 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:23 np0005603609 podman[296130]: 2026-01-31 08:36:23.154284953 +0000 UTC m=+0.041252214 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:36:23 np0005603609 podman[296129]: 2026-01-31 08:36:23.179929648 +0000 UTC m=+0.069785079 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:36:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:36:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:23.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:36:23 np0005603609 nova_compute[221550]: 2026-01-31 08:36:23.480 221554 DEBUG nova.compute.manager [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:36:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:36:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:23.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:36:24 np0005603609 nova_compute[221550]: 2026-01-31 08:36:24.120 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:24 np0005603609 nova_compute[221550]: 2026-01-31 08:36:24.121 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:24 np0005603609 nova_compute[221550]: 2026-01-31 08:36:24.131 221554 DEBUG nova.virt.hardware [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:36:24 np0005603609 nova_compute[221550]: 2026-01-31 08:36:24.131 221554 INFO nova.compute.claims [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:36:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:25.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:25.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:26 np0005603609 nova_compute[221550]: 2026-01-31 08:36:26.342 221554 DEBUG oslo_concurrency.processutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:26 np0005603609 nova_compute[221550]: 2026-01-31 08:36:26.440 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:26 np0005603609 nova_compute[221550]: 2026-01-31 08:36:26.711 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Acquiring lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:26 np0005603609 nova_compute[221550]: 2026-01-31 08:36:26.712 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:36:26 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1161939193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:36:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:26 np0005603609 nova_compute[221550]: 2026-01-31 08:36:26.822 221554 DEBUG oslo_concurrency.processutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:26 np0005603609 nova_compute[221550]: 2026-01-31 08:36:26.826 221554 DEBUG nova.compute.provider_tree [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:36:27 np0005603609 nova_compute[221550]: 2026-01-31 08:36:27.103 221554 DEBUG nova.compute.manager [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:36:27 np0005603609 nova_compute[221550]: 2026-01-31 08:36:27.160 221554 DEBUG nova.scheduler.client.report [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:36:27 np0005603609 nova_compute[221550]: 2026-01-31 08:36:27.310 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:27.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:36:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:27.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:36:27 np0005603609 nova_compute[221550]: 2026-01-31 08:36:27.577 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:27 np0005603609 nova_compute[221550]: 2026-01-31 08:36:27.578 221554 DEBUG nova.compute.manager [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:36:27 np0005603609 nova_compute[221550]: 2026-01-31 08:36:27.627 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:27 np0005603609 nova_compute[221550]: 2026-01-31 08:36:27.628 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:27 np0005603609 nova_compute[221550]: 2026-01-31 08:36:27.634 221554 DEBUG nova.virt.hardware [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:36:27 np0005603609 nova_compute[221550]: 2026-01-31 08:36:27.635 221554 INFO nova.compute.claims [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:36:27 np0005603609 nova_compute[221550]: 2026-01-31 08:36:27.863 221554 DEBUG nova.compute.manager [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:36:27 np0005603609 nova_compute[221550]: 2026-01-31 08:36:27.864 221554 DEBUG nova.network.neutron [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:36:28 np0005603609 nova_compute[221550]: 2026-01-31 08:36:28.287 221554 INFO nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:36:28 np0005603609 nova_compute[221550]: 2026-01-31 08:36:28.486 221554 DEBUG nova.compute.manager [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:36:28 np0005603609 nova_compute[221550]: 2026-01-31 08:36:28.515 221554 DEBUG nova.policy [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d0e9d918b4041fabd5ded633b4cf404', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9710f0cf77d84353ae13fa47922b085d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:36:28 np0005603609 nova_compute[221550]: 2026-01-31 08:36:28.519 221554 DEBUG oslo_concurrency.processutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:36:28 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4077521189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:36:28 np0005603609 nova_compute[221550]: 2026-01-31 08:36:28.941 221554 DEBUG oslo_concurrency.processutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:28 np0005603609 nova_compute[221550]: 2026-01-31 08:36:28.946 221554 DEBUG nova.compute.provider_tree [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.086 221554 DEBUG nova.compute.manager [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.088 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.088 221554 INFO nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Creating image(s)#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.114 221554 DEBUG nova.storage.rbd_utils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.142 221554 DEBUG nova.storage.rbd_utils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.169 221554 DEBUG nova.storage.rbd_utils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.173 221554 DEBUG oslo_concurrency.processutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.193 221554 DEBUG nova.scheduler.client.report [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.228 221554 DEBUG oslo_concurrency.processutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.229 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.230 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.230 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.265 221554 DEBUG nova.storage.rbd_utils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.270 221554 DEBUG oslo_concurrency.processutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.294 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.294 221554 DEBUG nova.compute.manager [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:36:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:29.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.428 221554 DEBUG nova.compute.manager [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.428 221554 DEBUG nova.network.neutron [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:36:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:29.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.589 221554 INFO nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.662 221554 DEBUG nova.compute.manager [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.797 221554 DEBUG nova.compute.manager [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.798 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.799 221554 INFO nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Creating image(s)#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.824 221554 DEBUG nova.storage.rbd_utils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] rbd image c638ce77-3d60-4891-ae4f-13f1b8384ea0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.850 221554 DEBUG nova.storage.rbd_utils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] rbd image c638ce77-3d60-4891-ae4f-13f1b8384ea0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.874 221554 DEBUG nova.storage.rbd_utils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] rbd image c638ce77-3d60-4891-ae4f-13f1b8384ea0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.879 221554 DEBUG oslo_concurrency.processutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:36:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 55K writes, 216K keys, 55K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.04 MB/s#012Cumulative WAL: 55K writes, 20K syncs, 2.74 writes per sync, written: 0.21 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5559 writes, 22K keys, 5559 commit groups, 1.0 writes per commit group, ingest: 20.65 MB, 0.03 MB/s#012Interval WAL: 5558 writes, 2034 syncs, 2.73 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.927 221554 DEBUG nova.policy [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6bb0d6544be445f6a04b52b326336c8f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '49eb9aae5e984a72b3c0b1e5316e2172', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.935 221554 DEBUG oslo_concurrency.processutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.936 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.936 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.936 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.961 221554 DEBUG nova.storage.rbd_utils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] rbd image c638ce77-3d60-4891-ae4f-13f1b8384ea0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:29 np0005603609 nova_compute[221550]: 2026-01-31 08:36:29.964 221554 DEBUG oslo_concurrency.processutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c638ce77-3d60-4891-ae4f-13f1b8384ea0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:30 np0005603609 nova_compute[221550]: 2026-01-31 08:36:30.620 221554 DEBUG oslo_concurrency.processutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:30 np0005603609 nova_compute[221550]: 2026-01-31 08:36:30.652 221554 DEBUG oslo_concurrency.processutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c638ce77-3d60-4891-ae4f-13f1b8384ea0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:30 np0005603609 nova_compute[221550]: 2026-01-31 08:36:30.754 221554 DEBUG nova.storage.rbd_utils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] resizing rbd image c638ce77-3d60-4891-ae4f-13f1b8384ea0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:36:30 np0005603609 nova_compute[221550]: 2026-01-31 08:36:30.798 221554 DEBUG nova.storage.rbd_utils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] resizing rbd image 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:36:30 np0005603609 nova_compute[221550]: 2026-01-31 08:36:30.893 221554 DEBUG nova.objects.instance [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lazy-loading 'migration_context' on Instance uuid c638ce77-3d60-4891-ae4f-13f1b8384ea0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:36:30 np0005603609 nova_compute[221550]: 2026-01-31 08:36:30.948 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:36:30 np0005603609 nova_compute[221550]: 2026-01-31 08:36:30.949 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Ensure instance console log exists: /var/lib/nova/instances/c638ce77-3d60-4891-ae4f-13f1b8384ea0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:36:30 np0005603609 nova_compute[221550]: 2026-01-31 08:36:30.950 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:30 np0005603609 nova_compute[221550]: 2026-01-31 08:36:30.950 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:30 np0005603609 nova_compute[221550]: 2026-01-31 08:36:30.951 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:31 np0005603609 nova_compute[221550]: 2026-01-31 08:36:31.199 221554 DEBUG nova.objects.instance [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'migration_context' on Instance uuid 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:36:31 np0005603609 nova_compute[221550]: 2026-01-31 08:36:31.274 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:36:31 np0005603609 nova_compute[221550]: 2026-01-31 08:36:31.275 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Ensure instance console log exists: /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:36:31 np0005603609 nova_compute[221550]: 2026-01-31 08:36:31.275 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:31 np0005603609 nova_compute[221550]: 2026-01-31 08:36:31.276 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:31 np0005603609 nova_compute[221550]: 2026-01-31 08:36:31.276 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:36:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:31.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:36:31 np0005603609 nova_compute[221550]: 2026-01-31 08:36:31.442 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:31 np0005603609 nova_compute[221550]: 2026-01-31 08:36:31.473 221554 DEBUG nova.network.neutron [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Successfully created port: 76093a46-2d78-41cb-a667-6e75fa0d8ea1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:36:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:31.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:32 np0005603609 nova_compute[221550]: 2026-01-31 08:36:32.312 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:32 np0005603609 nova_compute[221550]: 2026-01-31 08:36:32.626 221554 DEBUG nova.network.neutron [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Successfully created port: 0f92835b-d022-4857-a070-51a275dffd68 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:36:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:36:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:33.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:36:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:33.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:34 np0005603609 nova_compute[221550]: 2026-01-31 08:36:34.198 221554 DEBUG nova.network.neutron [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Successfully updated port: 76093a46-2d78-41cb-a667-6e75fa0d8ea1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:36:34 np0005603609 nova_compute[221550]: 2026-01-31 08:36:34.312 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:36:34 np0005603609 nova_compute[221550]: 2026-01-31 08:36:34.312 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:36:34 np0005603609 nova_compute[221550]: 2026-01-31 08:36:34.313 221554 DEBUG nova.network.neutron [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:36:34 np0005603609 nova_compute[221550]: 2026-01-31 08:36:34.361 221554 DEBUG nova.compute.manager [req-df8a276c-4d3c-49a7-bfe4-53485b49dbad req-8a00ec56-7ecd-4077-bf27-5ac03c20fbe9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received event network-changed-76093a46-2d78-41cb-a667-6e75fa0d8ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:36:34 np0005603609 nova_compute[221550]: 2026-01-31 08:36:34.361 221554 DEBUG nova.compute.manager [req-df8a276c-4d3c-49a7-bfe4-53485b49dbad req-8a00ec56-7ecd-4077-bf27-5ac03c20fbe9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Refreshing instance network info cache due to event network-changed-76093a46-2d78-41cb-a667-6e75fa0d8ea1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:36:34 np0005603609 nova_compute[221550]: 2026-01-31 08:36:34.361 221554 DEBUG oslo_concurrency.lockutils [req-df8a276c-4d3c-49a7-bfe4-53485b49dbad req-8a00ec56-7ecd-4077-bf27-5ac03c20fbe9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:36:34 np0005603609 nova_compute[221550]: 2026-01-31 08:36:34.574 221554 DEBUG nova.network.neutron [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:36:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:36:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:35.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:36:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:35.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:35 np0005603609 nova_compute[221550]: 2026-01-31 08:36:35.571 221554 DEBUG nova.network.neutron [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Successfully updated port: 0f92835b-d022-4857-a070-51a275dffd68 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:36:35 np0005603609 nova_compute[221550]: 2026-01-31 08:36:35.600 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Acquiring lock "refresh_cache-c638ce77-3d60-4891-ae4f-13f1b8384ea0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:36:35 np0005603609 nova_compute[221550]: 2026-01-31 08:36:35.601 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Acquired lock "refresh_cache-c638ce77-3d60-4891-ae4f-13f1b8384ea0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:36:35 np0005603609 nova_compute[221550]: 2026-01-31 08:36:35.601 221554 DEBUG nova.network.neutron [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:36:35 np0005603609 nova_compute[221550]: 2026-01-31 08:36:35.795 221554 DEBUG nova.compute.manager [req-78dc89ec-468c-498f-b8e0-038870203cc2 req-76200ece-7d6d-4995-a6fd-8f5097d14b92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Received event network-changed-0f92835b-d022-4857-a070-51a275dffd68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:36:35 np0005603609 nova_compute[221550]: 2026-01-31 08:36:35.796 221554 DEBUG nova.compute.manager [req-78dc89ec-468c-498f-b8e0-038870203cc2 req-76200ece-7d6d-4995-a6fd-8f5097d14b92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Refreshing instance network info cache due to event network-changed-0f92835b-d022-4857-a070-51a275dffd68. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:36:35 np0005603609 nova_compute[221550]: 2026-01-31 08:36:35.797 221554 DEBUG oslo_concurrency.lockutils [req-78dc89ec-468c-498f-b8e0-038870203cc2 req-76200ece-7d6d-4995-a6fd-8f5097d14b92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c638ce77-3d60-4891-ae4f-13f1b8384ea0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:36:35 np0005603609 nova_compute[221550]: 2026-01-31 08:36:35.981 221554 DEBUG nova.network.neutron [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.390 221554 DEBUG nova.network.neutron [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Updating instance_info_cache with network_info: [{"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.443 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.597 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.598 221554 DEBUG nova.compute.manager [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Instance network_info: |[{"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.598 221554 DEBUG oslo_concurrency.lockutils [req-df8a276c-4d3c-49a7-bfe4-53485b49dbad req-8a00ec56-7ecd-4077-bf27-5ac03c20fbe9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.598 221554 DEBUG nova.network.neutron [req-df8a276c-4d3c-49a7-bfe4-53485b49dbad req-8a00ec56-7ecd-4077-bf27-5ac03c20fbe9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Refreshing network info cache for port 76093a46-2d78-41cb-a667-6e75fa0d8ea1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.601 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Start _get_guest_xml network_info=[{"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.606 221554 WARNING nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.613 221554 DEBUG nova.virt.libvirt.host [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.613 221554 DEBUG nova.virt.libvirt.host [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.621 221554 DEBUG nova.virt.libvirt.host [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.622 221554 DEBUG nova.virt.libvirt.host [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.623 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.623 221554 DEBUG nova.virt.hardware [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.624 221554 DEBUG nova.virt.hardware [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.624 221554 DEBUG nova.virt.hardware [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.624 221554 DEBUG nova.virt.hardware [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.624 221554 DEBUG nova.virt.hardware [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.625 221554 DEBUG nova.virt.hardware [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.625 221554 DEBUG nova.virt.hardware [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.625 221554 DEBUG nova.virt.hardware [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.625 221554 DEBUG nova.virt.hardware [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.626 221554 DEBUG nova.virt.hardware [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.626 221554 DEBUG nova.virt.hardware [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:36:36 np0005603609 nova_compute[221550]: 2026-01-31 08:36:36.629 221554 DEBUG oslo_concurrency.processutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:36:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/120920381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.159 221554 DEBUG oslo_concurrency.processutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.185 221554 DEBUG nova.storage.rbd_utils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.189 221554 DEBUG oslo_concurrency.processutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.314 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.324 221554 DEBUG nova.network.neutron [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Updating instance_info_cache with network_info: [{"id": "0f92835b-d022-4857-a070-51a275dffd68", "address": "fa:16:3e:8e:4d:9d", "network": {"id": "f52e3ef7-1bb2-472a-b0a8-0f1822212a00", "bridge": "br-int", "label": "tempest-network-smoke--1388052711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49eb9aae5e984a72b3c0b1e5316e2172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f92835b-d0", "ovs_interfaceid": "0f92835b-d022-4857-a070-51a275dffd68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:36:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:37.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:37.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:36:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2551731456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.626 221554 DEBUG oslo_concurrency.processutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.628 221554 DEBUG nova.virt.libvirt.vif [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2063916310',display_name='tempest-TestNetworkAdvancedServerOps-server-2063916310',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2063916310',id=180,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNv8t9ljSfsHe7/DF/aoJ/vR6fscWRM4GStsWxMUVpZD9QqPEzaqes4Daeg29ZseIZh//ZdOETFJ8kFU4RZhY+wlBZNr//GGJe6GeYVX3d0ryXxCUAo4WEPt20+EJ7KK2A==',key_name='tempest-TestNetworkAdvancedServerOps-728081448',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-v607b9ot',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:36:28Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=4ba5c6bd-5d05-4c33-ac40-620abd0a83cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.628 221554 DEBUG nova.network.os_vif_util [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.629 221554 DEBUG nova.network.os_vif_util [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:fd:51,bridge_name='br-int',has_traffic_filtering=True,id=76093a46-2d78-41cb-a667-6e75fa0d8ea1,network=Network(64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76093a46-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.630 221554 DEBUG nova.objects.instance [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'pci_devices' on Instance uuid 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.672 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Releasing lock "refresh_cache-c638ce77-3d60-4891-ae4f-13f1b8384ea0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.673 221554 DEBUG nova.compute.manager [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Instance network_info: |[{"id": "0f92835b-d022-4857-a070-51a275dffd68", "address": "fa:16:3e:8e:4d:9d", "network": {"id": "f52e3ef7-1bb2-472a-b0a8-0f1822212a00", "bridge": "br-int", "label": "tempest-network-smoke--1388052711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49eb9aae5e984a72b3c0b1e5316e2172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f92835b-d0", "ovs_interfaceid": "0f92835b-d022-4857-a070-51a275dffd68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.673 221554 DEBUG oslo_concurrency.lockutils [req-78dc89ec-468c-498f-b8e0-038870203cc2 req-76200ece-7d6d-4995-a6fd-8f5097d14b92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c638ce77-3d60-4891-ae4f-13f1b8384ea0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.673 221554 DEBUG nova.network.neutron [req-78dc89ec-468c-498f-b8e0-038870203cc2 req-76200ece-7d6d-4995-a6fd-8f5097d14b92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Refreshing network info cache for port 0f92835b-d022-4857-a070-51a275dffd68 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.676 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Start _get_guest_xml network_info=[{"id": "0f92835b-d022-4857-a070-51a275dffd68", "address": "fa:16:3e:8e:4d:9d", "network": {"id": "f52e3ef7-1bb2-472a-b0a8-0f1822212a00", "bridge": "br-int", "label": "tempest-network-smoke--1388052711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49eb9aae5e984a72b3c0b1e5316e2172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f92835b-d0", "ovs_interfaceid": "0f92835b-d022-4857-a070-51a275dffd68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.680 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  <uuid>4ba5c6bd-5d05-4c33-ac40-620abd0a83cc</uuid>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  <name>instance-000000b4</name>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-2063916310</nova:name>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:36:36</nova:creationTime>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        <nova:user uuid="4d0e9d918b4041fabd5ded633b4cf404">tempest-TestNetworkAdvancedServerOps-483180749-project-member</nova:user>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        <nova:project uuid="9710f0cf77d84353ae13fa47922b085d">tempest-TestNetworkAdvancedServerOps-483180749</nova:project>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        <nova:port uuid="76093a46-2d78-41cb-a667-6e75fa0d8ea1">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <entry name="serial">4ba5c6bd-5d05-4c33-ac40-620abd0a83cc</entry>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <entry name="uuid">4ba5c6bd-5d05-4c33-ac40-620abd0a83cc</entry>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk.config">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:88:fd:51"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <target dev="tap76093a46-2d"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc/console.log" append="off"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:36:37 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:36:37 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:36:37 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:36:37 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.681 221554 DEBUG nova.compute.manager [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Preparing to wait for external event network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.681 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.681 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.681 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.682 221554 DEBUG nova.virt.libvirt.vif [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2063916310',display_name='tempest-TestNetworkAdvancedServerOps-server-2063916310',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2063916310',id=180,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNv8t9ljSfsHe7/DF/aoJ/vR6fscWRM4GStsWxMUVpZD9QqPEzaqes4Daeg29ZseIZh//ZdOETFJ8kFU4RZhY+wlBZNr//GGJe6GeYVX3d0ryXxCUAo4WEPt20+EJ7KK2A==',key_name='tempest-TestNetworkAdvancedServerOps-728081448',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-v607b9ot',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:36:28Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=4ba5c6bd-5d05-4c33-ac40-620abd0a83cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.682 221554 DEBUG nova.network.os_vif_util [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.682 221554 DEBUG nova.network.os_vif_util [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:fd:51,bridge_name='br-int',has_traffic_filtering=True,id=76093a46-2d78-41cb-a667-6e75fa0d8ea1,network=Network(64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76093a46-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.683 221554 DEBUG os_vif [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:fd:51,bridge_name='br-int',has_traffic_filtering=True,id=76093a46-2d78-41cb-a667-6e75fa0d8ea1,network=Network(64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76093a46-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.683 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.683 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.684 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.686 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.687 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76093a46-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.687 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76093a46-2d, col_values=(('external_ids', {'iface-id': '76093a46-2d78-41cb-a667-6e75fa0d8ea1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:fd:51', 'vm-uuid': '4ba5c6bd-5d05-4c33-ac40-620abd0a83cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.688 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:37 np0005603609 NetworkManager[49064]: <info>  [1769848597.6895] manager: (tap76093a46-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.690 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.691 221554 WARNING nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.694 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.695 221554 INFO os_vif [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:fd:51,bridge_name='br-int',has_traffic_filtering=True,id=76093a46-2d78-41cb-a667-6e75fa0d8ea1,network=Network(64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76093a46-2d')#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.696 221554 DEBUG nova.virt.libvirt.host [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.696 221554 DEBUG nova.virt.libvirt.host [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.703 221554 DEBUG nova.virt.libvirt.host [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.704 221554 DEBUG nova.virt.libvirt.host [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.705 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.705 221554 DEBUG nova.virt.hardware [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.705 221554 DEBUG nova.virt.hardware [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.706 221554 DEBUG nova.virt.hardware [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.706 221554 DEBUG nova.virt.hardware [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.706 221554 DEBUG nova.virt.hardware [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.707 221554 DEBUG nova.virt.hardware [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.707 221554 DEBUG nova.virt.hardware [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.707 221554 DEBUG nova.virt.hardware [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.707 221554 DEBUG nova.virt.hardware [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.708 221554 DEBUG nova.virt.hardware [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.708 221554 DEBUG nova.virt.hardware [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:36:37 np0005603609 nova_compute[221550]: 2026-01-31 08:36:37.711 221554 DEBUG oslo_concurrency.processutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:38 np0005603609 nova_compute[221550]: 2026-01-31 08:36:38.075 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:36:38 np0005603609 nova_compute[221550]: 2026-01-31 08:36:38.076 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:36:38 np0005603609 nova_compute[221550]: 2026-01-31 08:36:38.076 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No VIF found with MAC fa:16:3e:88:fd:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:36:38 np0005603609 nova_compute[221550]: 2026-01-31 08:36:38.077 221554 INFO nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Using config drive#033[00m
Jan 31 03:36:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:36:38 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3052520029' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:36:38 np0005603609 nova_compute[221550]: 2026-01-31 08:36:38.808 221554 DEBUG nova.storage.rbd_utils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:39 np0005603609 nova_compute[221550]: 2026-01-31 08:36:39.131 221554 DEBUG oslo_concurrency.processutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:39 np0005603609 nova_compute[221550]: 2026-01-31 08:36:39.163 221554 DEBUG nova.storage.rbd_utils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] rbd image c638ce77-3d60-4891-ae4f-13f1b8384ea0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:39 np0005603609 nova_compute[221550]: 2026-01-31 08:36:39.167 221554 DEBUG oslo_concurrency.processutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:39.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:39 np0005603609 nova_compute[221550]: 2026-01-31 08:36:39.457 221554 DEBUG nova.network.neutron [req-df8a276c-4d3c-49a7-bfe4-53485b49dbad req-8a00ec56-7ecd-4077-bf27-5ac03c20fbe9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Updated VIF entry in instance network info cache for port 76093a46-2d78-41cb-a667-6e75fa0d8ea1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:36:39 np0005603609 nova_compute[221550]: 2026-01-31 08:36:39.457 221554 DEBUG nova.network.neutron [req-df8a276c-4d3c-49a7-bfe4-53485b49dbad req-8a00ec56-7ecd-4077-bf27-5ac03c20fbe9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Updating instance_info_cache with network_info: [{"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:36:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:39.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:39 np0005603609 nova_compute[221550]: 2026-01-31 08:36:39.576 221554 DEBUG oslo_concurrency.lockutils [req-df8a276c-4d3c-49a7-bfe4-53485b49dbad req-8a00ec56-7ecd-4077-bf27-5ac03c20fbe9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:36:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:36:39 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/572176419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:36:39 np0005603609 nova_compute[221550]: 2026-01-31 08:36:39.617 221554 DEBUG oslo_concurrency.processutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:39 np0005603609 nova_compute[221550]: 2026-01-31 08:36:39.618 221554 DEBUG nova.virt.libvirt.vif [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:36:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-92385002-access_point-931142024',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-92385002-access_point-931142024',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-92385002-acce',id=181,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxOKLNsAU/yJwUiFbFeg34/S+DmzZCSSYkljzx7+oYjYeWQ/G/bzOBTfKetQno+jXBYbEJAKnhKJ4gv3aNYzFmvQotKrGil62sZ53vGFjz+l4DiJYUYaw16eFWjvuPGLw==',key_name='tempest-TestSecurityGroupsBasicOps-1592407990',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='49eb9aae5e984a72b3c0b1e5316e2172',ramdisk_id='',reservation_id='r-y8et8sz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-92385002',owner_user_name='tempest-TestSecurityGroupsBasicOps-92385002-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:36:29Z,user_data=None,user_id='6bb0d6544be445f6a04b52b326336c8f',uuid=c638ce77-3d60-4891-ae4f-13f1b8384ea0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f92835b-d022-4857-a070-51a275dffd68", "address": "fa:16:3e:8e:4d:9d", "network": {"id": "f52e3ef7-1bb2-472a-b0a8-0f1822212a00", "bridge": "br-int", "label": "tempest-network-smoke--1388052711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "49eb9aae5e984a72b3c0b1e5316e2172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f92835b-d0", "ovs_interfaceid": "0f92835b-d022-4857-a070-51a275dffd68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:36:39 np0005603609 nova_compute[221550]: 2026-01-31 08:36:39.618 221554 DEBUG nova.network.os_vif_util [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Converting VIF {"id": "0f92835b-d022-4857-a070-51a275dffd68", "address": "fa:16:3e:8e:4d:9d", "network": {"id": "f52e3ef7-1bb2-472a-b0a8-0f1822212a00", "bridge": "br-int", "label": "tempest-network-smoke--1388052711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49eb9aae5e984a72b3c0b1e5316e2172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f92835b-d0", "ovs_interfaceid": "0f92835b-d022-4857-a070-51a275dffd68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:36:39 np0005603609 nova_compute[221550]: 2026-01-31 08:36:39.619 221554 DEBUG nova.network.os_vif_util [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:4d:9d,bridge_name='br-int',has_traffic_filtering=True,id=0f92835b-d022-4857-a070-51a275dffd68,network=Network(f52e3ef7-1bb2-472a-b0a8-0f1822212a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f92835b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:36:39 np0005603609 nova_compute[221550]: 2026-01-31 08:36:39.620 221554 DEBUG nova.objects.instance [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lazy-loading 'pci_devices' on Instance uuid c638ce77-3d60-4891-ae4f-13f1b8384ea0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.000 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  <uuid>c638ce77-3d60-4891-ae4f-13f1b8384ea0</uuid>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  <name>instance-000000b5</name>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-92385002-access_point-931142024</nova:name>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:36:37</nova:creationTime>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        <nova:user uuid="6bb0d6544be445f6a04b52b326336c8f">tempest-TestSecurityGroupsBasicOps-92385002-project-member</nova:user>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        <nova:project uuid="49eb9aae5e984a72b3c0b1e5316e2172">tempest-TestSecurityGroupsBasicOps-92385002</nova:project>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        <nova:port uuid="0f92835b-d022-4857-a070-51a275dffd68">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <entry name="serial">c638ce77-3d60-4891-ae4f-13f1b8384ea0</entry>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <entry name="uuid">c638ce77-3d60-4891-ae4f-13f1b8384ea0</entry>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/c638ce77-3d60-4891-ae4f-13f1b8384ea0_disk">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/c638ce77-3d60-4891-ae4f-13f1b8384ea0_disk.config">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:8e:4d:9d"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <target dev="tap0f92835b-d0"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/c638ce77-3d60-4891-ae4f-13f1b8384ea0/console.log" append="off"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:36:40 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:36:40 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:36:40 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:36:40 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.001 221554 DEBUG nova.compute.manager [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Preparing to wait for external event network-vif-plugged-0f92835b-d022-4857-a070-51a275dffd68 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.002 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Acquiring lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.002 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.003 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.004 221554 DEBUG nova.virt.libvirt.vif [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:36:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-92385002-access_point-931142024',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-92385002-access_point-931142024',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-92385002-acce',id=181,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxOKLNsAU/yJwUiFbFeg34/S+DmzZCSSYkljzx7+oYjYeWQ/G/bzOBTfKetQno+jXBYbEJAKnhKJ4gv3aNYzFmvQotKrGil62sZ53vGFjz+l4DiJYUYaw16eFWjvuPGLw==',key_name='tempest-TestSecurityGroupsBasicOps-1592407990',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='49eb9aae5e984a72b3c0b1e5316e2172',ramdisk_id='',reservation_id='r-y8et8sz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-92385002',owner_user_name='tempest-TestSecurityGroupsBasicOps-92385002-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:36:29Z,user_data=None,user_id='6bb0d6544be445f6a04b52b326336c8f',uuid=c638ce77-3d60-4891-ae4f-13f1b8384ea0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f92835b-d022-4857-a070-51a275dffd68", "address": "fa:16:3e:8e:4d:9d", "network": {"id": "f52e3ef7-1bb2-472a-b0a8-0f1822212a00", "bridge": "br-int", "label": "tempest-network-smoke--1388052711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "49eb9aae5e984a72b3c0b1e5316e2172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f92835b-d0", "ovs_interfaceid": "0f92835b-d022-4857-a070-51a275dffd68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.004 221554 DEBUG nova.network.os_vif_util [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Converting VIF {"id": "0f92835b-d022-4857-a070-51a275dffd68", "address": "fa:16:3e:8e:4d:9d", "network": {"id": "f52e3ef7-1bb2-472a-b0a8-0f1822212a00", "bridge": "br-int", "label": "tempest-network-smoke--1388052711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49eb9aae5e984a72b3c0b1e5316e2172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f92835b-d0", "ovs_interfaceid": "0f92835b-d022-4857-a070-51a275dffd68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.005 221554 DEBUG nova.network.os_vif_util [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:4d:9d,bridge_name='br-int',has_traffic_filtering=True,id=0f92835b-d022-4857-a070-51a275dffd68,network=Network(f52e3ef7-1bb2-472a-b0a8-0f1822212a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f92835b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.006 221554 DEBUG os_vif [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:4d:9d,bridge_name='br-int',has_traffic_filtering=True,id=0f92835b-d022-4857-a070-51a275dffd68,network=Network(f52e3ef7-1bb2-472a-b0a8-0f1822212a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f92835b-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.007 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.008 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.009 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.012 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.012 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f92835b-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.013 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f92835b-d0, col_values=(('external_ids', {'iface-id': '0f92835b-d022-4857-a070-51a275dffd68', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:4d:9d', 'vm-uuid': 'c638ce77-3d60-4891-ae4f-13f1b8384ea0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.015 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:40 np0005603609 NetworkManager[49064]: <info>  [1769848600.0162] manager: (tap0f92835b-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.019 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.019 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.022 221554 INFO os_vif [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:4d:9d,bridge_name='br-int',has_traffic_filtering=True,id=0f92835b-d022-4857-a070-51a275dffd68,network=Network(f52e3ef7-1bb2-472a-b0a8-0f1822212a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f92835b-d0')#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.728 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.728 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.729 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] No VIF found with MAC fa:16:3e:8e:4d:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.729 221554 INFO nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Using config drive#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.754 221554 DEBUG nova.storage.rbd_utils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] rbd image c638ce77-3d60-4891-ae4f-13f1b8384ea0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.937 221554 INFO nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Creating config drive at /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc/disk.config#033[00m
Jan 31 03:36:40 np0005603609 nova_compute[221550]: 2026-01-31 08:36:40.941 221554 DEBUG oslo_concurrency.processutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpo6hfnfqv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:41 np0005603609 nova_compute[221550]: 2026-01-31 08:36:41.370 221554 DEBUG oslo_concurrency.processutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpo6hfnfqv" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:36:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:41.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:36:41 np0005603609 nova_compute[221550]: 2026-01-31 08:36:41.402 221554 DEBUG nova.storage.rbd_utils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:41 np0005603609 nova_compute[221550]: 2026-01-31 08:36:41.528 221554 DEBUG oslo_concurrency.processutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc/disk.config 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:36:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:41.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:36:41 np0005603609 nova_compute[221550]: 2026-01-31 08:36:41.551 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:41 np0005603609 nova_compute[221550]: 2026-01-31 08:36:41.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:42 np0005603609 nova_compute[221550]: 2026-01-31 08:36:42.405 221554 DEBUG oslo_concurrency.processutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc/disk.config 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.877s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:42 np0005603609 nova_compute[221550]: 2026-01-31 08:36:42.406 221554 INFO nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Deleting local config drive /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc/disk.config because it was imported into RBD.#033[00m
Jan 31 03:36:42 np0005603609 NetworkManager[49064]: <info>  [1769848602.4466] manager: (tap76093a46-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/368)
Jan 31 03:36:42 np0005603609 kernel: tap76093a46-2d: entered promiscuous mode
Jan 31 03:36:42 np0005603609 ovn_controller[130359]: 2026-01-31T08:36:42Z|00793|binding|INFO|Claiming lport 76093a46-2d78-41cb-a667-6e75fa0d8ea1 for this chassis.
Jan 31 03:36:42 np0005603609 nova_compute[221550]: 2026-01-31 08:36:42.451 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:42 np0005603609 ovn_controller[130359]: 2026-01-31T08:36:42Z|00794|binding|INFO|76093a46-2d78-41cb-a667-6e75fa0d8ea1: Claiming fa:16:3e:88:fd:51 10.100.0.7
Jan 31 03:36:42 np0005603609 nova_compute[221550]: 2026-01-31 08:36:42.459 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:42 np0005603609 systemd-udevd[296771]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:36:42 np0005603609 systemd-machined[190912]: New machine qemu-97-instance-000000b4.
Jan 31 03:36:42 np0005603609 NetworkManager[49064]: <info>  [1769848602.4803] device (tap76093a46-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:36:42 np0005603609 NetworkManager[49064]: <info>  [1769848602.4811] device (tap76093a46-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:36:42 np0005603609 ovn_controller[130359]: 2026-01-31T08:36:42Z|00795|binding|INFO|Setting lport 76093a46-2d78-41cb-a667-6e75fa0d8ea1 ovn-installed in OVS
Jan 31 03:36:42 np0005603609 systemd[1]: Started Virtual Machine qemu-97-instance-000000b4.
Jan 31 03:36:42 np0005603609 nova_compute[221550]: 2026-01-31 08:36:42.485 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:42 np0005603609 ovn_controller[130359]: 2026-01-31T08:36:42Z|00796|binding|INFO|Setting lport 76093a46-2d78-41cb-a667-6e75fa0d8ea1 up in Southbound
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.702 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:fd:51 10.100.0.7'], port_security=['fa:16:3e:88:fd:51 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4ba5c6bd-5d05-4c33-ac40-620abd0a83cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd5ab7674-17fe-499b-8e6a-01f2b29e7eda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e2cf9ef-4f0c-4703-b0c5-cb44d772423a, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=76093a46-2d78-41cb-a667-6e75fa0d8ea1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.703 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 76093a46-2d78-41cb-a667-6e75fa0d8ea1 in datapath 64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4 bound to our chassis#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.705 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.714 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2777f854-f474-4361-a6fa-93a6b51fabdd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.715 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap64b5ffb1-71 in ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.717 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap64b5ffb1-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.717 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ba1d58fb-8ffa-4d4a-a8ce-718662e3e0cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.718 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4218f8d2-9b84-4952-93d1-7ad2a444583e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.728 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d01b8c-94a6-4a5c-b57a-65955804fcd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.737 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e37f9ccb-3972-40e8-bfbd-8947282cff0d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.756 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[04665ebe-5f5a-4ba9-8372-55d6f1676ff9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:42 np0005603609 NetworkManager[49064]: <info>  [1769848602.7614] manager: (tap64b5ffb1-70): new Veth device (/org/freedesktop/NetworkManager/Devices/369)
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.761 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ee9e0e-f253-4600-bbde-3f2f7af20008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.783 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[aef6d6fd-2233-4074-9bd3-df3214a6de5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.787 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd2b26e-9e17-4679-86ae-250510686bef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:42 np0005603609 NetworkManager[49064]: <info>  [1769848602.8038] device (tap64b5ffb1-70): carrier: link connected
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.808 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[1fca28c5-2b2f-4bed-a85e-e9bdea259cf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.822 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a11d455c-a8d1-45a1-9b72-2e6ee8d892a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64b5ffb1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:ee:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884250, 'reachable_time': 40223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296843, 'error': None, 'target': 'ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.834 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[202383e1-176d-44bc-8784-c499199bd657]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:eede'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 884250, 'tstamp': 884250}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296844, 'error': None, 'target': 'ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.848 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c2fd6bfd-01ac-4694-8fb9-1946aabbf18d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64b5ffb1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:ee:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884250, 'reachable_time': 40223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296846, 'error': None, 'target': 'ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.868 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0d88866d-9e8f-4c5d-ab57-4686cfdcc7ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.903 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a7cb60f7-fb8c-4580-887c-56d969caa70a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.904 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64b5ffb1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.904 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.904 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64b5ffb1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:42 np0005603609 NetworkManager[49064]: <info>  [1769848602.9071] manager: (tap64b5ffb1-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Jan 31 03:36:42 np0005603609 nova_compute[221550]: 2026-01-31 08:36:42.906 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:42 np0005603609 kernel: tap64b5ffb1-70: entered promiscuous mode
Jan 31 03:36:42 np0005603609 nova_compute[221550]: 2026-01-31 08:36:42.910 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.911 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64b5ffb1-70, col_values=(('external_ids', {'iface-id': '9f64b659-abb7-4398-919b-9d703dbc3cd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:42 np0005603609 nova_compute[221550]: 2026-01-31 08:36:42.912 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:42 np0005603609 ovn_controller[130359]: 2026-01-31T08:36:42Z|00797|binding|INFO|Releasing lport 9f64b659-abb7-4398-919b-9d703dbc3cd7 from this chassis (sb_readonly=0)
Jan 31 03:36:42 np0005603609 nova_compute[221550]: 2026-01-31 08:36:42.918 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:42 np0005603609 nova_compute[221550]: 2026-01-31 08:36:42.921 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.922 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.923 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4f13e955-6663-4bf6-a210-60d22fdf82a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.923 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4.pid.haproxy
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:36:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:42.924 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'env', 'PROCESS_TAG=haproxy-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:36:42 np0005603609 nova_compute[221550]: 2026-01-31 08:36:42.929 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848602.9295814, 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:36:42 np0005603609 nova_compute[221550]: 2026-01-31 08:36:42.930 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] VM Started (Lifecycle Event)#033[00m
Jan 31 03:36:43 np0005603609 nova_compute[221550]: 2026-01-31 08:36:43.249 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:36:43 np0005603609 nova_compute[221550]: 2026-01-31 08:36:43.255 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848602.9297073, 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:36:43 np0005603609 nova_compute[221550]: 2026-01-31 08:36:43.256 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:36:43 np0005603609 podman[296888]: 2026-01-31 08:36:43.196279785 +0000 UTC m=+0.021140457 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:36:43 np0005603609 podman[296888]: 2026-01-31 08:36:43.385368746 +0000 UTC m=+0.210229408 container create 751014c8c9d061220b93125c5cc1430530a62cc546504732d89ce8f545bcfd5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:36:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:43.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:43 np0005603609 systemd[1]: Started libpod-conmon-751014c8c9d061220b93125c5cc1430530a62cc546504732d89ce8f545bcfd5e.scope.
Jan 31 03:36:43 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:36:43 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5eeb43ea228f6bda75aa1edd93f0fb0fdf8a506490b2d6e64ff030ee6f2f275/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:43 np0005603609 podman[296888]: 2026-01-31 08:36:43.460273469 +0000 UTC m=+0.285134131 container init 751014c8c9d061220b93125c5cc1430530a62cc546504732d89ce8f545bcfd5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:36:43 np0005603609 podman[296888]: 2026-01-31 08:36:43.464388839 +0000 UTC m=+0.289249481 container start 751014c8c9d061220b93125c5cc1430530a62cc546504732d89ce8f545bcfd5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:36:43 np0005603609 neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4[296904]: [NOTICE]   (296908) : New worker (296910) forked
Jan 31 03:36:43 np0005603609 neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4[296904]: [NOTICE]   (296908) : Loading success.
Jan 31 03:36:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:43.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:43 np0005603609 nova_compute[221550]: 2026-01-31 08:36:43.757 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:36:43 np0005603609 nova_compute[221550]: 2026-01-31 08:36:43.761 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:36:44 np0005603609 nova_compute[221550]: 2026-01-31 08:36:44.128 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:36:45 np0005603609 nova_compute[221550]: 2026-01-31 08:36:45.018 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:45.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:45.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:46 np0005603609 nova_compute[221550]: 2026-01-31 08:36:46.447 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:47.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:47.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:47 np0005603609 nova_compute[221550]: 2026-01-31 08:36:47.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:47 np0005603609 nova_compute[221550]: 2026-01-31 08:36:47.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.426 221554 DEBUG nova.compute.manager [req-6aa88961-85fc-49d4-b8ee-16433c5c1323 req-5fcaa227-203d-44a9-836a-404546845829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received event network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.426 221554 DEBUG oslo_concurrency.lockutils [req-6aa88961-85fc-49d4-b8ee-16433c5c1323 req-5fcaa227-203d-44a9-836a-404546845829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.426 221554 DEBUG oslo_concurrency.lockutils [req-6aa88961-85fc-49d4-b8ee-16433c5c1323 req-5fcaa227-203d-44a9-836a-404546845829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.427 221554 DEBUG oslo_concurrency.lockutils [req-6aa88961-85fc-49d4-b8ee-16433c5c1323 req-5fcaa227-203d-44a9-836a-404546845829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.427 221554 DEBUG nova.compute.manager [req-6aa88961-85fc-49d4-b8ee-16433c5c1323 req-5fcaa227-203d-44a9-836a-404546845829 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Processing event network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.428 221554 DEBUG nova.compute.manager [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.431 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848608.4313302, 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.432 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.434 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.437 221554 INFO nova.virt.libvirt.driver [-] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Instance spawned successfully.#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.438 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.579 221554 DEBUG nova.network.neutron [req-78dc89ec-468c-498f-b8e0-038870203cc2 req-76200ece-7d6d-4995-a6fd-8f5097d14b92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Updated VIF entry in instance network info cache for port 0f92835b-d022-4857-a070-51a275dffd68. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.579 221554 DEBUG nova.network.neutron [req-78dc89ec-468c-498f-b8e0-038870203cc2 req-76200ece-7d6d-4995-a6fd-8f5097d14b92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Updating instance_info_cache with network_info: [{"id": "0f92835b-d022-4857-a070-51a275dffd68", "address": "fa:16:3e:8e:4d:9d", "network": {"id": "f52e3ef7-1bb2-472a-b0a8-0f1822212a00", "bridge": "br-int", "label": "tempest-network-smoke--1388052711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49eb9aae5e984a72b3c0b1e5316e2172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f92835b-d0", "ovs_interfaceid": "0f92835b-d022-4857-a070-51a275dffd68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.736 221554 DEBUG oslo_concurrency.lockutils [req-78dc89ec-468c-498f-b8e0-038870203cc2 req-76200ece-7d6d-4995-a6fd-8f5097d14b92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c638ce77-3d60-4891-ae4f-13f1b8384ea0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.741 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.741 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.742 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.742 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.743 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.743 221554 DEBUG nova.virt.libvirt.driver [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.749 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:36:48 np0005603609 nova_compute[221550]: 2026-01-31 08:36:48.754 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.026 221554 INFO nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Creating config drive at /var/lib/nova/instances/c638ce77-3d60-4891-ae4f-13f1b8384ea0/disk.config#033[00m
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.036 221554 DEBUG oslo_concurrency.processutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c638ce77-3d60-4891-ae4f-13f1b8384ea0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpn9udiskl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.164 221554 DEBUG oslo_concurrency.processutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c638ce77-3d60-4891-ae4f-13f1b8384ea0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpn9udiskl" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.198 221554 DEBUG nova.storage.rbd_utils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] rbd image c638ce77-3d60-4891-ae4f-13f1b8384ea0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.203 221554 DEBUG oslo_concurrency.processutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c638ce77-3d60-4891-ae4f-13f1b8384ea0/disk.config c638ce77-3d60-4891-ae4f-13f1b8384ea0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.247 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:36:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:36:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:49.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.426 221554 DEBUG oslo_concurrency.processutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c638ce77-3d60-4891-ae4f-13f1b8384ea0/disk.config c638ce77-3d60-4891-ae4f-13f1b8384ea0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.427 221554 INFO nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Deleting local config drive /var/lib/nova/instances/c638ce77-3d60-4891-ae4f-13f1b8384ea0/disk.config because it was imported into RBD.#033[00m
Jan 31 03:36:49 np0005603609 kernel: tap0f92835b-d0: entered promiscuous mode
Jan 31 03:36:49 np0005603609 NetworkManager[49064]: <info>  [1769848609.4824] manager: (tap0f92835b-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/371)
Jan 31 03:36:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:36:49Z|00798|binding|INFO|Claiming lport 0f92835b-d022-4857-a070-51a275dffd68 for this chassis.
Jan 31 03:36:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:36:49Z|00799|binding|INFO|0f92835b-d022-4857-a070-51a275dffd68: Claiming fa:16:3e:8e:4d:9d 10.100.0.6
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.489 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.497 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:49 np0005603609 systemd-udevd[296970]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.514 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:36:49Z|00800|binding|INFO|Setting lport 0f92835b-d022-4857-a070-51a275dffd68 ovn-installed in OVS
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.519 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:49 np0005603609 NetworkManager[49064]: <info>  [1769848609.5223] device (tap0f92835b-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:36:49 np0005603609 NetworkManager[49064]: <info>  [1769848609.5231] device (tap0f92835b-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:36:49 np0005603609 systemd-machined[190912]: New machine qemu-98-instance-000000b5.
Jan 31 03:36:49 np0005603609 systemd[1]: Started Virtual Machine qemu-98-instance-000000b5.
Jan 31 03:36:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:49.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:36:49Z|00801|binding|INFO|Setting lport 0f92835b-d022-4857-a070-51a275dffd68 up in Southbound
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.632 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:4d:9d 10.100.0.6'], port_security=['fa:16:3e:8e:4d:9d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c638ce77-3d60-4891-ae4f-13f1b8384ea0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f52e3ef7-1bb2-472a-b0a8-0f1822212a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '49eb9aae5e984a72b3c0b1e5316e2172', 'neutron:revision_number': '2', 'neutron:security_group_ids': '661c0951-33fe-453d-a602-b30e1d6cecdf 6de8b2f5-9ab8-4f57-be0e-97ab50a84c11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf11fd30-9366-4332-8686-a12be15026af, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=0f92835b-d022-4857-a070-51a275dffd68) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.634 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 0f92835b-d022-4857-a070-51a275dffd68 in datapath f52e3ef7-1bb2-472a-b0a8-0f1822212a00 bound to our chassis#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.635 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f52e3ef7-1bb2-472a-b0a8-0f1822212a00#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.645 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[28aedd75-7cb7-452b-8bb4-03ebac88490b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.646 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf52e3ef7-11 in ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.648 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf52e3ef7-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.648 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9f24f926-092e-4f63-84e0-21247e0f9f01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.650 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9238eadb-6de6-4bf0-a113-6d4b09b775a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.659 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[a03a521e-550f-4009-8c41-197e797a59f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.671 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d8ac00-62b1-4649-9b38-a46cde3847e9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.693 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd05b28-52a8-40d7-a3ef-66f430995524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:49 np0005603609 NetworkManager[49064]: <info>  [1769848609.6999] manager: (tapf52e3ef7-10): new Veth device (/org/freedesktop/NetworkManager/Devices/372)
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.699 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[71c09b8a-c717-4220-af74-97f8a4d478d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.728 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[e7785aec-09d2-4cac-9f88-6e3963972c5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.730 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7e8bbd-bb03-4887-ab90-8b6537c120c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:49 np0005603609 NetworkManager[49064]: <info>  [1769848609.7488] device (tapf52e3ef7-10): carrier: link connected
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.753 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc47d51-809b-4322-8503-eb700eda7d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.769 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a2554089-113a-43bf-aaec-105570db7c26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf52e3ef7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:cd:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884944, 'reachable_time': 39302, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297006, 'error': None, 'target': 'ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.783 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7738906e-818a-4ccf-9b02-fa8bf9e071fc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:cdce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 884944, 'tstamp': 884944}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297007, 'error': None, 'target': 'ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.798 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ce91a829-917a-49d6-b5ce-2020f68d056c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf52e3ef7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:cd:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884944, 'reachable_time': 39302, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297008, 'error': None, 'target': 'ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.819 221554 INFO nova.compute.manager [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Took 20.73 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.819 221554 DEBUG nova.compute.manager [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.827 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4e512325-e5ac-4043-9ed6-7b34bc0fa15b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.876 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[63646489-a8fa-4379-bf3c-9930cf9ad35b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.878 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf52e3ef7-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.878 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.879 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf52e3ef7-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:49 np0005603609 NetworkManager[49064]: <info>  [1769848609.8822] manager: (tapf52e3ef7-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.881 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:49 np0005603609 kernel: tapf52e3ef7-10: entered promiscuous mode
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.885 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.886 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf52e3ef7-10, col_values=(('external_ids', {'iface-id': '37df9605-20bf-4033-8a64-5f2ba69b6771'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:49 np0005603609 ovn_controller[130359]: 2026-01-31T08:36:49Z|00802|binding|INFO|Releasing lport 37df9605-20bf-4033-8a64-5f2ba69b6771 from this chassis (sb_readonly=0)
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.887 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.888 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f52e3ef7-1bb2-472a-b0a8-0f1822212a00.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f52e3ef7-1bb2-472a-b0a8-0f1822212a00.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.889 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[eadc0d21-79e7-48cf-96e5-4a1471febd81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.890 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-f52e3ef7-1bb2-472a-b0a8-0f1822212a00
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/f52e3ef7-1bb2-472a-b0a8-0f1822212a00.pid.haproxy
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID f52e3ef7-1bb2-472a-b0a8-0f1822212a00
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:36:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:49.891 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00', 'env', 'PROCESS_TAG=haproxy-f52e3ef7-1bb2-472a-b0a8-0f1822212a00', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f52e3ef7-1bb2-472a-b0a8-0f1822212a00.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:36:49 np0005603609 nova_compute[221550]: 2026-01-31 08:36:49.892 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:50 np0005603609 nova_compute[221550]: 2026-01-31 08:36:50.020 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:50 np0005603609 nova_compute[221550]: 2026-01-31 08:36:50.145 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848610.1450984, c638ce77-3d60-4891-ae4f-13f1b8384ea0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:36:50 np0005603609 nova_compute[221550]: 2026-01-31 08:36:50.146 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] VM Started (Lifecycle Event)#033[00m
Jan 31 03:36:50 np0005603609 podman[297082]: 2026-01-31 08:36:50.190240169 +0000 UTC m=+0.024157168 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:36:50 np0005603609 podman[297082]: 2026-01-31 08:36:50.287290651 +0000 UTC m=+0.121207610 container create 5978d1224d27546629eb18a530cf337cafbf1a0b4a2441c345bb516a258319b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:36:50 np0005603609 nova_compute[221550]: 2026-01-31 08:36:50.295 221554 INFO nova.compute.manager [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Took 26.20 seconds to build instance.#033[00m
Jan 31 03:36:50 np0005603609 systemd[1]: Started libpod-conmon-5978d1224d27546629eb18a530cf337cafbf1a0b4a2441c345bb516a258319b7.scope.
Jan 31 03:36:50 np0005603609 nova_compute[221550]: 2026-01-31 08:36:50.375 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:36:50 np0005603609 nova_compute[221550]: 2026-01-31 08:36:50.378 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848610.145969, c638ce77-3d60-4891-ae4f-13f1b8384ea0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:36:50 np0005603609 nova_compute[221550]: 2026-01-31 08:36:50.379 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:36:50 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:36:50 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935e4a8d8a49f7fc128ea968559455e3739762cc3db8e03f7d009f8515d4b494/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:50 np0005603609 podman[297082]: 2026-01-31 08:36:50.436745008 +0000 UTC m=+0.270662067 container init 5978d1224d27546629eb18a530cf337cafbf1a0b4a2441c345bb516a258319b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 31 03:36:50 np0005603609 podman[297082]: 2026-01-31 08:36:50.441843002 +0000 UTC m=+0.275760011 container start 5978d1224d27546629eb18a530cf337cafbf1a0b4a2441c345bb516a258319b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 03:36:50 np0005603609 neutron-haproxy-ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00[297098]: [NOTICE]   (297102) : New worker (297104) forked
Jan 31 03:36:50 np0005603609 neutron-haproxy-ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00[297098]: [NOTICE]   (297102) : Loading success.
Jan 31 03:36:51 np0005603609 nova_compute[221550]: 2026-01-31 08:36:51.034 221554 DEBUG oslo_concurrency.lockutils [None req-7f36115a-858f-4da3-9983-e9ddc9b11bd0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 28.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:51 np0005603609 nova_compute[221550]: 2026-01-31 08:36:51.136 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:36:51 np0005603609 nova_compute[221550]: 2026-01-31 08:36:51.143 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:36:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:51.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:51 np0005603609 nova_compute[221550]: 2026-01-31 08:36:51.449 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:51 np0005603609 nova_compute[221550]: 2026-01-31 08:36:51.560 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:36:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:51.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:51 np0005603609 nova_compute[221550]: 2026-01-31 08:36:51.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:51 np0005603609 nova_compute[221550]: 2026-01-31 08:36:51.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:36:51 np0005603609 nova_compute[221550]: 2026-01-31 08:36:51.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:36:51 np0005603609 nova_compute[221550]: 2026-01-31 08:36:51.793 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:36:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:52 np0005603609 nova_compute[221550]: 2026-01-31 08:36:52.451 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:36:52 np0005603609 nova_compute[221550]: 2026-01-31 08:36:52.452 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:36:52 np0005603609 nova_compute[221550]: 2026-01-31 08:36:52.453 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:36:52 np0005603609 nova_compute[221550]: 2026-01-31 08:36:52.453 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.271 221554 DEBUG nova.compute.manager [req-a5f1d66d-b14c-49fd-b756-c246e7151978 req-99a6b7ec-88d0-4a19-9f84-12ab055290ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received event network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.272 221554 DEBUG oslo_concurrency.lockutils [req-a5f1d66d-b14c-49fd-b756-c246e7151978 req-99a6b7ec-88d0-4a19-9f84-12ab055290ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.272 221554 DEBUG oslo_concurrency.lockutils [req-a5f1d66d-b14c-49fd-b756-c246e7151978 req-99a6b7ec-88d0-4a19-9f84-12ab055290ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.272 221554 DEBUG oslo_concurrency.lockutils [req-a5f1d66d-b14c-49fd-b756-c246e7151978 req-99a6b7ec-88d0-4a19-9f84-12ab055290ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.272 221554 DEBUG nova.compute.manager [req-a5f1d66d-b14c-49fd-b756-c246e7151978 req-99a6b7ec-88d0-4a19-9f84-12ab055290ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] No waiting events found dispatching network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.273 221554 WARNING nova.compute.manager [req-a5f1d66d-b14c-49fd-b756-c246e7151978 req-99a6b7ec-88d0-4a19-9f84-12ab055290ef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received unexpected event network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:36:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:53.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.511 221554 DEBUG nova.compute.manager [req-cd69dda4-cbbd-4e05-adb5-308cd3179fa4 req-fe11a96b-b376-47c6-9ca8-e814421fa0e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Received event network-vif-plugged-0f92835b-d022-4857-a070-51a275dffd68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.512 221554 DEBUG oslo_concurrency.lockutils [req-cd69dda4-cbbd-4e05-adb5-308cd3179fa4 req-fe11a96b-b376-47c6-9ca8-e814421fa0e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.513 221554 DEBUG oslo_concurrency.lockutils [req-cd69dda4-cbbd-4e05-adb5-308cd3179fa4 req-fe11a96b-b376-47c6-9ca8-e814421fa0e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.513 221554 DEBUG oslo_concurrency.lockutils [req-cd69dda4-cbbd-4e05-adb5-308cd3179fa4 req-fe11a96b-b376-47c6-9ca8-e814421fa0e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.513 221554 DEBUG nova.compute.manager [req-cd69dda4-cbbd-4e05-adb5-308cd3179fa4 req-fe11a96b-b376-47c6-9ca8-e814421fa0e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Processing event network-vif-plugged-0f92835b-d022-4857-a070-51a275dffd68 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.514 221554 DEBUG nova.compute.manager [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.539 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848613.5388594, c638ce77-3d60-4891-ae4f-13f1b8384ea0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.539 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.541 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.545 221554 INFO nova.virt.libvirt.driver [-] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Instance spawned successfully.#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.545 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:36:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:53.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.700 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.701 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.701 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.701 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.702 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.702 221554 DEBUG nova.virt.libvirt.driver [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.706 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:36:53 np0005603609 nova_compute[221550]: 2026-01-31 08:36:53.708 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:36:54 np0005603609 nova_compute[221550]: 2026-01-31 08:36:54.016 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:36:54 np0005603609 podman[297114]: 2026-01-31 08:36:54.202147933 +0000 UTC m=+0.086260310 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:36:54 np0005603609 podman[297115]: 2026-01-31 08:36:54.202556862 +0000 UTC m=+0.084768823 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 03:36:54 np0005603609 nova_compute[221550]: 2026-01-31 08:36:54.439 221554 INFO nova.compute.manager [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Took 24.64 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:36:54 np0005603609 nova_compute[221550]: 2026-01-31 08:36:54.439 221554 DEBUG nova.compute.manager [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:36:55 np0005603609 nova_compute[221550]: 2026-01-31 08:36:55.029 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:55.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:55.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:56 np0005603609 nova_compute[221550]: 2026-01-31 08:36:56.288 221554 INFO nova.compute.manager [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Took 28.69 seconds to build instance.#033[00m
Jan 31 03:36:56 np0005603609 nova_compute[221550]: 2026-01-31 08:36:56.451 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:56 np0005603609 nova_compute[221550]: 2026-01-31 08:36:56.725 221554 DEBUG oslo_concurrency.lockutils [None req-26794b57-894c-4ced-9ca6-436cc5c0131e 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 30.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:57.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:36:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:57.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:36:58 np0005603609 nova_compute[221550]: 2026-01-31 08:36:58.393 221554 DEBUG nova.compute.manager [req-4e4a22a1-674f-4a17-b859-830a2b607618 req-e32a9eae-0969-41bf-932d-2f19cbded4f4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Received event network-vif-plugged-0f92835b-d022-4857-a070-51a275dffd68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:36:58 np0005603609 nova_compute[221550]: 2026-01-31 08:36:58.394 221554 DEBUG oslo_concurrency.lockutils [req-4e4a22a1-674f-4a17-b859-830a2b607618 req-e32a9eae-0969-41bf-932d-2f19cbded4f4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:58 np0005603609 nova_compute[221550]: 2026-01-31 08:36:58.394 221554 DEBUG oslo_concurrency.lockutils [req-4e4a22a1-674f-4a17-b859-830a2b607618 req-e32a9eae-0969-41bf-932d-2f19cbded4f4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:58 np0005603609 nova_compute[221550]: 2026-01-31 08:36:58.394 221554 DEBUG oslo_concurrency.lockutils [req-4e4a22a1-674f-4a17-b859-830a2b607618 req-e32a9eae-0969-41bf-932d-2f19cbded4f4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:58 np0005603609 nova_compute[221550]: 2026-01-31 08:36:58.394 221554 DEBUG nova.compute.manager [req-4e4a22a1-674f-4a17-b859-830a2b607618 req-e32a9eae-0969-41bf-932d-2f19cbded4f4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] No waiting events found dispatching network-vif-plugged-0f92835b-d022-4857-a070-51a275dffd68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:36:58 np0005603609 nova_compute[221550]: 2026-01-31 08:36:58.395 221554 WARNING nova.compute.manager [req-4e4a22a1-674f-4a17-b859-830a2b607618 req-e32a9eae-0969-41bf-932d-2f19cbded4f4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Received unexpected event network-vif-plugged-0f92835b-d022-4857-a070-51a275dffd68 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:36:58 np0005603609 nova_compute[221550]: 2026-01-31 08:36:58.752 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:58.753 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:36:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:36:58.754 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:36:59 np0005603609 nova_compute[221550]: 2026-01-31 08:36:59.115 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Updating instance_info_cache with network_info: [{"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:36:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:59.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:36:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:59.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:59 np0005603609 nova_compute[221550]: 2026-01-31 08:36:59.818 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:36:59 np0005603609 nova_compute[221550]: 2026-01-31 08:36:59.819 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:36:59 np0005603609 nova_compute[221550]: 2026-01-31 08:36:59.819 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.032 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.086 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Triggering sync for uuid 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.086 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Triggering sync for uuid c638ce77-3d60-4891-ae4f-13f1b8384ea0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.086 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.087 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.087 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.087 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.087 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.087 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.088 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.301 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.302 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.302 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.303 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.303 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.327 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:00 np0005603609 nova_compute[221550]: 2026-01-31 08:37:00.328 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:01.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:01 np0005603609 nova_compute[221550]: 2026-01-31 08:37:01.451 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:37:01 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4265750067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:37:01 np0005603609 nova_compute[221550]: 2026-01-31 08:37:01.529 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:01.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:01 np0005603609 nova_compute[221550]: 2026-01-31 08:37:01.951 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:37:01 np0005603609 nova_compute[221550]: 2026-01-31 08:37:01.951 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:37:01 np0005603609 nova_compute[221550]: 2026-01-31 08:37:01.954 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:37:01 np0005603609 nova_compute[221550]: 2026-01-31 08:37:01.954 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:37:02 np0005603609 nova_compute[221550]: 2026-01-31 08:37:02.089 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:37:02 np0005603609 nova_compute[221550]: 2026-01-31 08:37:02.090 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3850MB free_disk=20.900920867919922GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:37:02 np0005603609 nova_compute[221550]: 2026-01-31 08:37:02.091 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:02 np0005603609 nova_compute[221550]: 2026-01-31 08:37:02.091 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:02 np0005603609 nova_compute[221550]: 2026-01-31 08:37:02.475 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:37:02 np0005603609 nova_compute[221550]: 2026-01-31 08:37:02.476 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance c638ce77-3d60-4891-ae4f-13f1b8384ea0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:37:02 np0005603609 nova_compute[221550]: 2026-01-31 08:37:02.477 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:37:02 np0005603609 nova_compute[221550]: 2026-01-31 08:37:02.477 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:37:02 np0005603609 nova_compute[221550]: 2026-01-31 08:37:02.632 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:37:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.3 total, 600.0 interval
Cumulative writes: 14K writes, 73K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1556 writes, 7147 keys, 1556 commit groups, 1.0 writes per commit group, ingest: 15.33 MB, 0.03 MB/s
Interval WAL: 1555 writes, 1555 syncs, 1.00 writes per sync, written: 0.01 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     44.8      1.95              0.20        46    0.042       0      0       0.0       0.0
  L6      1/0    9.97 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.1     73.9     63.0      7.11              1.03        45    0.158    320K    24K       0.0       0.0
 Sum      1/0    9.97 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.1     58.0     59.1      9.06              1.23        91    0.100    320K    24K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.7     47.5     47.3      1.04              0.13         8    0.131     39K   2043       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     73.9     63.0      7.11              1.03        45    0.158    320K    24K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     45.6      1.92              0.20        45    0.043       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 5400.3 total, 600.0 interval
Flush(GB): cumulative 0.085, interval 0.006
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.52 GB write, 0.10 MB/s write, 0.51 GB read, 0.10 MB/s read, 9.1 seconds
Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5619ad8ed1f0#2 capacity: 304.00 MB usage: 57.14 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000311 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3317,54.80 MB,18.0278%) FilterBlock(91,891.36 KB,0.286338%) IndexBlock(91,1.46 MB,0.480978%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 31 03:37:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:03.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:03.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:37:03 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3154223725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:37:03 np0005603609 nova_compute[221550]: 2026-01-31 08:37:03.629 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.998s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:03 np0005603609 nova_compute[221550]: 2026-01-31 08:37:03.634 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:37:04 np0005603609 nova_compute[221550]: 2026-01-31 08:37:04.111 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:37:04 np0005603609 nova_compute[221550]: 2026-01-31 08:37:04.520 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:37:04 np0005603609 nova_compute[221550]: 2026-01-31 08:37:04.521 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:04 np0005603609 nova_compute[221550]: 2026-01-31 08:37:04.521 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:04 np0005603609 nova_compute[221550]: 2026-01-31 08:37:04.521 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:37:04 np0005603609 nova_compute[221550]: 2026-01-31 08:37:04.889 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:04 np0005603609 NetworkManager[49064]: <info>  [1769848624.8898] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Jan 31 03:37:04 np0005603609 NetworkManager[49064]: <info>  [1769848624.8907] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Jan 31 03:37:04 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:04Z|00803|binding|INFO|Releasing lport 9f64b659-abb7-4398-919b-9d703dbc3cd7 from this chassis (sb_readonly=0)
Jan 31 03:37:04 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:04Z|00804|binding|INFO|Releasing lport 37df9605-20bf-4033-8a64-5f2ba69b6771 from this chassis (sb_readonly=0)
Jan 31 03:37:04 np0005603609 nova_compute[221550]: 2026-01-31 08:37:04.904 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:05 np0005603609 nova_compute[221550]: 2026-01-31 08:37:05.033 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:05.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:05.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:05 np0005603609 nova_compute[221550]: 2026-01-31 08:37:05.688 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:05 np0005603609 nova_compute[221550]: 2026-01-31 08:37:05.688 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:05 np0005603609 nova_compute[221550]: 2026-01-31 08:37:05.688 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:06 np0005603609 nova_compute[221550]: 2026-01-31 08:37:06.453 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:07.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:07.529 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:07.530 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:07.530 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:07.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:07 np0005603609 nova_compute[221550]: 2026-01-31 08:37:07.893 221554 DEBUG nova.compute.manager [req-e5c69349-02c5-40f2-821e-7181d38f3db7 req-3fb9c70d-9d36-448d-9442-351480fe1871 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received event network-changed-76093a46-2d78-41cb-a667-6e75fa0d8ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:07 np0005603609 nova_compute[221550]: 2026-01-31 08:37:07.893 221554 DEBUG nova.compute.manager [req-e5c69349-02c5-40f2-821e-7181d38f3db7 req-3fb9c70d-9d36-448d-9442-351480fe1871 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Refreshing instance network info cache due to event network-changed-76093a46-2d78-41cb-a667-6e75fa0d8ea1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:37:07 np0005603609 nova_compute[221550]: 2026-01-31 08:37:07.894 221554 DEBUG oslo_concurrency.lockutils [req-e5c69349-02c5-40f2-821e-7181d38f3db7 req-3fb9c70d-9d36-448d-9442-351480fe1871 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:37:07 np0005603609 nova_compute[221550]: 2026-01-31 08:37:07.894 221554 DEBUG oslo_concurrency.lockutils [req-e5c69349-02c5-40f2-821e-7181d38f3db7 req-3fb9c70d-9d36-448d-9442-351480fe1871 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:37:07 np0005603609 nova_compute[221550]: 2026-01-31 08:37:07.895 221554 DEBUG nova.network.neutron [req-e5c69349-02c5-40f2-821e-7181d38f3db7 req-3fb9c70d-9d36-448d-9442-351480fe1871 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Refreshing network info cache for port 76093a46-2d78-41cb-a667-6e75fa0d8ea1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:37:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:08Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:fd:51 10.100.0.7
Jan 31 03:37:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:08Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:fd:51 10.100.0.7
Jan 31 03:37:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:08Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8e:4d:9d 10.100.0.6
Jan 31 03:37:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:08Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:4d:9d 10.100.0.6
Jan 31 03:37:08 np0005603609 nova_compute[221550]: 2026-01-31 08:37:08.623 221554 DEBUG nova.compute.manager [req-0655fc8f-cd2d-42f0-a441-a409c4b75d30 req-eeeb0726-70be-49d2-8e5f-231a9c16b723 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Received event network-changed-0f92835b-d022-4857-a070-51a275dffd68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:08 np0005603609 nova_compute[221550]: 2026-01-31 08:37:08.624 221554 DEBUG nova.compute.manager [req-0655fc8f-cd2d-42f0-a441-a409c4b75d30 req-eeeb0726-70be-49d2-8e5f-231a9c16b723 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Refreshing instance network info cache due to event network-changed-0f92835b-d022-4857-a070-51a275dffd68. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:37:08 np0005603609 nova_compute[221550]: 2026-01-31 08:37:08.624 221554 DEBUG oslo_concurrency.lockutils [req-0655fc8f-cd2d-42f0-a441-a409c4b75d30 req-eeeb0726-70be-49d2-8e5f-231a9c16b723 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c638ce77-3d60-4891-ae4f-13f1b8384ea0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:37:08 np0005603609 nova_compute[221550]: 2026-01-31 08:37:08.624 221554 DEBUG oslo_concurrency.lockutils [req-0655fc8f-cd2d-42f0-a441-a409c4b75d30 req-eeeb0726-70be-49d2-8e5f-231a9c16b723 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c638ce77-3d60-4891-ae4f-13f1b8384ea0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:37:08 np0005603609 nova_compute[221550]: 2026-01-31 08:37:08.625 221554 DEBUG nova.network.neutron [req-0655fc8f-cd2d-42f0-a441-a409c4b75d30 req-eeeb0726-70be-49d2-8e5f-231a9c16b723 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Refreshing network info cache for port 0f92835b-d022-4857-a070-51a275dffd68 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:37:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:08.756 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:08Z|00805|binding|INFO|Releasing lport 9f64b659-abb7-4398-919b-9d703dbc3cd7 from this chassis (sb_readonly=0)
Jan 31 03:37:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:08Z|00806|binding|INFO|Releasing lport 37df9605-20bf-4033-8a64-5f2ba69b6771 from this chassis (sb_readonly=0)
Jan 31 03:37:08 np0005603609 nova_compute[221550]: 2026-01-31 08:37:08.804 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:37:09.393221) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848629393301, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2377, "num_deletes": 251, "total_data_size": 5745301, "memory_usage": 5822032, "flush_reason": "Manual Compaction"}
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Jan 31 03:37:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:09.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:09.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848629647256, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 3755670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71821, "largest_seqno": 74193, "table_properties": {"data_size": 3746166, "index_size": 5997, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19787, "raw_average_key_size": 20, "raw_value_size": 3727160, "raw_average_value_size": 3842, "num_data_blocks": 262, "num_entries": 970, "num_filter_entries": 970, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848413, "oldest_key_time": 1769848413, "file_creation_time": 1769848629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 254084 microseconds, and 6967 cpu microseconds.
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:37:09.647321) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 3755670 bytes OK
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:37:09.647339) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:37:09.723668) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:37:09.723711) EVENT_LOG_v1 {"time_micros": 1769848629723701, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:37:09.723742) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 5734951, prev total WAL file size 5734951, number of live WAL files 2.
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:37:09.725073) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(3667KB)], [147(10204KB)]
Jan 31 03:37:09 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848629725173, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 14205234, "oldest_snapshot_seqno": -1}
Jan 31 03:37:10 np0005603609 nova_compute[221550]: 2026-01-31 08:37:10.035 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:10 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 9621 keys, 12285154 bytes, temperature: kUnknown
Jan 31 03:37:10 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848630043339, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 12285154, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12223364, "index_size": 36653, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 253516, "raw_average_key_size": 26, "raw_value_size": 12054804, "raw_average_value_size": 1252, "num_data_blocks": 1397, "num_entries": 9621, "num_filter_entries": 9621, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769848629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:37:10 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:37:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:37:10.043582) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 12285154 bytes
Jan 31 03:37:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:37:10.134089) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 44.6 rd, 38.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.0 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 10138, records dropped: 517 output_compression: NoCompression
Jan 31 03:37:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:37:10.134149) EVENT_LOG_v1 {"time_micros": 1769848630134127, "job": 94, "event": "compaction_finished", "compaction_time_micros": 318227, "compaction_time_cpu_micros": 32942, "output_level": 6, "num_output_files": 1, "total_output_size": 12285154, "num_input_records": 10138, "num_output_records": 9621, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:37:10 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:37:10 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848630135525, "job": 94, "event": "table_file_deletion", "file_number": 149}
Jan 31 03:37:10 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:37:10 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848630137250, "job": 94, "event": "table_file_deletion", "file_number": 147}
Jan 31 03:37:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:37:09.724888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:37:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:37:10.137317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:37:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:37:10.137325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:37:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:37:10.137328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:37:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:37:10.137332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:37:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:37:10.137335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:37:11 np0005603609 nova_compute[221550]: 2026-01-31 08:37:11.192 221554 DEBUG nova.network.neutron [req-e5c69349-02c5-40f2-821e-7181d38f3db7 req-3fb9c70d-9d36-448d-9442-351480fe1871 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Updated VIF entry in instance network info cache for port 76093a46-2d78-41cb-a667-6e75fa0d8ea1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:37:11 np0005603609 nova_compute[221550]: 2026-01-31 08:37:11.193 221554 DEBUG nova.network.neutron [req-e5c69349-02c5-40f2-821e-7181d38f3db7 req-3fb9c70d-9d36-448d-9442-351480fe1871 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Updating instance_info_cache with network_info: [{"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:37:11 np0005603609 nova_compute[221550]: 2026-01-31 08:37:11.255 221554 DEBUG oslo_concurrency.lockutils [req-e5c69349-02c5-40f2-821e-7181d38f3db7 req-3fb9c70d-9d36-448d-9442-351480fe1871 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:37:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:11.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:11 np0005603609 nova_compute[221550]: 2026-01-31 08:37:11.454 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:11.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:11 np0005603609 nova_compute[221550]: 2026-01-31 08:37:11.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:11 np0005603609 nova_compute[221550]: 2026-01-31 08:37:11.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:37:11 np0005603609 nova_compute[221550]: 2026-01-31 08:37:11.689 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:37:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:37:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:37:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:37:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:37:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:37:12 np0005603609 nova_compute[221550]: 2026-01-31 08:37:12.524 221554 DEBUG nova.network.neutron [req-0655fc8f-cd2d-42f0-a441-a409c4b75d30 req-eeeb0726-70be-49d2-8e5f-231a9c16b723 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Updated VIF entry in instance network info cache for port 0f92835b-d022-4857-a070-51a275dffd68. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:37:12 np0005603609 nova_compute[221550]: 2026-01-31 08:37:12.525 221554 DEBUG nova.network.neutron [req-0655fc8f-cd2d-42f0-a441-a409c4b75d30 req-eeeb0726-70be-49d2-8e5f-231a9c16b723 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Updating instance_info_cache with network_info: [{"id": "0f92835b-d022-4857-a070-51a275dffd68", "address": "fa:16:3e:8e:4d:9d", "network": {"id": "f52e3ef7-1bb2-472a-b0a8-0f1822212a00", "bridge": "br-int", "label": "tempest-network-smoke--1388052711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49eb9aae5e984a72b3c0b1e5316e2172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f92835b-d0", "ovs_interfaceid": "0f92835b-d022-4857-a070-51a275dffd68", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:37:12 np0005603609 nova_compute[221550]: 2026-01-31 08:37:12.619 221554 DEBUG oslo_concurrency.lockutils [req-0655fc8f-cd2d-42f0-a441-a409c4b75d30 req-eeeb0726-70be-49d2-8e5f-231a9c16b723 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c638ce77-3d60-4891-ae4f-13f1b8384ea0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:37:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:37:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:13.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:37:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:13.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:15 np0005603609 nova_compute[221550]: 2026-01-31 08:37:15.037 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:15.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 03:37:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:15.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 03:37:16 np0005603609 nova_compute[221550]: 2026-01-31 08:37:16.457 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:16 np0005603609 nova_compute[221550]: 2026-01-31 08:37:16.710 221554 INFO nova.compute.manager [None req-a2cf60f5-9081-441b-bea5-f4b23d5dff0b 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Get console output#033[00m
Jan 31 03:37:16 np0005603609 nova_compute[221550]: 2026-01-31 08:37:16.715 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:37:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:17.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:17.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:37:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:19.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:37:19 np0005603609 nova_compute[221550]: 2026-01-31 08:37:19.471 221554 INFO nova.compute.manager [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Rebuilding instance#033[00m
Jan 31 03:37:19 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:37:19 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:37:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:19.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:19 np0005603609 nova_compute[221550]: 2026-01-31 08:37:19.769 221554 DEBUG nova.objects.instance [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:19 np0005603609 nova_compute[221550]: 2026-01-31 08:37:19.929 221554 DEBUG nova.compute.manager [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:37:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:37:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:37:20 np0005603609 nova_compute[221550]: 2026-01-31 08:37:20.038 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:20 np0005603609 nova_compute[221550]: 2026-01-31 08:37:20.205 221554 DEBUG nova.objects.instance [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'pci_requests' on Instance uuid 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:20 np0005603609 nova_compute[221550]: 2026-01-31 08:37:20.281 221554 DEBUG nova.objects.instance [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'pci_devices' on Instance uuid 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:20 np0005603609 nova_compute[221550]: 2026-01-31 08:37:20.494 221554 DEBUG nova.objects.instance [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'resources' on Instance uuid 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:20 np0005603609 nova_compute[221550]: 2026-01-31 08:37:20.983 221554 DEBUG nova.objects.instance [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'migration_context' on Instance uuid 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:37:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:21.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:37:21 np0005603609 nova_compute[221550]: 2026-01-31 08:37:21.454 221554 DEBUG nova.objects.instance [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 03:37:21 np0005603609 nova_compute[221550]: 2026-01-31 08:37:21.458 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:37:21 np0005603609 nova_compute[221550]: 2026-01-31 08:37:21.459 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:21.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:23.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:23.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:23 np0005603609 kernel: tap76093a46-2d (unregistering): left promiscuous mode
Jan 31 03:37:23 np0005603609 NetworkManager[49064]: <info>  [1769848643.8611] device (tap76093a46-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:37:23 np0005603609 nova_compute[221550]: 2026-01-31 08:37:23.910 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:23Z|00807|binding|INFO|Releasing lport 76093a46-2d78-41cb-a667-6e75fa0d8ea1 from this chassis (sb_readonly=0)
Jan 31 03:37:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:23Z|00808|binding|INFO|Setting lport 76093a46-2d78-41cb-a667-6e75fa0d8ea1 down in Southbound
Jan 31 03:37:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:23Z|00809|binding|INFO|Removing iface tap76093a46-2d ovn-installed in OVS
Jan 31 03:37:23 np0005603609 nova_compute[221550]: 2026-01-31 08:37:23.914 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:23 np0005603609 nova_compute[221550]: 2026-01-31 08:37:23.917 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:23 np0005603609 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000b4.scope: Deactivated successfully.
Jan 31 03:37:23 np0005603609 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000b4.scope: Consumed 15.363s CPU time.
Jan 31 03:37:23 np0005603609 systemd-machined[190912]: Machine qemu-97-instance-000000b4 terminated.
Jan 31 03:37:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:24.196 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:fd:51 10.100.0.7'], port_security=['fa:16:3e:88:fd:51 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4ba5c6bd-5d05-4c33-ac40-620abd0a83cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd5ab7674-17fe-499b-8e6a-01f2b29e7eda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e2cf9ef-4f0c-4703-b0c5-cb44d772423a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=76093a46-2d78-41cb-a667-6e75fa0d8ea1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:37:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:24.198 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 76093a46-2d78-41cb-a667-6e75fa0d8ea1 in datapath 64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4 unbound from our chassis#033[00m
Jan 31 03:37:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:24.199 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:37:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:24.200 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[08f2dd77-cb51-457b-8243-0c79011689a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:24.200 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4 namespace which is not needed anymore#033[00m
Jan 31 03:37:24 np0005603609 neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4[296904]: [NOTICE]   (296908) : haproxy version is 2.8.14-c23fe91
Jan 31 03:37:24 np0005603609 neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4[296904]: [NOTICE]   (296908) : path to executable is /usr/sbin/haproxy
Jan 31 03:37:24 np0005603609 neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4[296904]: [WARNING]  (296908) : Exiting Master process...
Jan 31 03:37:24 np0005603609 neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4[296904]: [ALERT]    (296908) : Current worker (296910) exited with code 143 (Terminated)
Jan 31 03:37:24 np0005603609 neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4[296904]: [WARNING]  (296908) : All workers exited. Exiting... (0)
Jan 31 03:37:24 np0005603609 systemd[1]: libpod-751014c8c9d061220b93125c5cc1430530a62cc546504732d89ce8f545bcfd5e.scope: Deactivated successfully.
Jan 31 03:37:24 np0005603609 conmon[296904]: conmon 751014c8c9d061220b93 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-751014c8c9d061220b93125c5cc1430530a62cc546504732d89ce8f545bcfd5e.scope/container/memory.events
Jan 31 03:37:24 np0005603609 podman[297422]: 2026-01-31 08:37:24.315363506 +0000 UTC m=+0.045714963 container died 751014c8c9d061220b93125c5cc1430530a62cc546504732d89ce8f545bcfd5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 03:37:24 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-751014c8c9d061220b93125c5cc1430530a62cc546504732d89ce8f545bcfd5e-userdata-shm.mount: Deactivated successfully.
Jan 31 03:37:24 np0005603609 systemd[1]: var-lib-containers-storage-overlay-c5eeb43ea228f6bda75aa1edd93f0fb0fdf8a506490b2d6e64ff030ee6f2f275-merged.mount: Deactivated successfully.
Jan 31 03:37:24 np0005603609 podman[297422]: 2026-01-31 08:37:24.35581091 +0000 UTC m=+0.086162367 container cleanup 751014c8c9d061220b93125c5cc1430530a62cc546504732d89ce8f545bcfd5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:37:24 np0005603609 systemd[1]: libpod-conmon-751014c8c9d061220b93125c5cc1430530a62cc546504732d89ce8f545bcfd5e.scope: Deactivated successfully.
Jan 31 03:37:24 np0005603609 podman[297444]: 2026-01-31 08:37:24.403107581 +0000 UTC m=+0.074939634 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:37:24 np0005603609 podman[297477]: 2026-01-31 08:37:24.414756585 +0000 UTC m=+0.044383201 container remove 751014c8c9d061220b93125c5cc1430530a62cc546504732d89ce8f545bcfd5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:37:24 np0005603609 podman[297436]: 2026-01-31 08:37:24.418337782 +0000 UTC m=+0.091311063 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:37:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:24.418 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[008eec15-933a-4cd3-8e3b-e5c189eb8956]: (4, ('Sat Jan 31 08:37:24 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4 (751014c8c9d061220b93125c5cc1430530a62cc546504732d89ce8f545bcfd5e)\n751014c8c9d061220b93125c5cc1430530a62cc546504732d89ce8f545bcfd5e\nSat Jan 31 08:37:24 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4 (751014c8c9d061220b93125c5cc1430530a62cc546504732d89ce8f545bcfd5e)\n751014c8c9d061220b93125c5cc1430530a62cc546504732d89ce8f545bcfd5e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:24.420 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[13e489dd-4b5a-4bb1-bd3f-e29e96388e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:24.421 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64b5ffb1-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.422 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:24 np0005603609 kernel: tap64b5ffb1-70: left promiscuous mode
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.432 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:24.434 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a09f4a91-906b-4565-bb03-0908b8d77906]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:24.447 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f391d21f-b574-4c2a-a135-c69ace76cc3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:24.449 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ba1e7c8d-e0ae-4584-9102-0349c283ed13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:24.460 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4b853046-4053-434f-8dc1-afdb3dfd315f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884245, 'reachable_time': 36088, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297516, 'error': None, 'target': 'ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603609 systemd[1]: run-netns-ovnmeta\x2d64b5ffb1\x2d70ff\x2d4eaa\x2db745\x2d9b7ddd18f2b4.mount: Deactivated successfully.
Jan 31 03:37:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:24.463 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:37:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:24.463 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[9a468ddd-b804-4525-b5a5-7b970c2272d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.474 221554 INFO nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.478 221554 INFO nova.virt.libvirt.driver [-] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Instance destroyed successfully.#033[00m
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.481 221554 INFO nova.virt.libvirt.driver [-] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Instance destroyed successfully.#033[00m
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.482 221554 DEBUG nova.virt.libvirt.vif [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2063916310',display_name='tempest-TestNetworkAdvancedServerOps-server-2063916310',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2063916310',id=180,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNv8t9ljSfsHe7/DF/aoJ/vR6fscWRM4GStsWxMUVpZD9QqPEzaqes4Daeg29ZseIZh//ZdOETFJ8kFU4RZhY+wlBZNr//GGJe6GeYVX3d0ryXxCUAo4WEPt20+EJ7KK2A==',key_name='tempest-TestNetworkAdvancedServerOps-728081448',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:36:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-v607b9ot',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:37:17Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=4ba5c6bd-5d05-4c33-ac40-620abd0a83cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.482 221554 DEBUG nova.network.os_vif_util [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.483 221554 DEBUG nova.network.os_vif_util [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:fd:51,bridge_name='br-int',has_traffic_filtering=True,id=76093a46-2d78-41cb-a667-6e75fa0d8ea1,network=Network(64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76093a46-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.483 221554 DEBUG os_vif [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:fd:51,bridge_name='br-int',has_traffic_filtering=True,id=76093a46-2d78-41cb-a667-6e75fa0d8ea1,network=Network(64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76093a46-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.484 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.484 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76093a46-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.485 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.486 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.488 221554 INFO os_vif [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:fd:51,bridge_name='br-int',has_traffic_filtering=True,id=76093a46-2d78-41cb-a667-6e75fa0d8ea1,network=Network(64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76093a46-2d')#033[00m
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.685 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.913 221554 INFO nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Deleting instance files /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_del#033[00m
Jan 31 03:37:24 np0005603609 nova_compute[221550]: 2026-01-31 08:37:24.914 221554 INFO nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Deletion of /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_del complete#033[00m
Jan 31 03:37:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:25.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:25 np0005603609 nova_compute[221550]: 2026-01-31 08:37:25.452 221554 DEBUG nova.compute.manager [req-4623f98f-82fe-4907-959b-cfa29422c508 req-0227bb84-8ef4-4112-b189-69f02f925c9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received event network-vif-unplugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:25 np0005603609 nova_compute[221550]: 2026-01-31 08:37:25.453 221554 DEBUG oslo_concurrency.lockutils [req-4623f98f-82fe-4907-959b-cfa29422c508 req-0227bb84-8ef4-4112-b189-69f02f925c9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:25 np0005603609 nova_compute[221550]: 2026-01-31 08:37:25.453 221554 DEBUG oslo_concurrency.lockutils [req-4623f98f-82fe-4907-959b-cfa29422c508 req-0227bb84-8ef4-4112-b189-69f02f925c9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:25 np0005603609 nova_compute[221550]: 2026-01-31 08:37:25.453 221554 DEBUG oslo_concurrency.lockutils [req-4623f98f-82fe-4907-959b-cfa29422c508 req-0227bb84-8ef4-4112-b189-69f02f925c9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:25 np0005603609 nova_compute[221550]: 2026-01-31 08:37:25.453 221554 DEBUG nova.compute.manager [req-4623f98f-82fe-4907-959b-cfa29422c508 req-0227bb84-8ef4-4112-b189-69f02f925c9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] No waiting events found dispatching network-vif-unplugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:37:25 np0005603609 nova_compute[221550]: 2026-01-31 08:37:25.454 221554 WARNING nova.compute.manager [req-4623f98f-82fe-4907-959b-cfa29422c508 req-0227bb84-8ef4-4112-b189-69f02f925c9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received unexpected event network-vif-unplugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 for instance with vm_state active and task_state rebuilding.#033[00m
Jan 31 03:37:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:25.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:26 np0005603609 nova_compute[221550]: 2026-01-31 08:37:26.461 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.213 221554 DEBUG nova.compute.manager [req-aa9f494f-541e-43d8-96de-b28d39d0c89b req-3b14417d-bd9f-489e-8014-5ff3339a84a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Received event network-changed-0f92835b-d022-4857-a070-51a275dffd68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.214 221554 DEBUG nova.compute.manager [req-aa9f494f-541e-43d8-96de-b28d39d0c89b req-3b14417d-bd9f-489e-8014-5ff3339a84a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Refreshing instance network info cache due to event network-changed-0f92835b-d022-4857-a070-51a275dffd68. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.214 221554 DEBUG oslo_concurrency.lockutils [req-aa9f494f-541e-43d8-96de-b28d39d0c89b req-3b14417d-bd9f-489e-8014-5ff3339a84a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c638ce77-3d60-4891-ae4f-13f1b8384ea0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.214 221554 DEBUG oslo_concurrency.lockutils [req-aa9f494f-541e-43d8-96de-b28d39d0c89b req-3b14417d-bd9f-489e-8014-5ff3339a84a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c638ce77-3d60-4891-ae4f-13f1b8384ea0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.215 221554 DEBUG nova.network.neutron [req-aa9f494f-541e-43d8-96de-b28d39d0c89b req-3b14417d-bd9f-489e-8014-5ff3339a84a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Refreshing network info cache for port 0f92835b-d022-4857-a070-51a275dffd68 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:37:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:27.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:27.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.778 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.778 221554 INFO nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Creating image(s)#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.815 221554 DEBUG nova.storage.rbd_utils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.844 221554 DEBUG nova.storage.rbd_utils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.874 221554 DEBUG nova.storage.rbd_utils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.878 221554 DEBUG oslo_concurrency.processutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.916 221554 DEBUG nova.compute.manager [req-3b0ce3a0-cb06-4175-b266-b8bcf5efb441 req-f9afd303-0f78-4a8a-b680-0540d0ca2ca3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received event network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.917 221554 DEBUG oslo_concurrency.lockutils [req-3b0ce3a0-cb06-4175-b266-b8bcf5efb441 req-f9afd303-0f78-4a8a-b680-0540d0ca2ca3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.917 221554 DEBUG oslo_concurrency.lockutils [req-3b0ce3a0-cb06-4175-b266-b8bcf5efb441 req-f9afd303-0f78-4a8a-b680-0540d0ca2ca3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.917 221554 DEBUG oslo_concurrency.lockutils [req-3b0ce3a0-cb06-4175-b266-b8bcf5efb441 req-f9afd303-0f78-4a8a-b680-0540d0ca2ca3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.918 221554 DEBUG nova.compute.manager [req-3b0ce3a0-cb06-4175-b266-b8bcf5efb441 req-f9afd303-0f78-4a8a-b680-0540d0ca2ca3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] No waiting events found dispatching network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.918 221554 WARNING nova.compute.manager [req-3b0ce3a0-cb06-4175-b266-b8bcf5efb441 req-f9afd303-0f78-4a8a-b680-0540d0ca2ca3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received unexpected event network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 for instance with vm_state active and task_state rebuild_block_device_mapping.#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.951 221554 DEBUG oslo_concurrency.processutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.951 221554 DEBUG oslo_concurrency.lockutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.952 221554 DEBUG oslo_concurrency.lockutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.952 221554 DEBUG oslo_concurrency.lockutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.983 221554 DEBUG nova.storage.rbd_utils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:37:27 np0005603609 nova_compute[221550]: 2026-01-31 08:37:27.987 221554 DEBUG oslo_concurrency.processutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.080 221554 DEBUG oslo_concurrency.lockutils [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Acquiring lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.081 221554 DEBUG oslo_concurrency.lockutils [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.081 221554 DEBUG oslo_concurrency.lockutils [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Acquiring lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.081 221554 DEBUG oslo_concurrency.lockutils [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.082 221554 DEBUG oslo_concurrency.lockutils [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.084 221554 INFO nova.compute.manager [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Terminating instance#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.085 221554 DEBUG nova.compute.manager [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:37:28 np0005603609 kernel: tap0f92835b-d0 (unregistering): left promiscuous mode
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.144 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:28 np0005603609 NetworkManager[49064]: <info>  [1769848648.1451] device (tap0f92835b-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.150 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:28 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:28Z|00810|binding|INFO|Releasing lport 0f92835b-d022-4857-a070-51a275dffd68 from this chassis (sb_readonly=0)
Jan 31 03:37:28 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:28Z|00811|binding|INFO|Setting lport 0f92835b-d022-4857-a070-51a275dffd68 down in Southbound
Jan 31 03:37:28 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:28Z|00812|binding|INFO|Removing iface tap0f92835b-d0 ovn-installed in OVS
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.153 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.157 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:28 np0005603609 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000b5.scope: Deactivated successfully.
Jan 31 03:37:28 np0005603609 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000b5.scope: Consumed 13.801s CPU time.
Jan 31 03:37:28 np0005603609 systemd-machined[190912]: Machine qemu-98-instance-000000b5 terminated.
Jan 31 03:37:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:28.305 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:4d:9d 10.100.0.6'], port_security=['fa:16:3e:8e:4d:9d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c638ce77-3d60-4891-ae4f-13f1b8384ea0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f52e3ef7-1bb2-472a-b0a8-0f1822212a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '49eb9aae5e984a72b3c0b1e5316e2172', 'neutron:revision_number': '4', 'neutron:security_group_ids': '661c0951-33fe-453d-a602-b30e1d6cecdf 6de8b2f5-9ab8-4f57-be0e-97ab50a84c11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf11fd30-9366-4332-8686-a12be15026af, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=0f92835b-d022-4857-a070-51a275dffd68) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:37:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:28.306 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 0f92835b-d022-4857-a070-51a275dffd68 in datapath f52e3ef7-1bb2-472a-b0a8-0f1822212a00 unbound from our chassis#033[00m
Jan 31 03:37:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:28.307 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f52e3ef7-1bb2-472a-b0a8-0f1822212a00, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:37:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:28.308 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9139f9-85e3-41c7-8c4f-ef1a2d8724e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:28.309 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00 namespace which is not needed anymore#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.319 221554 INFO nova.virt.libvirt.driver [-] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Instance destroyed successfully.#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.319 221554 DEBUG nova.objects.instance [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lazy-loading 'resources' on Instance uuid c638ce77-3d60-4891-ae4f-13f1b8384ea0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.478 221554 DEBUG nova.virt.libvirt.vif [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:36:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-92385002-access_point-931142024',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-92385002-access_point-931142024',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-92385002-acce',id=181,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxOKLNsAU/yJwUiFbFeg34/S+DmzZCSSYkljzx7+oYjYeWQ/G/bzOBTfKetQno+jXBYbEJAKnhKJ4gv3aNYzFmvQotKrGil62sZ53vGFjz+l4DiJYUYaw16eFWjvuPGLw==',key_name='tempest-TestSecurityGroupsBasicOps-1592407990',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:36:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='49eb9aae5e984a72b3c0b1e5316e2172',ramdisk_id='',reservation_id='r-y8et8sz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-92385002',owner_user_name='tempest-TestSecurityGroupsBasicOps-92385002-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:36:55Z,user_data=None,user_id='6bb0d6544be445f6a04b52b326336c8f',uuid=c638ce77-3d60-4891-ae4f-13f1b8384ea0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f92835b-d022-4857-a070-51a275dffd68", "address": "fa:16:3e:8e:4d:9d", "network": {"id": "f52e3ef7-1bb2-472a-b0a8-0f1822212a00", "bridge": "br-int", "label": "tempest-network-smoke--1388052711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49eb9aae5e984a72b3c0b1e5316e2172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f92835b-d0", "ovs_interfaceid": "0f92835b-d022-4857-a070-51a275dffd68", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.479 221554 DEBUG nova.network.os_vif_util [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Converting VIF {"id": "0f92835b-d022-4857-a070-51a275dffd68", "address": "fa:16:3e:8e:4d:9d", "network": {"id": "f52e3ef7-1bb2-472a-b0a8-0f1822212a00", "bridge": "br-int", "label": "tempest-network-smoke--1388052711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49eb9aae5e984a72b3c0b1e5316e2172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f92835b-d0", "ovs_interfaceid": "0f92835b-d022-4857-a070-51a275dffd68", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.480 221554 DEBUG nova.network.os_vif_util [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:4d:9d,bridge_name='br-int',has_traffic_filtering=True,id=0f92835b-d022-4857-a070-51a275dffd68,network=Network(f52e3ef7-1bb2-472a-b0a8-0f1822212a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f92835b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.480 221554 DEBUG os_vif [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:4d:9d,bridge_name='br-int',has_traffic_filtering=True,id=0f92835b-d022-4857-a070-51a275dffd68,network=Network(f52e3ef7-1bb2-472a-b0a8-0f1822212a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f92835b-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.481 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.481 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f92835b-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.482 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.484 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:28 np0005603609 nova_compute[221550]: 2026-01-31 08:37:28.485 221554 INFO os_vif [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:4d:9d,bridge_name='br-int',has_traffic_filtering=True,id=0f92835b-d022-4857-a070-51a275dffd68,network=Network(f52e3ef7-1bb2-472a-b0a8-0f1822212a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f92835b-d0')#033[00m
Jan 31 03:37:28 np0005603609 neutron-haproxy-ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00[297098]: [NOTICE]   (297102) : haproxy version is 2.8.14-c23fe91
Jan 31 03:37:28 np0005603609 neutron-haproxy-ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00[297098]: [NOTICE]   (297102) : path to executable is /usr/sbin/haproxy
Jan 31 03:37:28 np0005603609 neutron-haproxy-ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00[297098]: [WARNING]  (297102) : Exiting Master process...
Jan 31 03:37:28 np0005603609 neutron-haproxy-ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00[297098]: [ALERT]    (297102) : Current worker (297104) exited with code 143 (Terminated)
Jan 31 03:37:28 np0005603609 neutron-haproxy-ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00[297098]: [WARNING]  (297102) : All workers exited. Exiting... (0)
Jan 31 03:37:28 np0005603609 systemd[1]: libpod-5978d1224d27546629eb18a530cf337cafbf1a0b4a2441c345bb516a258319b7.scope: Deactivated successfully.
Jan 31 03:37:28 np0005603609 podman[297661]: 2026-01-31 08:37:28.607174972 +0000 UTC m=+0.230727276 container died 5978d1224d27546629eb18a530cf337cafbf1a0b4a2441c345bb516a258319b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:37:28 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5978d1224d27546629eb18a530cf337cafbf1a0b4a2441c345bb516a258319b7-userdata-shm.mount: Deactivated successfully.
Jan 31 03:37:28 np0005603609 systemd[1]: var-lib-containers-storage-overlay-935e4a8d8a49f7fc128ea968559455e3739762cc3db8e03f7d009f8515d4b494-merged.mount: Deactivated successfully.
Jan 31 03:37:29 np0005603609 podman[297661]: 2026-01-31 08:37:29.389212063 +0000 UTC m=+1.012764347 container cleanup 5978d1224d27546629eb18a530cf337cafbf1a0b4a2441c345bb516a258319b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:37:29 np0005603609 systemd[1]: libpod-conmon-5978d1224d27546629eb18a530cf337cafbf1a0b4a2441c345bb516a258319b7.scope: Deactivated successfully.
Jan 31 03:37:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:37:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:29.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:37:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:29.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:30 np0005603609 podman[297713]: 2026-01-31 08:37:30.065381588 +0000 UTC m=+0.653171486 container remove 5978d1224d27546629eb18a530cf337cafbf1a0b4a2441c345bb516a258319b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:37:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:30.069 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3b56f449-7be6-436b-9ed5-eed934e4050f]: (4, ('Sat Jan 31 08:37:28 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00 (5978d1224d27546629eb18a530cf337cafbf1a0b4a2441c345bb516a258319b7)\n5978d1224d27546629eb18a530cf337cafbf1a0b4a2441c345bb516a258319b7\nSat Jan 31 08:37:29 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00 (5978d1224d27546629eb18a530cf337cafbf1a0b4a2441c345bb516a258319b7)\n5978d1224d27546629eb18a530cf337cafbf1a0b4a2441c345bb516a258319b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:30.070 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b5bfa94e-e01e-4872-9552-0197604998af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:30.071 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf52e3ef7-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:30 np0005603609 nova_compute[221550]: 2026-01-31 08:37:30.073 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:30 np0005603609 kernel: tapf52e3ef7-10: left promiscuous mode
Jan 31 03:37:30 np0005603609 nova_compute[221550]: 2026-01-31 08:37:30.079 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:30.082 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ade1e156-a4fe-4b79-a277-8edcb4faf341]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:30.095 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5407e164-8337-4765-8adc-9a3ea017dce0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:30.096 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[17515ef5-86d7-4fc0-917d-b9b2139d9050]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:30.108 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6b73eb72-212e-4474-89b2-8258325706cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884938, 'reachable_time': 20152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297730, 'error': None, 'target': 'ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:30.110 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f52e3ef7-1bb2-472a-b0a8-0f1822212a00 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:37:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:30.110 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab166b1-1954-40c6-a22b-d5444da9f7f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:30 np0005603609 systemd[1]: run-netns-ovnmeta\x2df52e3ef7\x2d1bb2\x2d472a\x2db0a8\x2d0f1822212a00.mount: Deactivated successfully.
Jan 31 03:37:30 np0005603609 nova_compute[221550]: 2026-01-31 08:37:30.533 221554 DEBUG oslo_concurrency.processutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:30 np0005603609 nova_compute[221550]: 2026-01-31 08:37:30.615 221554 DEBUG nova.storage.rbd_utils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] resizing rbd image 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:37:31 np0005603609 nova_compute[221550]: 2026-01-31 08:37:31.256 221554 DEBUG nova.compute.manager [req-ae9f805c-a563-46f3-af95-128238ddf9da req-716a761c-ed02-4ea3-9b0e-e8b0f7307feb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Received event network-vif-unplugged-0f92835b-d022-4857-a070-51a275dffd68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:31 np0005603609 nova_compute[221550]: 2026-01-31 08:37:31.257 221554 DEBUG oslo_concurrency.lockutils [req-ae9f805c-a563-46f3-af95-128238ddf9da req-716a761c-ed02-4ea3-9b0e-e8b0f7307feb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:31 np0005603609 nova_compute[221550]: 2026-01-31 08:37:31.257 221554 DEBUG oslo_concurrency.lockutils [req-ae9f805c-a563-46f3-af95-128238ddf9da req-716a761c-ed02-4ea3-9b0e-e8b0f7307feb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:31 np0005603609 nova_compute[221550]: 2026-01-31 08:37:31.257 221554 DEBUG oslo_concurrency.lockutils [req-ae9f805c-a563-46f3-af95-128238ddf9da req-716a761c-ed02-4ea3-9b0e-e8b0f7307feb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:31 np0005603609 nova_compute[221550]: 2026-01-31 08:37:31.258 221554 DEBUG nova.compute.manager [req-ae9f805c-a563-46f3-af95-128238ddf9da req-716a761c-ed02-4ea3-9b0e-e8b0f7307feb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] No waiting events found dispatching network-vif-unplugged-0f92835b-d022-4857-a070-51a275dffd68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:37:31 np0005603609 nova_compute[221550]: 2026-01-31 08:37:31.258 221554 DEBUG nova.compute.manager [req-ae9f805c-a563-46f3-af95-128238ddf9da req-716a761c-ed02-4ea3-9b0e-e8b0f7307feb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Received event network-vif-unplugged-0f92835b-d022-4857-a070-51a275dffd68 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:37:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:31.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:31 np0005603609 nova_compute[221550]: 2026-01-31 08:37:31.463 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:37:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:31.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:37:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.029 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.030 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Ensure instance console log exists: /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.030 221554 DEBUG oslo_concurrency.lockutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.030 221554 DEBUG oslo_concurrency.lockutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.031 221554 DEBUG oslo_concurrency.lockutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.033 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Start _get_guest_xml network_info=[{"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.039 221554 WARNING nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.048 221554 DEBUG nova.virt.libvirt.host [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.048 221554 DEBUG nova.virt.libvirt.host [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.052 221554 DEBUG nova.virt.libvirt.host [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.053 221554 DEBUG nova.virt.libvirt.host [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.054 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.054 221554 DEBUG nova.virt.hardware [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.055 221554 DEBUG nova.virt.hardware [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.055 221554 DEBUG nova.virt.hardware [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.055 221554 DEBUG nova.virt.hardware [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.056 221554 DEBUG nova.virt.hardware [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.056 221554 DEBUG nova.virt.hardware [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.056 221554 DEBUG nova.virt.hardware [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.056 221554 DEBUG nova.virt.hardware [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.057 221554 DEBUG nova.virt.hardware [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.057 221554 DEBUG nova.virt.hardware [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.057 221554 DEBUG nova.virt.hardware [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.057 221554 DEBUG nova.objects.instance [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.141 221554 DEBUG oslo_concurrency.processutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.316 221554 INFO nova.virt.libvirt.driver [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Deleting instance files /var/lib/nova/instances/c638ce77-3d60-4891-ae4f-13f1b8384ea0_del#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.317 221554 INFO nova.virt.libvirt.driver [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Deletion of /var/lib/nova/instances/c638ce77-3d60-4891-ae4f-13f1b8384ea0_del complete#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.463 221554 INFO nova.compute.manager [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Took 4.38 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.463 221554 DEBUG oslo.service.loopingcall [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.464 221554 DEBUG nova.compute.manager [-] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.464 221554 DEBUG nova.network.neutron [-] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.492 221554 DEBUG nova.network.neutron [req-aa9f494f-541e-43d8-96de-b28d39d0c89b req-3b14417d-bd9f-489e-8014-5ff3339a84a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Updated VIF entry in instance network info cache for port 0f92835b-d022-4857-a070-51a275dffd68. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.493 221554 DEBUG nova.network.neutron [req-aa9f494f-541e-43d8-96de-b28d39d0c89b req-3b14417d-bd9f-489e-8014-5ff3339a84a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Updating instance_info_cache with network_info: [{"id": "0f92835b-d022-4857-a070-51a275dffd68", "address": "fa:16:3e:8e:4d:9d", "network": {"id": "f52e3ef7-1bb2-472a-b0a8-0f1822212a00", "bridge": "br-int", "label": "tempest-network-smoke--1388052711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49eb9aae5e984a72b3c0b1e5316e2172", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f92835b-d0", "ovs_interfaceid": "0f92835b-d022-4857-a070-51a275dffd68", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:37:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:37:32 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/598306189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.634 221554 DEBUG oslo_concurrency.lockutils [req-aa9f494f-541e-43d8-96de-b28d39d0c89b req-3b14417d-bd9f-489e-8014-5ff3339a84a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c638ce77-3d60-4891-ae4f-13f1b8384ea0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.637 221554 DEBUG oslo_concurrency.processutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.663 221554 DEBUG nova.storage.rbd_utils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:37:32 np0005603609 nova_compute[221550]: 2026-01-31 08:37:32.666 221554 DEBUG oslo_concurrency.processutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:37:33 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2614260154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.083 221554 DEBUG oslo_concurrency.processutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.086 221554 DEBUG nova.virt.libvirt.vif [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2063916310',display_name='tempest-TestNetworkAdvancedServerOps-server-2063916310',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2063916310',id=180,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNv8t9ljSfsHe7/DF/aoJ/vR6fscWRM4GStsWxMUVpZD9QqPEzaqes4Daeg29ZseIZh//ZdOETFJ8kFU4RZhY+wlBZNr//GGJe6GeYVX3d0ryXxCUAo4WEPt20+EJ7KK2A==',key_name='tempest-TestNetworkAdvancedServerOps-728081448',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:36:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-v607b9ot',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:37:26Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=4ba5c6bd-5d05-4c33-ac40-620abd0a83cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.087 221554 DEBUG nova.network.os_vif_util [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.089 221554 DEBUG nova.network.os_vif_util [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:fd:51,bridge_name='br-int',has_traffic_filtering=True,id=76093a46-2d78-41cb-a667-6e75fa0d8ea1,network=Network(64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76093a46-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.093 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  <uuid>4ba5c6bd-5d05-4c33-ac40-620abd0a83cc</uuid>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  <name>instance-000000b4</name>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-2063916310</nova:name>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:37:32</nova:creationTime>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        <nova:user uuid="4d0e9d918b4041fabd5ded633b4cf404">tempest-TestNetworkAdvancedServerOps-483180749-project-member</nova:user>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        <nova:project uuid="9710f0cf77d84353ae13fa47922b085d">tempest-TestNetworkAdvancedServerOps-483180749</nova:project>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="40cf2ff3-f7ff-4843-b4ab-b7dcc843006f"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        <nova:port uuid="76093a46-2d78-41cb-a667-6e75fa0d8ea1">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <entry name="serial">4ba5c6bd-5d05-4c33-ac40-620abd0a83cc</entry>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <entry name="uuid">4ba5c6bd-5d05-4c33-ac40-620abd0a83cc</entry>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk.config">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:88:fd:51"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <target dev="tap76093a46-2d"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc/console.log" append="off"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:37:33 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:37:33 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:37:33 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:37:33 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.096 221554 DEBUG nova.virt.libvirt.vif [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2063916310',display_name='tempest-TestNetworkAdvancedServerOps-server-2063916310',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2063916310',id=180,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNv8t9ljSfsHe7/DF/aoJ/vR6fscWRM4GStsWxMUVpZD9QqPEzaqes4Daeg29ZseIZh//ZdOETFJ8kFU4RZhY+wlBZNr//GGJe6GeYVX3d0ryXxCUAo4WEPt20+EJ7KK2A==',key_name='tempest-TestNetworkAdvancedServerOps-728081448',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:36:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-v607b9ot',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:37:26Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=4ba5c6bd-5d05-4c33-ac40-620abd0a83cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.097 221554 DEBUG nova.network.os_vif_util [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.097 221554 DEBUG nova.network.os_vif_util [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:fd:51,bridge_name='br-int',has_traffic_filtering=True,id=76093a46-2d78-41cb-a667-6e75fa0d8ea1,network=Network(64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76093a46-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.098 221554 DEBUG os_vif [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:fd:51,bridge_name='br-int',has_traffic_filtering=True,id=76093a46-2d78-41cb-a667-6e75fa0d8ea1,network=Network(64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76093a46-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.098 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.099 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.099 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.101 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.101 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76093a46-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.101 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76093a46-2d, col_values=(('external_ids', {'iface-id': '76093a46-2d78-41cb-a667-6e75fa0d8ea1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:fd:51', 'vm-uuid': '4ba5c6bd-5d05-4c33-ac40-620abd0a83cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:33 np0005603609 NetworkManager[49064]: <info>  [1769848653.1491] manager: (tap76093a46-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.148 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.152 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.153 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.155 221554 INFO os_vif [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:fd:51,bridge_name='br-int',has_traffic_filtering=True,id=76093a46-2d78-41cb-a667-6e75fa0d8ea1,network=Network(64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76093a46-2d')#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.232 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.233 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.234 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No VIF found with MAC fa:16:3e:88:fd:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.235 221554 INFO nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Using config drive#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.274 221554 DEBUG nova.storage.rbd_utils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.303 221554 DEBUG nova.objects.instance [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'ec2_ids' on Instance uuid 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.369 221554 DEBUG nova.objects.instance [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'keypairs' on Instance uuid 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.373 221554 DEBUG nova.compute.manager [req-be43294e-c180-4179-a180-4f16e69f9cda req-0540a3a4-42d1-4b77-ad3c-934de759bcfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Received event network-vif-plugged-0f92835b-d022-4857-a070-51a275dffd68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.373 221554 DEBUG oslo_concurrency.lockutils [req-be43294e-c180-4179-a180-4f16e69f9cda req-0540a3a4-42d1-4b77-ad3c-934de759bcfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.374 221554 DEBUG oslo_concurrency.lockutils [req-be43294e-c180-4179-a180-4f16e69f9cda req-0540a3a4-42d1-4b77-ad3c-934de759bcfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.375 221554 DEBUG oslo_concurrency.lockutils [req-be43294e-c180-4179-a180-4f16e69f9cda req-0540a3a4-42d1-4b77-ad3c-934de759bcfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.375 221554 DEBUG nova.compute.manager [req-be43294e-c180-4179-a180-4f16e69f9cda req-0540a3a4-42d1-4b77-ad3c-934de759bcfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] No waiting events found dispatching network-vif-plugged-0f92835b-d022-4857-a070-51a275dffd68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.375 221554 WARNING nova.compute.manager [req-be43294e-c180-4179-a180-4f16e69f9cda req-0540a3a4-42d1-4b77-ad3c-934de759bcfd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Received unexpected event network-vif-plugged-0f92835b-d022-4857-a070-51a275dffd68 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:37:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:37:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:33.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:37:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:37:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:33.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:37:33 np0005603609 nova_compute[221550]: 2026-01-31 08:37:33.840 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:34 np0005603609 nova_compute[221550]: 2026-01-31 08:37:34.145 221554 DEBUG nova.network.neutron [-] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:37:34 np0005603609 nova_compute[221550]: 2026-01-31 08:37:34.205 221554 INFO nova.compute.manager [-] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Took 1.74 seconds to deallocate network for instance.#033[00m
Jan 31 03:37:34 np0005603609 nova_compute[221550]: 2026-01-31 08:37:34.305 221554 DEBUG oslo_concurrency.lockutils [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:34 np0005603609 nova_compute[221550]: 2026-01-31 08:37:34.305 221554 DEBUG oslo_concurrency.lockutils [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:34 np0005603609 nova_compute[221550]: 2026-01-31 08:37:34.399 221554 DEBUG oslo_concurrency.processutils [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:34 np0005603609 nova_compute[221550]: 2026-01-31 08:37:34.503 221554 INFO nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Creating config drive at /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc/disk.config#033[00m
Jan 31 03:37:34 np0005603609 nova_compute[221550]: 2026-01-31 08:37:34.507 221554 DEBUG oslo_concurrency.processutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpat5uym0y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:34 np0005603609 nova_compute[221550]: 2026-01-31 08:37:34.575 221554 DEBUG nova.compute.manager [req-57b2cdc4-3fca-42f3-84bd-807e0099f8f4 req-648c72e1-ad9e-4b2d-a0a6-eda56b0db471 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Received event network-vif-deleted-0f92835b-d022-4857-a070-51a275dffd68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:34 np0005603609 nova_compute[221550]: 2026-01-31 08:37:34.640 221554 DEBUG oslo_concurrency.processutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpat5uym0y" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:34 np0005603609 nova_compute[221550]: 2026-01-31 08:37:34.667 221554 DEBUG nova.storage.rbd_utils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:37:34 np0005603609 nova_compute[221550]: 2026-01-31 08:37:34.671 221554 DEBUG oslo_concurrency.processutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc/disk.config 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:37:34 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/356581148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:37:34 np0005603609 nova_compute[221550]: 2026-01-31 08:37:34.806 221554 DEBUG oslo_concurrency.processutils [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:34 np0005603609 nova_compute[221550]: 2026-01-31 08:37:34.812 221554 DEBUG nova.compute.provider_tree [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:37:34 np0005603609 nova_compute[221550]: 2026-01-31 08:37:34.892 221554 DEBUG nova.scheduler.client.report [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.097 221554 DEBUG oslo_concurrency.lockutils [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.137 221554 DEBUG oslo_concurrency.processutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc/disk.config 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.138 221554 INFO nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Deleting local config drive /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc/disk.config because it was imported into RBD.#033[00m
Jan 31 03:37:35 np0005603609 kernel: tap76093a46-2d: entered promiscuous mode
Jan 31 03:37:35 np0005603609 NetworkManager[49064]: <info>  [1769848655.1971] manager: (tap76093a46-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/377)
Jan 31 03:37:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:35Z|00813|binding|INFO|Claiming lport 76093a46-2d78-41cb-a667-6e75fa0d8ea1 for this chassis.
Jan 31 03:37:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:35Z|00814|binding|INFO|76093a46-2d78-41cb-a667-6e75fa0d8ea1: Claiming fa:16:3e:88:fd:51 10.100.0.7
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.198 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:35Z|00815|binding|INFO|Setting lport 76093a46-2d78-41cb-a667-6e75fa0d8ea1 ovn-installed in OVS
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.210 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.215 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:35 np0005603609 systemd-udevd[297960]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:37:35 np0005603609 NetworkManager[49064]: <info>  [1769848655.2357] device (tap76093a46-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:37:35 np0005603609 systemd-machined[190912]: New machine qemu-99-instance-000000b4.
Jan 31 03:37:35 np0005603609 NetworkManager[49064]: <info>  [1769848655.2372] device (tap76093a46-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:37:35 np0005603609 systemd[1]: Started Virtual Machine qemu-99-instance-000000b4.
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.271 221554 INFO nova.scheduler.client.report [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Deleted allocations for instance c638ce77-3d60-4891-ae4f-13f1b8384ea0#033[00m
Jan 31 03:37:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:35Z|00816|binding|INFO|Setting lport 76093a46-2d78-41cb-a667-6e75fa0d8ea1 up in Southbound
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.278 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:fd:51 10.100.0.7'], port_security=['fa:16:3e:88:fd:51 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4ba5c6bd-5d05-4c33-ac40-620abd0a83cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd5ab7674-17fe-499b-8e6a-01f2b29e7eda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e2cf9ef-4f0c-4703-b0c5-cb44d772423a, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=76093a46-2d78-41cb-a667-6e75fa0d8ea1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.280 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 76093a46-2d78-41cb-a667-6e75fa0d8ea1 in datapath 64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4 bound to our chassis#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.282 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.298 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[202913a7-2cf6-4641-bde9-3090b351f141]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.299 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap64b5ffb1-71 in ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.301 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap64b5ffb1-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.302 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[81cb504b-3f3e-4f60-9ffb-aee4058ee8d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.303 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e614c30d-861a-4ec4-9d5a-e6b24972343f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.314 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[09a27834-20f2-4831-b0ea-b61c61a3fdf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.347 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[20913e5a-184e-4fac-8716-e70f7e87e937]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.377 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[1b2bad3d-913e-4819-b736-e0a5046a71ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:35 np0005603609 NetworkManager[49064]: <info>  [1769848655.3891] manager: (tap64b5ffb1-70): new Veth device (/org/freedesktop/NetworkManager/Devices/378)
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.389 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[24cd634f-eec9-466f-ac6d-5f3f50a707b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.409 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[0445906a-8506-4843-b8e5-b1ace9289a9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.413 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c3654c-2dba-441a-9229-40541bf4c773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:35 np0005603609 NetworkManager[49064]: <info>  [1769848655.4273] device (tap64b5ffb1-70): carrier: link connected
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.429 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[cab464ac-8e9a-4343-a77f-4a12a394be7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.439 221554 DEBUG oslo_concurrency.lockutils [None req-a0b798a7-b98d-40df-a027-44394628dec9 6bb0d6544be445f6a04b52b326336c8f 49eb9aae5e984a72b3c0b1e5316e2172 - - default default] Lock "c638ce77-3d60-4891-ae4f-13f1b8384ea0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.443 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3c13e4ce-a0a2-4ae0-86f1-4f7e6e47f833]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64b5ffb1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:ee:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889512, 'reachable_time': 36365, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297997, 'error': None, 'target': 'ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.456 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6bfc834e-ba6f-4e13-a182-aefd1ed55061]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:eede'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 889512, 'tstamp': 889512}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297998, 'error': None, 'target': 'ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:35.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.470 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[09e0baeb-9d2d-4e7a-bb4f-082b22d25cc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64b5ffb1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:ee:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889512, 'reachable_time': 36365, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297999, 'error': None, 'target': 'ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.496 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[49a6f1ae-986a-4838-9c69-9148faa9fc12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.539 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5b0a89c8-ae15-4711-957d-a1207250509d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.541 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64b5ffb1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.541 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.541 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64b5ffb1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.543 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:35 np0005603609 NetworkManager[49064]: <info>  [1769848655.5444] manager: (tap64b5ffb1-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Jan 31 03:37:35 np0005603609 kernel: tap64b5ffb1-70: entered promiscuous mode
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.546 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.546 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64b5ffb1-70, col_values=(('external_ids', {'iface-id': '9f64b659-abb7-4398-919b-9d703dbc3cd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.547 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:35Z|00817|binding|INFO|Releasing lport 9f64b659-abb7-4398-919b-9d703dbc3cd7 from this chassis (sb_readonly=0)
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.549 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.548 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.549 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6e44f9-9df4-45a0-828d-160eb9342ca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.550 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4.pid.haproxy
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:37:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:35.550 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'env', 'PROCESS_TAG=haproxy-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.553 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:37:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:35.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.848 221554 DEBUG nova.compute.manager [req-040ad472-5a8f-44ab-90bf-32130efa5219 req-33254400-dd70-49d9-9559-e2be23112c44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received event network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.849 221554 DEBUG oslo_concurrency.lockutils [req-040ad472-5a8f-44ab-90bf-32130efa5219 req-33254400-dd70-49d9-9559-e2be23112c44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.850 221554 DEBUG oslo_concurrency.lockutils [req-040ad472-5a8f-44ab-90bf-32130efa5219 req-33254400-dd70-49d9-9559-e2be23112c44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.850 221554 DEBUG oslo_concurrency.lockutils [req-040ad472-5a8f-44ab-90bf-32130efa5219 req-33254400-dd70-49d9-9559-e2be23112c44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.850 221554 DEBUG nova.compute.manager [req-040ad472-5a8f-44ab-90bf-32130efa5219 req-33254400-dd70-49d9-9559-e2be23112c44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] No waiting events found dispatching network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:37:35 np0005603609 nova_compute[221550]: 2026-01-31 08:37:35.850 221554 WARNING nova.compute.manager [req-040ad472-5a8f-44ab-90bf-32130efa5219 req-33254400-dd70-49d9-9559-e2be23112c44 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received unexpected event network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 31 03:37:35 np0005603609 podman[298048]: 2026-01-31 08:37:35.913220292 +0000 UTC m=+0.095044644 container create 9e950e518b601a9f6be0baf981b6928b3d007e4ae7a006da1e2b695a707ddcca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:37:35 np0005603609 podman[298048]: 2026-01-31 08:37:35.841013414 +0000 UTC m=+0.022837796 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:37:35 np0005603609 systemd[1]: Started libpod-conmon-9e950e518b601a9f6be0baf981b6928b3d007e4ae7a006da1e2b695a707ddcca.scope.
Jan 31 03:37:35 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:37:35 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f80414636faff2789a6d287ac9132368530d64df55b49e20659bc5c7628a22c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.026 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.027 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848656.0263884, 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.027 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.029 221554 DEBUG nova.compute.manager [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.029 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:37:36 np0005603609 podman[298048]: 2026-01-31 08:37:36.032470444 +0000 UTC m=+0.214294826 container init 9e950e518b601a9f6be0baf981b6928b3d007e4ae7a006da1e2b695a707ddcca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.033 221554 INFO nova.virt.libvirt.driver [-] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Instance spawned successfully.#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.033 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:37:36 np0005603609 podman[298048]: 2026-01-31 08:37:36.038306476 +0000 UTC m=+0.220130828 container start 9e950e518b601a9f6be0baf981b6928b3d007e4ae7a006da1e2b695a707ddcca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:37:36 np0005603609 neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4[298089]: [NOTICE]   (298093) : New worker (298095) forked
Jan 31 03:37:36 np0005603609 neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4[298089]: [NOTICE]   (298093) : Loading success.
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.087 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.093 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.099 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.099 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.100 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.100 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.101 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.101 221554 DEBUG nova.virt.libvirt.driver [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.147 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.148 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848656.0269327, 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.148 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] VM Started (Lifecycle Event)#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.239 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.241 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.267 221554 DEBUG nova.compute.manager [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.273 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.344 221554 DEBUG oslo_concurrency.lockutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.344 221554 DEBUG oslo_concurrency.lockutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.344 221554 DEBUG nova.objects.instance [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.429 221554 DEBUG oslo_concurrency.lockutils [None req-1eecb83d-8ac7-4b81-9bb8-6611e39af9da 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:36 np0005603609 nova_compute[221550]: 2026-01-31 08:37:36.493 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:36 np0005603609 ceph-mgr[82033]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 03:37:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:37.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:37.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:37 np0005603609 nova_compute[221550]: 2026-01-31 08:37:37.998 221554 DEBUG nova.compute.manager [req-18986949-180e-42c0-a8a2-fef2498b51b7 req-61d4f659-029c-4590-9a10-56d067b8f04d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received event network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:37 np0005603609 nova_compute[221550]: 2026-01-31 08:37:37.998 221554 DEBUG oslo_concurrency.lockutils [req-18986949-180e-42c0-a8a2-fef2498b51b7 req-61d4f659-029c-4590-9a10-56d067b8f04d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:37 np0005603609 nova_compute[221550]: 2026-01-31 08:37:37.998 221554 DEBUG oslo_concurrency.lockutils [req-18986949-180e-42c0-a8a2-fef2498b51b7 req-61d4f659-029c-4590-9a10-56d067b8f04d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:37 np0005603609 nova_compute[221550]: 2026-01-31 08:37:37.998 221554 DEBUG oslo_concurrency.lockutils [req-18986949-180e-42c0-a8a2-fef2498b51b7 req-61d4f659-029c-4590-9a10-56d067b8f04d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:37 np0005603609 nova_compute[221550]: 2026-01-31 08:37:37.998 221554 DEBUG nova.compute.manager [req-18986949-180e-42c0-a8a2-fef2498b51b7 req-61d4f659-029c-4590-9a10-56d067b8f04d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] No waiting events found dispatching network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:37:37 np0005603609 nova_compute[221550]: 2026-01-31 08:37:37.999 221554 WARNING nova.compute.manager [req-18986949-180e-42c0-a8a2-fef2498b51b7 req-61d4f659-029c-4590-9a10-56d067b8f04d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received unexpected event network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:37:38 np0005603609 nova_compute[221550]: 2026-01-31 08:37:38.148 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:39 np0005603609 nova_compute[221550]: 2026-01-31 08:37:39.249 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:37:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:39.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:37:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:37:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:39.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:37:39 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:39Z|00818|binding|INFO|Releasing lport 9f64b659-abb7-4398-919b-9d703dbc3cd7 from this chassis (sb_readonly=0)
Jan 31 03:37:40 np0005603609 nova_compute[221550]: 2026-01-31 08:37:40.043 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:37:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:41.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:37:41 np0005603609 nova_compute[221550]: 2026-01-31 08:37:41.496 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:37:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:41.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:37:41 np0005603609 nova_compute[221550]: 2026-01-31 08:37:41.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:43 np0005603609 nova_compute[221550]: 2026-01-31 08:37:43.150 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:43 np0005603609 nova_compute[221550]: 2026-01-31 08:37:43.319 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848648.3182187, c638ce77-3d60-4891-ae4f-13f1b8384ea0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:37:43 np0005603609 nova_compute[221550]: 2026-01-31 08:37:43.319 221554 INFO nova.compute.manager [-] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:37:43 np0005603609 nova_compute[221550]: 2026-01-31 08:37:43.425 221554 DEBUG nova.compute.manager [None req-260883f0-cf53-40bf-8306-85b928f0de1c - - - - - -] [instance: c638ce77-3d60-4891-ae4f-13f1b8384ea0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:37:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:43.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:37:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:43.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:37:44 np0005603609 nova_compute[221550]: 2026-01-31 08:37:44.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:37:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:45.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:37:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:45.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:46 np0005603609 nova_compute[221550]: 2026-01-31 08:37:46.497 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:47.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:47.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:48 np0005603609 nova_compute[221550]: 2026-01-31 08:37:48.071 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:48 np0005603609 nova_compute[221550]: 2026-01-31 08:37:48.071 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:48 np0005603609 nova_compute[221550]: 2026-01-31 08:37:48.152 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:37:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:49.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:37:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:49.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:50 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:50Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:fd:51 10.100.0.7
Jan 31 03:37:50 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:50Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:fd:51 10.100.0.7
Jan 31 03:37:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:37:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:51.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:37:51 np0005603609 nova_compute[221550]: 2026-01-31 08:37:51.537 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:51.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:51 np0005603609 nova_compute[221550]: 2026-01-31 08:37:51.746 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:53 np0005603609 nova_compute[221550]: 2026-01-31 08:37:53.155 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:37:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:53.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:37:53 np0005603609 nova_compute[221550]: 2026-01-31 08:37:53.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:53 np0005603609 nova_compute[221550]: 2026-01-31 08:37:53.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:37:53 np0005603609 nova_compute[221550]: 2026-01-31 08:37:53.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:37:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:53.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:54 np0005603609 nova_compute[221550]: 2026-01-31 08:37:54.153 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:37:54 np0005603609 nova_compute[221550]: 2026-01-31 08:37:54.154 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:37:54 np0005603609 nova_compute[221550]: 2026-01-31 08:37:54.154 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:37:54 np0005603609 nova_compute[221550]: 2026-01-31 08:37:54.154 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:55 np0005603609 podman[298106]: 2026-01-31 08:37:55.159609011 +0000 UTC m=+0.045326375 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:37:55 np0005603609 podman[298105]: 2026-01-31 08:37:55.184011384 +0000 UTC m=+0.070851014 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:37:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:55.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:55.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:56 np0005603609 nova_compute[221550]: 2026-01-31 08:37:56.040 221554 INFO nova.compute.manager [None req-b5a4a2e1-6383-4d28-b10f-4f238ddf35a1 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Get console output#033[00m
Jan 31 03:37:56 np0005603609 nova_compute[221550]: 2026-01-31 08:37:56.044 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:37:56 np0005603609 nova_compute[221550]: 2026-01-31 08:37:56.540 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.169 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Updating instance_info_cache with network_info: [{"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.208 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.209 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.209 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.209 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.210 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.272 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.272 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.273 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.273 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.273 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:57.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:37:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:57.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:37:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:37:57 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/751218233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.732 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:57Z|00819|binding|INFO|Releasing lport 9f64b659-abb7-4398-919b-9d703dbc3cd7 from this chassis (sb_readonly=0)
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.780 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.822 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.822 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:37:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:57Z|00820|binding|INFO|Releasing lport 9f64b659-abb7-4398-919b-9d703dbc3cd7 from this chassis (sb_readonly=0)
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.832 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.869 221554 DEBUG nova.compute.manager [req-afda53b2-f710-468c-bca7-60415bd4de18 req-c16715ea-d6f3-4ea0-abce-f67cf248b612 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received event network-changed-76093a46-2d78-41cb-a667-6e75fa0d8ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.870 221554 DEBUG nova.compute.manager [req-afda53b2-f710-468c-bca7-60415bd4de18 req-c16715ea-d6f3-4ea0-abce-f67cf248b612 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Refreshing instance network info cache due to event network-changed-76093a46-2d78-41cb-a667-6e75fa0d8ea1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.870 221554 DEBUG oslo_concurrency.lockutils [req-afda53b2-f710-468c-bca7-60415bd4de18 req-c16715ea-d6f3-4ea0-abce-f67cf248b612 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.870 221554 DEBUG oslo_concurrency.lockutils [req-afda53b2-f710-468c-bca7-60415bd4de18 req-c16715ea-d6f3-4ea0-abce-f67cf248b612 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.870 221554 DEBUG nova.network.neutron [req-afda53b2-f710-468c-bca7-60415bd4de18 req-c16715ea-d6f3-4ea0-abce-f67cf248b612 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Refreshing network info cache for port 76093a46-2d78-41cb-a667-6e75fa0d8ea1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.959 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.960 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4049MB free_disk=20.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.960 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:57 np0005603609 nova_compute[221550]: 2026-01-31 08:37:57.960 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.096 221554 DEBUG oslo_concurrency.lockutils [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.096 221554 DEBUG oslo_concurrency.lockutils [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.097 221554 DEBUG oslo_concurrency.lockutils [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.097 221554 DEBUG oslo_concurrency.lockutils [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.097 221554 DEBUG oslo_concurrency.lockutils [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.098 221554 INFO nova.compute.manager [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Terminating instance#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.099 221554 DEBUG nova.compute.manager [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.157 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:58 np0005603609 kernel: tap76093a46-2d (unregistering): left promiscuous mode
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.220 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:58 np0005603609 NetworkManager[49064]: <info>  [1769848678.2212] device (tap76093a46-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:37:58 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:58Z|00821|binding|INFO|Releasing lport 76093a46-2d78-41cb-a667-6e75fa0d8ea1 from this chassis (sb_readonly=0)
Jan 31 03:37:58 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:58Z|00822|binding|INFO|Setting lport 76093a46-2d78-41cb-a667-6e75fa0d8ea1 down in Southbound
Jan 31 03:37:58 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:58Z|00823|binding|INFO|Removing iface tap76093a46-2d ovn-installed in OVS
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.230 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.232 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.237 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:58 np0005603609 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000b4.scope: Deactivated successfully.
Jan 31 03:37:58 np0005603609 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000b4.scope: Consumed 13.475s CPU time.
Jan 31 03:37:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:58.269 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:fd:51 10.100.0.7'], port_security=['fa:16:3e:88:fd:51 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4ba5c6bd-5d05-4c33-ac40-620abd0a83cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd5ab7674-17fe-499b-8e6a-01f2b29e7eda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e2cf9ef-4f0c-4703-b0c5-cb44d772423a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=76093a46-2d78-41cb-a667-6e75fa0d8ea1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:37:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:58.271 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 76093a46-2d78-41cb-a667-6e75fa0d8ea1 in datapath 64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4 unbound from our chassis#033[00m
Jan 31 03:37:58 np0005603609 systemd-machined[190912]: Machine qemu-99-instance-000000b4 terminated.
Jan 31 03:37:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:58.272 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:37:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:58.274 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc603f8-0d6c-45fe-9cd6-dd61c33577bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:58.275 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4 namespace which is not needed anymore#033[00m
Jan 31 03:37:58 np0005603609 kernel: tap76093a46-2d: entered promiscuous mode
Jan 31 03:37:58 np0005603609 NetworkManager[49064]: <info>  [1769848678.3145] manager: (tap76093a46-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/380)
Jan 31 03:37:58 np0005603609 systemd-udevd[298175]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:37:58 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:58Z|00824|binding|INFO|Claiming lport 76093a46-2d78-41cb-a667-6e75fa0d8ea1 for this chassis.
Jan 31 03:37:58 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:58Z|00825|binding|INFO|76093a46-2d78-41cb-a667-6e75fa0d8ea1: Claiming fa:16:3e:88:fd:51 10.100.0.7
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.316 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:58 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:58Z|00826|binding|INFO|Setting lport 76093a46-2d78-41cb-a667-6e75fa0d8ea1 ovn-installed in OVS
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.322 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:58 np0005603609 kernel: tap76093a46-2d (unregistering): left promiscuous mode
Jan 31 03:37:58 np0005603609 virtnodedevd[221870]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 31 03:37:58 np0005603609 virtnodedevd[221870]: hostname: compute-1
Jan 31 03:37:58 np0005603609 virtnodedevd[221870]: ethtool ioctl error on tap76093a46-2d: No such device
Jan 31 03:37:58 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:58Z|00827|if_status|INFO|Dropped 2 log messages in last 852 seconds (most recently, 852 seconds ago) due to excessive rate
Jan 31 03:37:58 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:58Z|00828|if_status|INFO|Not setting lport 76093a46-2d78-41cb-a667-6e75fa0d8ea1 down as sb is readonly
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.333 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:58 np0005603609 virtnodedevd[221870]: ethtool ioctl error on tap76093a46-2d: No such device
Jan 31 03:37:58 np0005603609 virtnodedevd[221870]: ethtool ioctl error on tap76093a46-2d: No such device
Jan 31 03:37:58 np0005603609 virtnodedevd[221870]: ethtool ioctl error on tap76093a46-2d: No such device
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.343 221554 INFO nova.virt.libvirt.driver [-] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Instance destroyed successfully.#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.343 221554 DEBUG nova.objects.instance [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'resources' on Instance uuid 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:58 np0005603609 virtnodedevd[221870]: ethtool ioctl error on tap76093a46-2d: No such device
Jan 31 03:37:58 np0005603609 virtnodedevd[221870]: ethtool ioctl error on tap76093a46-2d: No such device
Jan 31 03:37:58 np0005603609 virtnodedevd[221870]: ethtool ioctl error on tap76093a46-2d: No such device
Jan 31 03:37:58 np0005603609 virtnodedevd[221870]: ethtool ioctl error on tap76093a46-2d: No such device
Jan 31 03:37:58 np0005603609 neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4[298089]: [NOTICE]   (298093) : haproxy version is 2.8.14-c23fe91
Jan 31 03:37:58 np0005603609 neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4[298089]: [NOTICE]   (298093) : path to executable is /usr/sbin/haproxy
Jan 31 03:37:58 np0005603609 neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4[298089]: [WARNING]  (298093) : Exiting Master process...
Jan 31 03:37:58 np0005603609 neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4[298089]: [ALERT]    (298093) : Current worker (298095) exited with code 143 (Terminated)
Jan 31 03:37:58 np0005603609 neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4[298089]: [WARNING]  (298093) : All workers exited. Exiting... (0)
Jan 31 03:37:58 np0005603609 systemd[1]: libpod-9e950e518b601a9f6be0baf981b6928b3d007e4ae7a006da1e2b695a707ddcca.scope: Deactivated successfully.
Jan 31 03:37:58 np0005603609 ovn_controller[130359]: 2026-01-31T08:37:58Z|00829|binding|INFO|Releasing lport 76093a46-2d78-41cb-a667-6e75fa0d8ea1 from this chassis (sb_readonly=0)
Jan 31 03:37:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:58.385 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:fd:51 10.100.0.7'], port_security=['fa:16:3e:88:fd:51 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4ba5c6bd-5d05-4c33-ac40-620abd0a83cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd5ab7674-17fe-499b-8e6a-01f2b29e7eda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e2cf9ef-4f0c-4703-b0c5-cb44d772423a, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=76093a46-2d78-41cb-a667-6e75fa0d8ea1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:37:58 np0005603609 podman[298205]: 2026-01-31 08:37:58.386847488 +0000 UTC m=+0.039544363 container died 9e950e518b601a9f6be0baf981b6928b3d007e4ae7a006da1e2b695a707ddcca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.389 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:58.395 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:fd:51 10.100.0.7'], port_security=['fa:16:3e:88:fd:51 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4ba5c6bd-5d05-4c33-ac40-620abd0a83cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd5ab7674-17fe-499b-8e6a-01f2b29e7eda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e2cf9ef-4f0c-4703-b0c5-cb44d772423a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=76093a46-2d78-41cb-a667-6e75fa0d8ea1) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.404 221554 DEBUG nova.virt.libvirt.vif [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2063916310',display_name='tempest-TestNetworkAdvancedServerOps-server-2063916310',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2063916310',id=180,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNv8t9ljSfsHe7/DF/aoJ/vR6fscWRM4GStsWxMUVpZD9QqPEzaqes4Daeg29ZseIZh//ZdOETFJ8kFU4RZhY+wlBZNr//GGJe6GeYVX3d0ryXxCUAo4WEPt20+EJ7KK2A==',key_name='tempest-TestNetworkAdvancedServerOps-728081448',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:37:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-v607b9ot',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:37:36Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=4ba5c6bd-5d05-4c33-ac40-620abd0a83cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.404 221554 DEBUG nova.network.os_vif_util [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.405 221554 DEBUG nova.network.os_vif_util [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:fd:51,bridge_name='br-int',has_traffic_filtering=True,id=76093a46-2d78-41cb-a667-6e75fa0d8ea1,network=Network(64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76093a46-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.406 221554 DEBUG os_vif [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:fd:51,bridge_name='br-int',has_traffic_filtering=True,id=76093a46-2d78-41cb-a667-6e75fa0d8ea1,network=Network(64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76093a46-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.407 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.408 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76093a46-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.465 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.468 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.470 221554 INFO os_vif [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:fd:51,bridge_name='br-int',has_traffic_filtering=True,id=76093a46-2d78-41cb-a667-6e75fa0d8ea1,network=Network(64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76093a46-2d')#033[00m
Jan 31 03:37:58 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e950e518b601a9f6be0baf981b6928b3d007e4ae7a006da1e2b695a707ddcca-userdata-shm.mount: Deactivated successfully.
Jan 31 03:37:58 np0005603609 systemd[1]: var-lib-containers-storage-overlay-6f80414636faff2789a6d287ac9132368530d64df55b49e20659bc5c7628a22c-merged.mount: Deactivated successfully.
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.683 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.684 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.684 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.762 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:58 np0005603609 nova_compute[221550]: 2026-01-31 08:37:58.890 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:58.891 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:37:58 np0005603609 podman[298205]: 2026-01-31 08:37:58.902725102 +0000 UTC m=+0.555421977 container cleanup 9e950e518b601a9f6be0baf981b6928b3d007e4ae7a006da1e2b695a707ddcca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:37:58 np0005603609 systemd[1]: libpod-conmon-9e950e518b601a9f6be0baf981b6928b3d007e4ae7a006da1e2b695a707ddcca.scope: Deactivated successfully.
Jan 31 03:37:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:37:59 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3217867652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:37:59 np0005603609 nova_compute[221550]: 2026-01-31 08:37:59.179 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:59 np0005603609 nova_compute[221550]: 2026-01-31 08:37:59.183 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:37:59 np0005603609 nova_compute[221550]: 2026-01-31 08:37:59.235 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:37:59 np0005603609 nova_compute[221550]: 2026-01-31 08:37:59.364 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:37:59 np0005603609 nova_compute[221550]: 2026-01-31 08:37:59.365 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.404s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:59 np0005603609 podman[298285]: 2026-01-31 08:37:59.488944179 +0000 UTC m=+0.569809268 container remove 9e950e518b601a9f6be0baf981b6928b3d007e4ae7a006da1e2b695a707ddcca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:37:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:59.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:59.493 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf6eef3-ab36-41d4-8513-55d8b8552915]: (4, ('Sat Jan 31 08:37:58 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4 (9e950e518b601a9f6be0baf981b6928b3d007e4ae7a006da1e2b695a707ddcca)\n9e950e518b601a9f6be0baf981b6928b3d007e4ae7a006da1e2b695a707ddcca\nSat Jan 31 08:37:58 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4 (9e950e518b601a9f6be0baf981b6928b3d007e4ae7a006da1e2b695a707ddcca)\n9e950e518b601a9f6be0baf981b6928b3d007e4ae7a006da1e2b695a707ddcca\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:59.495 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9f88f3ca-a2d8-40c6-8dee-b857829d14f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:59.496 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64b5ffb1-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:59 np0005603609 nova_compute[221550]: 2026-01-31 08:37:59.542 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:59 np0005603609 kernel: tap64b5ffb1-70: left promiscuous mode
Jan 31 03:37:59 np0005603609 nova_compute[221550]: 2026-01-31 08:37:59.553 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:59.554 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[47d78291-c40c-432f-b370-52f7325fe233]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:59.569 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cb3101cc-ef7e-4778-a675-83f875a64048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:59.570 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8cafb241-bd47-4efb-b4ad-c9d583cb757e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:59.582 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3d2f66b9-487c-493a-937c-a1e232e422b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889507, 'reachable_time': 41458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298302, 'error': None, 'target': 'ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:59.584 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:37:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:59.584 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[9858eb62-3184-4737-b350-d684f25088c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:59 np0005603609 systemd[1]: run-netns-ovnmeta\x2d64b5ffb1\x2d70ff\x2d4eaa\x2db745\x2d9b7ddd18f2b4.mount: Deactivated successfully.
Jan 31 03:37:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:59.585 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 76093a46-2d78-41cb-a667-6e75fa0d8ea1 in datapath 64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4 unbound from our chassis#033[00m
Jan 31 03:37:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:59.586 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:37:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:59.587 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e4297202-4b99-4360-83e7-3157a38e9f5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:59.587 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 76093a46-2d78-41cb-a667-6e75fa0d8ea1 in datapath 64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4 unbound from our chassis#033[00m
Jan 31 03:37:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:59.588 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:37:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:59.589 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[985d1236-b8c2-4710-9738-d0fdd04e8f23]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:37:59.589 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:37:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:37:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:59.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:00 np0005603609 nova_compute[221550]: 2026-01-31 08:38:00.037 221554 DEBUG nova.compute.manager [req-9bf35836-1125-4406-9712-ecc25fd451ff req-39d4fc74-681f-42d5-bc8c-a2e6a128e544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received event network-vif-unplugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:38:00 np0005603609 nova_compute[221550]: 2026-01-31 08:38:00.038 221554 DEBUG oslo_concurrency.lockutils [req-9bf35836-1125-4406-9712-ecc25fd451ff req-39d4fc74-681f-42d5-bc8c-a2e6a128e544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:00 np0005603609 nova_compute[221550]: 2026-01-31 08:38:00.038 221554 DEBUG oslo_concurrency.lockutils [req-9bf35836-1125-4406-9712-ecc25fd451ff req-39d4fc74-681f-42d5-bc8c-a2e6a128e544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:00 np0005603609 nova_compute[221550]: 2026-01-31 08:38:00.039 221554 DEBUG oslo_concurrency.lockutils [req-9bf35836-1125-4406-9712-ecc25fd451ff req-39d4fc74-681f-42d5-bc8c-a2e6a128e544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:00 np0005603609 nova_compute[221550]: 2026-01-31 08:38:00.039 221554 DEBUG nova.compute.manager [req-9bf35836-1125-4406-9712-ecc25fd451ff req-39d4fc74-681f-42d5-bc8c-a2e6a128e544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] No waiting events found dispatching network-vif-unplugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:38:00 np0005603609 nova_compute[221550]: 2026-01-31 08:38:00.039 221554 DEBUG nova.compute.manager [req-9bf35836-1125-4406-9712-ecc25fd451ff req-39d4fc74-681f-42d5-bc8c-a2e6a128e544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received event network-vif-unplugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:38:00 np0005603609 nova_compute[221550]: 2026-01-31 08:38:00.039 221554 DEBUG nova.compute.manager [req-9bf35836-1125-4406-9712-ecc25fd451ff req-39d4fc74-681f-42d5-bc8c-a2e6a128e544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received event network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:38:00 np0005603609 nova_compute[221550]: 2026-01-31 08:38:00.040 221554 DEBUG oslo_concurrency.lockutils [req-9bf35836-1125-4406-9712-ecc25fd451ff req-39d4fc74-681f-42d5-bc8c-a2e6a128e544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:00 np0005603609 nova_compute[221550]: 2026-01-31 08:38:00.040 221554 DEBUG oslo_concurrency.lockutils [req-9bf35836-1125-4406-9712-ecc25fd451ff req-39d4fc74-681f-42d5-bc8c-a2e6a128e544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:00 np0005603609 nova_compute[221550]: 2026-01-31 08:38:00.040 221554 DEBUG oslo_concurrency.lockutils [req-9bf35836-1125-4406-9712-ecc25fd451ff req-39d4fc74-681f-42d5-bc8c-a2e6a128e544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:00 np0005603609 nova_compute[221550]: 2026-01-31 08:38:00.040 221554 DEBUG nova.compute.manager [req-9bf35836-1125-4406-9712-ecc25fd451ff req-39d4fc74-681f-42d5-bc8c-a2e6a128e544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] No waiting events found dispatching network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:38:00 np0005603609 nova_compute[221550]: 2026-01-31 08:38:00.041 221554 WARNING nova.compute.manager [req-9bf35836-1125-4406-9712-ecc25fd451ff req-39d4fc74-681f-42d5-bc8c-a2e6a128e544 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received unexpected event network-vif-plugged-76093a46-2d78-41cb-a667-6e75fa0d8ea1 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:38:00 np0005603609 nova_compute[221550]: 2026-01-31 08:38:00.651 221554 DEBUG nova.network.neutron [req-afda53b2-f710-468c-bca7-60415bd4de18 req-c16715ea-d6f3-4ea0-abce-f67cf248b612 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Updated VIF entry in instance network info cache for port 76093a46-2d78-41cb-a667-6e75fa0d8ea1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:38:00 np0005603609 nova_compute[221550]: 2026-01-31 08:38:00.651 221554 DEBUG nova.network.neutron [req-afda53b2-f710-468c-bca7-60415bd4de18 req-c16715ea-d6f3-4ea0-abce-f67cf248b612 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Updating instance_info_cache with network_info: [{"id": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "address": "fa:16:3e:88:fd:51", "network": {"id": "64b5ffb1-70ff-4eaa-b745-9b7ddd18f2b4", "bridge": "br-int", "label": "tempest-network-smoke--749717462", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76093a46-2d", "ovs_interfaceid": "76093a46-2d78-41cb-a667-6e75fa0d8ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:38:00 np0005603609 nova_compute[221550]: 2026-01-31 08:38:00.691 221554 DEBUG oslo_concurrency.lockutils [req-afda53b2-f710-468c-bca7-60415bd4de18 req-c16715ea-d6f3-4ea0-abce-f67cf248b612 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:38:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:38:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:01.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:38:01 np0005603609 nova_compute[221550]: 2026-01-31 08:38:01.541 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:01.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:03 np0005603609 nova_compute[221550]: 2026-01-31 08:38:03.466 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:03.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:03.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:04 np0005603609 nova_compute[221550]: 2026-01-31 08:38:04.815 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:04 np0005603609 nova_compute[221550]: 2026-01-31 08:38:04.815 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:38:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:05.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:38:05 np0005603609 nova_compute[221550]: 2026-01-31 08:38:05.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:05.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:06 np0005603609 nova_compute[221550]: 2026-01-31 08:38:06.543 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:06 np0005603609 nova_compute[221550]: 2026-01-31 08:38:06.704 221554 INFO nova.virt.libvirt.driver [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Deleting instance files /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_del#033[00m
Jan 31 03:38:06 np0005603609 nova_compute[221550]: 2026-01-31 08:38:06.705 221554 INFO nova.virt.libvirt.driver [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Deletion of /var/lib/nova/instances/4ba5c6bd-5d05-4c33-ac40-620abd0a83cc_del complete#033[00m
Jan 31 03:38:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 e379: 3 total, 3 up, 3 in
Jan 31 03:38:07 np0005603609 nova_compute[221550]: 2026-01-31 08:38:07.412 221554 INFO nova.compute.manager [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Took 9.31 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:38:07 np0005603609 nova_compute[221550]: 2026-01-31 08:38:07.412 221554 DEBUG oslo.service.loopingcall [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:38:07 np0005603609 nova_compute[221550]: 2026-01-31 08:38:07.413 221554 DEBUG nova.compute.manager [-] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:38:07 np0005603609 nova_compute[221550]: 2026-01-31 08:38:07.413 221554 DEBUG nova.network.neutron [-] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:38:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:07.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:07.530 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:07.530 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:07.530 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:07.591 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:38:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:38:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:07.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:38:08 np0005603609 nova_compute[221550]: 2026-01-31 08:38:08.510 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:09 np0005603609 nova_compute[221550]: 2026-01-31 08:38:09.469 221554 DEBUG nova.network.neutron [-] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:38:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:09.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:09 np0005603609 nova_compute[221550]: 2026-01-31 08:38:09.507 221554 INFO nova.compute.manager [-] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Took 2.09 seconds to deallocate network for instance.#033[00m
Jan 31 03:38:09 np0005603609 nova_compute[221550]: 2026-01-31 08:38:09.585 221554 DEBUG oslo_concurrency.lockutils [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:09 np0005603609 nova_compute[221550]: 2026-01-31 08:38:09.585 221554 DEBUG oslo_concurrency.lockutils [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:09 np0005603609 nova_compute[221550]: 2026-01-31 08:38:09.641 221554 DEBUG nova.scheduler.client.report [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:38:09 np0005603609 nova_compute[221550]: 2026-01-31 08:38:09.653 221554 DEBUG nova.compute.manager [req-a7fde45d-6aa0-41ad-bde6-127694ffbe08 req-d4c24642-8b85-47b6-95f5-0c8c560783d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Received event network-vif-deleted-76093a46-2d78-41cb-a667-6e75fa0d8ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:38:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:09.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:09 np0005603609 nova_compute[221550]: 2026-01-31 08:38:09.697 221554 DEBUG nova.scheduler.client.report [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:38:09 np0005603609 nova_compute[221550]: 2026-01-31 08:38:09.698 221554 DEBUG nova.compute.provider_tree [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:38:09 np0005603609 nova_compute[221550]: 2026-01-31 08:38:09.723 221554 DEBUG nova.scheduler.client.report [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:38:09 np0005603609 nova_compute[221550]: 2026-01-31 08:38:09.753 221554 DEBUG nova.scheduler.client.report [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:38:09 np0005603609 nova_compute[221550]: 2026-01-31 08:38:09.808 221554 DEBUG oslo_concurrency.processutils [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:38:10 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1947169940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:38:10 np0005603609 nova_compute[221550]: 2026-01-31 08:38:10.310 221554 DEBUG oslo_concurrency.processutils [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:10 np0005603609 nova_compute[221550]: 2026-01-31 08:38:10.314 221554 DEBUG nova.compute.provider_tree [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:38:10 np0005603609 nova_compute[221550]: 2026-01-31 08:38:10.347 221554 DEBUG nova.scheduler.client.report [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:38:10 np0005603609 nova_compute[221550]: 2026-01-31 08:38:10.387 221554 DEBUG oslo_concurrency.lockutils [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:10 np0005603609 nova_compute[221550]: 2026-01-31 08:38:10.415 221554 INFO nova.scheduler.client.report [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Deleted allocations for instance 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc#033[00m
Jan 31 03:38:10 np0005603609 nova_compute[221550]: 2026-01-31 08:38:10.541 221554 DEBUG oslo_concurrency.lockutils [None req-d91dc6d1-da6e-4589-9ec1-dc66be1267a9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "4ba5c6bd-5d05-4c33-ac40-620abd0a83cc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:11.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:11 np0005603609 nova_compute[221550]: 2026-01-31 08:38:11.544 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:11.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:13 np0005603609 nova_compute[221550]: 2026-01-31 08:38:13.342 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848678.341317, 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:38:13 np0005603609 nova_compute[221550]: 2026-01-31 08:38:13.342 221554 INFO nova.compute.manager [-] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:38:13 np0005603609 nova_compute[221550]: 2026-01-31 08:38:13.482 221554 DEBUG nova.compute.manager [None req-289bc237-60b6-47af-a8b3-26630649dfd4 - - - - - -] [instance: 4ba5c6bd-5d05-4c33-ac40-620abd0a83cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:38:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:13.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:13 np0005603609 nova_compute[221550]: 2026-01-31 08:38:13.512 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:13.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:15.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:15.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:16 np0005603609 nova_compute[221550]: 2026-01-31 08:38:16.094 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:16 np0005603609 nova_compute[221550]: 2026-01-31 08:38:16.547 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:17.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:17.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:18 np0005603609 nova_compute[221550]: 2026-01-31 08:38:18.514 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:19.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:38:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:19.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:38:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:21.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:21 np0005603609 nova_compute[221550]: 2026-01-31 08:38:21.549 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:21.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:21 np0005603609 podman[298718]: 2026-01-31 08:38:21.820462186 +0000 UTC m=+0.112124250 container create 42ffda7228ad2ecd80e1605aa0baf23c2893a0707bffcf6471d41a37b72646ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_roentgen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 03:38:21 np0005603609 podman[298718]: 2026-01-31 08:38:21.727474183 +0000 UTC m=+0.019136277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:38:21 np0005603609 systemd[1]: Started libpod-conmon-42ffda7228ad2ecd80e1605aa0baf23c2893a0707bffcf6471d41a37b72646ea.scope.
Jan 31 03:38:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:21 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:38:21 np0005603609 podman[298718]: 2026-01-31 08:38:21.935385103 +0000 UTC m=+0.227047167 container init 42ffda7228ad2ecd80e1605aa0baf23c2893a0707bffcf6471d41a37b72646ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:38:21 np0005603609 podman[298718]: 2026-01-31 08:38:21.9418293 +0000 UTC m=+0.233491374 container start 42ffda7228ad2ecd80e1605aa0baf23c2893a0707bffcf6471d41a37b72646ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 03:38:21 np0005603609 infallible_roentgen[298734]: 167 167
Jan 31 03:38:21 np0005603609 systemd[1]: libpod-42ffda7228ad2ecd80e1605aa0baf23c2893a0707bffcf6471d41a37b72646ea.scope: Deactivated successfully.
Jan 31 03:38:21 np0005603609 podman[298718]: 2026-01-31 08:38:21.950883091 +0000 UTC m=+0.242545205 container attach 42ffda7228ad2ecd80e1605aa0baf23c2893a0707bffcf6471d41a37b72646ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:38:21 np0005603609 podman[298718]: 2026-01-31 08:38:21.951370172 +0000 UTC m=+0.243032246 container died 42ffda7228ad2ecd80e1605aa0baf23c2893a0707bffcf6471d41a37b72646ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_roentgen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:38:22 np0005603609 systemd[1]: var-lib-containers-storage-overlay-dc7adab06735e3cc08f9c9853149463794c86ff33e42f290da70324815162d73-merged.mount: Deactivated successfully.
Jan 31 03:38:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:38:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:38:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:38:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:38:22 np0005603609 podman[298718]: 2026-01-31 08:38:22.16288996 +0000 UTC m=+0.454552024 container remove 42ffda7228ad2ecd80e1605aa0baf23c2893a0707bffcf6471d41a37b72646ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:38:22 np0005603609 systemd[1]: libpod-conmon-42ffda7228ad2ecd80e1605aa0baf23c2893a0707bffcf6471d41a37b72646ea.scope: Deactivated successfully.
Jan 31 03:38:22 np0005603609 podman[298760]: 2026-01-31 08:38:22.28783727 +0000 UTC m=+0.045045037 container create 8138bc58ae1527440505906c6abc0953ff4abae1722ad241b49f87b251b47b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:38:22 np0005603609 systemd[1]: Started libpod-conmon-8138bc58ae1527440505906c6abc0953ff4abae1722ad241b49f87b251b47b9e.scope.
Jan 31 03:38:22 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:38:22 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c35c9cd90ea19495e46a52a7851542fd010b89363f44f6c09f49ebeeab0a76a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:22 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c35c9cd90ea19495e46a52a7851542fd010b89363f44f6c09f49ebeeab0a76a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:22 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c35c9cd90ea19495e46a52a7851542fd010b89363f44f6c09f49ebeeab0a76a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:22 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c35c9cd90ea19495e46a52a7851542fd010b89363f44f6c09f49ebeeab0a76a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:22 np0005603609 podman[298760]: 2026-01-31 08:38:22.263660062 +0000 UTC m=+0.020867849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:38:22 np0005603609 podman[298760]: 2026-01-31 08:38:22.414074452 +0000 UTC m=+0.171282239 container init 8138bc58ae1527440505906c6abc0953ff4abae1722ad241b49f87b251b47b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:38:22 np0005603609 podman[298760]: 2026-01-31 08:38:22.419177267 +0000 UTC m=+0.176385034 container start 8138bc58ae1527440505906c6abc0953ff4abae1722ad241b49f87b251b47b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:38:22 np0005603609 podman[298760]: 2026-01-31 08:38:22.464672313 +0000 UTC m=+0.221880080 container attach 8138bc58ae1527440505906c6abc0953ff4abae1722ad241b49f87b251b47b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:38:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:23.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:23 np0005603609 nova_compute[221550]: 2026-01-31 08:38:23.555 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]: [
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:    {
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:        "available": false,
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:        "ceph_device": false,
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:        "lsm_data": {},
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:        "lvs": [],
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:        "path": "/dev/sr0",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:        "rejected_reasons": [
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "Has a FileSystem",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "Insufficient space (<5GB)"
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:        ],
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:        "sys_api": {
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "actuators": null,
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "device_nodes": "sr0",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "devname": "sr0",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "human_readable_size": "482.00 KB",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "id_bus": "ata",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "model": "QEMU DVD-ROM",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "nr_requests": "2",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "parent": "/dev/sr0",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "partitions": {},
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "path": "/dev/sr0",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "removable": "1",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "rev": "2.5+",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "ro": "0",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "rotational": "1",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "sas_address": "",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "sas_device_handle": "",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "scheduler_mode": "mq-deadline",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "sectors": 0,
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "sectorsize": "2048",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "size": 493568.0,
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "support_discard": "2048",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "type": "disk",
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:            "vendor": "QEMU"
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:        }
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]:    }
Jan 31 03:38:23 np0005603609 infallible_clarke[298777]: ]
Jan 31 03:38:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:38:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:38:23 np0005603609 systemd[1]: libpod-8138bc58ae1527440505906c6abc0953ff4abae1722ad241b49f87b251b47b9e.scope: Deactivated successfully.
Jan 31 03:38:23 np0005603609 systemd[1]: libpod-8138bc58ae1527440505906c6abc0953ff4abae1722ad241b49f87b251b47b9e.scope: Consumed 1.096s CPU time.
Jan 31 03:38:23 np0005603609 podman[298760]: 2026-01-31 08:38:23.630582847 +0000 UTC m=+1.387790614 container died 8138bc58ae1527440505906c6abc0953ff4abae1722ad241b49f87b251b47b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:38:23 np0005603609 systemd[1]: var-lib-containers-storage-overlay-6c35c9cd90ea19495e46a52a7851542fd010b89363f44f6c09f49ebeeab0a76a-merged.mount: Deactivated successfully.
Jan 31 03:38:23 np0005603609 podman[298760]: 2026-01-31 08:38:23.696102982 +0000 UTC m=+1.453310749 container remove 8138bc58ae1527440505906c6abc0953ff4abae1722ad241b49f87b251b47b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:38:23 np0005603609 systemd[1]: libpod-conmon-8138bc58ae1527440505906c6abc0953ff4abae1722ad241b49f87b251b47b9e.scope: Deactivated successfully.
Jan 31 03:38:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:23.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:38:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:38:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:38:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:38:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:38:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:38:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:38:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.003000071s ======
Jan 31 03:38:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:25.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Jan 31 03:38:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:38:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:25.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:38:26 np0005603609 podman[300050]: 2026-01-31 08:38:26.187848892 +0000 UTC m=+0.075272973 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 03:38:26 np0005603609 podman[300049]: 2026-01-31 08:38:26.19685284 +0000 UTC m=+0.084623000 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:38:26 np0005603609 nova_compute[221550]: 2026-01-31 08:38:26.551 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:27.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:38:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:27.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:38:28 np0005603609 nova_compute[221550]: 2026-01-31 08:38:28.557 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:38:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:29.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:38:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:29.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:38:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:38:31 np0005603609 nova_compute[221550]: 2026-01-31 08:38:31.552 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:31.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:31.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:33.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:33 np0005603609 nova_compute[221550]: 2026-01-31 08:38:33.602 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:38:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:33.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:38:34 np0005603609 nova_compute[221550]: 2026-01-31 08:38:34.107 221554 DEBUG oslo_concurrency.lockutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "233d3314-7d9d-49a5-818f-909d78422fb9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:34 np0005603609 nova_compute[221550]: 2026-01-31 08:38:34.107 221554 DEBUG oslo_concurrency.lockutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:34 np0005603609 nova_compute[221550]: 2026-01-31 08:38:34.179 221554 DEBUG nova.compute.manager [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:38:34 np0005603609 nova_compute[221550]: 2026-01-31 08:38:34.323 221554 DEBUG oslo_concurrency.lockutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:34 np0005603609 nova_compute[221550]: 2026-01-31 08:38:34.323 221554 DEBUG oslo_concurrency.lockutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:34 np0005603609 nova_compute[221550]: 2026-01-31 08:38:34.331 221554 DEBUG nova.virt.hardware [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:38:34 np0005603609 nova_compute[221550]: 2026-01-31 08:38:34.332 221554 INFO nova.compute.claims [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:38:34 np0005603609 nova_compute[221550]: 2026-01-31 08:38:34.633 221554 DEBUG oslo_concurrency.processutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:38:35 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/906920371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.057 221554 DEBUG oslo_concurrency.processutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.062 221554 DEBUG nova.compute.provider_tree [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.109 221554 DEBUG nova.scheduler.client.report [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.189 221554 DEBUG oslo_concurrency.lockutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.190 221554 DEBUG nova.compute.manager [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.244 221554 DEBUG nova.compute.manager [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.245 221554 DEBUG nova.network.neutron [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.266 221554 INFO nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.287 221554 DEBUG nova.compute.manager [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.356 221554 INFO nova.virt.block_device [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Booting with volume 98644432-f138-4ffa-ac72-90128df20205 at /dev/vda#033[00m
Jan 31 03:38:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:35.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.576 221554 DEBUG nova.policy [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cfc8a271e75e4a92b16ee6b5da9cfc9f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b38141686534a0fb9b947a7886cd4b6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.641 221554 DEBUG os_brick.utils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.642 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.650 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.650 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0b003a-f390-4084-b810-33cbc8258cfb]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.651 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.655 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.656 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc77172-26b2-4405-a7a6-23268c183a75]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.657 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.666 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.666 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[010bca69-15b5-452c-82ce-978db2795e6e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.669 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[371de7b4-b2e9-4f5b-ad6a-cf38d0ecb52e]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.670 221554 DEBUG oslo_concurrency.processutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.695 221554 DEBUG oslo_concurrency.processutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.698 221554 DEBUG os_brick.initiator.connectors.lightos [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.698 221554 DEBUG os_brick.initiator.connectors.lightos [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.698 221554 DEBUG os_brick.initiator.connectors.lightos [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.699 221554 DEBUG os_brick.utils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] <== get_connector_properties: return (56ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:38:35 np0005603609 nova_compute[221550]: 2026-01-31 08:38:35.699 221554 DEBUG nova.virt.block_device [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updating existing volume attachment record: 85d6cc81-8ef1-4d41-b5be-78882cf4696d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:38:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:38:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:35.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:38:36 np0005603609 nova_compute[221550]: 2026-01-31 08:38:36.552 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:37 np0005603609 nova_compute[221550]: 2026-01-31 08:38:37.395 221554 DEBUG nova.network.neutron [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Successfully created port: 93519b56-e167-470c-9933-432512222982 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:38:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:38:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:37.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:38:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:37.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:37 np0005603609 nova_compute[221550]: 2026-01-31 08:38:37.842 221554 DEBUG nova.compute.manager [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:38:37 np0005603609 nova_compute[221550]: 2026-01-31 08:38:37.844 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:38:37 np0005603609 nova_compute[221550]: 2026-01-31 08:38:37.844 221554 INFO nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Creating image(s)#033[00m
Jan 31 03:38:37 np0005603609 nova_compute[221550]: 2026-01-31 08:38:37.844 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 03:38:37 np0005603609 nova_compute[221550]: 2026-01-31 08:38:37.845 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Ensure instance console log exists: /var/lib/nova/instances/233d3314-7d9d-49a5-818f-909d78422fb9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:38:37 np0005603609 nova_compute[221550]: 2026-01-31 08:38:37.845 221554 DEBUG oslo_concurrency.lockutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:37 np0005603609 nova_compute[221550]: 2026-01-31 08:38:37.845 221554 DEBUG oslo_concurrency.lockutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:37 np0005603609 nova_compute[221550]: 2026-01-31 08:38:37.846 221554 DEBUG oslo_concurrency.lockutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:38 np0005603609 nova_compute[221550]: 2026-01-31 08:38:38.604 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:39 np0005603609 nova_compute[221550]: 2026-01-31 08:38:39.427 221554 DEBUG nova.network.neutron [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Successfully updated port: 93519b56-e167-470c-9933-432512222982 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:38:39 np0005603609 nova_compute[221550]: 2026-01-31 08:38:39.475 221554 DEBUG oslo_concurrency.lockutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:38:39 np0005603609 nova_compute[221550]: 2026-01-31 08:38:39.476 221554 DEBUG oslo_concurrency.lockutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquired lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:38:39 np0005603609 nova_compute[221550]: 2026-01-31 08:38:39.476 221554 DEBUG nova.network.neutron [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:38:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:38:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:39.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:38:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:38:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:39.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:38:39 np0005603609 nova_compute[221550]: 2026-01-31 08:38:39.765 221554 DEBUG nova.compute.manager [req-e80d6099-599d-4160-b20e-f96c86fd50b4 req-c87430ba-e403-497d-a99c-bb1d0447ccac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Received event network-changed-93519b56-e167-470c-9933-432512222982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:38:39 np0005603609 nova_compute[221550]: 2026-01-31 08:38:39.765 221554 DEBUG nova.compute.manager [req-e80d6099-599d-4160-b20e-f96c86fd50b4 req-c87430ba-e403-497d-a99c-bb1d0447ccac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Refreshing instance network info cache due to event network-changed-93519b56-e167-470c-9933-432512222982. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:38:39 np0005603609 nova_compute[221550]: 2026-01-31 08:38:39.766 221554 DEBUG oslo_concurrency.lockutils [req-e80d6099-599d-4160-b20e-f96c86fd50b4 req-c87430ba-e403-497d-a99c-bb1d0447ccac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:38:39 np0005603609 nova_compute[221550]: 2026-01-31 08:38:39.801 221554 DEBUG nova.network.neutron [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.546 221554 DEBUG nova.network.neutron [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updating instance_info_cache with network_info: [{"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.553 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:38:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:41.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:38:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:41.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.757 221554 DEBUG oslo_concurrency.lockutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Releasing lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.757 221554 DEBUG nova.compute.manager [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Instance network_info: |[{"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.757 221554 DEBUG oslo_concurrency.lockutils [req-e80d6099-599d-4160-b20e-f96c86fd50b4 req-c87430ba-e403-497d-a99c-bb1d0447ccac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.758 221554 DEBUG nova.network.neutron [req-e80d6099-599d-4160-b20e-f96c86fd50b4 req-c87430ba-e403-497d-a99c-bb1d0447ccac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Refreshing network info cache for port 93519b56-e167-470c-9933-432512222982 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.761 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Start _get_guest_xml network_info=[{"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '85d6cc81-8ef1-4d41-b5be-78882cf4696d', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-98644432-f138-4ffa-ac72-90128df20205', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '98644432-f138-4ffa-ac72-90128df20205', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '233d3314-7d9d-49a5-818f-909d78422fb9', 'attached_at': '', 'detached_at': '', 'volume_id': '98644432-f138-4ffa-ac72-90128df20205', 'serial': '98644432-f138-4ffa-ac72-90128df20205'}, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'guest_format': None, 'mount_device': '/dev/vda', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.764 221554 WARNING nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.770 221554 DEBUG nova.virt.libvirt.host [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.770 221554 DEBUG nova.virt.libvirt.host [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.774 221554 DEBUG nova.virt.libvirt.host [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.774 221554 DEBUG nova.virt.libvirt.host [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.775 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.775 221554 DEBUG nova.virt.hardware [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.776 221554 DEBUG nova.virt.hardware [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.776 221554 DEBUG nova.virt.hardware [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.776 221554 DEBUG nova.virt.hardware [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.776 221554 DEBUG nova.virt.hardware [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.776 221554 DEBUG nova.virt.hardware [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.776 221554 DEBUG nova.virt.hardware [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.777 221554 DEBUG nova.virt.hardware [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.777 221554 DEBUG nova.virt.hardware [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.777 221554 DEBUG nova.virt.hardware [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.777 221554 DEBUG nova.virt.hardware [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.804 221554 DEBUG nova.storage.rbd_utils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] rbd image 233d3314-7d9d-49a5-818f-909d78422fb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:38:41 np0005603609 nova_compute[221550]: 2026-01-31 08:38:41.809 221554 DEBUG oslo_concurrency.processutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:38:42 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/929030816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.219 221554 DEBUG oslo_concurrency.processutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.386 221554 DEBUG nova.virt.libvirt.vif [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:38:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1789084214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1789084214',id=183,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzg6Lk8OuDDWEBQQtNVGTD92uncKX4uuGYvXITCu78FVc0dCeMJjMpvMnamF80j6P2vfKzi9siS1JCEwYFhLgZ6vk2tD+oJq2pafl3D7QkbaZkrlvSItHgJLM4cymh3Sg==',key_name='tempest-TestInstancesWithCinderVolumes-232350541',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b38141686534a0fb9b947a7886cd4b6',ramdisk_id='',reservation_id='r-grhkzysi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_pr
oject_name='tempest-TestInstancesWithCinderVolumes-791993230',owner_user_name='tempest-TestInstancesWithCinderVolumes-791993230-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:38:35Z,user_data=None,user_id='cfc8a271e75e4a92b16ee6b5da9cfc9f',uuid=233d3314-7d9d-49a5-818f-909d78422fb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.387 221554 DEBUG nova.network.os_vif_util [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converting VIF {"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.387 221554 DEBUG nova.network.os_vif_util [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:44:32,bridge_name='br-int',has_traffic_filtering=True,id=93519b56-e167-470c-9933-432512222982,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93519b56-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.388 221554 DEBUG nova.objects.instance [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 233d3314-7d9d-49a5-818f-909d78422fb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.434 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  <uuid>233d3314-7d9d-49a5-818f-909d78422fb9</uuid>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  <name>instance-000000b7</name>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestInstancesWithCinderVolumes-server-1789084214</nova:name>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:38:41</nova:creationTime>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        <nova:user uuid="cfc8a271e75e4a92b16ee6b5da9cfc9f">tempest-TestInstancesWithCinderVolumes-791993230-project-member</nova:user>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        <nova:project uuid="4b38141686534a0fb9b947a7886cd4b6">tempest-TestInstancesWithCinderVolumes-791993230</nova:project>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        <nova:port uuid="93519b56-e167-470c-9933-432512222982">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <entry name="serial">233d3314-7d9d-49a5-818f-909d78422fb9</entry>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <entry name="uuid">233d3314-7d9d-49a5-818f-909d78422fb9</entry>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/233d3314-7d9d-49a5-818f-909d78422fb9_disk.config">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="volumes/volume-98644432-f138-4ffa-ac72-90128df20205">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <serial>98644432-f138-4ffa-ac72-90128df20205</serial>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:20:44:32"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <target dev="tap93519b56-e1"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/233d3314-7d9d-49a5-818f-909d78422fb9/console.log" append="off"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:38:42 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:38:42 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:38:42 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:38:42 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.435 221554 DEBUG nova.compute.manager [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Preparing to wait for external event network-vif-plugged-93519b56-e167-470c-9933-432512222982 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.436 221554 DEBUG oslo_concurrency.lockutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.436 221554 DEBUG oslo_concurrency.lockutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.436 221554 DEBUG oslo_concurrency.lockutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.437 221554 DEBUG nova.virt.libvirt.vif [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:38:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1789084214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1789084214',id=183,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzg6Lk8OuDDWEBQQtNVGTD92uncKX4uuGYvXITCu78FVc0dCeMJjMpvMnamF80j6P2vfKzi9siS1JCEwYFhLgZ6vk2tD+oJq2pafl3D7QkbaZkrlvSItHgJLM4cymh3Sg==',key_name='tempest-TestInstancesWithCinderVolumes-232350541',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b38141686534a0fb9b947a7886cd4b6',ramdisk_id='',reservation_id='r-grhkzysi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True
',owner_project_name='tempest-TestInstancesWithCinderVolumes-791993230',owner_user_name='tempest-TestInstancesWithCinderVolumes-791993230-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:38:35Z,user_data=None,user_id='cfc8a271e75e4a92b16ee6b5da9cfc9f',uuid=233d3314-7d9d-49a5-818f-909d78422fb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.437 221554 DEBUG nova.network.os_vif_util [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converting VIF {"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.438 221554 DEBUG nova.network.os_vif_util [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:44:32,bridge_name='br-int',has_traffic_filtering=True,id=93519b56-e167-470c-9933-432512222982,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93519b56-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.438 221554 DEBUG os_vif [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:44:32,bridge_name='br-int',has_traffic_filtering=True,id=93519b56-e167-470c-9933-432512222982,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93519b56-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.438 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.439 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.439 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.442 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.442 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93519b56-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.442 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap93519b56-e1, col_values=(('external_ids', {'iface-id': '93519b56-e167-470c-9933-432512222982', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:44:32', 'vm-uuid': '233d3314-7d9d-49a5-818f-909d78422fb9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:38:42 np0005603609 NetworkManager[49064]: <info>  [1769848722.4453] manager: (tap93519b56-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.446 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.449 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.450 221554 INFO os_vif [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:44:32,bridge_name='br-int',has_traffic_filtering=True,id=93519b56-e167-470c-9933-432512222982,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93519b56-e1')#033[00m
Jan 31 03:38:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:38:42 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2556544296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:38:42 np0005603609 nova_compute[221550]: 2026-01-31 08:38:42.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:43 np0005603609 nova_compute[221550]: 2026-01-31 08:38:43.496 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:38:43 np0005603609 nova_compute[221550]: 2026-01-31 08:38:43.497 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:38:43 np0005603609 nova_compute[221550]: 2026-01-31 08:38:43.497 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No VIF found with MAC fa:16:3e:20:44:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:38:43 np0005603609 nova_compute[221550]: 2026-01-31 08:38:43.498 221554 INFO nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Using config drive#033[00m
Jan 31 03:38:43 np0005603609 nova_compute[221550]: 2026-01-31 08:38:43.519 221554 DEBUG nova.storage.rbd_utils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] rbd image 233d3314-7d9d-49a5-818f-909d78422fb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:38:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:38:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:43.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:38:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:38:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:43.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:38:45 np0005603609 nova_compute[221550]: 2026-01-31 08:38:45.224 221554 DEBUG nova.network.neutron [req-e80d6099-599d-4160-b20e-f96c86fd50b4 req-c87430ba-e403-497d-a99c-bb1d0447ccac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updated VIF entry in instance network info cache for port 93519b56-e167-470c-9933-432512222982. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:38:45 np0005603609 nova_compute[221550]: 2026-01-31 08:38:45.225 221554 DEBUG nova.network.neutron [req-e80d6099-599d-4160-b20e-f96c86fd50b4 req-c87430ba-e403-497d-a99c-bb1d0447ccac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updating instance_info_cache with network_info: [{"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:38:45 np0005603609 nova_compute[221550]: 2026-01-31 08:38:45.400 221554 INFO nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Creating config drive at /var/lib/nova/instances/233d3314-7d9d-49a5-818f-909d78422fb9/disk.config#033[00m
Jan 31 03:38:45 np0005603609 nova_compute[221550]: 2026-01-31 08:38:45.406 221554 DEBUG oslo_concurrency.processutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/233d3314-7d9d-49a5-818f-909d78422fb9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpf1fcvxyi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:45 np0005603609 nova_compute[221550]: 2026-01-31 08:38:45.535 221554 DEBUG oslo_concurrency.processutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/233d3314-7d9d-49a5-818f-909d78422fb9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpf1fcvxyi" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:45 np0005603609 nova_compute[221550]: 2026-01-31 08:38:45.569 221554 DEBUG nova.storage.rbd_utils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] rbd image 233d3314-7d9d-49a5-818f-909d78422fb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:38:45 np0005603609 nova_compute[221550]: 2026-01-31 08:38:45.574 221554 DEBUG oslo_concurrency.processutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/233d3314-7d9d-49a5-818f-909d78422fb9/disk.config 233d3314-7d9d-49a5-818f-909d78422fb9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:45.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:45 np0005603609 nova_compute[221550]: 2026-01-31 08:38:45.597 221554 DEBUG oslo_concurrency.lockutils [req-e80d6099-599d-4160-b20e-f96c86fd50b4 req-c87430ba-e403-497d-a99c-bb1d0447ccac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:38:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:45.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:46 np0005603609 nova_compute[221550]: 2026-01-31 08:38:46.365 221554 DEBUG oslo_concurrency.processutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/233d3314-7d9d-49a5-818f-909d78422fb9/disk.config 233d3314-7d9d-49a5-818f-909d78422fb9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.791s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:46 np0005603609 nova_compute[221550]: 2026-01-31 08:38:46.366 221554 INFO nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Deleting local config drive /var/lib/nova/instances/233d3314-7d9d-49a5-818f-909d78422fb9/disk.config because it was imported into RBD.#033[00m
Jan 31 03:38:46 np0005603609 kernel: tap93519b56-e1: entered promiscuous mode
Jan 31 03:38:46 np0005603609 NetworkManager[49064]: <info>  [1769848726.4096] manager: (tap93519b56-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/382)
Jan 31 03:38:46 np0005603609 ovn_controller[130359]: 2026-01-31T08:38:46Z|00830|binding|INFO|Claiming lport 93519b56-e167-470c-9933-432512222982 for this chassis.
Jan 31 03:38:46 np0005603609 ovn_controller[130359]: 2026-01-31T08:38:46Z|00831|binding|INFO|93519b56-e167-470c-9933-432512222982: Claiming fa:16:3e:20:44:32 10.100.0.4
Jan 31 03:38:46 np0005603609 nova_compute[221550]: 2026-01-31 08:38:46.410 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:46 np0005603609 nova_compute[221550]: 2026-01-31 08:38:46.414 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:46 np0005603609 systemd-udevd[300285]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:38:46 np0005603609 systemd-machined[190912]: New machine qemu-100-instance-000000b7.
Jan 31 03:38:46 np0005603609 nova_compute[221550]: 2026-01-31 08:38:46.441 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:46 np0005603609 NetworkManager[49064]: <info>  [1769848726.4492] device (tap93519b56-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:38:46 np0005603609 NetworkManager[49064]: <info>  [1769848726.4502] device (tap93519b56-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:38:46 np0005603609 systemd[1]: Started Virtual Machine qemu-100-instance-000000b7.
Jan 31 03:38:46 np0005603609 ovn_controller[130359]: 2026-01-31T08:38:46Z|00832|binding|INFO|Setting lport 93519b56-e167-470c-9933-432512222982 ovn-installed in OVS
Jan 31 03:38:46 np0005603609 nova_compute[221550]: 2026-01-31 08:38:46.451 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:46 np0005603609 nova_compute[221550]: 2026-01-31 08:38:46.556 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:46 np0005603609 ovn_controller[130359]: 2026-01-31T08:38:46Z|00833|binding|INFO|Setting lport 93519b56-e167-470c-9933-432512222982 up in Southbound
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.737 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:44:32 10.100.0.4'], port_security=['fa:16:3e:20:44:32 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '233d3314-7d9d-49a5-818f-909d78422fb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-090376c2-ac34-46f0-acd4-344bb2bc1154', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b38141686534a0fb9b947a7886cd4b6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f827482b-eee1-43ff-a797-1ec84e5a6d1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05d66413-db50-49eb-973e-490542297b8d, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=93519b56-e167-470c-9933-432512222982) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.739 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 93519b56-e167-470c-9933-432512222982 in datapath 090376c2-ac34-46f0-acd4-344bb2bc1154 bound to our chassis#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.740 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 090376c2-ac34-46f0-acd4-344bb2bc1154#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.751 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2e78f19b-bf22-4958-860c-53111379484d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.752 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap090376c2-a1 in ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.754 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap090376c2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.754 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1eb0b0-1514-4eaa-a9c3-ec7edba022ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.755 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[68c9e2a7-6222-41d8-a3fe-bf5746573770]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.762 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[c97c9a53-95f1-4bec-bdfc-562e21dab61f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.773 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e13309-4ee9-4d1f-8447-93d77bbe56de]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.795 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[44348268-ebc5-40a5-a7eb-a460aa6c8f85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:46 np0005603609 NetworkManager[49064]: <info>  [1769848726.8007] manager: (tap090376c2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/383)
Jan 31 03:38:46 np0005603609 systemd-udevd[300288]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.800 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[18b2be41-e5e9-4f1c-8601-6da6565bf9ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.824 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[0744d81d-9378-4dd2-a0fe-1993d01357fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.828 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d20b7b68-b0d6-4e48-86e2-40553f421cfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:46 np0005603609 NetworkManager[49064]: <info>  [1769848726.8494] device (tap090376c2-a0): carrier: link connected
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.853 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[0576ad36-96d9-45ba-b0d7-922b51f15229]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.867 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e06067-137c-4d52-910a-21c3a770efb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap090376c2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:64:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 896654, 'reachable_time': 20701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300357, 'error': None, 'target': 'ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.881 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[087cef0d-e49c-4800-bc90-66300f26a761]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:640e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 896654, 'tstamp': 896654}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300358, 'error': None, 'target': 'ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.894 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[35ff6d88-8956-459d-99d0-fe7fd2149d2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap090376c2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:64:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 896654, 'reachable_time': 20701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300359, 'error': None, 'target': 'ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.913 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[85e0e98c-264e-4a13-ab0e-82f957423194]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.966 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0d92d125-7f33-4cb0-9eaa-af0dba2feb49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.968 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap090376c2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.968 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.969 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap090376c2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:38:46 np0005603609 NetworkManager[49064]: <info>  [1769848726.9717] manager: (tap090376c2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Jan 31 03:38:46 np0005603609 nova_compute[221550]: 2026-01-31 08:38:46.970 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:46 np0005603609 kernel: tap090376c2-a0: entered promiscuous mode
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.974 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap090376c2-a0, col_values=(('external_ids', {'iface-id': 'c8a7eefb-b644-411b-b95a-f875570edfa9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:38:46 np0005603609 ovn_controller[130359]: 2026-01-31T08:38:46Z|00834|binding|INFO|Releasing lport c8a7eefb-b644-411b-b95a-f875570edfa9 from this chassis (sb_readonly=0)
Jan 31 03:38:46 np0005603609 nova_compute[221550]: 2026-01-31 08:38:46.979 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.980 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/090376c2-ac34-46f0-acd4-344bb2bc1154.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/090376c2-ac34-46f0-acd4-344bb2bc1154.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.981 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[caaabb45-0dd5-45aa-9c9f-f184ccbece86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.982 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-090376c2-ac34-46f0-acd4-344bb2bc1154
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/090376c2-ac34-46f0-acd4-344bb2bc1154.pid.haproxy
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 090376c2-ac34-46f0-acd4-344bb2bc1154
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:38:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:38:46.982 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154', 'env', 'PROCESS_TAG=haproxy-090376c2-ac34-46f0-acd4-344bb2bc1154', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/090376c2-ac34-46f0-acd4-344bb2bc1154.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:38:47 np0005603609 nova_compute[221550]: 2026-01-31 08:38:47.039 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848727.0381389, 233d3314-7d9d-49a5-818f-909d78422fb9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:38:47 np0005603609 nova_compute[221550]: 2026-01-31 08:38:47.039 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] VM Started (Lifecycle Event)#033[00m
Jan 31 03:38:47 np0005603609 nova_compute[221550]: 2026-01-31 08:38:47.153 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:38:47 np0005603609 nova_compute[221550]: 2026-01-31 08:38:47.158 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848727.0384402, 233d3314-7d9d-49a5-818f-909d78422fb9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:38:47 np0005603609 nova_compute[221550]: 2026-01-31 08:38:47.159 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:38:47 np0005603609 podman[300396]: 2026-01-31 08:38:47.285146352 +0000 UTC m=+0.020837358 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:38:47 np0005603609 nova_compute[221550]: 2026-01-31 08:38:47.385 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:38:47 np0005603609 nova_compute[221550]: 2026-01-31 08:38:47.389 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:38:47 np0005603609 nova_compute[221550]: 2026-01-31 08:38:47.444 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:47.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:47 np0005603609 podman[300396]: 2026-01-31 08:38:47.641078104 +0000 UTC m=+0.376769100 container create 9572a869c6724f21f3c7788c6336c2f3ae3a1c78dd3fef4a8e2e576e4102a0ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 03:38:47 np0005603609 systemd[1]: Started libpod-conmon-9572a869c6724f21f3c7788c6336c2f3ae3a1c78dd3fef4a8e2e576e4102a0ab.scope.
Jan 31 03:38:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:47.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:47 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:38:47 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f64b7e9257aa18e04f38daca0a0b145ac04fa5460dc62ad1c9599649223a55/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:47 np0005603609 nova_compute[221550]: 2026-01-31 08:38:47.755 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:38:47 np0005603609 podman[300396]: 2026-01-31 08:38:47.806830247 +0000 UTC m=+0.542521233 container init 9572a869c6724f21f3c7788c6336c2f3ae3a1c78dd3fef4a8e2e576e4102a0ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:38:47 np0005603609 podman[300396]: 2026-01-31 08:38:47.813204873 +0000 UTC m=+0.548895859 container start 9572a869c6724f21f3c7788c6336c2f3ae3a1c78dd3fef4a8e2e576e4102a0ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:38:47 np0005603609 neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154[300411]: [NOTICE]   (300416) : New worker (300418) forked
Jan 31 03:38:47 np0005603609 neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154[300411]: [NOTICE]   (300416) : Loading success.
Jan 31 03:38:48 np0005603609 nova_compute[221550]: 2026-01-31 08:38:48.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:48 np0005603609 nova_compute[221550]: 2026-01-31 08:38:48.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:49.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:49.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:51 np0005603609 nova_compute[221550]: 2026-01-31 08:38:51.558 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:51.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:51.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:52 np0005603609 nova_compute[221550]: 2026-01-31 08:38:52.445 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:53.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:53.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:54 np0005603609 nova_compute[221550]: 2026-01-31 08:38:54.588 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:54 np0005603609 nova_compute[221550]: 2026-01-31 08:38:54.588 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:54 np0005603609 nova_compute[221550]: 2026-01-31 08:38:54.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:54 np0005603609 nova_compute[221550]: 2026-01-31 08:38:54.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:38:54 np0005603609 nova_compute[221550]: 2026-01-31 08:38:54.889 221554 DEBUG nova.compute.manager [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:38:55 np0005603609 nova_compute[221550]: 2026-01-31 08:38:55.197 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:55 np0005603609 nova_compute[221550]: 2026-01-31 08:38:55.198 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:55 np0005603609 nova_compute[221550]: 2026-01-31 08:38:55.208 221554 DEBUG nova.virt.hardware [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:38:55 np0005603609 nova_compute[221550]: 2026-01-31 08:38:55.208 221554 INFO nova.compute.claims [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:38:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:38:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:55.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:38:55 np0005603609 nova_compute[221550]: 2026-01-31 08:38:55.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:55 np0005603609 nova_compute[221550]: 2026-01-31 08:38:55.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:38:55 np0005603609 nova_compute[221550]: 2026-01-31 08:38:55.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:38:55 np0005603609 nova_compute[221550]: 2026-01-31 08:38:55.717 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:38:55 np0005603609 nova_compute[221550]: 2026-01-31 08:38:55.717 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:38:55 np0005603609 nova_compute[221550]: 2026-01-31 08:38:55.718 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:38:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:55.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.039 221554 DEBUG nova.compute.manager [req-d2573328-69b8-4665-a4bf-8df47a44ff20 req-6bd74179-1d4c-4e4c-9e44-ef433107d62c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Received event network-vif-plugged-93519b56-e167-470c-9933-432512222982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.040 221554 DEBUG oslo_concurrency.lockutils [req-d2573328-69b8-4665-a4bf-8df47a44ff20 req-6bd74179-1d4c-4e4c-9e44-ef433107d62c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.040 221554 DEBUG oslo_concurrency.lockutils [req-d2573328-69b8-4665-a4bf-8df47a44ff20 req-6bd74179-1d4c-4e4c-9e44-ef433107d62c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.040 221554 DEBUG oslo_concurrency.lockutils [req-d2573328-69b8-4665-a4bf-8df47a44ff20 req-6bd74179-1d4c-4e4c-9e44-ef433107d62c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.040 221554 DEBUG nova.compute.manager [req-d2573328-69b8-4665-a4bf-8df47a44ff20 req-6bd74179-1d4c-4e4c-9e44-ef433107d62c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Processing event network-vif-plugged-93519b56-e167-470c-9933-432512222982 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.041 221554 DEBUG nova.compute.manager [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Instance event wait completed in 9 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.047 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848736.0469396, 233d3314-7d9d-49a5-818f-909d78422fb9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.048 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.049 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.053 221554 INFO nova.virt.libvirt.driver [-] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Instance spawned successfully.#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.054 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.089 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.093 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.169 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.170 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.171 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.171 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.171 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.172 221554 DEBUG nova.virt.libvirt.driver [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.184 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.214 221554 DEBUG oslo_concurrency.processutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.554 221554 INFO nova.compute.manager [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Took 18.71 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.554 221554 DEBUG nova.compute.manager [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.562 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.654 221554 DEBUG oslo_concurrency.processutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.660 221554 DEBUG nova.compute.provider_tree [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:38:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.901 221554 DEBUG nova.scheduler.client.report [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:38:56 np0005603609 nova_compute[221550]: 2026-01-31 08:38:56.962 221554 INFO nova.compute.manager [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Took 22.67 seconds to build instance.#033[00m
Jan 31 03:38:57 np0005603609 podman[300451]: 2026-01-31 08:38:57.166856662 +0000 UTC m=+0.048221864 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Jan 31 03:38:57 np0005603609 podman[300450]: 2026-01-31 08:38:57.18937308 +0000 UTC m=+0.072336691 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:38:57 np0005603609 nova_compute[221550]: 2026-01-31 08:38:57.404 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:57 np0005603609 nova_compute[221550]: 2026-01-31 08:38:57.404 221554 DEBUG nova.compute.manager [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:38:57 np0005603609 nova_compute[221550]: 2026-01-31 08:38:57.446 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:38:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:57.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:38:57 np0005603609 nova_compute[221550]: 2026-01-31 08:38:57.617 221554 DEBUG oslo_concurrency.lockutils [None req-3531c767-8f76-47c6-b55b-83f4e0475126 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:57 np0005603609 nova_compute[221550]: 2026-01-31 08:38:57.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:57 np0005603609 nova_compute[221550]: 2026-01-31 08:38:57.747 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:57 np0005603609 nova_compute[221550]: 2026-01-31 08:38:57.748 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:57 np0005603609 nova_compute[221550]: 2026-01-31 08:38:57.748 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:57 np0005603609 nova_compute[221550]: 2026-01-31 08:38:57.748 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:38:57 np0005603609 nova_compute[221550]: 2026-01-31 08:38:57.749 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:38:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:57.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:38:57 np0005603609 nova_compute[221550]: 2026-01-31 08:38:57.788 221554 DEBUG nova.compute.manager [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:38:57 np0005603609 nova_compute[221550]: 2026-01-31 08:38:57.788 221554 DEBUG nova.network.neutron [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:38:58 np0005603609 nova_compute[221550]: 2026-01-31 08:38:58.008 221554 INFO nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:38:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:38:58 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/409800787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:38:58 np0005603609 nova_compute[221550]: 2026-01-31 08:38:58.185 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:58 np0005603609 nova_compute[221550]: 2026-01-31 08:38:58.296 221554 DEBUG nova.compute.manager [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:38:58 np0005603609 nova_compute[221550]: 2026-01-31 08:38:58.528 221554 DEBUG nova.compute.manager [req-f05ffe86-e935-4aa3-a40a-de1324c89e14 req-d9b0b996-4671-43ed-8428-89a16e889d56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Received event network-vif-plugged-93519b56-e167-470c-9933-432512222982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:38:58 np0005603609 nova_compute[221550]: 2026-01-31 08:38:58.529 221554 DEBUG oslo_concurrency.lockutils [req-f05ffe86-e935-4aa3-a40a-de1324c89e14 req-d9b0b996-4671-43ed-8428-89a16e889d56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:58 np0005603609 nova_compute[221550]: 2026-01-31 08:38:58.529 221554 DEBUG oslo_concurrency.lockutils [req-f05ffe86-e935-4aa3-a40a-de1324c89e14 req-d9b0b996-4671-43ed-8428-89a16e889d56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:58 np0005603609 nova_compute[221550]: 2026-01-31 08:38:58.529 221554 DEBUG oslo_concurrency.lockutils [req-f05ffe86-e935-4aa3-a40a-de1324c89e14 req-d9b0b996-4671-43ed-8428-89a16e889d56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:58 np0005603609 nova_compute[221550]: 2026-01-31 08:38:58.530 221554 DEBUG nova.compute.manager [req-f05ffe86-e935-4aa3-a40a-de1324c89e14 req-d9b0b996-4671-43ed-8428-89a16e889d56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] No waiting events found dispatching network-vif-plugged-93519b56-e167-470c-9933-432512222982 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:38:58 np0005603609 nova_compute[221550]: 2026-01-31 08:38:58.530 221554 WARNING nova.compute.manager [req-f05ffe86-e935-4aa3-a40a-de1324c89e14 req-d9b0b996-4671-43ed-8428-89a16e889d56 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Received unexpected event network-vif-plugged-93519b56-e167-470c-9933-432512222982 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:38:58 np0005603609 nova_compute[221550]: 2026-01-31 08:38:58.607 221554 DEBUG nova.policy [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d0e9d918b4041fabd5ded633b4cf404', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9710f0cf77d84353ae13fa47922b085d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:38:59 np0005603609 nova_compute[221550]: 2026-01-31 08:38:59.386 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:38:59 np0005603609 nova_compute[221550]: 2026-01-31 08:38:59.386 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:38:59 np0005603609 nova_compute[221550]: 2026-01-31 08:38:59.534 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:38:59 np0005603609 nova_compute[221550]: 2026-01-31 08:38:59.535 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4087MB free_disk=20.943286895751953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:38:59 np0005603609 nova_compute[221550]: 2026-01-31 08:38:59.535 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:59 np0005603609 nova_compute[221550]: 2026-01-31 08:38:59.536 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:59.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:38:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:59.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:00 np0005603609 nova_compute[221550]: 2026-01-31 08:39:00.042 221554 DEBUG oslo_concurrency.lockutils [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "233d3314-7d9d-49a5-818f-909d78422fb9" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:00 np0005603609 nova_compute[221550]: 2026-01-31 08:39:00.042 221554 DEBUG oslo_concurrency.lockutils [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:00 np0005603609 nova_compute[221550]: 2026-01-31 08:39:00.522 221554 DEBUG nova.compute.manager [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:39:00 np0005603609 nova_compute[221550]: 2026-01-31 08:39:00.523 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:39:00 np0005603609 nova_compute[221550]: 2026-01-31 08:39:00.524 221554 INFO nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Creating image(s)#033[00m
Jan 31 03:39:00 np0005603609 nova_compute[221550]: 2026-01-31 08:39:00.551 221554 DEBUG nova.storage.rbd_utils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 6193a6a7-8918-4625-b626-5d53c31bf0ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:39:00 np0005603609 nova_compute[221550]: 2026-01-31 08:39:00.583 221554 DEBUG nova.storage.rbd_utils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 6193a6a7-8918-4625-b626-5d53c31bf0ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:39:00 np0005603609 nova_compute[221550]: 2026-01-31 08:39:00.613 221554 DEBUG nova.storage.rbd_utils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 6193a6a7-8918-4625-b626-5d53c31bf0ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:39:00 np0005603609 nova_compute[221550]: 2026-01-31 08:39:00.616 221554 DEBUG oslo_concurrency.processutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:00 np0005603609 nova_compute[221550]: 2026-01-31 08:39:00.670 221554 DEBUG oslo_concurrency.processutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:00 np0005603609 nova_compute[221550]: 2026-01-31 08:39:00.671 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:00 np0005603609 nova_compute[221550]: 2026-01-31 08:39:00.672 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:00 np0005603609 nova_compute[221550]: 2026-01-31 08:39:00.672 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:00 np0005603609 nova_compute[221550]: 2026-01-31 08:39:00.698 221554 DEBUG nova.storage.rbd_utils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 6193a6a7-8918-4625-b626-5d53c31bf0ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:39:00 np0005603609 nova_compute[221550]: 2026-01-31 08:39:00.701 221554 DEBUG oslo_concurrency.processutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 6193a6a7-8918-4625-b626-5d53c31bf0ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:00 np0005603609 nova_compute[221550]: 2026-01-31 08:39:00.813 221554 DEBUG nova.objects.instance [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 233d3314-7d9d-49a5-818f-909d78422fb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:39:01 np0005603609 nova_compute[221550]: 2026-01-31 08:39:01.416 221554 DEBUG oslo_concurrency.processutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 6193a6a7-8918-4625-b626-5d53c31bf0ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.714s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:01 np0005603609 nova_compute[221550]: 2026-01-31 08:39:01.486 221554 DEBUG nova.storage.rbd_utils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] resizing rbd image 6193a6a7-8918-4625-b626-5d53c31bf0ff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:39:01 np0005603609 nova_compute[221550]: 2026-01-31 08:39:01.561 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:01 np0005603609 nova_compute[221550]: 2026-01-31 08:39:01.605 221554 DEBUG nova.objects.instance [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'migration_context' on Instance uuid 6193a6a7-8918-4625-b626-5d53c31bf0ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:39:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:01.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:01.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:02 np0005603609 nova_compute[221550]: 2026-01-31 08:39:02.092 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:39:02 np0005603609 nova_compute[221550]: 2026-01-31 08:39:02.092 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Ensure instance console log exists: /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:39:02 np0005603609 nova_compute[221550]: 2026-01-31 08:39:02.093 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:02 np0005603609 nova_compute[221550]: 2026-01-31 08:39:02.094 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:02 np0005603609 nova_compute[221550]: 2026-01-31 08:39:02.094 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:02 np0005603609 nova_compute[221550]: 2026-01-31 08:39:02.414 221554 DEBUG oslo_concurrency.lockutils [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 2.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:02 np0005603609 nova_compute[221550]: 2026-01-31 08:39:02.448 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:03.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:03.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:04.099 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:39:04 np0005603609 nova_compute[221550]: 2026-01-31 08:39:04.099 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:04.100 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:39:04 np0005603609 nova_compute[221550]: 2026-01-31 08:39:04.942 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 233d3314-7d9d-49a5-818f-909d78422fb9 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:39:04 np0005603609 nova_compute[221550]: 2026-01-31 08:39:04.942 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 6193a6a7-8918-4625-b626-5d53c31bf0ff actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:39:04 np0005603609 nova_compute[221550]: 2026-01-31 08:39:04.943 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:39:04 np0005603609 nova_compute[221550]: 2026-01-31 08:39:04.943 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:39:05 np0005603609 nova_compute[221550]: 2026-01-31 08:39:05.305 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:05.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:39:05 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2964727998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:39:05 np0005603609 nova_compute[221550]: 2026-01-31 08:39:05.723 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:05 np0005603609 nova_compute[221550]: 2026-01-31 08:39:05.727 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:39:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:05.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:06 np0005603609 nova_compute[221550]: 2026-01-31 08:39:06.041 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:39:06 np0005603609 nova_compute[221550]: 2026-01-31 08:39:06.563 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:07 np0005603609 nova_compute[221550]: 2026-01-31 08:39:07.160 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:39:07 np0005603609 nova_compute[221550]: 2026-01-31 08:39:07.160 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 7.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:07 np0005603609 nova_compute[221550]: 2026-01-31 08:39:07.451 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:07.531 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:07.532 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:07.533 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:07.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:07 np0005603609 nova_compute[221550]: 2026-01-31 08:39:07.687 221554 DEBUG nova.network.neutron [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Successfully created port: 1950c089-af27-485c-8b1d-eefa73ac5064 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:39:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:07.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.622 221554 DEBUG oslo_concurrency.lockutils [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "233d3314-7d9d-49a5-818f-909d78422fb9" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.623 221554 DEBUG oslo_concurrency.lockutils [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.623 221554 INFO nova.compute.manager [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Attaching volume be0b3094-6fbc-448a-804b-4a365824d522 to /dev/vdb#033[00m
Jan 31 03:39:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:39:08Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:44:32 10.100.0.4
Jan 31 03:39:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:39:08Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:44:32 10.100.0.4
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.882 221554 DEBUG os_brick.utils [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.884 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.893 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.894 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[70215c52-afc1-4999-90c8-17e78370e186]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.895 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.901 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.901 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[1326f6ac-ab3e-4f8a-8027-5e69283ccaa8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.902 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.909 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.909 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[73b1a628-df8f-4364-9846-336a46fabc8c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.911 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b441a8-e704-41a9-a0d9-35424a248d09]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.911 221554 DEBUG oslo_concurrency.processutils [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.935 221554 DEBUG oslo_concurrency.processutils [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.937 221554 DEBUG os_brick.initiator.connectors.lightos [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.937 221554 DEBUG os_brick.initiator.connectors.lightos [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.937 221554 DEBUG os_brick.initiator.connectors.lightos [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.937 221554 DEBUG os_brick.utils [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] <== get_connector_properties: return (54ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:39:08 np0005603609 nova_compute[221550]: 2026-01-31 08:39:08.938 221554 DEBUG nova.virt.block_device [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updating existing volume attachment record: b5494f9d-2db0-4fde-95c9-dc967b04b9a1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:39:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:09.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:39:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:09.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:39:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:39:10 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2241056019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:39:10 np0005603609 nova_compute[221550]: 2026-01-31 08:39:10.338 221554 DEBUG nova.objects.instance [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 233d3314-7d9d-49a5-818f-909d78422fb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:39:10 np0005603609 nova_compute[221550]: 2026-01-31 08:39:10.404 221554 DEBUG nova.virt.libvirt.driver [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Attempting to attach volume be0b3094-6fbc-448a-804b-4a365824d522 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 03:39:10 np0005603609 nova_compute[221550]: 2026-01-31 08:39:10.407 221554 DEBUG nova.virt.libvirt.guest [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 03:39:10 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:39:10 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-be0b3094-6fbc-448a-804b-4a365824d522">
Jan 31 03:39:10 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:39:10 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:39:10 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:39:10 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:39:10 np0005603609 nova_compute[221550]:  <auth username="openstack">
Jan 31 03:39:10 np0005603609 nova_compute[221550]:    <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:39:10 np0005603609 nova_compute[221550]:  </auth>
Jan 31 03:39:10 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:39:10 np0005603609 nova_compute[221550]:  <serial>be0b3094-6fbc-448a-804b-4a365824d522</serial>
Jan 31 03:39:10 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:39:10 np0005603609 nova_compute[221550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:39:10 np0005603609 nova_compute[221550]: 2026-01-31 08:39:10.585 221554 DEBUG nova.virt.libvirt.driver [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:39:10 np0005603609 nova_compute[221550]: 2026-01-31 08:39:10.586 221554 DEBUG nova.virt.libvirt.driver [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:39:10 np0005603609 nova_compute[221550]: 2026-01-31 08:39:10.586 221554 DEBUG nova.virt.libvirt.driver [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:39:10 np0005603609 nova_compute[221550]: 2026-01-31 08:39:10.586 221554 DEBUG nova.virt.libvirt.driver [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No VIF found with MAC fa:16:3e:20:44:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:39:11 np0005603609 nova_compute[221550]: 2026-01-31 08:39:11.216 221554 DEBUG nova.network.neutron [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Successfully updated port: 1950c089-af27-485c-8b1d-eefa73ac5064 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:39:11 np0005603609 nova_compute[221550]: 2026-01-31 08:39:11.250 221554 DEBUG oslo_concurrency.lockutils [None req-4fbcee70-1f4b-4245-94db-1a8049269e96 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:11 np0005603609 nova_compute[221550]: 2026-01-31 08:39:11.252 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:39:11 np0005603609 nova_compute[221550]: 2026-01-31 08:39:11.252 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:39:11 np0005603609 nova_compute[221550]: 2026-01-31 08:39:11.252 221554 DEBUG nova.network.neutron [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:39:11 np0005603609 nova_compute[221550]: 2026-01-31 08:39:11.425 221554 DEBUG nova.compute.manager [req-98af96ad-9429-470a-87c1-66bd5fbb7593 req-3e70ffe7-3871-4ffe-8486-b36a52cb0750 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-changed-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:39:11 np0005603609 nova_compute[221550]: 2026-01-31 08:39:11.425 221554 DEBUG nova.compute.manager [req-98af96ad-9429-470a-87c1-66bd5fbb7593 req-3e70ffe7-3871-4ffe-8486-b36a52cb0750 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Refreshing instance network info cache due to event network-changed-1950c089-af27-485c-8b1d-eefa73ac5064. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:39:11 np0005603609 nova_compute[221550]: 2026-01-31 08:39:11.425 221554 DEBUG oslo_concurrency.lockutils [req-98af96ad-9429-470a-87c1-66bd5fbb7593 req-3e70ffe7-3871-4ffe-8486-b36a52cb0750 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:39:11 np0005603609 nova_compute[221550]: 2026-01-31 08:39:11.564 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:39:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:11.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:39:11 np0005603609 nova_compute[221550]: 2026-01-31 08:39:11.667 221554 DEBUG nova.network.neutron [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:39:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:11.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:12 np0005603609 nova_compute[221550]: 2026-01-31 08:39:12.367 221554 DEBUG oslo_concurrency.lockutils [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "233d3314-7d9d-49a5-818f-909d78422fb9" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:12 np0005603609 nova_compute[221550]: 2026-01-31 08:39:12.368 221554 DEBUG oslo_concurrency.lockutils [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:12 np0005603609 nova_compute[221550]: 2026-01-31 08:39:12.404 221554 DEBUG nova.objects.instance [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 233d3314-7d9d-49a5-818f-909d78422fb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:39:12 np0005603609 nova_compute[221550]: 2026-01-31 08:39:12.453 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:12 np0005603609 nova_compute[221550]: 2026-01-31 08:39:12.488 221554 DEBUG oslo_concurrency.lockutils [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:12 np0005603609 nova_compute[221550]: 2026-01-31 08:39:12.988 221554 DEBUG oslo_concurrency.lockutils [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "233d3314-7d9d-49a5-818f-909d78422fb9" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:12 np0005603609 nova_compute[221550]: 2026-01-31 08:39:12.988 221554 DEBUG oslo_concurrency.lockutils [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:12 np0005603609 nova_compute[221550]: 2026-01-31 08:39:12.989 221554 INFO nova.compute.manager [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Attaching volume 67e8063c-e5d3-4497-bf93-33d59b6a9eb4 to /dev/vdc#033[00m
Jan 31 03:39:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:13.102 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.155 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.156 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.156 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.179 221554 DEBUG nova.network.neutron [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updating instance_info_cache with network_info: [{"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.210 221554 DEBUG os_brick.utils [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.211 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.218 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.218 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[c67a2434-8a63-4289-a87a-344a8dc6163a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.220 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.225 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.225 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[bc767cd1-d2ca-489a-a34f-3923289167a6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.226 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.231 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.232 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[8b47772b-3501-490b-afae-17246262aa6a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.232 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[340121f6-d6cf-42ac-8510-69bb016892f0]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.233 221554 DEBUG oslo_concurrency.processutils [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.251 221554 DEBUG oslo_concurrency.processutils [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.255 221554 DEBUG os_brick.initiator.connectors.lightos [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.256 221554 DEBUG os_brick.initiator.connectors.lightos [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.256 221554 DEBUG os_brick.initiator.connectors.lightos [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.257 221554 DEBUG os_brick.utils [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] <== get_connector_properties: return (46ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.257 221554 DEBUG nova.virt.block_device [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updating existing volume attachment record: 36f527e6-fb0a-420f-a7d0-bbc889038fc1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.278 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.278 221554 DEBUG nova.compute.manager [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Instance network_info: |[{"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.278 221554 DEBUG oslo_concurrency.lockutils [req-98af96ad-9429-470a-87c1-66bd5fbb7593 req-3e70ffe7-3871-4ffe-8486-b36a52cb0750 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.279 221554 DEBUG nova.network.neutron [req-98af96ad-9429-470a-87c1-66bd5fbb7593 req-3e70ffe7-3871-4ffe-8486-b36a52cb0750 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Refreshing network info cache for port 1950c089-af27-485c-8b1d-eefa73ac5064 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.281 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Start _get_guest_xml network_info=[{"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.284 221554 WARNING nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.288 221554 DEBUG nova.virt.libvirt.host [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.289 221554 DEBUG nova.virt.libvirt.host [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.291 221554 DEBUG nova.virt.libvirt.host [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.292 221554 DEBUG nova.virt.libvirt.host [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.293 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.293 221554 DEBUG nova.virt.hardware [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.293 221554 DEBUG nova.virt.hardware [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.294 221554 DEBUG nova.virt.hardware [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.294 221554 DEBUG nova.virt.hardware [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.294 221554 DEBUG nova.virt.hardware [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.294 221554 DEBUG nova.virt.hardware [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.294 221554 DEBUG nova.virt.hardware [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.295 221554 DEBUG nova.virt.hardware [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.295 221554 DEBUG nova.virt.hardware [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.295 221554 DEBUG nova.virt.hardware [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.295 221554 DEBUG nova.virt.hardware [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.298 221554 DEBUG oslo_concurrency.processutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:39:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:13.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:39:13 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3098512103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.739 221554 DEBUG oslo_concurrency.processutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.768 221554 DEBUG nova.storage.rbd_utils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 6193a6a7-8918-4625-b626-5d53c31bf0ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:39:13 np0005603609 nova_compute[221550]: 2026-01-31 08:39:13.773 221554 DEBUG oslo_concurrency.processutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:39:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:13.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:39:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1072869473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.256 221554 DEBUG oslo_concurrency.processutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.258 221554 DEBUG nova.virt.libvirt.vif [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:38:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1044697057',display_name='tempest-TestNetworkAdvancedServerOps-server-1044697057',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1044697057',id=186,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFM+PhejEoD7g/QWN1GYstJMXVFnv1wBVmxecXIfvr4KjdawbuS6IVqr3y8FzKBH2jXkdyk7w4kVVSMOQCA1tqgtHQYS5flQfReRJgL/KNYlba9qTCk6eFD00Tadz9cy3w==',key_name='tempest-TestNetworkAdvancedServerOps-1072390140',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-hmxtgjg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:38:58Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=6193a6a7-8918-4625-b626-5d53c31bf0ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.259 221554 DEBUG nova.network.os_vif_util [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.260 221554 DEBUG nova.network.os_vif_util [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.261 221554 DEBUG nova.objects.instance [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'pci_devices' on Instance uuid 6193a6a7-8918-4625-b626-5d53c31bf0ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.567 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  <uuid>6193a6a7-8918-4625-b626-5d53c31bf0ff</uuid>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  <name>instance-000000ba</name>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1044697057</nova:name>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:39:13</nova:creationTime>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        <nova:user uuid="4d0e9d918b4041fabd5ded633b4cf404">tempest-TestNetworkAdvancedServerOps-483180749-project-member</nova:user>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        <nova:project uuid="9710f0cf77d84353ae13fa47922b085d">tempest-TestNetworkAdvancedServerOps-483180749</nova:project>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        <nova:port uuid="1950c089-af27-485c-8b1d-eefa73ac5064">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <entry name="serial">6193a6a7-8918-4625-b626-5d53c31bf0ff</entry>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <entry name="uuid">6193a6a7-8918-4625-b626-5d53c31bf0ff</entry>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/6193a6a7-8918-4625-b626-5d53c31bf0ff_disk">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/6193a6a7-8918-4625-b626-5d53c31bf0ff_disk.config">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:43:d3:f9"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <target dev="tap1950c089-af"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff/console.log" append="off"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:39:14 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:39:14 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:39:14 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:39:14 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.569 221554 DEBUG nova.compute.manager [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Preparing to wait for external event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.570 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.570 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.571 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.572 221554 DEBUG nova.virt.libvirt.vif [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:38:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1044697057',display_name='tempest-TestNetworkAdvancedServerOps-server-1044697057',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1044697057',id=186,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFM+PhejEoD7g/QWN1GYstJMXVFnv1wBVmxecXIfvr4KjdawbuS6IVqr3y8FzKBH2jXkdyk7w4kVVSMOQCA1tqgtHQYS5flQfReRJgL/KNYlba9qTCk6eFD00Tadz9cy3w==',key_name='tempest-TestNetworkAdvancedServerOps-1072390140',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-hmxtgjg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:38:58Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=6193a6a7-8918-4625-b626-5d53c31bf0ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.573 221554 DEBUG nova.network.os_vif_util [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.574 221554 DEBUG nova.network.os_vif_util [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.574 221554 DEBUG os_vif [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.575 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.576 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.577 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.581 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.582 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1950c089-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.582 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1950c089-af, col_values=(('external_ids', {'iface-id': '1950c089-af27-485c-8b1d-eefa73ac5064', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:d3:f9', 'vm-uuid': '6193a6a7-8918-4625-b626-5d53c31bf0ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.585 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:14 np0005603609 NetworkManager[49064]: <info>  [1769848754.5862] manager: (tap1950c089-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.588 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.595 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.596 221554 INFO os_vif [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af')#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.721 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.722 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.722 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No VIF found with MAC fa:16:3e:43:d3:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.723 221554 INFO nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Using config drive#033[00m
Jan 31 03:39:14 np0005603609 nova_compute[221550]: 2026-01-31 08:39:14.784 221554 DEBUG nova.storage.rbd_utils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 6193a6a7-8918-4625-b626-5d53c31bf0ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:39:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:39:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1354848748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:39:15 np0005603609 nova_compute[221550]: 2026-01-31 08:39:15.582 221554 DEBUG nova.network.neutron [req-98af96ad-9429-470a-87c1-66bd5fbb7593 req-3e70ffe7-3871-4ffe-8486-b36a52cb0750 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updated VIF entry in instance network info cache for port 1950c089-af27-485c-8b1d-eefa73ac5064. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:39:15 np0005603609 nova_compute[221550]: 2026-01-31 08:39:15.583 221554 DEBUG nova.network.neutron [req-98af96ad-9429-470a-87c1-66bd5fbb7593 req-3e70ffe7-3871-4ffe-8486-b36a52cb0750 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updating instance_info_cache with network_info: [{"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:39:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:15.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:15 np0005603609 nova_compute[221550]: 2026-01-31 08:39:15.761 221554 DEBUG oslo_concurrency.lockutils [req-98af96ad-9429-470a-87c1-66bd5fbb7593 req-3e70ffe7-3871-4ffe-8486-b36a52cb0750 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:39:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:15.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:15 np0005603609 nova_compute[221550]: 2026-01-31 08:39:15.828 221554 INFO nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Creating config drive at /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff/disk.config#033[00m
Jan 31 03:39:15 np0005603609 nova_compute[221550]: 2026-01-31 08:39:15.831 221554 DEBUG oslo_concurrency.processutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpajhp43d5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:15 np0005603609 nova_compute[221550]: 2026-01-31 08:39:15.956 221554 DEBUG oslo_concurrency.processutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpajhp43d5" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:16 np0005603609 nova_compute[221550]: 2026-01-31 08:39:16.201 221554 DEBUG nova.storage.rbd_utils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 6193a6a7-8918-4625-b626-5d53c31bf0ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:39:16 np0005603609 nova_compute[221550]: 2026-01-31 08:39:16.204 221554 DEBUG oslo_concurrency.processutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff/disk.config 6193a6a7-8918-4625-b626-5d53c31bf0ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:16 np0005603609 nova_compute[221550]: 2026-01-31 08:39:16.233 221554 DEBUG nova.objects.instance [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 233d3314-7d9d-49a5-818f-909d78422fb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:39:16 np0005603609 nova_compute[221550]: 2026-01-31 08:39:16.269 221554 DEBUG nova.virt.libvirt.driver [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Attempting to attach volume 67e8063c-e5d3-4497-bf93-33d59b6a9eb4 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 03:39:16 np0005603609 nova_compute[221550]: 2026-01-31 08:39:16.272 221554 DEBUG nova.virt.libvirt.guest [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 03:39:16 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:39:16 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-67e8063c-e5d3-4497-bf93-33d59b6a9eb4">
Jan 31 03:39:16 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:39:16 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:39:16 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:39:16 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:39:16 np0005603609 nova_compute[221550]:  <auth username="openstack">
Jan 31 03:39:16 np0005603609 nova_compute[221550]:    <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:39:16 np0005603609 nova_compute[221550]:  </auth>
Jan 31 03:39:16 np0005603609 nova_compute[221550]:  <target dev="vdc" bus="virtio"/>
Jan 31 03:39:16 np0005603609 nova_compute[221550]:  <serial>67e8063c-e5d3-4497-bf93-33d59b6a9eb4</serial>
Jan 31 03:39:16 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:39:16 np0005603609 nova_compute[221550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:39:16 np0005603609 nova_compute[221550]: 2026-01-31 08:39:16.565 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:16 np0005603609 nova_compute[221550]: 2026-01-31 08:39:16.834 221554 DEBUG nova.virt.libvirt.driver [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:39:16 np0005603609 nova_compute[221550]: 2026-01-31 08:39:16.834 221554 DEBUG nova.virt.libvirt.driver [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:39:16 np0005603609 nova_compute[221550]: 2026-01-31 08:39:16.834 221554 DEBUG nova.virt.libvirt.driver [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:39:16 np0005603609 nova_compute[221550]: 2026-01-31 08:39:16.835 221554 DEBUG nova.virt.libvirt.driver [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:39:16 np0005603609 nova_compute[221550]: 2026-01-31 08:39:16.835 221554 DEBUG nova.virt.libvirt.driver [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No VIF found with MAC fa:16:3e:20:44:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:39:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.021 221554 DEBUG oslo_concurrency.processutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff/disk.config 6193a6a7-8918-4625-b626-5d53c31bf0ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.817s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.023 221554 INFO nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Deleting local config drive /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff/disk.config because it was imported into RBD.#033[00m
Jan 31 03:39:17 np0005603609 kernel: tap1950c089-af: entered promiscuous mode
Jan 31 03:39:17 np0005603609 NetworkManager[49064]: <info>  [1769848757.0791] manager: (tap1950c089-af): new Tun device (/org/freedesktop/NetworkManager/Devices/386)
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.080 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:39:17Z|00835|binding|INFO|Claiming lport 1950c089-af27-485c-8b1d-eefa73ac5064 for this chassis.
Jan 31 03:39:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:39:17Z|00836|binding|INFO|1950c089-af27-485c-8b1d-eefa73ac5064: Claiming fa:16:3e:43:d3:f9 10.100.0.8
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.086 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.090 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:39:17Z|00837|binding|INFO|Setting lport 1950c089-af27-485c-8b1d-eefa73ac5064 ovn-installed in OVS
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.111 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:17 np0005603609 systemd-udevd[300895]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:39:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:39:17Z|00838|binding|INFO|Setting lport 1950c089-af27-485c-8b1d-eefa73ac5064 up in Southbound
Jan 31 03:39:17 np0005603609 systemd-machined[190912]: New machine qemu-101-instance-000000ba.
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.121 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:d3:f9 10.100.0.8'], port_security=['fa:16:3e:43:d3:f9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6193a6a7-8918-4625-b626-5d53c31bf0ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-afa5ed82-7034-4517-a39e-5fc8b872592f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1618d12c-038a-456c-8d56-5eb45e7b0852', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=002a439c-290e-44cc-a5b6-bc1f707c1ea1, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=1950c089-af27-485c-8b1d-eefa73ac5064) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.122 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 1950c089-af27-485c-8b1d-eefa73ac5064 in datapath afa5ed82-7034-4517-a39e-5fc8b872592f bound to our chassis#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.124 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network afa5ed82-7034-4517-a39e-5fc8b872592f#033[00m
Jan 31 03:39:17 np0005603609 NetworkManager[49064]: <info>  [1769848757.1324] device (tap1950c089-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:39:17 np0005603609 NetworkManager[49064]: <info>  [1769848757.1333] device (tap1950c089-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.135 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c205f88e-7e62-4565-afbe-1f1df572b33f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.136 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapafa5ed82-71 in ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:39:17 np0005603609 systemd[1]: Started Virtual Machine qemu-101-instance-000000ba.
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.139 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapafa5ed82-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.139 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[128f4a49-a387-4f62-b061-235c6884ea7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.140 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fc825b47-207f-4ddf-a3d6-50575f0b3a47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.153 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5fefac-e052-41ba-900f-739bb39eb9d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.179 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7de546bc-7358-4509-8f88-30e373bba3ba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.210 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[ed76abcb-2c9c-4fe9-b108-38e887dec2a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:17 np0005603609 systemd-udevd[300898]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:39:17 np0005603609 NetworkManager[49064]: <info>  [1769848757.2194] manager: (tapafa5ed82-70): new Veth device (/org/freedesktop/NetworkManager/Devices/387)
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.219 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ea650110-09c2-40dc-8b95-eb43ef6c7337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.251 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d30e83e4-0151-4f61-b23f-ba32c9763a1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.255 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0a665a-66ce-456a-b723-f2d94fd378e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:17 np0005603609 NetworkManager[49064]: <info>  [1769848757.2793] device (tapafa5ed82-70): carrier: link connected
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.286 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c54be0-dd7e-470e-b2cb-a2365deb86a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.306 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e597269e-d0af-4edd-a562-64d613092bd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapafa5ed82-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:81:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 899697, 'reachable_time': 41380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300928, 'error': None, 'target': 'ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.322 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[055c3fbc-7c96-4715-9a0b-227610851662]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:815d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 899697, 'tstamp': 899697}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300929, 'error': None, 'target': 'ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.336 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fa122a70-9bc7-49e4-91f3-ebab02437e4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapafa5ed82-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:81:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 899697, 'reachable_time': 41380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300930, 'error': None, 'target': 'ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.386 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6c81ebfb-183b-4303-8992-8640ea190e0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.445 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7663d064-76b3-4358-9b3c-a6ccccc777a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.446 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapafa5ed82-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.447 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.447 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapafa5ed82-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.449 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:17 np0005603609 NetworkManager[49064]: <info>  [1769848757.4503] manager: (tapafa5ed82-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Jan 31 03:39:17 np0005603609 kernel: tapafa5ed82-70: entered promiscuous mode
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.452 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.456 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapafa5ed82-70, col_values=(('external_ids', {'iface-id': '4738d457-86b5-496e-a4f4-02b685103bfd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.458 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:39:17Z|00839|binding|INFO|Releasing lport 4738d457-86b5-496e-a4f4-02b685103bfd from this chassis (sb_readonly=0)
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.467 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.469 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/afa5ed82-7034-4517-a39e-5fc8b872592f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/afa5ed82-7034-4517-a39e-5fc8b872592f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.470 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[95fa12c7-7d5a-4fab-ae4b-de1969210a15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.472 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-afa5ed82-7034-4517-a39e-5fc8b872592f
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/afa5ed82-7034-4517-a39e-5fc8b872592f.pid.haproxy
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID afa5ed82-7034-4517-a39e-5fc8b872592f
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:39:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:39:17.473 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f', 'env', 'PROCESS_TAG=haproxy-afa5ed82-7034-4517-a39e-5fc8b872592f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/afa5ed82-7034-4517-a39e-5fc8b872592f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:39:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:39:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:17.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.635 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848757.6348085, 6193a6a7-8918-4625-b626-5d53c31bf0ff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.636 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] VM Started (Lifecycle Event)#033[00m
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.741 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.746 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848757.6355982, 6193a6a7-8918-4625-b626-5d53c31bf0ff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.747 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:39:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:17.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.861 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.865 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:39:17 np0005603609 podman[301004]: 2026-01-31 08:39:17.831105477 +0000 UTC m=+0.024615331 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:39:17 np0005603609 nova_compute[221550]: 2026-01-31 08:39:17.953 221554 DEBUG oslo_concurrency.lockutils [None req-73aaef53-769d-44f7-9e7a-90f2311203c1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:18 np0005603609 podman[301004]: 2026-01-31 08:39:18.079149533 +0000 UTC m=+0.272659357 container create b17d46eed0e51b366407bbd96138a7022295923ac8a286618ac9c65c3da5ce01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:39:18 np0005603609 systemd[1]: Started libpod-conmon-b17d46eed0e51b366407bbd96138a7022295923ac8a286618ac9c65c3da5ce01.scope.
Jan 31 03:39:18 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:39:18 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0185134fa8eaf1d38fb3001ab30ebdf129e042853fda7314e8e837e9dde3a45e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:39:18 np0005603609 podman[301004]: 2026-01-31 08:39:18.276809603 +0000 UTC m=+0.470319457 container init b17d46eed0e51b366407bbd96138a7022295923ac8a286618ac9c65c3da5ce01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:39:18 np0005603609 podman[301004]: 2026-01-31 08:39:18.282126212 +0000 UTC m=+0.475636036 container start b17d46eed0e51b366407bbd96138a7022295923ac8a286618ac9c65c3da5ce01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:39:18 np0005603609 neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f[301020]: [NOTICE]   (301024) : New worker (301026) forked
Jan 31 03:39:18 np0005603609 neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f[301020]: [NOTICE]   (301024) : Loading success.
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.308 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.372 221554 DEBUG nova.compute.manager [req-5d3e9b50-47e1-42f4-b613-9fe147f75e19 req-e22cec72-e726-4c23-a0d4-0da7b78063e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.372 221554 DEBUG oslo_concurrency.lockutils [req-5d3e9b50-47e1-42f4-b613-9fe147f75e19 req-e22cec72-e726-4c23-a0d4-0da7b78063e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.372 221554 DEBUG oslo_concurrency.lockutils [req-5d3e9b50-47e1-42f4-b613-9fe147f75e19 req-e22cec72-e726-4c23-a0d4-0da7b78063e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.373 221554 DEBUG oslo_concurrency.lockutils [req-5d3e9b50-47e1-42f4-b613-9fe147f75e19 req-e22cec72-e726-4c23-a0d4-0da7b78063e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.373 221554 DEBUG nova.compute.manager [req-5d3e9b50-47e1-42f4-b613-9fe147f75e19 req-e22cec72-e726-4c23-a0d4-0da7b78063e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Processing event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.373 221554 DEBUG nova.compute.manager [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.377 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848759.3772683, 6193a6a7-8918-4625-b626-5d53c31bf0ff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.377 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.379 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.381 221554 INFO nova.virt.libvirt.driver [-] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Instance spawned successfully.#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.382 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.451 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.455 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.455 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.455 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.456 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.456 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.457 221554 DEBUG nova.virt.libvirt.driver [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.460 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.585 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.609 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:39:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:19.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.708 221554 INFO nova.compute.manager [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Took 19.19 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.708 221554 DEBUG nova.compute.manager [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:39:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:39:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:19.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.881 221554 INFO nova.compute.manager [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Took 24.78 seconds to build instance.#033[00m
Jan 31 03:39:19 np0005603609 nova_compute[221550]: 2026-01-31 08:39:19.921 221554 DEBUG oslo_concurrency.lockutils [None req-724437ac-0a84-4040-ad14-ef16e7e90eb8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:21 np0005603609 nova_compute[221550]: 2026-01-31 08:39:21.569 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:21.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:39:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:21.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:39:21 np0005603609 nova_compute[221550]: 2026-01-31 08:39:21.876 221554 DEBUG nova.compute.manager [req-83ab6ec4-45a3-477f-a807-c0d5b23c789c req-5f2f781a-58ae-494e-83c9-7e90e494b0b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:39:21 np0005603609 nova_compute[221550]: 2026-01-31 08:39:21.876 221554 DEBUG oslo_concurrency.lockutils [req-83ab6ec4-45a3-477f-a807-c0d5b23c789c req-5f2f781a-58ae-494e-83c9-7e90e494b0b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:21 np0005603609 nova_compute[221550]: 2026-01-31 08:39:21.877 221554 DEBUG oslo_concurrency.lockutils [req-83ab6ec4-45a3-477f-a807-c0d5b23c789c req-5f2f781a-58ae-494e-83c9-7e90e494b0b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:21 np0005603609 nova_compute[221550]: 2026-01-31 08:39:21.877 221554 DEBUG oslo_concurrency.lockutils [req-83ab6ec4-45a3-477f-a807-c0d5b23c789c req-5f2f781a-58ae-494e-83c9-7e90e494b0b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:21 np0005603609 nova_compute[221550]: 2026-01-31 08:39:21.877 221554 DEBUG nova.compute.manager [req-83ab6ec4-45a3-477f-a807-c0d5b23c789c req-5f2f781a-58ae-494e-83c9-7e90e494b0b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] No waiting events found dispatching network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:39:21 np0005603609 nova_compute[221550]: 2026-01-31 08:39:21.877 221554 WARNING nova.compute.manager [req-83ab6ec4-45a3-477f-a807-c0d5b23c789c req-5f2f781a-58ae-494e-83c9-7e90e494b0b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received unexpected event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:39:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:23.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:23 np0005603609 nova_compute[221550]: 2026-01-31 08:39:23.739 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:23 np0005603609 NetworkManager[49064]: <info>  [1769848763.7422] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Jan 31 03:39:23 np0005603609 NetworkManager[49064]: <info>  [1769848763.7434] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Jan 31 03:39:23 np0005603609 nova_compute[221550]: 2026-01-31 08:39:23.775 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:39:23Z|00840|binding|INFO|Releasing lport 4738d457-86b5-496e-a4f4-02b685103bfd from this chassis (sb_readonly=0)
Jan 31 03:39:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:39:23Z|00841|binding|INFO|Releasing lport c8a7eefb-b644-411b-b95a-f875570edfa9 from this chassis (sb_readonly=0)
Jan 31 03:39:23 np0005603609 nova_compute[221550]: 2026-01-31 08:39:23.790 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:39:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:23.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:39:24 np0005603609 nova_compute[221550]: 2026-01-31 08:39:24.587 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:25 np0005603609 nova_compute[221550]: 2026-01-31 08:39:25.026 221554 DEBUG nova.compute.manager [req-cc9890f7-e0be-4e15-8988-37fe8895ed9c req-8d76bcee-2811-44c9-8c05-17e19fd109b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Received event network-changed-93519b56-e167-470c-9933-432512222982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:39:25 np0005603609 nova_compute[221550]: 2026-01-31 08:39:25.026 221554 DEBUG nova.compute.manager [req-cc9890f7-e0be-4e15-8988-37fe8895ed9c req-8d76bcee-2811-44c9-8c05-17e19fd109b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Refreshing instance network info cache due to event network-changed-93519b56-e167-470c-9933-432512222982. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:39:25 np0005603609 nova_compute[221550]: 2026-01-31 08:39:25.026 221554 DEBUG oslo_concurrency.lockutils [req-cc9890f7-e0be-4e15-8988-37fe8895ed9c req-8d76bcee-2811-44c9-8c05-17e19fd109b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:39:25 np0005603609 nova_compute[221550]: 2026-01-31 08:39:25.026 221554 DEBUG oslo_concurrency.lockutils [req-cc9890f7-e0be-4e15-8988-37fe8895ed9c req-8d76bcee-2811-44c9-8c05-17e19fd109b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:39:25 np0005603609 nova_compute[221550]: 2026-01-31 08:39:25.027 221554 DEBUG nova.network.neutron [req-cc9890f7-e0be-4e15-8988-37fe8895ed9c req-8d76bcee-2811-44c9-8c05-17e19fd109b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Refreshing network info cache for port 93519b56-e167-470c-9933-432512222982 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:39:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:25.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:25.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:26 np0005603609 nova_compute[221550]: 2026-01-31 08:39:26.570 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:27 np0005603609 nova_compute[221550]: 2026-01-31 08:39:27.312 221554 DEBUG nova.compute.manager [req-62d795aa-b5fd-4b69-b6a3-1ac2bb07e139 req-98880fcf-68d8-48fa-9771-f2b423b83476 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Received event network-changed-93519b56-e167-470c-9933-432512222982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:39:27 np0005603609 nova_compute[221550]: 2026-01-31 08:39:27.313 221554 DEBUG nova.compute.manager [req-62d795aa-b5fd-4b69-b6a3-1ac2bb07e139 req-98880fcf-68d8-48fa-9771-f2b423b83476 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Refreshing instance network info cache due to event network-changed-93519b56-e167-470c-9933-432512222982. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:39:27 np0005603609 nova_compute[221550]: 2026-01-31 08:39:27.313 221554 DEBUG oslo_concurrency.lockutils [req-62d795aa-b5fd-4b69-b6a3-1ac2bb07e139 req-98880fcf-68d8-48fa-9771-f2b423b83476 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:39:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:27.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:27 np0005603609 nova_compute[221550]: 2026-01-31 08:39:27.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:27.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:28 np0005603609 nova_compute[221550]: 2026-01-31 08:39:28.148 221554 DEBUG nova.network.neutron [req-cc9890f7-e0be-4e15-8988-37fe8895ed9c req-8d76bcee-2811-44c9-8c05-17e19fd109b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updated VIF entry in instance network info cache for port 93519b56-e167-470c-9933-432512222982. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:39:28 np0005603609 nova_compute[221550]: 2026-01-31 08:39:28.148 221554 DEBUG nova.network.neutron [req-cc9890f7-e0be-4e15-8988-37fe8895ed9c req-8d76bcee-2811-44c9-8c05-17e19fd109b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updating instance_info_cache with network_info: [{"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:39:28 np0005603609 podman[301037]: 2026-01-31 08:39:28.171067989 +0000 UTC m=+0.058447204 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:39:28 np0005603609 podman[301036]: 2026-01-31 08:39:28.224734745 +0000 UTC m=+0.114512828 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 03:39:28 np0005603609 nova_compute[221550]: 2026-01-31 08:39:28.265 221554 DEBUG oslo_concurrency.lockutils [req-cc9890f7-e0be-4e15-8988-37fe8895ed9c req-8d76bcee-2811-44c9-8c05-17e19fd109b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:39:28 np0005603609 nova_compute[221550]: 2026-01-31 08:39:28.266 221554 DEBUG oslo_concurrency.lockutils [req-62d795aa-b5fd-4b69-b6a3-1ac2bb07e139 req-98880fcf-68d8-48fa-9771-f2b423b83476 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:39:28 np0005603609 nova_compute[221550]: 2026-01-31 08:39:28.266 221554 DEBUG nova.network.neutron [req-62d795aa-b5fd-4b69-b6a3-1ac2bb07e139 req-98880fcf-68d8-48fa-9771-f2b423b83476 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Refreshing network info cache for port 93519b56-e167-470c-9933-432512222982 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:39:28 np0005603609 nova_compute[221550]: 2026-01-31 08:39:28.449 221554 DEBUG nova.compute.manager [req-804952e0-7647-4524-8aa3-664daa2177e2 req-ec1d3a8d-555e-4986-b7cc-6047607a9926 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-changed-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:39:28 np0005603609 nova_compute[221550]: 2026-01-31 08:39:28.449 221554 DEBUG nova.compute.manager [req-804952e0-7647-4524-8aa3-664daa2177e2 req-ec1d3a8d-555e-4986-b7cc-6047607a9926 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Refreshing instance network info cache due to event network-changed-1950c089-af27-485c-8b1d-eefa73ac5064. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:39:28 np0005603609 nova_compute[221550]: 2026-01-31 08:39:28.449 221554 DEBUG oslo_concurrency.lockutils [req-804952e0-7647-4524-8aa3-664daa2177e2 req-ec1d3a8d-555e-4986-b7cc-6047607a9926 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:39:28 np0005603609 nova_compute[221550]: 2026-01-31 08:39:28.450 221554 DEBUG oslo_concurrency.lockutils [req-804952e0-7647-4524-8aa3-664daa2177e2 req-ec1d3a8d-555e-4986-b7cc-6047607a9926 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:39:28 np0005603609 nova_compute[221550]: 2026-01-31 08:39:28.450 221554 DEBUG nova.network.neutron [req-804952e0-7647-4524-8aa3-664daa2177e2 req-ec1d3a8d-555e-4986-b7cc-6047607a9926 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Refreshing network info cache for port 1950c089-af27-485c-8b1d-eefa73ac5064 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:39:29 np0005603609 nova_compute[221550]: 2026-01-31 08:39:29.594 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:29.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:29.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:31 np0005603609 nova_compute[221550]: 2026-01-31 08:39:31.572 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:31.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:31.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:32 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Jan 31 03:39:33 np0005603609 nova_compute[221550]: 2026-01-31 08:39:33.076 221554 DEBUG nova.compute.manager [req-99032a23-2c92-463a-b3ce-9b572884cc14 req-a0504024-ac4c-4328-b101-166360eac1e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Received event network-changed-93519b56-e167-470c-9933-432512222982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:39:33 np0005603609 nova_compute[221550]: 2026-01-31 08:39:33.076 221554 DEBUG nova.compute.manager [req-99032a23-2c92-463a-b3ce-9b572884cc14 req-a0504024-ac4c-4328-b101-166360eac1e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Refreshing instance network info cache due to event network-changed-93519b56-e167-470c-9933-432512222982. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:39:33 np0005603609 nova_compute[221550]: 2026-01-31 08:39:33.077 221554 DEBUG oslo_concurrency.lockutils [req-99032a23-2c92-463a-b3ce-9b572884cc14 req-a0504024-ac4c-4328-b101-166360eac1e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:39:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:33.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:33.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:39:33Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:d3:f9 10.100.0.8
Jan 31 03:39:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:39:33Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:d3:f9 10.100.0.8
Jan 31 03:39:33 np0005603609 nova_compute[221550]: 2026-01-31 08:39:33.907 221554 DEBUG nova.network.neutron [req-62d795aa-b5fd-4b69-b6a3-1ac2bb07e139 req-98880fcf-68d8-48fa-9771-f2b423b83476 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updated VIF entry in instance network info cache for port 93519b56-e167-470c-9933-432512222982. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:39:33 np0005603609 nova_compute[221550]: 2026-01-31 08:39:33.908 221554 DEBUG nova.network.neutron [req-62d795aa-b5fd-4b69-b6a3-1ac2bb07e139 req-98880fcf-68d8-48fa-9771-f2b423b83476 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updating instance_info_cache with network_info: [{"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:39:33 np0005603609 nova_compute[221550]: 2026-01-31 08:39:33.948 221554 DEBUG oslo_concurrency.lockutils [req-62d795aa-b5fd-4b69-b6a3-1ac2bb07e139 req-98880fcf-68d8-48fa-9771-f2b423b83476 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:39:33 np0005603609 nova_compute[221550]: 2026-01-31 08:39:33.949 221554 DEBUG oslo_concurrency.lockutils [req-99032a23-2c92-463a-b3ce-9b572884cc14 req-a0504024-ac4c-4328-b101-166360eac1e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:39:33 np0005603609 nova_compute[221550]: 2026-01-31 08:39:33.950 221554 DEBUG nova.network.neutron [req-99032a23-2c92-463a-b3ce-9b572884cc14 req-a0504024-ac4c-4328-b101-166360eac1e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Refreshing network info cache for port 93519b56-e167-470c-9933-432512222982 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:39:34 np0005603609 nova_compute[221550]: 2026-01-31 08:39:34.149 221554 DEBUG nova.network.neutron [req-804952e0-7647-4524-8aa3-664daa2177e2 req-ec1d3a8d-555e-4986-b7cc-6047607a9926 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updated VIF entry in instance network info cache for port 1950c089-af27-485c-8b1d-eefa73ac5064. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:39:34 np0005603609 nova_compute[221550]: 2026-01-31 08:39:34.149 221554 DEBUG nova.network.neutron [req-804952e0-7647-4524-8aa3-664daa2177e2 req-ec1d3a8d-555e-4986-b7cc-6047607a9926 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updating instance_info_cache with network_info: [{"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:39:34 np0005603609 nova_compute[221550]: 2026-01-31 08:39:34.353 221554 DEBUG oslo_concurrency.lockutils [req-804952e0-7647-4524-8aa3-664daa2177e2 req-ec1d3a8d-555e-4986-b7cc-6047607a9926 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:39:34 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:39:34 np0005603609 nova_compute[221550]: 2026-01-31 08:39:34.639 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:35.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:35 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:39:35 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:39:35 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:39:35 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:39:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:35.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:36 np0005603609 nova_compute[221550]: 2026-01-31 08:39:36.575 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:37 np0005603609 nova_compute[221550]: 2026-01-31 08:39:37.075 221554 DEBUG nova.network.neutron [req-99032a23-2c92-463a-b3ce-9b572884cc14 req-a0504024-ac4c-4328-b101-166360eac1e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updated VIF entry in instance network info cache for port 93519b56-e167-470c-9933-432512222982. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:39:37 np0005603609 nova_compute[221550]: 2026-01-31 08:39:37.076 221554 DEBUG nova.network.neutron [req-99032a23-2c92-463a-b3ce-9b572884cc14 req-a0504024-ac4c-4328-b101-166360eac1e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updating instance_info_cache with network_info: [{"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:39:37 np0005603609 nova_compute[221550]: 2026-01-31 08:39:37.104 221554 DEBUG oslo_concurrency.lockutils [req-99032a23-2c92-463a-b3ce-9b572884cc14 req-a0504024-ac4c-4328-b101-166360eac1e5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:39:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:37.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:37.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:38 np0005603609 nova_compute[221550]: 2026-01-31 08:39:38.222 221554 DEBUG nova.compute.manager [req-3da78ee4-cd49-4db3-8d51-3d836b966f88 req-74943af9-f2cd-450c-a3cc-cc9413748b8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Received event network-changed-93519b56-e167-470c-9933-432512222982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:39:38 np0005603609 nova_compute[221550]: 2026-01-31 08:39:38.223 221554 DEBUG nova.compute.manager [req-3da78ee4-cd49-4db3-8d51-3d836b966f88 req-74943af9-f2cd-450c-a3cc-cc9413748b8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Refreshing instance network info cache due to event network-changed-93519b56-e167-470c-9933-432512222982. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:39:38 np0005603609 nova_compute[221550]: 2026-01-31 08:39:38.223 221554 DEBUG oslo_concurrency.lockutils [req-3da78ee4-cd49-4db3-8d51-3d836b966f88 req-74943af9-f2cd-450c-a3cc-cc9413748b8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:39:38 np0005603609 nova_compute[221550]: 2026-01-31 08:39:38.223 221554 DEBUG oslo_concurrency.lockutils [req-3da78ee4-cd49-4db3-8d51-3d836b966f88 req-74943af9-f2cd-450c-a3cc-cc9413748b8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:39:38 np0005603609 nova_compute[221550]: 2026-01-31 08:39:38.224 221554 DEBUG nova.network.neutron [req-3da78ee4-cd49-4db3-8d51-3d836b966f88 req-74943af9-f2cd-450c-a3cc-cc9413748b8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Refreshing network info cache for port 93519b56-e167-470c-9933-432512222982 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:39:38.893822) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848778893947, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1840, "num_deletes": 256, "total_data_size": 4403593, "memory_usage": 4467800, "flush_reason": "Manual Compaction"}
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848778919061, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 2841736, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74198, "largest_seqno": 76033, "table_properties": {"data_size": 2834087, "index_size": 4527, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16107, "raw_average_key_size": 20, "raw_value_size": 2818706, "raw_average_value_size": 3510, "num_data_blocks": 198, "num_entries": 803, "num_filter_entries": 803, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848630, "oldest_key_time": 1769848630, "file_creation_time": 1769848778, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 25311 microseconds, and 6925 cpu microseconds.
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:39:38.919145) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 2841736 bytes OK
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:39:38.919178) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:39:38.920999) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:39:38.921104) EVENT_LOG_v1 {"time_micros": 1769848778921071, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:39:38.921148) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 4395281, prev total WAL file size 4395281, number of live WAL files 2.
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:39:38.922523) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373633' seq:72057594037927935, type:22 .. '6C6F676D0033303135' seq:0, type:0; will stop at (end)
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(2775KB)], [150(11MB)]
Jan 31 03:39:38 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848778922587, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 15126890, "oldest_snapshot_seqno": -1}
Jan 31 03:39:39 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 9891 keys, 14972419 bytes, temperature: kUnknown
Jan 31 03:39:39 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848779086771, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 14972419, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14905951, "index_size": 40627, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24773, "raw_key_size": 260120, "raw_average_key_size": 26, "raw_value_size": 14730086, "raw_average_value_size": 1489, "num_data_blocks": 1564, "num_entries": 9891, "num_filter_entries": 9891, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769848778, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:39:39 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:39:39 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:39:39.087082) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 14972419 bytes
Jan 31 03:39:39 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:39:39.088905) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.1 rd, 91.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.7 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(10.6) write-amplify(5.3) OK, records in: 10424, records dropped: 533 output_compression: NoCompression
Jan 31 03:39:39 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:39:39.088927) EVENT_LOG_v1 {"time_micros": 1769848779088916, "job": 96, "event": "compaction_finished", "compaction_time_micros": 164244, "compaction_time_cpu_micros": 34020, "output_level": 6, "num_output_files": 1, "total_output_size": 14972419, "num_input_records": 10424, "num_output_records": 9891, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:39:39 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:39:39 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848779089449, "job": 96, "event": "table_file_deletion", "file_number": 152}
Jan 31 03:39:39 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:39:39 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848779090851, "job": 96, "event": "table_file_deletion", "file_number": 150}
Jan 31 03:39:39 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:39:38.922372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:39:39 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:39:39.090882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:39:39 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:39:39.090888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:39:39 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:39:39.090890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:39:39 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:39:39.090892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:39:39 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:39:39.090894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:39:39 np0005603609 nova_compute[221550]: 2026-01-31 08:39:39.641 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:39.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:39.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:40 np0005603609 nova_compute[221550]: 2026-01-31 08:39:40.523 221554 INFO nova.compute.manager [None req-017de19e-7841-4b6e-8445-f2ff53b9de29 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Get console output#033[00m
Jan 31 03:39:40 np0005603609 nova_compute[221550]: 2026-01-31 08:39:40.528 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:39:41 np0005603609 nova_compute[221550]: 2026-01-31 08:39:41.577 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:41.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:41.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:42 np0005603609 nova_compute[221550]: 2026-01-31 08:39:42.311 221554 DEBUG nova.network.neutron [req-3da78ee4-cd49-4db3-8d51-3d836b966f88 req-74943af9-f2cd-450c-a3cc-cc9413748b8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updated VIF entry in instance network info cache for port 93519b56-e167-470c-9933-432512222982. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:39:42 np0005603609 nova_compute[221550]: 2026-01-31 08:39:42.311 221554 DEBUG nova.network.neutron [req-3da78ee4-cd49-4db3-8d51-3d836b966f88 req-74943af9-f2cd-450c-a3cc-cc9413748b8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updating instance_info_cache with network_info: [{"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:39:42 np0005603609 nova_compute[221550]: 2026-01-31 08:39:42.499 221554 DEBUG oslo_concurrency.lockutils [req-3da78ee4-cd49-4db3-8d51-3d836b966f88 req-74943af9-f2cd-450c-a3cc-cc9413748b8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:39:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:43.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:43.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:44 np0005603609 nova_compute[221550]: 2026-01-31 08:39:44.265 221554 DEBUG oslo_concurrency.lockutils [None req-15592aaf-d5c2-459a-a238-09f4a8af1fc6 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "233d3314-7d9d-49a5-818f-909d78422fb9" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:44 np0005603609 nova_compute[221550]: 2026-01-31 08:39:44.267 221554 DEBUG oslo_concurrency.lockutils [None req-15592aaf-d5c2-459a-a238-09f4a8af1fc6 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:39:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:39:44 np0005603609 nova_compute[221550]: 2026-01-31 08:39:44.635 221554 INFO nova.compute.manager [None req-15592aaf-d5c2-459a-a238-09f4a8af1fc6 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Detaching volume be0b3094-6fbc-448a-804b-4a365824d522#033[00m
Jan 31 03:39:44 np0005603609 nova_compute[221550]: 2026-01-31 08:39:44.644 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:44 np0005603609 nova_compute[221550]: 2026-01-31 08:39:44.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:44 np0005603609 nova_compute[221550]: 2026-01-31 08:39:44.838 221554 INFO nova.virt.block_device [None req-15592aaf-d5c2-459a-a238-09f4a8af1fc6 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Attempting to driver detach volume be0b3094-6fbc-448a-804b-4a365824d522 from mountpoint /dev/vdb#033[00m
Jan 31 03:39:44 np0005603609 nova_compute[221550]: 2026-01-31 08:39:44.847 221554 DEBUG nova.virt.libvirt.driver [None req-15592aaf-d5c2-459a-a238-09f4a8af1fc6 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Attempting to detach device vdb from instance 233d3314-7d9d-49a5-818f-909d78422fb9 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:39:44 np0005603609 nova_compute[221550]: 2026-01-31 08:39:44.848 221554 DEBUG nova.virt.libvirt.guest [None req-15592aaf-d5c2-459a-a238-09f4a8af1fc6 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:39:44 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:39:44 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-be0b3094-6fbc-448a-804b-4a365824d522">
Jan 31 03:39:44 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:39:44 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:39:44 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:39:44 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:39:44 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:39:44 np0005603609 nova_compute[221550]:  <serial>be0b3094-6fbc-448a-804b-4a365824d522</serial>
Jan 31 03:39:44 np0005603609 nova_compute[221550]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:39:44 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:39:44 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:39:44 np0005603609 nova_compute[221550]: 2026-01-31 08:39:44.855 221554 INFO nova.virt.libvirt.driver [None req-15592aaf-d5c2-459a-a238-09f4a8af1fc6 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully detached device vdb from instance 233d3314-7d9d-49a5-818f-909d78422fb9 from the persistent domain config.#033[00m
Jan 31 03:39:44 np0005603609 nova_compute[221550]: 2026-01-31 08:39:44.856 221554 DEBUG nova.virt.libvirt.driver [None req-15592aaf-d5c2-459a-a238-09f4a8af1fc6 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 233d3314-7d9d-49a5-818f-909d78422fb9 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 03:39:44 np0005603609 nova_compute[221550]: 2026-01-31 08:39:44.856 221554 DEBUG nova.virt.libvirt.guest [None req-15592aaf-d5c2-459a-a238-09f4a8af1fc6 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:39:44 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:39:44 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-be0b3094-6fbc-448a-804b-4a365824d522">
Jan 31 03:39:44 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:39:44 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:39:44 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:39:44 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:39:44 np0005603609 nova_compute[221550]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:39:44 np0005603609 nova_compute[221550]:  <serial>be0b3094-6fbc-448a-804b-4a365824d522</serial>
Jan 31 03:39:44 np0005603609 nova_compute[221550]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:39:44 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:39:44 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:39:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:45.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:45.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:46 np0005603609 nova_compute[221550]: 2026-01-31 08:39:46.578 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:39:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:47 np0005603609 nova_compute[221550]: 2026-01-31 08:39:47.385 221554 DEBUG nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Received event <DeviceRemovedEvent: 1769848787.3852923, 233d3314-7d9d-49a5-818f-909d78422fb9 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 03:39:47 np0005603609 nova_compute[221550]: 2026-01-31 08:39:47.387 221554 DEBUG nova.virt.libvirt.driver [None req-15592aaf-d5c2-459a-a238-09f4a8af1fc6 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 233d3314-7d9d-49a5-818f-909d78422fb9 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 03:39:47 np0005603609 nova_compute[221550]: 2026-01-31 08:39:47.389 221554 INFO nova.virt.libvirt.driver [None req-15592aaf-d5c2-459a-a238-09f4a8af1fc6 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully detached device vdb from instance 233d3314-7d9d-49a5-818f-909d78422fb9 from the live domain config.
Jan 31 03:39:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:39:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:47.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:39:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:47.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:47 np0005603609 nova_compute[221550]: 2026-01-31 08:39:47.928 221554 DEBUG nova.objects.instance [None req-15592aaf-d5c2-459a-a238-09f4a8af1fc6 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 233d3314-7d9d-49a5-818f-909d78422fb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:39:48 np0005603609 nova_compute[221550]: 2026-01-31 08:39:48.974 221554 DEBUG oslo_concurrency.lockutils [None req-15592aaf-d5c2-459a-a238-09f4a8af1fc6 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 4.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:39:49 np0005603609 nova_compute[221550]: 2026-01-31 08:39:49.646 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:39:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:39:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:49.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:39:49 np0005603609 nova_compute[221550]: 2026-01-31 08:39:49.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:39:49 np0005603609 nova_compute[221550]: 2026-01-31 08:39:49.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:39:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:49.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:51 np0005603609 nova_compute[221550]: 2026-01-31 08:39:51.582 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:39:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:39:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/153502525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:39:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:39:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/153502525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:39:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:39:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:51.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:39:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:51.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:53.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:39:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:53.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:39:54 np0005603609 nova_compute[221550]: 2026-01-31 08:39:54.704 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:39:54 np0005603609 nova_compute[221550]: 2026-01-31 08:39:54.765 221554 DEBUG oslo_concurrency.lockutils [None req-48eb6f4b-5d45-4ea4-9aad-c04f5e139164 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "233d3314-7d9d-49a5-818f-909d78422fb9" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:39:54 np0005603609 nova_compute[221550]: 2026-01-31 08:39:54.766 221554 DEBUG oslo_concurrency.lockutils [None req-48eb6f4b-5d45-4ea4-9aad-c04f5e139164 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:39:54 np0005603609 nova_compute[221550]: 2026-01-31 08:39:54.997 221554 INFO nova.compute.manager [None req-48eb6f4b-5d45-4ea4-9aad-c04f5e139164 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Detaching volume 67e8063c-e5d3-4497-bf93-33d59b6a9eb4
Jan 31 03:39:55 np0005603609 nova_compute[221550]: 2026-01-31 08:39:55.110 221554 DEBUG oslo_concurrency.lockutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:39:55 np0005603609 nova_compute[221550]: 2026-01-31 08:39:55.110 221554 DEBUG oslo_concurrency.lockutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:39:55 np0005603609 nova_compute[221550]: 2026-01-31 08:39:55.110 221554 DEBUG nova.network.neutron [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 03:39:55 np0005603609 nova_compute[221550]: 2026-01-31 08:39:55.208 221554 INFO nova.virt.block_device [None req-48eb6f4b-5d45-4ea4-9aad-c04f5e139164 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Attempting to driver detach volume 67e8063c-e5d3-4497-bf93-33d59b6a9eb4 from mountpoint /dev/vdc
Jan 31 03:39:55 np0005603609 nova_compute[221550]: 2026-01-31 08:39:55.216 221554 DEBUG nova.virt.libvirt.driver [None req-48eb6f4b-5d45-4ea4-9aad-c04f5e139164 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Attempting to detach device vdc from instance 233d3314-7d9d-49a5-818f-909d78422fb9 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 03:39:55 np0005603609 nova_compute[221550]: 2026-01-31 08:39:55.217 221554 DEBUG nova.virt.libvirt.guest [None req-48eb6f4b-5d45-4ea4-9aad-c04f5e139164 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:39:55 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:39:55 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-67e8063c-e5d3-4497-bf93-33d59b6a9eb4">
Jan 31 03:39:55 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:39:55 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:39:55 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:39:55 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:39:55 np0005603609 nova_compute[221550]:  <target dev="vdc" bus="virtio"/>
Jan 31 03:39:55 np0005603609 nova_compute[221550]:  <serial>67e8063c-e5d3-4497-bf93-33d59b6a9eb4</serial>
Jan 31 03:39:55 np0005603609 nova_compute[221550]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 31 03:39:55 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:39:55 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 03:39:55 np0005603609 nova_compute[221550]: 2026-01-31 08:39:55.225 221554 INFO nova.virt.libvirt.driver [None req-48eb6f4b-5d45-4ea4-9aad-c04f5e139164 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully detached device vdc from instance 233d3314-7d9d-49a5-818f-909d78422fb9 from the persistent domain config.
Jan 31 03:39:55 np0005603609 nova_compute[221550]: 2026-01-31 08:39:55.226 221554 DEBUG nova.virt.libvirt.driver [None req-48eb6f4b-5d45-4ea4-9aad-c04f5e139164 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 233d3314-7d9d-49a5-818f-909d78422fb9 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 03:39:55 np0005603609 nova_compute[221550]: 2026-01-31 08:39:55.227 221554 DEBUG nova.virt.libvirt.guest [None req-48eb6f4b-5d45-4ea4-9aad-c04f5e139164 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:39:55 np0005603609 nova_compute[221550]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:39:55 np0005603609 nova_compute[221550]:  <source protocol="rbd" name="volumes/volume-67e8063c-e5d3-4497-bf93-33d59b6a9eb4">
Jan 31 03:39:55 np0005603609 nova_compute[221550]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:39:55 np0005603609 nova_compute[221550]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:39:55 np0005603609 nova_compute[221550]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:39:55 np0005603609 nova_compute[221550]:  </source>
Jan 31 03:39:55 np0005603609 nova_compute[221550]:  <target dev="vdc" bus="virtio"/>
Jan 31 03:39:55 np0005603609 nova_compute[221550]:  <serial>67e8063c-e5d3-4497-bf93-33d59b6a9eb4</serial>
Jan 31 03:39:55 np0005603609 nova_compute[221550]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 31 03:39:55 np0005603609 nova_compute[221550]: </disk>
Jan 31 03:39:55 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 03:39:55 np0005603609 nova_compute[221550]: 2026-01-31 08:39:55.339 221554 DEBUG nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Received event <DeviceRemovedEvent: 1769848795.3392253, 233d3314-7d9d-49a5-818f-909d78422fb9 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 03:39:55 np0005603609 nova_compute[221550]: 2026-01-31 08:39:55.340 221554 DEBUG nova.virt.libvirt.driver [None req-48eb6f4b-5d45-4ea4-9aad-c04f5e139164 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 233d3314-7d9d-49a5-818f-909d78422fb9 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 03:39:55 np0005603609 nova_compute[221550]: 2026-01-31 08:39:55.343 221554 INFO nova.virt.libvirt.driver [None req-48eb6f4b-5d45-4ea4-9aad-c04f5e139164 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully detached device vdc from instance 233d3314-7d9d-49a5-818f-909d78422fb9 from the live domain config.
Jan 31 03:39:55 np0005603609 nova_compute[221550]: 2026-01-31 08:39:55.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:39:55 np0005603609 nova_compute[221550]: 2026-01-31 08:39:55.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 03:39:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:39:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:55.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:39:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:39:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:55.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:39:56 np0005603609 nova_compute[221550]: 2026-01-31 08:39:56.011 221554 DEBUG nova.objects.instance [None req-48eb6f4b-5d45-4ea4-9aad-c04f5e139164 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 233d3314-7d9d-49a5-818f-909d78422fb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:39:56 np0005603609 nova_compute[221550]: 2026-01-31 08:39:56.585 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:39:56 np0005603609 nova_compute[221550]: 2026-01-31 08:39:56.607 221554 DEBUG oslo_concurrency.lockutils [None req-48eb6f4b-5d45-4ea4-9aad-c04f5e139164 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:39:56 np0005603609 nova_compute[221550]: 2026-01-31 08:39:56.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:39:56 np0005603609 nova_compute[221550]: 2026-01-31 08:39:56.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 03:39:56 np0005603609 nova_compute[221550]: 2026-01-31 08:39:56.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 03:39:57 np0005603609 nova_compute[221550]: 2026-01-31 08:39:57.107 221554 DEBUG nova.network.neutron [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updating instance_info_cache with network_info: [{"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:39:57 np0005603609 nova_compute[221550]: 2026-01-31 08:39:57.184 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:39:57 np0005603609 nova_compute[221550]: 2026-01-31 08:39:57.184 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:39:57 np0005603609 nova_compute[221550]: 2026-01-31 08:39:57.184 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 03:39:57 np0005603609 nova_compute[221550]: 2026-01-31 08:39:57.185 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 233d3314-7d9d-49a5-818f-909d78422fb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:39:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:39:57Z|00842|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 03:39:57 np0005603609 nova_compute[221550]: 2026-01-31 08:39:57.509 221554 DEBUG oslo_concurrency.lockutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:39:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:57.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:57.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:59 np0005603609 podman[301267]: 2026-01-31 08:39:59.16675772 +0000 UTC m=+0.050698625 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 03:39:59 np0005603609 podman[301266]: 2026-01-31 08:39:59.188798966 +0000 UTC m=+0.075342344 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 31 03:39:59 np0005603609 nova_compute[221550]: 2026-01-31 08:39:59.432 221554 DEBUG nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 31 03:39:59 np0005603609 nova_compute[221550]: 2026-01-31 08:39:59.433 221554 DEBUG nova.virt.libvirt.volume.remotefs [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Creating file /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff/b6292cf6a9f743eb8261fa33b2b859d4.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 31 03:39:59 np0005603609 nova_compute[221550]: 2026-01-31 08:39:59.433 221554 DEBUG oslo_concurrency.processutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff/b6292cf6a9f743eb8261fa33b2b859d4.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:39:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:59.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:59 np0005603609 nova_compute[221550]: 2026-01-31 08:39:59.706 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:39:59 np0005603609 nova_compute[221550]: 2026-01-31 08:39:59.813 221554 DEBUG oslo_concurrency.processutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff/b6292cf6a9f743eb8261fa33b2b859d4.tmp" returned: 1 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:39:59 np0005603609 nova_compute[221550]: 2026-01-31 08:39:59.814 221554 DEBUG oslo_concurrency.processutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff/b6292cf6a9f743eb8261fa33b2b859d4.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 03:39:59 np0005603609 nova_compute[221550]: 2026-01-31 08:39:59.814 221554 DEBUG nova.virt.libvirt.volume.remotefs [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Creating directory /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 31 03:39:59 np0005603609 nova_compute[221550]: 2026-01-31 08:39:59.814 221554 DEBUG oslo_concurrency.processutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:39:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:39:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:59.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:00 np0005603609 nova_compute[221550]: 2026-01-31 08:40:00.010 221554 DEBUG oslo_concurrency.processutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/6193a6a7-8918-4625-b626-5d53c31bf0ff" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:40:00 np0005603609 nova_compute[221550]: 2026-01-31 08:40:00.013 221554 DEBUG nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:40:00 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 03:40:01 np0005603609 nova_compute[221550]: 2026-01-31 08:40:01.587 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:01.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:01.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:02 np0005603609 kernel: tap1950c089-af (unregistering): left promiscuous mode
Jan 31 03:40:02 np0005603609 NetworkManager[49064]: <info>  [1769848802.5004] device (tap1950c089-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:40:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:40:02Z|00843|binding|INFO|Releasing lport 1950c089-af27-485c-8b1d-eefa73ac5064 from this chassis (sb_readonly=0)
Jan 31 03:40:02 np0005603609 nova_compute[221550]: 2026-01-31 08:40:02.509 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:40:02Z|00844|binding|INFO|Setting lport 1950c089-af27-485c-8b1d-eefa73ac5064 down in Southbound
Jan 31 03:40:02 np0005603609 ovn_controller[130359]: 2026-01-31T08:40:02Z|00845|binding|INFO|Removing iface tap1950c089-af ovn-installed in OVS
Jan 31 03:40:02 np0005603609 nova_compute[221550]: 2026-01-31 08:40:02.516 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:02 np0005603609 nova_compute[221550]: 2026-01-31 08:40:02.527 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:02 np0005603609 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000ba.scope: Deactivated successfully.
Jan 31 03:40:02 np0005603609 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000ba.scope: Consumed 14.202s CPU time.
Jan 31 03:40:02 np0005603609 systemd-machined[190912]: Machine qemu-101-instance-000000ba terminated.
Jan 31 03:40:02 np0005603609 nova_compute[221550]: 2026-01-31 08:40:02.605 221554 DEBUG nova.compute.manager [req-583ccb84-a33f-46b4-a0bc-83fe7884bc17 req-3fecad27-0d23-493b-9f24-8f53f2240fd3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Received event network-changed-93519b56-e167-470c-9933-432512222982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:40:02 np0005603609 nova_compute[221550]: 2026-01-31 08:40:02.605 221554 DEBUG nova.compute.manager [req-583ccb84-a33f-46b4-a0bc-83fe7884bc17 req-3fecad27-0d23-493b-9f24-8f53f2240fd3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Refreshing instance network info cache due to event network-changed-93519b56-e167-470c-9933-432512222982. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:40:02 np0005603609 nova_compute[221550]: 2026-01-31 08:40:02.606 221554 DEBUG oslo_concurrency.lockutils [req-583ccb84-a33f-46b4-a0bc-83fe7884bc17 req-3fecad27-0d23-493b-9f24-8f53f2240fd3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:40:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:02.694 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:d3:f9 10.100.0.8'], port_security=['fa:16:3e:43:d3:f9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6193a6a7-8918-4625-b626-5d53c31bf0ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-afa5ed82-7034-4517-a39e-5fc8b872592f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1618d12c-038a-456c-8d56-5eb45e7b0852', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.204'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=002a439c-290e-44cc-a5b6-bc1f707c1ea1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=1950c089-af27-485c-8b1d-eefa73ac5064) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:40:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:02.696 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 1950c089-af27-485c-8b1d-eefa73ac5064 in datapath afa5ed82-7034-4517-a39e-5fc8b872592f unbound from our chassis#033[00m
Jan 31 03:40:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:02.697 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network afa5ed82-7034-4517-a39e-5fc8b872592f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:40:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:02.698 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1edb929f-8d80-45fa-9812-4149a4383c4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:02.698 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f namespace which is not needed anymore#033[00m
Jan 31 03:40:02 np0005603609 nova_compute[221550]: 2026-01-31 08:40:02.788 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updating instance_info_cache with network_info: [{"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:40:02 np0005603609 neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f[301020]: [NOTICE]   (301024) : haproxy version is 2.8.14-c23fe91
Jan 31 03:40:02 np0005603609 neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f[301020]: [NOTICE]   (301024) : path to executable is /usr/sbin/haproxy
Jan 31 03:40:02 np0005603609 neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f[301020]: [WARNING]  (301024) : Exiting Master process...
Jan 31 03:40:02 np0005603609 neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f[301020]: [ALERT]    (301024) : Current worker (301026) exited with code 143 (Terminated)
Jan 31 03:40:02 np0005603609 neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f[301020]: [WARNING]  (301024) : All workers exited. Exiting... (0)
Jan 31 03:40:02 np0005603609 systemd[1]: libpod-b17d46eed0e51b366407bbd96138a7022295923ac8a286618ac9c65c3da5ce01.scope: Deactivated successfully.
Jan 31 03:40:02 np0005603609 podman[301347]: 2026-01-31 08:40:02.822942106 +0000 UTC m=+0.040877445 container died b17d46eed0e51b366407bbd96138a7022295923ac8a286618ac9c65c3da5ce01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:40:02 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b17d46eed0e51b366407bbd96138a7022295923ac8a286618ac9c65c3da5ce01-userdata-shm.mount: Deactivated successfully.
Jan 31 03:40:02 np0005603609 systemd[1]: var-lib-containers-storage-overlay-0185134fa8eaf1d38fb3001ab30ebdf129e042853fda7314e8e837e9dde3a45e-merged.mount: Deactivated successfully.
Jan 31 03:40:02 np0005603609 podman[301347]: 2026-01-31 08:40:02.855426647 +0000 UTC m=+0.073361986 container cleanup b17d46eed0e51b366407bbd96138a7022295923ac8a286618ac9c65c3da5ce01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:40:02 np0005603609 systemd[1]: libpod-conmon-b17d46eed0e51b366407bbd96138a7022295923ac8a286618ac9c65c3da5ce01.scope: Deactivated successfully.
Jan 31 03:40:02 np0005603609 podman[301378]: 2026-01-31 08:40:02.909642816 +0000 UTC m=+0.039054211 container remove b17d46eed0e51b366407bbd96138a7022295923ac8a286618ac9c65c3da5ce01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:40:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:02.913 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[92594679-c4a2-4901-999f-b025254f048a]: (4, ('Sat Jan 31 08:40:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f (b17d46eed0e51b366407bbd96138a7022295923ac8a286618ac9c65c3da5ce01)\nb17d46eed0e51b366407bbd96138a7022295923ac8a286618ac9c65c3da5ce01\nSat Jan 31 08:40:02 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f (b17d46eed0e51b366407bbd96138a7022295923ac8a286618ac9c65c3da5ce01)\nb17d46eed0e51b366407bbd96138a7022295923ac8a286618ac9c65c3da5ce01\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:02.914 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad4086a-3a00-425f-a48f-a68088777986]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:02.915 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapafa5ed82-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:02 np0005603609 kernel: tapafa5ed82-70: left promiscuous mode
Jan 31 03:40:02 np0005603609 nova_compute[221550]: 2026-01-31 08:40:02.918 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:02 np0005603609 nova_compute[221550]: 2026-01-31 08:40:02.926 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:02.928 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[831df182-64d0-413e-8489-2f104742257a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:02.943 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1443a2bb-fe61-45b6-b603-4a47fd62edae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:02.944 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc248b9-c7cf-47b9-83bd-923d6f88e899]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:02.958 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9f595b1f-87d5-4607-9b76-5b7b31253927]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 899690, 'reachable_time': 16911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301396, 'error': None, 'target': 'ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:02.960 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-afa5ed82-7034-4517-a39e-5fc8b872592f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:40:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:02.960 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[d1cb4cd1-d1de-4697-af57-b282f73b2f8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:02 np0005603609 systemd[1]: run-netns-ovnmeta\x2dafa5ed82\x2d7034\x2d4517\x2da39e\x2d5fc8b872592f.mount: Deactivated successfully.
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.027 221554 INFO nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.031 221554 INFO nova.virt.libvirt.driver [-] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Instance destroyed successfully.#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.032 221554 DEBUG nova.virt.libvirt.vif [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:38:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1044697057',display_name='tempest-TestNetworkAdvancedServerOps-server-1044697057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1044697057',id=186,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFM+PhejEoD7g/QWN1GYstJMXVFnv1wBVmxecXIfvr4KjdawbuS6IVqr3y8FzKBH2jXkdyk7w4kVVSMOQCA1tqgtHQYS5flQfReRJgL/KNYlba9qTCk6eFD00Tadz9cy3w==',key_name='tempest-TestNetworkAdvancedServerOps-1072390140',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:39:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-hmxtgjg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:39:50Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=6193a6a7-8918-4625-b626-5d53c31bf0ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1489833336", "vif_mac": "fa:16:3e:43:d3:f9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.032 221554 DEBUG nova.network.os_vif_util [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1489833336", "vif_mac": "fa:16:3e:43:d3:f9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.033 221554 DEBUG nova.network.os_vif_util [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.033 221554 DEBUG os_vif [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.035 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.035 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1950c089-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.036 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.039 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.041 221554 INFO os_vif [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af')#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.045 221554 DEBUG nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.045 221554 DEBUG nova.virt.libvirt.driver [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.347 221554 DEBUG neutronclient.v2_0.client [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 1950c089-af27-485c-8b1d-eefa73ac5064 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 31 03:40:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:03.656 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.656 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:03 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:03.657 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:40:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:40:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:03.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.742 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.742 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.743 221554 DEBUG oslo_concurrency.lockutils [req-583ccb84-a33f-46b4-a0bc-83fe7884bc17 req-3fecad27-0d23-493b-9f24-8f53f2240fd3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.743 221554 DEBUG nova.network.neutron [req-583ccb84-a33f-46b4-a0bc-83fe7884bc17 req-3fecad27-0d23-493b-9f24-8f53f2240fd3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Refreshing network info cache for port 93519b56-e167-470c-9933-432512222982 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:40:03 np0005603609 nova_compute[221550]: 2026-01-31 08:40:03.744 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:03.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:04 np0005603609 nova_compute[221550]: 2026-01-31 08:40:04.256 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:04 np0005603609 nova_compute[221550]: 2026-01-31 08:40:04.256 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:04 np0005603609 nova_compute[221550]: 2026-01-31 08:40:04.257 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:04 np0005603609 nova_compute[221550]: 2026-01-31 08:40:04.257 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:40:04 np0005603609 nova_compute[221550]: 2026-01-31 08:40:04.257 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:40:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:40:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1740064414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:40:04 np0005603609 nova_compute[221550]: 2026-01-31 08:40:04.675 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.222 221554 DEBUG nova.compute.manager [req-771c781b-a2f6-4a0a-a284-def2f24bc1a0 req-dac7d0a9-bf87-42a9-9549-fdac0e12bb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-vif-unplugged-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.223 221554 DEBUG oslo_concurrency.lockutils [req-771c781b-a2f6-4a0a-a284-def2f24bc1a0 req-dac7d0a9-bf87-42a9-9549-fdac0e12bb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.224 221554 DEBUG oslo_concurrency.lockutils [req-771c781b-a2f6-4a0a-a284-def2f24bc1a0 req-dac7d0a9-bf87-42a9-9549-fdac0e12bb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.224 221554 DEBUG oslo_concurrency.lockutils [req-771c781b-a2f6-4a0a-a284-def2f24bc1a0 req-dac7d0a9-bf87-42a9-9549-fdac0e12bb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.224 221554 DEBUG nova.compute.manager [req-771c781b-a2f6-4a0a-a284-def2f24bc1a0 req-dac7d0a9-bf87-42a9-9549-fdac0e12bb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] No waiting events found dispatching network-vif-unplugged-1950c089-af27-485c-8b1d-eefa73ac5064 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.224 221554 WARNING nova.compute.manager [req-771c781b-a2f6-4a0a-a284-def2f24bc1a0 req-dac7d0a9-bf87-42a9-9549-fdac0e12bb96 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received unexpected event network-vif-unplugged-1950c089-af27-485c-8b1d-eefa73ac5064 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.569 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.571 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.575 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.575 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.655 221554 DEBUG oslo_concurrency.lockutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.656 221554 DEBUG oslo_concurrency.lockutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.656 221554 DEBUG oslo_concurrency.lockutils [None req-e8781a1d-47d6-4587-ba79-5bafb19897b9 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:05.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.716 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.717 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4009MB free_disk=20.896697998046875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.717 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:05 np0005603609 nova_compute[221550]: 2026-01-31 08:40:05.717 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:05.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:06 np0005603609 nova_compute[221550]: 2026-01-31 08:40:06.377 221554 DEBUG nova.network.neutron [req-583ccb84-a33f-46b4-a0bc-83fe7884bc17 req-3fecad27-0d23-493b-9f24-8f53f2240fd3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updated VIF entry in instance network info cache for port 93519b56-e167-470c-9933-432512222982. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:40:06 np0005603609 nova_compute[221550]: 2026-01-31 08:40:06.378 221554 DEBUG nova.network.neutron [req-583ccb84-a33f-46b4-a0bc-83fe7884bc17 req-3fecad27-0d23-493b-9f24-8f53f2240fd3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updating instance_info_cache with network_info: [{"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:40:06 np0005603609 nova_compute[221550]: 2026-01-31 08:40:06.554 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Migration for instance 6193a6a7-8918-4625-b626-5d53c31bf0ff refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 31 03:40:06 np0005603609 nova_compute[221550]: 2026-01-31 08:40:06.589 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:06 np0005603609 nova_compute[221550]: 2026-01-31 08:40:06.761 221554 DEBUG oslo_concurrency.lockutils [req-583ccb84-a33f-46b4-a0bc-83fe7884bc17 req-3fecad27-0d23-493b-9f24-8f53f2240fd3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:40:06 np0005603609 nova_compute[221550]: 2026-01-31 08:40:06.899 221554 INFO nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updating resource usage from migration cf69e0cf-b556-4b51-a26d-2a27f3defe82#033[00m
Jan 31 03:40:06 np0005603609 nova_compute[221550]: 2026-01-31 08:40:06.900 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Starting to track outgoing migration cf69e0cf-b556-4b51-a26d-2a27f3defe82 with flavor fea01737-128b-41fa-a695-aaaa6e96e4b2 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Jan 31 03:40:06 np0005603609 nova_compute[221550]: 2026-01-31 08:40:06.935 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 233d3314-7d9d-49a5-818f-909d78422fb9 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:40:06 np0005603609 nova_compute[221550]: 2026-01-31 08:40:06.936 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Migration cf69e0cf-b556-4b51-a26d-2a27f3defe82 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 03:40:06 np0005603609 nova_compute[221550]: 2026-01-31 08:40:06.936 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:40:06 np0005603609 nova_compute[221550]: 2026-01-31 08:40:06.936 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:40:07 np0005603609 nova_compute[221550]: 2026-01-31 08:40:07.006 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:40:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:40:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/397668415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:40:07 np0005603609 nova_compute[221550]: 2026-01-31 08:40:07.410 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:40:07 np0005603609 nova_compute[221550]: 2026-01-31 08:40:07.415 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:40:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:07.532 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:07.532 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:07.533 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:07.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:07 np0005603609 nova_compute[221550]: 2026-01-31 08:40:07.742 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:40:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:07.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:08 np0005603609 nova_compute[221550]: 2026-01-31 08:40:08.037 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:09 np0005603609 nova_compute[221550]: 2026-01-31 08:40:09.644 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:40:09 np0005603609 nova_compute[221550]: 2026-01-31 08:40:09.645 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:09.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:09.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:10 np0005603609 nova_compute[221550]: 2026-01-31 08:40:10.198 221554 DEBUG nova.compute.manager [req-6b05978a-72be-47c3-8cb9-0a098fac6a5a req-fca7d972-a93b-4cc7-88bc-e18608ff7b7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:40:10 np0005603609 nova_compute[221550]: 2026-01-31 08:40:10.199 221554 DEBUG oslo_concurrency.lockutils [req-6b05978a-72be-47c3-8cb9-0a098fac6a5a req-fca7d972-a93b-4cc7-88bc-e18608ff7b7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:10 np0005603609 nova_compute[221550]: 2026-01-31 08:40:10.200 221554 DEBUG oslo_concurrency.lockutils [req-6b05978a-72be-47c3-8cb9-0a098fac6a5a req-fca7d972-a93b-4cc7-88bc-e18608ff7b7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:10 np0005603609 nova_compute[221550]: 2026-01-31 08:40:10.200 221554 DEBUG oslo_concurrency.lockutils [req-6b05978a-72be-47c3-8cb9-0a098fac6a5a req-fca7d972-a93b-4cc7-88bc-e18608ff7b7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:10 np0005603609 nova_compute[221550]: 2026-01-31 08:40:10.201 221554 DEBUG nova.compute.manager [req-6b05978a-72be-47c3-8cb9-0a098fac6a5a req-fca7d972-a93b-4cc7-88bc-e18608ff7b7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] No waiting events found dispatching network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:40:10 np0005603609 nova_compute[221550]: 2026-01-31 08:40:10.201 221554 WARNING nova.compute.manager [req-6b05978a-72be-47c3-8cb9-0a098fac6a5a req-fca7d972-a93b-4cc7-88bc-e18608ff7b7f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received unexpected event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 03:40:11 np0005603609 nova_compute[221550]: 2026-01-31 08:40:11.364 221554 DEBUG nova.compute.manager [req-f3b6d738-2aa5-43d2-bd4f-b084d58e7720 req-e31e5530-91ab-4e0d-ba24-76c28a317fdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-changed-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:40:11 np0005603609 nova_compute[221550]: 2026-01-31 08:40:11.364 221554 DEBUG nova.compute.manager [req-f3b6d738-2aa5-43d2-bd4f-b084d58e7720 req-e31e5530-91ab-4e0d-ba24-76c28a317fdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Refreshing instance network info cache due to event network-changed-1950c089-af27-485c-8b1d-eefa73ac5064. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:40:11 np0005603609 nova_compute[221550]: 2026-01-31 08:40:11.364 221554 DEBUG oslo_concurrency.lockutils [req-f3b6d738-2aa5-43d2-bd4f-b084d58e7720 req-e31e5530-91ab-4e0d-ba24-76c28a317fdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:40:11 np0005603609 nova_compute[221550]: 2026-01-31 08:40:11.365 221554 DEBUG oslo_concurrency.lockutils [req-f3b6d738-2aa5-43d2-bd4f-b084d58e7720 req-e31e5530-91ab-4e0d-ba24-76c28a317fdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:40:11 np0005603609 nova_compute[221550]: 2026-01-31 08:40:11.365 221554 DEBUG nova.network.neutron [req-f3b6d738-2aa5-43d2-bd4f-b084d58e7720 req-e31e5530-91ab-4e0d-ba24-76c28a317fdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Refreshing network info cache for port 1950c089-af27-485c-8b1d-eefa73ac5064 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:40:11 np0005603609 nova_compute[221550]: 2026-01-31 08:40:11.560 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:11 np0005603609 nova_compute[221550]: 2026-01-31 08:40:11.561 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:11 np0005603609 nova_compute[221550]: 2026-01-31 08:40:11.561 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:11 np0005603609 nova_compute[221550]: 2026-01-31 08:40:11.591 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:11.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:40:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:11.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:40:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:40:13 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/951835952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:40:13 np0005603609 nova_compute[221550]: 2026-01-31 08:40:13.039 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:40:13.660 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:13.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:13.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:15 np0005603609 nova_compute[221550]: 2026-01-31 08:40:15.264 221554 DEBUG nova.network.neutron [req-f3b6d738-2aa5-43d2-bd4f-b084d58e7720 req-e31e5530-91ab-4e0d-ba24-76c28a317fdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updated VIF entry in instance network info cache for port 1950c089-af27-485c-8b1d-eefa73ac5064. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:40:15 np0005603609 nova_compute[221550]: 2026-01-31 08:40:15.265 221554 DEBUG nova.network.neutron [req-f3b6d738-2aa5-43d2-bd4f-b084d58e7720 req-e31e5530-91ab-4e0d-ba24-76c28a317fdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updating instance_info_cache with network_info: [{"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:40:15 np0005603609 nova_compute[221550]: 2026-01-31 08:40:15.367 221554 DEBUG oslo_concurrency.lockutils [req-f3b6d738-2aa5-43d2-bd4f-b084d58e7720 req-e31e5530-91ab-4e0d-ba24-76c28a317fdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:40:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:15.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:15.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e380 e380: 3 total, 3 up, 3 in
Jan 31 03:40:16 np0005603609 nova_compute[221550]: 2026-01-31 08:40:16.593 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:17.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:17 np0005603609 nova_compute[221550]: 2026-01-31 08:40:17.756 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848802.7544951, 6193a6a7-8918-4625-b626-5d53c31bf0ff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:40:17 np0005603609 nova_compute[221550]: 2026-01-31 08:40:17.757 221554 INFO nova.compute.manager [-] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:40:17 np0005603609 nova_compute[221550]: 2026-01-31 08:40:17.791 221554 DEBUG nova.compute.manager [None req-5c5ad929-1792-41dc-86f7-e66476c1e265 - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:40:17 np0005603609 nova_compute[221550]: 2026-01-31 08:40:17.795 221554 DEBUG nova.compute.manager [None req-5c5ad929-1792-41dc-86f7-e66476c1e265 - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:40:17 np0005603609 nova_compute[221550]: 2026-01-31 08:40:17.830 221554 INFO nova.compute.manager [None req-5c5ad929-1792-41dc-86f7-e66476c1e265 - - - - - -] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Jan 31 03:40:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:17.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:18 np0005603609 nova_compute[221550]: 2026-01-31 08:40:18.040 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:18 np0005603609 nova_compute[221550]: 2026-01-31 08:40:18.344 221554 DEBUG nova.compute.manager [req-760c964c-e775-4d7b-97c3-8544b0679d15 req-7615d733-86d2-41f1-af1f-9bee333d31d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:40:18 np0005603609 nova_compute[221550]: 2026-01-31 08:40:18.345 221554 DEBUG oslo_concurrency.lockutils [req-760c964c-e775-4d7b-97c3-8544b0679d15 req-7615d733-86d2-41f1-af1f-9bee333d31d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:18 np0005603609 nova_compute[221550]: 2026-01-31 08:40:18.345 221554 DEBUG oslo_concurrency.lockutils [req-760c964c-e775-4d7b-97c3-8544b0679d15 req-7615d733-86d2-41f1-af1f-9bee333d31d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:18 np0005603609 nova_compute[221550]: 2026-01-31 08:40:18.345 221554 DEBUG oslo_concurrency.lockutils [req-760c964c-e775-4d7b-97c3-8544b0679d15 req-7615d733-86d2-41f1-af1f-9bee333d31d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:18 np0005603609 nova_compute[221550]: 2026-01-31 08:40:18.345 221554 DEBUG nova.compute.manager [req-760c964c-e775-4d7b-97c3-8544b0679d15 req-7615d733-86d2-41f1-af1f-9bee333d31d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] No waiting events found dispatching network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:40:18 np0005603609 nova_compute[221550]: 2026-01-31 08:40:18.346 221554 WARNING nova.compute.manager [req-760c964c-e775-4d7b-97c3-8544b0679d15 req-7615d733-86d2-41f1-af1f-9bee333d31d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received unexpected event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 for instance with vm_state resized and task_state None.#033[00m
Jan 31 03:40:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:19.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:19.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:20 np0005603609 nova_compute[221550]: 2026-01-31 08:40:20.690 221554 DEBUG nova.compute.manager [req-a155ecf9-b5ec-41da-a54f-cb3cc94075b7 req-b4787b81-f1d9-40c5-90fd-17f7e601a92a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:40:20 np0005603609 nova_compute[221550]: 2026-01-31 08:40:20.691 221554 DEBUG oslo_concurrency.lockutils [req-a155ecf9-b5ec-41da-a54f-cb3cc94075b7 req-b4787b81-f1d9-40c5-90fd-17f7e601a92a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:20 np0005603609 nova_compute[221550]: 2026-01-31 08:40:20.692 221554 DEBUG oslo_concurrency.lockutils [req-a155ecf9-b5ec-41da-a54f-cb3cc94075b7 req-b4787b81-f1d9-40c5-90fd-17f7e601a92a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:20 np0005603609 nova_compute[221550]: 2026-01-31 08:40:20.692 221554 DEBUG oslo_concurrency.lockutils [req-a155ecf9-b5ec-41da-a54f-cb3cc94075b7 req-b4787b81-f1d9-40c5-90fd-17f7e601a92a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:20 np0005603609 nova_compute[221550]: 2026-01-31 08:40:20.692 221554 DEBUG nova.compute.manager [req-a155ecf9-b5ec-41da-a54f-cb3cc94075b7 req-b4787b81-f1d9-40c5-90fd-17f7e601a92a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] No waiting events found dispatching network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:40:20 np0005603609 nova_compute[221550]: 2026-01-31 08:40:20.693 221554 WARNING nova.compute.manager [req-a155ecf9-b5ec-41da-a54f-cb3cc94075b7 req-b4787b81-f1d9-40c5-90fd-17f7e601a92a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Received unexpected event network-vif-plugged-1950c089-af27-485c-8b1d-eefa73ac5064 for instance with vm_state resized and task_state None.#033[00m
Jan 31 03:40:21 np0005603609 nova_compute[221550]: 2026-01-31 08:40:21.044 221554 DEBUG oslo_concurrency.lockutils [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "6193a6a7-8918-4625-b626-5d53c31bf0ff" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:21 np0005603609 nova_compute[221550]: 2026-01-31 08:40:21.045 221554 DEBUG oslo_concurrency.lockutils [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:21 np0005603609 nova_compute[221550]: 2026-01-31 08:40:21.045 221554 DEBUG nova.compute.manager [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Going to confirm migration 24 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Jan 31 03:40:21 np0005603609 nova_compute[221550]: 2026-01-31 08:40:21.643 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:21.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:21.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:22 np0005603609 nova_compute[221550]: 2026-01-31 08:40:22.627 221554 DEBUG neutronclient.v2_0.client [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 1950c089-af27-485c-8b1d-eefa73ac5064 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 31 03:40:22 np0005603609 nova_compute[221550]: 2026-01-31 08:40:22.628 221554 DEBUG oslo_concurrency.lockutils [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:40:22 np0005603609 nova_compute[221550]: 2026-01-31 08:40:22.628 221554 DEBUG oslo_concurrency.lockutils [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:40:22 np0005603609 nova_compute[221550]: 2026-01-31 08:40:22.628 221554 DEBUG nova.network.neutron [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:40:22 np0005603609 nova_compute[221550]: 2026-01-31 08:40:22.628 221554 DEBUG nova.objects.instance [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'info_cache' on Instance uuid 6193a6a7-8918-4625-b626-5d53c31bf0ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:40:23 np0005603609 nova_compute[221550]: 2026-01-31 08:40:23.042 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:23.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:40:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:23.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:40:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:40:23Z|00846|binding|INFO|Releasing lport c8a7eefb-b644-411b-b95a-f875570edfa9 from this chassis (sb_readonly=0)
Jan 31 03:40:23 np0005603609 nova_compute[221550]: 2026-01-31 08:40:23.947 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:24 np0005603609 ovn_controller[130359]: 2026-01-31T08:40:24Z|00847|binding|INFO|Releasing lport c8a7eefb-b644-411b-b95a-f875570edfa9 from this chassis (sb_readonly=0)
Jan 31 03:40:24 np0005603609 nova_compute[221550]: 2026-01-31 08:40:24.015 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:24.238065) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824238162, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 705, "num_deletes": 250, "total_data_size": 1266307, "memory_usage": 1283776, "flush_reason": "Manual Compaction"}
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824284449, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 575300, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76039, "largest_seqno": 76738, "table_properties": {"data_size": 572221, "index_size": 986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8355, "raw_average_key_size": 20, "raw_value_size": 565717, "raw_average_value_size": 1410, "num_data_blocks": 43, "num_entries": 401, "num_filter_entries": 401, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848779, "oldest_key_time": 1769848779, "file_creation_time": 1769848824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 46425 microseconds, and 3412 cpu microseconds.
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:24.284501) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 575300 bytes OK
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:24.284525) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:24.436016) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:24.436065) EVENT_LOG_v1 {"time_micros": 1769848824436053, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:24.436093) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 1262514, prev total WAL file size 1263510, number of live WAL files 2.
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:24.436831) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353130' seq:72057594037927935, type:22 .. '6D6772737461740032373631' seq:0, type:0; will stop at (end)
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(561KB)], [153(14MB)]
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824436890, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 15547719, "oldest_snapshot_seqno": -1}
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 9793 keys, 11951331 bytes, temperature: kUnknown
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824808754, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 11951331, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11889766, "index_size": 35980, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24517, "raw_key_size": 258330, "raw_average_key_size": 26, "raw_value_size": 11719761, "raw_average_value_size": 1196, "num_data_blocks": 1371, "num_entries": 9793, "num_filter_entries": 9793, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769848824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:24.809042) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11951331 bytes
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:24.858785) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 41.8 rd, 32.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 14.3 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(47.8) write-amplify(20.8) OK, records in: 10292, records dropped: 499 output_compression: NoCompression
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:24.858830) EVENT_LOG_v1 {"time_micros": 1769848824858813, "job": 98, "event": "compaction_finished", "compaction_time_micros": 371932, "compaction_time_cpu_micros": 34989, "output_level": 6, "num_output_files": 1, "total_output_size": 11951331, "num_input_records": 10292, "num_output_records": 9793, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824859203, "job": 98, "event": "table_file_deletion", "file_number": 155}
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824861235, "job": 98, "event": "table_file_deletion", "file_number": 153}
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:24.436758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:24.861390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:24.861400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:24.861404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:24.861409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:24 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:24.861413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:40:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:25.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:40:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:25.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:26 np0005603609 nova_compute[221550]: 2026-01-31 08:40:26.645 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:27.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:27.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:27 np0005603609 NetworkManager[49064]: <info>  [1769848827.9473] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Jan 31 03:40:27 np0005603609 nova_compute[221550]: 2026-01-31 08:40:27.946 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:27 np0005603609 NetworkManager[49064]: <info>  [1769848827.9484] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Jan 31 03:40:28 np0005603609 nova_compute[221550]: 2026-01-31 08:40:28.003 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:28 np0005603609 ovn_controller[130359]: 2026-01-31T08:40:28Z|00848|binding|INFO|Releasing lport c8a7eefb-b644-411b-b95a-f875570edfa9 from this chassis (sb_readonly=0)
Jan 31 03:40:28 np0005603609 nova_compute[221550]: 2026-01-31 08:40:28.028 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:28 np0005603609 nova_compute[221550]: 2026-01-31 08:40:28.044 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:28 np0005603609 nova_compute[221550]: 2026-01-31 08:40:28.437 221554 DEBUG nova.network.neutron [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 6193a6a7-8918-4625-b626-5d53c31bf0ff] Updating instance_info_cache with network_info: [{"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:40:28 np0005603609 nova_compute[221550]: 2026-01-31 08:40:28.567 221554 DEBUG oslo_concurrency.lockutils [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-6193a6a7-8918-4625-b626-5d53c31bf0ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:40:28 np0005603609 nova_compute[221550]: 2026-01-31 08:40:28.568 221554 DEBUG nova.objects.instance [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'migration_context' on Instance uuid 6193a6a7-8918-4625-b626-5d53c31bf0ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:40:28 np0005603609 nova_compute[221550]: 2026-01-31 08:40:28.717 221554 DEBUG nova.storage.rbd_utils [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] removing snapshot(nova-resize) on rbd image(6193a6a7-8918-4625-b626-5d53c31bf0ff_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:40:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e381 e381: 3 total, 3 up, 3 in
Jan 31 03:40:29 np0005603609 nova_compute[221550]: 2026-01-31 08:40:29.274 221554 DEBUG nova.virt.libvirt.vif [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:38:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1044697057',display_name='tempest-TestNetworkAdvancedServerOps-server-1044697057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1044697057',id=186,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFM+PhejEoD7g/QWN1GYstJMXVFnv1wBVmxecXIfvr4KjdawbuS6IVqr3y8FzKBH2jXkdyk7w4kVVSMOQCA1tqgtHQYS5flQfReRJgL/KNYlba9qTCk6eFD00Tadz9cy3w==',key_name='tempest-TestNetworkAdvancedServerOps-1072390140',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:40:17Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-hmxtgjg2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:40:18Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=6193a6a7-8918-4625-b626-5d53c31bf0ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:40:29 np0005603609 nova_compute[221550]: 2026-01-31 08:40:29.274 221554 DEBUG nova.network.os_vif_util [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "1950c089-af27-485c-8b1d-eefa73ac5064", "address": "fa:16:3e:43:d3:f9", "network": {"id": "afa5ed82-7034-4517-a39e-5fc8b872592f", "bridge": "br-int", "label": "tempest-network-smoke--1489833336", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1950c089-af", "ovs_interfaceid": "1950c089-af27-485c-8b1d-eefa73ac5064", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:40:29 np0005603609 nova_compute[221550]: 2026-01-31 08:40:29.275 221554 DEBUG nova.network.os_vif_util [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:40:29 np0005603609 nova_compute[221550]: 2026-01-31 08:40:29.275 221554 DEBUG os_vif [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:40:29 np0005603609 nova_compute[221550]: 2026-01-31 08:40:29.277 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:29 np0005603609 nova_compute[221550]: 2026-01-31 08:40:29.277 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1950c089-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:29 np0005603609 nova_compute[221550]: 2026-01-31 08:40:29.277 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:40:29 np0005603609 nova_compute[221550]: 2026-01-31 08:40:29.279 221554 INFO os_vif [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:d3:f9,bridge_name='br-int',has_traffic_filtering=True,id=1950c089-af27-485c-8b1d-eefa73ac5064,network=Network(afa5ed82-7034-4517-a39e-5fc8b872592f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1950c089-af')#033[00m
Jan 31 03:40:29 np0005603609 nova_compute[221550]: 2026-01-31 08:40:29.280 221554 DEBUG oslo_concurrency.lockutils [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:29 np0005603609 nova_compute[221550]: 2026-01-31 08:40:29.280 221554 DEBUG oslo_concurrency.lockutils [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:29 np0005603609 nova_compute[221550]: 2026-01-31 08:40:29.543 221554 DEBUG oslo_concurrency.processutils [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:40:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:29.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:40:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:29.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:40:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:40:29 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2559611403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:40:29 np0005603609 nova_compute[221550]: 2026-01-31 08:40:29.935 221554 DEBUG oslo_concurrency.processutils [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:40:29 np0005603609 nova_compute[221550]: 2026-01-31 08:40:29.942 221554 DEBUG nova.compute.provider_tree [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:40:30 np0005603609 nova_compute[221550]: 2026-01-31 08:40:30.007 221554 DEBUG nova.scheduler.client.report [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:40:30 np0005603609 podman[301505]: 2026-01-31 08:40:30.18538254 +0000 UTC m=+0.060701018 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:40:30 np0005603609 nova_compute[221550]: 2026-01-31 08:40:30.203 221554 DEBUG oslo_concurrency.lockutils [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:30 np0005603609 podman[301504]: 2026-01-31 08:40:30.20467925 +0000 UTC m=+0.077843675 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, 
managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:40:30 np0005603609 nova_compute[221550]: 2026-01-31 08:40:30.554 221554 INFO nova.scheduler.client.report [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Deleted allocation for migration cf69e0cf-b556-4b51-a26d-2a27f3defe82#033[00m
Jan 31 03:40:30 np0005603609 nova_compute[221550]: 2026-01-31 08:40:30.790 221554 DEBUG oslo_concurrency.lockutils [None req-d3cf1704-5071-43ac-b60b-4f0b5f1a9af0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "6193a6a7-8918-4625-b626-5d53c31bf0ff" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 9.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:31 np0005603609 nova_compute[221550]: 2026-01-31 08:40:31.647 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:40:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:31.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:40:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:40:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:31.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:40:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:33 np0005603609 nova_compute[221550]: 2026-01-31 08:40:33.049 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:33.277163) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833277226, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 353, "num_deletes": 251, "total_data_size": 289081, "memory_usage": 296152, "flush_reason": "Manual Compaction"}
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833282219, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 190497, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76744, "largest_seqno": 77091, "table_properties": {"data_size": 188326, "index_size": 334, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5534, "raw_average_key_size": 18, "raw_value_size": 183952, "raw_average_value_size": 623, "num_data_blocks": 15, "num_entries": 295, "num_filter_entries": 295, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848824, "oldest_key_time": 1769848824, "file_creation_time": 1769848833, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 5084 microseconds, and 1404 cpu microseconds.
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:33.282261) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 190497 bytes OK
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:33.282276) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:33.284304) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:33.284320) EVENT_LOG_v1 {"time_micros": 1769848833284315, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:33.284338) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 286683, prev total WAL file size 286683, number of live WAL files 2.
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:33.284833) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(186KB)], [156(11MB)]
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833285095, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 12141828, "oldest_snapshot_seqno": -1}
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 9574 keys, 10210352 bytes, temperature: kUnknown
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833409090, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 10210352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10151792, "index_size": 33551, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23941, "raw_key_size": 254502, "raw_average_key_size": 26, "raw_value_size": 9987117, "raw_average_value_size": 1043, "num_data_blocks": 1262, "num_entries": 9574, "num_filter_entries": 9574, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769848833, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:33.409545) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10210352 bytes
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:33.411224) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.9 rd, 82.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(117.3) write-amplify(53.6) OK, records in: 10088, records dropped: 514 output_compression: NoCompression
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:33.411241) EVENT_LOG_v1 {"time_micros": 1769848833411233, "job": 100, "event": "compaction_finished", "compaction_time_micros": 124013, "compaction_time_cpu_micros": 38588, "output_level": 6, "num_output_files": 1, "total_output_size": 10210352, "num_input_records": 10088, "num_output_records": 9574, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833411474, "job": 100, "event": "table_file_deletion", "file_number": 158}
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833412567, "job": 100, "event": "table_file_deletion", "file_number": 156}
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:33.284715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:33.412782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:33.412794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:33.412799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:33.412802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:33 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:40:33.412805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:40:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:33.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:40:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:33.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:40:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:40:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:35.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:40:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:40:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:35.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:40:36 np0005603609 nova_compute[221550]: 2026-01-31 08:40:36.650 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:37 np0005603609 nova_compute[221550]: 2026-01-31 08:40:37.068 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:37.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:37.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:38 np0005603609 nova_compute[221550]: 2026-01-31 08:40:38.051 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 e382: 3 total, 3 up, 3 in
Jan 31 03:40:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:39.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:39.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:41 np0005603609 nova_compute[221550]: 2026-01-31 08:40:41.652 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:41.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:41.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:43 np0005603609 nova_compute[221550]: 2026-01-31 08:40:43.052 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:43.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:43.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:40:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:40:44 np0005603609 nova_compute[221550]: 2026-01-31 08:40:44.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:40:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:40:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:40:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:45.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:45.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:46 np0005603609 nova_compute[221550]: 2026-01-31 08:40:46.653 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:40:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:47.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:40:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:40:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:47.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:40:48 np0005603609 nova_compute[221550]: 2026-01-31 08:40:48.054 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:49 np0005603609 nova_compute[221550]: 2026-01-31 08:40:49.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:49 np0005603609 nova_compute[221550]: 2026-01-31 08:40:49.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:40:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:49.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:40:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 03:40:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:49.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 03:40:51 np0005603609 nova_compute[221550]: 2026-01-31 08:40:51.654 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:40:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:51.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:40:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:40:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:51.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:40:51 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:40:51 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:40:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:53 np0005603609 nova_compute[221550]: 2026-01-31 08:40:53.056 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:40:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3761983410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:40:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:40:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3761983410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:40:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:53.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:53.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:55 np0005603609 nova_compute[221550]: 2026-01-31 08:40:55.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:55 np0005603609 nova_compute[221550]: 2026-01-31 08:40:55.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:40:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:40:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:55.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:40:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:40:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:55.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:40:56 np0005603609 nova_compute[221550]: 2026-01-31 08:40:56.657 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:40:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:57.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:40:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:57.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:58 np0005603609 nova_compute[221550]: 2026-01-31 08:40:58.057 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:58 np0005603609 nova_compute[221550]: 2026-01-31 08:40:58.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:58 np0005603609 nova_compute[221550]: 2026-01-31 08:40:58.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:40:58 np0005603609 nova_compute[221550]: 2026-01-31 08:40:58.822 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:40:59 np0005603609 nova_compute[221550]: 2026-01-31 08:40:59.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:40:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:59.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:40:59 np0005603609 nova_compute[221550]: 2026-01-31 08:40:59.803 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:59 np0005603609 nova_compute[221550]: 2026-01-31 08:40:59.804 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:59 np0005603609 nova_compute[221550]: 2026-01-31 08:40:59.805 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:59 np0005603609 nova_compute[221550]: 2026-01-31 08:40:59.805 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:40:59 np0005603609 nova_compute[221550]: 2026-01-31 08:40:59.806 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:40:59 np0005603609 nova_compute[221550]: 2026-01-31 08:40:59.839 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:40:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:40:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:59.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:41:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:41:00 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1036511080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:41:00 np0005603609 nova_compute[221550]: 2026-01-31 08:41:00.224 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:41:00 np0005603609 nova_compute[221550]: 2026-01-31 08:41:00.941 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:41:00 np0005603609 nova_compute[221550]: 2026-01-31 08:41:00.942 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:41:01 np0005603609 nova_compute[221550]: 2026-01-31 08:41:01.060 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:41:01 np0005603609 nova_compute[221550]: 2026-01-31 08:41:01.061 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4019MB free_disk=20.987842559814453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:41:01 np0005603609 nova_compute[221550]: 2026-01-31 08:41:01.061 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:01 np0005603609 nova_compute[221550]: 2026-01-31 08:41:01.061 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:01 np0005603609 podman[301756]: 2026-01-31 08:41:01.158725165 +0000 UTC m=+0.047137318 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Jan 31 03:41:01 np0005603609 podman[301755]: 2026-01-31 08:41:01.212656517 +0000 UTC m=+0.099627455 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:41:01 np0005603609 nova_compute[221550]: 2026-01-31 08:41:01.660 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:41:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:01.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:41:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:01.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:02 np0005603609 nova_compute[221550]: 2026-01-31 08:41:02.303 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 233d3314-7d9d-49a5-818f-909d78422fb9 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:41:02 np0005603609 nova_compute[221550]: 2026-01-31 08:41:02.303 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:41:02 np0005603609 nova_compute[221550]: 2026-01-31 08:41:02.304 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:41:02 np0005603609 nova_compute[221550]: 2026-01-31 08:41:02.357 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:41:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:41:02 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1342224106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:41:02 np0005603609 nova_compute[221550]: 2026-01-31 08:41:02.935 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:41:02 np0005603609 nova_compute[221550]: 2026-01-31 08:41:02.943 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:41:03 np0005603609 nova_compute[221550]: 2026-01-31 08:41:03.059 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:03 np0005603609 nova_compute[221550]: 2026-01-31 08:41:03.177 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:41:03 np0005603609 nova_compute[221550]: 2026-01-31 08:41:03.179 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:41:03 np0005603609 nova_compute[221550]: 2026-01-31 08:41:03.179 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:03.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:41:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:03.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:41:03 np0005603609 ovn_controller[130359]: 2026-01-31T08:41:03Z|00849|binding|INFO|Releasing lport c8a7eefb-b644-411b-b95a-f875570edfa9 from this chassis (sb_readonly=0)
Jan 31 03:41:04 np0005603609 nova_compute[221550]: 2026-01-31 08:41:04.014 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:41:04.995 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:41:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:41:04.995 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:41:04 np0005603609 nova_compute[221550]: 2026-01-31 08:41:04.995 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:05.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:41:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:05.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:41:06 np0005603609 nova_compute[221550]: 2026-01-31 08:41:06.663 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:41:07.533 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:41:07.534 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:41:07.534 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:07.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:07.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:08 np0005603609 nova_compute[221550]: 2026-01-31 08:41:08.061 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:08 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:41:08.998 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:41:09 np0005603609 nova_compute[221550]: 2026-01-31 08:41:09.174 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:09 np0005603609 nova_compute[221550]: 2026-01-31 08:41:09.175 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:09 np0005603609 nova_compute[221550]: 2026-01-31 08:41:09.175 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:09.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:09.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:11 np0005603609 nova_compute[221550]: 2026-01-31 08:41:11.668 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:11.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:11.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:13 np0005603609 nova_compute[221550]: 2026-01-31 08:41:13.063 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:13.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:13.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:15.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:15.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:16 np0005603609 nova_compute[221550]: 2026-01-31 08:41:16.669 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:17.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:17.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:18 np0005603609 nova_compute[221550]: 2026-01-31 08:41:18.065 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:19.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:19.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:21 np0005603609 nova_compute[221550]: 2026-01-31 08:41:21.671 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:21.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:41:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:21.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:41:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:23 np0005603609 nova_compute[221550]: 2026-01-31 08:41:23.067 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:23.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:24.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:25.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:26.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:26 np0005603609 nova_compute[221550]: 2026-01-31 08:41:26.674 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:27.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:28.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:28 np0005603609 nova_compute[221550]: 2026-01-31 08:41:28.067 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:29 np0005603609 nova_compute[221550]: 2026-01-31 08:41:29.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:29.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:29 np0005603609 nova_compute[221550]: 2026-01-31 08:41:29.938 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:30.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:31 np0005603609 nova_compute[221550]: 2026-01-31 08:41:31.676 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:31.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:32.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:32 np0005603609 podman[301826]: 2026-01-31 08:41:32.188779992 +0000 UTC m=+0.062355638 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 03:41:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:32 np0005603609 podman[301825]: 2026-01-31 08:41:32.220596817 +0000 UTC m=+0.097048794 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:41:33 np0005603609 nova_compute[221550]: 2026-01-31 08:41:33.070 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:33.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:34.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:35.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:36.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:36 np0005603609 nova_compute[221550]: 2026-01-31 08:41:36.678 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:37.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:41:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:38.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:41:38 np0005603609 nova_compute[221550]: 2026-01-31 08:41:38.099 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:39.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:40.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:41 np0005603609 nova_compute[221550]: 2026-01-31 08:41:41.681 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:41.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:42.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:43 np0005603609 nova_compute[221550]: 2026-01-31 08:41:43.102 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:43 np0005603609 nova_compute[221550]: 2026-01-31 08:41:43.113 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:41:43.114 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:41:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:41:43.116 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:41:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:41:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:43.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:41:43 np0005603609 nova_compute[221550]: 2026-01-31 08:41:43.808 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:41:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:44.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:41:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:45.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:46.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:46 np0005603609 nova_compute[221550]: 2026-01-31 08:41:46.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:46 np0005603609 nova_compute[221550]: 2026-01-31 08:41:46.682 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:41:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:47.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:41:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:48.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:48 np0005603609 nova_compute[221550]: 2026-01-31 08:41:48.104 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:41:49.119 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:41:49 np0005603609 nova_compute[221550]: 2026-01-31 08:41:49.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:41:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:49.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:41:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:50.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:50 np0005603609 nova_compute[221550]: 2026-01-31 08:41:50.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:51 np0005603609 nova_compute[221550]: 2026-01-31 08:41:51.238 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "f297c5b1-d573-46de-95a8-be21389a4763" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:51 np0005603609 nova_compute[221550]: 2026-01-31 08:41:51.239 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:51 np0005603609 nova_compute[221550]: 2026-01-31 08:41:51.442 221554 DEBUG nova.compute.manager [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:41:51 np0005603609 nova_compute[221550]: 2026-01-31 08:41:51.685 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:51.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:51 np0005603609 nova_compute[221550]: 2026-01-31 08:41:51.833 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:51 np0005603609 nova_compute[221550]: 2026-01-31 08:41:51.833 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:51 np0005603609 nova_compute[221550]: 2026-01-31 08:41:51.841 221554 DEBUG nova.virt.hardware [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:41:51 np0005603609 nova_compute[221550]: 2026-01-31 08:41:51.841 221554 INFO nova.compute.claims [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:41:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:52.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:52 np0005603609 nova_compute[221550]: 2026-01-31 08:41:52.199 221554 DEBUG oslo_concurrency.processutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:41:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:41:52 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2637242050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:41:52 np0005603609 nova_compute[221550]: 2026-01-31 08:41:52.610 221554 DEBUG oslo_concurrency.processutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:41:52 np0005603609 nova_compute[221550]: 2026-01-31 08:41:52.617 221554 DEBUG nova.compute.provider_tree [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:41:52 np0005603609 nova_compute[221550]: 2026-01-31 08:41:52.700 221554 DEBUG nova.scheduler.client.report [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:41:52 np0005603609 nova_compute[221550]: 2026-01-31 08:41:52.842 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:52 np0005603609 nova_compute[221550]: 2026-01-31 08:41:52.843 221554 DEBUG nova.compute.manager [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:41:52 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:41:52 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:41:52 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:41:52 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:41:52 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:41:53 np0005603609 nova_compute[221550]: 2026-01-31 08:41:53.107 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:53 np0005603609 nova_compute[221550]: 2026-01-31 08:41:53.231 221554 DEBUG nova.compute.manager [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:41:53 np0005603609 nova_compute[221550]: 2026-01-31 08:41:53.231 221554 DEBUG nova.network.neutron [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:41:53 np0005603609 nova_compute[221550]: 2026-01-31 08:41:53.365 221554 INFO nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:41:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:41:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/537536257' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:41:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:41:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/537536257' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:41:53 np0005603609 nova_compute[221550]: 2026-01-31 08:41:53.471 221554 DEBUG nova.policy [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d0e9d918b4041fabd5ded633b4cf404', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9710f0cf77d84353ae13fa47922b085d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:41:53 np0005603609 nova_compute[221550]: 2026-01-31 08:41:53.473 221554 DEBUG nova.compute.manager [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:41:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:53.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:54 np0005603609 nova_compute[221550]: 2026-01-31 08:41:54.004 221554 DEBUG nova.compute.manager [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:41:54 np0005603609 nova_compute[221550]: 2026-01-31 08:41:54.005 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:41:54 np0005603609 nova_compute[221550]: 2026-01-31 08:41:54.006 221554 INFO nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Creating image(s)#033[00m
Jan 31 03:41:54 np0005603609 nova_compute[221550]: 2026-01-31 08:41:54.032 221554 DEBUG nova.storage.rbd_utils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image f297c5b1-d573-46de-95a8-be21389a4763_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:41:54 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:41:54 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:41:54 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:41:54 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:41:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:54.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:54 np0005603609 nova_compute[221550]: 2026-01-31 08:41:54.076 221554 DEBUG nova.storage.rbd_utils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image f297c5b1-d573-46de-95a8-be21389a4763_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:41:54 np0005603609 nova_compute[221550]: 2026-01-31 08:41:54.116 221554 DEBUG nova.storage.rbd_utils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image f297c5b1-d573-46de-95a8-be21389a4763_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:41:54 np0005603609 nova_compute[221550]: 2026-01-31 08:41:54.120 221554 DEBUG oslo_concurrency.processutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:41:54 np0005603609 nova_compute[221550]: 2026-01-31 08:41:54.173 221554 DEBUG oslo_concurrency.processutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:41:54 np0005603609 nova_compute[221550]: 2026-01-31 08:41:54.174 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:54 np0005603609 nova_compute[221550]: 2026-01-31 08:41:54.175 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:54 np0005603609 nova_compute[221550]: 2026-01-31 08:41:54.175 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:54 np0005603609 nova_compute[221550]: 2026-01-31 08:41:54.204 221554 DEBUG nova.storage.rbd_utils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image f297c5b1-d573-46de-95a8-be21389a4763_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:41:54 np0005603609 nova_compute[221550]: 2026-01-31 08:41:54.208 221554 DEBUG oslo_concurrency.processutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 f297c5b1-d573-46de-95a8-be21389a4763_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:41:55 np0005603609 nova_compute[221550]: 2026-01-31 08:41:55.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:55 np0005603609 nova_compute[221550]: 2026-01-31 08:41:55.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:41:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:41:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:55.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:41:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:41:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:56.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:41:56 np0005603609 nova_compute[221550]: 2026-01-31 08:41:56.690 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:56 np0005603609 nova_compute[221550]: 2026-01-31 08:41:56.977 221554 DEBUG nova.network.neutron [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Successfully created port: 8812315d-5bb3-40d8-860c-f525ab97a518 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:41:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:57 np0005603609 nova_compute[221550]: 2026-01-31 08:41:57.360 221554 DEBUG oslo_concurrency.processutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 f297c5b1-d573-46de-95a8-be21389a4763_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:41:57 np0005603609 nova_compute[221550]: 2026-01-31 08:41:57.441 221554 DEBUG nova.storage.rbd_utils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] resizing rbd image f297c5b1-d573-46de-95a8-be21389a4763_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:41:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:57.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:58 np0005603609 nova_compute[221550]: 2026-01-31 08:41:58.023 221554 DEBUG nova.objects.instance [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'migration_context' on Instance uuid f297c5b1-d573-46de-95a8-be21389a4763 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:41:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:58.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:58 np0005603609 nova_compute[221550]: 2026-01-31 08:41:58.110 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:58 np0005603609 nova_compute[221550]: 2026-01-31 08:41:58.153 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:41:58 np0005603609 nova_compute[221550]: 2026-01-31 08:41:58.153 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Ensure instance console log exists: /var/lib/nova/instances/f297c5b1-d573-46de-95a8-be21389a4763/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:41:58 np0005603609 nova_compute[221550]: 2026-01-31 08:41:58.154 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:58 np0005603609 nova_compute[221550]: 2026-01-31 08:41:58.154 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:58 np0005603609 nova_compute[221550]: 2026-01-31 08:41:58.155 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:58 np0005603609 nova_compute[221550]: 2026-01-31 08:41:58.856 221554 DEBUG nova.network.neutron [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Successfully updated port: 8812315d-5bb3-40d8-860c-f525ab97a518 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:41:59 np0005603609 nova_compute[221550]: 2026-01-31 08:41:59.069 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-f297c5b1-d573-46de-95a8-be21389a4763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:41:59 np0005603609 nova_compute[221550]: 2026-01-31 08:41:59.069 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-f297c5b1-d573-46de-95a8-be21389a4763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:41:59 np0005603609 nova_compute[221550]: 2026-01-31 08:41:59.069 221554 DEBUG nova.network.neutron [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:41:59 np0005603609 nova_compute[221550]: 2026-01-31 08:41:59.089 221554 DEBUG nova.compute.manager [req-0eabb415-6969-416d-b29e-c226f18e1295 req-35fb6365-8cb0-4b1a-9d3c-c47faec4769c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received event network-changed-8812315d-5bb3-40d8-860c-f525ab97a518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:41:59 np0005603609 nova_compute[221550]: 2026-01-31 08:41:59.089 221554 DEBUG nova.compute.manager [req-0eabb415-6969-416d-b29e-c226f18e1295 req-35fb6365-8cb0-4b1a-9d3c-c47faec4769c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Refreshing instance network info cache due to event network-changed-8812315d-5bb3-40d8-860c-f525ab97a518. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:41:59 np0005603609 nova_compute[221550]: 2026-01-31 08:41:59.089 221554 DEBUG oslo_concurrency.lockutils [req-0eabb415-6969-416d-b29e-c226f18e1295 req-35fb6365-8cb0-4b1a-9d3c-c47faec4769c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-f297c5b1-d573-46de-95a8-be21389a4763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:41:59 np0005603609 nova_compute[221550]: 2026-01-31 08:41:59.625 221554 DEBUG nova.network.neutron [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:41:59 np0005603609 nova_compute[221550]: 2026-01-31 08:41:59.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:59 np0005603609 nova_compute[221550]: 2026-01-31 08:41:59.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:41:59 np0005603609 nova_compute[221550]: 2026-01-31 08:41:59.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:41:59 np0005603609 nova_compute[221550]: 2026-01-31 08:41:59.711 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:41:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:41:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:59.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:00.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:01 np0005603609 nova_compute[221550]: 2026-01-31 08:42:01.691 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:01.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:01 np0005603609 nova_compute[221550]: 2026-01-31 08:42:01.858 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:42:01 np0005603609 nova_compute[221550]: 2026-01-31 08:42:01.859 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:42:01 np0005603609 nova_compute[221550]: 2026-01-31 08:42:01.859 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:42:01 np0005603609 nova_compute[221550]: 2026-01-31 08:42:01.859 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 233d3314-7d9d-49a5-818f-909d78422fb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:42:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:02.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:42:02 np0005603609 podman[302336]: 2026-01-31 08:42:02.595826089 +0000 UTC m=+0.079914086 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:42:02 np0005603609 podman[302337]: 2026-01-31 08:42:02.601289772 +0000 UTC m=+0.084820895 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:42:03 np0005603609 nova_compute[221550]: 2026-01-31 08:42:03.112 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:42:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:03.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:03 np0005603609 nova_compute[221550]: 2026-01-31 08:42:03.920 221554 DEBUG nova.network.neutron [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Updating instance_info_cache with network_info: [{"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:42:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:04.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.312 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-f297c5b1-d573-46de-95a8-be21389a4763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.313 221554 DEBUG nova.compute.manager [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Instance network_info: |[{"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.313 221554 DEBUG oslo_concurrency.lockutils [req-0eabb415-6969-416d-b29e-c226f18e1295 req-35fb6365-8cb0-4b1a-9d3c-c47faec4769c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-f297c5b1-d573-46de-95a8-be21389a4763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.314 221554 DEBUG nova.network.neutron [req-0eabb415-6969-416d-b29e-c226f18e1295 req-35fb6365-8cb0-4b1a-9d3c-c47faec4769c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Refreshing network info cache for port 8812315d-5bb3-40d8-860c-f525ab97a518 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.317 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Start _get_guest_xml network_info=[{"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.322 221554 WARNING nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.335 221554 DEBUG nova.virt.libvirt.host [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.336 221554 DEBUG nova.virt.libvirt.host [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.340 221554 DEBUG nova.virt.libvirt.host [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.340 221554 DEBUG nova.virt.libvirt.host [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.342 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.342 221554 DEBUG nova.virt.hardware [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.343 221554 DEBUG nova.virt.hardware [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.343 221554 DEBUG nova.virt.hardware [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.343 221554 DEBUG nova.virt.hardware [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.344 221554 DEBUG nova.virt.hardware [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.344 221554 DEBUG nova.virt.hardware [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.344 221554 DEBUG nova.virt.hardware [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.344 221554 DEBUG nova.virt.hardware [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.345 221554 DEBUG nova.virt.hardware [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.345 221554 DEBUG nova.virt.hardware [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.345 221554 DEBUG nova.virt.hardware [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.349 221554 DEBUG oslo_concurrency.processutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:42:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:42:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2310673947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.807 221554 DEBUG oslo_concurrency.processutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.833 221554 DEBUG nova.storage.rbd_utils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image f297c5b1-d573-46de-95a8-be21389a4763_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.837 221554 DEBUG oslo_concurrency.processutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:42:04 np0005603609 nova_compute[221550]: 2026-01-31 08:42:04.858 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updating instance_info_cache with network_info: [{"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.124 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.125 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.127 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:42:05 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3250871770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.251 221554 DEBUG oslo_concurrency.processutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.252 221554 DEBUG nova.virt.libvirt.vif [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:41:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1237528230',display_name='tempest-TestNetworkAdvancedServerOps-server-1237528230',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1237528230',id=189,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDGaHKTTKAZSqA2FRrln0d7VEN1/jGuE6MmKztq5BUqQ2XrCXf+Zt7tRrbMrY06e567UNxUAUTFJA6tx9or8yM9YlOiWwY7AJirCCVAT5kkloGaLEdR41gyXqezB8qD5Q==',key_name='tempest-TestNetworkAdvancedServerOps-386188591',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-jcqg6id3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:41:53Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=f297c5b1-d573-46de-95a8-be21389a4763,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.252 221554 DEBUG nova.network.os_vif_util [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.253 221554 DEBUG nova.network.os_vif_util [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:12:c6,bridge_name='br-int',has_traffic_filtering=True,id=8812315d-5bb3-40d8-860c-f525ab97a518,network=Network(bf5b3835-8d63-418b-8338-959fa28d05a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8812315d-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.254 221554 DEBUG nova.objects.instance [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'pci_devices' on Instance uuid f297c5b1-d573-46de-95a8-be21389a4763 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.347 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  <uuid>f297c5b1-d573-46de-95a8-be21389a4763</uuid>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  <name>instance-000000bd</name>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1237528230</nova:name>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:42:04</nova:creationTime>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        <nova:user uuid="4d0e9d918b4041fabd5ded633b4cf404">tempest-TestNetworkAdvancedServerOps-483180749-project-member</nova:user>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        <nova:project uuid="9710f0cf77d84353ae13fa47922b085d">tempest-TestNetworkAdvancedServerOps-483180749</nova:project>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        <nova:port uuid="8812315d-5bb3-40d8-860c-f525ab97a518">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <entry name="serial">f297c5b1-d573-46de-95a8-be21389a4763</entry>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <entry name="uuid">f297c5b1-d573-46de-95a8-be21389a4763</entry>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/f297c5b1-d573-46de-95a8-be21389a4763_disk">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/f297c5b1-d573-46de-95a8-be21389a4763_disk.config">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:a7:12:c6"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <target dev="tap8812315d-5b"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/f297c5b1-d573-46de-95a8-be21389a4763/console.log" append="off"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:42:05 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:42:05 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:42:05 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:42:05 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.349 221554 DEBUG nova.compute.manager [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Preparing to wait for external event network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.349 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "f297c5b1-d573-46de-95a8-be21389a4763-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.350 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.350 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.351 221554 DEBUG nova.virt.libvirt.vif [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:41:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1237528230',display_name='tempest-TestNetworkAdvancedServerOps-server-1237528230',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1237528230',id=189,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDGaHKTTKAZSqA2FRrln0d7VEN1/jGuE6MmKztq5BUqQ2XrCXf+Zt7tRrbMrY06e567UNxUAUTFJA6tx9or8yM9YlOiWwY7AJirCCVAT5kkloGaLEdR41gyXqezB8qD5Q==',key_name='tempest-TestNetworkAdvancedServerOps-386188591',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-jcqg6id3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:41:53Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=f297c5b1-d573-46de-95a8-be21389a4763,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.351 221554 DEBUG nova.network.os_vif_util [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.352 221554 DEBUG nova.network.os_vif_util [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:12:c6,bridge_name='br-int',has_traffic_filtering=True,id=8812315d-5bb3-40d8-860c-f525ab97a518,network=Network(bf5b3835-8d63-418b-8338-959fa28d05a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8812315d-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.353 221554 DEBUG os_vif [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:12:c6,bridge_name='br-int',has_traffic_filtering=True,id=8812315d-5bb3-40d8-860c-f525ab97a518,network=Network(bf5b3835-8d63-418b-8338-959fa28d05a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8812315d-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.353 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.354 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.355 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.358 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.358 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.358 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.358 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.359 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.386 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.388 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8812315d-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.389 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8812315d-5b, col_values=(('external_ids', {'iface-id': '8812315d-5bb3-40d8-860c-f525ab97a518', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:12:c6', 'vm-uuid': 'f297c5b1-d573-46de-95a8-be21389a4763'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:42:05 np0005603609 NetworkManager[49064]: <info>  [1769848925.3922] manager: (tap8812315d-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.392 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.397 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.399 221554 INFO os_vif [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:12:c6,bridge_name='br-int',has_traffic_filtering=True,id=8812315d-5bb3-40d8-860c-f525ab97a518,network=Network(bf5b3835-8d63-418b-8338-959fa28d05a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8812315d-5b')#033[00m
Jan 31 03:42:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:42:05 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/471124134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:42:05 np0005603609 nova_compute[221550]: 2026-01-31 08:42:05.784 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:42:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:05.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:06.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.333 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.333 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.333 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No VIF found with MAC fa:16:3e:a7:12:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.334 221554 INFO nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Using config drive#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.359 221554 DEBUG nova.storage.rbd_utils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image f297c5b1-d573-46de-95a8-be21389a4763_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.556 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.556 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.559 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.559 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.694 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.702 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.703 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3994MB free_disk=20.921566009521484GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.703 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.703 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.883 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 233d3314-7d9d-49a5-818f-909d78422fb9 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.884 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance f297c5b1-d573-46de-95a8-be21389a4763 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.884 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:42:06 np0005603609 nova_compute[221550]: 2026-01-31 08:42:06.885 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:42:07 np0005603609 nova_compute[221550]: 2026-01-31 08:42:07.036 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:42:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:42:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/579147508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:42:07 np0005603609 nova_compute[221550]: 2026-01-31 08:42:07.441 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:42:07 np0005603609 nova_compute[221550]: 2026-01-31 08:42:07.446 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:42:07 np0005603609 nova_compute[221550]: 2026-01-31 08:42:07.517 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:42:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:07.534 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:07.535 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:07.535 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:07 np0005603609 nova_compute[221550]: 2026-01-31 08:42:07.545 221554 INFO nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Creating config drive at /var/lib/nova/instances/f297c5b1-d573-46de-95a8-be21389a4763/disk.config#033[00m
Jan 31 03:42:07 np0005603609 nova_compute[221550]: 2026-01-31 08:42:07.549 221554 DEBUG oslo_concurrency.processutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f297c5b1-d573-46de-95a8-be21389a4763/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpvfal6iop execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:42:07 np0005603609 nova_compute[221550]: 2026-01-31 08:42:07.675 221554 DEBUG oslo_concurrency.processutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f297c5b1-d573-46de-95a8-be21389a4763/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpvfal6iop" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:42:07 np0005603609 nova_compute[221550]: 2026-01-31 08:42:07.707 221554 DEBUG nova.storage.rbd_utils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image f297c5b1-d573-46de-95a8-be21389a4763_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:42:07 np0005603609 nova_compute[221550]: 2026-01-31 08:42:07.711 221554 DEBUG oslo_concurrency.processutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f297c5b1-d573-46de-95a8-be21389a4763/disk.config f297c5b1-d573-46de-95a8-be21389a4763_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:42:07 np0005603609 nova_compute[221550]: 2026-01-31 08:42:07.761 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:42:07 np0005603609 nova_compute[221550]: 2026-01-31 08:42:07.762 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:07 np0005603609 nova_compute[221550]: 2026-01-31 08:42:07.763 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:07 np0005603609 nova_compute[221550]: 2026-01-31 08:42:07.764 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:42:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:42:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:07.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:42:07 np0005603609 nova_compute[221550]: 2026-01-31 08:42:07.916 221554 DEBUG nova.network.neutron [req-0eabb415-6969-416d-b29e-c226f18e1295 req-35fb6365-8cb0-4b1a-9d3c-c47faec4769c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Updated VIF entry in instance network info cache for port 8812315d-5bb3-40d8-860c-f525ab97a518. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:42:07 np0005603609 nova_compute[221550]: 2026-01-31 08:42:07.916 221554 DEBUG nova.network.neutron [req-0eabb415-6969-416d-b29e-c226f18e1295 req-35fb6365-8cb0-4b1a-9d3c-c47faec4769c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Updating instance_info_cache with network_info: [{"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:42:07 np0005603609 nova_compute[221550]: 2026-01-31 08:42:07.954 221554 DEBUG oslo_concurrency.lockutils [req-0eabb415-6969-416d-b29e-c226f18e1295 req-35fb6365-8cb0-4b1a-9d3c-c47faec4769c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-f297c5b1-d573-46de-95a8-be21389a4763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:42:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:42:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:08.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:42:09 np0005603609 nova_compute[221550]: 2026-01-31 08:42:09.266 221554 DEBUG oslo_concurrency.processutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f297c5b1-d573-46de-95a8-be21389a4763/disk.config f297c5b1-d573-46de-95a8-be21389a4763_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:42:09 np0005603609 nova_compute[221550]: 2026-01-31 08:42:09.267 221554 INFO nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Deleting local config drive /var/lib/nova/instances/f297c5b1-d573-46de-95a8-be21389a4763/disk.config because it was imported into RBD.#033[00m
Jan 31 03:42:09 np0005603609 kernel: tap8812315d-5b: entered promiscuous mode
Jan 31 03:42:09 np0005603609 NetworkManager[49064]: <info>  [1769848929.3032] manager: (tap8812315d-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Jan 31 03:42:09 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:09Z|00850|binding|INFO|Claiming lport 8812315d-5bb3-40d8-860c-f525ab97a518 for this chassis.
Jan 31 03:42:09 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:09Z|00851|binding|INFO|8812315d-5bb3-40d8-860c-f525ab97a518: Claiming fa:16:3e:a7:12:c6 10.100.0.13
Jan 31 03:42:09 np0005603609 nova_compute[221550]: 2026-01-31 08:42:09.306 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:09 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:09Z|00852|binding|INFO|Setting lport 8812315d-5bb3-40d8-860c-f525ab97a518 ovn-installed in OVS
Jan 31 03:42:09 np0005603609 nova_compute[221550]: 2026-01-31 08:42:09.313 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:09 np0005603609 systemd-udevd[302585]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:42:09 np0005603609 systemd-machined[190912]: New machine qemu-102-instance-000000bd.
Jan 31 03:42:09 np0005603609 NetworkManager[49064]: <info>  [1769848929.3379] device (tap8812315d-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:42:09 np0005603609 NetworkManager[49064]: <info>  [1769848929.3396] device (tap8812315d-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:42:09 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:09Z|00853|binding|INFO|Setting lport 8812315d-5bb3-40d8-860c-f525ab97a518 up in Southbound
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.340 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:12:c6 10.100.0.13'], port_security=['fa:16:3e:a7:12:c6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f297c5b1-d573-46de-95a8-be21389a4763', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf5b3835-8d63-418b-8338-959fa28d05a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a398779e-1480-4eef-8fa0-857756eaddf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=acdcb819-3bf3-4c53-af89-cd871f50a52e, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=8812315d-5bb3-40d8-860c-f525ab97a518) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.341 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 8812315d-5bb3-40d8-860c-f525ab97a518 in datapath bf5b3835-8d63-418b-8338-959fa28d05a8 bound to our chassis#033[00m
Jan 31 03:42:09 np0005603609 systemd[1]: Started Virtual Machine qemu-102-instance-000000bd.
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.342 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf5b3835-8d63-418b-8338-959fa28d05a8#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.349 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7904bc6b-443c-4c14-bd65-435999c8316b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.350 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbf5b3835-81 in ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.352 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbf5b3835-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.352 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a15c8781-7d26-4149-8bb0-c43a967a47fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.352 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cd5a3818-e4f9-444b-90b6-e943530ef574]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.360 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[27ce8699-3f11-4a95-bdea-f7c854665dc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.369 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8066b196-4c46-4eb3-901d-1488202496d2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.389 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d787b876-f0bb-44f1-b868-264f1fe4bd5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:09 np0005603609 systemd-udevd[302588]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.393 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b53c46ce-734c-4a24-8b99-d8521a28bb69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:09 np0005603609 NetworkManager[49064]: <info>  [1769848929.3957] manager: (tapbf5b3835-80): new Veth device (/org/freedesktop/NetworkManager/Devices/395)
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.417 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[50f77d89-e6f7-4997-9069-0d3b0575e50e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.421 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba46581-9fb3-48b7-954d-b84125635815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:09 np0005603609 NetworkManager[49064]: <info>  [1769848929.4378] device (tapbf5b3835-80): carrier: link connected
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.440 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[11cb5cbe-9b44-42ad-a9c0-4dd20541db60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.453 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0a41b8-d683-4754-9c1e-51289769b5ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf5b3835-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:37:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 916913, 'reachable_time': 38348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302619, 'error': None, 'target': 'ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.463 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[83d0a6f5-a30e-4f8a-88e9-5523b5ccbcb0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:37c3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 916913, 'tstamp': 916913}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302620, 'error': None, 'target': 'ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.472 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5d881a-d3b2-4c2b-a3fd-7bbff2e7ab74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf5b3835-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:37:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 916913, 'reachable_time': 38348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302621, 'error': None, 'target': 'ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.500 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[36c42956-263c-4ded-a063-ebcab2089c66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.537 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8b882e06-a9e0-4a52-96d7-c4c009d9b27b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.538 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf5b3835-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.538 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.539 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf5b3835-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:42:09 np0005603609 NetworkManager[49064]: <info>  [1769848929.5411] manager: (tapbf5b3835-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Jan 31 03:42:09 np0005603609 kernel: tapbf5b3835-80: entered promiscuous mode
Jan 31 03:42:09 np0005603609 nova_compute[221550]: 2026-01-31 08:42:09.541 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.544 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf5b3835-80, col_values=(('external_ids', {'iface-id': '73463532-de78-43c3-b19a-85e695d1d7e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:42:09 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:09Z|00854|binding|INFO|Releasing lport 73463532-de78-43c3-b19a-85e695d1d7e9 from this chassis (sb_readonly=0)
Jan 31 03:42:09 np0005603609 nova_compute[221550]: 2026-01-31 08:42:09.545 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:09 np0005603609 nova_compute[221550]: 2026-01-31 08:42:09.550 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.551 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bf5b3835-8d63-418b-8338-959fa28d05a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bf5b3835-8d63-418b-8338-959fa28d05a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.551 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0d98efa7-0df2-4708-8a3e-89b0af947d2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.553 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-bf5b3835-8d63-418b-8338-959fa28d05a8
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/bf5b3835-8d63-418b-8338-959fa28d05a8.pid.haproxy
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID bf5b3835-8d63-418b-8338-959fa28d05a8
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:42:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:09.554 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8', 'env', 'PROCESS_TAG=haproxy-bf5b3835-8d63-418b-8338-959fa28d05a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bf5b3835-8d63-418b-8338-959fa28d05a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:42:09 np0005603609 nova_compute[221550]: 2026-01-31 08:42:09.615 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848929.6143503, f297c5b1-d573-46de-95a8-be21389a4763 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:42:09 np0005603609 nova_compute[221550]: 2026-01-31 08:42:09.615 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] VM Started (Lifecycle Event)#033[00m
Jan 31 03:42:09 np0005603609 nova_compute[221550]: 2026-01-31 08:42:09.664 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:42:09 np0005603609 nova_compute[221550]: 2026-01-31 08:42:09.668 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848929.617252, f297c5b1-d573-46de-95a8-be21389a4763 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:42:09 np0005603609 nova_compute[221550]: 2026-01-31 08:42:09.668 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:42:09 np0005603609 nova_compute[221550]: 2026-01-31 08:42:09.704 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:42:09 np0005603609 nova_compute[221550]: 2026-01-31 08:42:09.708 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:42:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:09.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:09 np0005603609 nova_compute[221550]: 2026-01-31 08:42:09.850 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:42:09 np0005603609 podman[302695]: 2026-01-31 08:42:09.865928484 +0000 UTC m=+0.051011523 container create 71ab6c513ec959dec0067ebb8286f6cf0054bfb6035055a4dc1e8093cf905948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 03:42:09 np0005603609 systemd[1]: Started libpod-conmon-71ab6c513ec959dec0067ebb8286f6cf0054bfb6035055a4dc1e8093cf905948.scope.
Jan 31 03:42:09 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:42:09 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a619900f98ec4fa3772bdfbc64bd77239e6ef21eae1270746f6ccf826a5ad36/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:09 np0005603609 podman[302695]: 2026-01-31 08:42:09.835604906 +0000 UTC m=+0.020688025 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:42:09 np0005603609 podman[302695]: 2026-01-31 08:42:09.934075662 +0000 UTC m=+0.119158711 container init 71ab6c513ec959dec0067ebb8286f6cf0054bfb6035055a4dc1e8093cf905948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:42:09 np0005603609 podman[302695]: 2026-01-31 08:42:09.937818063 +0000 UTC m=+0.122901102 container start 71ab6c513ec959dec0067ebb8286f6cf0054bfb6035055a4dc1e8093cf905948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:42:09 np0005603609 neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8[302710]: [NOTICE]   (302714) : New worker (302716) forked
Jan 31 03:42:09 np0005603609 neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8[302710]: [NOTICE]   (302714) : Loading success.
Jan 31 03:42:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:10.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.391 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.589 221554 DEBUG nova.compute.manager [req-ec0aa5eb-4f99-461d-8aa7-e57a939037e5 req-c5590d99-e1f3-4d38-a617-f04a2d94b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received event network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.590 221554 DEBUG oslo_concurrency.lockutils [req-ec0aa5eb-4f99-461d-8aa7-e57a939037e5 req-c5590d99-e1f3-4d38-a617-f04a2d94b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f297c5b1-d573-46de-95a8-be21389a4763-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.590 221554 DEBUG oslo_concurrency.lockutils [req-ec0aa5eb-4f99-461d-8aa7-e57a939037e5 req-c5590d99-e1f3-4d38-a617-f04a2d94b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.591 221554 DEBUG oslo_concurrency.lockutils [req-ec0aa5eb-4f99-461d-8aa7-e57a939037e5 req-c5590d99-e1f3-4d38-a617-f04a2d94b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.591 221554 DEBUG nova.compute.manager [req-ec0aa5eb-4f99-461d-8aa7-e57a939037e5 req-c5590d99-e1f3-4d38-a617-f04a2d94b703 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Processing event network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.591 221554 DEBUG nova.compute.manager [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.596 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848930.5959196, f297c5b1-d573-46de-95a8-be21389a4763 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.596 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.598 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.601 221554 INFO nova.virt.libvirt.driver [-] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Instance spawned successfully.#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.601 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.666 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.667 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.667 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.668 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.668 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.668 221554 DEBUG nova.virt.libvirt.driver [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.671 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.673 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.758 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.910 221554 INFO nova.compute.manager [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Took 16.91 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:42:10 np0005603609 nova_compute[221550]: 2026-01-31 08:42:10.910 221554 DEBUG nova.compute.manager [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:42:11 np0005603609 nova_compute[221550]: 2026-01-31 08:42:11.081 221554 INFO nova.compute.manager [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Took 19.29 seconds to build instance.#033[00m
Jan 31 03:42:11 np0005603609 nova_compute[221550]: 2026-01-31 08:42:11.133 221554 DEBUG oslo_concurrency.lockutils [None req-ca630608-8028-472f-8922-44db897d9dda 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:11 np0005603609 nova_compute[221550]: 2026-01-31 08:42:11.696 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:11.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:12.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:12 np0005603609 nova_compute[221550]: 2026-01-31 08:42:12.407 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:12 np0005603609 nova_compute[221550]: 2026-01-31 08:42:12.408 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:12 np0005603609 nova_compute[221550]: 2026-01-31 08:42:12.408 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:13 np0005603609 nova_compute[221550]: 2026-01-31 08:42:13.164 221554 DEBUG nova.compute.manager [req-760475b4-6fce-442f-a41d-690dcb453b45 req-4492d8f5-8b9a-48a7-9fe4-34ca9c76f839 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received event network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:42:13 np0005603609 nova_compute[221550]: 2026-01-31 08:42:13.164 221554 DEBUG oslo_concurrency.lockutils [req-760475b4-6fce-442f-a41d-690dcb453b45 req-4492d8f5-8b9a-48a7-9fe4-34ca9c76f839 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f297c5b1-d573-46de-95a8-be21389a4763-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:13 np0005603609 nova_compute[221550]: 2026-01-31 08:42:13.164 221554 DEBUG oslo_concurrency.lockutils [req-760475b4-6fce-442f-a41d-690dcb453b45 req-4492d8f5-8b9a-48a7-9fe4-34ca9c76f839 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:13 np0005603609 nova_compute[221550]: 2026-01-31 08:42:13.164 221554 DEBUG oslo_concurrency.lockutils [req-760475b4-6fce-442f-a41d-690dcb453b45 req-4492d8f5-8b9a-48a7-9fe4-34ca9c76f839 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:13 np0005603609 nova_compute[221550]: 2026-01-31 08:42:13.165 221554 DEBUG nova.compute.manager [req-760475b4-6fce-442f-a41d-690dcb453b45 req-4492d8f5-8b9a-48a7-9fe4-34ca9c76f839 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] No waiting events found dispatching network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:42:13 np0005603609 nova_compute[221550]: 2026-01-31 08:42:13.165 221554 WARNING nova.compute.manager [req-760475b4-6fce-442f-a41d-690dcb453b45 req-4492d8f5-8b9a-48a7-9fe4-34ca9c76f839 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received unexpected event network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:42:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:13.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:14.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:15 np0005603609 nova_compute[221550]: 2026-01-31 08:42:15.393 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:42:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:15.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:42:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:42:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:16.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:42:16 np0005603609 nova_compute[221550]: 2026-01-31 08:42:16.699 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:17.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:18.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:19.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:20.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:20 np0005603609 nova_compute[221550]: 2026-01-31 08:42:20.395 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:21 np0005603609 nova_compute[221550]: 2026-01-31 08:42:21.702 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:21.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:22.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:23.808 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:42:23 np0005603609 nova_compute[221550]: 2026-01-31 08:42:23.809 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:23.810 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:42:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:23.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:24 np0005603609 nova_compute[221550]: 2026-01-31 08:42:24.093 221554 DEBUG nova.compute.manager [req-0a26735a-e964-46d4-9c9c-ba31620451b1 req-2d5cc7f3-29d8-46f7-8dea-759f1d623c66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received event network-changed-8812315d-5bb3-40d8-860c-f525ab97a518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:42:24 np0005603609 nova_compute[221550]: 2026-01-31 08:42:24.094 221554 DEBUG nova.compute.manager [req-0a26735a-e964-46d4-9c9c-ba31620451b1 req-2d5cc7f3-29d8-46f7-8dea-759f1d623c66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Refreshing instance network info cache due to event network-changed-8812315d-5bb3-40d8-860c-f525ab97a518. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:42:24 np0005603609 nova_compute[221550]: 2026-01-31 08:42:24.094 221554 DEBUG oslo_concurrency.lockutils [req-0a26735a-e964-46d4-9c9c-ba31620451b1 req-2d5cc7f3-29d8-46f7-8dea-759f1d623c66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-f297c5b1-d573-46de-95a8-be21389a4763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:42:24 np0005603609 nova_compute[221550]: 2026-01-31 08:42:24.095 221554 DEBUG oslo_concurrency.lockutils [req-0a26735a-e964-46d4-9c9c-ba31620451b1 req-2d5cc7f3-29d8-46f7-8dea-759f1d623c66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-f297c5b1-d573-46de-95a8-be21389a4763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:42:24 np0005603609 nova_compute[221550]: 2026-01-31 08:42:24.095 221554 DEBUG nova.network.neutron [req-0a26735a-e964-46d4-9c9c-ba31620451b1 req-2d5cc7f3-29d8-46f7-8dea-759f1d623c66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Refreshing network info cache for port 8812315d-5bb3-40d8-860c-f525ab97a518 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:42:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:42:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:24.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:42:24 np0005603609 nova_compute[221550]: 2026-01-31 08:42:24.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:24 np0005603609 nova_compute[221550]: 2026-01-31 08:42:24.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:42:24 np0005603609 nova_compute[221550]: 2026-01-31 08:42:24.887 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:42:25 np0005603609 nova_compute[221550]: 2026-01-31 08:42:25.460 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:25.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:26.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:26Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:12:c6 10.100.0.13
Jan 31 03:42:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:26Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:12:c6 10.100.0.13
Jan 31 03:42:26 np0005603609 nova_compute[221550]: 2026-01-31 08:42:26.705 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:27.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:27 np0005603609 nova_compute[221550]: 2026-01-31 08:42:27.971 221554 DEBUG nova.network.neutron [req-0a26735a-e964-46d4-9c9c-ba31620451b1 req-2d5cc7f3-29d8-46f7-8dea-759f1d623c66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Updated VIF entry in instance network info cache for port 8812315d-5bb3-40d8-860c-f525ab97a518. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:42:27 np0005603609 nova_compute[221550]: 2026-01-31 08:42:27.972 221554 DEBUG nova.network.neutron [req-0a26735a-e964-46d4-9c9c-ba31620451b1 req-2d5cc7f3-29d8-46f7-8dea-759f1d623c66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Updating instance_info_cache with network_info: [{"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:42:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:28.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:28 np0005603609 nova_compute[221550]: 2026-01-31 08:42:28.466 221554 DEBUG oslo_concurrency.lockutils [req-0a26735a-e964-46d4-9c9c-ba31620451b1 req-2d5cc7f3-29d8-46f7-8dea-759f1d623c66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-f297c5b1-d573-46de-95a8-be21389a4763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:42:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:29.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:30.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:30 np0005603609 nova_compute[221550]: 2026-01-31 08:42:30.463 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:31 np0005603609 nova_compute[221550]: 2026-01-31 08:42:31.707 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:31 np0005603609 nova_compute[221550]: 2026-01-31 08:42:31.859 221554 INFO nova.compute.manager [None req-0f67aedb-f0ad-464c-a06a-f1b04b4d4e71 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Get console output#033[00m
Jan 31 03:42:31 np0005603609 nova_compute[221550]: 2026-01-31 08:42:31.865 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:42:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:31.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:32.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:32 np0005603609 nova_compute[221550]: 2026-01-31 08:42:32.993 221554 DEBUG oslo_concurrency.lockutils [None req-3db3c86e-64fb-4992-bdbe-c77f932666a8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "f297c5b1-d573-46de-95a8-be21389a4763" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:32 np0005603609 nova_compute[221550]: 2026-01-31 08:42:32.994 221554 DEBUG oslo_concurrency.lockutils [None req-3db3c86e-64fb-4992-bdbe-c77f932666a8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:32 np0005603609 nova_compute[221550]: 2026-01-31 08:42:32.994 221554 DEBUG nova.compute.manager [None req-3db3c86e-64fb-4992-bdbe-c77f932666a8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:42:32 np0005603609 nova_compute[221550]: 2026-01-31 08:42:32.998 221554 DEBUG nova.compute.manager [None req-3db3c86e-64fb-4992-bdbe-c77f932666a8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 31 03:42:33 np0005603609 nova_compute[221550]: 2026-01-31 08:42:33.000 221554 DEBUG nova.objects.instance [None req-3db3c86e-64fb-4992-bdbe-c77f932666a8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'flavor' on Instance uuid f297c5b1-d573-46de-95a8-be21389a4763 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:42:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:33Z|00855|binding|INFO|Releasing lport 73463532-de78-43c3-b19a-85e695d1d7e9 from this chassis (sb_readonly=0)
Jan 31 03:42:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:33Z|00856|binding|INFO|Releasing lport c8a7eefb-b644-411b-b95a-f875570edfa9 from this chassis (sb_readonly=0)
Jan 31 03:42:33 np0005603609 nova_compute[221550]: 2026-01-31 08:42:33.127 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:33 np0005603609 podman[302726]: 2026-01-31 08:42:33.182535268 +0000 UTC m=+0.059602892 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:42:33 np0005603609 podman[302725]: 2026-01-31 08:42:33.214853235 +0000 UTC m=+0.094561653 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller)
Jan 31 03:42:33 np0005603609 nova_compute[221550]: 2026-01-31 08:42:33.272 221554 DEBUG nova.virt.libvirt.driver [None req-3db3c86e-64fb-4992-bdbe-c77f932666a8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:42:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:33.813 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:42:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:33.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:34.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:35 np0005603609 nova_compute[221550]: 2026-01-31 08:42:35.467 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:42:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:35.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:42:35 np0005603609 kernel: tap8812315d-5b (unregistering): left promiscuous mode
Jan 31 03:42:35 np0005603609 NetworkManager[49064]: <info>  [1769848955.9558] device (tap8812315d-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:42:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:35Z|00857|binding|INFO|Releasing lport 8812315d-5bb3-40d8-860c-f525ab97a518 from this chassis (sb_readonly=0)
Jan 31 03:42:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:35Z|00858|binding|INFO|Setting lport 8812315d-5bb3-40d8-860c-f525ab97a518 down in Southbound
Jan 31 03:42:35 np0005603609 nova_compute[221550]: 2026-01-31 08:42:35.964 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:35 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:35Z|00859|binding|INFO|Removing iface tap8812315d-5b ovn-installed in OVS
Jan 31 03:42:35 np0005603609 nova_compute[221550]: 2026-01-31 08:42:35.967 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:35 np0005603609 nova_compute[221550]: 2026-01-31 08:42:35.971 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:36 np0005603609 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000bd.scope: Deactivated successfully.
Jan 31 03:42:36 np0005603609 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000bd.scope: Consumed 13.402s CPU time.
Jan 31 03:42:36 np0005603609 systemd-machined[190912]: Machine qemu-102-instance-000000bd terminated.
Jan 31 03:42:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:36.114 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:12:c6 10.100.0.13'], port_security=['fa:16:3e:a7:12:c6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f297c5b1-d573-46de-95a8-be21389a4763', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf5b3835-8d63-418b-8338-959fa28d05a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a398779e-1480-4eef-8fa0-857756eaddf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=acdcb819-3bf3-4c53-af89-cd871f50a52e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=8812315d-5bb3-40d8-860c-f525ab97a518) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:42:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:36.116 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 8812315d-5bb3-40d8-860c-f525ab97a518 in datapath bf5b3835-8d63-418b-8338-959fa28d05a8 unbound from our chassis#033[00m
Jan 31 03:42:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:36.119 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf5b3835-8d63-418b-8338-959fa28d05a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:42:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:36.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:36.120 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b10bf0ab-3b06-45be-825a-226a54c1a4f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:36.121 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8 namespace which is not needed anymore#033[00m
Jan 31 03:42:36 np0005603609 neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8[302710]: [NOTICE]   (302714) : haproxy version is 2.8.14-c23fe91
Jan 31 03:42:36 np0005603609 neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8[302710]: [NOTICE]   (302714) : path to executable is /usr/sbin/haproxy
Jan 31 03:42:36 np0005603609 neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8[302710]: [WARNING]  (302714) : Exiting Master process...
Jan 31 03:42:36 np0005603609 neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8[302710]: [ALERT]    (302714) : Current worker (302716) exited with code 143 (Terminated)
Jan 31 03:42:36 np0005603609 neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8[302710]: [WARNING]  (302714) : All workers exited. Exiting... (0)
Jan 31 03:42:36 np0005603609 systemd[1]: libpod-71ab6c513ec959dec0067ebb8286f6cf0054bfb6035055a4dc1e8093cf905948.scope: Deactivated successfully.
Jan 31 03:42:36 np0005603609 podman[302797]: 2026-01-31 08:42:36.232292408 +0000 UTC m=+0.041436280 container died 71ab6c513ec959dec0067ebb8286f6cf0054bfb6035055a4dc1e8093cf905948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 03:42:36 np0005603609 systemd[1]: var-lib-containers-storage-overlay-8a619900f98ec4fa3772bdfbc64bd77239e6ef21eae1270746f6ccf826a5ad36-merged.mount: Deactivated successfully.
Jan 31 03:42:36 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-71ab6c513ec959dec0067ebb8286f6cf0054bfb6035055a4dc1e8093cf905948-userdata-shm.mount: Deactivated successfully.
Jan 31 03:42:36 np0005603609 podman[302797]: 2026-01-31 08:42:36.274796992 +0000 UTC m=+0.083940904 container cleanup 71ab6c513ec959dec0067ebb8286f6cf0054bfb6035055a4dc1e8093cf905948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:42:36 np0005603609 systemd[1]: libpod-conmon-71ab6c513ec959dec0067ebb8286f6cf0054bfb6035055a4dc1e8093cf905948.scope: Deactivated successfully.
Jan 31 03:42:36 np0005603609 nova_compute[221550]: 2026-01-31 08:42:36.289 221554 INFO nova.virt.libvirt.driver [None req-3db3c86e-64fb-4992-bdbe-c77f932666a8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:42:36 np0005603609 nova_compute[221550]: 2026-01-31 08:42:36.296 221554 INFO nova.virt.libvirt.driver [-] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Instance destroyed successfully.#033[00m
Jan 31 03:42:36 np0005603609 nova_compute[221550]: 2026-01-31 08:42:36.297 221554 DEBUG nova.objects.instance [None req-3db3c86e-64fb-4992-bdbe-c77f932666a8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'numa_topology' on Instance uuid f297c5b1-d573-46de-95a8-be21389a4763 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:42:36 np0005603609 podman[302833]: 2026-01-31 08:42:36.339665641 +0000 UTC m=+0.046300418 container remove 71ab6c513ec959dec0067ebb8286f6cf0054bfb6035055a4dc1e8093cf905948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:42:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:36.343 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[79b7bf60-3fff-4fcc-8cba-797be16c4fac]: (4, ('Sat Jan 31 08:42:36 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8 (71ab6c513ec959dec0067ebb8286f6cf0054bfb6035055a4dc1e8093cf905948)\n71ab6c513ec959dec0067ebb8286f6cf0054bfb6035055a4dc1e8093cf905948\nSat Jan 31 08:42:36 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8 (71ab6c513ec959dec0067ebb8286f6cf0054bfb6035055a4dc1e8093cf905948)\n71ab6c513ec959dec0067ebb8286f6cf0054bfb6035055a4dc1e8093cf905948\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:36.345 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4af34511-6cc9-4f78-b734-2e4325f3f367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:36.346 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf5b3835-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:42:36 np0005603609 nova_compute[221550]: 2026-01-31 08:42:36.348 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:36 np0005603609 kernel: tapbf5b3835-80: left promiscuous mode
Jan 31 03:42:36 np0005603609 nova_compute[221550]: 2026-01-31 08:42:36.355 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:36.357 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e19396ca-e893-445b-bff8-e8907c556dea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:36.371 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1d7dc8-8e38-4158-99b8-67213b6cc683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:36.373 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[26a79d72-e382-452a-9e90-b683f594c4f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:36.390 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a04ffe-3ae4-4928-8b2c-c84c18e92a86]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 916908, 'reachable_time': 24533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302852, 'error': None, 'target': 'ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:36 np0005603609 systemd[1]: run-netns-ovnmeta\x2dbf5b3835\x2d8d63\x2d418b\x2d8338\x2d959fa28d05a8.mount: Deactivated successfully.
Jan 31 03:42:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:36.393 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:42:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:36.393 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[0595c6ed-a684-44da-a58a-fef3f58f4238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:36 np0005603609 nova_compute[221550]: 2026-01-31 08:42:36.686 221554 DEBUG nova.compute.manager [None req-3db3c86e-64fb-4992-bdbe-c77f932666a8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:42:36 np0005603609 nova_compute[221550]: 2026-01-31 08:42:36.710 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:37 np0005603609 nova_compute[221550]: 2026-01-31 08:42:37.367 221554 DEBUG oslo_concurrency.lockutils [None req-3db3c86e-64fb-4992-bdbe-c77f932666a8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 4.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:37 np0005603609 nova_compute[221550]: 2026-01-31 08:42:37.730 221554 DEBUG nova.compute.manager [req-f05cade6-f262-4b14-bbe6-4bb3bbe3c4b1 req-5b5bef12-c34d-4841-9962-f0647ebe89e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received event network-vif-unplugged-8812315d-5bb3-40d8-860c-f525ab97a518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:42:37 np0005603609 nova_compute[221550]: 2026-01-31 08:42:37.731 221554 DEBUG oslo_concurrency.lockutils [req-f05cade6-f262-4b14-bbe6-4bb3bbe3c4b1 req-5b5bef12-c34d-4841-9962-f0647ebe89e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f297c5b1-d573-46de-95a8-be21389a4763-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:37 np0005603609 nova_compute[221550]: 2026-01-31 08:42:37.731 221554 DEBUG oslo_concurrency.lockutils [req-f05cade6-f262-4b14-bbe6-4bb3bbe3c4b1 req-5b5bef12-c34d-4841-9962-f0647ebe89e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:37 np0005603609 nova_compute[221550]: 2026-01-31 08:42:37.732 221554 DEBUG oslo_concurrency.lockutils [req-f05cade6-f262-4b14-bbe6-4bb3bbe3c4b1 req-5b5bef12-c34d-4841-9962-f0647ebe89e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:37 np0005603609 nova_compute[221550]: 2026-01-31 08:42:37.732 221554 DEBUG nova.compute.manager [req-f05cade6-f262-4b14-bbe6-4bb3bbe3c4b1 req-5b5bef12-c34d-4841-9962-f0647ebe89e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] No waiting events found dispatching network-vif-unplugged-8812315d-5bb3-40d8-860c-f525ab97a518 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:42:37 np0005603609 nova_compute[221550]: 2026-01-31 08:42:37.733 221554 WARNING nova.compute.manager [req-f05cade6-f262-4b14-bbe6-4bb3bbe3c4b1 req-5b5bef12-c34d-4841-9962-f0647ebe89e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received unexpected event network-vif-unplugged-8812315d-5bb3-40d8-860c-f525ab97a518 for instance with vm_state active and task_state powering-off.#033[00m
Jan 31 03:42:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:37.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:38.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:39.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:42:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:40.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:42:40 np0005603609 nova_compute[221550]: 2026-01-31 08:42:40.482 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:41 np0005603609 nova_compute[221550]: 2026-01-31 08:42:41.712 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:41.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:42.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:43.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:44.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:44 np0005603609 nova_compute[221550]: 2026-01-31 08:42:44.927 221554 INFO nova.compute.manager [None req-4ef6b406-421c-43b7-ae7d-490fa17e643c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Get console output#033[00m
Jan 31 03:42:45 np0005603609 nova_compute[221550]: 2026-01-31 08:42:45.486 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:45.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:46.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:46 np0005603609 nova_compute[221550]: 2026-01-31 08:42:46.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:46 np0005603609 nova_compute[221550]: 2026-01-31 08:42:46.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:46 np0005603609 nova_compute[221550]: 2026-01-31 08:42:46.714 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:47 np0005603609 nova_compute[221550]: 2026-01-31 08:42:47.283 221554 DEBUG nova.compute.manager [req-aa01eacc-101d-4110-8b3f-6fe89a853e67 req-003d69b1-9943-46b9-b98a-fb639db568d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received event network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:42:47 np0005603609 nova_compute[221550]: 2026-01-31 08:42:47.284 221554 DEBUG oslo_concurrency.lockutils [req-aa01eacc-101d-4110-8b3f-6fe89a853e67 req-003d69b1-9943-46b9-b98a-fb639db568d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f297c5b1-d573-46de-95a8-be21389a4763-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:47 np0005603609 nova_compute[221550]: 2026-01-31 08:42:47.284 221554 DEBUG oslo_concurrency.lockutils [req-aa01eacc-101d-4110-8b3f-6fe89a853e67 req-003d69b1-9943-46b9-b98a-fb639db568d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:47 np0005603609 nova_compute[221550]: 2026-01-31 08:42:47.284 221554 DEBUG oslo_concurrency.lockutils [req-aa01eacc-101d-4110-8b3f-6fe89a853e67 req-003d69b1-9943-46b9-b98a-fb639db568d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:47 np0005603609 nova_compute[221550]: 2026-01-31 08:42:47.284 221554 DEBUG nova.compute.manager [req-aa01eacc-101d-4110-8b3f-6fe89a853e67 req-003d69b1-9943-46b9-b98a-fb639db568d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] No waiting events found dispatching network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:42:47 np0005603609 nova_compute[221550]: 2026-01-31 08:42:47.285 221554 WARNING nova.compute.manager [req-aa01eacc-101d-4110-8b3f-6fe89a853e67 req-003d69b1-9943-46b9-b98a-fb639db568d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received unexpected event network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 for instance with vm_state stopped and task_state powering-on.#033[00m
Jan 31 03:42:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:47.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:48.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:48 np0005603609 nova_compute[221550]: 2026-01-31 08:42:48.587 221554 DEBUG nova.objects.instance [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'flavor' on Instance uuid f297c5b1-d573-46de-95a8-be21389a4763 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:42:49 np0005603609 nova_compute[221550]: 2026-01-31 08:42:49.526 221554 DEBUG oslo_concurrency.lockutils [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-f297c5b1-d573-46de-95a8-be21389a4763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:42:49 np0005603609 nova_compute[221550]: 2026-01-31 08:42:49.526 221554 DEBUG oslo_concurrency.lockutils [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-f297c5b1-d573-46de-95a8-be21389a4763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:42:49 np0005603609 nova_compute[221550]: 2026-01-31 08:42:49.527 221554 DEBUG nova.network.neutron [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:42:49 np0005603609 nova_compute[221550]: 2026-01-31 08:42:49.527 221554 DEBUG nova.objects.instance [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'info_cache' on Instance uuid f297c5b1-d573-46de-95a8-be21389a4763 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:42:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:49.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:42:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:50.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:42:50 np0005603609 nova_compute[221550]: 2026-01-31 08:42:50.490 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:51 np0005603609 nova_compute[221550]: 2026-01-31 08:42:51.195 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848956.1943138, f297c5b1-d573-46de-95a8-be21389a4763 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:42:51 np0005603609 nova_compute[221550]: 2026-01-31 08:42:51.196 221554 INFO nova.compute.manager [-] [instance: f297c5b1-d573-46de-95a8-be21389a4763] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:42:51 np0005603609 nova_compute[221550]: 2026-01-31 08:42:51.412 221554 DEBUG nova.compute.manager [None req-6c145cfd-9aa2-44ce-84ad-c67816e1d6ae - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:42:51 np0005603609 nova_compute[221550]: 2026-01-31 08:42:51.415 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:51 np0005603609 nova_compute[221550]: 2026-01-31 08:42:51.416 221554 DEBUG nova.compute.manager [None req-6c145cfd-9aa2-44ce-84ad-c67816e1d6ae - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:42:51 np0005603609 nova_compute[221550]: 2026-01-31 08:42:51.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:51 np0005603609 nova_compute[221550]: 2026-01-31 08:42:51.716 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:51 np0005603609 nova_compute[221550]: 2026-01-31 08:42:51.869 221554 INFO nova.compute.manager [None req-6c145cfd-9aa2-44ce-84ad-c67816e1d6ae - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Jan 31 03:42:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:51.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:52.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:52 np0005603609 nova_compute[221550]: 2026-01-31 08:42:52.384 221554 DEBUG nova.network.neutron [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Updating instance_info_cache with network_info: [{"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:42:53 np0005603609 nova_compute[221550]: 2026-01-31 08:42:53.062 221554 DEBUG oslo_concurrency.lockutils [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-f297c5b1-d573-46de-95a8-be21389a4763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:42:53 np0005603609 nova_compute[221550]: 2026-01-31 08:42:53.125 221554 INFO nova.virt.libvirt.driver [-] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Instance destroyed successfully.#033[00m
Jan 31 03:42:53 np0005603609 nova_compute[221550]: 2026-01-31 08:42:53.126 221554 DEBUG nova.objects.instance [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'numa_topology' on Instance uuid f297c5b1-d573-46de-95a8-be21389a4763 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:42:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:42:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4045926153' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:42:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:42:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4045926153' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:42:53 np0005603609 nova_compute[221550]: 2026-01-31 08:42:53.752 221554 DEBUG nova.objects.instance [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'resources' on Instance uuid f297c5b1-d573-46de-95a8-be21389a4763 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:42:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:53.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.105 221554 DEBUG nova.virt.libvirt.vif [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:41:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1237528230',display_name='tempest-TestNetworkAdvancedServerOps-server-1237528230',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1237528230',id=189,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDGaHKTTKAZSqA2FRrln0d7VEN1/jGuE6MmKztq5BUqQ2XrCXf+Zt7tRrbMrY06e567UNxUAUTFJA6tx9or8yM9YlOiWwY7AJirCCVAT5kkloGaLEdR41gyXqezB8qD5Q==',key_name='tempest-TestNetworkAdvancedServerOps-386188591',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:42:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-jcqg6id3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:42:37Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=f297c5b1-d573-46de-95a8-be21389a4763,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.106 221554 DEBUG nova.network.os_vif_util [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.106 221554 DEBUG nova.network.os_vif_util [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:12:c6,bridge_name='br-int',has_traffic_filtering=True,id=8812315d-5bb3-40d8-860c-f525ab97a518,network=Network(bf5b3835-8d63-418b-8338-959fa28d05a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8812315d-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.107 221554 DEBUG os_vif [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:12:c6,bridge_name='br-int',has_traffic_filtering=True,id=8812315d-5bb3-40d8-860c-f525ab97a518,network=Network(bf5b3835-8d63-418b-8338-959fa28d05a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8812315d-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.108 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.108 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8812315d-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.110 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.112 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.114 221554 INFO os_vif [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:12:c6,bridge_name='br-int',has_traffic_filtering=True,id=8812315d-5bb3-40d8-860c-f525ab97a518,network=Network(bf5b3835-8d63-418b-8338-959fa28d05a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8812315d-5b')#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.121 221554 DEBUG nova.virt.libvirt.driver [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Start _get_guest_xml network_info=[{"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.124 221554 WARNING nova.virt.libvirt.driver [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.133 221554 DEBUG nova.virt.libvirt.host [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.134 221554 DEBUG nova.virt.libvirt.host [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.138 221554 DEBUG nova.virt.libvirt.host [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.138 221554 DEBUG nova.virt.libvirt.host [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.140 221554 DEBUG nova.virt.libvirt.driver [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.140 221554 DEBUG nova.virt.hardware [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.140 221554 DEBUG nova.virt.hardware [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.141 221554 DEBUG nova.virt.hardware [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.141 221554 DEBUG nova.virt.hardware [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.141 221554 DEBUG nova.virt.hardware [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.142 221554 DEBUG nova.virt.hardware [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.142 221554 DEBUG nova.virt.hardware [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.142 221554 DEBUG nova.virt.hardware [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.143 221554 DEBUG nova.virt.hardware [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.143 221554 DEBUG nova.virt.hardware [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.143 221554 DEBUG nova.virt.hardware [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.144 221554 DEBUG nova.objects.instance [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'vcpu_model' on Instance uuid f297c5b1-d573-46de-95a8-be21389a4763 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:42:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:54.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.700 221554 DEBUG oslo_concurrency.processutils [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.934 221554 DEBUG nova.compute.manager [req-f82fadfa-f818-4cbe-a93e-5c16d7eed240 req-3ef2ac20-ff40-4c3d-816b-fc1e02a0ed25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Received event network-changed-93519b56-e167-470c-9933-432512222982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.935 221554 DEBUG nova.compute.manager [req-f82fadfa-f818-4cbe-a93e-5c16d7eed240 req-3ef2ac20-ff40-4c3d-816b-fc1e02a0ed25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Refreshing instance network info cache due to event network-changed-93519b56-e167-470c-9933-432512222982. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.935 221554 DEBUG oslo_concurrency.lockutils [req-f82fadfa-f818-4cbe-a93e-5c16d7eed240 req-3ef2ac20-ff40-4c3d-816b-fc1e02a0ed25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.935 221554 DEBUG oslo_concurrency.lockutils [req-f82fadfa-f818-4cbe-a93e-5c16d7eed240 req-3ef2ac20-ff40-4c3d-816b-fc1e02a0ed25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:42:54 np0005603609 nova_compute[221550]: 2026-01-31 08:42:54.936 221554 DEBUG nova.network.neutron [req-f82fadfa-f818-4cbe-a93e-5c16d7eed240 req-3ef2ac20-ff40-4c3d-816b-fc1e02a0ed25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Refreshing network info cache for port 93519b56-e167-470c-9933-432512222982 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.081 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:42:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:42:55 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2214913132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.123 221554 DEBUG oslo_concurrency.processutils [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.157 221554 DEBUG oslo_concurrency.processutils [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:42:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:42:55 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3260758072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.585 221554 DEBUG oslo_concurrency.processutils [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.587 221554 DEBUG nova.virt.libvirt.vif [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:41:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1237528230',display_name='tempest-TestNetworkAdvancedServerOps-server-1237528230',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1237528230',id=189,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDGaHKTTKAZSqA2FRrln0d7VEN1/jGuE6MmKztq5BUqQ2XrCXf+Zt7tRrbMrY06e567UNxUAUTFJA6tx9or8yM9YlOiWwY7AJirCCVAT5kkloGaLEdR41gyXqezB8qD5Q==',key_name='tempest-TestNetworkAdvancedServerOps-386188591',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:42:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-jcqg6id3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:42:37Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=f297c5b1-d573-46de-95a8-be21389a4763,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.587 221554 DEBUG nova.network.os_vif_util [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.588 221554 DEBUG nova.network.os_vif_util [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:12:c6,bridge_name='br-int',has_traffic_filtering=True,id=8812315d-5bb3-40d8-860c-f525ab97a518,network=Network(bf5b3835-8d63-418b-8338-959fa28d05a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8812315d-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.589 221554 DEBUG nova.objects.instance [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'pci_devices' on Instance uuid f297c5b1-d573-46de-95a8-be21389a4763 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:42:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:55.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.988 221554 DEBUG nova.virt.libvirt.driver [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  <uuid>f297c5b1-d573-46de-95a8-be21389a4763</uuid>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  <name>instance-000000bd</name>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1237528230</nova:name>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:42:54</nova:creationTime>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        <nova:user uuid="4d0e9d918b4041fabd5ded633b4cf404">tempest-TestNetworkAdvancedServerOps-483180749-project-member</nova:user>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        <nova:project uuid="9710f0cf77d84353ae13fa47922b085d">tempest-TestNetworkAdvancedServerOps-483180749</nova:project>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        <nova:port uuid="8812315d-5bb3-40d8-860c-f525ab97a518">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <entry name="serial">f297c5b1-d573-46de-95a8-be21389a4763</entry>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <entry name="uuid">f297c5b1-d573-46de-95a8-be21389a4763</entry>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/f297c5b1-d573-46de-95a8-be21389a4763_disk">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/f297c5b1-d573-46de-95a8-be21389a4763_disk.config">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:a7:12:c6"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <target dev="tap8812315d-5b"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/f297c5b1-d573-46de-95a8-be21389a4763/console.log" append="off"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <input type="keyboard" bus="usb"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:42:55 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:42:55 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:42:55 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:42:55 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.989 221554 DEBUG nova.virt.libvirt.driver [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.990 221554 DEBUG nova.virt.libvirt.driver [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.990 221554 DEBUG nova.virt.libvirt.vif [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:41:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1237528230',display_name='tempest-TestNetworkAdvancedServerOps-server-1237528230',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1237528230',id=189,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDGaHKTTKAZSqA2FRrln0d7VEN1/jGuE6MmKztq5BUqQ2XrCXf+Zt7tRrbMrY06e567UNxUAUTFJA6tx9or8yM9YlOiWwY7AJirCCVAT5kkloGaLEdR41gyXqezB8qD5Q==',key_name='tempest-TestNetworkAdvancedServerOps-386188591',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:42:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-jcqg6id3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:42:37Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=f297c5b1-d573-46de-95a8-be21389a4763,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.991 221554 DEBUG nova.network.os_vif_util [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.991 221554 DEBUG nova.network.os_vif_util [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:12:c6,bridge_name='br-int',has_traffic_filtering=True,id=8812315d-5bb3-40d8-860c-f525ab97a518,network=Network(bf5b3835-8d63-418b-8338-959fa28d05a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8812315d-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.991 221554 DEBUG os_vif [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:12:c6,bridge_name='br-int',has_traffic_filtering=True,id=8812315d-5bb3-40d8-860c-f525ab97a518,network=Network(bf5b3835-8d63-418b-8338-959fa28d05a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8812315d-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.992 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.992 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.992 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.994 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.994 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8812315d-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.995 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8812315d-5b, col_values=(('external_ids', {'iface-id': '8812315d-5bb3-40d8-860c-f525ab97a518', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:12:c6', 'vm-uuid': 'f297c5b1-d573-46de-95a8-be21389a4763'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.996 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:55 np0005603609 NetworkManager[49064]: <info>  [1769848975.9981] manager: (tap8812315d-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Jan 31 03:42:55 np0005603609 nova_compute[221550]: 2026-01-31 08:42:55.999 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.001 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.002 221554 INFO os_vif [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:12:c6,bridge_name='br-int',has_traffic_filtering=True,id=8812315d-5bb3-40d8-860c-f525ab97a518,network=Network(bf5b3835-8d63-418b-8338-959fa28d05a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8812315d-5b')#033[00m
Jan 31 03:42:56 np0005603609 kernel: tap8812315d-5b: entered promiscuous mode
Jan 31 03:42:56 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:56Z|00860|binding|INFO|Claiming lport 8812315d-5bb3-40d8-860c-f525ab97a518 for this chassis.
Jan 31 03:42:56 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:56Z|00861|binding|INFO|8812315d-5bb3-40d8-860c-f525ab97a518: Claiming fa:16:3e:a7:12:c6 10.100.0.13
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.067 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:56 np0005603609 NetworkManager[49064]: <info>  [1769848976.0702] manager: (tap8812315d-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/398)
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.076 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:56 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:56Z|00862|binding|INFO|Setting lport 8812315d-5bb3-40d8-860c-f525ab97a518 ovn-installed in OVS
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.079 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:56 np0005603609 systemd-machined[190912]: New machine qemu-103-instance-000000bd.
Jan 31 03:42:56 np0005603609 systemd-udevd[302933]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:42:56 np0005603609 NetworkManager[49064]: <info>  [1769848976.1190] device (tap8812315d-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:42:56 np0005603609 NetworkManager[49064]: <info>  [1769848976.1200] device (tap8812315d-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:42:56 np0005603609 systemd[1]: Started Virtual Machine qemu-103-instance-000000bd.
Jan 31 03:42:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:56.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.314 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:12:c6 10.100.0.13'], port_security=['fa:16:3e:a7:12:c6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f297c5b1-d573-46de-95a8-be21389a4763', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf5b3835-8d63-418b-8338-959fa28d05a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a398779e-1480-4eef-8fa0-857756eaddf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=acdcb819-3bf3-4c53-af89-cd871f50a52e, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=8812315d-5bb3-40d8-860c-f525ab97a518) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:42:56 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:56Z|00863|binding|INFO|Setting lport 8812315d-5bb3-40d8-860c-f525ab97a518 up in Southbound
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.316 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 8812315d-5bb3-40d8-860c-f525ab97a518 in datapath bf5b3835-8d63-418b-8338-959fa28d05a8 bound to our chassis#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.318 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf5b3835-8d63-418b-8338-959fa28d05a8#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.328 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[22599c5a-4ade-4b73-a6e6-723f5a7bb1e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.329 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbf5b3835-81 in ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.332 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbf5b3835-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.332 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[49018f7a-c1e7-4b73-a554-ab9560d9856d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.333 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f908dca8-3de2-45d1-9f56-be4a532986a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.340 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[2964540e-ea78-4096-8a38-f20edda772c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.352 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[df9cd4b4-db76-443f-a69a-c158ccb9d2bc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.373 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed25cea-2e70-4bae-8ba9-90f152fe5f26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:56 np0005603609 systemd-udevd[302935]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.378 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb65414-1932-4126-bde1-dd14c5628e71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:56 np0005603609 NetworkManager[49064]: <info>  [1769848976.3791] manager: (tapbf5b3835-80): new Veth device (/org/freedesktop/NetworkManager/Devices/399)
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.399 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[cf313f38-ed4b-40ce-96e4-fba073d2088b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.401 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f1696b-b572-4c59-8ef4-45cff26ff598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:56 np0005603609 NetworkManager[49064]: <info>  [1769848976.4183] device (tapbf5b3835-80): carrier: link connected
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.420 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad00d75-947d-4aff-9202-86c447ad488e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.432 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a0039e69-6aef-4a3b-9fc4-3a63b8383476]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf5b3835-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:37:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 921611, 'reachable_time': 20532, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303005, 'error': None, 'target': 'ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.443 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[022facb4-3b02-452e-8793-f031ed7c7ffb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:37c3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 921611, 'tstamp': 921611}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303008, 'error': None, 'target': 'ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.456 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b166d4c8-1a4e-4f96-be28-cd56da194c60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf5b3835-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:37:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 921611, 'reachable_time': 20532, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303009, 'error': None, 'target': 'ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.479 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[224dbe3d-6416-4378-a234-e95efe3a289c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.500 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848976.4998903, f297c5b1-d573-46de-95a8-be21389a4763 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.500 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.502 221554 DEBUG nova.compute.manager [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.506 221554 INFO nova.virt.libvirt.driver [-] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Instance rebooted successfully.#033[00m
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.506 221554 DEBUG nova.compute.manager [None req-c6dc012a-6ebe-4e3e-96c9-1c989950730e 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.517 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9b52a3f0-89fc-4918-8dc8-6eb288fc67ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.518 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf5b3835-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.519 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.519 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf5b3835-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.521 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:56 np0005603609 NetworkManager[49064]: <info>  [1769848976.5220] manager: (tapbf5b3835-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Jan 31 03:42:56 np0005603609 kernel: tapbf5b3835-80: entered promiscuous mode
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.527 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf5b3835-80, col_values=(('external_ids', {'iface-id': '73463532-de78-43c3-b19a-85e695d1d7e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.528 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:56 np0005603609 ovn_controller[130359]: 2026-01-31T08:42:56Z|00864|binding|INFO|Releasing lport 73463532-de78-43c3-b19a-85e695d1d7e9 from this chassis (sb_readonly=0)
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.532 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.534 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bf5b3835-8d63-418b-8338-959fa28d05a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bf5b3835-8d63-418b-8338-959fa28d05a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.535 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[19639b0b-b97f-4756-bd31-35ac705baddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.536 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-bf5b3835-8d63-418b-8338-959fa28d05a8
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/bf5b3835-8d63-418b-8338-959fa28d05a8.pid.haproxy
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID bf5b3835-8d63-418b-8338-959fa28d05a8
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:42:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:42:56.536 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8', 'env', 'PROCESS_TAG=haproxy-bf5b3835-8d63-418b-8338-959fa28d05a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bf5b3835-8d63-418b-8338-959fa28d05a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.718 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:56 np0005603609 podman[303042]: 2026-01-31 08:42:56.849711122 +0000 UTC m=+0.052569480 container create e34d66fe6e0e82aab1e21fed3a68e22ee7a0cf76072cda83fabdd4f391af11c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:42:56 np0005603609 systemd[1]: Started libpod-conmon-e34d66fe6e0e82aab1e21fed3a68e22ee7a0cf76072cda83fabdd4f391af11c7.scope.
Jan 31 03:42:56 np0005603609 podman[303042]: 2026-01-31 08:42:56.819984718 +0000 UTC m=+0.022843096 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:42:56 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:42:56 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d081a0a31b553ac9aa69d237bacdac2e88d4e5444b8a03cef85dbcec62e42a1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:56 np0005603609 podman[303042]: 2026-01-31 08:42:56.934987757 +0000 UTC m=+0.137846165 container init e34d66fe6e0e82aab1e21fed3a68e22ee7a0cf76072cda83fabdd4f391af11c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.940 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:42:56 np0005603609 podman[303042]: 2026-01-31 08:42:56.941627219 +0000 UTC m=+0.144485587 container start e34d66fe6e0e82aab1e21fed3a68e22ee7a0cf76072cda83fabdd4f391af11c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 03:42:56 np0005603609 nova_compute[221550]: 2026-01-31 08:42:56.946 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:42:56 np0005603609 neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8[303057]: [NOTICE]   (303061) : New worker (303063) forked
Jan 31 03:42:56 np0005603609 neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8[303057]: [NOTICE]   (303061) : Loading success.
Jan 31 03:42:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:57 np0005603609 nova_compute[221550]: 2026-01-31 08:42:57.722 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769848976.5007715, f297c5b1-d573-46de-95a8-be21389a4763 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:42:57 np0005603609 nova_compute[221550]: 2026-01-31 08:42:57.723 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] VM Started (Lifecycle Event)#033[00m
Jan 31 03:42:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:42:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:57.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:42:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:58.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:58 np0005603609 nova_compute[221550]: 2026-01-31 08:42:58.442 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:42:58 np0005603609 nova_compute[221550]: 2026-01-31 08:42:58.446 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:42:59 np0005603609 nova_compute[221550]: 2026-01-31 08:42:59.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:59 np0005603609 nova_compute[221550]: 2026-01-31 08:42:59.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:42:59 np0005603609 nova_compute[221550]: 2026-01-31 08:42:59.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:42:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:42:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:59.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:00.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:00 np0005603609 nova_compute[221550]: 2026-01-31 08:43:00.677 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:43:00 np0005603609 nova_compute[221550]: 2026-01-31 08:43:00.997 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:01 np0005603609 nova_compute[221550]: 2026-01-31 08:43:01.720 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:01.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:43:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:02.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:43:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:43:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:43:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:43:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:43:03 np0005603609 nova_compute[221550]: 2026-01-31 08:43:03.844 221554 DEBUG nova.compute.manager [req-6aa1f587-057d-4e0c-921f-ec14ae810c67 req-f98e30ad-673c-41a6-9183-37929cae714c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received event network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:03 np0005603609 nova_compute[221550]: 2026-01-31 08:43:03.845 221554 DEBUG oslo_concurrency.lockutils [req-6aa1f587-057d-4e0c-921f-ec14ae810c67 req-f98e30ad-673c-41a6-9183-37929cae714c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f297c5b1-d573-46de-95a8-be21389a4763-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:03 np0005603609 nova_compute[221550]: 2026-01-31 08:43:03.845 221554 DEBUG oslo_concurrency.lockutils [req-6aa1f587-057d-4e0c-921f-ec14ae810c67 req-f98e30ad-673c-41a6-9183-37929cae714c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:03 np0005603609 nova_compute[221550]: 2026-01-31 08:43:03.845 221554 DEBUG oslo_concurrency.lockutils [req-6aa1f587-057d-4e0c-921f-ec14ae810c67 req-f98e30ad-673c-41a6-9183-37929cae714c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:03 np0005603609 nova_compute[221550]: 2026-01-31 08:43:03.845 221554 DEBUG nova.compute.manager [req-6aa1f587-057d-4e0c-921f-ec14ae810c67 req-f98e30ad-673c-41a6-9183-37929cae714c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] No waiting events found dispatching network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:43:03 np0005603609 nova_compute[221550]: 2026-01-31 08:43:03.845 221554 WARNING nova.compute.manager [req-6aa1f587-057d-4e0c-921f-ec14ae810c67 req-f98e30ad-673c-41a6-9183-37929cae714c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received unexpected event network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:43:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:03.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:04 np0005603609 podman[303204]: 2026-01-31 08:43:04.161415739 +0000 UTC m=+0.046045441 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:43:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:04.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:04 np0005603609 nova_compute[221550]: 2026-01-31 08:43:04.167 221554 DEBUG nova.network.neutron [req-f82fadfa-f818-4cbe-a93e-5c16d7eed240 req-3ef2ac20-ff40-4c3d-816b-fc1e02a0ed25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updated VIF entry in instance network info cache for port 93519b56-e167-470c-9933-432512222982. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:43:04 np0005603609 nova_compute[221550]: 2026-01-31 08:43:04.168 221554 DEBUG nova.network.neutron [req-f82fadfa-f818-4cbe-a93e-5c16d7eed240 req-3ef2ac20-ff40-4c3d-816b-fc1e02a0ed25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updating instance_info_cache with network_info: [{"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:43:04 np0005603609 podman[303203]: 2026-01-31 08:43:04.17996511 +0000 UTC m=+0.064757256 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 03:43:04 np0005603609 nova_compute[221550]: 2026-01-31 08:43:04.462 221554 DEBUG oslo_concurrency.lockutils [req-f82fadfa-f818-4cbe-a93e-5c16d7eed240 req-3ef2ac20-ff40-4c3d-816b-fc1e02a0ed25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:43:04 np0005603609 nova_compute[221550]: 2026-01-31 08:43:04.463 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:43:04 np0005603609 nova_compute[221550]: 2026-01-31 08:43:04.463 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:43:04 np0005603609 nova_compute[221550]: 2026-01-31 08:43:04.463 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 233d3314-7d9d-49a5-818f-909d78422fb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:43:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:05.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:06 np0005603609 nova_compute[221550]: 2026-01-31 08:43:06.044 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:06.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:06 np0005603609 nova_compute[221550]: 2026-01-31 08:43:06.598 221554 DEBUG nova.compute.manager [req-f157af13-8acd-4115-b5aa-d99bd271379f req-cb5df944-b248-4a83-beb5-f59a03ddcb2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received event network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:06 np0005603609 nova_compute[221550]: 2026-01-31 08:43:06.598 221554 DEBUG oslo_concurrency.lockutils [req-f157af13-8acd-4115-b5aa-d99bd271379f req-cb5df944-b248-4a83-beb5-f59a03ddcb2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f297c5b1-d573-46de-95a8-be21389a4763-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:06 np0005603609 nova_compute[221550]: 2026-01-31 08:43:06.598 221554 DEBUG oslo_concurrency.lockutils [req-f157af13-8acd-4115-b5aa-d99bd271379f req-cb5df944-b248-4a83-beb5-f59a03ddcb2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:06 np0005603609 nova_compute[221550]: 2026-01-31 08:43:06.598 221554 DEBUG oslo_concurrency.lockutils [req-f157af13-8acd-4115-b5aa-d99bd271379f req-cb5df944-b248-4a83-beb5-f59a03ddcb2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:06 np0005603609 nova_compute[221550]: 2026-01-31 08:43:06.599 221554 DEBUG nova.compute.manager [req-f157af13-8acd-4115-b5aa-d99bd271379f req-cb5df944-b248-4a83-beb5-f59a03ddcb2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] No waiting events found dispatching network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:43:06 np0005603609 nova_compute[221550]: 2026-01-31 08:43:06.599 221554 WARNING nova.compute.manager [req-f157af13-8acd-4115-b5aa-d99bd271379f req-cb5df944-b248-4a83-beb5-f59a03ddcb2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received unexpected event network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:43:06 np0005603609 nova_compute[221550]: 2026-01-31 08:43:06.722 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:07.535 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:07.536 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:07.536 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:07.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:43:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:08.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:43:09 np0005603609 ovn_controller[130359]: 2026-01-31T08:43:09Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:12:c6 10.100.0.13
Jan 31 03:43:09 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:43:09 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:43:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:09.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:10.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:10 np0005603609 nova_compute[221550]: 2026-01-31 08:43:10.214 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updating instance_info_cache with network_info: [{"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:43:10 np0005603609 nova_compute[221550]: 2026-01-31 08:43:10.588 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-233d3314-7d9d-49a5-818f-909d78422fb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:43:10 np0005603609 nova_compute[221550]: 2026-01-31 08:43:10.589 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:43:10 np0005603609 nova_compute[221550]: 2026-01-31 08:43:10.590 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:10 np0005603609 nova_compute[221550]: 2026-01-31 08:43:10.590 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:10 np0005603609 nova_compute[221550]: 2026-01-31 08:43:10.942 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:10 np0005603609 nova_compute[221550]: 2026-01-31 08:43:10.943 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:10 np0005603609 nova_compute[221550]: 2026-01-31 08:43:10.944 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:10 np0005603609 nova_compute[221550]: 2026-01-31 08:43:10.944 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:43:10 np0005603609 nova_compute[221550]: 2026-01-31 08:43:10.944 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:11 np0005603609 nova_compute[221550]: 2026-01-31 08:43:11.046 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:43:11 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1324190554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:43:11 np0005603609 nova_compute[221550]: 2026-01-31 08:43:11.357 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:11 np0005603609 nova_compute[221550]: 2026-01-31 08:43:11.684 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:43:11 np0005603609 nova_compute[221550]: 2026-01-31 08:43:11.684 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:43:11 np0005603609 nova_compute[221550]: 2026-01-31 08:43:11.687 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:43:11 np0005603609 nova_compute[221550]: 2026-01-31 08:43:11.687 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:43:11 np0005603609 nova_compute[221550]: 2026-01-31 08:43:11.724 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:11 np0005603609 nova_compute[221550]: 2026-01-31 08:43:11.837 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:43:11 np0005603609 nova_compute[221550]: 2026-01-31 08:43:11.838 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3751MB free_disk=20.942401885986328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:43:11 np0005603609 nova_compute[221550]: 2026-01-31 08:43:11.838 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:11 np0005603609 nova_compute[221550]: 2026-01-31 08:43:11.838 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:11.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:12.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:12 np0005603609 nova_compute[221550]: 2026-01-31 08:43:12.664 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 233d3314-7d9d-49a5-818f-909d78422fb9 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:43:12 np0005603609 nova_compute[221550]: 2026-01-31 08:43:12.664 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance f297c5b1-d573-46de-95a8-be21389a4763 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:43:12 np0005603609 nova_compute[221550]: 2026-01-31 08:43:12.664 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:43:12 np0005603609 nova_compute[221550]: 2026-01-31 08:43:12.664 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:43:12 np0005603609 nova_compute[221550]: 2026-01-31 08:43:12.679 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:43:12 np0005603609 nova_compute[221550]: 2026-01-31 08:43:12.698 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:43:12 np0005603609 nova_compute[221550]: 2026-01-31 08:43:12.698 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:43:12 np0005603609 nova_compute[221550]: 2026-01-31 08:43:12.757 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:43:12 np0005603609 nova_compute[221550]: 2026-01-31 08:43:12.779 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:43:12 np0005603609 nova_compute[221550]: 2026-01-31 08:43:12.859 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:43:13 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3180732374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:43:13 np0005603609 nova_compute[221550]: 2026-01-31 08:43:13.284 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:13 np0005603609 nova_compute[221550]: 2026-01-31 08:43:13.288 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:43:13 np0005603609 nova_compute[221550]: 2026-01-31 08:43:13.722 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:43:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:13.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:13 np0005603609 nova_compute[221550]: 2026-01-31 08:43:13.990 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:43:13 np0005603609 nova_compute[221550]: 2026-01-31 08:43:13.990 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:14.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:15 np0005603609 nova_compute[221550]: 2026-01-31 08:43:15.081 221554 INFO nova.compute.manager [None req-d3723ad2-c36a-421f-8865-a23c67a9d11d 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Get console output#033[00m
Jan 31 03:43:15 np0005603609 nova_compute[221550]: 2026-01-31 08:43:15.086 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:43:15 np0005603609 nova_compute[221550]: 2026-01-31 08:43:15.620 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:15.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:16 np0005603609 nova_compute[221550]: 2026-01-31 08:43:16.048 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:16 np0005603609 nova_compute[221550]: 2026-01-31 08:43:16.061 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:16 np0005603609 nova_compute[221550]: 2026-01-31 08:43:16.061 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:43:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:16.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:43:16 np0005603609 nova_compute[221550]: 2026-01-31 08:43:16.727 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:17.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:43:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:18.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:43:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:19.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:20.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:21 np0005603609 nova_compute[221550]: 2026-01-31 08:43:21.058 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:21 np0005603609 nova_compute[221550]: 2026-01-31 08:43:21.730 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:21.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:22.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:23 np0005603609 nova_compute[221550]: 2026-01-31 08:43:23.460 221554 DEBUG nova.compute.manager [req-aa8f4435-3832-4bf3-a88a-c9bb11aca5ac req-8991f0ba-e235-47bc-a995-4479ea2853ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received event network-changed-8812315d-5bb3-40d8-860c-f525ab97a518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:23 np0005603609 nova_compute[221550]: 2026-01-31 08:43:23.460 221554 DEBUG nova.compute.manager [req-aa8f4435-3832-4bf3-a88a-c9bb11aca5ac req-8991f0ba-e235-47bc-a995-4479ea2853ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Refreshing instance network info cache due to event network-changed-8812315d-5bb3-40d8-860c-f525ab97a518. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:43:23 np0005603609 nova_compute[221550]: 2026-01-31 08:43:23.460 221554 DEBUG oslo_concurrency.lockutils [req-aa8f4435-3832-4bf3-a88a-c9bb11aca5ac req-8991f0ba-e235-47bc-a995-4479ea2853ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-f297c5b1-d573-46de-95a8-be21389a4763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:43:23 np0005603609 nova_compute[221550]: 2026-01-31 08:43:23.460 221554 DEBUG oslo_concurrency.lockutils [req-aa8f4435-3832-4bf3-a88a-c9bb11aca5ac req-8991f0ba-e235-47bc-a995-4479ea2853ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-f297c5b1-d573-46de-95a8-be21389a4763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:43:23 np0005603609 nova_compute[221550]: 2026-01-31 08:43:23.461 221554 DEBUG nova.network.neutron [req-aa8f4435-3832-4bf3-a88a-c9bb11aca5ac req-8991f0ba-e235-47bc-a995-4479ea2853ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Refreshing network info cache for port 8812315d-5bb3-40d8-860c-f525ab97a518 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:43:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:23.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:43:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:24.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:43:25 np0005603609 nova_compute[221550]: 2026-01-31 08:43:25.135 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:25.134 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:43:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:25.135 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:43:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:25.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:25 np0005603609 nova_compute[221550]: 2026-01-31 08:43:25.993 221554 DEBUG oslo_concurrency.lockutils [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "f297c5b1-d573-46de-95a8-be21389a4763" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:25 np0005603609 nova_compute[221550]: 2026-01-31 08:43:25.994 221554 DEBUG oslo_concurrency.lockutils [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:25 np0005603609 nova_compute[221550]: 2026-01-31 08:43:25.994 221554 DEBUG oslo_concurrency.lockutils [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "f297c5b1-d573-46de-95a8-be21389a4763-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:25 np0005603609 nova_compute[221550]: 2026-01-31 08:43:25.994 221554 DEBUG oslo_concurrency.lockutils [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:25 np0005603609 nova_compute[221550]: 2026-01-31 08:43:25.995 221554 DEBUG oslo_concurrency.lockutils [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:25 np0005603609 nova_compute[221550]: 2026-01-31 08:43:25.996 221554 INFO nova.compute.manager [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Terminating instance#033[00m
Jan 31 03:43:25 np0005603609 nova_compute[221550]: 2026-01-31 08:43:25.997 221554 DEBUG nova.compute.manager [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:43:26 np0005603609 kernel: tap8812315d-5b (unregistering): left promiscuous mode
Jan 31 03:43:26 np0005603609 NetworkManager[49064]: <info>  [1769849006.0556] device (tap8812315d-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:43:26 np0005603609 nova_compute[221550]: 2026-01-31 08:43:26.059 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:26 np0005603609 nova_compute[221550]: 2026-01-31 08:43:26.067 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:43:26Z|00865|binding|INFO|Releasing lport 8812315d-5bb3-40d8-860c-f525ab97a518 from this chassis (sb_readonly=1)
Jan 31 03:43:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:43:26Z|00866|binding|INFO|Removing iface tap8812315d-5b ovn-installed in OVS
Jan 31 03:43:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:43:26Z|00867|if_status|INFO|Dropped 1 log messages in last 328 seconds (most recently, 328 seconds ago) due to excessive rate
Jan 31 03:43:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:43:26Z|00868|if_status|INFO|Not setting lport 8812315d-5bb3-40d8-860c-f525ab97a518 down as sb is readonly
Jan 31 03:43:26 np0005603609 nova_compute[221550]: 2026-01-31 08:43:26.069 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:26 np0005603609 nova_compute[221550]: 2026-01-31 08:43:26.074 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:26 np0005603609 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000bd.scope: Deactivated successfully.
Jan 31 03:43:26 np0005603609 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000bd.scope: Consumed 12.852s CPU time.
Jan 31 03:43:26 np0005603609 systemd-machined[190912]: Machine qemu-103-instance-000000bd terminated.
Jan 31 03:43:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:26.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:26 np0005603609 nova_compute[221550]: 2026-01-31 08:43:26.229 221554 INFO nova.virt.libvirt.driver [-] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Instance destroyed successfully.#033[00m
Jan 31 03:43:26 np0005603609 nova_compute[221550]: 2026-01-31 08:43:26.230 221554 DEBUG nova.objects.instance [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'resources' on Instance uuid f297c5b1-d573-46de-95a8-be21389a4763 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:43:26 np0005603609 nova_compute[221550]: 2026-01-31 08:43:26.733 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:43:26Z|00869|binding|INFO|Setting lport 8812315d-5bb3-40d8-860c-f525ab97a518 down in Southbound
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.062 221554 DEBUG nova.virt.libvirt.vif [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:41:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1237528230',display_name='tempest-TestNetworkAdvancedServerOps-server-1237528230',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1237528230',id=189,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDGaHKTTKAZSqA2FRrln0d7VEN1/jGuE6MmKztq5BUqQ2XrCXf+Zt7tRrbMrY06e567UNxUAUTFJA6tx9or8yM9YlOiWwY7AJirCCVAT5kkloGaLEdR41gyXqezB8qD5Q==',key_name='tempest-TestNetworkAdvancedServerOps-386188591',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:42:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-jcqg6id3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:42:57Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=f297c5b1-d573-46de-95a8-be21389a4763,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.062 221554 DEBUG nova.network.os_vif_util [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.063 221554 DEBUG nova.network.os_vif_util [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:12:c6,bridge_name='br-int',has_traffic_filtering=True,id=8812315d-5bb3-40d8-860c-f525ab97a518,network=Network(bf5b3835-8d63-418b-8338-959fa28d05a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8812315d-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.063 221554 DEBUG os_vif [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:12:c6,bridge_name='br-int',has_traffic_filtering=True,id=8812315d-5bb3-40d8-860c-f525ab97a518,network=Network(bf5b3835-8d63-418b-8338-959fa28d05a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8812315d-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.064 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.065 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8812315d-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.066 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.068 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.070 221554 INFO os_vif [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:12:c6,bridge_name='br-int',has_traffic_filtering=True,id=8812315d-5bb3-40d8-860c-f525ab97a518,network=Network(bf5b3835-8d63-418b-8338-959fa28d05a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8812315d-5b')#033[00m
Jan 31 03:43:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.400 221554 DEBUG nova.network.neutron [req-aa8f4435-3832-4bf3-a88a-c9bb11aca5ac req-8991f0ba-e235-47bc-a995-4479ea2853ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Updated VIF entry in instance network info cache for port 8812315d-5bb3-40d8-860c-f525ab97a518. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.400 221554 DEBUG nova.network.neutron [req-aa8f4435-3832-4bf3-a88a-c9bb11aca5ac req-8991f0ba-e235-47bc-a995-4479ea2853ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Updating instance_info_cache with network_info: [{"id": "8812315d-5bb3-40d8-860c-f525ab97a518", "address": "fa:16:3e:a7:12:c6", "network": {"id": "bf5b3835-8d63-418b-8338-959fa28d05a8", "bridge": "br-int", "label": "tempest-network-smoke--127670691", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8812315d-5b", "ovs_interfaceid": "8812315d-5bb3-40d8-860c-f525ab97a518", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:43:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:27.525 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:12:c6 10.100.0.13'], port_security=['fa:16:3e:a7:12:c6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f297c5b1-d573-46de-95a8-be21389a4763', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf5b3835-8d63-418b-8338-959fa28d05a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a398779e-1480-4eef-8fa0-857756eaddf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=acdcb819-3bf3-4c53-af89-cd871f50a52e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=8812315d-5bb3-40d8-860c-f525ab97a518) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:43:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:27.527 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 8812315d-5bb3-40d8-860c-f525ab97a518 in datapath bf5b3835-8d63-418b-8338-959fa28d05a8 unbound from our chassis#033[00m
Jan 31 03:43:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:27.530 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf5b3835-8d63-418b-8338-959fa28d05a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:43:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:27.531 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d05fce50-f065-4c41-98b4-d6d8c5d7e070]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:27.532 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8 namespace which is not needed anymore#033[00m
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.565 221554 INFO nova.virt.libvirt.driver [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Deleting instance files /var/lib/nova/instances/f297c5b1-d573-46de-95a8-be21389a4763_del#033[00m
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.567 221554 INFO nova.virt.libvirt.driver [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Deletion of /var/lib/nova/instances/f297c5b1-d573-46de-95a8-be21389a4763_del complete#033[00m
Jan 31 03:43:27 np0005603609 neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8[303057]: [NOTICE]   (303061) : haproxy version is 2.8.14-c23fe91
Jan 31 03:43:27 np0005603609 neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8[303057]: [NOTICE]   (303061) : path to executable is /usr/sbin/haproxy
Jan 31 03:43:27 np0005603609 neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8[303057]: [WARNING]  (303061) : Exiting Master process...
Jan 31 03:43:27 np0005603609 neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8[303057]: [WARNING]  (303061) : Exiting Master process...
Jan 31 03:43:27 np0005603609 neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8[303057]: [ALERT]    (303061) : Current worker (303063) exited with code 143 (Terminated)
Jan 31 03:43:27 np0005603609 neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8[303057]: [WARNING]  (303061) : All workers exited. Exiting... (0)
Jan 31 03:43:27 np0005603609 systemd[1]: libpod-e34d66fe6e0e82aab1e21fed3a68e22ee7a0cf76072cda83fabdd4f391af11c7.scope: Deactivated successfully.
Jan 31 03:43:27 np0005603609 podman[303395]: 2026-01-31 08:43:27.77419401 +0000 UTC m=+0.159804610 container died e34d66fe6e0e82aab1e21fed3a68e22ee7a0cf76072cda83fabdd4f391af11c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.831 221554 DEBUG oslo_concurrency.lockutils [req-aa8f4435-3832-4bf3-a88a-c9bb11aca5ac req-8991f0ba-e235-47bc-a995-4479ea2853ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-f297c5b1-d573-46de-95a8-be21389a4763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:43:27 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e34d66fe6e0e82aab1e21fed3a68e22ee7a0cf76072cda83fabdd4f391af11c7-userdata-shm.mount: Deactivated successfully.
Jan 31 03:43:27 np0005603609 systemd[1]: var-lib-containers-storage-overlay-1d081a0a31b553ac9aa69d237bacdac2e88d4e5444b8a03cef85dbcec62e42a1-merged.mount: Deactivated successfully.
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.963 221554 INFO nova.compute.manager [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Took 1.96 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.963 221554 DEBUG oslo.service.loopingcall [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.964 221554 DEBUG nova.compute.manager [-] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:43:27 np0005603609 nova_compute[221550]: 2026-01-31 08:43:27.965 221554 DEBUG nova.network.neutron [-] [instance: f297c5b1-d573-46de-95a8-be21389a4763] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:43:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:27.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:27 np0005603609 podman[303395]: 2026-01-31 08:43:27.990898244 +0000 UTC m=+0.376508814 container cleanup e34d66fe6e0e82aab1e21fed3a68e22ee7a0cf76072cda83fabdd4f391af11c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 03:43:27 np0005603609 systemd[1]: libpod-conmon-e34d66fe6e0e82aab1e21fed3a68e22ee7a0cf76072cda83fabdd4f391af11c7.scope: Deactivated successfully.
Jan 31 03:43:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:28.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:28 np0005603609 podman[303428]: 2026-01-31 08:43:28.235861834 +0000 UTC m=+0.228784358 container remove e34d66fe6e0e82aab1e21fed3a68e22ee7a0cf76072cda83fabdd4f391af11c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:43:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:28.239 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4e0b8aa2-4edf-470f-b611-70866aba8976]: (4, ('Sat Jan 31 08:43:27 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8 (e34d66fe6e0e82aab1e21fed3a68e22ee7a0cf76072cda83fabdd4f391af11c7)\ne34d66fe6e0e82aab1e21fed3a68e22ee7a0cf76072cda83fabdd4f391af11c7\nSat Jan 31 08:43:27 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8 (e34d66fe6e0e82aab1e21fed3a68e22ee7a0cf76072cda83fabdd4f391af11c7)\ne34d66fe6e0e82aab1e21fed3a68e22ee7a0cf76072cda83fabdd4f391af11c7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:28.241 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[33f03f6f-fb51-49de-8835-9ce6b888742b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:28.242 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf5b3835-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:43:28 np0005603609 nova_compute[221550]: 2026-01-31 08:43:28.243 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:28 np0005603609 kernel: tapbf5b3835-80: left promiscuous mode
Jan 31 03:43:28 np0005603609 nova_compute[221550]: 2026-01-31 08:43:28.248 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:28 np0005603609 nova_compute[221550]: 2026-01-31 08:43:28.250 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:28.251 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8925d896-cbe2-4f7b-af91-462cd2cd97db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:28.269 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[affd96af-4564-4077-a874-70799ac1f380]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:28.271 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[289dfad3-0659-4140-9d5d-5ec27f7b39c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:28.284 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e7e315-aed6-4a77-8210-4f70c7a20a9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 921607, 'reachable_time': 26954, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303444, 'error': None, 'target': 'ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:28 np0005603609 systemd[1]: run-netns-ovnmeta\x2dbf5b3835\x2d8d63\x2d418b\x2d8338\x2d959fa28d05a8.mount: Deactivated successfully.
Jan 31 03:43:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:28.288 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bf5b3835-8d63-418b-8338-959fa28d05a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:43:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:28.288 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[acbdb3e5-cfb8-44a4-9070-488a64556166]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:29 np0005603609 nova_compute[221550]: 2026-01-31 08:43:29.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:29 np0005603609 nova_compute[221550]: 2026-01-31 08:43:29.960 221554 DEBUG nova.compute.manager [req-2805ecf1-9126-4f1f-8939-6ca53bbd1c15 req-8698047d-bc7a-48b0-b688-359f7c732eb1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received event network-vif-unplugged-8812315d-5bb3-40d8-860c-f525ab97a518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:29 np0005603609 nova_compute[221550]: 2026-01-31 08:43:29.960 221554 DEBUG oslo_concurrency.lockutils [req-2805ecf1-9126-4f1f-8939-6ca53bbd1c15 req-8698047d-bc7a-48b0-b688-359f7c732eb1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f297c5b1-d573-46de-95a8-be21389a4763-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:29 np0005603609 nova_compute[221550]: 2026-01-31 08:43:29.960 221554 DEBUG oslo_concurrency.lockutils [req-2805ecf1-9126-4f1f-8939-6ca53bbd1c15 req-8698047d-bc7a-48b0-b688-359f7c732eb1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:29 np0005603609 nova_compute[221550]: 2026-01-31 08:43:29.961 221554 DEBUG oslo_concurrency.lockutils [req-2805ecf1-9126-4f1f-8939-6ca53bbd1c15 req-8698047d-bc7a-48b0-b688-359f7c732eb1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:29 np0005603609 nova_compute[221550]: 2026-01-31 08:43:29.961 221554 DEBUG nova.compute.manager [req-2805ecf1-9126-4f1f-8939-6ca53bbd1c15 req-8698047d-bc7a-48b0-b688-359f7c732eb1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] No waiting events found dispatching network-vif-unplugged-8812315d-5bb3-40d8-860c-f525ab97a518 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:43:29 np0005603609 nova_compute[221550]: 2026-01-31 08:43:29.961 221554 DEBUG nova.compute.manager [req-2805ecf1-9126-4f1f-8939-6ca53bbd1c15 req-8698047d-bc7a-48b0-b688-359f7c732eb1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received event network-vif-unplugged-8812315d-5bb3-40d8-860c-f525ab97a518 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:43:29 np0005603609 nova_compute[221550]: 2026-01-31 08:43:29.961 221554 DEBUG nova.compute.manager [req-2805ecf1-9126-4f1f-8939-6ca53bbd1c15 req-8698047d-bc7a-48b0-b688-359f7c732eb1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received event network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:29 np0005603609 nova_compute[221550]: 2026-01-31 08:43:29.961 221554 DEBUG oslo_concurrency.lockutils [req-2805ecf1-9126-4f1f-8939-6ca53bbd1c15 req-8698047d-bc7a-48b0-b688-359f7c732eb1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f297c5b1-d573-46de-95a8-be21389a4763-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:29 np0005603609 nova_compute[221550]: 2026-01-31 08:43:29.961 221554 DEBUG oslo_concurrency.lockutils [req-2805ecf1-9126-4f1f-8939-6ca53bbd1c15 req-8698047d-bc7a-48b0-b688-359f7c732eb1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:29 np0005603609 nova_compute[221550]: 2026-01-31 08:43:29.962 221554 DEBUG oslo_concurrency.lockutils [req-2805ecf1-9126-4f1f-8939-6ca53bbd1c15 req-8698047d-bc7a-48b0-b688-359f7c732eb1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:29 np0005603609 nova_compute[221550]: 2026-01-31 08:43:29.962 221554 DEBUG nova.compute.manager [req-2805ecf1-9126-4f1f-8939-6ca53bbd1c15 req-8698047d-bc7a-48b0-b688-359f7c732eb1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] No waiting events found dispatching network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:43:29 np0005603609 nova_compute[221550]: 2026-01-31 08:43:29.962 221554 WARNING nova.compute.manager [req-2805ecf1-9126-4f1f-8939-6ca53bbd1c15 req-8698047d-bc7a-48b0-b688-359f7c732eb1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received unexpected event network-vif-plugged-8812315d-5bb3-40d8-860c-f525ab97a518 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:43:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:43:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:29.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:43:30 np0005603609 nova_compute[221550]: 2026-01-31 08:43:30.031 221554 DEBUG nova.network.neutron [-] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:43:30 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:30.140 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:43:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:43:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:30.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:43:30 np0005603609 nova_compute[221550]: 2026-01-31 08:43:30.254 221554 INFO nova.compute.manager [-] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Took 2.29 seconds to deallocate network for instance.#033[00m
Jan 31 03:43:30 np0005603609 nova_compute[221550]: 2026-01-31 08:43:30.526 221554 DEBUG oslo_concurrency.lockutils [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:30 np0005603609 nova_compute[221550]: 2026-01-31 08:43:30.527 221554 DEBUG oslo_concurrency.lockutils [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:30 np0005603609 nova_compute[221550]: 2026-01-31 08:43:30.611 221554 DEBUG oslo_concurrency.processutils [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:43:31 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1048975873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:43:31 np0005603609 nova_compute[221550]: 2026-01-31 08:43:31.039 221554 DEBUG oslo_concurrency.processutils [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:31 np0005603609 nova_compute[221550]: 2026-01-31 08:43:31.043 221554 DEBUG nova.compute.provider_tree [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:43:31 np0005603609 nova_compute[221550]: 2026-01-31 08:43:31.090 221554 DEBUG nova.scheduler.client.report [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:43:31 np0005603609 nova_compute[221550]: 2026-01-31 08:43:31.399 221554 DEBUG oslo_concurrency.lockutils [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:31 np0005603609 nova_compute[221550]: 2026-01-31 08:43:31.736 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:31 np0005603609 nova_compute[221550]: 2026-01-31 08:43:31.746 221554 INFO nova.scheduler.client.report [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Deleted allocations for instance f297c5b1-d573-46de-95a8-be21389a4763#033[00m
Jan 31 03:43:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:31.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:32 np0005603609 nova_compute[221550]: 2026-01-31 08:43:32.067 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:32.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:32 np0005603609 nova_compute[221550]: 2026-01-31 08:43:32.402 221554 DEBUG nova.compute.manager [req-0fe54fbc-f31c-4f61-8ade-b61e58715dc7 req-8a8ccdac-7399-47b6-9ae9-1907a993386b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Received event network-vif-deleted-8812315d-5bb3-40d8-860c-f525ab97a518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:32 np0005603609 nova_compute[221550]: 2026-01-31 08:43:32.543 221554 DEBUG oslo_concurrency.lockutils [None req-802b9a17-9a01-4a1d-8826-0629875b1d77 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "f297c5b1-d573-46de-95a8-be21389a4763" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.109 221554 DEBUG oslo_concurrency.lockutils [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "233d3314-7d9d-49a5-818f-909d78422fb9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.109 221554 DEBUG oslo_concurrency.lockutils [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.110 221554 DEBUG oslo_concurrency.lockutils [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.110 221554 DEBUG oslo_concurrency.lockutils [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.110 221554 DEBUG oslo_concurrency.lockutils [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.111 221554 INFO nova.compute.manager [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Terminating instance#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.113 221554 DEBUG nova.compute.manager [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:43:33 np0005603609 kernel: tap93519b56-e1 (unregistering): left promiscuous mode
Jan 31 03:43:33 np0005603609 NetworkManager[49064]: <info>  [1769849013.1822] device (tap93519b56-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:43:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:43:33Z|00870|binding|INFO|Releasing lport 93519b56-e167-470c-9933-432512222982 from this chassis (sb_readonly=0)
Jan 31 03:43:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:43:33Z|00871|binding|INFO|Setting lport 93519b56-e167-470c-9933-432512222982 down in Southbound
Jan 31 03:43:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:43:33Z|00872|binding|INFO|Removing iface tap93519b56-e1 ovn-installed in OVS
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.192 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.201 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:33.229 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:44:32 10.100.0.4'], port_security=['fa:16:3e:20:44:32 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '233d3314-7d9d-49a5-818f-909d78422fb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-090376c2-ac34-46f0-acd4-344bb2bc1154', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b38141686534a0fb9b947a7886cd4b6', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f827482b-eee1-43ff-a797-1ec84e5a6d1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05d66413-db50-49eb-973e-490542297b8d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=93519b56-e167-470c-9933-432512222982) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:43:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:33.231 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 93519b56-e167-470c-9933-432512222982 in datapath 090376c2-ac34-46f0-acd4-344bb2bc1154 unbound from our chassis#033[00m
Jan 31 03:43:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:33.232 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 090376c2-ac34-46f0-acd4-344bb2bc1154, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:43:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:33.233 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3d830f9c-9f54-4b4c-b34c-cf3fd2326ee8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:33.233 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154 namespace which is not needed anymore#033[00m
Jan 31 03:43:33 np0005603609 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000b7.scope: Deactivated successfully.
Jan 31 03:43:33 np0005603609 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000b7.scope: Consumed 24.654s CPU time.
Jan 31 03:43:33 np0005603609 systemd-machined[190912]: Machine qemu-100-instance-000000b7 terminated.
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.329 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:33 np0005603609 neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154[300411]: [NOTICE]   (300416) : haproxy version is 2.8.14-c23fe91
Jan 31 03:43:33 np0005603609 neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154[300411]: [NOTICE]   (300416) : path to executable is /usr/sbin/haproxy
Jan 31 03:43:33 np0005603609 neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154[300411]: [WARNING]  (300416) : Exiting Master process...
Jan 31 03:43:33 np0005603609 neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154[300411]: [WARNING]  (300416) : Exiting Master process...
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.334 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:33 np0005603609 neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154[300411]: [ALERT]    (300416) : Current worker (300418) exited with code 143 (Terminated)
Jan 31 03:43:33 np0005603609 neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154[300411]: [WARNING]  (300416) : All workers exited. Exiting... (0)
Jan 31 03:43:33 np0005603609 systemd[1]: libpod-9572a869c6724f21f3c7788c6336c2f3ae3a1c78dd3fef4a8e2e576e4102a0ab.scope: Deactivated successfully.
Jan 31 03:43:33 np0005603609 podman[303496]: 2026-01-31 08:43:33.344209711 +0000 UTC m=+0.039371049 container died 9572a869c6724f21f3c7788c6336c2f3ae3a1c78dd3fef4a8e2e576e4102a0ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.343 221554 INFO nova.virt.libvirt.driver [-] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Instance destroyed successfully.#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.344 221554 DEBUG nova.objects.instance [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'resources' on Instance uuid 233d3314-7d9d-49a5-818f-909d78422fb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:43:33 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9572a869c6724f21f3c7788c6336c2f3ae3a1c78dd3fef4a8e2e576e4102a0ab-userdata-shm.mount: Deactivated successfully.
Jan 31 03:43:33 np0005603609 systemd[1]: var-lib-containers-storage-overlay-95f64b7e9257aa18e04f38daca0a0b145ac04fa5460dc62ad1c9599649223a55-merged.mount: Deactivated successfully.
Jan 31 03:43:33 np0005603609 podman[303496]: 2026-01-31 08:43:33.373146126 +0000 UTC m=+0.068307454 container cleanup 9572a869c6724f21f3c7788c6336c2f3ae3a1c78dd3fef4a8e2e576e4102a0ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 31 03:43:33 np0005603609 systemd[1]: libpod-conmon-9572a869c6724f21f3c7788c6336c2f3ae3a1c78dd3fef4a8e2e576e4102a0ab.scope: Deactivated successfully.
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.387 221554 DEBUG nova.virt.libvirt.vif [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:38:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1789084214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1789084214',id=183,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzg6Lk8OuDDWEBQQtNVGTD92uncKX4uuGYvXITCu78FVc0dCeMJjMpvMnamF80j6P2vfKzi9siS1JCEwYFhLgZ6vk2tD+oJq2pafl3D7QkbaZkrlvSItHgJLM4cymh3Sg==',key_name='tempest-TestInstancesWithCinderVolumes-232350541',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:38:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b38141686534a0fb9b947a7886cd4b6',ramdisk_id='',reservation_id='r-grhkzysi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet
',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-791993230',owner_user_name='tempest-TestInstancesWithCinderVolumes-791993230-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:38:56Z,user_data=None,user_id='cfc8a271e75e4a92b16ee6b5da9cfc9f',uuid=233d3314-7d9d-49a5-818f-909d78422fb9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.387 221554 DEBUG nova.network.os_vif_util [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converting VIF {"id": "93519b56-e167-470c-9933-432512222982", "address": "fa:16:3e:20:44:32", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93519b56-e1", "ovs_interfaceid": "93519b56-e167-470c-9933-432512222982", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.388 221554 DEBUG nova.network.os_vif_util [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:20:44:32,bridge_name='br-int',has_traffic_filtering=True,id=93519b56-e167-470c-9933-432512222982,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93519b56-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.388 221554 DEBUG os_vif [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:44:32,bridge_name='br-int',has_traffic_filtering=True,id=93519b56-e167-470c-9933-432512222982,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93519b56-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.390 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.390 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93519b56-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.391 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.394 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.396 221554 INFO os_vif [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:44:32,bridge_name='br-int',has_traffic_filtering=True,id=93519b56-e167-470c-9933-432512222982,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93519b56-e1')#033[00m
Jan 31 03:43:33 np0005603609 podman[303537]: 2026-01-31 08:43:33.424174507 +0000 UTC m=+0.035895275 container remove 9572a869c6724f21f3c7788c6336c2f3ae3a1c78dd3fef4a8e2e576e4102a0ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 03:43:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:33.428 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6cfce3d3-3c7e-40d1-933b-d67f8c1fc38c]: (4, ('Sat Jan 31 08:43:33 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154 (9572a869c6724f21f3c7788c6336c2f3ae3a1c78dd3fef4a8e2e576e4102a0ab)\n9572a869c6724f21f3c7788c6336c2f3ae3a1c78dd3fef4a8e2e576e4102a0ab\nSat Jan 31 08:43:33 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154 (9572a869c6724f21f3c7788c6336c2f3ae3a1c78dd3fef4a8e2e576e4102a0ab)\n9572a869c6724f21f3c7788c6336c2f3ae3a1c78dd3fef4a8e2e576e4102a0ab\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:33.429 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[97c78406-13e5-4122-8f5f-2c3c2e51a306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:33.430 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap090376c2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:43:33 np0005603609 kernel: tap090376c2-a0: left promiscuous mode
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.433 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.438 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:33.440 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[00ea036e-413d-45be-bdef-1830a133a3cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:33.451 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2552f6-8f97-4adf-b0c9-13c58af532a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:33.452 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[da2dbadb-6363-4b87-8da8-ff81eb6d605b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:33.462 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f990b4ff-fb73-4419-82d3-dbc4f79eab15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 896649, 'reachable_time': 26843, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303570, 'error': None, 'target': 'ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:33.464 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:43:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:43:33.464 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[5506da42-c8bf-4e1a-bee4-101a1a76ba7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:33 np0005603609 systemd[1]: run-netns-ovnmeta\x2d090376c2\x2dac34\x2d46f0\x2dacd4\x2d344bb2bc1154.mount: Deactivated successfully.
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.620 221554 INFO nova.virt.libvirt.driver [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Deleting instance files /var/lib/nova/instances/233d3314-7d9d-49a5-818f-909d78422fb9_del#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.620 221554 INFO nova.virt.libvirt.driver [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Deletion of /var/lib/nova/instances/233d3314-7d9d-49a5-818f-909d78422fb9_del complete#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.822 221554 INFO nova.compute.manager [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.822 221554 DEBUG oslo.service.loopingcall [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.823 221554 DEBUG nova.compute.manager [-] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:43:33 np0005603609 nova_compute[221550]: 2026-01-31 08:43:33.823 221554 DEBUG nova.network.neutron [-] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:43:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:43:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:33.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:43:34 np0005603609 nova_compute[221550]: 2026-01-31 08:43:34.083 221554 DEBUG nova.compute.manager [req-f6f9020f-910e-4086-9fec-6ad5b492b9ca req-3af16612-8b41-49c5-9f87-5b002f7e2766 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Received event network-vif-unplugged-93519b56-e167-470c-9933-432512222982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:34 np0005603609 nova_compute[221550]: 2026-01-31 08:43:34.084 221554 DEBUG oslo_concurrency.lockutils [req-f6f9020f-910e-4086-9fec-6ad5b492b9ca req-3af16612-8b41-49c5-9f87-5b002f7e2766 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:34 np0005603609 nova_compute[221550]: 2026-01-31 08:43:34.084 221554 DEBUG oslo_concurrency.lockutils [req-f6f9020f-910e-4086-9fec-6ad5b492b9ca req-3af16612-8b41-49c5-9f87-5b002f7e2766 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:34 np0005603609 nova_compute[221550]: 2026-01-31 08:43:34.085 221554 DEBUG oslo_concurrency.lockutils [req-f6f9020f-910e-4086-9fec-6ad5b492b9ca req-3af16612-8b41-49c5-9f87-5b002f7e2766 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:34 np0005603609 nova_compute[221550]: 2026-01-31 08:43:34.085 221554 DEBUG nova.compute.manager [req-f6f9020f-910e-4086-9fec-6ad5b492b9ca req-3af16612-8b41-49c5-9f87-5b002f7e2766 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] No waiting events found dispatching network-vif-unplugged-93519b56-e167-470c-9933-432512222982 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:43:34 np0005603609 nova_compute[221550]: 2026-01-31 08:43:34.085 221554 DEBUG nova.compute.manager [req-f6f9020f-910e-4086-9fec-6ad5b492b9ca req-3af16612-8b41-49c5-9f87-5b002f7e2766 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Received event network-vif-unplugged-93519b56-e167-470c-9933-432512222982 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:43:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:34.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:35 np0005603609 podman[303573]: 2026-01-31 08:43:35.235236125 +0000 UTC m=+0.105707838 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 31 03:43:35 np0005603609 podman[303572]: 2026-01-31 08:43:35.279715695 +0000 UTC m=+0.159216747 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:43:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:35.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:36.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:36 np0005603609 nova_compute[221550]: 2026-01-31 08:43:36.741 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:37 np0005603609 nova_compute[221550]: 2026-01-31 08:43:37.806 221554 DEBUG nova.compute.manager [req-9b473614-8434-4262-8e1d-33156efdcfed req-dbf8acbb-594f-48f9-aa2a-5b5aff73f51f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Received event network-vif-plugged-93519b56-e167-470c-9933-432512222982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:37 np0005603609 nova_compute[221550]: 2026-01-31 08:43:37.807 221554 DEBUG oslo_concurrency.lockutils [req-9b473614-8434-4262-8e1d-33156efdcfed req-dbf8acbb-594f-48f9-aa2a-5b5aff73f51f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:37 np0005603609 nova_compute[221550]: 2026-01-31 08:43:37.808 221554 DEBUG oslo_concurrency.lockutils [req-9b473614-8434-4262-8e1d-33156efdcfed req-dbf8acbb-594f-48f9-aa2a-5b5aff73f51f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:37 np0005603609 nova_compute[221550]: 2026-01-31 08:43:37.808 221554 DEBUG oslo_concurrency.lockutils [req-9b473614-8434-4262-8e1d-33156efdcfed req-dbf8acbb-594f-48f9-aa2a-5b5aff73f51f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:37 np0005603609 nova_compute[221550]: 2026-01-31 08:43:37.808 221554 DEBUG nova.compute.manager [req-9b473614-8434-4262-8e1d-33156efdcfed req-dbf8acbb-594f-48f9-aa2a-5b5aff73f51f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] No waiting events found dispatching network-vif-plugged-93519b56-e167-470c-9933-432512222982 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:43:37 np0005603609 nova_compute[221550]: 2026-01-31 08:43:37.809 221554 WARNING nova.compute.manager [req-9b473614-8434-4262-8e1d-33156efdcfed req-dbf8acbb-594f-48f9-aa2a-5b5aff73f51f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Received unexpected event network-vif-plugged-93519b56-e167-470c-9933-432512222982 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:43:37 np0005603609 nova_compute[221550]: 2026-01-31 08:43:37.857 221554 DEBUG nova.compute.manager [req-308c3eb1-8c03-40a2-88b9-41d1255f28f6 req-b2bb67d2-5858-4b09-b93a-a9ebf83ff333 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Received event network-vif-deleted-93519b56-e167-470c-9933-432512222982 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:37 np0005603609 nova_compute[221550]: 2026-01-31 08:43:37.858 221554 INFO nova.compute.manager [req-308c3eb1-8c03-40a2-88b9-41d1255f28f6 req-b2bb67d2-5858-4b09-b93a-a9ebf83ff333 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Neutron deleted interface 93519b56-e167-470c-9933-432512222982; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:43:37 np0005603609 nova_compute[221550]: 2026-01-31 08:43:37.858 221554 DEBUG nova.network.neutron [req-308c3eb1-8c03-40a2-88b9-41d1255f28f6 req-b2bb67d2-5858-4b09-b93a-a9ebf83ff333 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:43:37 np0005603609 nova_compute[221550]: 2026-01-31 08:43:37.861 221554 DEBUG nova.network.neutron [-] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:43:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:37.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:38 np0005603609 nova_compute[221550]: 2026-01-31 08:43:38.099 221554 DEBUG nova.compute.manager [req-308c3eb1-8c03-40a2-88b9-41d1255f28f6 req-b2bb67d2-5858-4b09-b93a-a9ebf83ff333 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Detach interface failed, port_id=93519b56-e167-470c-9933-432512222982, reason: Instance 233d3314-7d9d-49a5-818f-909d78422fb9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:43:38 np0005603609 nova_compute[221550]: 2026-01-31 08:43:38.170 221554 INFO nova.compute.manager [-] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Took 4.35 seconds to deallocate network for instance.#033[00m
Jan 31 03:43:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:43:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:38.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:43:38 np0005603609 nova_compute[221550]: 2026-01-31 08:43:38.392 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:38 np0005603609 nova_compute[221550]: 2026-01-31 08:43:38.666 221554 INFO nova.compute.manager [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Took 0.50 seconds to detach 1 volumes for instance.#033[00m
Jan 31 03:43:39 np0005603609 nova_compute[221550]: 2026-01-31 08:43:39.156 221554 DEBUG oslo_concurrency.lockutils [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:39 np0005603609 nova_compute[221550]: 2026-01-31 08:43:39.157 221554 DEBUG oslo_concurrency.lockutils [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:39 np0005603609 nova_compute[221550]: 2026-01-31 08:43:39.215 221554 DEBUG oslo_concurrency.processutils [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:43:39 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2520344341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:43:39 np0005603609 nova_compute[221550]: 2026-01-31 08:43:39.653 221554 DEBUG oslo_concurrency.processutils [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:39 np0005603609 nova_compute[221550]: 2026-01-31 08:43:39.660 221554 DEBUG nova.compute.provider_tree [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:43:39 np0005603609 nova_compute[221550]: 2026-01-31 08:43:39.807 221554 DEBUG nova.scheduler.client.report [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:43:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:43:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:39.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:43:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:40.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:40 np0005603609 nova_compute[221550]: 2026-01-31 08:43:40.598 221554 DEBUG oslo_concurrency.lockutils [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:40 np0005603609 nova_compute[221550]: 2026-01-31 08:43:40.904 221554 INFO nova.scheduler.client.report [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Deleted allocations for instance 233d3314-7d9d-49a5-818f-909d78422fb9#033[00m
Jan 31 03:43:41 np0005603609 nova_compute[221550]: 2026-01-31 08:43:41.228 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849006.2267897, f297c5b1-d573-46de-95a8-be21389a4763 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:43:41 np0005603609 nova_compute[221550]: 2026-01-31 08:43:41.229 221554 INFO nova.compute.manager [-] [instance: f297c5b1-d573-46de-95a8-be21389a4763] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:43:41 np0005603609 nova_compute[221550]: 2026-01-31 08:43:41.582 221554 DEBUG nova.compute.manager [None req-32c77d26-ac95-44c7-9723-fe72fe6cf8e1 - - - - - -] [instance: f297c5b1-d573-46de-95a8-be21389a4763] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:43:41 np0005603609 nova_compute[221550]: 2026-01-31 08:43:41.741 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:41.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:42.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:42 np0005603609 nova_compute[221550]: 2026-01-31 08:43:42.792 221554 DEBUG oslo_concurrency.lockutils [None req-8c18cbba-070c-4cd3-8f04-5a25c54bcad3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "233d3314-7d9d-49a5-818f-909d78422fb9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:43 np0005603609 nova_compute[221550]: 2026-01-31 08:43:43.394 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:43.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:44.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:45 np0005603609 nova_compute[221550]: 2026-01-31 08:43:45.435 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:45 np0005603609 nova_compute[221550]: 2026-01-31 08:43:45.530 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:45.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:46.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:46 np0005603609 nova_compute[221550]: 2026-01-31 08:43:46.743 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:43:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:47.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:43:48 np0005603609 nova_compute[221550]: 2026-01-31 08:43:48.342 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849013.3416603, 233d3314-7d9d-49a5-818f-909d78422fb9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:43:48 np0005603609 nova_compute[221550]: 2026-01-31 08:43:48.342 221554 INFO nova.compute.manager [-] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:43:48 np0005603609 nova_compute[221550]: 2026-01-31 08:43:48.394 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:48 np0005603609 nova_compute[221550]: 2026-01-31 08:43:48.479 221554 DEBUG nova.compute.manager [None req-ab32cb42-273b-4705-80c8-d55425252462 - - - - - -] [instance: 233d3314-7d9d-49a5-818f-909d78422fb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:43:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:48.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:43:48 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1500088091' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:43:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:43:48 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1500088091' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:43:48 np0005603609 nova_compute[221550]: 2026-01-31 08:43:48.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:43:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:49.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:43:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:50.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:51 np0005603609 nova_compute[221550]: 2026-01-31 08:43:51.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:51 np0005603609 nova_compute[221550]: 2026-01-31 08:43:51.745 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:52.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:52.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:53 np0005603609 nova_compute[221550]: 2026-01-31 08:43:53.396 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:53 np0005603609 nova_compute[221550]: 2026-01-31 08:43:53.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:54.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:54.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:56.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:43:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:56.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:43:56 np0005603609 nova_compute[221550]: 2026-01-31 08:43:56.747 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:57 np0005603609 nova_compute[221550]: 2026-01-31 08:43:57.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:57 np0005603609 nova_compute[221550]: 2026-01-31 08:43:57.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:43:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e383 e383: 3 total, 3 up, 3 in
Jan 31 03:43:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:43:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:58.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:43:58 np0005603609 nova_compute[221550]: 2026-01-31 08:43:58.397 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:43:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:43:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:58.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:44:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:44:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:00.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:44:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:44:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:00.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:44:00 np0005603609 nova_compute[221550]: 2026-01-31 08:44:00.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:00 np0005603609 nova_compute[221550]: 2026-01-31 08:44:00.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:44:00 np0005603609 nova_compute[221550]: 2026-01-31 08:44:00.941 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:44:01 np0005603609 nova_compute[221550]: 2026-01-31 08:44:01.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:01 np0005603609 nova_compute[221550]: 2026-01-31 08:44:01.750 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:01 np0005603609 nova_compute[221550]: 2026-01-31 08:44:01.912 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:01 np0005603609 nova_compute[221550]: 2026-01-31 08:44:01.912 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:01 np0005603609 nova_compute[221550]: 2026-01-31 08:44:01.912 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:44:01 np0005603609 nova_compute[221550]: 2026-01-31 08:44:01.912 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:44:01 np0005603609 nova_compute[221550]: 2026-01-31 08:44:01.912 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:44:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:44:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:02.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:44:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:44:02 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/783645137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:44:02 np0005603609 nova_compute[221550]: 2026-01-31 08:44:02.407 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:44:02 np0005603609 nova_compute[221550]: 2026-01-31 08:44:02.566 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:44:02 np0005603609 nova_compute[221550]: 2026-01-31 08:44:02.568 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4243MB free_disk=20.967517852783203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:44:02 np0005603609 nova_compute[221550]: 2026-01-31 08:44:02.568 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:02 np0005603609 nova_compute[221550]: 2026-01-31 08:44:02.568 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:02.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:03 np0005603609 nova_compute[221550]: 2026-01-31 08:44:03.397 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:03 np0005603609 nova_compute[221550]: 2026-01-31 08:44:03.729 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:44:03 np0005603609 nova_compute[221550]: 2026-01-31 08:44:03.730 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:44:03 np0005603609 nova_compute[221550]: 2026-01-31 08:44:03.831 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:44:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:04.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 e384: 3 total, 3 up, 3 in
Jan 31 03:44:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:44:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3149711549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:44:04 np0005603609 nova_compute[221550]: 2026-01-31 08:44:04.285 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:44:04 np0005603609 nova_compute[221550]: 2026-01-31 08:44:04.290 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:44:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:04.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:04 np0005603609 nova_compute[221550]: 2026-01-31 08:44:04.767 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:44:05.246520) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045246568, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2396, "num_deletes": 253, "total_data_size": 5783552, "memory_usage": 5860320, "flush_reason": "Manual Compaction"}
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045289467, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 3781573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77096, "largest_seqno": 79487, "table_properties": {"data_size": 3771879, "index_size": 6123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20388, "raw_average_key_size": 20, "raw_value_size": 3752409, "raw_average_value_size": 3801, "num_data_blocks": 266, "num_entries": 987, "num_filter_entries": 987, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848834, "oldest_key_time": 1769848834, "file_creation_time": 1769849045, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 42992 microseconds, and 6217 cpu microseconds.
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:44:05.289514) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 3781573 bytes OK
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:44:05.289533) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:44:05.291232) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:44:05.291246) EVENT_LOG_v1 {"time_micros": 1769849045291242, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:44:05.291264) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 5773091, prev total WAL file size 5773091, number of live WAL files 2.
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:44:05.292311) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(3692KB)], [159(9971KB)]
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045292379, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 13991925, "oldest_snapshot_seqno": -1}
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 10033 keys, 12040081 bytes, temperature: kUnknown
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045383681, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 12040081, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11976881, "index_size": 37039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25093, "raw_key_size": 264869, "raw_average_key_size": 26, "raw_value_size": 11802758, "raw_average_value_size": 1176, "num_data_blocks": 1405, "num_entries": 10033, "num_filter_entries": 10033, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769849045, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:44:05.383916) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 12040081 bytes
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:44:05.386981) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.1 rd, 131.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.7 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 10561, records dropped: 528 output_compression: NoCompression
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:44:05.386996) EVENT_LOG_v1 {"time_micros": 1769849045386989, "job": 102, "event": "compaction_finished", "compaction_time_micros": 91369, "compaction_time_cpu_micros": 21557, "output_level": 6, "num_output_files": 1, "total_output_size": 12040081, "num_input_records": 10561, "num_output_records": 10033, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045387411, "job": 102, "event": "table_file_deletion", "file_number": 161}
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045388307, "job": 102, "event": "table_file_deletion", "file_number": 159}
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:44:05.292186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:44:05.388334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:44:05.388338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:44:05.388340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:44:05.388342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:44:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:44:05.388344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:44:05 np0005603609 nova_compute[221550]: 2026-01-31 08:44:05.488 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:44:05 np0005603609 nova_compute[221550]: 2026-01-31 08:44:05.489 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:44:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:06.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:06 np0005603609 podman[303689]: 2026-01-31 08:44:06.178752693 +0000 UTC m=+0.053852162 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:44:06 np0005603609 podman[303688]: 2026-01-31 08:44:06.205232967 +0000 UTC m=+0.083135184 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 03:44:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:44:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:06.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:44:06 np0005603609 nova_compute[221550]: 2026-01-31 08:44:06.753 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:44:07.536 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:44:07.537 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:44:07.537 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:44:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:44:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:08.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:44:08 np0005603609 nova_compute[221550]: 2026-01-31 08:44:08.398 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:08.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:10.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:44:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:44:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:44:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:10.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:11 np0005603609 nova_compute[221550]: 2026-01-31 08:44:11.485 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:11 np0005603609 nova_compute[221550]: 2026-01-31 08:44:11.485 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:11 np0005603609 nova_compute[221550]: 2026-01-31 08:44:11.769 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:44:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:12.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:44:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:44:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:12.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:44:13 np0005603609 nova_compute[221550]: 2026-01-31 08:44:13.400 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:14.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:14.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:14 np0005603609 nova_compute[221550]: 2026-01-31 08:44:14.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:16.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:44:16.202 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:44:16 np0005603609 nova_compute[221550]: 2026-01-31 08:44:16.202 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:44:16.203 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:44:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:44:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:16.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:44:16 np0005603609 nova_compute[221550]: 2026-01-31 08:44:16.772 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:44:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:18.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:44:18 np0005603609 nova_compute[221550]: 2026-01-31 08:44:18.402 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:18.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:44:19.206 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:44:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:44:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:44:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:20.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:44:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:20.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:44:21 np0005603609 nova_compute[221550]: 2026-01-31 08:44:21.775 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:22.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:22.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:23 np0005603609 nova_compute[221550]: 2026-01-31 08:44:23.407 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:24.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:24.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:26.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:26.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:26 np0005603609 nova_compute[221550]: 2026-01-31 08:44:26.778 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:44:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:28.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:44:28 np0005603609 nova_compute[221550]: 2026-01-31 08:44:28.411 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:28.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:30.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:30.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:31 np0005603609 nova_compute[221550]: 2026-01-31 08:44:31.780 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:44:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:32.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:44:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:44:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:32.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:44:33 np0005603609 nova_compute[221550]: 2026-01-31 08:44:33.414 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:34.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:34.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:36.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:44:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:36.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:44:36 np0005603609 nova_compute[221550]: 2026-01-31 08:44:36.783 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:37 np0005603609 podman[303912]: 2026-01-31 08:44:37.219051517 +0000 UTC m=+0.090462502 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:44:37 np0005603609 podman[303911]: 2026-01-31 08:44:37.255018123 +0000 UTC m=+0.124640925 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:44:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:44:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:38.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:44:38 np0005603609 nova_compute[221550]: 2026-01-31 08:44:38.418 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:38.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:44:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:40.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:44:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:44:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:40.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:44:41 np0005603609 nova_compute[221550]: 2026-01-31 08:44:41.784 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:44:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:42.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:44:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:44:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:42.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:44:43 np0005603609 nova_compute[221550]: 2026-01-31 08:44:43.422 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:44.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:44.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:44:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:46.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:44:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:46.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:46 np0005603609 nova_compute[221550]: 2026-01-31 08:44:46.789 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:47 np0005603609 nova_compute[221550]: 2026-01-31 08:44:47.817 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "0a25731c-0886-42e0-ac2e-21d9078873a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:47 np0005603609 nova_compute[221550]: 2026-01-31 08:44:47.818 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:44:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:48.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:44:48 np0005603609 nova_compute[221550]: 2026-01-31 08:44:48.426 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:48 np0005603609 nova_compute[221550]: 2026-01-31 08:44:48.526 221554 DEBUG nova.compute.manager [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:44:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:48.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:44:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:50.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:44:50 np0005603609 nova_compute[221550]: 2026-01-31 08:44:50.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:50.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:51 np0005603609 nova_compute[221550]: 2026-01-31 08:44:51.791 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:51 np0005603609 nova_compute[221550]: 2026-01-31 08:44:51.792 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:51 np0005603609 nova_compute[221550]: 2026-01-31 08:44:51.794 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:51 np0005603609 nova_compute[221550]: 2026-01-31 08:44:51.810 221554 DEBUG nova.virt.hardware [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:44:51 np0005603609 nova_compute[221550]: 2026-01-31 08:44:51.810 221554 INFO nova.compute.claims [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:44:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:44:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:52.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:44:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:44:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:52.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:44:53 np0005603609 nova_compute[221550]: 2026-01-31 08:44:53.054 221554 DEBUG oslo_concurrency.processutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:44:53 np0005603609 nova_compute[221550]: 2026-01-31 08:44:53.430 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:44:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3662965045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:44:53 np0005603609 nova_compute[221550]: 2026-01-31 08:44:53.484 221554 DEBUG oslo_concurrency.processutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:44:53 np0005603609 nova_compute[221550]: 2026-01-31 08:44:53.490 221554 DEBUG nova.compute.provider_tree [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:44:53 np0005603609 nova_compute[221550]: 2026-01-31 08:44:53.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:53 np0005603609 nova_compute[221550]: 2026-01-31 08:44:53.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:53 np0005603609 nova_compute[221550]: 2026-01-31 08:44:53.790 221554 DEBUG nova.scheduler.client.report [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:44:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:44:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:54.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:44:54 np0005603609 nova_compute[221550]: 2026-01-31 08:44:54.263 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:44:54 np0005603609 nova_compute[221550]: 2026-01-31 08:44:54.264 221554 DEBUG nova.compute.manager [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:44:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:44:54.706 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:44:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:44:54.707 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:44:54 np0005603609 nova_compute[221550]: 2026-01-31 08:44:54.708 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:44:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:54.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:44:55 np0005603609 nova_compute[221550]: 2026-01-31 08:44:55.304 221554 DEBUG nova.compute.manager [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:44:55 np0005603609 nova_compute[221550]: 2026-01-31 08:44:55.304 221554 DEBUG nova.network.neutron [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:44:55 np0005603609 nova_compute[221550]: 2026-01-31 08:44:55.540 221554 INFO nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:44:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:44:55.711 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:44:55 np0005603609 nova_compute[221550]: 2026-01-31 08:44:55.982 221554 DEBUG nova.compute.manager [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:44:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:44:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:56.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:44:56 np0005603609 nova_compute[221550]: 2026-01-31 08:44:56.698 221554 DEBUG nova.policy [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d0e9d918b4041fabd5ded633b4cf404', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9710f0cf77d84353ae13fa47922b085d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:44:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:56.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:56 np0005603609 nova_compute[221550]: 2026-01-31 08:44:56.795 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:56 np0005603609 nova_compute[221550]: 2026-01-31 08:44:56.805 221554 DEBUG nova.compute.manager [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:44:56 np0005603609 nova_compute[221550]: 2026-01-31 08:44:56.807 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:44:56 np0005603609 nova_compute[221550]: 2026-01-31 08:44:56.808 221554 INFO nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Creating image(s)#033[00m
Jan 31 03:44:56 np0005603609 nova_compute[221550]: 2026-01-31 08:44:56.853 221554 DEBUG nova.storage.rbd_utils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 0a25731c-0886-42e0-ac2e-21d9078873a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:44:56 np0005603609 nova_compute[221550]: 2026-01-31 08:44:56.896 221554 DEBUG nova.storage.rbd_utils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 0a25731c-0886-42e0-ac2e-21d9078873a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:44:56 np0005603609 nova_compute[221550]: 2026-01-31 08:44:56.933 221554 DEBUG nova.storage.rbd_utils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 0a25731c-0886-42e0-ac2e-21d9078873a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:44:56 np0005603609 nova_compute[221550]: 2026-01-31 08:44:56.939 221554 DEBUG oslo_concurrency.processutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:44:57 np0005603609 nova_compute[221550]: 2026-01-31 08:44:57.002 221554 DEBUG oslo_concurrency.processutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:44:57 np0005603609 nova_compute[221550]: 2026-01-31 08:44:57.003 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:57 np0005603609 nova_compute[221550]: 2026-01-31 08:44:57.004 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:57 np0005603609 nova_compute[221550]: 2026-01-31 08:44:57.005 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:44:57 np0005603609 nova_compute[221550]: 2026-01-31 08:44:57.039 221554 DEBUG nova.storage.rbd_utils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 0a25731c-0886-42e0-ac2e-21d9078873a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:44:57 np0005603609 nova_compute[221550]: 2026-01-31 08:44:57.044 221554 DEBUG oslo_concurrency.processutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 0a25731c-0886-42e0-ac2e-21d9078873a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:44:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:57 np0005603609 nova_compute[221550]: 2026-01-31 08:44:57.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:57 np0005603609 nova_compute[221550]: 2026-01-31 08:44:57.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:44:57 np0005603609 nova_compute[221550]: 2026-01-31 08:44:57.852 221554 DEBUG oslo_concurrency.processutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 0a25731c-0886-42e0-ac2e-21d9078873a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.809s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:44:57 np0005603609 nova_compute[221550]: 2026-01-31 08:44:57.926 221554 DEBUG nova.storage.rbd_utils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] resizing rbd image 0a25731c-0886-42e0-ac2e-21d9078873a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:44:58 np0005603609 nova_compute[221550]: 2026-01-31 08:44:58.031 221554 DEBUG nova.objects.instance [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'migration_context' on Instance uuid 0a25731c-0886-42e0-ac2e-21d9078873a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:44:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:58.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:58 np0005603609 nova_compute[221550]: 2026-01-31 08:44:58.170 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:44:58 np0005603609 nova_compute[221550]: 2026-01-31 08:44:58.171 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Ensure instance console log exists: /var/lib/nova/instances/0a25731c-0886-42e0-ac2e-21d9078873a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:44:58 np0005603609 nova_compute[221550]: 2026-01-31 08:44:58.172 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:58 np0005603609 nova_compute[221550]: 2026-01-31 08:44:58.172 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:58 np0005603609 nova_compute[221550]: 2026-01-31 08:44:58.172 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:44:58 np0005603609 nova_compute[221550]: 2026-01-31 08:44:58.465 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:44:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:44:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:58.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:45:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:00.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:00 np0005603609 nova_compute[221550]: 2026-01-31 08:45:00.663 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:00 np0005603609 nova_compute[221550]: 2026-01-31 08:45:00.664 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:45:00 np0005603609 nova_compute[221550]: 2026-01-31 08:45:00.664 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:45:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:00.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:01 np0005603609 nova_compute[221550]: 2026-01-31 08:45:01.705 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:45:01 np0005603609 nova_compute[221550]: 2026-01-31 08:45:01.705 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:45:01 np0005603609 nova_compute[221550]: 2026-01-31 08:45:01.796 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:45:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:02.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:45:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:02 np0005603609 nova_compute[221550]: 2026-01-31 08:45:02.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:45:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:02.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:45:03 np0005603609 nova_compute[221550]: 2026-01-31 08:45:03.049 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:03 np0005603609 nova_compute[221550]: 2026-01-31 08:45:03.049 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:03 np0005603609 nova_compute[221550]: 2026-01-31 08:45:03.050 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:03 np0005603609 nova_compute[221550]: 2026-01-31 08:45:03.050 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:45:03 np0005603609 nova_compute[221550]: 2026-01-31 08:45:03.050 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:45:03 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2843535445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:45:03 np0005603609 nova_compute[221550]: 2026-01-31 08:45:03.469 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:03 np0005603609 nova_compute[221550]: 2026-01-31 08:45:03.485 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:03 np0005603609 nova_compute[221550]: 2026-01-31 08:45:03.672 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:45:03 np0005603609 nova_compute[221550]: 2026-01-31 08:45:03.673 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4215MB free_disk=20.901229858398438GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:45:03 np0005603609 nova_compute[221550]: 2026-01-31 08:45:03.673 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:03 np0005603609 nova_compute[221550]: 2026-01-31 08:45:03.673 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:04 np0005603609 nova_compute[221550]: 2026-01-31 08:45:04.080 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 0a25731c-0886-42e0-ac2e-21d9078873a4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:45:04 np0005603609 nova_compute[221550]: 2026-01-31 08:45:04.081 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:45:04 np0005603609 nova_compute[221550]: 2026-01-31 08:45:04.081 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:45:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:04.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:04 np0005603609 nova_compute[221550]: 2026-01-31 08:45:04.155 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:45:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1282128364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:45:04 np0005603609 nova_compute[221550]: 2026-01-31 08:45:04.568 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:04 np0005603609 nova_compute[221550]: 2026-01-31 08:45:04.575 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:45:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:45:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:04.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:45:04 np0005603609 nova_compute[221550]: 2026-01-31 08:45:04.745 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:45:05 np0005603609 nova_compute[221550]: 2026-01-31 08:45:05.008 221554 DEBUG nova.network.neutron [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Successfully created port: 4a83d32e-2424-4610-95a8-707cc88ca85c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:45:05 np0005603609 nova_compute[221550]: 2026-01-31 08:45:05.901 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:45:05 np0005603609 nova_compute[221550]: 2026-01-31 08:45:05.901 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:06.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:45:06Z|00873|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 03:45:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:45:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:06.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:45:06 np0005603609 nova_compute[221550]: 2026-01-31 08:45:06.798 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:07.537 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:07.537 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:07.538 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:08.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:08 np0005603609 podman[304189]: 2026-01-31 08:45:08.206067273 +0000 UTC m=+0.087913510 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:45:08 np0005603609 podman[304188]: 2026-01-31 08:45:08.211124645 +0000 UTC m=+0.095217627 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 03:45:08 np0005603609 nova_compute[221550]: 2026-01-31 08:45:08.472 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:08.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:10 np0005603609 nova_compute[221550]: 2026-01-31 08:45:10.025 221554 DEBUG nova.network.neutron [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Successfully updated port: 4a83d32e-2424-4610-95a8-707cc88ca85c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:45:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:10.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:10.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:11 np0005603609 nova_compute[221550]: 2026-01-31 08:45:11.332 221554 DEBUG nova.compute.manager [req-83f3cbd2-6e02-455a-961a-bc12e98a5f8d req-5acaa35d-fa7d-4219-b342-ff2a5d4ce944 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received event network-changed-4a83d32e-2424-4610-95a8-707cc88ca85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:45:11 np0005603609 nova_compute[221550]: 2026-01-31 08:45:11.333 221554 DEBUG nova.compute.manager [req-83f3cbd2-6e02-455a-961a-bc12e98a5f8d req-5acaa35d-fa7d-4219-b342-ff2a5d4ce944 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Refreshing instance network info cache due to event network-changed-4a83d32e-2424-4610-95a8-707cc88ca85c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:45:11 np0005603609 nova_compute[221550]: 2026-01-31 08:45:11.333 221554 DEBUG oslo_concurrency.lockutils [req-83f3cbd2-6e02-455a-961a-bc12e98a5f8d req-5acaa35d-fa7d-4219-b342-ff2a5d4ce944 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:45:11 np0005603609 nova_compute[221550]: 2026-01-31 08:45:11.333 221554 DEBUG oslo_concurrency.lockutils [req-83f3cbd2-6e02-455a-961a-bc12e98a5f8d req-5acaa35d-fa7d-4219-b342-ff2a5d4ce944 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:45:11 np0005603609 nova_compute[221550]: 2026-01-31 08:45:11.333 221554 DEBUG nova.network.neutron [req-83f3cbd2-6e02-455a-961a-bc12e98a5f8d req-5acaa35d-fa7d-4219-b342-ff2a5d4ce944 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Refreshing network info cache for port 4a83d32e-2424-4610-95a8-707cc88ca85c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:45:11 np0005603609 nova_compute[221550]: 2026-01-31 08:45:11.655 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:45:11 np0005603609 nova_compute[221550]: 2026-01-31 08:45:11.800 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:11 np0005603609 nova_compute[221550]: 2026-01-31 08:45:11.898 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:11 np0005603609 nova_compute[221550]: 2026-01-31 08:45:11.898 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:45:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:12.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:45:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:45:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:12.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:45:12 np0005603609 nova_compute[221550]: 2026-01-31 08:45:12.986 221554 DEBUG nova.network.neutron [req-83f3cbd2-6e02-455a-961a-bc12e98a5f8d req-5acaa35d-fa7d-4219-b342-ff2a5d4ce944 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:45:13 np0005603609 nova_compute[221550]: 2026-01-31 08:45:13.476 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:13 np0005603609 nova_compute[221550]: 2026-01-31 08:45:13.494 221554 DEBUG nova.network.neutron [req-83f3cbd2-6e02-455a-961a-bc12e98a5f8d req-5acaa35d-fa7d-4219-b342-ff2a5d4ce944 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:45:13 np0005603609 nova_compute[221550]: 2026-01-31 08:45:13.740 221554 DEBUG oslo_concurrency.lockutils [req-83f3cbd2-6e02-455a-961a-bc12e98a5f8d req-5acaa35d-fa7d-4219-b342-ff2a5d4ce944 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:45:13 np0005603609 nova_compute[221550]: 2026-01-31 08:45:13.742 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:45:13 np0005603609 nova_compute[221550]: 2026-01-31 08:45:13.743 221554 DEBUG nova.network.neutron [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:45:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:45:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:14.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:45:14 np0005603609 nova_compute[221550]: 2026-01-31 08:45:14.385 221554 DEBUG nova.network.neutron [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:45:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:14.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:45:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:16.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:45:16 np0005603609 nova_compute[221550]: 2026-01-31 08:45:16.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:16.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:16 np0005603609 nova_compute[221550]: 2026-01-31 08:45:16.801 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:18.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:18 np0005603609 nova_compute[221550]: 2026-01-31 08:45:18.480 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:18.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:19 np0005603609 podman[304402]: 2026-01-31 08:45:19.925301832 +0000 UTC m=+0.086220300 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Jan 31 03:45:20 np0005603609 podman[304402]: 2026-01-31 08:45:20.009067039 +0000 UTC m=+0.169985577 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:45:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:20.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.255 221554 DEBUG nova.network.neutron [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Updating instance_info_cache with network_info: [{"id": "4a83d32e-2424-4610-95a8-707cc88ca85c", "address": "fa:16:3e:de:30:72", "network": {"id": "c76f8f50-9ea9-4f3f-895a-20259e6b3b8e", "bridge": "br-int", "label": "tempest-network-smoke--559564160", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a83d32e-24", "ovs_interfaceid": "4a83d32e-2424-4610-95a8-707cc88ca85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.459 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.460 221554 DEBUG nova.compute.manager [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Instance network_info: |[{"id": "4a83d32e-2424-4610-95a8-707cc88ca85c", "address": "fa:16:3e:de:30:72", "network": {"id": "c76f8f50-9ea9-4f3f-895a-20259e6b3b8e", "bridge": "br-int", "label": "tempest-network-smoke--559564160", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a83d32e-24", "ovs_interfaceid": "4a83d32e-2424-4610-95a8-707cc88ca85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.463 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Start _get_guest_xml network_info=[{"id": "4a83d32e-2424-4610-95a8-707cc88ca85c", "address": "fa:16:3e:de:30:72", "network": {"id": "c76f8f50-9ea9-4f3f-895a-20259e6b3b8e", "bridge": "br-int", "label": "tempest-network-smoke--559564160", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a83d32e-24", "ovs_interfaceid": "4a83d32e-2424-4610-95a8-707cc88ca85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.469 221554 WARNING nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.474 221554 DEBUG nova.virt.libvirt.host [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.476 221554 DEBUG nova.virt.libvirt.host [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.479 221554 DEBUG nova.virt.libvirt.host [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.480 221554 DEBUG nova.virt.libvirt.host [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.481 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.481 221554 DEBUG nova.virt.hardware [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.482 221554 DEBUG nova.virt.hardware [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.482 221554 DEBUG nova.virt.hardware [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.482 221554 DEBUG nova.virt.hardware [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.482 221554 DEBUG nova.virt.hardware [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.483 221554 DEBUG nova.virt.hardware [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.483 221554 DEBUG nova.virt.hardware [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.483 221554 DEBUG nova.virt.hardware [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.483 221554 DEBUG nova.virt.hardware [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.484 221554 DEBUG nova.virt.hardware [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.484 221554 DEBUG nova.virt.hardware [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.487 221554 DEBUG oslo_concurrency.processutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:20.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:45:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3518480498' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.941 221554 DEBUG oslo_concurrency.processutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.972 221554 DEBUG nova.storage.rbd_utils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 0a25731c-0886-42e0-ac2e-21d9078873a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:45:20 np0005603609 nova_compute[221550]: 2026-01-31 08:45:20.977 221554 DEBUG oslo_concurrency.processutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:45:21 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3257554958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:45:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:45:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:45:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:45:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:45:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.482 221554 DEBUG oslo_concurrency.processutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.484 221554 DEBUG nova.virt.libvirt.vif [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:44:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1959223726',display_name='tempest-TestNetworkAdvancedServerOps-server-1959223726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1959223726',id=192,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA7+Ui05Zz/RsBbrClJ7vBgaMcvC9QWdCfA17lbaydUCT1aH2bB8o/YdVrWfh1EGjjTYoXOlG7FQaLMqln9+0hIOC8/djk0BDUavPbD5dOYbGsmhY9RYdYJLU0zrRNUiwQ==',key_name='tempest-TestNetworkAdvancedServerOps-2091416548',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-hlhjz8ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:44:56Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=0a25731c-0886-42e0-ac2e-21d9078873a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a83d32e-2424-4610-95a8-707cc88ca85c", "address": "fa:16:3e:de:30:72", "network": {"id": "c76f8f50-9ea9-4f3f-895a-20259e6b3b8e", "bridge": "br-int", "label": "tempest-network-smoke--559564160", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a83d32e-24", "ovs_interfaceid": "4a83d32e-2424-4610-95a8-707cc88ca85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.484 221554 DEBUG nova.network.os_vif_util [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "4a83d32e-2424-4610-95a8-707cc88ca85c", "address": "fa:16:3e:de:30:72", "network": {"id": "c76f8f50-9ea9-4f3f-895a-20259e6b3b8e", "bridge": "br-int", "label": "tempest-network-smoke--559564160", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a83d32e-24", "ovs_interfaceid": "4a83d32e-2424-4610-95a8-707cc88ca85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.485 221554 DEBUG nova.network.os_vif_util [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:30:72,bridge_name='br-int',has_traffic_filtering=True,id=4a83d32e-2424-4610-95a8-707cc88ca85c,network=Network(c76f8f50-9ea9-4f3f-895a-20259e6b3b8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a83d32e-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.487 221554 DEBUG nova.objects.instance [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'pci_devices' on Instance uuid 0a25731c-0886-42e0-ac2e-21d9078873a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.662 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  <uuid>0a25731c-0886-42e0-ac2e-21d9078873a4</uuid>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  <name>instance-000000c0</name>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1959223726</nova:name>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:45:20</nova:creationTime>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        <nova:user uuid="4d0e9d918b4041fabd5ded633b4cf404">tempest-TestNetworkAdvancedServerOps-483180749-project-member</nova:user>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        <nova:project uuid="9710f0cf77d84353ae13fa47922b085d">tempest-TestNetworkAdvancedServerOps-483180749</nova:project>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        <nova:port uuid="4a83d32e-2424-4610-95a8-707cc88ca85c">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <entry name="serial">0a25731c-0886-42e0-ac2e-21d9078873a4</entry>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <entry name="uuid">0a25731c-0886-42e0-ac2e-21d9078873a4</entry>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/0a25731c-0886-42e0-ac2e-21d9078873a4_disk">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/0a25731c-0886-42e0-ac2e-21d9078873a4_disk.config">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:de:30:72"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <target dev="tap4a83d32e-24"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/0a25731c-0886-42e0-ac2e-21d9078873a4/console.log" append="off"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:45:21 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:45:21 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:45:21 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:45:21 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.663 221554 DEBUG nova.compute.manager [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Preparing to wait for external event network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.666 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.666 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.666 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.667 221554 DEBUG nova.virt.libvirt.vif [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:44:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1959223726',display_name='tempest-TestNetworkAdvancedServerOps-server-1959223726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1959223726',id=192,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA7+Ui05Zz/RsBbrClJ7vBgaMcvC9QWdCfA17lbaydUCT1aH2bB8o/YdVrWfh1EGjjTYoXOlG7FQaLMqln9+0hIOC8/djk0BDUavPbD5dOYbGsmhY9RYdYJLU0zrRNUiwQ==',key_name='tempest-TestNetworkAdvancedServerOps-2091416548',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-hlhjz8ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:44:56Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=0a25731c-0886-42e0-ac2e-21d9078873a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a83d32e-2424-4610-95a8-707cc88ca85c", "address": "fa:16:3e:de:30:72", "network": {"id": "c76f8f50-9ea9-4f3f-895a-20259e6b3b8e", "bridge": "br-int", "label": "tempest-network-smoke--559564160", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a83d32e-24", "ovs_interfaceid": "4a83d32e-2424-4610-95a8-707cc88ca85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.668 221554 DEBUG nova.network.os_vif_util [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "4a83d32e-2424-4610-95a8-707cc88ca85c", "address": "fa:16:3e:de:30:72", "network": {"id": "c76f8f50-9ea9-4f3f-895a-20259e6b3b8e", "bridge": "br-int", "label": "tempest-network-smoke--559564160", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a83d32e-24", "ovs_interfaceid": "4a83d32e-2424-4610-95a8-707cc88ca85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.668 221554 DEBUG nova.network.os_vif_util [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:30:72,bridge_name='br-int',has_traffic_filtering=True,id=4a83d32e-2424-4610-95a8-707cc88ca85c,network=Network(c76f8f50-9ea9-4f3f-895a-20259e6b3b8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a83d32e-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.668 221554 DEBUG os_vif [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:30:72,bridge_name='br-int',has_traffic_filtering=True,id=4a83d32e-2424-4610-95a8-707cc88ca85c,network=Network(c76f8f50-9ea9-4f3f-895a-20259e6b3b8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a83d32e-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.669 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.670 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.670 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.673 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.673 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a83d32e-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.673 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a83d32e-24, col_values=(('external_ids', {'iface-id': '4a83d32e-2424-4610-95a8-707cc88ca85c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:30:72', 'vm-uuid': '0a25731c-0886-42e0-ac2e-21d9078873a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.675 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:21 np0005603609 NetworkManager[49064]: <info>  [1769849121.6763] manager: (tap4a83d32e-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/401)
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.678 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.681 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.682 221554 INFO os_vif [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:30:72,bridge_name='br-int',has_traffic_filtering=True,id=4a83d32e-2424-4610-95a8-707cc88ca85c,network=Network(c76f8f50-9ea9-4f3f-895a-20259e6b3b8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a83d32e-24')#033[00m
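The plug sequence above boils down to two OVSDB commands: AddPortCommand attaches tap4a83d32e-24 to br-int, then DbSetCommand stamps the Interface row with the external_ids that let OVN match the interface to its Neutron port. A minimal sketch of just the data written, with values taken from the log lines above (the function name is illustrative, not nova's or os-vif's API):

```python
# Sketch of the external_ids nova's os-vif OVS plugin sets on the new
# Interface record (see the DbSetCommand log line above). Illustrative only.

def build_port_external_ids(port_id, mac, vm_uuid):
    """external_ids that mark an OVS interface as a bound Neutron port."""
    return {
        "iface-id": port_id,        # Neutron port UUID; ovn-controller keys on this
        "iface-status": "active",
        "attached-mac": mac,        # the instance's MAC
        "vm-uuid": vm_uuid,         # owning Nova instance
    }

ids = build_port_external_ids(
    "4a83d32e-2424-4610-95a8-707cc88ca85c",
    "fa:16:3e:de:30:72",
    "0a25731c-0886-42e0-ac2e-21d9078873a4",
)
```

Once this row lands in the local OVSDB, ovn-controller sees the iface-id, which is what triggers the "Claiming lport" messages a couple of seconds later.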
Jan 31 03:45:21 np0005603609 nova_compute[221550]: 2026-01-31 08:45:21.803 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:45:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:22.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
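The interleaved radosgw "beast" lines are load-balancer health probes (anonymous HEAD / returning 200). A small parser for this line shape, with the field layout inferred from this log rather than from Ceph documentation:

```python
import re

# Hypothetical parser for the radosgw beast access-log line above; the
# field order is inferred from this log only.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<request>[^"]+)" (?P<status>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous '
        '[31/Jan/2026:08:45:22.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000024s')
m = BEAST_RE.search(line)
```

Filtering on `user == "anonymous"` and `request.startswith("HEAD /")` is a cheap way to drop these probes when scanning for real S3 traffic.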
Jan 31 03:45:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:22 np0005603609 nova_compute[221550]: 2026-01-31 08:45:22.541 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:45:22 np0005603609 nova_compute[221550]: 2026-01-31 08:45:22.542 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:45:22 np0005603609 nova_compute[221550]: 2026-01-31 08:45:22.542 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No VIF found with MAC fa:16:3e:de:30:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:45:22 np0005603609 nova_compute[221550]: 2026-01-31 08:45:22.543 221554 INFO nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Using config drive#033[00m
Jan 31 03:45:22 np0005603609 nova_compute[221550]: 2026-01-31 08:45:22.582 221554 DEBUG nova.storage.rbd_utils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 0a25731c-0886-42e0-ac2e-21d9078873a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:45:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:45:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:22.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:45:23 np0005603609 nova_compute[221550]: 2026-01-31 08:45:23.122 221554 INFO nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Creating config drive at /var/lib/nova/instances/0a25731c-0886-42e0-ac2e-21d9078873a4/disk.config#033[00m
Jan 31 03:45:23 np0005603609 nova_compute[221550]: 2026-01-31 08:45:23.127 221554 DEBUG oslo_concurrency.processutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0a25731c-0886-42e0-ac2e-21d9078873a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp37fvzz1s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:23 np0005603609 nova_compute[221550]: 2026-01-31 08:45:23.261 221554 DEBUG oslo_concurrency.processutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0a25731c-0886-42e0-ac2e-21d9078873a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp37fvzz1s" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:23 np0005603609 nova_compute[221550]: 2026-01-31 08:45:23.309 221554 DEBUG nova.storage.rbd_utils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 0a25731c-0886-42e0-ac2e-21d9078873a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:45:23 np0005603609 nova_compute[221550]: 2026-01-31 08:45:23.314 221554 DEBUG oslo_concurrency.processutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0a25731c-0886-42e0-ac2e-21d9078873a4/disk.config 0a25731c-0886-42e0-ac2e-21d9078873a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:23 np0005603609 nova_compute[221550]: 2026-01-31 08:45:23.484 221554 DEBUG oslo_concurrency.processutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0a25731c-0886-42e0-ac2e-21d9078873a4/disk.config 0a25731c-0886-42e0-ac2e-21d9078873a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:23 np0005603609 nova_compute[221550]: 2026-01-31 08:45:23.485 221554 INFO nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Deleting local config drive /var/lib/nova/instances/0a25731c-0886-42e0-ac2e-21d9078873a4/disk.config because it was imported into RBD.#033[00m
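The config-drive sequence above is: build an ISO9660 image labeled config-2 with mkisofs, import it into the Ceph vms pool as `<uuid>_disk.config`, then delete the local copy. A sketch that assembles those two argv lists (nothing is executed here; paths and pool come from the log, the build-specific `-publisher` string is omitted, and the helper name is illustrative):

```python
# Reconstructs the two command lines logged above as argv lists.
# Not nova's actual code; values are copied from this log.

def configdrive_commands(instance_uuid, pool="vms",
                         base="/var/lib/nova/instances"):
    iso = f"{base}/{instance_uuid}/disk.config"
    mkisofs = ["/usr/bin/mkisofs", "-o", iso,
               "-ldots", "-allow-lowercase", "-allow-multidot",
               "-l", "-quiet", "-J", "-r",
               "-V", "config-2"]          # volume label cloud-init probes for
    rbd_import = ["rbd", "import", "--pool", pool, iso,
                  f"{instance_uuid}_disk.config",
                  "--image-format=2"]     # format 2 supports layering
    return mkisofs, rbd_import

mkisofs_cmd, rbd_cmd = configdrive_commands(
    "0a25731c-0886-42e0-ac2e-21d9078873a4")
```

Storing the config drive in RBD rather than on local disk is what makes the subsequent "Deleting local config drive" step safe: the instance reads it back from Ceph.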
Jan 31 03:45:23 np0005603609 kernel: tap4a83d32e-24: entered promiscuous mode
Jan 31 03:45:23 np0005603609 NetworkManager[49064]: <info>  [1769849123.5249] manager: (tap4a83d32e-24): new Tun device (/org/freedesktop/NetworkManager/Devices/402)
Jan 31 03:45:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:45:23Z|00874|binding|INFO|Claiming lport 4a83d32e-2424-4610-95a8-707cc88ca85c for this chassis.
Jan 31 03:45:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:45:23Z|00875|binding|INFO|4a83d32e-2424-4610-95a8-707cc88ca85c: Claiming fa:16:3e:de:30:72 10.100.0.6
Jan 31 03:45:23 np0005603609 nova_compute[221550]: 2026-01-31 08:45:23.526 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:23 np0005603609 nova_compute[221550]: 2026-01-31 08:45:23.529 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:23 np0005603609 nova_compute[221550]: 2026-01-31 08:45:23.532 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:23 np0005603609 systemd-udevd[304789]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:45:23 np0005603609 systemd-machined[190912]: New machine qemu-104-instance-000000c0.
Jan 31 03:45:23 np0005603609 nova_compute[221550]: 2026-01-31 08:45:23.555 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:45:23Z|00876|binding|INFO|Setting lport 4a83d32e-2424-4610-95a8-707cc88ca85c ovn-installed in OVS
Jan 31 03:45:23 np0005603609 nova_compute[221550]: 2026-01-31 08:45:23.560 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:23 np0005603609 NetworkManager[49064]: <info>  [1769849123.5617] device (tap4a83d32e-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:45:23 np0005603609 NetworkManager[49064]: <info>  [1769849123.5622] device (tap4a83d32e-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:45:23 np0005603609 systemd[1]: Started Virtual Machine qemu-104-instance-000000c0.
Jan 31 03:45:23 np0005603609 ovn_controller[130359]: 2026-01-31T08:45:23Z|00877|binding|INFO|Setting lport 4a83d32e-2424-4610-95a8-707cc88ca85c up in Southbound
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.747 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:30:72 10.100.0.6'], port_security=['fa:16:3e:de:30:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0a25731c-0886-42e0-ac2e-21d9078873a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c434b9a6-ce01-434c-8159-4fa6c7ce5f42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66082a7f-9219-40e6-91d4-e8db2b40dfed, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=4a83d32e-2424-4610-95a8-707cc88ca85c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.749 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 4a83d32e-2424-4610-95a8-707cc88ca85c in datapath c76f8f50-9ea9-4f3f-895a-20259e6b3b8e bound to our chassis#033[00m
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.751 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c76f8f50-9ea9-4f3f-895a-20259e6b3b8e#033[00m
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.762 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f861843e-c3c3-44ae-b8bc-94d77649a74f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.764 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc76f8f50-91 in ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
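The names in these provisioning lines follow a visible pattern: the namespace is `ovnmeta-<network_id>`, and the veth pair names appear to be "tap" plus a truncated network UUID with a 0/1 suffix (the "-91" end goes into the namespace, the "-90" end is later plugged into br-int). This derivation is inferred from this log only, not from neutron source:

```python
# Name derivation inferred from the log lines above (assumption, not
# neutron's documented API): namespace "ovnmeta-<net>", veth pair
# "tap" + network_id[:10] + "0"/"1".

def metadata_datapath_names(network_id):
    ns = f"ovnmeta-{network_id}"
    veth_outer = f"tap{network_id[:10]}0"   # side plugged into br-int
    veth_inner = f"tap{network_id[:10]}1"   # side moved into the namespace
    return ns, veth_outer, veth_inner

ns, outer, inner = metadata_datapath_names(
    "c76f8f50-9ea9-4f3f-895a-20259e6b3b8e")
```

The same truncation convention explains the instance's tap name: tap4a83d32e-24 is "tap" plus the first characters of port 4a83d32e-2424-….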
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.766 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc76f8f50-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.766 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d71a30-2f77-4e7f-80d5-00be2f293463]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.767 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8117fcd0-942f-4a12-ab2e-95f4ac7146a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.778 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b4b2d9-1d79-45b4-902a-8ec4d09985a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.792 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[594cccd7-f9cf-4384-ad7e-2d4beb615576]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.818 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[1b1216f9-038b-4e27-b0d0-d47891a71168]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:23 np0005603609 NetworkManager[49064]: <info>  [1769849123.8236] manager: (tapc76f8f50-90): new Veth device (/org/freedesktop/NetworkManager/Devices/403)
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.823 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9896582a-9e4f-42d6-82ab-ca76636f2c15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.861 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[14e3853b-585d-4b08-b496-638f7c67b0f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.864 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf880a2-e039-4084-8d98-01937e00c558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:23 np0005603609 NetworkManager[49064]: <info>  [1769849123.8888] device (tapc76f8f50-90): carrier: link connected
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.898 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d0601a-0f9e-4934-8b31-d09fd7b3d282]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.909 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7b531511-9216-450d-90df-b38428639857]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc76f8f50-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b7:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936358, 'reachable_time': 21804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304822, 'error': None, 'target': 'ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.922 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bc6fb29c-a060-4f16-95de-eca9645ee35d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:b74c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 936358, 'tstamp': 936358}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304823, 'error': None, 'target': 'ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.937 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[84d1decc-5850-4187-ba03-3977ff249a09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc76f8f50-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b7:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936358, 'reachable_time': 21804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304824, 'error': None, 'target': 'ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:23.961 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[839bf772-41dc-4ada-b7c7-ad9056b3c3f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:24.010 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa65063-d27a-400e-a3dd-3d3b3482fe5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:24.011 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc76f8f50-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:24.011 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:24.012 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc76f8f50-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:45:24 np0005603609 nova_compute[221550]: 2026-01-31 08:45:24.014 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:24 np0005603609 NetworkManager[49064]: <info>  [1769849124.0153] manager: (tapc76f8f50-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Jan 31 03:45:24 np0005603609 kernel: tapc76f8f50-90: entered promiscuous mode
Jan 31 03:45:24 np0005603609 nova_compute[221550]: 2026-01-31 08:45:24.019 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:24.022 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc76f8f50-90, col_values=(('external_ids', {'iface-id': 'df0a3c35-9af7-4656-9c5a-b90d4098938d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:45:24 np0005603609 nova_compute[221550]: 2026-01-31 08:45:24.023 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:24 np0005603609 ovn_controller[130359]: 2026-01-31T08:45:24Z|00878|binding|INFO|Releasing lport df0a3c35-9af7-4656-9c5a-b90d4098938d from this chassis (sb_readonly=0)
Jan 31 03:45:24 np0005603609 nova_compute[221550]: 2026-01-31 08:45:24.025 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:24.026 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c76f8f50-9ea9-4f3f-895a-20259e6b3b8e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c76f8f50-9ea9-4f3f-895a-20259e6b3b8e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:24.027 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bff20f7e-5e1e-466e-b481-497ae8cfa516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:24.027 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/c76f8f50-9ea9-4f3f-895a-20259e6b3b8e.pid.haproxy
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID c76f8f50-9ea9-4f3f-895a-20259e6b3b8e
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:45:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:24.028 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e', 'env', 'PROCESS_TAG=haproxy-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c76f8f50-9ea9-4f3f-895a-20259e6b3b8e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:45:24 np0005603609 nova_compute[221550]: 2026-01-31 08:45:24.030 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:24.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:24 np0005603609 nova_compute[221550]: 2026-01-31 08:45:24.326 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849124.3257294, 0a25731c-0886-42e0-ac2e-21d9078873a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:45:24 np0005603609 nova_compute[221550]: 2026-01-31 08:45:24.327 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] VM Started (Lifecycle Event)#033[00m
Jan 31 03:45:24 np0005603609 podman[304898]: 2026-01-31 08:45:24.363467668 +0000 UTC m=+0.041612053 container create c7135ec1dbe547846c85dc55dc2bfa1d2118ac17e504d0a7776a6976e261b148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:45:24 np0005603609 systemd[1]: Started libpod-conmon-c7135ec1dbe547846c85dc55dc2bfa1d2118ac17e504d0a7776a6976e261b148.scope.
Jan 31 03:45:24 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:45:24 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928ff9156831e3a17ccbf7e506feb35bc1afdbd3211b1cd75cf6233ecce3b34d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:24 np0005603609 podman[304898]: 2026-01-31 08:45:24.429938226 +0000 UTC m=+0.108082651 container init c7135ec1dbe547846c85dc55dc2bfa1d2118ac17e504d0a7776a6976e261b148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:45:24 np0005603609 podman[304898]: 2026-01-31 08:45:24.433621006 +0000 UTC m=+0.111765401 container start c7135ec1dbe547846c85dc55dc2bfa1d2118ac17e504d0a7776a6976e261b148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:45:24 np0005603609 podman[304898]: 2026-01-31 08:45:24.344116678 +0000 UTC m=+0.022261083 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:45:24 np0005603609 neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e[304913]: [NOTICE]   (304917) : New worker (304919) forked
Jan 31 03:45:24 np0005603609 neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e[304913]: [NOTICE]   (304917) : Loading success.
Jan 31 03:45:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:24.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:24 np0005603609 nova_compute[221550]: 2026-01-31 08:45:24.972 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:45:24 np0005603609 nova_compute[221550]: 2026-01-31 08:45:24.977 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849124.3270833, 0a25731c-0886-42e0-ac2e-21d9078873a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:45:24 np0005603609 nova_compute[221550]: 2026-01-31 08:45:24.978 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:45:25 np0005603609 nova_compute[221550]: 2026-01-31 08:45:25.189 221554 DEBUG nova.compute.manager [req-0124e064-eb59-4126-bd10-1dbb6dc669ee req-60636ce1-2e43-4e53-b481-c4f044204dc5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received event network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:45:25 np0005603609 nova_compute[221550]: 2026-01-31 08:45:25.190 221554 DEBUG oslo_concurrency.lockutils [req-0124e064-eb59-4126-bd10-1dbb6dc669ee req-60636ce1-2e43-4e53-b481-c4f044204dc5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:25 np0005603609 nova_compute[221550]: 2026-01-31 08:45:25.191 221554 DEBUG oslo_concurrency.lockutils [req-0124e064-eb59-4126-bd10-1dbb6dc669ee req-60636ce1-2e43-4e53-b481-c4f044204dc5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:25 np0005603609 nova_compute[221550]: 2026-01-31 08:45:25.192 221554 DEBUG oslo_concurrency.lockutils [req-0124e064-eb59-4126-bd10-1dbb6dc669ee req-60636ce1-2e43-4e53-b481-c4f044204dc5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:25 np0005603609 nova_compute[221550]: 2026-01-31 08:45:25.192 221554 DEBUG nova.compute.manager [req-0124e064-eb59-4126-bd10-1dbb6dc669ee req-60636ce1-2e43-4e53-b481-c4f044204dc5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Processing event network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:45:25 np0005603609 nova_compute[221550]: 2026-01-31 08:45:25.193 221554 DEBUG nova.compute.manager [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:45:25 np0005603609 nova_compute[221550]: 2026-01-31 08:45:25.198 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:45:25 np0005603609 nova_compute[221550]: 2026-01-31 08:45:25.202 221554 INFO nova.virt.libvirt.driver [-] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Instance spawned successfully.#033[00m
Jan 31 03:45:25 np0005603609 nova_compute[221550]: 2026-01-31 08:45:25.203 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:45:26 np0005603609 nova_compute[221550]: 2026-01-31 08:45:26.022 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:45:26 np0005603609 nova_compute[221550]: 2026-01-31 08:45:26.025 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849125.1975782, 0a25731c-0886-42e0-ac2e-21d9078873a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:45:26 np0005603609 nova_compute[221550]: 2026-01-31 08:45:26.026 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:45:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:26.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:26 np0005603609 nova_compute[221550]: 2026-01-31 08:45:26.339 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:45:26 np0005603609 nova_compute[221550]: 2026-01-31 08:45:26.341 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:45:26 np0005603609 nova_compute[221550]: 2026-01-31 08:45:26.342 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:45:26 np0005603609 nova_compute[221550]: 2026-01-31 08:45:26.343 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:45:26 np0005603609 nova_compute[221550]: 2026-01-31 08:45:26.343 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:45:26 np0005603609 nova_compute[221550]: 2026-01-31 08:45:26.344 221554 DEBUG nova.virt.libvirt.driver [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:45:26 np0005603609 nova_compute[221550]: 2026-01-31 08:45:26.675 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:45:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:26.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:45:26 np0005603609 nova_compute[221550]: 2026-01-31 08:45:26.804 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:26 np0005603609 nova_compute[221550]: 2026-01-31 08:45:26.928 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:45:26 np0005603609 nova_compute[221550]: 2026-01-31 08:45:26.933 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:45:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:27 np0005603609 nova_compute[221550]: 2026-01-31 08:45:27.559 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:45:27 np0005603609 nova_compute[221550]: 2026-01-31 08:45:27.746 221554 INFO nova.compute.manager [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Took 30.94 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:45:27 np0005603609 nova_compute[221550]: 2026-01-31 08:45:27.747 221554 DEBUG nova.compute.manager [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:45:27 np0005603609 nova_compute[221550]: 2026-01-31 08:45:27.782 221554 DEBUG nova.compute.manager [req-9a51b967-4460-410b-afdb-6d6ab07d485a req-b709e7a1-7a53-4e1f-8559-51ec58ca18ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received event network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:45:27 np0005603609 nova_compute[221550]: 2026-01-31 08:45:27.783 221554 DEBUG oslo_concurrency.lockutils [req-9a51b967-4460-410b-afdb-6d6ab07d485a req-b709e7a1-7a53-4e1f-8559-51ec58ca18ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:27 np0005603609 nova_compute[221550]: 2026-01-31 08:45:27.783 221554 DEBUG oslo_concurrency.lockutils [req-9a51b967-4460-410b-afdb-6d6ab07d485a req-b709e7a1-7a53-4e1f-8559-51ec58ca18ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:27 np0005603609 nova_compute[221550]: 2026-01-31 08:45:27.783 221554 DEBUG oslo_concurrency.lockutils [req-9a51b967-4460-410b-afdb-6d6ab07d485a req-b709e7a1-7a53-4e1f-8559-51ec58ca18ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:27 np0005603609 nova_compute[221550]: 2026-01-31 08:45:27.784 221554 DEBUG nova.compute.manager [req-9a51b967-4460-410b-afdb-6d6ab07d485a req-b709e7a1-7a53-4e1f-8559-51ec58ca18ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] No waiting events found dispatching network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:45:27 np0005603609 nova_compute[221550]: 2026-01-31 08:45:27.784 221554 WARNING nova.compute.manager [req-9a51b967-4460-410b-afdb-6d6ab07d485a req-b709e7a1-7a53-4e1f-8559-51ec58ca18ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received unexpected event network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c for instance with vm_state building and task_state spawning.#033[00m
Jan 31 03:45:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:45:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:45:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:28.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:28 np0005603609 nova_compute[221550]: 2026-01-31 08:45:28.292 221554 INFO nova.compute.manager [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Took 36.75 seconds to build instance.#033[00m
Jan 31 03:45:28 np0005603609 nova_compute[221550]: 2026-01-31 08:45:28.482 221554 DEBUG oslo_concurrency.lockutils [None req-2e4823a2-d2d1-49a2-87c9-cb01f3200677 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 40.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:28.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:30.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:30 np0005603609 nova_compute[221550]: 2026-01-31 08:45:30.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:45:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:30.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:45:31 np0005603609 nova_compute[221550]: 2026-01-31 08:45:31.678 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:31 np0005603609 nova_compute[221550]: 2026-01-31 08:45:31.806 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:32.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:32.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:45:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:34.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:45:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:34.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:36.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:36 np0005603609 nova_compute[221550]: 2026-01-31 08:45:36.681 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:45:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:36.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:45:36 np0005603609 nova_compute[221550]: 2026-01-31 08:45:36.808 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:37 np0005603609 nova_compute[221550]: 2026-01-31 08:45:37.276 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:37 np0005603609 NetworkManager[49064]: <info>  [1769849137.2884] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Jan 31 03:45:37 np0005603609 NetworkManager[49064]: <info>  [1769849137.2891] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Jan 31 03:45:37 np0005603609 nova_compute[221550]: 2026-01-31 08:45:37.301 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:37 np0005603609 ovn_controller[130359]: 2026-01-31T08:45:37Z|00879|binding|INFO|Releasing lport df0a3c35-9af7-4656-9c5a-b90d4098938d from this chassis (sb_readonly=0)
Jan 31 03:45:37 np0005603609 nova_compute[221550]: 2026-01-31 08:45:37.310 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:37 np0005603609 ovn_controller[130359]: 2026-01-31T08:45:37Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:30:72 10.100.0.6
Jan 31 03:45:37 np0005603609 ovn_controller[130359]: 2026-01-31T08:45:37Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:30:72 10.100.0.6
Jan 31 03:45:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:38.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:38.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:39 np0005603609 podman[304981]: 2026-01-31 08:45:39.17662548 +0000 UTC m=+0.056312371 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:45:39 np0005603609 podman[304980]: 2026-01-31 08:45:39.209781408 +0000 UTC m=+0.090844982 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 31 03:45:39 np0005603609 nova_compute[221550]: 2026-01-31 08:45:39.285 221554 DEBUG nova.compute.manager [req-0e39edec-c33a-46d8-9291-87f047ba1fa2 req-28db93f3-4e5d-4f06-a5bf-acd76461eb85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received event network-changed-4a83d32e-2424-4610-95a8-707cc88ca85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:45:39 np0005603609 nova_compute[221550]: 2026-01-31 08:45:39.286 221554 DEBUG nova.compute.manager [req-0e39edec-c33a-46d8-9291-87f047ba1fa2 req-28db93f3-4e5d-4f06-a5bf-acd76461eb85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Refreshing instance network info cache due to event network-changed-4a83d32e-2424-4610-95a8-707cc88ca85c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:45:39 np0005603609 nova_compute[221550]: 2026-01-31 08:45:39.286 221554 DEBUG oslo_concurrency.lockutils [req-0e39edec-c33a-46d8-9291-87f047ba1fa2 req-28db93f3-4e5d-4f06-a5bf-acd76461eb85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:45:39 np0005603609 nova_compute[221550]: 2026-01-31 08:45:39.286 221554 DEBUG oslo_concurrency.lockutils [req-0e39edec-c33a-46d8-9291-87f047ba1fa2 req-28db93f3-4e5d-4f06-a5bf-acd76461eb85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:45:39 np0005603609 nova_compute[221550]: 2026-01-31 08:45:39.286 221554 DEBUG nova.network.neutron [req-0e39edec-c33a-46d8-9291-87f047ba1fa2 req-28db93f3-4e5d-4f06-a5bf-acd76461eb85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Refreshing network info cache for port 4a83d32e-2424-4610-95a8-707cc88ca85c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:45:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:45:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:40.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:45:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:40.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:41 np0005603609 nova_compute[221550]: 2026-01-31 08:45:41.683 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:41 np0005603609 nova_compute[221550]: 2026-01-31 08:45:41.810 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:45:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:42.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:45:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:42 np0005603609 nova_compute[221550]: 2026-01-31 08:45:42.290 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:45:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:42.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:45:43 np0005603609 nova_compute[221550]: 2026-01-31 08:45:43.819 221554 DEBUG nova.network.neutron [req-0e39edec-c33a-46d8-9291-87f047ba1fa2 req-28db93f3-4e5d-4f06-a5bf-acd76461eb85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Updated VIF entry in instance network info cache for port 4a83d32e-2424-4610-95a8-707cc88ca85c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:45:43 np0005603609 nova_compute[221550]: 2026-01-31 08:45:43.819 221554 DEBUG nova.network.neutron [req-0e39edec-c33a-46d8-9291-87f047ba1fa2 req-28db93f3-4e5d-4f06-a5bf-acd76461eb85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Updating instance_info_cache with network_info: [{"id": "4a83d32e-2424-4610-95a8-707cc88ca85c", "address": "fa:16:3e:de:30:72", "network": {"id": "c76f8f50-9ea9-4f3f-895a-20259e6b3b8e", "bridge": "br-int", "label": "tempest-network-smoke--559564160", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a83d32e-24", "ovs_interfaceid": "4a83d32e-2424-4610-95a8-707cc88ca85c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:45:43 np0005603609 nova_compute[221550]: 2026-01-31 08:45:43.934 221554 INFO nova.compute.manager [None req-e6a3f5e6-a22c-4e55-b921-56b8abe8a110 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Get console output#033[00m
Jan 31 03:45:43 np0005603609 nova_compute[221550]: 2026-01-31 08:45:43.939 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:45:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:44.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:44 np0005603609 nova_compute[221550]: 2026-01-31 08:45:44.352 221554 DEBUG oslo_concurrency.lockutils [req-0e39edec-c33a-46d8-9291-87f047ba1fa2 req-28db93f3-4e5d-4f06-a5bf-acd76461eb85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:45:44 np0005603609 nova_compute[221550]: 2026-01-31 08:45:44.668 221554 DEBUG nova.objects.instance [None req-21d50d68-ddbe-4db6-b364-20961bf485a7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'pci_devices' on Instance uuid 0a25731c-0886-42e0-ac2e-21d9078873a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:45:44 np0005603609 nova_compute[221550]: 2026-01-31 08:45:44.727 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849144.7268894, 0a25731c-0886-42e0-ac2e-21d9078873a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:45:44 np0005603609 nova_compute[221550]: 2026-01-31 08:45:44.727 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:45:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:44.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:44 np0005603609 nova_compute[221550]: 2026-01-31 08:45:44.861 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:45:44 np0005603609 nova_compute[221550]: 2026-01-31 08:45:44.864 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:45:45 np0005603609 nova_compute[221550]: 2026-01-31 08:45:45.059 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 31 03:45:45 np0005603609 kernel: tap4a83d32e-24 (unregistering): left promiscuous mode
Jan 31 03:45:45 np0005603609 NetworkManager[49064]: <info>  [1769849145.5258] device (tap4a83d32e-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:45:45 np0005603609 nova_compute[221550]: 2026-01-31 08:45:45.535 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:45:45Z|00880|binding|INFO|Releasing lport 4a83d32e-2424-4610-95a8-707cc88ca85c from this chassis (sb_readonly=0)
Jan 31 03:45:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:45:45Z|00881|binding|INFO|Setting lport 4a83d32e-2424-4610-95a8-707cc88ca85c down in Southbound
Jan 31 03:45:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:45:45Z|00882|binding|INFO|Removing iface tap4a83d32e-24 ovn-installed in OVS
Jan 31 03:45:45 np0005603609 nova_compute[221550]: 2026-01-31 08:45:45.539 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:45 np0005603609 nova_compute[221550]: 2026-01-31 08:45:45.555 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:45 np0005603609 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000c0.scope: Deactivated successfully.
Jan 31 03:45:45 np0005603609 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000c0.scope: Consumed 13.517s CPU time.
Jan 31 03:45:45 np0005603609 systemd-machined[190912]: Machine qemu-104-instance-000000c0 terminated.
Jan 31 03:45:45 np0005603609 nova_compute[221550]: 2026-01-31 08:45:45.639 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:45 np0005603609 nova_compute[221550]: 2026-01-31 08:45:45.644 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:45 np0005603609 nova_compute[221550]: 2026-01-31 08:45:45.645 221554 DEBUG nova.compute.manager [None req-21d50d68-ddbe-4db6-b364-20961bf485a7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:45:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:45.763 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:30:72 10.100.0.6'], port_security=['fa:16:3e:de:30:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0a25731c-0886-42e0-ac2e-21d9078873a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c434b9a6-ce01-434c-8159-4fa6c7ce5f42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66082a7f-9219-40e6-91d4-e8db2b40dfed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=4a83d32e-2424-4610-95a8-707cc88ca85c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:45:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:45.764 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 4a83d32e-2424-4610-95a8-707cc88ca85c in datapath c76f8f50-9ea9-4f3f-895a-20259e6b3b8e unbound from our chassis#033[00m
Jan 31 03:45:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:45.765 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c76f8f50-9ea9-4f3f-895a-20259e6b3b8e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:45:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:45.766 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[53625579-df05-4572-b567-2d25cad32502]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:45.767 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e namespace which is not needed anymore#033[00m
Jan 31 03:45:46 np0005603609 neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e[304913]: [NOTICE]   (304917) : haproxy version is 2.8.14-c23fe91
Jan 31 03:45:46 np0005603609 neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e[304913]: [NOTICE]   (304917) : path to executable is /usr/sbin/haproxy
Jan 31 03:45:46 np0005603609 neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e[304913]: [WARNING]  (304917) : Exiting Master process...
Jan 31 03:45:46 np0005603609 neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e[304913]: [ALERT]    (304917) : Current worker (304919) exited with code 143 (Terminated)
Jan 31 03:45:46 np0005603609 neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e[304913]: [WARNING]  (304917) : All workers exited. Exiting... (0)
Jan 31 03:45:46 np0005603609 systemd[1]: libpod-c7135ec1dbe547846c85dc55dc2bfa1d2118ac17e504d0a7776a6976e261b148.scope: Deactivated successfully.
Jan 31 03:45:46 np0005603609 podman[305061]: 2026-01-31 08:45:46.065810715 +0000 UTC m=+0.212649706 container died c7135ec1dbe547846c85dc55dc2bfa1d2118ac17e504d0a7776a6976e261b148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:45:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:46.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:46 np0005603609 systemd[1]: var-lib-containers-storage-overlay-928ff9156831e3a17ccbf7e506feb35bc1afdbd3211b1cd75cf6233ecce3b34d-merged.mount: Deactivated successfully.
Jan 31 03:45:46 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c7135ec1dbe547846c85dc55dc2bfa1d2118ac17e504d0a7776a6976e261b148-userdata-shm.mount: Deactivated successfully.
Jan 31 03:45:46 np0005603609 nova_compute[221550]: 2026-01-31 08:45:46.685 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:45:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:46.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:45:46 np0005603609 nova_compute[221550]: 2026-01-31 08:45:46.811 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:46 np0005603609 podman[305061]: 2026-01-31 08:45:46.929307298 +0000 UTC m=+1.076146259 container cleanup c7135ec1dbe547846c85dc55dc2bfa1d2118ac17e504d0a7776a6976e261b148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:45:46 np0005603609 systemd[1]: libpod-conmon-c7135ec1dbe547846c85dc55dc2bfa1d2118ac17e504d0a7776a6976e261b148.scope: Deactivated successfully.
Jan 31 03:45:47 np0005603609 podman[305093]: 2026-01-31 08:45:47.217616395 +0000 UTC m=+0.262593381 container remove c7135ec1dbe547846c85dc55dc2bfa1d2118ac17e504d0a7776a6976e261b148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:45:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:47.221 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8f4ae29a-66f8-4d80-aa5d-25aa2c0c578a]: (4, ('Sat Jan 31 08:45:45 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e (c7135ec1dbe547846c85dc55dc2bfa1d2118ac17e504d0a7776a6976e261b148)\nc7135ec1dbe547846c85dc55dc2bfa1d2118ac17e504d0a7776a6976e261b148\nSat Jan 31 08:45:46 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e (c7135ec1dbe547846c85dc55dc2bfa1d2118ac17e504d0a7776a6976e261b148)\nc7135ec1dbe547846c85dc55dc2bfa1d2118ac17e504d0a7776a6976e261b148\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:47.223 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[03924133-4acb-4ffd-8cd5-fcf41f8d36b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:47.224 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc76f8f50-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.227 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:47 np0005603609 kernel: tapc76f8f50-90: left promiscuous mode
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.238 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:47.242 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1f116d80-8d57-424c-af19-3509faf1ad3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:47.262 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f863ad21-d9ae-4807-aa1b-9ae50b590bc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:47.264 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[dec55487-3fe1-4ba7-b16d-cbc7c93503f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:47.279 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[901008e1-de02-4baf-b368-5c6b62e9876d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936351, 'reachable_time': 36924, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305113, 'error': None, 'target': 'ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:47 np0005603609 systemd[1]: run-netns-ovnmeta\x2dc76f8f50\x2d9ea9\x2d4f3f\x2d895a\x2d20259e6b3b8e.mount: Deactivated successfully.
Jan 31 03:45:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:47.282 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:45:47 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:47.283 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[ed081c8c-1bc1-4e3f-b855-9b7e6c1ebb86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.660 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.661 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.667 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.668 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.668 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.669 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.868 221554 DEBUG nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.868 221554 DEBUG nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Image id 7c23949f-bba8-4466-bb79-caf568852d38 yields fingerprint ff90c10b8251df1dd96780c3025774cae23123c6 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.869 221554 INFO nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] image 7c23949f-bba8-4466-bb79-caf568852d38 at (/var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6): checking#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.869 221554 DEBUG nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] image 7c23949f-bba8-4466-bb79-caf568852d38 at (/var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.872 221554 DEBUG nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.873 221554 DEBUG nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] 0a25731c-0886-42e0-ac2e-21d9078873a4 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.873 221554 WARNING nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.873 221554 INFO nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Active base files: /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.874 221554 INFO nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Removable base files: /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.875 221554 INFO nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.875 221554 DEBUG nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.875 221554 DEBUG nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Jan 31 03:45:47 np0005603609 nova_compute[221550]: 2026-01-31 08:45:47.876 221554 DEBUG nova.virt.libvirt.imagecache [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Jan 31 03:45:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:48.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:45:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:48.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:45:49 np0005603609 nova_compute[221550]: 2026-01-31 08:45:49.514 221554 DEBUG nova.compute.manager [req-4db41c02-19c3-402b-b6ac-61539078275d req-551240e9-0041-4751-b950-77d1a1e126e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received event network-vif-unplugged-4a83d32e-2424-4610-95a8-707cc88ca85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:45:49 np0005603609 nova_compute[221550]: 2026-01-31 08:45:49.515 221554 DEBUG oslo_concurrency.lockutils [req-4db41c02-19c3-402b-b6ac-61539078275d req-551240e9-0041-4751-b950-77d1a1e126e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:49 np0005603609 nova_compute[221550]: 2026-01-31 08:45:49.515 221554 DEBUG oslo_concurrency.lockutils [req-4db41c02-19c3-402b-b6ac-61539078275d req-551240e9-0041-4751-b950-77d1a1e126e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:49 np0005603609 nova_compute[221550]: 2026-01-31 08:45:49.515 221554 DEBUG oslo_concurrency.lockutils [req-4db41c02-19c3-402b-b6ac-61539078275d req-551240e9-0041-4751-b950-77d1a1e126e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:49 np0005603609 nova_compute[221550]: 2026-01-31 08:45:49.515 221554 DEBUG nova.compute.manager [req-4db41c02-19c3-402b-b6ac-61539078275d req-551240e9-0041-4751-b950-77d1a1e126e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] No waiting events found dispatching network-vif-unplugged-4a83d32e-2424-4610-95a8-707cc88ca85c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:45:49 np0005603609 nova_compute[221550]: 2026-01-31 08:45:49.515 221554 WARNING nova.compute.manager [req-4db41c02-19c3-402b-b6ac-61539078275d req-551240e9-0041-4751-b950-77d1a1e126e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received unexpected event network-vif-unplugged-4a83d32e-2424-4610-95a8-707cc88ca85c for instance with vm_state suspended and task_state None.#033[00m
Jan 31 03:45:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:50.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:50.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:50 np0005603609 nova_compute[221550]: 2026-01-31 08:45:50.819 221554 INFO nova.compute.manager [None req-10372be2-992c-4eef-8df3-bd06cd7f954d 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Get console output#033[00m
Jan 31 03:45:51 np0005603609 nova_compute[221550]: 2026-01-31 08:45:51.688 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:51 np0005603609 nova_compute[221550]: 2026-01-31 08:45:51.814 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:51 np0005603609 nova_compute[221550]: 2026-01-31 08:45:51.876 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:45:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:52.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:45:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:45:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:52.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:45:53 np0005603609 nova_compute[221550]: 2026-01-31 08:45:53.244 221554 DEBUG nova.compute.manager [req-01f76c70-39cc-4083-abea-6b4a356dd7f3 req-86ca6a7c-08fb-45db-bfbc-c2d2f16af1f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received event network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:45:53 np0005603609 nova_compute[221550]: 2026-01-31 08:45:53.245 221554 DEBUG oslo_concurrency.lockutils [req-01f76c70-39cc-4083-abea-6b4a356dd7f3 req-86ca6a7c-08fb-45db-bfbc-c2d2f16af1f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:53 np0005603609 nova_compute[221550]: 2026-01-31 08:45:53.246 221554 DEBUG oslo_concurrency.lockutils [req-01f76c70-39cc-4083-abea-6b4a356dd7f3 req-86ca6a7c-08fb-45db-bfbc-c2d2f16af1f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:53 np0005603609 nova_compute[221550]: 2026-01-31 08:45:53.246 221554 DEBUG oslo_concurrency.lockutils [req-01f76c70-39cc-4083-abea-6b4a356dd7f3 req-86ca6a7c-08fb-45db-bfbc-c2d2f16af1f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:53 np0005603609 nova_compute[221550]: 2026-01-31 08:45:53.247 221554 DEBUG nova.compute.manager [req-01f76c70-39cc-4083-abea-6b4a356dd7f3 req-86ca6a7c-08fb-45db-bfbc-c2d2f16af1f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] No waiting events found dispatching network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:45:53 np0005603609 nova_compute[221550]: 2026-01-31 08:45:53.247 221554 WARNING nova.compute.manager [req-01f76c70-39cc-4083-abea-6b4a356dd7f3 req-86ca6a7c-08fb-45db-bfbc-c2d2f16af1f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received unexpected event network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c for instance with vm_state suspended and task_state None.#033[00m
Jan 31 03:45:53 np0005603609 nova_compute[221550]: 2026-01-31 08:45:53.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:45:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2695108008' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:45:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:45:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2695108008' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:45:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:54.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:54 np0005603609 nova_compute[221550]: 2026-01-31 08:45:54.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:54.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:55 np0005603609 nova_compute[221550]: 2026-01-31 08:45:55.097 221554 INFO nova.compute.manager [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Resuming#033[00m
Jan 31 03:45:55 np0005603609 nova_compute[221550]: 2026-01-31 08:45:55.099 221554 DEBUG nova.objects.instance [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'flavor' on Instance uuid 0a25731c-0886-42e0-ac2e-21d9078873a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:45:55 np0005603609 nova_compute[221550]: 2026-01-31 08:45:55.503 221554 DEBUG oslo_concurrency.lockutils [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:45:55 np0005603609 nova_compute[221550]: 2026-01-31 08:45:55.503 221554 DEBUG oslo_concurrency.lockutils [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:45:55 np0005603609 nova_compute[221550]: 2026-01-31 08:45:55.504 221554 DEBUG nova.network.neutron [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:45:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:56.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:56 np0005603609 nova_compute[221550]: 2026-01-31 08:45:56.691 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:56.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:56 np0005603609 nova_compute[221550]: 2026-01-31 08:45:56.815 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:57 np0005603609 nova_compute[221550]: 2026-01-31 08:45:57.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:57 np0005603609 nova_compute[221550]: 2026-01-31 08:45:57.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:45:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:58.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:45:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:58.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:58.874 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:45:58 np0005603609 nova_compute[221550]: 2026-01-31 08:45:58.874 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:45:58.875 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:46:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:00.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.297 221554 DEBUG nova.network.neutron [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Updating instance_info_cache with network_info: [{"id": "4a83d32e-2424-4610-95a8-707cc88ca85c", "address": "fa:16:3e:de:30:72", "network": {"id": "c76f8f50-9ea9-4f3f-895a-20259e6b3b8e", "bridge": "br-int", "label": "tempest-network-smoke--559564160", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a83d32e-24", "ovs_interfaceid": "4a83d32e-2424-4610-95a8-707cc88ca85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.647 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849145.64569, 0a25731c-0886-42e0-ac2e-21d9078873a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.648 221554 INFO nova.compute.manager [-] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.715 221554 DEBUG oslo_concurrency.lockutils [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.720 221554 DEBUG nova.virt.libvirt.vif [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:44:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1959223726',display_name='tempest-TestNetworkAdvancedServerOps-server-1959223726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1959223726',id=192,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA7+Ui05Zz/RsBbrClJ7vBgaMcvC9QWdCfA17lbaydUCT1aH2bB8o/YdVrWfh1EGjjTYoXOlG7FQaLMqln9+0hIOC8/djk0BDUavPbD5dOYbGsmhY9RYdYJLU0zrRNUiwQ==',key_name='tempest-TestNetworkAdvancedServerOps-2091416548',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:45:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-hlhjz8ig',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:45:45Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=0a25731c-0886-42e0-ac2e-21d9078873a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "4a83d32e-2424-4610-95a8-707cc88ca85c", "address": "fa:16:3e:de:30:72", "network": {"id": "c76f8f50-9ea9-4f3f-895a-20259e6b3b8e", "bridge": "br-int", "label": "tempest-network-smoke--559564160", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a83d32e-24", "ovs_interfaceid": "4a83d32e-2424-4610-95a8-707cc88ca85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.721 221554 DEBUG nova.network.os_vif_util [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "4a83d32e-2424-4610-95a8-707cc88ca85c", "address": "fa:16:3e:de:30:72", "network": {"id": "c76f8f50-9ea9-4f3f-895a-20259e6b3b8e", "bridge": "br-int", "label": "tempest-network-smoke--559564160", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a83d32e-24", "ovs_interfaceid": "4a83d32e-2424-4610-95a8-707cc88ca85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.721 221554 DEBUG nova.network.os_vif_util [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:30:72,bridge_name='br-int',has_traffic_filtering=True,id=4a83d32e-2424-4610-95a8-707cc88ca85c,network=Network(c76f8f50-9ea9-4f3f-895a-20259e6b3b8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a83d32e-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.722 221554 DEBUG os_vif [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:30:72,bridge_name='br-int',has_traffic_filtering=True,id=4a83d32e-2424-4610-95a8-707cc88ca85c,network=Network(c76f8f50-9ea9-4f3f-895a-20259e6b3b8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a83d32e-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.722 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.723 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.723 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.725 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.725 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a83d32e-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.725 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a83d32e-24, col_values=(('external_ids', {'iface-id': '4a83d32e-2424-4610-95a8-707cc88ca85c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:30:72', 'vm-uuid': '0a25731c-0886-42e0-ac2e-21d9078873a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.726 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.726 221554 INFO os_vif [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:30:72,bridge_name='br-int',has_traffic_filtering=True,id=4a83d32e-2424-4610-95a8-707cc88ca85c,network=Network(c76f8f50-9ea9-4f3f-895a-20259e6b3b8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a83d32e-24')#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.741 221554 DEBUG nova.compute.manager [None req-6cb02bff-005b-414d-aa56-44faa826577f - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.762 221554 DEBUG nova.objects.instance [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'numa_topology' on Instance uuid 0a25731c-0886-42e0-ac2e-21d9078873a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:46:00 np0005603609 nova_compute[221550]: 2026-01-31 08:46:00.764 221554 DEBUG nova.compute.manager [None req-6cb02bff-005b-414d-aa56-44faa826577f - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:46:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:46:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:00.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:46:01 np0005603609 nova_compute[221550]: 2026-01-31 08:46:01.153 221554 INFO nova.compute.manager [None req-6cb02bff-005b-414d-aa56-44faa826577f - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Jan 31 03:46:01 np0005603609 kernel: tap4a83d32e-24: entered promiscuous mode
Jan 31 03:46:01 np0005603609 NetworkManager[49064]: <info>  [1769849161.2001] manager: (tap4a83d32e-24): new Tun device (/org/freedesktop/NetworkManager/Devices/407)
Jan 31 03:46:01 np0005603609 ovn_controller[130359]: 2026-01-31T08:46:01Z|00883|binding|INFO|Claiming lport 4a83d32e-2424-4610-95a8-707cc88ca85c for this chassis.
Jan 31 03:46:01 np0005603609 ovn_controller[130359]: 2026-01-31T08:46:01Z|00884|binding|INFO|4a83d32e-2424-4610-95a8-707cc88ca85c: Claiming fa:16:3e:de:30:72 10.100.0.6
Jan 31 03:46:01 np0005603609 nova_compute[221550]: 2026-01-31 08:46:01.202 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:01 np0005603609 ovn_controller[130359]: 2026-01-31T08:46:01Z|00885|binding|INFO|Setting lport 4a83d32e-2424-4610-95a8-707cc88ca85c ovn-installed in OVS
Jan 31 03:46:01 np0005603609 nova_compute[221550]: 2026-01-31 08:46:01.210 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:01 np0005603609 nova_compute[221550]: 2026-01-31 08:46:01.212 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:01 np0005603609 systemd-udevd[305128]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:46:01 np0005603609 systemd-machined[190912]: New machine qemu-105-instance-000000c0.
Jan 31 03:46:01 np0005603609 NetworkManager[49064]: <info>  [1769849161.2433] device (tap4a83d32e-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:46:01 np0005603609 NetworkManager[49064]: <info>  [1769849161.2443] device (tap4a83d32e-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:46:01 np0005603609 systemd[1]: Started Virtual Machine qemu-105-instance-000000c0.
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.362 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:30:72 10.100.0.6'], port_security=['fa:16:3e:de:30:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0a25731c-0886-42e0-ac2e-21d9078873a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c434b9a6-ce01-434c-8159-4fa6c7ce5f42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66082a7f-9219-40e6-91d4-e8db2b40dfed, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=4a83d32e-2424-4610-95a8-707cc88ca85c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:46:01 np0005603609 ovn_controller[130359]: 2026-01-31T08:46:01Z|00886|binding|INFO|Setting lport 4a83d32e-2424-4610-95a8-707cc88ca85c up in Southbound
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.364 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 4a83d32e-2424-4610-95a8-707cc88ca85c in datapath c76f8f50-9ea9-4f3f-895a-20259e6b3b8e bound to our chassis#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.365 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c76f8f50-9ea9-4f3f-895a-20259e6b3b8e#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.376 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a120d929-b026-49de-a905-b4921c892799]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.377 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc76f8f50-91 in ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.378 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc76f8f50-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.378 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5869c01a-3d77-4194-9ac8-40141341f33b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.379 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5983c835-9bec-43e0-87f0-c03894d1f8be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.392 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[7784ad84-3346-47c0-8f50-b2999f400b7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.405 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[add57072-d1ff-43f0-a795-726b43309fa0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.427 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[6b3a1045-36df-4a2a-8568-78012dc31d96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603609 NetworkManager[49064]: <info>  [1769849161.4337] manager: (tapc76f8f50-90): new Veth device (/org/freedesktop/NetworkManager/Devices/408)
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.434 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[80c8085c-39eb-428d-af97-d0aa6e9988a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.460 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[6d19e6b3-70bf-4eb5-a4cb-87465852425a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.463 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2cf7ee-00b7-40e6-80f6-77e62ec9068b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603609 NetworkManager[49064]: <info>  [1769849161.5036] device (tapc76f8f50-90): carrier: link connected
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.505 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[d9acf647-277e-4f1b-b576-ecc392476b39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.517 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[55928990-9626-4520-b922-2700ba25d447]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc76f8f50-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b7:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 271], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 940118, 'reachable_time': 38450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305161, 'error': None, 'target': 'ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.527 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d8815be9-10ed-41f6-8a85-f7848ab01b57]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:b74c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 940118, 'tstamp': 940118}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305162, 'error': None, 'target': 'ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.536 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0d072a1c-ea2d-4f43-80c9-e560484fddde]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc76f8f50-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b7:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 271], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 940118, 'reachable_time': 38450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305163, 'error': None, 'target': 'ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.553 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[69b098d0-4906-4baa-ad0d-4747bc984fc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.592 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[35626a22-5e2f-4e14-ad5b-e76d2c9564b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.593 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc76f8f50-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.594 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.594 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc76f8f50-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:01 np0005603609 nova_compute[221550]: 2026-01-31 08:46:01.596 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:01 np0005603609 kernel: tapc76f8f50-90: entered promiscuous mode
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.598 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc76f8f50-90, col_values=(('external_ids', {'iface-id': 'df0a3c35-9af7-4656-9c5a-b90d4098938d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:01 np0005603609 nova_compute[221550]: 2026-01-31 08:46:01.599 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:01 np0005603609 ovn_controller[130359]: 2026-01-31T08:46:01Z|00887|binding|INFO|Releasing lport df0a3c35-9af7-4656-9c5a-b90d4098938d from this chassis (sb_readonly=0)
Jan 31 03:46:01 np0005603609 NetworkManager[49064]: <info>  [1769849161.6002] manager: (tapc76f8f50-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.600 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c76f8f50-9ea9-4f3f-895a-20259e6b3b8e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c76f8f50-9ea9-4f3f-895a-20259e6b3b8e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.601 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d51bf868-2f6e-4876-b9b4-9efb15c5c688]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.602 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/c76f8f50-9ea9-4f3f-895a-20259e6b3b8e.pid.haproxy
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID c76f8f50-9ea9-4f3f-895a-20259e6b3b8e
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:46:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:01.603 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e', 'env', 'PROCESS_TAG=haproxy-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c76f8f50-9ea9-4f3f-895a-20259e6b3b8e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:46:01 np0005603609 nova_compute[221550]: 2026-01-31 08:46:01.604 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:01 np0005603609 nova_compute[221550]: 2026-01-31 08:46:01.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:01 np0005603609 nova_compute[221550]: 2026-01-31 08:46:01.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:46:01 np0005603609 nova_compute[221550]: 2026-01-31 08:46:01.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:46:01 np0005603609 nova_compute[221550]: 2026-01-31 08:46:01.692 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:01 np0005603609 nova_compute[221550]: 2026-01-31 08:46:01.807 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:46:01 np0005603609 nova_compute[221550]: 2026-01-31 08:46:01.807 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:46:01 np0005603609 nova_compute[221550]: 2026-01-31 08:46:01.807 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:46:01 np0005603609 nova_compute[221550]: 2026-01-31 08:46:01.808 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a25731c-0886-42e0-ac2e-21d9078873a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:46:01 np0005603609 nova_compute[221550]: 2026-01-31 08:46:01.815 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:01 np0005603609 podman[305202]: 2026-01-31 08:46:01.884009986 +0000 UTC m=+0.042936296 container create 97759ceeb44d3efce821296f3b7b29d691d3893cc3b99d0f25d6881fdeca540a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:46:01 np0005603609 systemd[1]: Started libpod-conmon-97759ceeb44d3efce821296f3b7b29d691d3893cc3b99d0f25d6881fdeca540a.scope.
Jan 31 03:46:01 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:46:01 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba9aa9bbee4b9c359051d4bea7bd839608c5101844ab43c8bc5ea50492f841f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:01 np0005603609 podman[305202]: 2026-01-31 08:46:01.939379924 +0000 UTC m=+0.098306234 container init 97759ceeb44d3efce821296f3b7b29d691d3893cc3b99d0f25d6881fdeca540a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 03:46:01 np0005603609 podman[305202]: 2026-01-31 08:46:01.945707648 +0000 UTC m=+0.104633958 container start 97759ceeb44d3efce821296f3b7b29d691d3893cc3b99d0f25d6881fdeca540a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:46:01 np0005603609 podman[305202]: 2026-01-31 08:46:01.86032064 +0000 UTC m=+0.019246980 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:46:01 np0005603609 neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e[305250]: [NOTICE]   (305255) : New worker (305258) forked
Jan 31 03:46:01 np0005603609 neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e[305250]: [NOTICE]   (305255) : Loading success.
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.072 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849162.0722256, 0a25731c-0886-42e0-ac2e-21d9078873a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.073 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] VM Started (Lifecycle Event)#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.102 221554 DEBUG nova.compute.manager [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.103 221554 DEBUG nova.objects.instance [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'pci_devices' on Instance uuid 0a25731c-0886-42e0-ac2e-21d9078873a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:46:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:02.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.342 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.350 221554 INFO nova.virt.libvirt.driver [-] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Instance running successfully.#033[00m
Jan 31 03:46:02 np0005603609 virtqemud[221292]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.353 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.355 221554 DEBUG nova.virt.libvirt.guest [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.356 221554 DEBUG nova.compute.manager [None req-c20c69b3-9ce9-4465-a378-ff8c0eae6be7 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.477 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.478 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849162.0758276, 0a25731c-0886-42e0-ac2e-21d9078873a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.478 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.681 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.684 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.787 221554 DEBUG nova.compute.manager [req-d025e5ce-addc-4430-8084-e2a6ad8b6b1e req-fe82af68-4ebf-4242-8b05-f42987036305 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received event network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.788 221554 DEBUG oslo_concurrency.lockutils [req-d025e5ce-addc-4430-8084-e2a6ad8b6b1e req-fe82af68-4ebf-4242-8b05-f42987036305 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.788 221554 DEBUG oslo_concurrency.lockutils [req-d025e5ce-addc-4430-8084-e2a6ad8b6b1e req-fe82af68-4ebf-4242-8b05-f42987036305 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.788 221554 DEBUG oslo_concurrency.lockutils [req-d025e5ce-addc-4430-8084-e2a6ad8b6b1e req-fe82af68-4ebf-4242-8b05-f42987036305 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.788 221554 DEBUG nova.compute.manager [req-d025e5ce-addc-4430-8084-e2a6ad8b6b1e req-fe82af68-4ebf-4242-8b05-f42987036305 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] No waiting events found dispatching network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:46:02 np0005603609 nova_compute[221550]: 2026-01-31 08:46:02.788 221554 WARNING nova.compute.manager [req-d025e5ce-addc-4430-8084-e2a6ad8b6b1e req-fe82af68-4ebf-4242-8b05-f42987036305 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received unexpected event network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c for instance with vm_state suspended and task_state resuming.#033[00m
Jan 31 03:46:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:02.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:46:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:04.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:46:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:04.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:05.877 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:06 np0005603609 nova_compute[221550]: 2026-01-31 08:46:06.074 221554 DEBUG nova.compute.manager [req-a58a679f-ec2a-48f1-a8b6-3f12476839ff req-bc4e068d-d024-499e-88ce-1cba478edc3d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received event network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:46:06 np0005603609 nova_compute[221550]: 2026-01-31 08:46:06.074 221554 DEBUG oslo_concurrency.lockutils [req-a58a679f-ec2a-48f1-a8b6-3f12476839ff req-bc4e068d-d024-499e-88ce-1cba478edc3d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:06 np0005603609 nova_compute[221550]: 2026-01-31 08:46:06.075 221554 DEBUG oslo_concurrency.lockutils [req-a58a679f-ec2a-48f1-a8b6-3f12476839ff req-bc4e068d-d024-499e-88ce-1cba478edc3d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:06 np0005603609 nova_compute[221550]: 2026-01-31 08:46:06.075 221554 DEBUG oslo_concurrency.lockutils [req-a58a679f-ec2a-48f1-a8b6-3f12476839ff req-bc4e068d-d024-499e-88ce-1cba478edc3d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:06 np0005603609 nova_compute[221550]: 2026-01-31 08:46:06.075 221554 DEBUG nova.compute.manager [req-a58a679f-ec2a-48f1-a8b6-3f12476839ff req-bc4e068d-d024-499e-88ce-1cba478edc3d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] No waiting events found dispatching network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:46:06 np0005603609 nova_compute[221550]: 2026-01-31 08:46:06.075 221554 WARNING nova.compute.manager [req-a58a679f-ec2a-48f1-a8b6-3f12476839ff req-bc4e068d-d024-499e-88ce-1cba478edc3d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received unexpected event network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c for instance with vm_state active and task_state None.#033[00m
Jan 31 03:46:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:46:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:06.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:46:06 np0005603609 nova_compute[221550]: 2026-01-31 08:46:06.694 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:06 np0005603609 nova_compute[221550]: 2026-01-31 08:46:06.706 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Updating instance_info_cache with network_info: [{"id": "4a83d32e-2424-4610-95a8-707cc88ca85c", "address": "fa:16:3e:de:30:72", "network": {"id": "c76f8f50-9ea9-4f3f-895a-20259e6b3b8e", "bridge": "br-int", "label": "tempest-network-smoke--559564160", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a83d32e-24", "ovs_interfaceid": "4a83d32e-2424-4610-95a8-707cc88ca85c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:46:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:46:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:06.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:46:06 np0005603609 nova_compute[221550]: 2026-01-31 08:46:06.818 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:07 np0005603609 nova_compute[221550]: 2026-01-31 08:46:07.006 221554 INFO nova.compute.manager [None req-65342b8b-9bc9-4abc-9be0-3f3d5a8dcab0 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Get console output#033[00m
Jan 31 03:46:07 np0005603609 nova_compute[221550]: 2026-01-31 08:46:07.014 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:46:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:07.538 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:07.539 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:07.539 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:08.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:08.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:10 np0005603609 podman[305268]: 2026-01-31 08:46:10.157223724 +0000 UTC m=+0.041380359 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 03:46:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:10.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:10 np0005603609 podman[305267]: 2026-01-31 08:46:10.210799987 +0000 UTC m=+0.095521565 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 03:46:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:10.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:11 np0005603609 nova_compute[221550]: 2026-01-31 08:46:11.697 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:11 np0005603609 nova_compute[221550]: 2026-01-31 08:46:11.821 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:11 np0005603609 nova_compute[221550]: 2026-01-31 08:46:11.943 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:46:11 np0005603609 nova_compute[221550]: 2026-01-31 08:46:11.943 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:46:11 np0005603609 nova_compute[221550]: 2026-01-31 08:46:11.944 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:12.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:46:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:12.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:46:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:14.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:14.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:15 np0005603609 nova_compute[221550]: 2026-01-31 08:46:15.608 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:46:15 np0005603609 nova_compute[221550]: 2026-01-31 08:46:15.609 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:46:15 np0005603609 nova_compute[221550]: 2026-01-31 08:46:15.609 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:46:15 np0005603609 nova_compute[221550]: 2026-01-31 08:46:15.610 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 03:46:15 np0005603609 nova_compute[221550]: 2026-01-31 08:46:15.610 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:46:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:46:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/33009808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:46:16 np0005603609 nova_compute[221550]: 2026-01-31 08:46:16.019 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:46:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:16.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:16 np0005603609 nova_compute[221550]: 2026-01-31 08:46:16.272 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000c0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:46:16 np0005603609 nova_compute[221550]: 2026-01-31 08:46:16.272 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000c0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:46:16 np0005603609 nova_compute[221550]: 2026-01-31 08:46:16.388 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:46:16 np0005603609 nova_compute[221550]: 2026-01-31 08:46:16.389 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4020MB free_disk=20.942718505859375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 03:46:16 np0005603609 nova_compute[221550]: 2026-01-31 08:46:16.389 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:46:16 np0005603609 nova_compute[221550]: 2026-01-31 08:46:16.390 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:46:16 np0005603609 nova_compute[221550]: 2026-01-31 08:46:16.699 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:46:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:16.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:16 np0005603609 nova_compute[221550]: 2026-01-31 08:46:16.823 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:46:16 np0005603609 nova_compute[221550]: 2026-01-31 08:46:16.828 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 0a25731c-0886-42e0-ac2e-21d9078873a4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 03:46:16 np0005603609 nova_compute[221550]: 2026-01-31 08:46:16.828 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 03:46:16 np0005603609 nova_compute[221550]: 2026-01-31 08:46:16.828 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 03:46:16 np0005603609 nova_compute[221550]: 2026-01-31 08:46:16.894 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:46:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:46:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/934742884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:46:17 np0005603609 nova_compute[221550]: 2026-01-31 08:46:17.302 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:46:17 np0005603609 nova_compute[221550]: 2026-01-31 08:46:17.307 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:46:17 np0005603609 nova_compute[221550]: 2026-01-31 08:46:17.485 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:46:18 np0005603609 nova_compute[221550]: 2026-01-31 08:46:18.154 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 03:46:18 np0005603609 nova_compute[221550]: 2026-01-31 08:46:18.154 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:46:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:18.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:46:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:18.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:46:18 np0005603609 nova_compute[221550]: 2026-01-31 08:46:18.870 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:46:18 np0005603609 nova_compute[221550]: 2026-01-31 08:46:18.871 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:46:18 np0005603609 nova_compute[221550]: 2026-01-31 08:46:18.871 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:46:19 np0005603609 nova_compute[221550]: 2026-01-31 08:46:19.657 221554 DEBUG nova.compute.manager [req-41eb2ee8-0031-4723-81db-7a3d837c927e req-4ee2d1df-6b06-4889-9ffc-9371f1414fec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received event network-changed-4a83d32e-2424-4610-95a8-707cc88ca85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:46:19 np0005603609 nova_compute[221550]: 2026-01-31 08:46:19.657 221554 DEBUG nova.compute.manager [req-41eb2ee8-0031-4723-81db-7a3d837c927e req-4ee2d1df-6b06-4889-9ffc-9371f1414fec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Refreshing instance network info cache due to event network-changed-4a83d32e-2424-4610-95a8-707cc88ca85c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:46:19 np0005603609 nova_compute[221550]: 2026-01-31 08:46:19.657 221554 DEBUG oslo_concurrency.lockutils [req-41eb2ee8-0031-4723-81db-7a3d837c927e req-4ee2d1df-6b06-4889-9ffc-9371f1414fec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:46:19 np0005603609 nova_compute[221550]: 2026-01-31 08:46:19.657 221554 DEBUG oslo_concurrency.lockutils [req-41eb2ee8-0031-4723-81db-7a3d837c927e req-4ee2d1df-6b06-4889-9ffc-9371f1414fec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:46:19 np0005603609 nova_compute[221550]: 2026-01-31 08:46:19.658 221554 DEBUG nova.network.neutron [req-41eb2ee8-0031-4723-81db-7a3d837c927e req-4ee2d1df-6b06-4889-9ffc-9371f1414fec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Refreshing network info cache for port 4a83d32e-2424-4610-95a8-707cc88ca85c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:46:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:20.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:46:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:20.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:46:20 np0005603609 nova_compute[221550]: 2026-01-31 08:46:20.834 221554 DEBUG oslo_concurrency.lockutils [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "0a25731c-0886-42e0-ac2e-21d9078873a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:46:20 np0005603609 nova_compute[221550]: 2026-01-31 08:46:20.835 221554 DEBUG oslo_concurrency.lockutils [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:46:20 np0005603609 nova_compute[221550]: 2026-01-31 08:46:20.835 221554 DEBUG oslo_concurrency.lockutils [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:46:20 np0005603609 nova_compute[221550]: 2026-01-31 08:46:20.835 221554 DEBUG oslo_concurrency.lockutils [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:46:20 np0005603609 nova_compute[221550]: 2026-01-31 08:46:20.836 221554 DEBUG oslo_concurrency.lockutils [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:46:20 np0005603609 nova_compute[221550]: 2026-01-31 08:46:20.837 221554 INFO nova.compute.manager [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Terminating instance
Jan 31 03:46:20 np0005603609 nova_compute[221550]: 2026-01-31 08:46:20.838 221554 DEBUG nova.compute.manager [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 03:46:20 np0005603609 kernel: tap4a83d32e-24 (unregistering): left promiscuous mode
Jan 31 03:46:20 np0005603609 NetworkManager[49064]: <info>  [1769849180.9137] device (tap4a83d32e-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:46:20 np0005603609 nova_compute[221550]: 2026-01-31 08:46:20.912 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:46:20 np0005603609 nova_compute[221550]: 2026-01-31 08:46:20.921 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:46:20 np0005603609 ovn_controller[130359]: 2026-01-31T08:46:20Z|00888|binding|INFO|Releasing lport 4a83d32e-2424-4610-95a8-707cc88ca85c from this chassis (sb_readonly=0)
Jan 31 03:46:20 np0005603609 ovn_controller[130359]: 2026-01-31T08:46:20Z|00889|binding|INFO|Setting lport 4a83d32e-2424-4610-95a8-707cc88ca85c down in Southbound
Jan 31 03:46:20 np0005603609 ovn_controller[130359]: 2026-01-31T08:46:20Z|00890|binding|INFO|Removing iface tap4a83d32e-24 ovn-installed in OVS
Jan 31 03:46:20 np0005603609 nova_compute[221550]: 2026-01-31 08:46:20.924 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:46:20 np0005603609 nova_compute[221550]: 2026-01-31 08:46:20.931 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:46:20 np0005603609 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d000000c0.scope: Deactivated successfully.
Jan 31 03:46:20 np0005603609 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d000000c0.scope: Consumed 1.485s CPU time.
Jan 31 03:46:20 np0005603609 systemd-machined[190912]: Machine qemu-105-instance-000000c0 terminated.
Jan 31 03:46:21 np0005603609 nova_compute[221550]: 2026-01-31 08:46:21.082 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:46:21 np0005603609 nova_compute[221550]: 2026-01-31 08:46:21.101 221554 INFO nova.virt.libvirt.driver [-] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Instance destroyed successfully.
Jan 31 03:46:21 np0005603609 nova_compute[221550]: 2026-01-31 08:46:21.102 221554 DEBUG nova.objects.instance [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'resources' on Instance uuid 0a25731c-0886-42e0-ac2e-21d9078873a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:46:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:21.168 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:30:72 10.100.0.6'], port_security=['fa:16:3e:de:30:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0a25731c-0886-42e0-ac2e-21d9078873a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c434b9a6-ce01-434c-8159-4fa6c7ce5f42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66082a7f-9219-40e6-91d4-e8db2b40dfed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=4a83d32e-2424-4610-95a8-707cc88ca85c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:46:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:21.169 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 4a83d32e-2424-4610-95a8-707cc88ca85c in datapath c76f8f50-9ea9-4f3f-895a-20259e6b3b8e unbound from our chassis
Jan 31 03:46:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:21.170 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c76f8f50-9ea9-4f3f-895a-20259e6b3b8e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 03:46:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:21.172 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4e0700-f316-42b7-817a-e52eaa85add7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:46:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:21.172 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e namespace which is not needed anymore
Jan 31 03:46:21 np0005603609 neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e[305250]: [NOTICE]   (305255) : haproxy version is 2.8.14-c23fe91
Jan 31 03:46:21 np0005603609 neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e[305250]: [NOTICE]   (305255) : path to executable is /usr/sbin/haproxy
Jan 31 03:46:21 np0005603609 neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e[305250]: [ALERT]    (305255) : Current worker (305258) exited with code 143 (Terminated)
Jan 31 03:46:21 np0005603609 neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e[305250]: [WARNING]  (305255) : All workers exited. Exiting... (0)
Jan 31 03:46:21 np0005603609 systemd[1]: libpod-97759ceeb44d3efce821296f3b7b29d691d3893cc3b99d0f25d6881fdeca540a.scope: Deactivated successfully.
Jan 31 03:46:21 np0005603609 podman[305386]: 2026-01-31 08:46:21.300426073 +0000 UTC m=+0.044659928 container died 97759ceeb44d3efce821296f3b7b29d691d3893cc3b99d0f25d6881fdeca540a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:46:21 np0005603609 systemd[1]: var-lib-containers-storage-overlay-7ba9aa9bbee4b9c359051d4bea7bd839608c5101844ab43c8bc5ea50492f841f-merged.mount: Deactivated successfully.
Jan 31 03:46:21 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97759ceeb44d3efce821296f3b7b29d691d3893cc3b99d0f25d6881fdeca540a-userdata-shm.mount: Deactivated successfully.
Jan 31 03:46:21 np0005603609 podman[305386]: 2026-01-31 08:46:21.333400975 +0000 UTC m=+0.077634830 container cleanup 97759ceeb44d3efce821296f3b7b29d691d3893cc3b99d0f25d6881fdeca540a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 03:46:21 np0005603609 systemd[1]: libpod-conmon-97759ceeb44d3efce821296f3b7b29d691d3893cc3b99d0f25d6881fdeca540a.scope: Deactivated successfully.
Jan 31 03:46:21 np0005603609 nova_compute[221550]: 2026-01-31 08:46:21.369 221554 DEBUG nova.virt.libvirt.vif [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:44:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1959223726',display_name='tempest-TestNetworkAdvancedServerOps-server-1959223726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1959223726',id=192,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA7+Ui05Zz/RsBbrClJ7vBgaMcvC9QWdCfA17lbaydUCT1aH2bB8o/YdVrWfh1EGjjTYoXOlG7FQaLMqln9+0hIOC8/djk0BDUavPbD5dOYbGsmhY9RYdYJLU0zrRNUiwQ==',key_name='tempest-TestNetworkAdvancedServerOps-2091416548',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:45:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-hlhjz8ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:46:02Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=0a25731c-0886-42e0-ac2e-21d9078873a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a83d32e-2424-4610-95a8-707cc88ca85c", "address": "fa:16:3e:de:30:72", "network": {"id": "c76f8f50-9ea9-4f3f-895a-20259e6b3b8e", "bridge": "br-int", "label": "tempest-network-smoke--559564160", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a83d32e-24", "ovs_interfaceid": "4a83d32e-2424-4610-95a8-707cc88ca85c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:46:21 np0005603609 nova_compute[221550]: 2026-01-31 08:46:21.370 221554 DEBUG nova.network.os_vif_util [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "4a83d32e-2424-4610-95a8-707cc88ca85c", "address": "fa:16:3e:de:30:72", "network": {"id": "c76f8f50-9ea9-4f3f-895a-20259e6b3b8e", "bridge": "br-int", "label": "tempest-network-smoke--559564160", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a83d32e-24", "ovs_interfaceid": "4a83d32e-2424-4610-95a8-707cc88ca85c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:46:21 np0005603609 nova_compute[221550]: 2026-01-31 08:46:21.371 221554 DEBUG nova.network.os_vif_util [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:de:30:72,bridge_name='br-int',has_traffic_filtering=True,id=4a83d32e-2424-4610-95a8-707cc88ca85c,network=Network(c76f8f50-9ea9-4f3f-895a-20259e6b3b8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a83d32e-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:46:21 np0005603609 nova_compute[221550]: 2026-01-31 08:46:21.372 221554 DEBUG os_vif [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:30:72,bridge_name='br-int',has_traffic_filtering=True,id=4a83d32e-2424-4610-95a8-707cc88ca85c,network=Network(c76f8f50-9ea9-4f3f-895a-20259e6b3b8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a83d32e-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:46:21 np0005603609 nova_compute[221550]: 2026-01-31 08:46:21.374 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:21 np0005603609 nova_compute[221550]: 2026-01-31 08:46:21.374 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a83d32e-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:21 np0005603609 nova_compute[221550]: 2026-01-31 08:46:21.376 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:21 np0005603609 nova_compute[221550]: 2026-01-31 08:46:21.377 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:21 np0005603609 nova_compute[221550]: 2026-01-31 08:46:21.380 221554 INFO os_vif [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:30:72,bridge_name='br-int',has_traffic_filtering=True,id=4a83d32e-2424-4610-95a8-707cc88ca85c,network=Network(c76f8f50-9ea9-4f3f-895a-20259e6b3b8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a83d32e-24')#033[00m
Jan 31 03:46:21 np0005603609 podman[305417]: 2026-01-31 08:46:21.389325646 +0000 UTC m=+0.040350173 container remove 97759ceeb44d3efce821296f3b7b29d691d3893cc3b99d0f25d6881fdeca540a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 03:46:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:21.393 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e578d3a6-c672-4ab0-93ab-1b02d55b1147]: (4, ('Sat Jan 31 08:46:21 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e (97759ceeb44d3efce821296f3b7b29d691d3893cc3b99d0f25d6881fdeca540a)\n97759ceeb44d3efce821296f3b7b29d691d3893cc3b99d0f25d6881fdeca540a\nSat Jan 31 08:46:21 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e (97759ceeb44d3efce821296f3b7b29d691d3893cc3b99d0f25d6881fdeca540a)\n97759ceeb44d3efce821296f3b7b29d691d3893cc3b99d0f25d6881fdeca540a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:21.396 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[db52a641-45b0-49f1-921b-909632309e5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:21.400 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc76f8f50-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:21 np0005603609 kernel: tapc76f8f50-90: left promiscuous mode
Jan 31 03:46:21 np0005603609 nova_compute[221550]: 2026-01-31 08:46:21.405 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:21.409 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b7926d97-b390-498d-909e-dc73a1b509e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:21.427 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[efac6286-4533-46e6-9678-47b756ea258b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:21.429 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1db1ae96-6544-4b0a-a4a5-abc0894d07f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:21.445 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[05154936-dbf0-4c27-a9bd-724eb3a0ed87]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 940112, 'reachable_time': 44621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305450, 'error': None, 'target': 'ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:21 np0005603609 systemd[1]: run-netns-ovnmeta\x2dc76f8f50\x2d9ea9\x2d4f3f\x2d895a\x2d20259e6b3b8e.mount: Deactivated successfully.
Jan 31 03:46:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:21.448 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c76f8f50-9ea9-4f3f-895a-20259e6b3b8e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:46:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:46:21.448 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[75e76c17-5d6f-4b46-a18a-ec21868e9965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:21 np0005603609 nova_compute[221550]: 2026-01-31 08:46:21.826 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:22.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:22 np0005603609 nova_compute[221550]: 2026-01-31 08:46:22.601 221554 INFO nova.virt.libvirt.driver [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Deleting instance files /var/lib/nova/instances/0a25731c-0886-42e0-ac2e-21d9078873a4_del#033[00m
Jan 31 03:46:22 np0005603609 nova_compute[221550]: 2026-01-31 08:46:22.602 221554 INFO nova.virt.libvirt.driver [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Deletion of /var/lib/nova/instances/0a25731c-0886-42e0-ac2e-21d9078873a4_del complete#033[00m
Jan 31 03:46:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:22.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:23 np0005603609 nova_compute[221550]: 2026-01-31 08:46:23.035 221554 INFO nova.compute.manager [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Took 2.20 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:46:23 np0005603609 nova_compute[221550]: 2026-01-31 08:46:23.035 221554 DEBUG oslo.service.loopingcall [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:46:23 np0005603609 nova_compute[221550]: 2026-01-31 08:46:23.035 221554 DEBUG nova.compute.manager [-] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:46:23 np0005603609 nova_compute[221550]: 2026-01-31 08:46:23.036 221554 DEBUG nova.network.neutron [-] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:46:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:24.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:24.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:26.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:26 np0005603609 nova_compute[221550]: 2026-01-31 08:46:26.271 221554 DEBUG nova.network.neutron [req-41eb2ee8-0031-4723-81db-7a3d837c927e req-4ee2d1df-6b06-4889-9ffc-9371f1414fec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Updated VIF entry in instance network info cache for port 4a83d32e-2424-4610-95a8-707cc88ca85c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:46:26 np0005603609 nova_compute[221550]: 2026-01-31 08:46:26.272 221554 DEBUG nova.network.neutron [req-41eb2ee8-0031-4723-81db-7a3d837c927e req-4ee2d1df-6b06-4889-9ffc-9371f1414fec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Updating instance_info_cache with network_info: [{"id": "4a83d32e-2424-4610-95a8-707cc88ca85c", "address": "fa:16:3e:de:30:72", "network": {"id": "c76f8f50-9ea9-4f3f-895a-20259e6b3b8e", "bridge": "br-int", "label": "tempest-network-smoke--559564160", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a83d32e-24", "ovs_interfaceid": "4a83d32e-2424-4610-95a8-707cc88ca85c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:46:26 np0005603609 nova_compute[221550]: 2026-01-31 08:46:26.376 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:26 np0005603609 nova_compute[221550]: 2026-01-31 08:46:26.826 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:46:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:26.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:46:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:27 np0005603609 nova_compute[221550]: 2026-01-31 08:46:27.806 221554 DEBUG nova.compute.manager [req-9248a2f6-a9ee-4482-8cbb-65bb6d897845 req-2a32900b-872c-4a13-94d4-acf232dfe042 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received event network-vif-unplugged-4a83d32e-2424-4610-95a8-707cc88ca85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:46:27 np0005603609 nova_compute[221550]: 2026-01-31 08:46:27.806 221554 DEBUG oslo_concurrency.lockutils [req-9248a2f6-a9ee-4482-8cbb-65bb6d897845 req-2a32900b-872c-4a13-94d4-acf232dfe042 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:27 np0005603609 nova_compute[221550]: 2026-01-31 08:46:27.806 221554 DEBUG oslo_concurrency.lockutils [req-9248a2f6-a9ee-4482-8cbb-65bb6d897845 req-2a32900b-872c-4a13-94d4-acf232dfe042 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:27 np0005603609 nova_compute[221550]: 2026-01-31 08:46:27.806 221554 DEBUG oslo_concurrency.lockutils [req-9248a2f6-a9ee-4482-8cbb-65bb6d897845 req-2a32900b-872c-4a13-94d4-acf232dfe042 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:27 np0005603609 nova_compute[221550]: 2026-01-31 08:46:27.806 221554 DEBUG nova.compute.manager [req-9248a2f6-a9ee-4482-8cbb-65bb6d897845 req-2a32900b-872c-4a13-94d4-acf232dfe042 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] No waiting events found dispatching network-vif-unplugged-4a83d32e-2424-4610-95a8-707cc88ca85c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:46:27 np0005603609 nova_compute[221550]: 2026-01-31 08:46:27.807 221554 DEBUG nova.compute.manager [req-9248a2f6-a9ee-4482-8cbb-65bb6d897845 req-2a32900b-872c-4a13-94d4-acf232dfe042 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received event network-vif-unplugged-4a83d32e-2424-4610-95a8-707cc88ca85c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:46:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:28.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:28.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:29 np0005603609 nova_compute[221550]: 2026-01-31 08:46:29.255 221554 DEBUG oslo_concurrency.lockutils [req-41eb2ee8-0031-4723-81db-7a3d837c927e req-4ee2d1df-6b06-4889-9ffc-9371f1414fec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-0a25731c-0886-42e0-ac2e-21d9078873a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:46:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:46:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 59K writes, 233K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.04 MB/s#012Cumulative WAL: 59K writes, 21K syncs, 2.73 writes per sync, written: 0.22 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4314 writes, 16K keys, 4314 commit groups, 1.0 writes per commit group, ingest: 16.57 MB, 0.03 MB/s#012Interval WAL: 4315 writes, 1680 syncs, 2.57 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f1fef0cf30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f1fef0cf30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 11 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Jan 31 03:46:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:30.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:46:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:46:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:46:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:46:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:46:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:30.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:31 np0005603609 nova_compute[221550]: 2026-01-31 08:46:31.378 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:31 np0005603609 nova_compute[221550]: 2026-01-31 08:46:31.828 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:32.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:32 np0005603609 nova_compute[221550]: 2026-01-31 08:46:32.597 221554 DEBUG nova.network.neutron [-] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:46:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:32.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:32 np0005603609 nova_compute[221550]: 2026-01-31 08:46:32.914 221554 DEBUG nova.compute.manager [req-4bfc91aa-605c-4938-ad2b-6bd80f98a9bb req-accb90a9-9d17-4471-baa5-862101e8f3b9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received event network-vif-deleted-4a83d32e-2424-4610-95a8-707cc88ca85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:46:32 np0005603609 nova_compute[221550]: 2026-01-31 08:46:32.915 221554 INFO nova.compute.manager [req-4bfc91aa-605c-4938-ad2b-6bd80f98a9bb req-accb90a9-9d17-4471-baa5-862101e8f3b9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Neutron deleted interface 4a83d32e-2424-4610-95a8-707cc88ca85c; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:46:32 np0005603609 nova_compute[221550]: 2026-01-31 08:46:32.915 221554 DEBUG nova.network.neutron [req-4bfc91aa-605c-4938-ad2b-6bd80f98a9bb req-accb90a9-9d17-4471-baa5-862101e8f3b9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:46:32 np0005603609 nova_compute[221550]: 2026-01-31 08:46:32.917 221554 DEBUG nova.compute.manager [req-4d84852f-9274-4919-bd30-631de8a35477 req-8f909878-8131-4d17-8c96-36e7adeb1bcd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received event network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:46:32 np0005603609 nova_compute[221550]: 2026-01-31 08:46:32.918 221554 DEBUG oslo_concurrency.lockutils [req-4d84852f-9274-4919-bd30-631de8a35477 req-8f909878-8131-4d17-8c96-36e7adeb1bcd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:32 np0005603609 nova_compute[221550]: 2026-01-31 08:46:32.918 221554 DEBUG oslo_concurrency.lockutils [req-4d84852f-9274-4919-bd30-631de8a35477 req-8f909878-8131-4d17-8c96-36e7adeb1bcd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:32 np0005603609 nova_compute[221550]: 2026-01-31 08:46:32.918 221554 DEBUG oslo_concurrency.lockutils [req-4d84852f-9274-4919-bd30-631de8a35477 req-8f909878-8131-4d17-8c96-36e7adeb1bcd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:32 np0005603609 nova_compute[221550]: 2026-01-31 08:46:32.919 221554 DEBUG nova.compute.manager [req-4d84852f-9274-4919-bd30-631de8a35477 req-8f909878-8131-4d17-8c96-36e7adeb1bcd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] No waiting events found dispatching network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:46:32 np0005603609 nova_compute[221550]: 2026-01-31 08:46:32.919 221554 WARNING nova.compute.manager [req-4d84852f-9274-4919-bd30-631de8a35477 req-8f909878-8131-4d17-8c96-36e7adeb1bcd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Received unexpected event network-vif-plugged-4a83d32e-2424-4610-95a8-707cc88ca85c for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:46:33 np0005603609 nova_compute[221550]: 2026-01-31 08:46:33.703 221554 INFO nova.compute.manager [-] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Took 10.67 seconds to deallocate network for instance.#033[00m
Jan 31 03:46:33 np0005603609 nova_compute[221550]: 2026-01-31 08:46:33.750 221554 DEBUG nova.compute.manager [req-4bfc91aa-605c-4938-ad2b-6bd80f98a9bb req-accb90a9-9d17-4471-baa5-862101e8f3b9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Detach interface failed, port_id=4a83d32e-2424-4610-95a8-707cc88ca85c, reason: Instance 0a25731c-0886-42e0-ac2e-21d9078873a4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:46:34 np0005603609 nova_compute[221550]: 2026-01-31 08:46:34.182 221554 DEBUG oslo_concurrency.lockutils [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:34 np0005603609 nova_compute[221550]: 2026-01-31 08:46:34.182 221554 DEBUG oslo_concurrency.lockutils [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:46:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:34.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:46:34 np0005603609 nova_compute[221550]: 2026-01-31 08:46:34.754 221554 DEBUG oslo_concurrency.processutils [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:46:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:34.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:46:35 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2492724714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:46:35 np0005603609 nova_compute[221550]: 2026-01-31 08:46:35.193 221554 DEBUG oslo_concurrency.processutils [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:46:35 np0005603609 nova_compute[221550]: 2026-01-31 08:46:35.198 221554 DEBUG nova.compute.provider_tree [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:46:36 np0005603609 nova_compute[221550]: 2026-01-31 08:46:36.100 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849181.099644, 0a25731c-0886-42e0-ac2e-21d9078873a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:46:36 np0005603609 nova_compute[221550]: 2026-01-31 08:46:36.101 221554 INFO nova.compute.manager [-] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:46:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:36.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:36 np0005603609 nova_compute[221550]: 2026-01-31 08:46:36.380 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:46:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:36.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:46:36 np0005603609 nova_compute[221550]: 2026-01-31 08:46:36.871 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:46:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:46:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:38.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:38.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:39 np0005603609 nova_compute[221550]: 2026-01-31 08:46:39.591 221554 DEBUG nova.compute.manager [None req-4a09e1e6-9161-4037-8b70-07a9f1823835 - - - - - -] [instance: 0a25731c-0886-42e0-ac2e-21d9078873a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:46:39 np0005603609 nova_compute[221550]: 2026-01-31 08:46:39.592 221554 DEBUG nova.scheduler.client.report [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:46:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:40.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:40.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:41 np0005603609 podman[305659]: 2026-01-31 08:46:41.179992757 +0000 UTC m=+0.060843422 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:46:41 np0005603609 podman[305658]: 2026-01-31 08:46:41.207288441 +0000 UTC m=+0.087882849 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:46:41 np0005603609 nova_compute[221550]: 2026-01-31 08:46:41.382 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:41 np0005603609 nova_compute[221550]: 2026-01-31 08:46:41.930 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:42.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:42 np0005603609 nova_compute[221550]: 2026-01-31 08:46:42.691 221554 DEBUG oslo_concurrency.lockutils [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 8.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:42.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:43 np0005603609 nova_compute[221550]: 2026-01-31 08:46:43.134 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:43 np0005603609 nova_compute[221550]: 2026-01-31 08:46:43.202 221554 INFO nova.scheduler.client.report [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Deleted allocations for instance 0a25731c-0886-42e0-ac2e-21d9078873a4#033[00m
Jan 31 03:46:43 np0005603609 nova_compute[221550]: 2026-01-31 08:46:43.221 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:46:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:44.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:46:44 np0005603609 nova_compute[221550]: 2026-01-31 08:46:44.528 221554 DEBUG oslo_concurrency.lockutils [None req-5fce889b-f784-4592-8035-9deca376e9d4 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "0a25731c-0886-42e0-ac2e-21d9078873a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 23.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:46:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:44.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:46:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:46:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:46.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:46:46 np0005603609 nova_compute[221550]: 2026-01-31 08:46:46.384 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:46:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:46.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:46:46 np0005603609 nova_compute[221550]: 2026-01-31 08:46:46.933 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:46:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:48.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:46:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:46:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:48.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:46:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 03:46:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:50.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 03:46:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:46:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:50.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:46:51 np0005603609 nova_compute[221550]: 2026-01-31 08:46:51.430 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:51 np0005603609 nova_compute[221550]: 2026-01-31 08:46:51.935 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:46:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:52.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:46:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:46:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:52.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:46:53 np0005603609 nova_compute[221550]: 2026-01-31 08:46:53.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:46:54.154371) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214154440, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1881, "num_deletes": 256, "total_data_size": 4521398, "memory_usage": 4601904, "flush_reason": "Manual Compaction"}
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214170200, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 2949889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79493, "largest_seqno": 81368, "table_properties": {"data_size": 2942105, "index_size": 4662, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16009, "raw_average_key_size": 19, "raw_value_size": 2926626, "raw_average_value_size": 3644, "num_data_blocks": 205, "num_entries": 803, "num_filter_entries": 803, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849045, "oldest_key_time": 1769849045, "file_creation_time": 1769849214, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 15881 microseconds, and 5007 cpu microseconds.
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:46:54.170254) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 2949889 bytes OK
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:46:54.170280) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:46:54.172315) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:46:54.172331) EVENT_LOG_v1 {"time_micros": 1769849214172325, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:46:54.172350) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 4512991, prev total WAL file size 4512991, number of live WAL files 2.
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:46:54.173222) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303134' seq:72057594037927935, type:22 .. '6C6F676D0033323636' seq:0, type:0; will stop at (end)
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(2880KB)], [162(11MB)]
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214173283, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 14989970, "oldest_snapshot_seqno": -1}
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 10309 keys, 14840698 bytes, temperature: kUnknown
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214256439, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 14840698, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14772730, "index_size": 41103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25797, "raw_key_size": 271591, "raw_average_key_size": 26, "raw_value_size": 14590950, "raw_average_value_size": 1415, "num_data_blocks": 1577, "num_entries": 10309, "num_filter_entries": 10309, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769849214, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:46:54.256918) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 14840698 bytes
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:46:54.258516) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.1 rd, 178.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 11.5 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(10.1) write-amplify(5.0) OK, records in: 10836, records dropped: 527 output_compression: NoCompression
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:46:54.258533) EVENT_LOG_v1 {"time_micros": 1769849214258525, "job": 104, "event": "compaction_finished", "compaction_time_micros": 83226, "compaction_time_cpu_micros": 24728, "output_level": 6, "num_output_files": 1, "total_output_size": 14840698, "num_input_records": 10836, "num_output_records": 10309, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214258830, "job": 104, "event": "table_file_deletion", "file_number": 164}
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214259982, "job": 104, "event": "table_file_deletion", "file_number": 162}
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:46:54.173096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:46:54.260090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:46:54.260095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:46:54.260097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:46:54.260099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:54 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:46:54.260100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:54.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:54 np0005603609 nova_compute[221550]: 2026-01-31 08:46:54.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:54.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:55 np0005603609 nova_compute[221550]: 2026-01-31 08:46:55.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:46:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:56.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:46:56 np0005603609 nova_compute[221550]: 2026-01-31 08:46:56.433 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:46:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:56.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:46:56 np0005603609 nova_compute[221550]: 2026-01-31 08:46:56.938 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:58.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:46:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:58.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:59 np0005603609 nova_compute[221550]: 2026-01-31 08:46:59.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:59 np0005603609 nova_compute[221550]: 2026-01-31 08:46:59.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:47:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:47:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:00.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:47:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:00.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:01 np0005603609 nova_compute[221550]: 2026-01-31 08:47:01.436 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:01 np0005603609 nova_compute[221550]: 2026-01-31 08:47:01.942 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:47:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:02.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:47:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:47:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.3 total, 600.0 interval#012Cumulative writes: 16K writes, 81K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s#012Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.16 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1546 writes, 7565 keys, 1546 commit groups, 1.0 writes per commit group, ingest: 15.87 MB, 0.03 MB/s#012Interval WAL: 1547 writes, 1547 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     43.1      2.34              0.23        52    0.045       0      0       0.0       0.0#012  L6      1/0   14.15 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.2     73.5     63.0      8.26              1.21        51    0.162    382K    27K       0.0       0.0#012 Sum      1/0   14.15 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.2     57.3     58.6     10.60              1.44       103    0.103    382K    27K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.4     53.2     55.9      1.54              0.22        12    0.129     62K   3118       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     73.5     63.0      8.26              1.21        51    0.162    382K    27K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     43.7      2.31              0.23        51    0.045       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.3 total, 600.0 interval#012Flush(GB): cumulative 0.098, interval 0.013#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.61 GB write, 0.10 MB/s write, 0.59 GB read, 0.10 MB/s read, 10.6 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619ad8ed1f0#2 capacity: 304.00 MB usage: 65.22 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000492 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3768,62.47 MB,20.5487%) FilterBlock(103,1.03 MB,0.338339%) IndexBlock(103,1.73 MB,0.568284%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 03:47:02 np0005603609 nova_compute[221550]: 2026-01-31 08:47:02.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:02 np0005603609 nova_compute[221550]: 2026-01-31 08:47:02.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:47:02 np0005603609 nova_compute[221550]: 2026-01-31 08:47:02.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:47:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:47:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:02.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:47:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:47:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:04.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:47:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:04.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:05 np0005603609 nova_compute[221550]: 2026-01-31 08:47:05.567 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:47:05 np0005603609 nova_compute[221550]: 2026-01-31 08:47:05.567 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:47:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:06.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:47:06 np0005603609 nova_compute[221550]: 2026-01-31 08:47:06.436 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:06.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:06 np0005603609 nova_compute[221550]: 2026-01-31 08:47:06.942 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:07 np0005603609 nova_compute[221550]: 2026-01-31 08:47:07.514 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:07 np0005603609 nova_compute[221550]: 2026-01-31 08:47:07.514 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:07 np0005603609 nova_compute[221550]: 2026-01-31 08:47:07.514 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:07 np0005603609 nova_compute[221550]: 2026-01-31 08:47:07.514 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:47:07 np0005603609 nova_compute[221550]: 2026-01-31 08:47:07.515 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:47:07.539 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:47:07.539 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:47:07.540 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:47:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3314652597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:47:07 np0005603609 nova_compute[221550]: 2026-01-31 08:47:07.917 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:08 np0005603609 nova_compute[221550]: 2026-01-31 08:47:08.068 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:47:08 np0005603609 nova_compute[221550]: 2026-01-31 08:47:08.070 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4229MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:47:08 np0005603609 nova_compute[221550]: 2026-01-31 08:47:08.071 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:08 np0005603609 nova_compute[221550]: 2026-01-31 08:47:08.071 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:47:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:08.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:47:08 np0005603609 nova_compute[221550]: 2026-01-31 08:47:08.828 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:47:08 np0005603609 nova_compute[221550]: 2026-01-31 08:47:08.829 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:47:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:08.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:08 np0005603609 nova_compute[221550]: 2026-01-31 08:47:08.918 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:47:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/534509877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:47:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:47:09.331 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:47:09 np0005603609 nova_compute[221550]: 2026-01-31 08:47:09.332 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:09 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:47:09.332 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:47:09 np0005603609 nova_compute[221550]: 2026-01-31 08:47:09.347 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:09 np0005603609 nova_compute[221550]: 2026-01-31 08:47:09.355 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:47:09 np0005603609 nova_compute[221550]: 2026-01-31 08:47:09.413 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:47:09 np0005603609 nova_compute[221550]: 2026-01-31 08:47:09.466 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:47:09 np0005603609 nova_compute[221550]: 2026-01-31 08:47:09.467 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:09 np0005603609 nova_compute[221550]: 2026-01-31 08:47:09.503 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:09 np0005603609 nova_compute[221550]: 2026-01-31 08:47:09.714 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:10.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:10.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:11 np0005603609 nova_compute[221550]: 2026-01-31 08:47:11.437 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:11 np0005603609 nova_compute[221550]: 2026-01-31 08:47:11.943 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:12 np0005603609 podman[305747]: 2026-01-31 08:47:12.16680348 +0000 UTC m=+0.044242899 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:47:12 np0005603609 podman[305746]: 2026-01-31 08:47:12.228828829 +0000 UTC m=+0.106397111 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 03:47:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:12.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:12.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:13 np0005603609 nova_compute[221550]: 2026-01-31 08:47:13.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:14.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:47:14.335 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:47:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:14.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:16.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:16 np0005603609 nova_compute[221550]: 2026-01-31 08:47:16.439 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:16.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:16 np0005603609 nova_compute[221550]: 2026-01-31 08:47:16.944 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:17 np0005603609 nova_compute[221550]: 2026-01-31 08:47:17.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:17 np0005603609 nova_compute[221550]: 2026-01-31 08:47:17.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:47:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:18.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:18.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:19 np0005603609 nova_compute[221550]: 2026-01-31 08:47:19.705 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:47:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:20.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:47:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:20.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:21 np0005603609 nova_compute[221550]: 2026-01-31 08:47:21.440 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:21 np0005603609 nova_compute[221550]: 2026-01-31 08:47:21.947 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:22.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:22.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:47:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:24.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:47:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:47:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:24.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:47:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:26.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:26 np0005603609 nova_compute[221550]: 2026-01-31 08:47:26.441 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:26 np0005603609 nova_compute[221550]: 2026-01-31 08:47:26.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:26 np0005603609 nova_compute[221550]: 2026-01-31 08:47:26.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:47:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:47:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:26.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:47:26 np0005603609 nova_compute[221550]: 2026-01-31 08:47:26.948 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:27 np0005603609 nova_compute[221550]: 2026-01-31 08:47:27.501 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:47:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:28.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:28.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:30.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:30.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:31 np0005603609 nova_compute[221550]: 2026-01-31 08:47:31.443 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:31 np0005603609 nova_compute[221550]: 2026-01-31 08:47:31.950 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:32.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:47:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:32.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:47:33 np0005603609 nova_compute[221550]: 2026-01-31 08:47:33.497 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:34.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:47:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:34.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:47:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:47:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:36.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:47:36 np0005603609 nova_compute[221550]: 2026-01-31 08:47:36.445 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:36 np0005603609 nova_compute[221550]: 2026-01-31 08:47:36.951 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:47:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:36.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:47:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:38.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:47:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:47:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:47:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:38.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:47:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:40.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:47:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:40.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:41 np0005603609 nova_compute[221550]: 2026-01-31 08:47:41.446 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:41 np0005603609 nova_compute[221550]: 2026-01-31 08:47:41.953 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:42.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:42.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:43 np0005603609 podman[305927]: 2026-01-31 08:47:43.169266653 +0000 UTC m=+0.047919407 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:47:43 np0005603609 podman[305926]: 2026-01-31 08:47:43.189637769 +0000 UTC m=+0.074816771 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 03:47:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:44.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:47:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:44.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:47:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:47:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:47:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:46.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:46 np0005603609 nova_compute[221550]: 2026-01-31 08:47:46.448 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:47:46.593496) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266593535, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 746, "num_deletes": 251, "total_data_size": 1375857, "memory_usage": 1403440, "flush_reason": "Manual Compaction"}
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266602047, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 907419, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81373, "largest_seqno": 82114, "table_properties": {"data_size": 903809, "index_size": 1453, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8242, "raw_average_key_size": 19, "raw_value_size": 896596, "raw_average_value_size": 2119, "num_data_blocks": 64, "num_entries": 423, "num_filter_entries": 423, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849215, "oldest_key_time": 1769849215, "file_creation_time": 1769849266, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 8581 microseconds, and 1954 cpu microseconds.
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:47:46.602076) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 907419 bytes OK
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:47:46.602091) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:47:46.604016) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:47:46.604027) EVENT_LOG_v1 {"time_micros": 1769849266604023, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:47:46.604039) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1371932, prev total WAL file size 1371932, number of live WAL files 2.
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:47:46.604479) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(886KB)], [165(14MB)]
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266604596, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 15748117, "oldest_snapshot_seqno": -1}
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 10217 keys, 13743128 bytes, temperature: kUnknown
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266697300, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 13743128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13676723, "index_size": 39732, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25605, "raw_key_size": 270371, "raw_average_key_size": 26, "raw_value_size": 13497369, "raw_average_value_size": 1321, "num_data_blocks": 1514, "num_entries": 10217, "num_filter_entries": 10217, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769849266, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:47:46.697591) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 13743128 bytes
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:47:46.705449) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.8 rd, 148.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.2 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(32.5) write-amplify(15.1) OK, records in: 10732, records dropped: 515 output_compression: NoCompression
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:47:46.705484) EVENT_LOG_v1 {"time_micros": 1769849266705468, "job": 106, "event": "compaction_finished", "compaction_time_micros": 92744, "compaction_time_cpu_micros": 29274, "output_level": 6, "num_output_files": 1, "total_output_size": 13743128, "num_input_records": 10732, "num_output_records": 10217, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266705770, "job": 106, "event": "table_file_deletion", "file_number": 167}
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266708298, "job": 106, "event": "table_file_deletion", "file_number": 165}
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:47:46.604304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:47:46.708339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:47:46.708346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:47:46.708349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:47:46.708351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:46 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:47:46.708354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:46 np0005603609 nova_compute[221550]: 2026-01-31 08:47:46.954 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:46.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:47:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:48.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:47:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:48.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:50 np0005603609 ovn_controller[130359]: 2026-01-31T08:47:50Z|00891|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Jan 31 03:47:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:47:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:50.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:47:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:47:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:50.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:47:51 np0005603609 nova_compute[221550]: 2026-01-31 08:47:51.450 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:51 np0005603609 nova_compute[221550]: 2026-01-31 08:47:51.957 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:47:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:52.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:47:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:52.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:53 np0005603609 nova_compute[221550]: 2026-01-31 08:47:53.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:54.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:54 np0005603609 nova_compute[221550]: 2026-01-31 08:47:54.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:47:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:54.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:47:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:56.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:56 np0005603609 nova_compute[221550]: 2026-01-31 08:47:56.450 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:56 np0005603609 nova_compute[221550]: 2026-01-31 08:47:56.958 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:56.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:57 np0005603609 nova_compute[221550]: 2026-01-31 08:47:57.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:58.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:58 np0005603609 nova_compute[221550]: 2026-01-31 08:47:58.872 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Acquiring lock "e18328fb-6ec0-4307-a12d-810c8f7cc007" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:58 np0005603609 nova_compute[221550]: 2026-01-31 08:47:58.873 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:47:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:58.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:59 np0005603609 nova_compute[221550]: 2026-01-31 08:47:59.414 221554 DEBUG nova.compute.manager [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:47:59 np0005603609 nova_compute[221550]: 2026-01-31 08:47:59.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:59 np0005603609 nova_compute[221550]: 2026-01-31 08:47:59.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:47:59 np0005603609 nova_compute[221550]: 2026-01-31 08:47:59.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:00.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:00 np0005603609 nova_compute[221550]: 2026-01-31 08:48:00.524 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:00 np0005603609 nova_compute[221550]: 2026-01-31 08:48:00.524 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:00 np0005603609 nova_compute[221550]: 2026-01-31 08:48:00.538 221554 DEBUG nova.virt.hardware [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:48:00 np0005603609 nova_compute[221550]: 2026-01-31 08:48:00.539 221554 INFO nova.compute.claims [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:48:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:48:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:00.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.050 221554 DEBUG oslo_concurrency.processutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.462 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:48:01 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1146695923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.530 221554 DEBUG oslo_concurrency.processutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.539 221554 DEBUG nova.compute.provider_tree [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.570 221554 DEBUG nova.scheduler.client.report [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.623 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.624 221554 DEBUG nova.compute.manager [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.757 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "7f398989-1c3a-465d-afd2-f6fcd860682f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.758 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "7f398989-1c3a-465d-afd2-f6fcd860682f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.772 221554 DEBUG nova.compute.manager [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.772 221554 DEBUG nova.network.neutron [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.815 221554 DEBUG nova.compute.manager [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.842 221554 INFO nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.896 221554 DEBUG nova.compute.manager [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.940 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.941 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.947 221554 DEBUG nova.virt.hardware [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.948 221554 INFO nova.compute.claims [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:48:01 np0005603609 nova_compute[221550]: 2026-01-31 08:48:01.961 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.281 221554 DEBUG nova.compute.manager [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.282 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.283 221554 INFO nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Creating image(s)#033[00m
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.307 221554 DEBUG nova.storage.rbd_utils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] rbd image e18328fb-6ec0-4307-a12d-810c8f7cc007_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.334 221554 DEBUG nova.storage.rbd_utils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] rbd image e18328fb-6ec0-4307-a12d-810c8f7cc007_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:48:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:02.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.365 221554 DEBUG nova.storage.rbd_utils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] rbd image e18328fb-6ec0-4307-a12d-810c8f7cc007_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.369 221554 DEBUG oslo_concurrency.processutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.397 221554 DEBUG nova.policy [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '429612bcc82f4c6a99fe57f4d9b01624', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e1a1c923b1d14e44b30717c742870aa1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.430 221554 DEBUG oslo_concurrency.processutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.458 221554 DEBUG oslo_concurrency.processutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.459 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.459 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.459 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.489 221554 DEBUG nova.storage.rbd_utils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] rbd image e18328fb-6ec0-4307-a12d-810c8f7cc007_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.494 221554 DEBUG oslo_concurrency.processutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e18328fb-6ec0-4307-a12d-810c8f7cc007_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:48:02 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3678183900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.897 221554 DEBUG oslo_concurrency.processutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:02 np0005603609 nova_compute[221550]: 2026-01-31 08:48:02.904 221554 DEBUG nova.compute.provider_tree [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:48:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:48:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:02.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.106 221554 DEBUG nova.scheduler.client.report [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.170 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.171 221554 DEBUG nova.compute.manager [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.301 221554 DEBUG oslo_concurrency.processutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e18328fb-6ec0-4307-a12d-810c8f7cc007_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.808s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.302 221554 DEBUG nova.compute.manager [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.302 221554 DEBUG nova.network.neutron [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.343 221554 INFO nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.398 221554 DEBUG nova.storage.rbd_utils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] resizing rbd image e18328fb-6ec0-4307-a12d-810c8f7cc007_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.443 221554 DEBUG nova.compute.manager [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.628 221554 DEBUG nova.objects.instance [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lazy-loading 'migration_context' on Instance uuid e18328fb-6ec0-4307-a12d-810c8f7cc007 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.633 221554 DEBUG nova.compute.manager [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.636 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.637 221554 INFO nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Creating image(s)#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.671 221554 DEBUG nova.storage.rbd_utils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 7f398989-1c3a-465d-afd2-f6fcd860682f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.704 221554 DEBUG nova.storage.rbd_utils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 7f398989-1c3a-465d-afd2-f6fcd860682f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.732 221554 DEBUG nova.storage.rbd_utils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 7f398989-1c3a-465d-afd2-f6fcd860682f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.736 221554 DEBUG oslo_concurrency.processutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.757 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.758 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Ensure instance console log exists: /var/lib/nova/instances/e18328fb-6ec0-4307-a12d-810c8f7cc007/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.758 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.759 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.759 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.802 221554 DEBUG oslo_concurrency.processutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.803 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.803 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.803 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.836 221554 DEBUG nova.storage.rbd_utils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 7f398989-1c3a-465d-afd2-f6fcd860682f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.844 221554 DEBUG oslo_concurrency.processutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 7f398989-1c3a-465d-afd2-f6fcd860682f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:03 np0005603609 nova_compute[221550]: 2026-01-31 08:48:03.915 221554 DEBUG nova.policy [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a56abd8fdd341ae88a99e102ab399de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:48:04 np0005603609 nova_compute[221550]: 2026-01-31 08:48:04.105 221554 DEBUG nova.network.neutron [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Successfully created port: df761877-caf0-49b5-8d74-374169310e23 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:48:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:04.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:04 np0005603609 nova_compute[221550]: 2026-01-31 08:48:04.759 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:04 np0005603609 nova_compute[221550]: 2026-01-31 08:48:04.759 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:48:04 np0005603609 nova_compute[221550]: 2026-01-31 08:48:04.759 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:48:04 np0005603609 nova_compute[221550]: 2026-01-31 08:48:04.812 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:48:04 np0005603609 nova_compute[221550]: 2026-01-31 08:48:04.812 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:48:04 np0005603609 nova_compute[221550]: 2026-01-31 08:48:04.813 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:48:04 np0005603609 nova_compute[221550]: 2026-01-31 08:48:04.813 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:04 np0005603609 nova_compute[221550]: 2026-01-31 08:48:04.842 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:04 np0005603609 nova_compute[221550]: 2026-01-31 08:48:04.842 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:04 np0005603609 nova_compute[221550]: 2026-01-31 08:48:04.842 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:04 np0005603609 nova_compute[221550]: 2026-01-31 08:48:04.843 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:48:04 np0005603609 nova_compute[221550]: 2026-01-31 08:48:04.843 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:04 np0005603609 nova_compute[221550]: 2026-01-31 08:48:04.998 221554 DEBUG oslo_concurrency.processutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 7f398989-1c3a-465d-afd2-f6fcd860682f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:04.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.085 221554 DEBUG nova.storage.rbd_utils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] resizing rbd image 7f398989-1c3a-465d-afd2-f6fcd860682f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:48:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:48:05 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/280270877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.365 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.522 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.523 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4214MB free_disk=20.92192840576172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.524 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.524 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.679 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance e18328fb-6ec0-4307-a12d-810c8f7cc007 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.680 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 7f398989-1c3a-465d-afd2-f6fcd860682f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.680 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.680 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.932 221554 DEBUG nova.objects.instance [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'migration_context' on Instance uuid 7f398989-1c3a-465d-afd2-f6fcd860682f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.959 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.996 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.997 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Ensure instance console log exists: /var/lib/nova/instances/7f398989-1c3a-465d-afd2-f6fcd860682f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.998 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.998 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:05 np0005603609 nova_compute[221550]: 2026-01-31 08:48:05.999 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:48:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:06.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:48:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:48:06 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1441525110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:48:06 np0005603609 nova_compute[221550]: 2026-01-31 08:48:06.421 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:06 np0005603609 nova_compute[221550]: 2026-01-31 08:48:06.426 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:48:06 np0005603609 nova_compute[221550]: 2026-01-31 08:48:06.453 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:48:06 np0005603609 nova_compute[221550]: 2026-01-31 08:48:06.465 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:06 np0005603609 nova_compute[221550]: 2026-01-31 08:48:06.496 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:48:06 np0005603609 nova_compute[221550]: 2026-01-31 08:48:06.496 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:06 np0005603609 nova_compute[221550]: 2026-01-31 08:48:06.964 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:07.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:07.540 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:07.540 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:07.540 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:07 np0005603609 nova_compute[221550]: 2026-01-31 08:48:07.694 221554 DEBUG nova.network.neutron [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Successfully created port: c2e32560-8930-4ad7-a92a-06705df2a932 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:48:07 np0005603609 nova_compute[221550]: 2026-01-31 08:48:07.713 221554 DEBUG nova.network.neutron [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Successfully updated port: df761877-caf0-49b5-8d74-374169310e23 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:48:07 np0005603609 nova_compute[221550]: 2026-01-31 08:48:07.775 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Acquiring lock "refresh_cache-e18328fb-6ec0-4307-a12d-810c8f7cc007" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:48:07 np0005603609 nova_compute[221550]: 2026-01-31 08:48:07.775 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Acquired lock "refresh_cache-e18328fb-6ec0-4307-a12d-810c8f7cc007" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:48:07 np0005603609 nova_compute[221550]: 2026-01-31 08:48:07.776 221554 DEBUG nova.network.neutron [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:48:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:08.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:08 np0005603609 nova_compute[221550]: 2026-01-31 08:48:08.373 221554 DEBUG nova.compute.manager [req-053c4bcb-c643-4f74-8c81-17d16607e7e0 req-538a40af-36f1-4f79-83d7-8c683b922460 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received event network-changed-df761877-caf0-49b5-8d74-374169310e23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:08 np0005603609 nova_compute[221550]: 2026-01-31 08:48:08.375 221554 DEBUG nova.compute.manager [req-053c4bcb-c643-4f74-8c81-17d16607e7e0 req-538a40af-36f1-4f79-83d7-8c683b922460 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Refreshing instance network info cache due to event network-changed-df761877-caf0-49b5-8d74-374169310e23. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:48:08 np0005603609 nova_compute[221550]: 2026-01-31 08:48:08.375 221554 DEBUG oslo_concurrency.lockutils [req-053c4bcb-c643-4f74-8c81-17d16607e7e0 req-538a40af-36f1-4f79-83d7-8c683b922460 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e18328fb-6ec0-4307-a12d-810c8f7cc007" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:48:08 np0005603609 nova_compute[221550]: 2026-01-31 08:48:08.522 221554 DEBUG nova.network.neutron [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:48:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:09.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.182 221554 DEBUG nova.network.neutron [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Updating instance_info_cache with network_info: [{"id": "df761877-caf0-49b5-8d74-374169310e23", "address": "fa:16:3e:32:69:36", "network": {"id": "465c490f-bfbd-42dd-8ef3-8b9af8836224", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1184908098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e1a1c923b1d14e44b30717c742870aa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf761877-ca", "ovs_interfaceid": "df761877-caf0-49b5-8d74-374169310e23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.295 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Releasing lock "refresh_cache-e18328fb-6ec0-4307-a12d-810c8f7cc007" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.296 221554 DEBUG nova.compute.manager [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Instance network_info: |[{"id": "df761877-caf0-49b5-8d74-374169310e23", "address": "fa:16:3e:32:69:36", "network": {"id": "465c490f-bfbd-42dd-8ef3-8b9af8836224", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1184908098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e1a1c923b1d14e44b30717c742870aa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf761877-ca", "ovs_interfaceid": "df761877-caf0-49b5-8d74-374169310e23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.296 221554 DEBUG oslo_concurrency.lockutils [req-053c4bcb-c643-4f74-8c81-17d16607e7e0 req-538a40af-36f1-4f79-83d7-8c683b922460 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e18328fb-6ec0-4307-a12d-810c8f7cc007" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.297 221554 DEBUG nova.network.neutron [req-053c4bcb-c643-4f74-8c81-17d16607e7e0 req-538a40af-36f1-4f79-83d7-8c683b922460 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Refreshing network info cache for port df761877-caf0-49b5-8d74-374169310e23 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.299 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Start _get_guest_xml network_info=[{"id": "df761877-caf0-49b5-8d74-374169310e23", "address": "fa:16:3e:32:69:36", "network": {"id": "465c490f-bfbd-42dd-8ef3-8b9af8836224", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1184908098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e1a1c923b1d14e44b30717c742870aa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf761877-ca", "ovs_interfaceid": "df761877-caf0-49b5-8d74-374169310e23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.304 221554 WARNING nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.310 221554 DEBUG nova.virt.libvirt.host [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.311 221554 DEBUG nova.virt.libvirt.host [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.319 221554 DEBUG nova.virt.libvirt.host [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.320 221554 DEBUG nova.virt.libvirt.host [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.321 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.321 221554 DEBUG nova.virt.hardware [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.322 221554 DEBUG nova.virt.hardware [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.322 221554 DEBUG nova.virt.hardware [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.322 221554 DEBUG nova.virt.hardware [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.323 221554 DEBUG nova.virt.hardware [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.323 221554 DEBUG nova.virt.hardware [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.323 221554 DEBUG nova.virt.hardware [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.323 221554 DEBUG nova.virt.hardware [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.324 221554 DEBUG nova.virt.hardware [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.324 221554 DEBUG nova.virt.hardware [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.324 221554 DEBUG nova.virt.hardware [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.327 221554 DEBUG oslo_concurrency.processutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:10.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:48:10 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1540000744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.764 221554 DEBUG oslo_concurrency.processutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.792 221554 DEBUG nova.storage.rbd_utils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] rbd image e18328fb-6ec0-4307-a12d-810c8f7cc007_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:10 np0005603609 nova_compute[221550]: 2026-01-31 08:48:10.797 221554 DEBUG oslo_concurrency.processutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:11.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.025 221554 DEBUG nova.network.neutron [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Successfully updated port: c2e32560-8930-4ad7-a92a-06705df2a932 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.068 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-7f398989-1c3a-465d-afd2-f6fcd860682f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.068 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-7f398989-1c3a-465d-afd2-f6fcd860682f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.069 221554 DEBUG nova.network.neutron [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:48:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:48:11 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1385660829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.201 221554 DEBUG oslo_concurrency.processutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.203 221554 DEBUG nova.virt.libvirt.vif [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:47:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-616284573',display_name='tempest-TestServerAdvancedOps-server-616284573',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-616284573',id=195,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e1a1c923b1d14e44b30717c742870aa1',ramdisk_id='',reservation_id='r-hxna07n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1611358131',owner_user_name='tempest-TestServerAdvancedOps-1611358131-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:48:01Z,user_data=None,user_id='429612bcc82f4c6a99fe57f4d9b01624',uuid=e18328fb-6ec0-4307-a12d-810c8f7cc007,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df761877-caf0-49b5-8d74-374169310e23", "address": "fa:16:3e:32:69:36", "network": {"id": "465c490f-bfbd-42dd-8ef3-8b9af8836224", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1184908098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e1a1c923b1d14e44b30717c742870aa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf761877-ca", "ovs_interfaceid": "df761877-caf0-49b5-8d74-374169310e23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.204 221554 DEBUG nova.network.os_vif_util [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Converting VIF {"id": "df761877-caf0-49b5-8d74-374169310e23", "address": "fa:16:3e:32:69:36", "network": {"id": "465c490f-bfbd-42dd-8ef3-8b9af8836224", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1184908098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e1a1c923b1d14e44b30717c742870aa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf761877-ca", "ovs_interfaceid": "df761877-caf0-49b5-8d74-374169310e23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.205 221554 DEBUG nova.network.os_vif_util [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:36,bridge_name='br-int',has_traffic_filtering=True,id=df761877-caf0-49b5-8d74-374169310e23,network=Network(465c490f-bfbd-42dd-8ef3-8b9af8836224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf761877-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.207 221554 DEBUG nova.objects.instance [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lazy-loading 'pci_devices' on Instance uuid e18328fb-6ec0-4307-a12d-810c8f7cc007 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.246 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  <uuid>e18328fb-6ec0-4307-a12d-810c8f7cc007</uuid>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  <name>instance-000000c3</name>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestServerAdvancedOps-server-616284573</nova:name>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:48:10</nova:creationTime>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        <nova:user uuid="429612bcc82f4c6a99fe57f4d9b01624">tempest-TestServerAdvancedOps-1611358131-project-member</nova:user>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        <nova:project uuid="e1a1c923b1d14e44b30717c742870aa1">tempest-TestServerAdvancedOps-1611358131</nova:project>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        <nova:port uuid="df761877-caf0-49b5-8d74-374169310e23">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <entry name="serial">e18328fb-6ec0-4307-a12d-810c8f7cc007</entry>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <entry name="uuid">e18328fb-6ec0-4307-a12d-810c8f7cc007</entry>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/e18328fb-6ec0-4307-a12d-810c8f7cc007_disk">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/e18328fb-6ec0-4307-a12d-810c8f7cc007_disk.config">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:32:69:36"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <target dev="tapdf761877-ca"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/e18328fb-6ec0-4307-a12d-810c8f7cc007/console.log" append="off"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:48:11 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:48:11 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:48:11 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:48:11 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.247 221554 DEBUG nova.compute.manager [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Preparing to wait for external event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.247 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Acquiring lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.248 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.248 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.249 221554 DEBUG nova.virt.libvirt.vif [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:47:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-616284573',display_name='tempest-TestServerAdvancedOps-server-616284573',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-616284573',id=195,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e1a1c923b1d14e44b30717c742870aa1',ramdisk_id='',reservation_id='r-hxna07n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1611358131',owner_user_name='tempest-TestServerAdvancedOps-1611358131-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:48:01Z,user_data=None,user_id='429612bcc82f4c6a99fe57f4d9b01624',uuid=e18328fb-6ec0-4307-a12d-810c8f7cc007,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df761877-caf0-49b5-8d74-374169310e23", "address": "fa:16:3e:32:69:36", "network": {"id": "465c490f-bfbd-42dd-8ef3-8b9af8836224", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1184908098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e1a1c923b1d14e44b30717c742870aa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf761877-ca", "ovs_interfaceid": "df761877-caf0-49b5-8d74-374169310e23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.249 221554 DEBUG nova.network.os_vif_util [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Converting VIF {"id": "df761877-caf0-49b5-8d74-374169310e23", "address": "fa:16:3e:32:69:36", "network": {"id": "465c490f-bfbd-42dd-8ef3-8b9af8836224", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1184908098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e1a1c923b1d14e44b30717c742870aa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf761877-ca", "ovs_interfaceid": "df761877-caf0-49b5-8d74-374169310e23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.250 221554 DEBUG nova.network.os_vif_util [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:36,bridge_name='br-int',has_traffic_filtering=True,id=df761877-caf0-49b5-8d74-374169310e23,network=Network(465c490f-bfbd-42dd-8ef3-8b9af8836224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf761877-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.250 221554 DEBUG os_vif [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:36,bridge_name='br-int',has_traffic_filtering=True,id=df761877-caf0-49b5-8d74-374169310e23,network=Network(465c490f-bfbd-42dd-8ef3-8b9af8836224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf761877-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.251 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.252 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.252 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.255 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.255 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf761877-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.256 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf761877-ca, col_values=(('external_ids', {'iface-id': 'df761877-caf0-49b5-8d74-374169310e23', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:69:36', 'vm-uuid': 'e18328fb-6ec0-4307-a12d-810c8f7cc007'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.257 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:11 np0005603609 NetworkManager[49064]: <info>  [1769849291.2582] manager: (tapdf761877-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.260 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.265 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.267 221554 INFO os_vif [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:36,bridge_name='br-int',has_traffic_filtering=True,id=df761877-caf0-49b5-8d74-374169310e23,network=Network(465c490f-bfbd-42dd-8ef3-8b9af8836224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf761877-ca')#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.385 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.386 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.386 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] No VIF found with MAC fa:16:3e:32:69:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.387 221554 INFO nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Using config drive#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.410 221554 DEBUG nova.storage.rbd_utils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] rbd image e18328fb-6ec0-4307-a12d-810c8f7cc007_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.418 221554 DEBUG nova.network.neutron [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.421 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.834 221554 DEBUG nova.compute.manager [req-d52d47c1-5c09-4f8d-9c7b-57e4ade39d0c req-ab45fc6a-ed06-41c8-a9c3-0d9bac88f678 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Received event network-changed-c2e32560-8930-4ad7-a92a-06705df2a932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.835 221554 DEBUG nova.compute.manager [req-d52d47c1-5c09-4f8d-9c7b-57e4ade39d0c req-ab45fc6a-ed06-41c8-a9c3-0d9bac88f678 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Refreshing instance network info cache due to event network-changed-c2e32560-8930-4ad7-a92a-06705df2a932. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.836 221554 DEBUG oslo_concurrency.lockutils [req-d52d47c1-5c09-4f8d-9c7b-57e4ade39d0c req-ab45fc6a-ed06-41c8-a9c3-0d9bac88f678 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-7f398989-1c3a-465d-afd2-f6fcd860682f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:48:11 np0005603609 nova_compute[221550]: 2026-01-31 08:48:11.970 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:12 np0005603609 nova_compute[221550]: 2026-01-31 08:48:12.229 221554 INFO nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Creating config drive at /var/lib/nova/instances/e18328fb-6ec0-4307-a12d-810c8f7cc007/disk.config#033[00m
Jan 31 03:48:12 np0005603609 nova_compute[221550]: 2026-01-31 08:48:12.237 221554 DEBUG oslo_concurrency.processutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e18328fb-6ec0-4307-a12d-810c8f7cc007/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpf8vwq2c8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:12.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:12 np0005603609 nova_compute[221550]: 2026-01-31 08:48:12.374 221554 DEBUG oslo_concurrency.processutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e18328fb-6ec0-4307-a12d-810c8f7cc007/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpf8vwq2c8" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:12 np0005603609 nova_compute[221550]: 2026-01-31 08:48:12.402 221554 DEBUG nova.storage.rbd_utils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] rbd image e18328fb-6ec0-4307-a12d-810c8f7cc007_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:12 np0005603609 nova_compute[221550]: 2026-01-31 08:48:12.406 221554 DEBUG oslo_concurrency.processutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e18328fb-6ec0-4307-a12d-810c8f7cc007/disk.config e18328fb-6ec0-4307-a12d-810c8f7cc007_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:12 np0005603609 nova_compute[221550]: 2026-01-31 08:48:12.853 221554 DEBUG nova.network.neutron [req-053c4bcb-c643-4f74-8c81-17d16607e7e0 req-538a40af-36f1-4f79-83d7-8c683b922460 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Updated VIF entry in instance network info cache for port df761877-caf0-49b5-8d74-374169310e23. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:48:12 np0005603609 nova_compute[221550]: 2026-01-31 08:48:12.855 221554 DEBUG nova.network.neutron [req-053c4bcb-c643-4f74-8c81-17d16607e7e0 req-538a40af-36f1-4f79-83d7-8c683b922460 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Updating instance_info_cache with network_info: [{"id": "df761877-caf0-49b5-8d74-374169310e23", "address": "fa:16:3e:32:69:36", "network": {"id": "465c490f-bfbd-42dd-8ef3-8b9af8836224", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1184908098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e1a1c923b1d14e44b30717c742870aa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf761877-ca", "ovs_interfaceid": "df761877-caf0-49b5-8d74-374169310e23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:48:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:13.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.205 221554 DEBUG oslo_concurrency.lockutils [req-053c4bcb-c643-4f74-8c81-17d16607e7e0 req-538a40af-36f1-4f79-83d7-8c683b922460 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e18328fb-6ec0-4307-a12d-810c8f7cc007" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.276 221554 DEBUG nova.network.neutron [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Updating instance_info_cache with network_info: [{"id": "c2e32560-8930-4ad7-a92a-06705df2a932", "address": "fa:16:3e:66:02:4a", "network": {"id": "4003c199-b45c-4de7-962b-d4f5fb322ed8", "bridge": "br-int", "label": "tempest-network-smoke--1751171446", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2e32560-89", "ovs_interfaceid": "c2e32560-8930-4ad7-a92a-06705df2a932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.507 221554 DEBUG oslo_concurrency.processutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e18328fb-6ec0-4307-a12d-810c8f7cc007/disk.config e18328fb-6ec0-4307-a12d-810c8f7cc007_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.508 221554 INFO nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Deleting local config drive /var/lib/nova/instances/e18328fb-6ec0-4307-a12d-810c8f7cc007/disk.config because it was imported into RBD.#033[00m
Jan 31 03:48:13 np0005603609 kernel: tapdf761877-ca: entered promiscuous mode
Jan 31 03:48:13 np0005603609 NetworkManager[49064]: <info>  [1769849293.5516] manager: (tapdf761877-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/411)
Jan 31 03:48:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:13Z|00892|binding|INFO|Claiming lport df761877-caf0-49b5-8d74-374169310e23 for this chassis.
Jan 31 03:48:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:13Z|00893|binding|INFO|df761877-caf0-49b5-8d74-374169310e23: Claiming fa:16:3e:32:69:36 10.100.0.9
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.554 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.558 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.570 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:13Z|00894|binding|INFO|Setting lport df761877-caf0-49b5-8d74-374169310e23 ovn-installed in OVS
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.574 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:13 np0005603609 systemd-udevd[306606]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:48:13 np0005603609 podman[306568]: 2026-01-31 08:48:13.582203182 +0000 UTC m=+0.046719989 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 31 03:48:13 np0005603609 systemd-machined[190912]: New machine qemu-106-instance-000000c3.
Jan 31 03:48:13 np0005603609 NetworkManager[49064]: <info>  [1769849293.5881] device (tapdf761877-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:48:13 np0005603609 NetworkManager[49064]: <info>  [1769849293.5890] device (tapdf761877-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:48:13 np0005603609 systemd[1]: Started Virtual Machine qemu-106-instance-000000c3.
Jan 31 03:48:13 np0005603609 podman[306565]: 2026-01-31 08:48:13.616799924 +0000 UTC m=+0.081937165 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.864 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-7f398989-1c3a-465d-afd2-f6fcd860682f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.864 221554 DEBUG nova.compute.manager [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Instance network_info: |[{"id": "c2e32560-8930-4ad7-a92a-06705df2a932", "address": "fa:16:3e:66:02:4a", "network": {"id": "4003c199-b45c-4de7-962b-d4f5fb322ed8", "bridge": "br-int", "label": "tempest-network-smoke--1751171446", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2e32560-89", "ovs_interfaceid": "c2e32560-8930-4ad7-a92a-06705df2a932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.864 221554 DEBUG oslo_concurrency.lockutils [req-d52d47c1-5c09-4f8d-9c7b-57e4ade39d0c req-ab45fc6a-ed06-41c8-a9c3-0d9bac88f678 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-7f398989-1c3a-465d-afd2-f6fcd860682f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.865 221554 DEBUG nova.network.neutron [req-d52d47c1-5c09-4f8d-9c7b-57e4ade39d0c req-ab45fc6a-ed06-41c8-a9c3-0d9bac88f678 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Refreshing network info cache for port c2e32560-8930-4ad7-a92a-06705df2a932 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.867 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Start _get_guest_xml network_info=[{"id": "c2e32560-8930-4ad7-a92a-06705df2a932", "address": "fa:16:3e:66:02:4a", "network": {"id": "4003c199-b45c-4de7-962b-d4f5fb322ed8", "bridge": "br-int", "label": "tempest-network-smoke--1751171446", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2e32560-89", "ovs_interfaceid": "c2e32560-8930-4ad7-a92a-06705df2a932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.871 221554 WARNING nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.875 221554 DEBUG nova.virt.libvirt.host [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.876 221554 DEBUG nova.virt.libvirt.host [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.879 221554 DEBUG nova.virt.libvirt.host [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.879 221554 DEBUG nova.virt.libvirt.host [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.880 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.880 221554 DEBUG nova.virt.hardware [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.881 221554 DEBUG nova.virt.hardware [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.881 221554 DEBUG nova.virt.hardware [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.881 221554 DEBUG nova.virt.hardware [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.881 221554 DEBUG nova.virt.hardware [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.881 221554 DEBUG nova.virt.hardware [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.882 221554 DEBUG nova.virt.hardware [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.882 221554 DEBUG nova.virt.hardware [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.882 221554 DEBUG nova.virt.hardware [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.883 221554 DEBUG nova.virt.hardware [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.883 221554 DEBUG nova.virt.hardware [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:48:13 np0005603609 nova_compute[221550]: 2026-01-31 08:48:13.885 221554 DEBUG oslo_concurrency.processutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:14 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:14Z|00895|binding|INFO|Setting lport df761877-caf0-49b5-8d74-374169310e23 up in Southbound
Jan 31 03:48:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:14.215 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:69:36 10.100.0.9'], port_security=['fa:16:3e:32:69:36 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e18328fb-6ec0-4307-a12d-810c8f7cc007', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-465c490f-bfbd-42dd-8ef3-8b9af8836224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e1a1c923b1d14e44b30717c742870aa1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '576af920-946c-41e4-9f2b-8a2e422edf05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e96e68d5-5e8f-45d3-a26f-458c20b777d9, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=df761877-caf0-49b5-8d74-374169310e23) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:48:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:14.217 140058 INFO neutron.agent.ovn.metadata.agent [-] Port df761877-caf0-49b5-8d74-374169310e23 in datapath 465c490f-bfbd-42dd-8ef3-8b9af8836224 bound to our chassis#033[00m
Jan 31 03:48:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:14.217 140058 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 465c490f-bfbd-42dd-8ef3-8b9af8836224 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 31 03:48:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:14.218 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[27cbc7d9-02c6-45bb-bbbf-bac504c7b8ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:14 np0005603609 nova_compute[221550]: 2026-01-31 08:48:14.310 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849294.309891, e18328fb-6ec0-4307-a12d-810c8f7cc007 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:48:14 np0005603609 nova_compute[221550]: 2026-01-31 08:48:14.311 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] VM Started (Lifecycle Event)#033[00m
Jan 31 03:48:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:48:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/759972552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:48:14 np0005603609 nova_compute[221550]: 2026-01-31 08:48:14.341 221554 DEBUG oslo_concurrency.processutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:14 np0005603609 nova_compute[221550]: 2026-01-31 08:48:14.368 221554 DEBUG nova.storage.rbd_utils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 7f398989-1c3a-465d-afd2-f6fcd860682f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:14 np0005603609 nova_compute[221550]: 2026-01-31 08:48:14.374 221554 DEBUG oslo_concurrency.processutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:14.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:14 np0005603609 nova_compute[221550]: 2026-01-31 08:48:14.742 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:14 np0005603609 nova_compute[221550]: 2026-01-31 08:48:14.747 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849294.3101625, e18328fb-6ec0-4307-a12d-810c8f7cc007 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:48:14 np0005603609 nova_compute[221550]: 2026-01-31 08:48:14.748 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:48:14 np0005603609 nova_compute[221550]: 2026-01-31 08:48:14.928 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:14 np0005603609 nova_compute[221550]: 2026-01-31 08:48:14.932 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:48:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:48:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2196161786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:48:14 np0005603609 nova_compute[221550]: 2026-01-31 08:48:14.973 221554 DEBUG oslo_concurrency.processutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:14 np0005603609 nova_compute[221550]: 2026-01-31 08:48:14.974 221554 DEBUG nova.virt.libvirt.vif [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:47:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1021887537',display_name='tempest-TestNetworkBasicOps-server-1021887537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1021887537',id=196,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOt56ZiO9bIJjptXP4FC4yvr0FGC/HLSFRJ9KFq+xOa97MgIuZ9vQ16iiUBUOq9Sog4gkJIU1+ZZDVLRrwd13t4yLg1vb6+7D03FGH8zKCqtpVFlfBkElLz7OTcF+2BbSw==',key_name='tempest-TestNetworkBasicOps-1882044848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-amgmft3u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:48:03Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=7f398989-1c3a-465d-afd2-f6fcd860682f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2e32560-8930-4ad7-a92a-06705df2a932", "address": "fa:16:3e:66:02:4a", "network": {"id": "4003c199-b45c-4de7-962b-d4f5fb322ed8", "bridge": "br-int", "label": "tempest-network-smoke--1751171446", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2e32560-89", "ovs_interfaceid": "c2e32560-8930-4ad7-a92a-06705df2a932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:48:14 np0005603609 nova_compute[221550]: 2026-01-31 08:48:14.975 221554 DEBUG nova.network.os_vif_util [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "c2e32560-8930-4ad7-a92a-06705df2a932", "address": "fa:16:3e:66:02:4a", "network": {"id": "4003c199-b45c-4de7-962b-d4f5fb322ed8", "bridge": "br-int", "label": "tempest-network-smoke--1751171446", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2e32560-89", "ovs_interfaceid": "c2e32560-8930-4ad7-a92a-06705df2a932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:48:14 np0005603609 nova_compute[221550]: 2026-01-31 08:48:14.976 221554 DEBUG nova.network.os_vif_util [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:02:4a,bridge_name='br-int',has_traffic_filtering=True,id=c2e32560-8930-4ad7-a92a-06705df2a932,network=Network(4003c199-b45c-4de7-962b-d4f5fb322ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2e32560-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:48:14 np0005603609 nova_compute[221550]: 2026-01-31 08:48:14.977 221554 DEBUG nova.objects.instance [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f398989-1c3a-465d-afd2-f6fcd860682f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:14 np0005603609 nova_compute[221550]: 2026-01-31 08:48:14.988 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.008 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  <uuid>7f398989-1c3a-465d-afd2-f6fcd860682f</uuid>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  <name>instance-000000c4</name>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestNetworkBasicOps-server-1021887537</nova:name>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:48:13</nova:creationTime>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        <nova:port uuid="c2e32560-8930-4ad7-a92a-06705df2a932">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <entry name="serial">7f398989-1c3a-465d-afd2-f6fcd860682f</entry>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <entry name="uuid">7f398989-1c3a-465d-afd2-f6fcd860682f</entry>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/7f398989-1c3a-465d-afd2-f6fcd860682f_disk">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/7f398989-1c3a-465d-afd2-f6fcd860682f_disk.config">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:66:02:4a"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <target dev="tapc2e32560-89"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/7f398989-1c3a-465d-afd2-f6fcd860682f/console.log" append="off"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:48:15 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:48:15 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:48:15 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:48:15 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.011 221554 DEBUG nova.compute.manager [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Preparing to wait for external event network-vif-plugged-c2e32560-8930-4ad7-a92a-06705df2a932 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.012 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.012 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.013 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:15.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.014 221554 DEBUG nova.virt.libvirt.vif [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:47:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1021887537',display_name='tempest-TestNetworkBasicOps-server-1021887537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1021887537',id=196,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOt56ZiO9bIJjptXP4FC4yvr0FGC/HLSFRJ9KFq+xOa97MgIuZ9vQ16iiUBUOq9Sog4gkJIU1+ZZDVLRrwd13t4yLg1vb6+7D03FGH8zKCqtpVFlfBkElLz7OTcF+2BbSw==',key_name='tempest-TestNetworkBasicOps-1882044848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-amgmft3u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:48:03Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=7f398989-1c3a-465d-afd2-f6fcd860682f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2e32560-8930-4ad7-a92a-06705df2a932", "address": "fa:16:3e:66:02:4a", "network": {"id": "4003c199-b45c-4de7-962b-d4f5fb322ed8", "bridge": "br-int", "label": "tempest-network-smoke--1751171446", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2e32560-89", "ovs_interfaceid": "c2e32560-8930-4ad7-a92a-06705df2a932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.014 221554 DEBUG nova.network.os_vif_util [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "c2e32560-8930-4ad7-a92a-06705df2a932", "address": "fa:16:3e:66:02:4a", "network": {"id": "4003c199-b45c-4de7-962b-d4f5fb322ed8", "bridge": "br-int", "label": "tempest-network-smoke--1751171446", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2e32560-89", "ovs_interfaceid": "c2e32560-8930-4ad7-a92a-06705df2a932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.015 221554 DEBUG nova.network.os_vif_util [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:02:4a,bridge_name='br-int',has_traffic_filtering=True,id=c2e32560-8930-4ad7-a92a-06705df2a932,network=Network(4003c199-b45c-4de7-962b-d4f5fb322ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2e32560-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.016 221554 DEBUG os_vif [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:02:4a,bridge_name='br-int',has_traffic_filtering=True,id=c2e32560-8930-4ad7-a92a-06705df2a932,network=Network(4003c199-b45c-4de7-962b-d4f5fb322ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2e32560-89') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.017 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.017 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.018 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.022 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.022 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2e32560-89, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.023 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc2e32560-89, col_values=(('external_ids', {'iface-id': 'c2e32560-8930-4ad7-a92a-06705df2a932', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:02:4a', 'vm-uuid': '7f398989-1c3a-465d-afd2-f6fcd860682f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.025 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.027 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:48:15 np0005603609 NetworkManager[49064]: <info>  [1769849295.0271] manager: (tapc2e32560-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.033 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.035 221554 INFO os_vif [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:02:4a,bridge_name='br-int',has_traffic_filtering=True,id=c2e32560-8930-4ad7-a92a-06705df2a932,network=Network(4003c199-b45c-4de7-962b-d4f5fb322ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2e32560-89')#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.223 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.225 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.225 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:66:02:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.226 221554 INFO nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Using config drive#033[00m
Jan 31 03:48:15 np0005603609 nova_compute[221550]: 2026-01-31 08:48:15.262 221554 DEBUG nova.storage.rbd_utils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 7f398989-1c3a-465d-afd2-f6fcd860682f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:48:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:16.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:48:16 np0005603609 nova_compute[221550]: 2026-01-31 08:48:16.967 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:17.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.185 221554 INFO nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Creating config drive at /var/lib/nova/instances/7f398989-1c3a-465d-afd2-f6fcd860682f/disk.config#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.189 221554 DEBUG oslo_concurrency.processutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7f398989-1c3a-465d-afd2-f6fcd860682f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfe8k8pm2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.314 221554 DEBUG oslo_concurrency.processutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7f398989-1c3a-465d-afd2-f6fcd860682f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfe8k8pm2" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.387 221554 DEBUG nova.storage.rbd_utils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 7f398989-1c3a-465d-afd2-f6fcd860682f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.393 221554 DEBUG oslo_concurrency.processutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7f398989-1c3a-465d-afd2-f6fcd860682f/disk.config 7f398989-1c3a-465d-afd2-f6fcd860682f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.427 221554 DEBUG nova.compute.manager [req-f9161b93-5961-466a-a2b6-df89a5fc37e1 req-e713dd88-a12a-4284-ad42-2eb6c0c22199 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.429 221554 DEBUG oslo_concurrency.lockutils [req-f9161b93-5961-466a-a2b6-df89a5fc37e1 req-e713dd88-a12a-4284-ad42-2eb6c0c22199 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.430 221554 DEBUG oslo_concurrency.lockutils [req-f9161b93-5961-466a-a2b6-df89a5fc37e1 req-e713dd88-a12a-4284-ad42-2eb6c0c22199 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.431 221554 DEBUG oslo_concurrency.lockutils [req-f9161b93-5961-466a-a2b6-df89a5fc37e1 req-e713dd88-a12a-4284-ad42-2eb6c0c22199 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.431 221554 DEBUG nova.compute.manager [req-f9161b93-5961-466a-a2b6-df89a5fc37e1 req-e713dd88-a12a-4284-ad42-2eb6c0c22199 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Processing event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.433 221554 DEBUG nova.compute.manager [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.438 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849297.4378345, e18328fb-6ec0-4307-a12d-810c8f7cc007 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.438 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.440 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.443 221554 INFO nova.virt.libvirt.driver [-] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Instance spawned successfully.#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.443 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.562 221554 DEBUG oslo_concurrency.processutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7f398989-1c3a-465d-afd2-f6fcd860682f/disk.config 7f398989-1c3a-465d-afd2-f6fcd860682f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.563 221554 INFO nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Deleting local config drive /var/lib/nova/instances/7f398989-1c3a-465d-afd2-f6fcd860682f/disk.config because it was imported into RBD.#033[00m
Jan 31 03:48:17 np0005603609 kernel: tapc2e32560-89: entered promiscuous mode
Jan 31 03:48:17 np0005603609 NetworkManager[49064]: <info>  [1769849297.6038] manager: (tapc2e32560-89): new Tun device (/org/freedesktop/NetworkManager/Devices/413)
Jan 31 03:48:17 np0005603609 systemd-udevd[306806]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:48:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:17Z|00896|binding|INFO|Claiming lport c2e32560-8930-4ad7-a92a-06705df2a932 for this chassis.
Jan 31 03:48:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:17Z|00897|binding|INFO|c2e32560-8930-4ad7-a92a-06705df2a932: Claiming fa:16:3e:66:02:4a 10.100.0.23
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.642 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:17 np0005603609 NetworkManager[49064]: <info>  [1769849297.6611] device (tapc2e32560-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:48:17 np0005603609 NetworkManager[49064]: <info>  [1769849297.6617] device (tapc2e32560-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:48:17 np0005603609 systemd-machined[190912]: New machine qemu-107-instance-000000c4.
Jan 31 03:48:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:17Z|00898|binding|INFO|Setting lport c2e32560-8930-4ad7-a92a-06705df2a932 ovn-installed in OVS
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.669 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.672 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:17 np0005603609 systemd[1]: Started Virtual Machine qemu-107-instance-000000c4.
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.766 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.773 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.773 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.774 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.774 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.774 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.775 221554 DEBUG nova.virt.libvirt.driver [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:48:17 np0005603609 nova_compute[221550]: 2026-01-31 08:48:17.778 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.848 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:02:4a 10.100.0.23'], port_security=['fa:16:3e:66:02:4a 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': '7f398989-1c3a-465d-afd2-f6fcd860682f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4003c199-b45c-4de7-962b-d4f5fb322ed8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4c12029f-8cca-405b-b2bb-5ba39d25fdaa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9622be4c-71c4-4695-b87e-549a010b5dd4, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=c2e32560-8930-4ad7-a92a-06705df2a932) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:48:17 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:17Z|00899|binding|INFO|Setting lport c2e32560-8930-4ad7-a92a-06705df2a932 up in Southbound
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.849 140058 INFO neutron.agent.ovn.metadata.agent [-] Port c2e32560-8930-4ad7-a92a-06705df2a932 in datapath 4003c199-b45c-4de7-962b-d4f5fb322ed8 bound to our chassis#033[00m
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.850 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4003c199-b45c-4de7-962b-d4f5fb322ed8#033[00m
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.859 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[83555c80-afd3-4346-991d-627ca47b021e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.859 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4003c199-b1 in ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.860 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4003c199-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.860 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[33895991-5450-4b36-b236-cad050460a04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.861 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c09bcd8b-2e0d-442c-ad42-eb3f4b9c7bb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.870 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[6537f368-6458-4e8c-a544-318c9064b940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.889 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e73393a7-5a5e-473f-af9c-bfe346a6b0b9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.908 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b5081bbc-990f-418c-9eb3-189b37daabad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:17 np0005603609 systemd-udevd[306811]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:48:17 np0005603609 NetworkManager[49064]: <info>  [1769849297.9127] manager: (tap4003c199-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/414)
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.913 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d263d800-1356-4f5e-8c5f-7e862a61fa26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.934 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd74781-42a8-4f86-88ea-09dd7605ca42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.936 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[de3a06b3-231b-4df0-a93e-a09cfe4870ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:17 np0005603609 NetworkManager[49064]: <info>  [1769849297.9537] device (tap4003c199-b0): carrier: link connected
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.960 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[98428598-ec06-4556-87f6-2f033ebdd5d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.971 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[231339a6-4c87-4a17-a1d4-e3e324d22b9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4003c199-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:38:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 953765, 'reachable_time': 34057, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306842, 'error': None, 'target': 'ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.983 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba671b7-e6dd-400e-a034-256542f08392]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:3860'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 953765, 'tstamp': 953765}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306850, 'error': None, 'target': 'ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:17.994 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d38165dd-cf57-4e2c-b12c-c5e28caefaa8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4003c199-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:38:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 953765, 'reachable_time': 34057, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306859, 'error': None, 'target': 'ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:18.023 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8cdf6f45-3544-43a0-99cf-47b219c2e89b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:18.066 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2d709d-de1a-4076-aefc-f97a855b9d0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:18.067 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4003c199-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:18.067 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:18.067 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4003c199-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:18 np0005603609 nova_compute[221550]: 2026-01-31 08:48:18.069 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:18 np0005603609 kernel: tap4003c199-b0: entered promiscuous mode
Jan 31 03:48:18 np0005603609 NetworkManager[49064]: <info>  [1769849298.0696] manager: (tap4003c199-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:18.071 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4003c199-b0, col_values=(('external_ids', {'iface-id': '9142c9ba-19a2-4811-a708-46cd93afa5e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:18 np0005603609 nova_compute[221550]: 2026-01-31 08:48:18.072 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:18 np0005603609 nova_compute[221550]: 2026-01-31 08:48:18.073 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:18 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:18Z|00900|binding|INFO|Releasing lport 9142c9ba-19a2-4811-a708-46cd93afa5e2 from this chassis (sb_readonly=1)
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:18.074 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4003c199-b45c-4de7-962b-d4f5fb322ed8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4003c199-b45c-4de7-962b-d4f5fb322ed8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:18.075 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[54a6ab2f-53bd-438e-a695-1a04a21c0c02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:18.076 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-4003c199-b45c-4de7-962b-d4f5fb322ed8
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/4003c199-b45c-4de7-962b-d4f5fb322ed8.pid.haproxy
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 4003c199-b45c-4de7-962b-d4f5fb322ed8
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:48:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:18.076 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8', 'env', 'PROCESS_TAG=haproxy-4003c199-b45c-4de7-962b-d4f5fb322ed8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4003c199-b45c-4de7-962b-d4f5fb322ed8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:48:18 np0005603609 nova_compute[221550]: 2026-01-31 08:48:18.079 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:18.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:18 np0005603609 podman[306918]: 2026-01-31 08:48:18.415223449 +0000 UTC m=+0.043410458 container create fd6649a3f95e71af4366b5c5698daa80cf2797a673c9ae699656226a773a481b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 03:48:18 np0005603609 systemd[1]: Started libpod-conmon-fd6649a3f95e71af4366b5c5698daa80cf2797a673c9ae699656226a773a481b.scope.
Jan 31 03:48:18 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:48:18 np0005603609 nova_compute[221550]: 2026-01-31 08:48:18.477 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:48:18 np0005603609 nova_compute[221550]: 2026-01-31 08:48:18.477 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849298.1382391, 7f398989-1c3a-465d-afd2-f6fcd860682f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:48:18 np0005603609 nova_compute[221550]: 2026-01-31 08:48:18.478 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] VM Started (Lifecycle Event)#033[00m
Jan 31 03:48:18 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3351fa9d86d2d58bba802dccedf3259d9733bebecd55d297da03bff9755bc4c0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:18 np0005603609 podman[306918]: 2026-01-31 08:48:18.391725647 +0000 UTC m=+0.019912676 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:48:18 np0005603609 podman[306918]: 2026-01-31 08:48:18.490456989 +0000 UTC m=+0.118644028 container init fd6649a3f95e71af4366b5c5698daa80cf2797a673c9ae699656226a773a481b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:48:18 np0005603609 podman[306918]: 2026-01-31 08:48:18.496221869 +0000 UTC m=+0.124408878 container start fd6649a3f95e71af4366b5c5698daa80cf2797a673c9ae699656226a773a481b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:48:18 np0005603609 neutron-haproxy-ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8[306933]: [NOTICE]   (306937) : New worker (306939) forked
Jan 31 03:48:18 np0005603609 neutron-haproxy-ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8[306933]: [NOTICE]   (306937) : Loading success.
Jan 31 03:48:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:19.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.048 221554 INFO nova.compute.manager [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Took 16.77 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.049 221554 DEBUG nova.compute.manager [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.056 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.059 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849298.138399, 7f398989-1c3a-465d-afd2-f6fcd860682f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.059 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.090 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.093 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.152 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.185 221554 INFO nova.compute.manager [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Took 19.07 seconds to build instance.#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.206 221554 DEBUG nova.network.neutron [req-d52d47c1-5c09-4f8d-9c7b-57e4ade39d0c req-ab45fc6a-ed06-41c8-a9c3-0d9bac88f678 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Updated VIF entry in instance network info cache for port c2e32560-8930-4ad7-a92a-06705df2a932. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.207 221554 DEBUG nova.network.neutron [req-d52d47c1-5c09-4f8d-9c7b-57e4ade39d0c req-ab45fc6a-ed06-41c8-a9c3-0d9bac88f678 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Updating instance_info_cache with network_info: [{"id": "c2e32560-8930-4ad7-a92a-06705df2a932", "address": "fa:16:3e:66:02:4a", "network": {"id": "4003c199-b45c-4de7-962b-d4f5fb322ed8", "bridge": "br-int", "label": "tempest-network-smoke--1751171446", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2e32560-89", "ovs_interfaceid": "c2e32560-8930-4ad7-a92a-06705df2a932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.231 221554 DEBUG oslo_concurrency.lockutils [req-d52d47c1-5c09-4f8d-9c7b-57e4ade39d0c req-ab45fc6a-ed06-41c8-a9c3-0d9bac88f678 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-7f398989-1c3a-465d-afd2-f6fcd860682f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.246 221554 DEBUG oslo_concurrency.lockutils [None req-f782a4cd-67fd-48e5-a04e-c1d9d3003296 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.800 221554 DEBUG nova.compute.manager [req-48343703-66e8-4038-afb0-e139db7d501b req-b68bb276-4626-4e7c-826f-cad9cc690550 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Received event network-vif-plugged-c2e32560-8930-4ad7-a92a-06705df2a932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.800 221554 DEBUG oslo_concurrency.lockutils [req-48343703-66e8-4038-afb0-e139db7d501b req-b68bb276-4626-4e7c-826f-cad9cc690550 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.800 221554 DEBUG oslo_concurrency.lockutils [req-48343703-66e8-4038-afb0-e139db7d501b req-b68bb276-4626-4e7c-826f-cad9cc690550 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.801 221554 DEBUG oslo_concurrency.lockutils [req-48343703-66e8-4038-afb0-e139db7d501b req-b68bb276-4626-4e7c-826f-cad9cc690550 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.801 221554 DEBUG nova.compute.manager [req-48343703-66e8-4038-afb0-e139db7d501b req-b68bb276-4626-4e7c-826f-cad9cc690550 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Processing event network-vif-plugged-c2e32560-8930-4ad7-a92a-06705df2a932 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.801 221554 DEBUG nova.compute.manager [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.808 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.809 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849299.808709, 7f398989-1c3a-465d-afd2-f6fcd860682f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.809 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.813 221554 INFO nova.virt.libvirt.driver [-] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Instance spawned successfully.#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.813 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.864 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.868 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.875 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.876 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.876 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.877 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.877 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.878 221554 DEBUG nova.virt.libvirt.driver [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:48:19 np0005603609 nova_compute[221550]: 2026-01-31 08:48:19.936 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:48:20 np0005603609 nova_compute[221550]: 2026-01-31 08:48:20.025 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:20 np0005603609 nova_compute[221550]: 2026-01-31 08:48:20.100 221554 INFO nova.compute.manager [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Took 16.47 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:48:20 np0005603609 nova_compute[221550]: 2026-01-31 08:48:20.101 221554 DEBUG nova.compute.manager [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:20.107 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=89, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=88) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:48:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:20.109 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:48:20 np0005603609 nova_compute[221550]: 2026-01-31 08:48:20.110 221554 DEBUG nova.compute.manager [req-6622815d-f3af-44bb-8269-5dd336046222 req-3fefe05a-d051-4f1b-b81d-61d3f7287f1a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:20 np0005603609 nova_compute[221550]: 2026-01-31 08:48:20.110 221554 DEBUG oslo_concurrency.lockutils [req-6622815d-f3af-44bb-8269-5dd336046222 req-3fefe05a-d051-4f1b-b81d-61d3f7287f1a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:20 np0005603609 nova_compute[221550]: 2026-01-31 08:48:20.111 221554 DEBUG oslo_concurrency.lockutils [req-6622815d-f3af-44bb-8269-5dd336046222 req-3fefe05a-d051-4f1b-b81d-61d3f7287f1a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:20 np0005603609 nova_compute[221550]: 2026-01-31 08:48:20.111 221554 DEBUG oslo_concurrency.lockutils [req-6622815d-f3af-44bb-8269-5dd336046222 req-3fefe05a-d051-4f1b-b81d-61d3f7287f1a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:20 np0005603609 nova_compute[221550]: 2026-01-31 08:48:20.111 221554 DEBUG nova.compute.manager [req-6622815d-f3af-44bb-8269-5dd336046222 req-3fefe05a-d051-4f1b-b81d-61d3f7287f1a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] No waiting events found dispatching network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:48:20 np0005603609 nova_compute[221550]: 2026-01-31 08:48:20.112 221554 WARNING nova.compute.manager [req-6622815d-f3af-44bb-8269-5dd336046222 req-3fefe05a-d051-4f1b-b81d-61d3f7287f1a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received unexpected event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:48:20 np0005603609 nova_compute[221550]: 2026-01-31 08:48:20.112 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:20.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:20 np0005603609 nova_compute[221550]: 2026-01-31 08:48:20.515 221554 INFO nova.compute.manager [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Took 18.61 seconds to build instance.#033[00m
Jan 31 03:48:20 np0005603609 nova_compute[221550]: 2026-01-31 08:48:20.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:20 np0005603609 nova_compute[221550]: 2026-01-31 08:48:20.868 221554 DEBUG oslo_concurrency.lockutils [None req-8cc3d896-5c11-466f-b606-9279296ec6a9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "7f398989-1c3a-465d-afd2-f6fcd860682f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:21.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:21 np0005603609 nova_compute[221550]: 2026-01-31 08:48:21.969 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:22 np0005603609 nova_compute[221550]: 2026-01-31 08:48:22.151 221554 DEBUG nova.compute.manager [req-1391c1e9-86d1-49f8-9136-58cca8f8db9d req-b8198567-29da-452f-b8e2-6d69e7030cdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Received event network-vif-plugged-c2e32560-8930-4ad7-a92a-06705df2a932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:22 np0005603609 nova_compute[221550]: 2026-01-31 08:48:22.151 221554 DEBUG oslo_concurrency.lockutils [req-1391c1e9-86d1-49f8-9136-58cca8f8db9d req-b8198567-29da-452f-b8e2-6d69e7030cdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:22 np0005603609 nova_compute[221550]: 2026-01-31 08:48:22.152 221554 DEBUG oslo_concurrency.lockutils [req-1391c1e9-86d1-49f8-9136-58cca8f8db9d req-b8198567-29da-452f-b8e2-6d69e7030cdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:22 np0005603609 nova_compute[221550]: 2026-01-31 08:48:22.152 221554 DEBUG oslo_concurrency.lockutils [req-1391c1e9-86d1-49f8-9136-58cca8f8db9d req-b8198567-29da-452f-b8e2-6d69e7030cdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:22 np0005603609 nova_compute[221550]: 2026-01-31 08:48:22.152 221554 DEBUG nova.compute.manager [req-1391c1e9-86d1-49f8-9136-58cca8f8db9d req-b8198567-29da-452f-b8e2-6d69e7030cdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] No waiting events found dispatching network-vif-plugged-c2e32560-8930-4ad7-a92a-06705df2a932 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:48:22 np0005603609 nova_compute[221550]: 2026-01-31 08:48:22.152 221554 WARNING nova.compute.manager [req-1391c1e9-86d1-49f8-9136-58cca8f8db9d req-b8198567-29da-452f-b8e2-6d69e7030cdf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Received unexpected event network-vif-plugged-c2e32560-8930-4ad7-a92a-06705df2a932 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:48:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:48:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:22.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:48:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:23.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:24.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:48:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:25.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:48:25 np0005603609 nova_compute[221550]: 2026-01-31 08:48:25.027 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:25 np0005603609 nova_compute[221550]: 2026-01-31 08:48:25.091 221554 DEBUG nova.objects.instance [None req-512fecfe-4740-4a08-9fde-2ce59d2406e5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lazy-loading 'pci_devices' on Instance uuid e18328fb-6ec0-4307-a12d-810c8f7cc007 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:25 np0005603609 nova_compute[221550]: 2026-01-31 08:48:25.120 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849305.12006, e18328fb-6ec0-4307-a12d-810c8f7cc007 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:48:25 np0005603609 nova_compute[221550]: 2026-01-31 08:48:25.120 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:48:25 np0005603609 nova_compute[221550]: 2026-01-31 08:48:25.151 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:25 np0005603609 nova_compute[221550]: 2026-01-31 08:48:25.156 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:48:25 np0005603609 nova_compute[221550]: 2026-01-31 08:48:25.210 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 31 03:48:25 np0005603609 kernel: tapdf761877-ca (unregistering): left promiscuous mode
Jan 31 03:48:25 np0005603609 NetworkManager[49064]: <info>  [1769849305.6341] device (tapdf761877-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:48:25 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:25Z|00901|binding|INFO|Releasing lport df761877-caf0-49b5-8d74-374169310e23 from this chassis (sb_readonly=0)
Jan 31 03:48:25 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:25Z|00902|binding|INFO|Setting lport df761877-caf0-49b5-8d74-374169310e23 down in Southbound
Jan 31 03:48:25 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:25Z|00903|binding|INFO|Removing iface tapdf761877-ca ovn-installed in OVS
Jan 31 03:48:25 np0005603609 nova_compute[221550]: 2026-01-31 08:48:25.640 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:25 np0005603609 nova_compute[221550]: 2026-01-31 08:48:25.652 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:25 np0005603609 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d000000c3.scope: Deactivated successfully.
Jan 31 03:48:25 np0005603609 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d000000c3.scope: Consumed 8.499s CPU time.
Jan 31 03:48:25 np0005603609 systemd-machined[190912]: Machine qemu-106-instance-000000c3 terminated.
Jan 31 03:48:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:25.712 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:69:36 10.100.0.9'], port_security=['fa:16:3e:32:69:36 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e18328fb-6ec0-4307-a12d-810c8f7cc007', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-465c490f-bfbd-42dd-8ef3-8b9af8836224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e1a1c923b1d14e44b30717c742870aa1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '576af920-946c-41e4-9f2b-8a2e422edf05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e96e68d5-5e8f-45d3-a26f-458c20b777d9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=df761877-caf0-49b5-8d74-374169310e23) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:48:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:25.714 140058 INFO neutron.agent.ovn.metadata.agent [-] Port df761877-caf0-49b5-8d74-374169310e23 in datapath 465c490f-bfbd-42dd-8ef3-8b9af8836224 unbound from our chassis#033[00m
Jan 31 03:48:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:25.714 140058 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 465c490f-bfbd-42dd-8ef3-8b9af8836224 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 31 03:48:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:25.716 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[49a186a9-cd9a-48db-91b5-898c9a4e73e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:25 np0005603609 nova_compute[221550]: 2026-01-31 08:48:25.795 221554 DEBUG nova.compute.manager [None req-512fecfe-4740-4a08-9fde-2ce59d2406e5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:26.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:26 np0005603609 nova_compute[221550]: 2026-01-31 08:48:26.673 221554 DEBUG nova.compute.manager [req-93d013c7-3373-4f59-8734-270aad8675a4 req-00f6d5f1-364f-403a-b94e-0c74e5c2a52b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received event network-vif-unplugged-df761877-caf0-49b5-8d74-374169310e23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:26 np0005603609 nova_compute[221550]: 2026-01-31 08:48:26.674 221554 DEBUG oslo_concurrency.lockutils [req-93d013c7-3373-4f59-8734-270aad8675a4 req-00f6d5f1-364f-403a-b94e-0c74e5c2a52b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:26 np0005603609 nova_compute[221550]: 2026-01-31 08:48:26.675 221554 DEBUG oslo_concurrency.lockutils [req-93d013c7-3373-4f59-8734-270aad8675a4 req-00f6d5f1-364f-403a-b94e-0c74e5c2a52b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:26 np0005603609 nova_compute[221550]: 2026-01-31 08:48:26.676 221554 DEBUG oslo_concurrency.lockutils [req-93d013c7-3373-4f59-8734-270aad8675a4 req-00f6d5f1-364f-403a-b94e-0c74e5c2a52b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:26 np0005603609 nova_compute[221550]: 2026-01-31 08:48:26.676 221554 DEBUG nova.compute.manager [req-93d013c7-3373-4f59-8734-270aad8675a4 req-00f6d5f1-364f-403a-b94e-0c74e5c2a52b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] No waiting events found dispatching network-vif-unplugged-df761877-caf0-49b5-8d74-374169310e23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:48:26 np0005603609 nova_compute[221550]: 2026-01-31 08:48:26.677 221554 WARNING nova.compute.manager [req-93d013c7-3373-4f59-8734-270aad8675a4 req-00f6d5f1-364f-403a-b94e-0c74e5c2a52b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received unexpected event network-vif-unplugged-df761877-caf0-49b5-8d74-374169310e23 for instance with vm_state suspended and task_state None.#033[00m
Jan 31 03:48:26 np0005603609 nova_compute[221550]: 2026-01-31 08:48:26.971 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:27.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:27.111 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '89'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:28.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:29.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:29 np0005603609 nova_compute[221550]: 2026-01-31 08:48:29.297 221554 DEBUG nova.compute.manager [req-1fbe65e6-5a69-49d1-bf1d-599815117b8d req-ae87a187-6700-47d8-8d88-1224f9eff2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:29 np0005603609 nova_compute[221550]: 2026-01-31 08:48:29.298 221554 DEBUG oslo_concurrency.lockutils [req-1fbe65e6-5a69-49d1-bf1d-599815117b8d req-ae87a187-6700-47d8-8d88-1224f9eff2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:29 np0005603609 nova_compute[221550]: 2026-01-31 08:48:29.298 221554 DEBUG oslo_concurrency.lockutils [req-1fbe65e6-5a69-49d1-bf1d-599815117b8d req-ae87a187-6700-47d8-8d88-1224f9eff2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:29 np0005603609 nova_compute[221550]: 2026-01-31 08:48:29.298 221554 DEBUG oslo_concurrency.lockutils [req-1fbe65e6-5a69-49d1-bf1d-599815117b8d req-ae87a187-6700-47d8-8d88-1224f9eff2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:29 np0005603609 nova_compute[221550]: 2026-01-31 08:48:29.299 221554 DEBUG nova.compute.manager [req-1fbe65e6-5a69-49d1-bf1d-599815117b8d req-ae87a187-6700-47d8-8d88-1224f9eff2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] No waiting events found dispatching network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:48:29 np0005603609 nova_compute[221550]: 2026-01-31 08:48:29.300 221554 WARNING nova.compute.manager [req-1fbe65e6-5a69-49d1-bf1d-599815117b8d req-ae87a187-6700-47d8-8d88-1224f9eff2e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received unexpected event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 for instance with vm_state suspended and task_state None.#033[00m
Jan 31 03:48:29 np0005603609 nova_compute[221550]: 2026-01-31 08:48:29.821 221554 INFO nova.compute.manager [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Resuming#033[00m
Jan 31 03:48:29 np0005603609 nova_compute[221550]: 2026-01-31 08:48:29.822 221554 DEBUG nova.objects.instance [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lazy-loading 'flavor' on Instance uuid e18328fb-6ec0-4307-a12d-810c8f7cc007 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:29 np0005603609 nova_compute[221550]: 2026-01-31 08:48:29.948 221554 DEBUG oslo_concurrency.lockutils [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Acquiring lock "refresh_cache-e18328fb-6ec0-4307-a12d-810c8f7cc007" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:48:29 np0005603609 nova_compute[221550]: 2026-01-31 08:48:29.949 221554 DEBUG oslo_concurrency.lockutils [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Acquired lock "refresh_cache-e18328fb-6ec0-4307-a12d-810c8f7cc007" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:48:29 np0005603609 nova_compute[221550]: 2026-01-31 08:48:29.949 221554 DEBUG nova.network.neutron [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:48:30 np0005603609 nova_compute[221550]: 2026-01-31 08:48:30.029 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:48:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:30.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:48:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:31.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:31 np0005603609 nova_compute[221550]: 2026-01-31 08:48:31.972 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:32.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:32 np0005603609 nova_compute[221550]: 2026-01-31 08:48:32.660 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:32 np0005603609 NetworkManager[49064]: <info>  [1769849312.6617] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Jan 31 03:48:32 np0005603609 NetworkManager[49064]: <info>  [1769849312.6625] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Jan 31 03:48:32 np0005603609 nova_compute[221550]: 2026-01-31 08:48:32.691 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:32Z|00904|binding|INFO|Releasing lport 9142c9ba-19a2-4811-a708-46cd93afa5e2 from this chassis (sb_readonly=0)
Jan 31 03:48:32 np0005603609 nova_compute[221550]: 2026-01-31 08:48:32.708 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:32 np0005603609 nova_compute[221550]: 2026-01-31 08:48:32.712 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:48:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:33.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:48:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:33Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:02:4a 10.100.0.23
Jan 31 03:48:33 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:33Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:02:4a 10.100.0.23
Jan 31 03:48:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:34.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:48:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:35.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:48:35 np0005603609 nova_compute[221550]: 2026-01-31 08:48:35.061 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:48:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:36.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:48:36 np0005603609 nova_compute[221550]: 2026-01-31 08:48:36.975 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:37.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.495 221554 DEBUG nova.network.neutron [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Updating instance_info_cache with network_info: [{"id": "df761877-caf0-49b5-8d74-374169310e23", "address": "fa:16:3e:32:69:36", "network": {"id": "465c490f-bfbd-42dd-8ef3-8b9af8836224", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1184908098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e1a1c923b1d14e44b30717c742870aa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf761877-ca", "ovs_interfaceid": "df761877-caf0-49b5-8d74-374169310e23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.567 221554 DEBUG oslo_concurrency.lockutils [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Releasing lock "refresh_cache-e18328fb-6ec0-4307-a12d-810c8f7cc007" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.572 221554 DEBUG nova.virt.libvirt.vif [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:47:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-616284573',display_name='tempest-TestServerAdvancedOps-server-616284573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-616284573',id=195,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:48:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e1a1c923b1d14e44b30717c742870aa1',ramdisk_id='',reservation_id='r-hxna07n0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1611358131',owner_user_name='tempest-TestServerAdvancedOps-1611358131-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:48:25Z,user_data=None,user_id='429612bcc82f4c6a99fe57f4d9b01624',uuid=e18328fb-6ec0-4307-a12d-810c8f7cc007,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "df761877-caf0-49b5-8d74-374169310e23", "address": "fa:16:3e:32:69:36", "network": {"id": "465c490f-bfbd-42dd-8ef3-8b9af8836224", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1184908098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e1a1c923b1d14e44b30717c742870aa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf761877-ca", "ovs_interfaceid": "df761877-caf0-49b5-8d74-374169310e23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.572 221554 DEBUG nova.network.os_vif_util [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Converting VIF {"id": "df761877-caf0-49b5-8d74-374169310e23", "address": "fa:16:3e:32:69:36", "network": {"id": "465c490f-bfbd-42dd-8ef3-8b9af8836224", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1184908098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e1a1c923b1d14e44b30717c742870aa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf761877-ca", "ovs_interfaceid": "df761877-caf0-49b5-8d74-374169310e23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.573 221554 DEBUG nova.network.os_vif_util [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:36,bridge_name='br-int',has_traffic_filtering=True,id=df761877-caf0-49b5-8d74-374169310e23,network=Network(465c490f-bfbd-42dd-8ef3-8b9af8836224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf761877-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.573 221554 DEBUG os_vif [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:36,bridge_name='br-int',has_traffic_filtering=True,id=df761877-caf0-49b5-8d74-374169310e23,network=Network(465c490f-bfbd-42dd-8ef3-8b9af8836224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf761877-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.574 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.574 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.574 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.579 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.581 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf761877-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.581 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf761877-ca, col_values=(('external_ids', {'iface-id': 'df761877-caf0-49b5-8d74-374169310e23', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:69:36', 'vm-uuid': 'e18328fb-6ec0-4307-a12d-810c8f7cc007'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.583 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.583 221554 INFO os_vif [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:36,bridge_name='br-int',has_traffic_filtering=True,id=df761877-caf0-49b5-8d74-374169310e23,network=Network(465c490f-bfbd-42dd-8ef3-8b9af8836224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf761877-ca')#033[00m
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.642 221554 DEBUG nova.objects.instance [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lazy-loading 'numa_topology' on Instance uuid e18328fb-6ec0-4307-a12d-810c8f7cc007 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:37 np0005603609 kernel: tapdf761877-ca: entered promiscuous mode
Jan 31 03:48:37 np0005603609 NetworkManager[49064]: <info>  [1769849317.7165] manager: (tapdf761877-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/418)
Jan 31 03:48:37 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:37Z|00905|binding|INFO|Claiming lport df761877-caf0-49b5-8d74-374169310e23 for this chassis.
Jan 31 03:48:37 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:37Z|00906|binding|INFO|df761877-caf0-49b5-8d74-374169310e23: Claiming fa:16:3e:32:69:36 10.100.0.9
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.717 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.720 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.733 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:37 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:37Z|00907|binding|INFO|Setting lport df761877-caf0-49b5-8d74-374169310e23 ovn-installed in OVS
Jan 31 03:48:37 np0005603609 systemd-udevd[306986]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:48:37 np0005603609 nova_compute[221550]: 2026-01-31 08:48:37.737 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:37 np0005603609 NetworkManager[49064]: <info>  [1769849317.7467] device (tapdf761877-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:48:37 np0005603609 NetworkManager[49064]: <info>  [1769849317.7475] device (tapdf761877-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:48:37 np0005603609 systemd-machined[190912]: New machine qemu-108-instance-000000c3.
Jan 31 03:48:37 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:37Z|00908|binding|INFO|Setting lport df761877-caf0-49b5-8d74-374169310e23 up in Southbound
Jan 31 03:48:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:37.753 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:69:36 10.100.0.9'], port_security=['fa:16:3e:32:69:36 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e18328fb-6ec0-4307-a12d-810c8f7cc007', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-465c490f-bfbd-42dd-8ef3-8b9af8836224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e1a1c923b1d14e44b30717c742870aa1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '576af920-946c-41e4-9f2b-8a2e422edf05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e96e68d5-5e8f-45d3-a26f-458c20b777d9, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=df761877-caf0-49b5-8d74-374169310e23) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:48:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:37.755 140058 INFO neutron.agent.ovn.metadata.agent [-] Port df761877-caf0-49b5-8d74-374169310e23 in datapath 465c490f-bfbd-42dd-8ef3-8b9af8836224 bound to our chassis#033[00m
Jan 31 03:48:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:37.755 140058 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 465c490f-bfbd-42dd-8ef3-8b9af8836224 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 31 03:48:37 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:37.756 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c20d2a-5a75-4cc5-9bd9-8bcb719a305f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603609 systemd[1]: Started Virtual Machine qemu-108-instance-000000c3.
Jan 31 03:48:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:38.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:38 np0005603609 nova_compute[221550]: 2026-01-31 08:48:38.788 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for e18328fb-6ec0-4307-a12d-810c8f7cc007 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:48:38 np0005603609 nova_compute[221550]: 2026-01-31 08:48:38.788 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849318.7880356, e18328fb-6ec0-4307-a12d-810c8f7cc007 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:48:38 np0005603609 nova_compute[221550]: 2026-01-31 08:48:38.788 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] VM Started (Lifecycle Event)#033[00m
Jan 31 03:48:38 np0005603609 nova_compute[221550]: 2026-01-31 08:48:38.796 221554 DEBUG nova.compute.manager [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:48:38 np0005603609 nova_compute[221550]: 2026-01-31 08:48:38.797 221554 DEBUG nova.objects.instance [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lazy-loading 'pci_devices' on Instance uuid e18328fb-6ec0-4307-a12d-810c8f7cc007 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:39 np0005603609 nova_compute[221550]: 2026-01-31 08:48:39.026 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:39 np0005603609 nova_compute[221550]: 2026-01-31 08:48:39.032 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:48:39 np0005603609 nova_compute[221550]: 2026-01-31 08:48:39.041 221554 INFO nova.virt.libvirt.driver [-] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Instance running successfully.#033[00m
Jan 31 03:48:39 np0005603609 virtqemud[221292]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:48:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:48:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:39.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:48:39 np0005603609 nova_compute[221550]: 2026-01-31 08:48:39.044 221554 DEBUG nova.virt.libvirt.guest [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 03:48:39 np0005603609 nova_compute[221550]: 2026-01-31 08:48:39.045 221554 DEBUG nova.compute.manager [None req-9d95214a-2b46-4dd2-b493-f73c0343ef39 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:39 np0005603609 nova_compute[221550]: 2026-01-31 08:48:39.169 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Jan 31 03:48:39 np0005603609 nova_compute[221550]: 2026-01-31 08:48:39.170 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849318.7911823, e18328fb-6ec0-4307-a12d-810c8f7cc007 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:48:39 np0005603609 nova_compute[221550]: 2026-01-31 08:48:39.170 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:48:39 np0005603609 nova_compute[221550]: 2026-01-31 08:48:39.236 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:39 np0005603609 nova_compute[221550]: 2026-01-31 08:48:39.240 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:48:39 np0005603609 nova_compute[221550]: 2026-01-31 08:48:39.563 221554 DEBUG nova.compute.manager [req-38e1e757-e15d-41b6-9376-a4ca3702537f req-41612d5a-0f6c-4a4f-a6f0-8634babc800e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:39 np0005603609 nova_compute[221550]: 2026-01-31 08:48:39.564 221554 DEBUG oslo_concurrency.lockutils [req-38e1e757-e15d-41b6-9376-a4ca3702537f req-41612d5a-0f6c-4a4f-a6f0-8634babc800e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:39 np0005603609 nova_compute[221550]: 2026-01-31 08:48:39.565 221554 DEBUG oslo_concurrency.lockutils [req-38e1e757-e15d-41b6-9376-a4ca3702537f req-41612d5a-0f6c-4a4f-a6f0-8634babc800e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:39 np0005603609 nova_compute[221550]: 2026-01-31 08:48:39.565 221554 DEBUG oslo_concurrency.lockutils [req-38e1e757-e15d-41b6-9376-a4ca3702537f req-41612d5a-0f6c-4a4f-a6f0-8634babc800e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:39 np0005603609 nova_compute[221550]: 2026-01-31 08:48:39.566 221554 DEBUG nova.compute.manager [req-38e1e757-e15d-41b6-9376-a4ca3702537f req-41612d5a-0f6c-4a4f-a6f0-8634babc800e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] No waiting events found dispatching network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:48:39 np0005603609 nova_compute[221550]: 2026-01-31 08:48:39.566 221554 WARNING nova.compute.manager [req-38e1e757-e15d-41b6-9376-a4ca3702537f req-41612d5a-0f6c-4a4f-a6f0-8634babc800e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received unexpected event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:48:40 np0005603609 nova_compute[221550]: 2026-01-31 08:48:40.064 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:40.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:41.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:41 np0005603609 nova_compute[221550]: 2026-01-31 08:48:41.978 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:48:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:42.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:48:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:43.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:43 np0005603609 nova_compute[221550]: 2026-01-31 08:48:43.314 221554 DEBUG nova.compute.manager [req-adb2e5b4-5e95-4a61-8f48-63b5976d13b6 req-daf138f8-e139-4169-8767-1e764d7da35c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:43 np0005603609 nova_compute[221550]: 2026-01-31 08:48:43.318 221554 DEBUG oslo_concurrency.lockutils [req-adb2e5b4-5e95-4a61-8f48-63b5976d13b6 req-daf138f8-e139-4169-8767-1e764d7da35c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:43 np0005603609 nova_compute[221550]: 2026-01-31 08:48:43.318 221554 DEBUG oslo_concurrency.lockutils [req-adb2e5b4-5e95-4a61-8f48-63b5976d13b6 req-daf138f8-e139-4169-8767-1e764d7da35c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:43 np0005603609 nova_compute[221550]: 2026-01-31 08:48:43.318 221554 DEBUG oslo_concurrency.lockutils [req-adb2e5b4-5e95-4a61-8f48-63b5976d13b6 req-daf138f8-e139-4169-8767-1e764d7da35c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:43 np0005603609 nova_compute[221550]: 2026-01-31 08:48:43.318 221554 DEBUG nova.compute.manager [req-adb2e5b4-5e95-4a61-8f48-63b5976d13b6 req-daf138f8-e139-4169-8767-1e764d7da35c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] No waiting events found dispatching network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:48:43 np0005603609 nova_compute[221550]: 2026-01-31 08:48:43.319 221554 WARNING nova.compute.manager [req-adb2e5b4-5e95-4a61-8f48-63b5976d13b6 req-daf138f8-e139-4169-8767-1e764d7da35c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received unexpected event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:48:44 np0005603609 podman[307040]: 2026-01-31 08:48:44.190688967 +0000 UTC m=+0.066329856 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:48:44 np0005603609 podman[307039]: 2026-01-31 08:48:44.221835164 +0000 UTC m=+0.098639971 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 03:48:44 np0005603609 nova_compute[221550]: 2026-01-31 08:48:44.362 221554 DEBUG nova.objects.instance [None req-673a5ede-6d58-4cb4-9720-b91545593395 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lazy-loading 'pci_devices' on Instance uuid e18328fb-6ec0-4307-a12d-810c8f7cc007 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:44.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:48:44.576603) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324577125, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 776, "num_deletes": 250, "total_data_size": 1454625, "memory_usage": 1469400, "flush_reason": "Manual Compaction"}
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324602639, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 607509, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82119, "largest_seqno": 82890, "table_properties": {"data_size": 604467, "index_size": 949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8325, "raw_average_key_size": 20, "raw_value_size": 598045, "raw_average_value_size": 1473, "num_data_blocks": 43, "num_entries": 406, "num_filter_entries": 406, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849267, "oldest_key_time": 1769849267, "file_creation_time": 1769849324, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 25656 microseconds, and 2233 cpu microseconds.
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:48:44.602705) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 607509 bytes OK
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:48:44.602736) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:48:44.663774) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:48:44.663815) EVENT_LOG_v1 {"time_micros": 1769849324663806, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:48:44.663838) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1450587, prev total WAL file size 1451300, number of live WAL files 2.
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:48:44.664682) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373630' seq:72057594037927935, type:22 .. '6D6772737461740033303131' seq:0, type:0; will stop at (end)
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(593KB)], [168(13MB)]
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324664729, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 14350637, "oldest_snapshot_seqno": -1}
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 10137 keys, 10877355 bytes, temperature: kUnknown
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324767602, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 10877355, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10815627, "index_size": 35311, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 268912, "raw_average_key_size": 26, "raw_value_size": 10641836, "raw_average_value_size": 1049, "num_data_blocks": 1330, "num_entries": 10137, "num_filter_entries": 10137, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769849324, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:48:44.767840) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10877355 bytes
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:48:44.769807) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.4 rd, 105.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 13.1 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(41.5) write-amplify(17.9) OK, records in: 10623, records dropped: 486 output_compression: NoCompression
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:48:44.769835) EVENT_LOG_v1 {"time_micros": 1769849324769821, "job": 108, "event": "compaction_finished", "compaction_time_micros": 102940, "compaction_time_cpu_micros": 21226, "output_level": 6, "num_output_files": 1, "total_output_size": 10877355, "num_input_records": 10623, "num_output_records": 10137, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324770061, "job": 108, "event": "table_file_deletion", "file_number": 170}
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324771855, "job": 108, "event": "table_file_deletion", "file_number": 168}
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:48:44.664562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:48:44.771976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:48:44.771983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:48:44.771985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:48:44.771987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:48:44 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:48:44.771988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:48:44 np0005603609 nova_compute[221550]: 2026-01-31 08:48:44.906 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849324.906242, e18328fb-6ec0-4307-a12d-810c8f7cc007 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:48:44 np0005603609 nova_compute[221550]: 2026-01-31 08:48:44.906 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:48:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:48:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:45.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:48:45 np0005603609 nova_compute[221550]: 2026-01-31 08:48:45.065 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:45 np0005603609 nova_compute[221550]: 2026-01-31 08:48:45.074 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:45 np0005603609 nova_compute[221550]: 2026-01-31 08:48:45.077 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:48:45 np0005603609 nova_compute[221550]: 2026-01-31 08:48:45.552 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 31 03:48:45 np0005603609 kernel: tapdf761877-ca (unregistering): left promiscuous mode
Jan 31 03:48:45 np0005603609 NetworkManager[49064]: <info>  [1769849325.8133] device (tapdf761877-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:48:45 np0005603609 nova_compute[221550]: 2026-01-31 08:48:45.821 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:45Z|00909|binding|INFO|Releasing lport df761877-caf0-49b5-8d74-374169310e23 from this chassis (sb_readonly=0)
Jan 31 03:48:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:45Z|00910|binding|INFO|Setting lport df761877-caf0-49b5-8d74-374169310e23 down in Southbound
Jan 31 03:48:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:45Z|00911|binding|INFO|Removing iface tapdf761877-ca ovn-installed in OVS
Jan 31 03:48:45 np0005603609 nova_compute[221550]: 2026-01-31 08:48:45.823 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:45 np0005603609 nova_compute[221550]: 2026-01-31 08:48:45.827 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:45 np0005603609 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d000000c3.scope: Deactivated successfully.
Jan 31 03:48:45 np0005603609 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d000000c3.scope: Consumed 5.941s CPU time.
Jan 31 03:48:45 np0005603609 systemd-machined[190912]: Machine qemu-108-instance-000000c3 terminated.
Jan 31 03:48:45 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:45.925 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:69:36 10.100.0.9'], port_security=['fa:16:3e:32:69:36 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e18328fb-6ec0-4307-a12d-810c8f7cc007', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-465c490f-bfbd-42dd-8ef3-8b9af8836224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e1a1c923b1d14e44b30717c742870aa1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '576af920-946c-41e4-9f2b-8a2e422edf05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e96e68d5-5e8f-45d3-a26f-458c20b777d9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=df761877-caf0-49b5-8d74-374169310e23) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:48:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:45.926 140058 INFO neutron.agent.ovn.metadata.agent [-] Port df761877-caf0-49b5-8d74-374169310e23 in datapath 465c490f-bfbd-42dd-8ef3-8b9af8836224 unbound from our chassis#033[00m
Jan 31 03:48:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:45.926 140058 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 465c490f-bfbd-42dd-8ef3-8b9af8836224 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 31 03:48:46 np0005603609 nova_compute[221550]: 2026-01-31 08:48:46.154 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:46 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:46.153 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7e84d7fa-4a64-4be8-9ef4-d21f9eebc85a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:46 np0005603609 nova_compute[221550]: 2026-01-31 08:48:46.157 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:46 np0005603609 nova_compute[221550]: 2026-01-31 08:48:46.161 221554 DEBUG nova.compute.manager [None req-673a5ede-6d58-4cb4-9720-b91545593395 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:48:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:46.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:48:46 np0005603609 nova_compute[221550]: 2026-01-31 08:48:46.981 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:47.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:48:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:48:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:48:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:48:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.204 221554 DEBUG nova.compute.manager [req-365913f2-7cc0-4425-9883-0b71b878f381 req-e26b425d-1ca2-4a78-bb78-dbf23dca05bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received event network-vif-unplugged-df761877-caf0-49b5-8d74-374169310e23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.205 221554 DEBUG oslo_concurrency.lockutils [req-365913f2-7cc0-4425-9883-0b71b878f381 req-e26b425d-1ca2-4a78-bb78-dbf23dca05bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.205 221554 DEBUG oslo_concurrency.lockutils [req-365913f2-7cc0-4425-9883-0b71b878f381 req-e26b425d-1ca2-4a78-bb78-dbf23dca05bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.206 221554 DEBUG oslo_concurrency.lockutils [req-365913f2-7cc0-4425-9883-0b71b878f381 req-e26b425d-1ca2-4a78-bb78-dbf23dca05bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.206 221554 DEBUG nova.compute.manager [req-365913f2-7cc0-4425-9883-0b71b878f381 req-e26b425d-1ca2-4a78-bb78-dbf23dca05bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] No waiting events found dispatching network-vif-unplugged-df761877-caf0-49b5-8d74-374169310e23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.206 221554 WARNING nova.compute.manager [req-365913f2-7cc0-4425-9883-0b71b878f381 req-e26b425d-1ca2-4a78-bb78-dbf23dca05bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received unexpected event network-vif-unplugged-df761877-caf0-49b5-8d74-374169310e23 for instance with vm_state suspended and task_state None.#033[00m
Jan 31 03:48:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:48:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:48.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.681 221554 INFO nova.compute.manager [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Resuming#033[00m
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.682 221554 DEBUG nova.objects.instance [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lazy-loading 'flavor' on Instance uuid e18328fb-6ec0-4307-a12d-810c8f7cc007 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.747 221554 DEBUG oslo_concurrency.lockutils [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "7f398989-1c3a-465d-afd2-f6fcd860682f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.748 221554 DEBUG oslo_concurrency.lockutils [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "7f398989-1c3a-465d-afd2-f6fcd860682f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.748 221554 DEBUG oslo_concurrency.lockutils [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.749 221554 DEBUG oslo_concurrency.lockutils [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.749 221554 DEBUG oslo_concurrency.lockutils [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.750 221554 INFO nova.compute.manager [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Terminating instance#033[00m
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.751 221554 DEBUG nova.compute.manager [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:48:48 np0005603609 kernel: tapc2e32560-89 (unregistering): left promiscuous mode
Jan 31 03:48:48 np0005603609 NetworkManager[49064]: <info>  [1769849328.8029] device (tapc2e32560-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.804 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.811 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:48 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:48Z|00912|binding|INFO|Releasing lport c2e32560-8930-4ad7-a92a-06705df2a932 from this chassis (sb_readonly=0)
Jan 31 03:48:48 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:48Z|00913|binding|INFO|Setting lport c2e32560-8930-4ad7-a92a-06705df2a932 down in Southbound
Jan 31 03:48:48 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:48Z|00914|binding|INFO|Removing iface tapc2e32560-89 ovn-installed in OVS
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.813 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.816 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:48 np0005603609 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d000000c4.scope: Deactivated successfully.
Jan 31 03:48:48 np0005603609 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d000000c4.scope: Consumed 13.811s CPU time.
Jan 31 03:48:48 np0005603609 systemd-machined[190912]: Machine qemu-107-instance-000000c4 terminated.
Jan 31 03:48:48 np0005603609 NetworkManager[49064]: <info>  [1769849328.9675] manager: (tapc2e32560-89): new Tun device (/org/freedesktop/NetworkManager/Devices/419)
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.984 221554 INFO nova.virt.libvirt.driver [-] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Instance destroyed successfully.#033[00m
Jan 31 03:48:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:48.984 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:02:4a 10.100.0.23'], port_security=['fa:16:3e:66:02:4a 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': '7f398989-1c3a-465d-afd2-f6fcd860682f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4003c199-b45c-4de7-962b-d4f5fb322ed8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4c12029f-8cca-405b-b2bb-5ba39d25fdaa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9622be4c-71c4-4695-b87e-549a010b5dd4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=c2e32560-8930-4ad7-a92a-06705df2a932) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:48:48 np0005603609 nova_compute[221550]: 2026-01-31 08:48:48.985 221554 DEBUG nova.objects.instance [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'resources' on Instance uuid 7f398989-1c3a-465d-afd2-f6fcd860682f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:48.986 140058 INFO neutron.agent.ovn.metadata.agent [-] Port c2e32560-8930-4ad7-a92a-06705df2a932 in datapath 4003c199-b45c-4de7-962b-d4f5fb322ed8 unbound from our chassis#033[00m
Jan 31 03:48:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:48.988 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4003c199-b45c-4de7-962b-d4f5fb322ed8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:48:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:48.989 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fdfb595c-943f-4636-bdc2-c55fe54441ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:48 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:48.990 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8 namespace which is not needed anymore#033[00m
Jan 31 03:48:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:49.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:49 np0005603609 neutron-haproxy-ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8[306933]: [NOTICE]   (306937) : haproxy version is 2.8.14-c23fe91
Jan 31 03:48:49 np0005603609 neutron-haproxy-ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8[306933]: [NOTICE]   (306937) : path to executable is /usr/sbin/haproxy
Jan 31 03:48:49 np0005603609 neutron-haproxy-ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8[306933]: [WARNING]  (306937) : Exiting Master process...
Jan 31 03:48:49 np0005603609 neutron-haproxy-ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8[306933]: [ALERT]    (306937) : Current worker (306939) exited with code 143 (Terminated)
Jan 31 03:48:49 np0005603609 neutron-haproxy-ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8[306933]: [WARNING]  (306937) : All workers exited. Exiting... (0)
Jan 31 03:48:49 np0005603609 systemd[1]: libpod-fd6649a3f95e71af4366b5c5698daa80cf2797a673c9ae699656226a773a481b.scope: Deactivated successfully.
Jan 31 03:48:49 np0005603609 podman[307275]: 2026-01-31 08:48:49.119925503 +0000 UTC m=+0.043632393 container died fd6649a3f95e71af4366b5c5698daa80cf2797a673c9ae699656226a773a481b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:48:49 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd6649a3f95e71af4366b5c5698daa80cf2797a673c9ae699656226a773a481b-userdata-shm.mount: Deactivated successfully.
Jan 31 03:48:49 np0005603609 systemd[1]: var-lib-containers-storage-overlay-3351fa9d86d2d58bba802dccedf3259d9733bebecd55d297da03bff9755bc4c0-merged.mount: Deactivated successfully.
Jan 31 03:48:49 np0005603609 podman[307275]: 2026-01-31 08:48:49.155758055 +0000 UTC m=+0.079464935 container cleanup fd6649a3f95e71af4366b5c5698daa80cf2797a673c9ae699656226a773a481b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 03:48:49 np0005603609 systemd[1]: libpod-conmon-fd6649a3f95e71af4366b5c5698daa80cf2797a673c9ae699656226a773a481b.scope: Deactivated successfully.
Jan 31 03:48:49 np0005603609 podman[307304]: 2026-01-31 08:48:49.224801115 +0000 UTC m=+0.048418109 container remove fd6649a3f95e71af4366b5c5698daa80cf2797a673c9ae699656226a773a481b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:48:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:49.230 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[93d64288-f5f0-437d-8920-5ac949f5409c]: (4, ('Sat Jan 31 08:48:49 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8 (fd6649a3f95e71af4366b5c5698daa80cf2797a673c9ae699656226a773a481b)\nfd6649a3f95e71af4366b5c5698daa80cf2797a673c9ae699656226a773a481b\nSat Jan 31 08:48:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8 (fd6649a3f95e71af4366b5c5698daa80cf2797a673c9ae699656226a773a481b)\nfd6649a3f95e71af4366b5c5698daa80cf2797a673c9ae699656226a773a481b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:49.233 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe543e8-2c61-4e2e-bf8c-429c35433e46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:49.234 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4003c199-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.237 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:49 np0005603609 kernel: tap4003c199-b0: left promiscuous mode
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.252 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:49.255 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[624459b9-573d-4120-bedc-0a7a86fa9cff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:49.274 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e341b309-faa9-4a24-82e7-5a3e4cf5c076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:49.276 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0b45d256-3c95-420b-a41f-45d24cc6fb63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:49.287 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[db292740-50f5-4b9b-9ade-d42d27317990]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 953760, 'reachable_time': 44033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307327, 'error': None, 'target': 'ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.289 221554 DEBUG oslo_concurrency.lockutils [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Acquiring lock "refresh_cache-e18328fb-6ec0-4307-a12d-810c8f7cc007" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.290 221554 DEBUG oslo_concurrency.lockutils [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Acquired lock "refresh_cache-e18328fb-6ec0-4307-a12d-810c8f7cc007" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.290 221554 DEBUG nova.network.neutron [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.292 221554 DEBUG nova.virt.libvirt.vif [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:47:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1021887537',display_name='tempest-TestNetworkBasicOps-server-1021887537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1021887537',id=196,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOt56ZiO9bIJjptXP4FC4yvr0FGC/HLSFRJ9KFq+xOa97MgIuZ9vQ16iiUBUOq9Sog4gkJIU1+ZZDVLRrwd13t4yLg1vb6+7D03FGH8zKCqtpVFlfBkElLz7OTcF+2BbSw==',key_name='tempest-TestNetworkBasicOps-1882044848',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:48:20Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-amgmft3u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:48:20Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=7f398989-1c3a-465d-afd2-f6fcd860682f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c2e32560-8930-4ad7-a92a-06705df2a932", "address": "fa:16:3e:66:02:4a", "network": {"id": "4003c199-b45c-4de7-962b-d4f5fb322ed8", "bridge": "br-int", "label": "tempest-network-smoke--1751171446", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2e32560-89", "ovs_interfaceid": "c2e32560-8930-4ad7-a92a-06705df2a932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.292 221554 DEBUG nova.network.os_vif_util [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "c2e32560-8930-4ad7-a92a-06705df2a932", "address": "fa:16:3e:66:02:4a", "network": {"id": "4003c199-b45c-4de7-962b-d4f5fb322ed8", "bridge": "br-int", "label": "tempest-network-smoke--1751171446", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2e32560-89", "ovs_interfaceid": "c2e32560-8930-4ad7-a92a-06705df2a932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:48:49 np0005603609 systemd[1]: run-netns-ovnmeta\x2d4003c199\x2db45c\x2d4de7\x2d962b\x2dd4f5fb322ed8.mount: Deactivated successfully.
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.293 221554 DEBUG nova.network.os_vif_util [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:02:4a,bridge_name='br-int',has_traffic_filtering=True,id=c2e32560-8930-4ad7-a92a-06705df2a932,network=Network(4003c199-b45c-4de7-962b-d4f5fb322ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2e32560-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.293 221554 DEBUG os_vif [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:02:4a,bridge_name='br-int',has_traffic_filtering=True,id=c2e32560-8930-4ad7-a92a-06705df2a932,network=Network(4003c199-b45c-4de7-962b-d4f5fb322ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2e32560-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:48:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:49.293 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4003c199-b45c-4de7-962b-d4f5fb322ed8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:48:49 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:49.294 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[b767ad7d-2ee0-49f4-b780-ccb198e4d9fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.295 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.295 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2e32560-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.297 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.299 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.302 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.304 221554 INFO os_vif [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:02:4a,bridge_name='br-int',has_traffic_filtering=True,id=c2e32560-8930-4ad7-a92a-06705df2a932,network=Network(4003c199-b45c-4de7-962b-d4f5fb322ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2e32560-89')#033[00m
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.782 221554 INFO nova.virt.libvirt.driver [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Deleting instance files /var/lib/nova/instances/7f398989-1c3a-465d-afd2-f6fcd860682f_del#033[00m
Jan 31 03:48:49 np0005603609 nova_compute[221550]: 2026-01-31 08:48:49.783 221554 INFO nova.virt.libvirt.driver [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Deletion of /var/lib/nova/instances/7f398989-1c3a-465d-afd2-f6fcd860682f_del complete#033[00m
Jan 31 03:48:50 np0005603609 nova_compute[221550]: 2026-01-31 08:48:50.306 221554 INFO nova.compute.manager [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Took 1.55 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:48:50 np0005603609 nova_compute[221550]: 2026-01-31 08:48:50.306 221554 DEBUG oslo.service.loopingcall [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:48:50 np0005603609 nova_compute[221550]: 2026-01-31 08:48:50.307 221554 DEBUG nova.compute.manager [-] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:48:50 np0005603609 nova_compute[221550]: 2026-01-31 08:48:50.307 221554 DEBUG nova.network.neutron [-] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:48:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:48:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:50.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:48:50 np0005603609 nova_compute[221550]: 2026-01-31 08:48:50.447 221554 DEBUG nova.compute.manager [req-c0ec6f0c-30a3-467e-bd22-f51675730c83 req-e28da9ab-0390-478e-8bf4-cb43cd21d8ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:50 np0005603609 nova_compute[221550]: 2026-01-31 08:48:50.447 221554 DEBUG oslo_concurrency.lockutils [req-c0ec6f0c-30a3-467e-bd22-f51675730c83 req-e28da9ab-0390-478e-8bf4-cb43cd21d8ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:50 np0005603609 nova_compute[221550]: 2026-01-31 08:48:50.447 221554 DEBUG oslo_concurrency.lockutils [req-c0ec6f0c-30a3-467e-bd22-f51675730c83 req-e28da9ab-0390-478e-8bf4-cb43cd21d8ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:50 np0005603609 nova_compute[221550]: 2026-01-31 08:48:50.447 221554 DEBUG oslo_concurrency.lockutils [req-c0ec6f0c-30a3-467e-bd22-f51675730c83 req-e28da9ab-0390-478e-8bf4-cb43cd21d8ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:50 np0005603609 nova_compute[221550]: 2026-01-31 08:48:50.448 221554 DEBUG nova.compute.manager [req-c0ec6f0c-30a3-467e-bd22-f51675730c83 req-e28da9ab-0390-478e-8bf4-cb43cd21d8ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] No waiting events found dispatching network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:48:50 np0005603609 nova_compute[221550]: 2026-01-31 08:48:50.448 221554 WARNING nova.compute.manager [req-c0ec6f0c-30a3-467e-bd22-f51675730c83 req-e28da9ab-0390-478e-8bf4-cb43cd21d8ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received unexpected event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 for instance with vm_state suspended and task_state resuming.#033[00m
Jan 31 03:48:50 np0005603609 nova_compute[221550]: 2026-01-31 08:48:50.809 221554 DEBUG nova.compute.manager [req-cf8cc620-cc6f-4981-9e39-1e27f7f15876 req-4852df43-4342-4279-a207-762026ad44a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Received event network-vif-unplugged-c2e32560-8930-4ad7-a92a-06705df2a932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:50 np0005603609 nova_compute[221550]: 2026-01-31 08:48:50.810 221554 DEBUG oslo_concurrency.lockutils [req-cf8cc620-cc6f-4981-9e39-1e27f7f15876 req-4852df43-4342-4279-a207-762026ad44a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:50 np0005603609 nova_compute[221550]: 2026-01-31 08:48:50.810 221554 DEBUG oslo_concurrency.lockutils [req-cf8cc620-cc6f-4981-9e39-1e27f7f15876 req-4852df43-4342-4279-a207-762026ad44a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:50 np0005603609 nova_compute[221550]: 2026-01-31 08:48:50.810 221554 DEBUG oslo_concurrency.lockutils [req-cf8cc620-cc6f-4981-9e39-1e27f7f15876 req-4852df43-4342-4279-a207-762026ad44a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:50 np0005603609 nova_compute[221550]: 2026-01-31 08:48:50.810 221554 DEBUG nova.compute.manager [req-cf8cc620-cc6f-4981-9e39-1e27f7f15876 req-4852df43-4342-4279-a207-762026ad44a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] No waiting events found dispatching network-vif-unplugged-c2e32560-8930-4ad7-a92a-06705df2a932 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:48:50 np0005603609 nova_compute[221550]: 2026-01-31 08:48:50.811 221554 DEBUG nova.compute.manager [req-cf8cc620-cc6f-4981-9e39-1e27f7f15876 req-4852df43-4342-4279-a207-762026ad44a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Received event network-vif-unplugged-c2e32560-8930-4ad7-a92a-06705df2a932 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:48:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:51.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:51 np0005603609 nova_compute[221550]: 2026-01-31 08:48:51.982 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:52.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.622 221554 DEBUG nova.network.neutron [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Updating instance_info_cache with network_info: [{"id": "df761877-caf0-49b5-8d74-374169310e23", "address": "fa:16:3e:32:69:36", "network": {"id": "465c490f-bfbd-42dd-8ef3-8b9af8836224", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1184908098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e1a1c923b1d14e44b30717c742870aa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf761877-ca", "ovs_interfaceid": "df761877-caf0-49b5-8d74-374169310e23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.678 221554 DEBUG nova.compute.manager [req-4dc42ec6-1210-4da7-ac85-a929bdb4d747 req-0f69ebb6-6c46-4cf2-81ca-a4b57e94ec11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Received event network-vif-deleted-c2e32560-8930-4ad7-a92a-06705df2a932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.678 221554 INFO nova.compute.manager [req-4dc42ec6-1210-4da7-ac85-a929bdb4d747 req-0f69ebb6-6c46-4cf2-81ca-a4b57e94ec11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Neutron deleted interface c2e32560-8930-4ad7-a92a-06705df2a932; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.679 221554 DEBUG nova.network.neutron [req-4dc42ec6-1210-4da7-ac85-a929bdb4d747 req-0f69ebb6-6c46-4cf2-81ca-a4b57e94ec11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.742 221554 DEBUG oslo_concurrency.lockutils [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Releasing lock "refresh_cache-e18328fb-6ec0-4307-a12d-810c8f7cc007" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.753 221554 DEBUG nova.virt.libvirt.vif [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:47:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-616284573',display_name='tempest-TestServerAdvancedOps-server-616284573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-616284573',id=195,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:48:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e1a1c923b1d14e44b30717c742870aa1',ramdisk_id='',reservation_id='r-hxna07n0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1611358131',owner_user_name='tempest-TestServerAdvancedOps-1611358131-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:48:46Z,user_data=None,user_id='429612bcc82f4c6a99fe57f4d9b01624',uuid=e18328fb-6ec0-4307-a12d-810c8f7cc007,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "df761877-caf0-49b5-8d74-374169310e23", "address": "fa:16:3e:32:69:36", "network": {"id": "465c490f-bfbd-42dd-8ef3-8b9af8836224", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1184908098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e1a1c923b1d14e44b30717c742870aa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf761877-ca", "ovs_interfaceid": "df761877-caf0-49b5-8d74-374169310e23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.754 221554 DEBUG nova.network.os_vif_util [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Converting VIF {"id": "df761877-caf0-49b5-8d74-374169310e23", "address": "fa:16:3e:32:69:36", "network": {"id": "465c490f-bfbd-42dd-8ef3-8b9af8836224", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1184908098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e1a1c923b1d14e44b30717c742870aa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf761877-ca", "ovs_interfaceid": "df761877-caf0-49b5-8d74-374169310e23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.755 221554 DEBUG nova.network.os_vif_util [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:36,bridge_name='br-int',has_traffic_filtering=True,id=df761877-caf0-49b5-8d74-374169310e23,network=Network(465c490f-bfbd-42dd-8ef3-8b9af8836224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf761877-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.756 221554 DEBUG os_vif [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:36,bridge_name='br-int',has_traffic_filtering=True,id=df761877-caf0-49b5-8d74-374169310e23,network=Network(465c490f-bfbd-42dd-8ef3-8b9af8836224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf761877-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.757 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.757 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.758 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.762 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.763 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf761877-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.763 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf761877-ca, col_values=(('external_ids', {'iface-id': 'df761877-caf0-49b5-8d74-374169310e23', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:69:36', 'vm-uuid': 'e18328fb-6ec0-4307-a12d-810c8f7cc007'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.764 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.765 221554 INFO os_vif [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:36,bridge_name='br-int',has_traffic_filtering=True,id=df761877-caf0-49b5-8d74-374169310e23,network=Network(465c490f-bfbd-42dd-8ef3-8b9af8836224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf761877-ca')#033[00m
Jan 31 03:48:52 np0005603609 nova_compute[221550]: 2026-01-31 08:48:52.780 221554 DEBUG nova.network.neutron [-] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.030 221554 DEBUG nova.objects.instance [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lazy-loading 'numa_topology' on Instance uuid e18328fb-6ec0-4307-a12d-810c8f7cc007 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:48:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:53.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.067 221554 DEBUG nova.compute.manager [req-4dc42ec6-1210-4da7-ac85-a929bdb4d747 req-0f69ebb6-6c46-4cf2-81ca-a4b57e94ec11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Detach interface failed, port_id=c2e32560-8930-4ad7-a92a-06705df2a932, reason: Instance 7f398989-1c3a-465d-afd2-f6fcd860682f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.101 221554 DEBUG nova.compute.manager [req-7f783bb0-1f3d-4b29-9ca5-b285fe109571 req-940cf945-5fdc-43a7-9e71-510631b236f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Received event network-vif-plugged-c2e32560-8930-4ad7-a92a-06705df2a932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.101 221554 DEBUG oslo_concurrency.lockutils [req-7f783bb0-1f3d-4b29-9ca5-b285fe109571 req-940cf945-5fdc-43a7-9e71-510631b236f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.102 221554 DEBUG oslo_concurrency.lockutils [req-7f783bb0-1f3d-4b29-9ca5-b285fe109571 req-940cf945-5fdc-43a7-9e71-510631b236f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.102 221554 DEBUG oslo_concurrency.lockutils [req-7f783bb0-1f3d-4b29-9ca5-b285fe109571 req-940cf945-5fdc-43a7-9e71-510631b236f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7f398989-1c3a-465d-afd2-f6fcd860682f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.102 221554 DEBUG nova.compute.manager [req-7f783bb0-1f3d-4b29-9ca5-b285fe109571 req-940cf945-5fdc-43a7-9e71-510631b236f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] No waiting events found dispatching network-vif-plugged-c2e32560-8930-4ad7-a92a-06705df2a932 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.103 221554 WARNING nova.compute.manager [req-7f783bb0-1f3d-4b29-9ca5-b285fe109571 req-940cf945-5fdc-43a7-9e71-510631b236f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Received unexpected event network-vif-plugged-c2e32560-8930-4ad7-a92a-06705df2a932 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.119 221554 INFO nova.compute.manager [-] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Took 2.81 seconds to deallocate network for instance.#033[00m
Jan 31 03:48:53 np0005603609 kernel: tapdf761877-ca: entered promiscuous mode
Jan 31 03:48:53 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:53Z|00915|binding|INFO|Claiming lport df761877-caf0-49b5-8d74-374169310e23 for this chassis.
Jan 31 03:48:53 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:53Z|00916|binding|INFO|df761877-caf0-49b5-8d74-374169310e23: Claiming fa:16:3e:32:69:36 10.100.0.9
Jan 31 03:48:53 np0005603609 NetworkManager[49064]: <info>  [1769849333.1558] manager: (tapdf761877-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/420)
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.154 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:53 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:53Z|00917|binding|INFO|Setting lport df761877-caf0-49b5-8d74-374169310e23 ovn-installed in OVS
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.160 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.163 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:53 np0005603609 systemd-udevd[307363]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:48:53 np0005603609 systemd-machined[190912]: New machine qemu-109-instance-000000c3.
Jan 31 03:48:53 np0005603609 NetworkManager[49064]: <info>  [1769849333.1857] device (tapdf761877-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:48:53 np0005603609 NetworkManager[49064]: <info>  [1769849333.1863] device (tapdf761877-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:48:53 np0005603609 systemd[1]: Started Virtual Machine qemu-109-instance-000000c3.
Jan 31 03:48:53 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:53Z|00918|binding|INFO|Setting lport df761877-caf0-49b5-8d74-374169310e23 up in Southbound
Jan 31 03:48:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:53.333 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:69:36 10.100.0.9'], port_security=['fa:16:3e:32:69:36 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e18328fb-6ec0-4307-a12d-810c8f7cc007', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-465c490f-bfbd-42dd-8ef3-8b9af8836224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e1a1c923b1d14e44b30717c742870aa1', 'neutron:revision_number': '7', 'neutron:security_group_ids': '576af920-946c-41e4-9f2b-8a2e422edf05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e96e68d5-5e8f-45d3-a26f-458c20b777d9, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=df761877-caf0-49b5-8d74-374169310e23) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:48:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:53.334 140058 INFO neutron.agent.ovn.metadata.agent [-] Port df761877-caf0-49b5-8d74-374169310e23 in datapath 465c490f-bfbd-42dd-8ef3-8b9af8836224 bound to our chassis#033[00m
Jan 31 03:48:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:53.335 140058 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 465c490f-bfbd-42dd-8ef3-8b9af8836224 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 31 03:48:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:53.336 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d11d3ede-c617-4c10-a4ad-9519f8ea6261]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:48:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/769013742' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:48:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:48:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/769013742' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.661 221554 DEBUG oslo_concurrency.lockutils [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.662 221554 DEBUG oslo_concurrency.lockutils [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.695 221554 DEBUG nova.scheduler.client.report [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.709 221554 DEBUG nova.scheduler.client.report [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.711 221554 DEBUG nova.compute.provider_tree [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.729 221554 DEBUG nova.scheduler.client.report [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.777 221554 DEBUG nova.scheduler.client.report [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.817 221554 DEBUG nova.virt.libvirt.host [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Removed pending event for e18328fb-6ec0-4307-a12d-810c8f7cc007 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.817 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849333.814722, e18328fb-6ec0-4307-a12d-810c8f7cc007 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.818 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] VM Started (Lifecycle Event)#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.848 221554 DEBUG nova.compute.manager [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.848 221554 DEBUG nova.objects.instance [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lazy-loading 'pci_devices' on Instance uuid e18328fb-6ec0-4307-a12d-810c8f7cc007 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:53 np0005603609 nova_compute[221550]: 2026-01-31 08:48:53.873 221554 DEBUG oslo_concurrency.processutils [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:48:54 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2202804351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:48:54 np0005603609 nova_compute[221550]: 2026-01-31 08:48:54.284 221554 DEBUG oslo_concurrency.processutils [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:54 np0005603609 nova_compute[221550]: 2026-01-31 08:48:54.292 221554 DEBUG nova.compute.provider_tree [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:48:54 np0005603609 nova_compute[221550]: 2026-01-31 08:48:54.296 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:54 np0005603609 nova_compute[221550]: 2026-01-31 08:48:54.297 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:54 np0005603609 nova_compute[221550]: 2026-01-31 08:48:54.301 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:48:54 np0005603609 nova_compute[221550]: 2026-01-31 08:48:54.358 221554 INFO nova.virt.libvirt.driver [-] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Instance running successfully.#033[00m
Jan 31 03:48:54 np0005603609 virtqemud[221292]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:48:54 np0005603609 nova_compute[221550]: 2026-01-31 08:48:54.361 221554 DEBUG nova.virt.libvirt.guest [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 03:48:54 np0005603609 nova_compute[221550]: 2026-01-31 08:48:54.362 221554 DEBUG nova.compute.manager [None req-06b900ab-83bc-44c4-8131-091b3d872df5 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:54 np0005603609 nova_compute[221550]: 2026-01-31 08:48:54.410 221554 DEBUG nova.scheduler.client.report [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:48:54 np0005603609 nova_compute[221550]: 2026-01-31 08:48:54.414 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Jan 31 03:48:54 np0005603609 nova_compute[221550]: 2026-01-31 08:48:54.414 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849333.8174303, e18328fb-6ec0-4307-a12d-810c8f7cc007 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:48:54 np0005603609 nova_compute[221550]: 2026-01-31 08:48:54.415 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:48:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:48:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:54.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:48:54 np0005603609 nova_compute[221550]: 2026-01-31 08:48:54.555 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:54 np0005603609 nova_compute[221550]: 2026-01-31 08:48:54.559 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:48:54 np0005603609 nova_compute[221550]: 2026-01-31 08:48:54.692 221554 DEBUG oslo_concurrency.lockutils [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:55 np0005603609 nova_compute[221550]: 2026-01-31 08:48:55.038 221554 DEBUG nova.compute.manager [req-161644d9-d6a2-41bd-b335-83e47f33ace2 req-4e73848f-ac8f-41e7-a322-09d86a5312bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:55 np0005603609 nova_compute[221550]: 2026-01-31 08:48:55.039 221554 DEBUG oslo_concurrency.lockutils [req-161644d9-d6a2-41bd-b335-83e47f33ace2 req-4e73848f-ac8f-41e7-a322-09d86a5312bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:55 np0005603609 nova_compute[221550]: 2026-01-31 08:48:55.039 221554 DEBUG oslo_concurrency.lockutils [req-161644d9-d6a2-41bd-b335-83e47f33ace2 req-4e73848f-ac8f-41e7-a322-09d86a5312bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:55 np0005603609 nova_compute[221550]: 2026-01-31 08:48:55.039 221554 DEBUG oslo_concurrency.lockutils [req-161644d9-d6a2-41bd-b335-83e47f33ace2 req-4e73848f-ac8f-41e7-a322-09d86a5312bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:55 np0005603609 nova_compute[221550]: 2026-01-31 08:48:55.039 221554 DEBUG nova.compute.manager [req-161644d9-d6a2-41bd-b335-83e47f33ace2 req-4e73848f-ac8f-41e7-a322-09d86a5312bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] No waiting events found dispatching network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:48:55 np0005603609 nova_compute[221550]: 2026-01-31 08:48:55.040 221554 WARNING nova.compute.manager [req-161644d9-d6a2-41bd-b335-83e47f33ace2 req-4e73848f-ac8f-41e7-a322-09d86a5312bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received unexpected event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:48:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:55.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:55 np0005603609 nova_compute[221550]: 2026-01-31 08:48:55.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:55 np0005603609 nova_compute[221550]: 2026-01-31 08:48:55.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:48:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:48:56 np0005603609 nova_compute[221550]: 2026-01-31 08:48:56.284 221554 INFO nova.scheduler.client.report [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Deleted allocations for instance 7f398989-1c3a-465d-afd2-f6fcd860682f#033[00m
Jan 31 03:48:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:48:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:56.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:48:56 np0005603609 nova_compute[221550]: 2026-01-31 08:48:56.702 221554 DEBUG oslo_concurrency.lockutils [None req-eac2bb26-cf8e-48e7-a3bd-3d23cb23a035 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "7f398989-1c3a-465d-afd2-f6fcd860682f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:56 np0005603609 nova_compute[221550]: 2026-01-31 08:48:56.984 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:57.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:57 np0005603609 nova_compute[221550]: 2026-01-31 08:48:57.421 221554 DEBUG nova.compute.manager [req-b054871a-c82c-408d-8215-20b54e99ca56 req-d37a752a-9938-426f-af33-d8f4681fc07b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:57 np0005603609 nova_compute[221550]: 2026-01-31 08:48:57.422 221554 DEBUG oslo_concurrency.lockutils [req-b054871a-c82c-408d-8215-20b54e99ca56 req-d37a752a-9938-426f-af33-d8f4681fc07b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:57 np0005603609 nova_compute[221550]: 2026-01-31 08:48:57.422 221554 DEBUG oslo_concurrency.lockutils [req-b054871a-c82c-408d-8215-20b54e99ca56 req-d37a752a-9938-426f-af33-d8f4681fc07b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:57 np0005603609 nova_compute[221550]: 2026-01-31 08:48:57.423 221554 DEBUG oslo_concurrency.lockutils [req-b054871a-c82c-408d-8215-20b54e99ca56 req-d37a752a-9938-426f-af33-d8f4681fc07b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:57 np0005603609 nova_compute[221550]: 2026-01-31 08:48:57.423 221554 DEBUG nova.compute.manager [req-b054871a-c82c-408d-8215-20b54e99ca56 req-d37a752a-9938-426f-af33-d8f4681fc07b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] No waiting events found dispatching network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:48:57 np0005603609 nova_compute[221550]: 2026-01-31 08:48:57.423 221554 WARNING nova.compute.manager [req-b054871a-c82c-408d-8215-20b54e99ca56 req-d37a752a-9938-426f-af33-d8f4681fc07b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received unexpected event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:48:57 np0005603609 nova_compute[221550]: 2026-01-31 08:48:57.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:57 np0005603609 nova_compute[221550]: 2026-01-31 08:48:57.891 221554 DEBUG oslo_concurrency.lockutils [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Acquiring lock "e18328fb-6ec0-4307-a12d-810c8f7cc007" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:57 np0005603609 nova_compute[221550]: 2026-01-31 08:48:57.891 221554 DEBUG oslo_concurrency.lockutils [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:57 np0005603609 nova_compute[221550]: 2026-01-31 08:48:57.892 221554 DEBUG oslo_concurrency.lockutils [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Acquiring lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:57 np0005603609 nova_compute[221550]: 2026-01-31 08:48:57.892 221554 DEBUG oslo_concurrency.lockutils [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:57 np0005603609 nova_compute[221550]: 2026-01-31 08:48:57.892 221554 DEBUG oslo_concurrency.lockutils [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:57 np0005603609 nova_compute[221550]: 2026-01-31 08:48:57.893 221554 INFO nova.compute.manager [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Terminating instance#033[00m
Jan 31 03:48:57 np0005603609 nova_compute[221550]: 2026-01-31 08:48:57.895 221554 DEBUG nova.compute.manager [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:48:57 np0005603609 kernel: tapdf761877-ca (unregistering): left promiscuous mode
Jan 31 03:48:57 np0005603609 NetworkManager[49064]: <info>  [1769849337.9545] device (tapdf761877-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:48:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:57Z|00919|binding|INFO|Releasing lport df761877-caf0-49b5-8d74-374169310e23 from this chassis (sb_readonly=0)
Jan 31 03:48:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:57Z|00920|binding|INFO|Setting lport df761877-caf0-49b5-8d74-374169310e23 down in Southbound
Jan 31 03:48:57 np0005603609 nova_compute[221550]: 2026-01-31 08:48:57.965 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:48:57Z|00921|binding|INFO|Removing iface tapdf761877-ca ovn-installed in OVS
Jan 31 03:48:57 np0005603609 nova_compute[221550]: 2026-01-31 08:48:57.976 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:58 np0005603609 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d000000c3.scope: Deactivated successfully.
Jan 31 03:48:58 np0005603609 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d000000c3.scope: Consumed 1.331s CPU time.
Jan 31 03:48:58 np0005603609 systemd-machined[190912]: Machine qemu-109-instance-000000c3 terminated.
Jan 31 03:48:58 np0005603609 nova_compute[221550]: 2026-01-31 08:48:58.135 221554 INFO nova.virt.libvirt.driver [-] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Instance destroyed successfully.#033[00m
Jan 31 03:48:58 np0005603609 nova_compute[221550]: 2026-01-31 08:48:58.136 221554 DEBUG nova.objects.instance [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lazy-loading 'resources' on Instance uuid e18328fb-6ec0-4307-a12d-810c8f7cc007 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:58.190 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:69:36 10.100.0.9'], port_security=['fa:16:3e:32:69:36 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e18328fb-6ec0-4307-a12d-810c8f7cc007', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-465c490f-bfbd-42dd-8ef3-8b9af8836224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e1a1c923b1d14e44b30717c742870aa1', 'neutron:revision_number': '8', 'neutron:security_group_ids': '576af920-946c-41e4-9f2b-8a2e422edf05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e96e68d5-5e8f-45d3-a26f-458c20b777d9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=df761877-caf0-49b5-8d74-374169310e23) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:48:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:58.191 140058 INFO neutron.agent.ovn.metadata.agent [-] Port df761877-caf0-49b5-8d74-374169310e23 in datapath 465c490f-bfbd-42dd-8ef3-8b9af8836224 unbound from our chassis#033[00m
Jan 31 03:48:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:58.192 140058 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 465c490f-bfbd-42dd-8ef3-8b9af8836224 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 31 03:48:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:48:58.193 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[01d96eae-7764-4145-a250-f4623e1687da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:48:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:58.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:48:58 np0005603609 nova_compute[221550]: 2026-01-31 08:48:58.575 221554 DEBUG nova.virt.libvirt.vif [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:47:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-616284573',display_name='tempest-TestServerAdvancedOps-server-616284573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-616284573',id=195,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:48:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e1a1c923b1d14e44b30717c742870aa1',ramdisk_id='',reservation_id='r-hxna07n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-1611358131',owner_user_name='tempest-TestServerAdvancedOps-1611358131-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:48:54Z,user_data=None,user_id='429612bcc82f4c6a99fe57f4d9b01624',uuid=e18328fb-6ec0-4307-a12d-810c8f7cc007,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df761877-caf0-49b5-8d74-374169310e23", "address": "fa:16:3e:32:69:36", "network": {"id": "465c490f-bfbd-42dd-8ef3-8b9af8836224", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1184908098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e1a1c923b1d14e44b30717c742870aa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf761877-ca", "ovs_interfaceid": "df761877-caf0-49b5-8d74-374169310e23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:48:58 np0005603609 nova_compute[221550]: 2026-01-31 08:48:58.576 221554 DEBUG nova.network.os_vif_util [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Converting VIF {"id": "df761877-caf0-49b5-8d74-374169310e23", "address": "fa:16:3e:32:69:36", "network": {"id": "465c490f-bfbd-42dd-8ef3-8b9af8836224", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1184908098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e1a1c923b1d14e44b30717c742870aa1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf761877-ca", "ovs_interfaceid": "df761877-caf0-49b5-8d74-374169310e23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:48:58 np0005603609 nova_compute[221550]: 2026-01-31 08:48:58.576 221554 DEBUG nova.network.os_vif_util [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:36,bridge_name='br-int',has_traffic_filtering=True,id=df761877-caf0-49b5-8d74-374169310e23,network=Network(465c490f-bfbd-42dd-8ef3-8b9af8836224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf761877-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:48:58 np0005603609 nova_compute[221550]: 2026-01-31 08:48:58.577 221554 DEBUG os_vif [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:36,bridge_name='br-int',has_traffic_filtering=True,id=df761877-caf0-49b5-8d74-374169310e23,network=Network(465c490f-bfbd-42dd-8ef3-8b9af8836224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf761877-ca') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:48:58 np0005603609 nova_compute[221550]: 2026-01-31 08:48:58.578 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:58 np0005603609 nova_compute[221550]: 2026-01-31 08:48:58.578 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf761877-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:58 np0005603609 nova_compute[221550]: 2026-01-31 08:48:58.580 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:58 np0005603609 nova_compute[221550]: 2026-01-31 08:48:58.581 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:58 np0005603609 nova_compute[221550]: 2026-01-31 08:48:58.584 221554 INFO os_vif [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:36,bridge_name='br-int',has_traffic_filtering=True,id=df761877-caf0-49b5-8d74-374169310e23,network=Network(465c490f-bfbd-42dd-8ef3-8b9af8836224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf761877-ca')#033[00m
Jan 31 03:48:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:48:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:48:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:59.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:48:59 np0005603609 nova_compute[221550]: 2026-01-31 08:48:59.178 221554 INFO nova.virt.libvirt.driver [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Deleting instance files /var/lib/nova/instances/e18328fb-6ec0-4307-a12d-810c8f7cc007_del#033[00m
Jan 31 03:48:59 np0005603609 nova_compute[221550]: 2026-01-31 08:48:59.179 221554 INFO nova.virt.libvirt.driver [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Deletion of /var/lib/nova/instances/e18328fb-6ec0-4307-a12d-810c8f7cc007_del complete#033[00m
Jan 31 03:48:59 np0005603609 nova_compute[221550]: 2026-01-31 08:48:59.460 221554 INFO nova.compute.manager [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Took 1.57 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:48:59 np0005603609 nova_compute[221550]: 2026-01-31 08:48:59.461 221554 DEBUG oslo.service.loopingcall [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:48:59 np0005603609 nova_compute[221550]: 2026-01-31 08:48:59.461 221554 DEBUG nova.compute.manager [-] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:48:59 np0005603609 nova_compute[221550]: 2026-01-31 08:48:59.461 221554 DEBUG nova.network.neutron [-] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:48:59 np0005603609 nova_compute[221550]: 2026-01-31 08:48:59.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:59 np0005603609 nova_compute[221550]: 2026-01-31 08:48:59.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:48:59 np0005603609 nova_compute[221550]: 2026-01-31 08:48:59.979 221554 DEBUG nova.compute.manager [req-f45c4637-e0e5-4a99-881d-a56078426222 req-e04872ba-adb4-4b1e-9af2-979fa770e6f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received event network-vif-unplugged-df761877-caf0-49b5-8d74-374169310e23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:59 np0005603609 nova_compute[221550]: 2026-01-31 08:48:59.979 221554 DEBUG oslo_concurrency.lockutils [req-f45c4637-e0e5-4a99-881d-a56078426222 req-e04872ba-adb4-4b1e-9af2-979fa770e6f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:59 np0005603609 nova_compute[221550]: 2026-01-31 08:48:59.979 221554 DEBUG oslo_concurrency.lockutils [req-f45c4637-e0e5-4a99-881d-a56078426222 req-e04872ba-adb4-4b1e-9af2-979fa770e6f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:59 np0005603609 nova_compute[221550]: 2026-01-31 08:48:59.980 221554 DEBUG oslo_concurrency.lockutils [req-f45c4637-e0e5-4a99-881d-a56078426222 req-e04872ba-adb4-4b1e-9af2-979fa770e6f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:59 np0005603609 nova_compute[221550]: 2026-01-31 08:48:59.980 221554 DEBUG nova.compute.manager [req-f45c4637-e0e5-4a99-881d-a56078426222 req-e04872ba-adb4-4b1e-9af2-979fa770e6f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] No waiting events found dispatching network-vif-unplugged-df761877-caf0-49b5-8d74-374169310e23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:48:59 np0005603609 nova_compute[221550]: 2026-01-31 08:48:59.980 221554 DEBUG nova.compute.manager [req-f45c4637-e0e5-4a99-881d-a56078426222 req-e04872ba-adb4-4b1e-9af2-979fa770e6f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received event network-vif-unplugged-df761877-caf0-49b5-8d74-374169310e23 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:49:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:00.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:01.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:01 np0005603609 nova_compute[221550]: 2026-01-31 08:49:01.986 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:02 np0005603609 nova_compute[221550]: 2026-01-31 08:49:02.332 221554 DEBUG nova.compute.manager [req-8e53abd4-4906-4d4a-a495-cac51e04c30a req-374be43d-1b61-4ee5-8dc6-87b601682d2a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received event network-vif-deleted-df761877-caf0-49b5-8d74-374169310e23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:49:02 np0005603609 nova_compute[221550]: 2026-01-31 08:49:02.333 221554 INFO nova.compute.manager [req-8e53abd4-4906-4d4a-a495-cac51e04c30a req-374be43d-1b61-4ee5-8dc6-87b601682d2a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Neutron deleted interface df761877-caf0-49b5-8d74-374169310e23; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:49:02 np0005603609 nova_compute[221550]: 2026-01-31 08:49:02.333 221554 DEBUG nova.network.neutron [req-8e53abd4-4906-4d4a-a495-cac51e04c30a req-374be43d-1b61-4ee5-8dc6-87b601682d2a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:49:02 np0005603609 nova_compute[221550]: 2026-01-31 08:49:02.335 221554 DEBUG nova.compute.manager [req-b8e2646a-984c-4964-a7c3-3cd75ba62cd6 req-ca6f3e76-330f-456a-be38-78e3205ceec1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:49:02 np0005603609 nova_compute[221550]: 2026-01-31 08:49:02.335 221554 DEBUG oslo_concurrency.lockutils [req-b8e2646a-984c-4964-a7c3-3cd75ba62cd6 req-ca6f3e76-330f-456a-be38-78e3205ceec1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:02 np0005603609 nova_compute[221550]: 2026-01-31 08:49:02.335 221554 DEBUG oslo_concurrency.lockutils [req-b8e2646a-984c-4964-a7c3-3cd75ba62cd6 req-ca6f3e76-330f-456a-be38-78e3205ceec1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:02 np0005603609 nova_compute[221550]: 2026-01-31 08:49:02.335 221554 DEBUG oslo_concurrency.lockutils [req-b8e2646a-984c-4964-a7c3-3cd75ba62cd6 req-ca6f3e76-330f-456a-be38-78e3205ceec1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:02 np0005603609 nova_compute[221550]: 2026-01-31 08:49:02.336 221554 DEBUG nova.compute.manager [req-b8e2646a-984c-4964-a7c3-3cd75ba62cd6 req-ca6f3e76-330f-456a-be38-78e3205ceec1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] No waiting events found dispatching network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:49:02 np0005603609 nova_compute[221550]: 2026-01-31 08:49:02.336 221554 WARNING nova.compute.manager [req-b8e2646a-984c-4964-a7c3-3cd75ba62cd6 req-ca6f3e76-330f-456a-be38-78e3205ceec1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Received unexpected event network-vif-plugged-df761877-caf0-49b5-8d74-374169310e23 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:49:02 np0005603609 nova_compute[221550]: 2026-01-31 08:49:02.358 221554 DEBUG nova.network.neutron [-] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:49:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:02.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:02 np0005603609 nova_compute[221550]: 2026-01-31 08:49:02.885 221554 INFO nova.compute.manager [-] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Took 3.42 seconds to deallocate network for instance.#033[00m
Jan 31 03:49:02 np0005603609 nova_compute[221550]: 2026-01-31 08:49:02.891 221554 DEBUG nova.compute.manager [req-8e53abd4-4906-4d4a-a495-cac51e04c30a req-374be43d-1b61-4ee5-8dc6-87b601682d2a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Detach interface failed, port_id=df761877-caf0-49b5-8d74-374169310e23, reason: Instance e18328fb-6ec0-4307-a12d-810c8f7cc007 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:49:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:49:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:03.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:49:03 np0005603609 nova_compute[221550]: 2026-01-31 08:49:03.616 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:03 np0005603609 nova_compute[221550]: 2026-01-31 08:49:03.980 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849328.9800758, 7f398989-1c3a-465d-afd2-f6fcd860682f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:49:03 np0005603609 nova_compute[221550]: 2026-01-31 08:49:03.981 221554 INFO nova.compute.manager [-] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:49:04 np0005603609 nova_compute[221550]: 2026-01-31 08:49:04.013 221554 DEBUG oslo_concurrency.lockutils [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:04 np0005603609 nova_compute[221550]: 2026-01-31 08:49:04.014 221554 DEBUG oslo_concurrency.lockutils [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:04 np0005603609 nova_compute[221550]: 2026-01-31 08:49:04.195 221554 DEBUG oslo_concurrency.processutils [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:04 np0005603609 nova_compute[221550]: 2026-01-31 08:49:04.216 221554 DEBUG nova.compute.manager [None req-1446060e-9bd6-4c7e-b63f-202c5178b193 - - - - - -] [instance: 7f398989-1c3a-465d-afd2-f6fcd860682f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:49:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:04.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:49:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1274297397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:49:04 np0005603609 nova_compute[221550]: 2026-01-31 08:49:04.615 221554 DEBUG oslo_concurrency.processutils [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:04 np0005603609 nova_compute[221550]: 2026-01-31 08:49:04.620 221554 DEBUG nova.compute.provider_tree [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:49:04 np0005603609 nova_compute[221550]: 2026-01-31 08:49:04.712 221554 DEBUG nova.scheduler.client.report [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:49:04 np0005603609 nova_compute[221550]: 2026-01-31 08:49:04.983 221554 DEBUG oslo_concurrency.lockutils [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:05 np0005603609 nova_compute[221550]: 2026-01-31 08:49:05.015 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:05 np0005603609 nova_compute[221550]: 2026-01-31 08:49:05.057 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:05.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:05 np0005603609 nova_compute[221550]: 2026-01-31 08:49:05.379 221554 INFO nova.scheduler.client.report [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Deleted allocations for instance e18328fb-6ec0-4307-a12d-810c8f7cc007#033[00m
Jan 31 03:49:05 np0005603609 nova_compute[221550]: 2026-01-31 08:49:05.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:05 np0005603609 nova_compute[221550]: 2026-01-31 08:49:05.918 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:05 np0005603609 nova_compute[221550]: 2026-01-31 08:49:05.919 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:05 np0005603609 nova_compute[221550]: 2026-01-31 08:49:05.919 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:05 np0005603609 nova_compute[221550]: 2026-01-31 08:49:05.919 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:49:05 np0005603609 nova_compute[221550]: 2026-01-31 08:49:05.920 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:06 np0005603609 nova_compute[221550]: 2026-01-31 08:49:06.062 221554 DEBUG oslo_concurrency.lockutils [None req-74ba8ffb-276c-4260-93a7-331ed960a1f6 429612bcc82f4c6a99fe57f4d9b01624 e1a1c923b1d14e44b30717c742870aa1 - - default default] Lock "e18328fb-6ec0-4307-a12d-810c8f7cc007" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:49:06 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/902395460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:49:06 np0005603609 nova_compute[221550]: 2026-01-31 08:49:06.353 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:06.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:06 np0005603609 nova_compute[221550]: 2026-01-31 08:49:06.502 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:49:06 np0005603609 nova_compute[221550]: 2026-01-31 08:49:06.503 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4166MB free_disk=20.876338958740234GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:49:06 np0005603609 nova_compute[221550]: 2026-01-31 08:49:06.503 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:06 np0005603609 nova_compute[221550]: 2026-01-31 08:49:06.503 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:06 np0005603609 nova_compute[221550]: 2026-01-31 08:49:06.954 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:49:06 np0005603609 nova_compute[221550]: 2026-01-31 08:49:06.954 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:49:06 np0005603609 nova_compute[221550]: 2026-01-31 08:49:06.980 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:06 np0005603609 nova_compute[221550]: 2026-01-31 08:49:06.998 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:49:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:07.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:49:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:49:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1381336499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:49:07 np0005603609 nova_compute[221550]: 2026-01-31 08:49:07.387 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:07 np0005603609 nova_compute[221550]: 2026-01-31 08:49:07.391 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:49:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:07 np0005603609 nova_compute[221550]: 2026-01-31 08:49:07.459 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:49:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:49:07.540 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:49:07.541 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:49:07.541 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:07 np0005603609 nova_compute[221550]: 2026-01-31 08:49:07.798 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:49:07 np0005603609 nova_compute[221550]: 2026-01-31 08:49:07.799 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:08.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:08 np0005603609 nova_compute[221550]: 2026-01-31 08:49:08.619 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:08 np0005603609 nova_compute[221550]: 2026-01-31 08:49:08.800 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:08 np0005603609 nova_compute[221550]: 2026-01-31 08:49:08.801 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:49:08 np0005603609 nova_compute[221550]: 2026-01-31 08:49:08.802 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:49:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:09.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:09 np0005603609 nova_compute[221550]: 2026-01-31 08:49:09.147 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:49:10 np0005603609 nova_compute[221550]: 2026-01-31 08:49:10.001 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:10.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:11.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:11 np0005603609 nova_compute[221550]: 2026-01-31 08:49:11.990 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:49:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:12.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:49:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:49:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:13.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:49:13 np0005603609 nova_compute[221550]: 2026-01-31 08:49:13.133 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849338.1326084, e18328fb-6ec0-4307-a12d-810c8f7cc007 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:49:13 np0005603609 nova_compute[221550]: 2026-01-31 08:49:13.134 221554 INFO nova.compute.manager [-] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:49:13 np0005603609 nova_compute[221550]: 2026-01-31 08:49:13.534 221554 DEBUG nova.compute.manager [None req-3f15e254-17a3-455e-9622-906d20ff51ed - - - - - -] [instance: e18328fb-6ec0-4307-a12d-810c8f7cc007] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:49:13 np0005603609 nova_compute[221550]: 2026-01-31 08:49:13.669 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:13 np0005603609 nova_compute[221550]: 2026-01-31 08:49:13.670 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:49:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:14.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:49:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:15.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:15 np0005603609 podman[307595]: 2026-01-31 08:49:15.195746724 +0000 UTC m=+0.069911013 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:49:15 np0005603609 podman[307594]: 2026-01-31 08:49:15.224787911 +0000 UTC m=+0.098566141 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:49:16 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 5.201527596s
Jan 31 03:49:16 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 5.201527596s
Jan 31 03:49:16 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.201743126s, txc = 0x55f201413500
Jan 31 03:49:16 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 03:49:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:16.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:17 np0005603609 nova_compute[221550]: 2026-01-31 08:49:17.019 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:17.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:18.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:18 np0005603609 nova_compute[221550]: 2026-01-31 08:49:18.673 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:49:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:19.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:49:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:20.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:21.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:49:21.361 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=90, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=89) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:49:21 np0005603609 nova_compute[221550]: 2026-01-31 08:49:21.361 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:49:21.363 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:49:22 np0005603609 nova_compute[221550]: 2026-01-31 08:49:22.021 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:22.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:22 np0005603609 nova_compute[221550]: 2026-01-31 08:49:22.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:49:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:23.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:49:23 np0005603609 nova_compute[221550]: 2026-01-31 08:49:23.675 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:24.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:25.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:49:25.365 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '90'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:49:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:26.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:27 np0005603609 nova_compute[221550]: 2026-01-31 08:49:27.023 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:27.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:28.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:28 np0005603609 nova_compute[221550]: 2026-01-31 08:49:28.678 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:29.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:30.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:31.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:32 np0005603609 nova_compute[221550]: 2026-01-31 08:49:32.025 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:32.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:32 np0005603609 nova_compute[221550]: 2026-01-31 08:49:32.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:49:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:33.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:49:33 np0005603609 nova_compute[221550]: 2026-01-31 08:49:33.722 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:34.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:35.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:36.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:37 np0005603609 nova_compute[221550]: 2026-01-31 08:49:37.060 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:49:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:37.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:49:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:38.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:38 np0005603609 nova_compute[221550]: 2026-01-31 08:49:38.833 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:49:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:39.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:49:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:49:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:40.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:49:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:49:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:41.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:49:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:42 np0005603609 nova_compute[221550]: 2026-01-31 08:49:42.062 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:42.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:49:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:43.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:49:43 np0005603609 nova_compute[221550]: 2026-01-31 08:49:43.888 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:44.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:45.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:46 np0005603609 podman[307641]: 2026-01-31 08:49:46.16464599 +0000 UTC m=+0.053230386 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 03:49:46 np0005603609 podman[307640]: 2026-01-31 08:49:46.192913379 +0000 UTC m=+0.082796816 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:49:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:46.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:47 np0005603609 nova_compute[221550]: 2026-01-31 08:49:47.064 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:47.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:48.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:48 np0005603609 nova_compute[221550]: 2026-01-31 08:49:48.891 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:49:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:49.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:49:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:50.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:51.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:52 np0005603609 nova_compute[221550]: 2026-01-31 08:49:52.066 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:52.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:53.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:53 np0005603609 nova_compute[221550]: 2026-01-31 08:49:53.894 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:49:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:54.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:49:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:49:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:55.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:49:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:49:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:56.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:49:56 np0005603609 nova_compute[221550]: 2026-01-31 08:49:56.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:49:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:49:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:49:57 np0005603609 nova_compute[221550]: 2026-01-31 08:49:57.068 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:57.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:57 np0005603609 nova_compute[221550]: 2026-01-31 08:49:57.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:58.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:58 np0005603609 nova_compute[221550]: 2026-01-31 08:49:58.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:58 np0005603609 nova_compute[221550]: 2026-01-31 08:49:58.920 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:49:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:59.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:00 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 03:50:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:00.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:01.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:01 np0005603609 nova_compute[221550]: 2026-01-31 08:50:01.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:01 np0005603609 nova_compute[221550]: 2026-01-31 08:50:01.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:50:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:50:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:50:02 np0005603609 nova_compute[221550]: 2026-01-31 08:50:02.068 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:02.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:03.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:03 np0005603609 nova_compute[221550]: 2026-01-31 08:50:03.923 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:04.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:05.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:05 np0005603609 nova_compute[221550]: 2026-01-31 08:50:05.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:05 np0005603609 nova_compute[221550]: 2026-01-31 08:50:05.845 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:05 np0005603609 nova_compute[221550]: 2026-01-31 08:50:05.846 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:05 np0005603609 nova_compute[221550]: 2026-01-31 08:50:05.846 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:05 np0005603609 nova_compute[221550]: 2026-01-31 08:50:05.846 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:50:05 np0005603609 nova_compute[221550]: 2026-01-31 08:50:05.847 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:50:06 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/71149161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:50:06 np0005603609 nova_compute[221550]: 2026-01-31 08:50:06.269 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:06 np0005603609 nova_compute[221550]: 2026-01-31 08:50:06.382 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:50:06 np0005603609 nova_compute[221550]: 2026-01-31 08:50:06.383 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4219MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:50:06 np0005603609 nova_compute[221550]: 2026-01-31 08:50:06.384 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:06 np0005603609 nova_compute[221550]: 2026-01-31 08:50:06.384 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:06.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:07 np0005603609 nova_compute[221550]: 2026-01-31 08:50:07.070 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:50:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:07.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:50:07 np0005603609 nova_compute[221550]: 2026-01-31 08:50:07.235 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:50:07 np0005603609 nova_compute[221550]: 2026-01-31 08:50:07.236 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:50:07 np0005603609 nova_compute[221550]: 2026-01-31 08:50:07.261 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:50:07.541 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:50:07.542 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:50:07.542 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:50:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2986454865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:50:07 np0005603609 nova_compute[221550]: 2026-01-31 08:50:07.699 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:07 np0005603609 nova_compute[221550]: 2026-01-31 08:50:07.707 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:50:07 np0005603609 nova_compute[221550]: 2026-01-31 08:50:07.832 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:50:08 np0005603609 nova_compute[221550]: 2026-01-31 08:50:08.025 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:50:08 np0005603609 nova_compute[221550]: 2026-01-31 08:50:08.026 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:08.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:08 np0005603609 nova_compute[221550]: 2026-01-31 08:50:08.927 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:09.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:10 np0005603609 nova_compute[221550]: 2026-01-31 08:50:10.026 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:10 np0005603609 nova_compute[221550]: 2026-01-31 08:50:10.028 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:50:10 np0005603609 nova_compute[221550]: 2026-01-31 08:50:10.028 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:50:10 np0005603609 nova_compute[221550]: 2026-01-31 08:50:10.168 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:50:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:10.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:11.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:11 np0005603609 nova_compute[221550]: 2026-01-31 08:50:11.797 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:12 np0005603609 nova_compute[221550]: 2026-01-31 08:50:12.072 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:12.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:13.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:13 np0005603609 nova_compute[221550]: 2026-01-31 08:50:13.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:13 np0005603609 nova_compute[221550]: 2026-01-31 08:50:13.931 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:14.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:50:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:15.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:50:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:16.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:17 np0005603609 nova_compute[221550]: 2026-01-31 08:50:17.074 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:17 np0005603609 podman[307913]: 2026-01-31 08:50:17.160559146 +0000 UTC m=+0.042730650 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:50:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:17.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:17 np0005603609 podman[307912]: 2026-01-31 08:50:17.184645252 +0000 UTC m=+0.067410191 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 03:50:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:18.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:18 np0005603609 nova_compute[221550]: 2026-01-31 08:50:18.975 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:19.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:50:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:20.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:50:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:21.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:22 np0005603609 nova_compute[221550]: 2026-01-31 08:50:22.076 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:22.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:50:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:23.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:50:23 np0005603609 nova_compute[221550]: 2026-01-31 08:50:23.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:23 np0005603609 nova_compute[221550]: 2026-01-31 08:50:23.977 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:24.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:50:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:25.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:50:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:50:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:26.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:50:27 np0005603609 nova_compute[221550]: 2026-01-31 08:50:27.077 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:50:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:27.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:50:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:28.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:50:28.739 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=91, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=90) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:50:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:50:28.739 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:50:28 np0005603609 nova_compute[221550]: 2026-01-31 08:50:28.741 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:28 np0005603609 nova_compute[221550]: 2026-01-31 08:50:28.979 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:29.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:50:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:30.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:50:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:50:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:31.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:50:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:32 np0005603609 nova_compute[221550]: 2026-01-31 08:50:32.080 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:32.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:33.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:33 np0005603609 nova_compute[221550]: 2026-01-31 08:50:33.982 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:34.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:35.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:50:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:36.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:50:37 np0005603609 nova_compute[221550]: 2026-01-31 08:50:37.081 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:37.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:38.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:38 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:50:38.741 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '91'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:38 np0005603609 nova_compute[221550]: 2026-01-31 08:50:38.986 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:50:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:39.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:50:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:40.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:41.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:42 np0005603609 nova_compute[221550]: 2026-01-31 08:50:42.082 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:42.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:43.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:43 np0005603609 nova_compute[221550]: 2026-01-31 08:50:43.617 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:43 np0005603609 nova_compute[221550]: 2026-01-31 08:50:43.618 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:43 np0005603609 nova_compute[221550]: 2026-01-31 08:50:43.713 221554 DEBUG nova.compute.manager [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:50:43 np0005603609 nova_compute[221550]: 2026-01-31 08:50:43.919 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:43 np0005603609 nova_compute[221550]: 2026-01-31 08:50:43.920 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:43 np0005603609 nova_compute[221550]: 2026-01-31 08:50:43.940 221554 DEBUG nova.virt.hardware [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:50:43 np0005603609 nova_compute[221550]: 2026-01-31 08:50:43.940 221554 INFO nova.compute.claims [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:50:44 np0005603609 nova_compute[221550]: 2026-01-31 08:50:44.018 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:44 np0005603609 nova_compute[221550]: 2026-01-31 08:50:44.524 221554 DEBUG oslo_concurrency.processutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:44.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:50:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/20990812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:50:44 np0005603609 nova_compute[221550]: 2026-01-31 08:50:44.976 221554 DEBUG oslo_concurrency.processutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:44 np0005603609 nova_compute[221550]: 2026-01-31 08:50:44.982 221554 DEBUG nova.compute.provider_tree [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.044 221554 DEBUG nova.scheduler.client.report [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.099 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.100 221554 DEBUG nova.compute.manager [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.212 221554 DEBUG nova.compute.manager [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.213 221554 DEBUG nova.network.neutron [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:50:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:45.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.286 221554 INFO nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.337 221554 DEBUG nova.compute.manager [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.592 221554 DEBUG nova.compute.manager [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.594 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.595 221554 INFO nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Creating image(s)#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.636 221554 DEBUG nova.storage.rbd_utils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.667 221554 DEBUG nova.storage.rbd_utils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.701 221554 DEBUG nova.storage.rbd_utils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.706 221554 DEBUG oslo_concurrency.processutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.759 221554 DEBUG oslo_concurrency.processutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.761 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.761 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.761 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.788 221554 DEBUG nova.storage.rbd_utils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.793 221554 DEBUG oslo_concurrency.processutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:45 np0005603609 nova_compute[221550]: 2026-01-31 08:50:45.990 221554 DEBUG nova.policy [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a56abd8fdd341ae88a99e102ab399de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:50:46 np0005603609 nova_compute[221550]: 2026-01-31 08:50:46.117 221554 DEBUG oslo_concurrency.processutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:46 np0005603609 nova_compute[221550]: 2026-01-31 08:50:46.221 221554 DEBUG nova.storage.rbd_utils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] resizing rbd image 14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:50:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:46 np0005603609 nova_compute[221550]: 2026-01-31 08:50:46.352 221554 DEBUG nova.objects.instance [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'migration_context' on Instance uuid 14d8553f-067d-4188-8ef4-8bb7fd63c62c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:50:46 np0005603609 nova_compute[221550]: 2026-01-31 08:50:46.445 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:50:46 np0005603609 nova_compute[221550]: 2026-01-31 08:50:46.446 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Ensure instance console log exists: /var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:50:46 np0005603609 nova_compute[221550]: 2026-01-31 08:50:46.446 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:46 np0005603609 nova_compute[221550]: 2026-01-31 08:50:46.447 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:46 np0005603609 nova_compute[221550]: 2026-01-31 08:50:46.447 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:46.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:47 np0005603609 nova_compute[221550]: 2026-01-31 08:50:47.084 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:47.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:48 np0005603609 podman[308145]: 2026-01-31 08:50:48.162085495 +0000 UTC m=+0.048519762 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:50:48 np0005603609 podman[308144]: 2026-01-31 08:50:48.19102966 +0000 UTC m=+0.078996054 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:50:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:48.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:49 np0005603609 nova_compute[221550]: 2026-01-31 08:50:49.020 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:49.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:50 np0005603609 nova_compute[221550]: 2026-01-31 08:50:50.086 221554 DEBUG nova.network.neutron [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Successfully created port: bb42c664-a169-4fcf-8369-07a9e967e305 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:50:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:50:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:50.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:50:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:51.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:51 np0005603609 nova_compute[221550]: 2026-01-31 08:50:51.538 221554 DEBUG nova.network.neutron [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Successfully updated port: bb42c664-a169-4fcf-8369-07a9e967e305 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:50:52 np0005603609 nova_compute[221550]: 2026-01-31 08:50:52.085 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:52.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:53.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:54 np0005603609 nova_compute[221550]: 2026-01-31 08:50:54.024 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:54.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:55.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:55 np0005603609 ovn_controller[130359]: 2026-01-31T08:50:55Z|00922|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Jan 31 03:50:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:56.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:56 np0005603609 nova_compute[221550]: 2026-01-31 08:50:56.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:57 np0005603609 nova_compute[221550]: 2026-01-31 08:50:57.087 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:57.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:57 np0005603609 nova_compute[221550]: 2026-01-31 08:50:57.826 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:50:57 np0005603609 nova_compute[221550]: 2026-01-31 08:50:57.827 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:50:57 np0005603609 nova_compute[221550]: 2026-01-31 08:50:57.827 221554 DEBUG nova.network.neutron [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:50:57 np0005603609 nova_compute[221550]: 2026-01-31 08:50:57.992 221554 DEBUG nova.compute.manager [req-5a041a6e-fff0-45f8-a129-77476a224e36 req-5e4fa001-e0a3-436d-b28d-41f47c2a9563 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received event network-changed-bb42c664-a169-4fcf-8369-07a9e967e305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:50:57 np0005603609 nova_compute[221550]: 2026-01-31 08:50:57.992 221554 DEBUG nova.compute.manager [req-5a041a6e-fff0-45f8-a129-77476a224e36 req-5e4fa001-e0a3-436d-b28d-41f47c2a9563 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Refreshing instance network info cache due to event network-changed-bb42c664-a169-4fcf-8369-07a9e967e305. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:50:57 np0005603609 nova_compute[221550]: 2026-01-31 08:50:57.993 221554 DEBUG oslo_concurrency.lockutils [req-5a041a6e-fff0-45f8-a129-77476a224e36 req-5e4fa001-e0a3-436d-b28d-41f47c2a9563 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:50:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:58.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:58 np0005603609 nova_compute[221550]: 2026-01-31 08:50:58.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:59 np0005603609 nova_compute[221550]: 2026-01-31 08:50:59.027 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:50:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:59.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:59 np0005603609 nova_compute[221550]: 2026-01-31 08:50:59.582 221554 DEBUG nova.network.neutron [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:50:59 np0005603609 nova_compute[221550]: 2026-01-31 08:50:59.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:00.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:01.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.088 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.458 221554 DEBUG nova.network.neutron [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Updating instance_info_cache with network_info: [{"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.503 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.504 221554 DEBUG nova.compute.manager [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Instance network_info: |[{"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.504 221554 DEBUG oslo_concurrency.lockutils [req-5a041a6e-fff0-45f8-a129-77476a224e36 req-5e4fa001-e0a3-436d-b28d-41f47c2a9563 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.504 221554 DEBUG nova.network.neutron [req-5a041a6e-fff0-45f8-a129-77476a224e36 req-5e4fa001-e0a3-436d-b28d-41f47c2a9563 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Refreshing network info cache for port bb42c664-a169-4fcf-8369-07a9e967e305 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.506 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Start _get_guest_xml network_info=[{"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.510 221554 WARNING nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.519 221554 DEBUG nova.virt.libvirt.host [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.520 221554 DEBUG nova.virt.libvirt.host [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.523 221554 DEBUG nova.virt.libvirt.host [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.523 221554 DEBUG nova.virt.libvirt.host [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.524 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.524 221554 DEBUG nova.virt.hardware [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.525 221554 DEBUG nova.virt.hardware [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.525 221554 DEBUG nova.virt.hardware [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.525 221554 DEBUG nova.virt.hardware [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.525 221554 DEBUG nova.virt.hardware [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.525 221554 DEBUG nova.virt.hardware [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.526 221554 DEBUG nova.virt.hardware [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.526 221554 DEBUG nova.virt.hardware [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.526 221554 DEBUG nova.virt.hardware [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.526 221554 DEBUG nova.virt.hardware [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.526 221554 DEBUG nova.virt.hardware [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.529 221554 DEBUG oslo_concurrency.processutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:02.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:51:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:51:02 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/802492447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.977 221554 DEBUG oslo_concurrency.processutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:02 np0005603609 nova_compute[221550]: 2026-01-31 08:51:02.999 221554 DEBUG nova.storage.rbd_utils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.002 221554 DEBUG oslo_concurrency.processutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:03.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:51:03 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/533740819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.405 221554 DEBUG oslo_concurrency.processutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.407 221554 DEBUG nova.virt.libvirt.vif [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:50:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2100946823',display_name='tempest-TestNetworkBasicOps-server-2100946823',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2100946823',id=198,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJNjBVgbErmT8//F7831upVgyo+TyYRA1eBt11/KsyCRN7pmelaqXV1IV1lMPiLAKBIPkzzpgcQRq/nQyirHPvnL1pzNnteQVi2Ibgfr0f1bEDWL1Jyb45CXLI7VI4BwSA==',key_name='tempest-TestNetworkBasicOps-1500474542',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-kizu2q26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:50:45Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=14d8553f-067d-4188-8ef4-8bb7fd63c62c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.407 221554 DEBUG nova.network.os_vif_util [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.408 221554 DEBUG nova.network.os_vif_util [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:23:e6,bridge_name='br-int',has_traffic_filtering=True,id=bb42c664-a169-4fcf-8369-07a9e967e305,network=Network(d6d64646-ef89-444c-96ac-3de221e3aa33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb42c664-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.408 221554 DEBUG nova.objects.instance [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 14d8553f-067d-4188-8ef4-8bb7fd63c62c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.445 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  <uuid>14d8553f-067d-4188-8ef4-8bb7fd63c62c</uuid>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  <name>instance-000000c6</name>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestNetworkBasicOps-server-2100946823</nova:name>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:51:02</nova:creationTime>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        <nova:port uuid="bb42c664-a169-4fcf-8369-07a9e967e305">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <entry name="serial">14d8553f-067d-4188-8ef4-8bb7fd63c62c</entry>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <entry name="uuid">14d8553f-067d-4188-8ef4-8bb7fd63c62c</entry>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk.config">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:9f:23:e6"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <target dev="tapbb42c664-a1"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c/console.log" append="off"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:51:03 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:51:03 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:51:03 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:51:03 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.446 221554 DEBUG nova.compute.manager [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Preparing to wait for external event network-vif-plugged-bb42c664-a169-4fcf-8369-07a9e967e305 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.447 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.447 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.447 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.448 221554 DEBUG nova.virt.libvirt.vif [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:50:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2100946823',display_name='tempest-TestNetworkBasicOps-server-2100946823',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2100946823',id=198,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJNjBVgbErmT8//F7831upVgyo+TyYRA1eBt11/KsyCRN7pmelaqXV1IV1lMPiLAKBIPkzzpgcQRq/nQyirHPvnL1pzNnteQVi2Ibgfr0f1bEDWL1Jyb45CXLI7VI4BwSA==',key_name='tempest-TestNetworkBasicOps-1500474542',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-kizu2q26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:50:45Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=14d8553f-067d-4188-8ef4-8bb7fd63c62c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.448 221554 DEBUG nova.network.os_vif_util [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.448 221554 DEBUG nova.network.os_vif_util [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:23:e6,bridge_name='br-int',has_traffic_filtering=True,id=bb42c664-a169-4fcf-8369-07a9e967e305,network=Network(d6d64646-ef89-444c-96ac-3de221e3aa33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb42c664-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.449 221554 DEBUG os_vif [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:23:e6,bridge_name='br-int',has_traffic_filtering=True,id=bb42c664-a169-4fcf-8369-07a9e967e305,network=Network(d6d64646-ef89-444c-96ac-3de221e3aa33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb42c664-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.449 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.450 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.450 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.453 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.453 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb42c664-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.453 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbb42c664-a1, col_values=(('external_ids', {'iface-id': 'bb42c664-a169-4fcf-8369-07a9e967e305', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:23:e6', 'vm-uuid': '14d8553f-067d-4188-8ef4-8bb7fd63c62c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.455 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:03 np0005603609 NetworkManager[49064]: <info>  [1769849463.4558] manager: (tapbb42c664-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.457 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.461 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.462 221554 INFO os_vif [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:23:e6,bridge_name='br-int',has_traffic_filtering=True,id=bb42c664-a169-4fcf-8369-07a9e967e305,network=Network(d6d64646-ef89-444c-96ac-3de221e3aa33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb42c664-a1')#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.526 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.526 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.527 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:9f:23:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.528 221554 INFO nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Using config drive#033[00m
Jan 31 03:51:03 np0005603609 nova_compute[221550]: 2026-01-31 08:51:03.547 221554 DEBUG nova.storage.rbd_utils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:51:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:51:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:51:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:51:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:51:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:51:04 np0005603609 nova_compute[221550]: 2026-01-31 08:51:04.353 221554 INFO nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Creating config drive at /var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c/disk.config#033[00m
Jan 31 03:51:04 np0005603609 nova_compute[221550]: 2026-01-31 08:51:04.357 221554 DEBUG oslo_concurrency.processutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpi5nr2w0f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:04 np0005603609 nova_compute[221550]: 2026-01-31 08:51:04.481 221554 DEBUG oslo_concurrency.processutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpi5nr2w0f" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:04 np0005603609 nova_compute[221550]: 2026-01-31 08:51:04.503 221554 DEBUG nova.storage.rbd_utils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:51:04 np0005603609 nova_compute[221550]: 2026-01-31 08:51:04.506 221554 DEBUG oslo_concurrency.processutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c/disk.config 14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:04.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:04 np0005603609 nova_compute[221550]: 2026-01-31 08:51:04.645 221554 DEBUG oslo_concurrency.processutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c/disk.config 14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:51:04 np0005603609 nova_compute[221550]: 2026-01-31 08:51:04.646 221554 INFO nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Deleting local config drive /var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c/disk.config because it was imported into RBD.
Jan 31 03:51:04 np0005603609 kernel: tapbb42c664-a1: entered promiscuous mode
Jan 31 03:51:04 np0005603609 NetworkManager[49064]: <info>  [1769849464.6911] manager: (tapbb42c664-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/422)
Jan 31 03:51:04 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:04Z|00923|binding|INFO|Claiming lport bb42c664-a169-4fcf-8369-07a9e967e305 for this chassis.
Jan 31 03:51:04 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:04Z|00924|binding|INFO|bb42c664-a169-4fcf-8369-07a9e967e305: Claiming fa:16:3e:9f:23:e6 10.100.0.10
Jan 31 03:51:04 np0005603609 nova_compute[221550]: 2026-01-31 08:51:04.691 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:04 np0005603609 nova_compute[221550]: 2026-01-31 08:51:04.696 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:04 np0005603609 nova_compute[221550]: 2026-01-31 08:51:04.698 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:04 np0005603609 nova_compute[221550]: 2026-01-31 08:51:04.714 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:04 np0005603609 systemd-machined[190912]: New machine qemu-110-instance-000000c6.
Jan 31 03:51:04 np0005603609 systemd-udevd[308455]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:51:04 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:04Z|00925|binding|INFO|Setting lport bb42c664-a169-4fcf-8369-07a9e967e305 ovn-installed in OVS
Jan 31 03:51:04 np0005603609 nova_compute[221550]: 2026-01-31 08:51:04.720 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:04 np0005603609 systemd[1]: Started Virtual Machine qemu-110-instance-000000c6.
Jan 31 03:51:04 np0005603609 NetworkManager[49064]: <info>  [1769849464.7293] device (tapbb42c664-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:51:04 np0005603609 NetworkManager[49064]: <info>  [1769849464.7299] device (tapbb42c664-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:51:04 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:04Z|00926|binding|INFO|Setting lport bb42c664-a169-4fcf-8369-07a9e967e305 up in Southbound
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.839 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:23:e6 10.100.0.10'], port_security=['fa:16:3e:9f:23:e6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '14d8553f-067d-4188-8ef4-8bb7fd63c62c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6d64646-ef89-444c-96ac-3de221e3aa33', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1e825819-7398-4d75-a4d1-c086e5f05fef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1675a5e-20f4-4c0a-aae9-8afbbc711349, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=bb42c664-a169-4fcf-8369-07a9e967e305) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.840 140058 INFO neutron.agent.ovn.metadata.agent [-] Port bb42c664-a169-4fcf-8369-07a9e967e305 in datapath d6d64646-ef89-444c-96ac-3de221e3aa33 bound to our chassis
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.841 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d6d64646-ef89-444c-96ac-3de221e3aa33
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.850 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[211af423-d205-4d90-84b4-66555eea8b52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.851 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd6d64646-e1 in ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.853 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd6d64646-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.853 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[32de3290-0b11-4a5f-8939-324fed92e858]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.854 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2a13bbe0-275b-45d8-bcc7-58ea0767959d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.863 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[497776a9-c8e1-4a2a-9c05-47443c2ba4fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.873 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6f76f2a8-f911-40c1-9a74-373c8ca1f129]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.893 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[2c264beb-32d4-43b3-92c2-6c7328006e6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.900 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ab86da8d-c15d-4735-8674-b98c922c4846]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:04 np0005603609 NetworkManager[49064]: <info>  [1769849464.9020] manager: (tapd6d64646-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Jan 31 03:51:04 np0005603609 nova_compute[221550]: 2026-01-31 08:51:04.908 221554 DEBUG nova.network.neutron [req-5a041a6e-fff0-45f8-a129-77476a224e36 req-5e4fa001-e0a3-436d-b28d-41f47c2a9563 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Updated VIF entry in instance network info cache for port bb42c664-a169-4fcf-8369-07a9e967e305. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 03:51:04 np0005603609 nova_compute[221550]: 2026-01-31 08:51:04.909 221554 DEBUG nova.network.neutron [req-5a041a6e-fff0-45f8-a129-77476a224e36 req-5e4fa001-e0a3-436d-b28d-41f47c2a9563 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Updating instance_info_cache with network_info: [{"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.926 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[38b32a5d-5629-43a9-823b-b4b860e604ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.929 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[14881923-35e2-4953-8cf9-8acadb41c21d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:04 np0005603609 NetworkManager[49064]: <info>  [1769849464.9450] device (tapd6d64646-e0): carrier: link connected
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.947 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[881fe8cb-f1a2-457c-8a6c-5b70db8e1048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.960 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e829d3-2b49-42df-b429-9b1467ff0603]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd6d64646-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:01:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 283], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 970464, 'reachable_time': 38789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308488, 'error': None, 'target': 'ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.972 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8092fb94-c209-491a-bbea-7c6955f79163]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec5:151'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 970464, 'tstamp': 970464}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308489, 'error': None, 'target': 'ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:04.983 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[50b4d3e9-4ef6-45d1-94dd-a249995fed80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd6d64646-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:01:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 283], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 970464, 'reachable_time': 38789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308490, 'error': None, 'target': 'ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:05.002 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9f457852-5435-40c0-8427-a58b3150ffab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.012 221554 DEBUG oslo_concurrency.lockutils [req-5a041a6e-fff0-45f8-a129-77476a224e36 req-5e4fa001-e0a3-436d-b28d-41f47c2a9563 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:05.044 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[572963b9-bf15-4991-b415-b062b2d6b5b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:05.046 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6d64646-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:05.046 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:05.046 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6d64646-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:51:05 np0005603609 NetworkManager[49064]: <info>  [1769849465.0488] manager: (tapd6d64646-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Jan 31 03:51:05 np0005603609 kernel: tapd6d64646-e0: entered promiscuous mode
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.048 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:05.050 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd6d64646-e0, col_values=(('external_ids', {'iface-id': '84f1437f-a2d5-499e-9b9c-84740517da54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:51:05 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:05Z|00927|binding|INFO|Releasing lport 84f1437f-a2d5-499e-9b9c-84740517da54 from this chassis (sb_readonly=0)
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.056 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:05.055 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d6d64646-ef89-444c-96ac-3de221e3aa33.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d6d64646-ef89-444c-96ac-3de221e3aa33.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:05.057 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b0822322-77a1-4291-8efd-fe611de51e12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:05.058 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-d6d64646-ef89-444c-96ac-3de221e3aa33
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/d6d64646-ef89-444c-96ac-3de221e3aa33.pid.haproxy
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID d6d64646-ef89-444c-96ac-3de221e3aa33
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 03:51:05 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:05.058 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33', 'env', 'PROCESS_TAG=haproxy-d6d64646-ef89-444c-96ac-3de221e3aa33', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d6d64646-ef89-444c-96ac-3de221e3aa33.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.147 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849465.146653, 14d8553f-067d-4188-8ef4-8bb7fd63c62c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.147 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] VM Started (Lifecycle Event)
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.194 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.196 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849465.147073, 14d8553f-067d-4188-8ef4-8bb7fd63c62c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.197 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] VM Paused (Lifecycle Event)
Jan 31 03:51:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:05.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.272 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.275 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:51:05 np0005603609 podman[308562]: 2026-01-31 08:51:05.348343716 +0000 UTC m=+0.039642771 container create 4e9fabdfb6d6f437ae621926085bab62005dac0f9f3d49cfcd31100289fe16a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:51:05 np0005603609 systemd[1]: Started libpod-conmon-4e9fabdfb6d6f437ae621926085bab62005dac0f9f3d49cfcd31100289fe16a5.scope.
Jan 31 03:51:05 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:51:05 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9a0250ec5cddd04607ce00186fc70f4ee311ba9a8ebd905e20278d9a23c646/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:51:05 np0005603609 podman[308562]: 2026-01-31 08:51:05.401681327 +0000 UTC m=+0.092980412 container init 4e9fabdfb6d6f437ae621926085bab62005dac0f9f3d49cfcd31100289fe16a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 03:51:05 np0005603609 podman[308562]: 2026-01-31 08:51:05.405839188 +0000 UTC m=+0.097138233 container start 4e9fabdfb6d6f437ae621926085bab62005dac0f9f3d49cfcd31100289fe16a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:51:05 np0005603609 podman[308562]: 2026-01-31 08:51:05.326064587 +0000 UTC m=+0.017363662 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:51:05 np0005603609 neutron-haproxy-ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33[308578]: [NOTICE]   (308582) : New worker (308584) forked
Jan 31 03:51:05 np0005603609 neutron-haproxy-ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33[308578]: [NOTICE]   (308582) : Loading success.
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.585 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.734 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.735 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.735 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.735 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.736 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.985 221554 DEBUG nova.compute.manager [req-d161af35-89f3-41ae-bc06-a5eb32e17b82 req-fb50b099-aa34-4a3f-a405-fcd63cb201c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received event network-vif-plugged-bb42c664-a169-4fcf-8369-07a9e967e305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.985 221554 DEBUG oslo_concurrency.lockutils [req-d161af35-89f3-41ae-bc06-a5eb32e17b82 req-fb50b099-aa34-4a3f-a405-fcd63cb201c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.986 221554 DEBUG oslo_concurrency.lockutils [req-d161af35-89f3-41ae-bc06-a5eb32e17b82 req-fb50b099-aa34-4a3f-a405-fcd63cb201c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.986 221554 DEBUG oslo_concurrency.lockutils [req-d161af35-89f3-41ae-bc06-a5eb32e17b82 req-fb50b099-aa34-4a3f-a405-fcd63cb201c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.987 221554 DEBUG nova.compute.manager [req-d161af35-89f3-41ae-bc06-a5eb32e17b82 req-fb50b099-aa34-4a3f-a405-fcd63cb201c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Processing event network-vif-plugged-bb42c664-a169-4fcf-8369-07a9e967e305 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.987 221554 DEBUG nova.compute.manager [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.991 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849465.990739, 14d8553f-067d-4188-8ef4-8bb7fd63c62c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.991 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] VM Resumed (Lifecycle Event)
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.993 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.999 221554 INFO nova.virt.libvirt.driver [-] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Instance spawned successfully.
Jan 31 03:51:05 np0005603609 nova_compute[221550]: 2026-01-31 08:51:05.999 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.041 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.045 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.049 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.049 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.049 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.050 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.050 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.050 221554 DEBUG nova.virt.libvirt.driver [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:51:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:51:06 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/900598782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.127 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.131 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.220 221554 INFO nova.compute.manager [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Took 20.63 seconds to spawn the instance on the hypervisor.
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.220 221554 DEBUG nova.compute.manager [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.323 221554 INFO nova.compute.manager [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Took 22.46 seconds to build instance.
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.347 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.347 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:51:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.389 221554 DEBUG oslo_concurrency.lockutils [None req-687a6ba8-19fa-468a-8cb5-f4486dd8387e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.462 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.463 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4146MB free_disk=20.967388153076172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.463 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.463 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:51:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:06.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.931 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 14d8553f-067d-4188-8ef4-8bb7fd63c62c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.931 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.931 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 03:51:06 np0005603609 nova_compute[221550]: 2026-01-31 08:51:06.972 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:51:07 np0005603609 nova_compute[221550]: 2026-01-31 08:51:07.091 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:07.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:51:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2677413607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:51:07 np0005603609 nova_compute[221550]: 2026-01-31 08:51:07.411 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:51:07 np0005603609 nova_compute[221550]: 2026-01-31 08:51:07.415 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:51:07 np0005603609 nova_compute[221550]: 2026-01-31 08:51:07.459 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:51:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:07.542 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:51:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:07.543 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:51:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:07.543 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:51:07 np0005603609 nova_compute[221550]: 2026-01-31 08:51:07.569 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 03:51:07 np0005603609 nova_compute[221550]: 2026-01-31 08:51:07.570 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:51:08 np0005603609 nova_compute[221550]: 2026-01-31 08:51:08.211 221554 DEBUG nova.compute.manager [req-51dee4fe-4d7e-4d21-9282-e0037b58bc11 req-8e732018-af5f-42fb-831c-6882b352d568 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received event network-vif-plugged-bb42c664-a169-4fcf-8369-07a9e967e305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:51:08 np0005603609 nova_compute[221550]: 2026-01-31 08:51:08.212 221554 DEBUG oslo_concurrency.lockutils [req-51dee4fe-4d7e-4d21-9282-e0037b58bc11 req-8e732018-af5f-42fb-831c-6882b352d568 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:51:08 np0005603609 nova_compute[221550]: 2026-01-31 08:51:08.213 221554 DEBUG oslo_concurrency.lockutils [req-51dee4fe-4d7e-4d21-9282-e0037b58bc11 req-8e732018-af5f-42fb-831c-6882b352d568 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:51:08 np0005603609 nova_compute[221550]: 2026-01-31 08:51:08.213 221554 DEBUG oslo_concurrency.lockutils [req-51dee4fe-4d7e-4d21-9282-e0037b58bc11 req-8e732018-af5f-42fb-831c-6882b352d568 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:51:08 np0005603609 nova_compute[221550]: 2026-01-31 08:51:08.214 221554 DEBUG nova.compute.manager [req-51dee4fe-4d7e-4d21-9282-e0037b58bc11 req-8e732018-af5f-42fb-831c-6882b352d568 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] No waiting events found dispatching network-vif-plugged-bb42c664-a169-4fcf-8369-07a9e967e305 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:51:08 np0005603609 nova_compute[221550]: 2026-01-31 08:51:08.214 221554 WARNING nova.compute.manager [req-51dee4fe-4d7e-4d21-9282-e0037b58bc11 req-8e732018-af5f-42fb-831c-6882b352d568 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received unexpected event network-vif-plugged-bb42c664-a169-4fcf-8369-07a9e967e305 for instance with vm_state active and task_state None.
Jan 31 03:51:08 np0005603609 nova_compute[221550]: 2026-01-31 08:51:08.456 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:51:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:08.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:51:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:09.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:51:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:51:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:10.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:11.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:11 np0005603609 nova_compute[221550]: 2026-01-31 08:51:11.571 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:51:11 np0005603609 nova_compute[221550]: 2026-01-31 08:51:11.572 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 03:51:11 np0005603609 nova_compute[221550]: 2026-01-31 08:51:11.572 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 03:51:11 np0005603609 nova_compute[221550]: 2026-01-31 08:51:11.945 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:51:11 np0005603609 nova_compute[221550]: 2026-01-31 08:51:11.946 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:51:11 np0005603609 nova_compute[221550]: 2026-01-31 08:51:11.946 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 03:51:11 np0005603609 nova_compute[221550]: 2026-01-31 08:51:11.950 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 14d8553f-067d-4188-8ef4-8bb7fd63c62c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:51:12 np0005603609 nova_compute[221550]: 2026-01-31 08:51:12.092 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:12.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:51:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:13.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:51:13 np0005603609 nova_compute[221550]: 2026-01-31 08:51:13.458 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:13 np0005603609 nova_compute[221550]: 2026-01-31 08:51:13.754 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:13 np0005603609 NetworkManager[49064]: <info>  [1769849473.7552] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Jan 31 03:51:13 np0005603609 NetworkManager[49064]: <info>  [1769849473.7569] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Jan 31 03:51:13 np0005603609 nova_compute[221550]: 2026-01-31 08:51:13.792 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:13 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:13Z|00928|binding|INFO|Releasing lport 84f1437f-a2d5-499e-9b9c-84740517da54 from this chassis (sb_readonly=0)
Jan 31 03:51:13 np0005603609 nova_compute[221550]: 2026-01-31 08:51:13.814 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:14 np0005603609 nova_compute[221550]: 2026-01-31 08:51:14.192 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:14 np0005603609 nova_compute[221550]: 2026-01-31 08:51:14.433 221554 DEBUG nova.compute.manager [req-a8c80784-0bae-4518-b10b-efe8d035c8af req-88918a08-f5f8-4173-8f66-ea263558c2f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received event network-changed-bb42c664-a169-4fcf-8369-07a9e967e305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:51:14 np0005603609 nova_compute[221550]: 2026-01-31 08:51:14.433 221554 DEBUG nova.compute.manager [req-a8c80784-0bae-4518-b10b-efe8d035c8af req-88918a08-f5f8-4173-8f66-ea263558c2f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Refreshing instance network info cache due to event network-changed-bb42c664-a169-4fcf-8369-07a9e967e305. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:51:14 np0005603609 nova_compute[221550]: 2026-01-31 08:51:14.434 221554 DEBUG oslo_concurrency.lockutils [req-a8c80784-0bae-4518-b10b-efe8d035c8af req-88918a08-f5f8-4173-8f66-ea263558c2f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:51:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:14.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:14 np0005603609 nova_compute[221550]: 2026-01-31 08:51:14.705 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Updating instance_info_cache with network_info: [{"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:51:15 np0005603609 nova_compute[221550]: 2026-01-31 08:51:15.083 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:51:15 np0005603609 nova_compute[221550]: 2026-01-31 08:51:15.084 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:51:15 np0005603609 nova_compute[221550]: 2026-01-31 08:51:15.084 221554 DEBUG oslo_concurrency.lockutils [req-a8c80784-0bae-4518-b10b-efe8d035c8af req-88918a08-f5f8-4173-8f66-ea263558c2f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:51:15 np0005603609 nova_compute[221550]: 2026-01-31 08:51:15.084 221554 DEBUG nova.network.neutron [req-a8c80784-0bae-4518-b10b-efe8d035c8af req-88918a08-f5f8-4173-8f66-ea263558c2f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Refreshing network info cache for port bb42c664-a169-4fcf-8369-07a9e967e305 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:51:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:51:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:15.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:51:15 np0005603609 nova_compute[221550]: 2026-01-31 08:51:15.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:15 np0005603609 nova_compute[221550]: 2026-01-31 08:51:15.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:16.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:17 np0005603609 nova_compute[221550]: 2026-01-31 08:51:17.094 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:17.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:18 np0005603609 nova_compute[221550]: 2026-01-31 08:51:18.461 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:18.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:19 np0005603609 podman[308690]: 2026-01-31 08:51:19.164693966 +0000 UTC m=+0.048214588 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:51:19 np0005603609 nova_compute[221550]: 2026-01-31 08:51:19.182 221554 DEBUG nova.network.neutron [req-a8c80784-0bae-4518-b10b-efe8d035c8af req-88918a08-f5f8-4173-8f66-ea263558c2f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Updated VIF entry in instance network info cache for port bb42c664-a169-4fcf-8369-07a9e967e305. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:51:19 np0005603609 nova_compute[221550]: 2026-01-31 08:51:19.183 221554 DEBUG nova.network.neutron [req-a8c80784-0bae-4518-b10b-efe8d035c8af req-88918a08-f5f8-4173-8f66-ea263558c2f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Updating instance_info_cache with network_info: [{"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:51:19 np0005603609 podman[308689]: 2026-01-31 08:51:19.214909552 +0000 UTC m=+0.098605168 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller)
Jan 31 03:51:19 np0005603609 nova_compute[221550]: 2026-01-31 08:51:19.232 221554 DEBUG oslo_concurrency.lockutils [req-a8c80784-0bae-4518-b10b-efe8d035c8af req-88918a08-f5f8-4173-8f66-ea263558c2f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:51:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:51:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:19.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:51:19 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:19Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9f:23:e6 10.100.0.10
Jan 31 03:51:19 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:19Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9f:23:e6 10.100.0.10
Jan 31 03:51:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:20.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:21.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:22 np0005603609 nova_compute[221550]: 2026-01-31 08:51:22.097 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:22.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:51:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:23.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:51:23 np0005603609 nova_compute[221550]: 2026-01-31 08:51:23.464 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:24.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:51:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:25.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:51:25 np0005603609 nova_compute[221550]: 2026-01-31 08:51:25.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:25 np0005603609 nova_compute[221550]: 2026-01-31 08:51:25.685 221554 INFO nova.compute.manager [None req-31f47e44-3742-4d80-ac4a-efd9686e0af6 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Get console output#033[00m
Jan 31 03:51:25 np0005603609 nova_compute[221550]: 2026-01-31 08:51:25.691 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:51:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:26.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:27 np0005603609 nova_compute[221550]: 2026-01-31 08:51:27.098 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:51:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:27.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:51:28.228868) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488228941, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 1840, "num_deletes": 251, "total_data_size": 4298347, "memory_usage": 4352352, "flush_reason": "Manual Compaction"}
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488263074, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 2801808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82895, "largest_seqno": 84730, "table_properties": {"data_size": 2794290, "index_size": 4460, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15999, "raw_average_key_size": 20, "raw_value_size": 2779136, "raw_average_value_size": 3517, "num_data_blocks": 196, "num_entries": 790, "num_filter_entries": 790, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849324, "oldest_key_time": 1769849324, "file_creation_time": 1769849488, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 34239 microseconds, and 4467 cpu microseconds.
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:51:28.263115) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 2801808 bytes OK
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:51:28.263134) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:51:28.266533) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:51:28.266544) EVENT_LOG_v1 {"time_micros": 1769849488266540, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:51:28.266561) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 4290112, prev total WAL file size 4290112, number of live WAL files 2.
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:51:28.267224) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(2736KB)], [171(10MB)]
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488267284, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 13679163, "oldest_snapshot_seqno": -1}
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 10410 keys, 11723880 bytes, temperature: kUnknown
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488410481, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 11723880, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11659625, "index_size": 37134, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26053, "raw_key_size": 275341, "raw_average_key_size": 26, "raw_value_size": 11480420, "raw_average_value_size": 1102, "num_data_blocks": 1403, "num_entries": 10410, "num_filter_entries": 10410, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769849488, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:51:28.410724) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 11723880 bytes
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:51:28.413502) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.5 rd, 81.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 10.4 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(9.1) write-amplify(4.2) OK, records in: 10927, records dropped: 517 output_compression: NoCompression
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:51:28.413526) EVENT_LOG_v1 {"time_micros": 1769849488413515, "job": 110, "event": "compaction_finished", "compaction_time_micros": 143268, "compaction_time_cpu_micros": 30089, "output_level": 6, "num_output_files": 1, "total_output_size": 11723880, "num_input_records": 10927, "num_output_records": 10410, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488413944, "job": 110, "event": "table_file_deletion", "file_number": 173}
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488415093, "job": 110, "event": "table_file_deletion", "file_number": 171}
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:51:28.267099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:51:28.415195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:51:28.415201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:51:28.415202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:51:28.415204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:51:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:51:28.415205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:51:28 np0005603609 nova_compute[221550]: 2026-01-31 08:51:28.468 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:28.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:28 np0005603609 nova_compute[221550]: 2026-01-31 08:51:28.989 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:51:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:29.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:51:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:30.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:31.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:32 np0005603609 nova_compute[221550]: 2026-01-31 08:51:32.100 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:32.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:32.712 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=92, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=91) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:51:32 np0005603609 nova_compute[221550]: 2026-01-31 08:51:32.713 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:32.713 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:51:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:33.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:33 np0005603609 nova_compute[221550]: 2026-01-31 08:51:33.470 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:34.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:34 np0005603609 nova_compute[221550]: 2026-01-31 08:51:34.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:35.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:35.715 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '92'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:36 np0005603609 nova_compute[221550]: 2026-01-31 08:51:36.503 221554 DEBUG oslo_concurrency.lockutils [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "interface-14d8553f-067d-4188-8ef4-8bb7fd63c62c-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:36 np0005603609 nova_compute[221550]: 2026-01-31 08:51:36.503 221554 DEBUG oslo_concurrency.lockutils [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "interface-14d8553f-067d-4188-8ef4-8bb7fd63c62c-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:36 np0005603609 nova_compute[221550]: 2026-01-31 08:51:36.504 221554 DEBUG nova.objects.instance [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'flavor' on Instance uuid 14d8553f-067d-4188-8ef4-8bb7fd63c62c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:51:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:36.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:37 np0005603609 nova_compute[221550]: 2026-01-31 08:51:37.103 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:51:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:37.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:51:37 np0005603609 nova_compute[221550]: 2026-01-31 08:51:37.636 221554 DEBUG nova.objects.instance [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 14d8553f-067d-4188-8ef4-8bb7fd63c62c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:51:37 np0005603609 nova_compute[221550]: 2026-01-31 08:51:37.848 221554 DEBUG nova.network.neutron [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:51:38 np0005603609 nova_compute[221550]: 2026-01-31 08:51:38.215 221554 DEBUG nova.policy [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a56abd8fdd341ae88a99e102ab399de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:51:38 np0005603609 nova_compute[221550]: 2026-01-31 08:51:38.474 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:38.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:51:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:39.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:51:40 np0005603609 nova_compute[221550]: 2026-01-31 08:51:40.135 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:51:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:40.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:51:40 np0005603609 nova_compute[221550]: 2026-01-31 08:51:40.871 221554 DEBUG nova.network.neutron [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Successfully created port: 1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:51:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:41.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:42 np0005603609 nova_compute[221550]: 2026-01-31 08:51:42.106 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:51:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:42.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:51:42 np0005603609 nova_compute[221550]: 2026-01-31 08:51:42.950 221554 DEBUG nova.network.neutron [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Successfully updated port: 1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:51:43 np0005603609 nova_compute[221550]: 2026-01-31 08:51:43.016 221554 DEBUG oslo_concurrency.lockutils [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:51:43 np0005603609 nova_compute[221550]: 2026-01-31 08:51:43.017 221554 DEBUG oslo_concurrency.lockutils [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:51:43 np0005603609 nova_compute[221550]: 2026-01-31 08:51:43.017 221554 DEBUG nova.network.neutron [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:51:43 np0005603609 nova_compute[221550]: 2026-01-31 08:51:43.184 221554 DEBUG nova.compute.manager [req-9dbfaf57-0ffd-4859-aa1c-79aeda56ee22 req-0d39978d-c769-4510-9f92-abbee447c8c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received event network-changed-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:51:43 np0005603609 nova_compute[221550]: 2026-01-31 08:51:43.185 221554 DEBUG nova.compute.manager [req-9dbfaf57-0ffd-4859-aa1c-79aeda56ee22 req-0d39978d-c769-4510-9f92-abbee447c8c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Refreshing instance network info cache due to event network-changed-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:51:43 np0005603609 nova_compute[221550]: 2026-01-31 08:51:43.185 221554 DEBUG oslo_concurrency.lockutils [req-9dbfaf57-0ffd-4859-aa1c-79aeda56ee22 req-0d39978d-c769-4510-9f92-abbee447c8c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:51:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:43.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:43 np0005603609 nova_compute[221550]: 2026-01-31 08:51:43.476 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:44.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:51:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:45.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:51:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:46.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:47 np0005603609 nova_compute[221550]: 2026-01-31 08:51:47.107 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:51:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:47.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:51:48 np0005603609 nova_compute[221550]: 2026-01-31 08:51:48.479 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:48.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:51:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:49.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:51:50 np0005603609 podman[308734]: 2026-01-31 08:51:50.169846495 +0000 UTC m=+0.049823297 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:51:50 np0005603609 podman[308733]: 2026-01-31 08:51:50.193391176 +0000 UTC m=+0.082682124 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.665 221554 DEBUG nova.network.neutron [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Updating instance_info_cache with network_info: [{"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "address": "fa:16:3e:73:b8:19", "network": {"id": "f113c108-4cd7-46b7-82a9-ddf257d911bb", "bridge": "br-int", "label": "tempest-network-smoke--1714582670", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d9a5de2-ec", "ovs_interfaceid": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:51:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:50.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.731 221554 DEBUG oslo_concurrency.lockutils [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.733 221554 DEBUG oslo_concurrency.lockutils [req-9dbfaf57-0ffd-4859-aa1c-79aeda56ee22 req-0d39978d-c769-4510-9f92-abbee447c8c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.733 221554 DEBUG nova.network.neutron [req-9dbfaf57-0ffd-4859-aa1c-79aeda56ee22 req-0d39978d-c769-4510-9f92-abbee447c8c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Refreshing network info cache for port 1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.737 221554 DEBUG nova.virt.libvirt.vif [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:50:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2100946823',display_name='tempest-TestNetworkBasicOps-server-2100946823',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2100946823',id=198,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJNjBVgbErmT8//F7831upVgyo+TyYRA1eBt11/KsyCRN7pmelaqXV1IV1lMPiLAKBIPkzzpgcQRq/nQyirHPvnL1pzNnteQVi2Ibgfr0f1bEDWL1Jyb45CXLI7VI4BwSA==',key_name='tempest-TestNetworkBasicOps-1500474542',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:51:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-kizu2q26',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:51:06Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=14d8553f-067d-4188-8ef4-8bb7fd63c62c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "address": "fa:16:3e:73:b8:19", "network": {"id": "f113c108-4cd7-46b7-82a9-ddf257d911bb", "bridge": "br-int", "label": "tempest-network-smoke--1714582670", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d9a5de2-ec", "ovs_interfaceid": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.737 221554 DEBUG nova.network.os_vif_util [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "address": "fa:16:3e:73:b8:19", "network": {"id": "f113c108-4cd7-46b7-82a9-ddf257d911bb", "bridge": "br-int", "label": "tempest-network-smoke--1714582670", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d9a5de2-ec", "ovs_interfaceid": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.738 221554 DEBUG nova.network.os_vif_util [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:b8:19,bridge_name='br-int',has_traffic_filtering=True,id=1d9a5de2-ec4a-4259-960e-a5b2ff6e3209,network=Network(f113c108-4cd7-46b7-82a9-ddf257d911bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d9a5de2-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.739 221554 DEBUG os_vif [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:b8:19,bridge_name='br-int',has_traffic_filtering=True,id=1d9a5de2-ec4a-4259-960e-a5b2ff6e3209,network=Network(f113c108-4cd7-46b7-82a9-ddf257d911bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d9a5de2-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.739 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.740 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.740 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.743 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.744 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d9a5de2-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.744 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d9a5de2-ec, col_values=(('external_ids', {'iface-id': '1d9a5de2-ec4a-4259-960e-a5b2ff6e3209', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:b8:19', 'vm-uuid': '14d8553f-067d-4188-8ef4-8bb7fd63c62c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:50 np0005603609 NetworkManager[49064]: <info>  [1769849510.7946] manager: (tap1d9a5de2-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.794 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.796 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.799 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.799 221554 INFO os_vif [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:b8:19,bridge_name='br-int',has_traffic_filtering=True,id=1d9a5de2-ec4a-4259-960e-a5b2ff6e3209,network=Network(f113c108-4cd7-46b7-82a9-ddf257d911bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d9a5de2-ec')#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.800 221554 DEBUG nova.virt.libvirt.vif [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:50:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2100946823',display_name='tempest-TestNetworkBasicOps-server-2100946823',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2100946823',id=198,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJNjBVgbErmT8//F7831upVgyo+TyYRA1eBt11/KsyCRN7pmelaqXV1IV1lMPiLAKBIPkzzpgcQRq/nQyirHPvnL1pzNnteQVi2Ibgfr0f1bEDWL1Jyb45CXLI7VI4BwSA==',key_name='tempest-TestNetworkBasicOps-1500474542',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:51:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-kizu2q26',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:51:06Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=14d8553f-067d-4188-8ef4-8bb7fd63c62c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "address": "fa:16:3e:73:b8:19", "network": {"id": "f113c108-4cd7-46b7-82a9-ddf257d911bb", "bridge": "br-int", "label": "tempest-network-smoke--1714582670", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d9a5de2-ec", "ovs_interfaceid": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.800 221554 DEBUG nova.network.os_vif_util [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "address": "fa:16:3e:73:b8:19", "network": {"id": "f113c108-4cd7-46b7-82a9-ddf257d911bb", "bridge": "br-int", "label": "tempest-network-smoke--1714582670", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d9a5de2-ec", "ovs_interfaceid": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.801 221554 DEBUG nova.network.os_vif_util [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:b8:19,bridge_name='br-int',has_traffic_filtering=True,id=1d9a5de2-ec4a-4259-960e-a5b2ff6e3209,network=Network(f113c108-4cd7-46b7-82a9-ddf257d911bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d9a5de2-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.805 221554 DEBUG nova.virt.libvirt.guest [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] attach device xml: <interface type="ethernet">
Jan 31 03:51:50 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:73:b8:19"/>
Jan 31 03:51:50 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 03:51:50 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:51:50 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 03:51:50 np0005603609 nova_compute[221550]:  <target dev="tap1d9a5de2-ec"/>
Jan 31 03:51:50 np0005603609 nova_compute[221550]: </interface>
Jan 31 03:51:50 np0005603609 nova_compute[221550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:51:50 np0005603609 kernel: tap1d9a5de2-ec: entered promiscuous mode
Jan 31 03:51:50 np0005603609 NetworkManager[49064]: <info>  [1769849510.8178] manager: (tap1d9a5de2-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/428)
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.818 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:50 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:50Z|00929|binding|INFO|Claiming lport 1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 for this chassis.
Jan 31 03:51:50 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:50Z|00930|binding|INFO|1d9a5de2-ec4a-4259-960e-a5b2ff6e3209: Claiming fa:16:3e:73:b8:19 10.100.0.29
Jan 31 03:51:50 np0005603609 systemd-udevd[308784]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:51:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:50.846 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:b8:19 10.100.0.29'], port_security=['fa:16:3e:73:b8:19 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '14d8553f-067d-4188-8ef4-8bb7fd63c62c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f113c108-4cd7-46b7-82a9-ddf257d911bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc812d40-ff23-43ac-a90f-2ee695b7bd6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cee5488-6c87-4d9d-9e11-ff7ade32604c, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=1d9a5de2-ec4a-4259-960e-a5b2ff6e3209) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.847 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:50.849 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 in datapath f113c108-4cd7-46b7-82a9-ddf257d911bb bound to our chassis#033[00m
Jan 31 03:51:50 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:50Z|00931|binding|INFO|Setting lport 1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 ovn-installed in OVS
Jan 31 03:51:50 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:50Z|00932|binding|INFO|Setting lport 1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 up in Southbound
Jan 31 03:51:50 np0005603609 nova_compute[221550]: 2026-01-31 08:51:50.851 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:50 np0005603609 NetworkManager[49064]: <info>  [1769849510.8548] device (tap1d9a5de2-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:51:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:50.854 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f113c108-4cd7-46b7-82a9-ddf257d911bb#033[00m
Jan 31 03:51:50 np0005603609 NetworkManager[49064]: <info>  [1769849510.8557] device (tap1d9a5de2-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:51:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:50.863 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[284f9362-cd09-4bf4-9f42-2f2d69ffe2db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:50.864 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf113c108-41 in ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:51:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:50.867 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf113c108-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:51:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:50.867 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c27a8812-b451-4fa3-b56b-a6f5016b9ede]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:50.868 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fea120b5-1292-4f65-af1f-77896295a131]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:50.881 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[5aca74a3-b3a8-4a37-aa81-455088d9430c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:50.904 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[84ba2c47-f87f-463a-9efa-6c7107f3c0eb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:50.928 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b2879c2f-16f3-4fa5-9c29-240477a54d90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:50.936 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[95793965-5708-4875-9dbe-9d9c0a23a26d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:50 np0005603609 NetworkManager[49064]: <info>  [1769849510.9380] manager: (tapf113c108-40): new Veth device (/org/freedesktop/NetworkManager/Devices/429)
Jan 31 03:51:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:50.962 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[47ea0df3-3a29-4bd3-9ac2-e5e46c3cb45b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:50.965 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[282ce0eb-e62d-4104-897f-15b2fb08367c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:50 np0005603609 NetworkManager[49064]: <info>  [1769849510.9882] device (tapf113c108-40): carrier: link connected
Jan 31 03:51:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:50.992 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4d166d-d80b-451b-bb61-f423aaac66bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:51.008 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[51e1a12f-ff67-4bca-b1e3-c411ab362b8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf113c108-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:ee:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 975068, 'reachable_time': 39776, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308811, 'error': None, 'target': 'ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:51.024 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f5958a4a-29fa-49a6-a719-368fb2560c13]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:ee65'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 975068, 'tstamp': 975068}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308812, 'error': None, 'target': 'ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:51.036 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[94fd2138-de4d-4a35-8e7d-6341799aa18f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf113c108-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:ee:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 975068, 'reachable_time': 39776, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308813, 'error': None, 'target': 'ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:51.059 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a60e55-f3ad-4e5d-845b-1e55cc8e03e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:51.105 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[aba48db5-a1b0-428c-acdc-1ca7c38ffdd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:51.106 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf113c108-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:51.106 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:51.107 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf113c108-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:51 np0005603609 nova_compute[221550]: 2026-01-31 08:51:51.108 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:51 np0005603609 NetworkManager[49064]: <info>  [1769849511.1092] manager: (tapf113c108-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Jan 31 03:51:51 np0005603609 kernel: tapf113c108-40: entered promiscuous mode
Jan 31 03:51:51 np0005603609 nova_compute[221550]: 2026-01-31 08:51:51.110 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:51.111 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf113c108-40, col_values=(('external_ids', {'iface-id': '8a021b97-fcbf-452b-8edd-d8da73500627'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:51 np0005603609 nova_compute[221550]: 2026-01-31 08:51:51.112 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:51 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:51Z|00933|binding|INFO|Releasing lport 8a021b97-fcbf-452b-8edd-d8da73500627 from this chassis (sb_readonly=0)
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:51.113 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f113c108-4cd7-46b7-82a9-ddf257d911bb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f113c108-4cd7-46b7-82a9-ddf257d911bb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:51.114 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[67578f9a-24a8-43ca-a830-a0221ae69a05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:51.114 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-f113c108-4cd7-46b7-82a9-ddf257d911bb
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/f113c108-4cd7-46b7-82a9-ddf257d911bb.pid.haproxy
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID f113c108-4cd7-46b7-82a9-ddf257d911bb
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:51:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:51.116 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb', 'env', 'PROCESS_TAG=haproxy-f113c108-4cd7-46b7-82a9-ddf257d911bb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f113c108-4cd7-46b7-82a9-ddf257d911bb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:51:51 np0005603609 nova_compute[221550]: 2026-01-31 08:51:51.117 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:51 np0005603609 nova_compute[221550]: 2026-01-31 08:51:51.122 221554 DEBUG nova.virt.libvirt.driver [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:51:51 np0005603609 nova_compute[221550]: 2026-01-31 08:51:51.122 221554 DEBUG nova.virt.libvirt.driver [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:51:51 np0005603609 nova_compute[221550]: 2026-01-31 08:51:51.123 221554 DEBUG nova.virt.libvirt.driver [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:9f:23:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:51:51 np0005603609 nova_compute[221550]: 2026-01-31 08:51:51.123 221554 DEBUG nova.virt.libvirt.driver [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:73:b8:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:51:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:51:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:51.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:51:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:51 np0005603609 nova_compute[221550]: 2026-01-31 08:51:51.491 221554 DEBUG nova.virt.libvirt.guest [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:51:51 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:  <nova:name>tempest-TestNetworkBasicOps-server-2100946823</nova:name>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 08:51:51</nova:creationTime>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 03:51:51 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:    <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:    <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:    <nova:port uuid="bb42c664-a169-4fcf-8369-07a9e967e305">
Jan 31 03:51:51 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:    <nova:port uuid="1d9a5de2-ec4a-4259-960e-a5b2ff6e3209">
Jan 31 03:51:51 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:51:51 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 03:51:51 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 03:51:51 np0005603609 nova_compute[221550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 03:51:51 np0005603609 podman[308845]: 2026-01-31 08:51:51.411221678 +0000 UTC m=+0.023106040 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:51:51 np0005603609 nova_compute[221550]: 2026-01-31 08:51:51.551 221554 DEBUG oslo_concurrency.lockutils [None req-6d810b07-77b3-4c13-8b80-db82b40d06e4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "interface-14d8553f-067d-4188-8ef4-8bb7fd63c62c-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 15.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:52 np0005603609 nova_compute[221550]: 2026-01-31 08:51:52.031 221554 DEBUG nova.compute.manager [req-4e9415a6-1172-4996-bdc4-baf5cb2a9dd4 req-2c119406-d688-447c-803d-69457fa586f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received event network-vif-plugged-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:51:52 np0005603609 nova_compute[221550]: 2026-01-31 08:51:52.031 221554 DEBUG oslo_concurrency.lockutils [req-4e9415a6-1172-4996-bdc4-baf5cb2a9dd4 req-2c119406-d688-447c-803d-69457fa586f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:52 np0005603609 nova_compute[221550]: 2026-01-31 08:51:52.032 221554 DEBUG oslo_concurrency.lockutils [req-4e9415a6-1172-4996-bdc4-baf5cb2a9dd4 req-2c119406-d688-447c-803d-69457fa586f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:52 np0005603609 nova_compute[221550]: 2026-01-31 08:51:52.032 221554 DEBUG oslo_concurrency.lockutils [req-4e9415a6-1172-4996-bdc4-baf5cb2a9dd4 req-2c119406-d688-447c-803d-69457fa586f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:52 np0005603609 nova_compute[221550]: 2026-01-31 08:51:52.032 221554 DEBUG nova.compute.manager [req-4e9415a6-1172-4996-bdc4-baf5cb2a9dd4 req-2c119406-d688-447c-803d-69457fa586f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] No waiting events found dispatching network-vif-plugged-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:51:52 np0005603609 nova_compute[221550]: 2026-01-31 08:51:52.032 221554 WARNING nova.compute.manager [req-4e9415a6-1172-4996-bdc4-baf5cb2a9dd4 req-2c119406-d688-447c-803d-69457fa586f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received unexpected event network-vif-plugged-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:51:52 np0005603609 nova_compute[221550]: 2026-01-31 08:51:52.109 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:52.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:52Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:73:b8:19 10.100.0.29
Jan 31 03:51:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:52Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:b8:19 10.100.0.29
Jan 31 03:51:52 np0005603609 podman[308845]: 2026-01-31 08:51:52.90533102 +0000 UTC m=+1.517215332 container create 4a8b13ecccda2533859fd25f47db2329693aee5b81501baf9c041d03739c3cd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:51:53 np0005603609 systemd[1]: Started libpod-conmon-4a8b13ecccda2533859fd25f47db2329693aee5b81501baf9c041d03739c3cd1.scope.
Jan 31 03:51:53 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:51:53 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0170b6af1fefed1d50c20899648efe11366e292b7197b942cfd769f3f756d47d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:51:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:53.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:51:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2136031882' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:51:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:51:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2136031882' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:51:53 np0005603609 podman[308845]: 2026-01-31 08:51:53.417540841 +0000 UTC m=+2.029425163 container init 4a8b13ecccda2533859fd25f47db2329693aee5b81501baf9c041d03739c3cd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 03:51:53 np0005603609 podman[308845]: 2026-01-31 08:51:53.421973248 +0000 UTC m=+2.033857570 container start 4a8b13ecccda2533859fd25f47db2329693aee5b81501baf9c041d03739c3cd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 03:51:53 np0005603609 neutron-haproxy-ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb[308860]: [NOTICE]   (308864) : New worker (308866) forked
Jan 31 03:51:53 np0005603609 neutron-haproxy-ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb[308860]: [NOTICE]   (308864) : Loading success.
Jan 31 03:51:53 np0005603609 nova_compute[221550]: 2026-01-31 08:51:53.903 221554 DEBUG nova.network.neutron [req-9dbfaf57-0ffd-4859-aa1c-79aeda56ee22 req-0d39978d-c769-4510-9f92-abbee447c8c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Updated VIF entry in instance network info cache for port 1d9a5de2-ec4a-4259-960e-a5b2ff6e3209. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:51:53 np0005603609 nova_compute[221550]: 2026-01-31 08:51:53.904 221554 DEBUG nova.network.neutron [req-9dbfaf57-0ffd-4859-aa1c-79aeda56ee22 req-0d39978d-c769-4510-9f92-abbee447c8c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Updating instance_info_cache with network_info: [{"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "address": "fa:16:3e:73:b8:19", "network": {"id": "f113c108-4cd7-46b7-82a9-ddf257d911bb", "bridge": "br-int", "label": "tempest-network-smoke--1714582670", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d9a5de2-ec", "ovs_interfaceid": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:51:53 np0005603609 nova_compute[221550]: 2026-01-31 08:51:53.948 221554 DEBUG oslo_concurrency.lockutils [req-9dbfaf57-0ffd-4859-aa1c-79aeda56ee22 req-0d39978d-c769-4510-9f92-abbee447c8c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:51:53 np0005603609 nova_compute[221550]: 2026-01-31 08:51:53.998 221554 DEBUG oslo_concurrency.lockutils [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "interface-14d8553f-067d-4188-8ef4-8bb7fd63c62c-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:53 np0005603609 nova_compute[221550]: 2026-01-31 08:51:53.999 221554 DEBUG oslo_concurrency.lockutils [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "interface-14d8553f-067d-4188-8ef4-8bb7fd63c62c-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.029 221554 DEBUG nova.objects.instance [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'flavor' on Instance uuid 14d8553f-067d-4188-8ef4-8bb7fd63c62c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.292 221554 DEBUG nova.virt.libvirt.vif [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:50:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2100946823',display_name='tempest-TestNetworkBasicOps-server-2100946823',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2100946823',id=198,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJNjBVgbErmT8//F7831upVgyo+TyYRA1eBt11/KsyCRN7pmelaqXV1IV1lMPiLAKBIPkzzpgcQRq/nQyirHPvnL1pzNnteQVi2Ibgfr0f1bEDWL1Jyb45CXLI7VI4BwSA==',key_name='tempest-TestNetworkBasicOps-1500474542',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:51:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-kizu2q26',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:51:06Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=14d8553f-067d-4188-8ef4-8bb7fd63c62c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "address": "fa:16:3e:73:b8:19", "network": {"id": "f113c108-4cd7-46b7-82a9-ddf257d911bb", "bridge": "br-int", "label": "tempest-network-smoke--1714582670", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d9a5de2-ec", "ovs_interfaceid": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.293 221554 DEBUG nova.network.os_vif_util [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "address": "fa:16:3e:73:b8:19", "network": {"id": "f113c108-4cd7-46b7-82a9-ddf257d911bb", "bridge": "br-int", "label": "tempest-network-smoke--1714582670", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d9a5de2-ec", "ovs_interfaceid": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.294 221554 DEBUG nova.network.os_vif_util [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:b8:19,bridge_name='br-int',has_traffic_filtering=True,id=1d9a5de2-ec4a-4259-960e-a5b2ff6e3209,network=Network(f113c108-4cd7-46b7-82a9-ddf257d911bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d9a5de2-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.298 221554 DEBUG nova.virt.libvirt.guest [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:73:b8:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d9a5de2-ec"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.300 221554 DEBUG nova.virt.libvirt.guest [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:73:b8:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d9a5de2-ec"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.302 221554 DEBUG nova.virt.libvirt.driver [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Attempting to detach device tap1d9a5de2-ec from instance 14d8553f-067d-4188-8ef4-8bb7fd63c62c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.302 221554 DEBUG nova.virt.libvirt.guest [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] detach device xml: <interface type="ethernet">
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:73:b8:19"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <target dev="tap1d9a5de2-ec"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]: </interface>
Jan 31 03:51:54 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.333 221554 DEBUG nova.virt.libvirt.guest [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:73:b8:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d9a5de2-ec"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.337 221554 DEBUG nova.virt.libvirt.guest [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:73:b8:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d9a5de2-ec"/></interface>not found in domain: <domain type='kvm' id='110'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <name>instance-000000c6</name>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <uuid>14d8553f-067d-4188-8ef4-8bb7fd63c62c</uuid>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:name>tempest-TestNetworkBasicOps-server-2100946823</nova:name>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 08:51:51</nova:creationTime>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:port uuid="bb42c664-a169-4fcf-8369-07a9e967e305">
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:port uuid="1d9a5de2-ec4a-4259-960e-a5b2ff6e3209">
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 03:51:54 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <memory unit='KiB'>131072</memory>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <vcpu placement='static'>1</vcpu>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <resource>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <partition>/machine</partition>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </resource>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <sysinfo type='smbios'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <entry name='manufacturer'>RDO</entry>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <entry name='serial'>14d8553f-067d-4188-8ef4-8bb7fd63c62c</entry>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <entry name='uuid'>14d8553f-067d-4188-8ef4-8bb7fd63c62c</entry>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <entry name='family'>Virtual Machine</entry>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <boot dev='hd'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <smbios mode='sysinfo'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <vmcoreinfo state='on'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <model fallback='forbid'>Nehalem</model>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <feature policy='require' name='x2apic'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <feature policy='require' name='hypervisor'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <feature policy='require' name='vme'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <clock offset='utc'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <timer name='hpet' present='no'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <on_poweroff>destroy</on_poweroff>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <on_reboot>restart</on_reboot>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <on_crash>destroy</on_crash>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <disk type='network' device='disk'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk' index='2'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target dev='vda' bus='virtio'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='virtio-disk0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <disk type='network' device='cdrom'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk.config' index='1'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target dev='sda' bus='sata'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <readonly/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='sata0-0-0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pcie.0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='1' port='0x10'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='2' port='0x11'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.2'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='3' port='0x12'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.3'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='4' port='0x13'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.4'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='5' port='0x14'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.5'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='6' port='0x15'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.6'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='7' port='0x16'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.7'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='8' port='0x17'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.8'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='9' port='0x18'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.9'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='10' port='0x19'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.10'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='11' port='0x1a'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.11'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='12' port='0x1b'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.12'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='13' port='0x1c'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.13'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='14' port='0x1d'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.14'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='15' port='0x1e'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.15'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='16' port='0x1f'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.16'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='17' port='0x20'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.17'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='18' port='0x21'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.18'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='19' port='0x22'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.19'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='20' port='0x23'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.20'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='21' port='0x24'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.21'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='22' port='0x25'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.22'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='23' port='0x26'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.23'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='24' port='0x27'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.24'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='25' port='0x28'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.25'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-pci-bridge'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.26'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='usb'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='sata' index='0'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='ide'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:9f:23:e6'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target dev='tapbb42c664-a1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='net0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:73:b8:19'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target dev='tap1d9a5de2-ec'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='net1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <serial type='pty'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c/console.log' append='off'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target type='isa-serial' port='0'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <model name='isa-serial'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      </target>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c/console.log' append='off'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target type='serial' port='0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </console>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <input type='tablet' bus='usb'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='input0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='usb' bus='0' port='1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <input type='mouse' bus='ps2'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='input1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <input type='keyboard' bus='ps2'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='input2'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <listen type='address' address='::0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <audio id='1' type='none'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='video0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <watchdog model='itco' action='reset'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='watchdog0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </watchdog>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <memballoon model='virtio'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <stats period='10'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='balloon0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <rng model='virtio'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <backend model='random'>/dev/urandom</backend>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='rng0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <label>system_u:system_r:svirt_t:s0:c102,c698</label>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c102,c698</imagelabel>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <label>+107:+107</label>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <imagelabel>+107:+107</imagelabel>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 03:51:54 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:51:54 np0005603609 nova_compute[221550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.338 221554 INFO nova.virt.libvirt.driver [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully detached device tap1d9a5de2-ec from instance 14d8553f-067d-4188-8ef4-8bb7fd63c62c from the persistent domain config.
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.338 221554 DEBUG nova.virt.libvirt.driver [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] (1/8): Attempting to detach device tap1d9a5de2-ec with device alias net1 from instance 14d8553f-067d-4188-8ef4-8bb7fd63c62c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.339 221554 DEBUG nova.virt.libvirt.guest [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] detach device xml: <interface type="ethernet">
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:73:b8:19"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <target dev="tap1d9a5de2-ec"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]: </interface>
Jan 31 03:51:54 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.353 221554 DEBUG nova.compute.manager [req-388c6c01-a673-47d0-aa70-6022853840b6 req-04674707-a17a-4053-8016-4955908bf7a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received event network-vif-plugged-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.354 221554 DEBUG oslo_concurrency.lockutils [req-388c6c01-a673-47d0-aa70-6022853840b6 req-04674707-a17a-4053-8016-4955908bf7a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.354 221554 DEBUG oslo_concurrency.lockutils [req-388c6c01-a673-47d0-aa70-6022853840b6 req-04674707-a17a-4053-8016-4955908bf7a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.354 221554 DEBUG oslo_concurrency.lockutils [req-388c6c01-a673-47d0-aa70-6022853840b6 req-04674707-a17a-4053-8016-4955908bf7a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.354 221554 DEBUG nova.compute.manager [req-388c6c01-a673-47d0-aa70-6022853840b6 req-04674707-a17a-4053-8016-4955908bf7a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] No waiting events found dispatching network-vif-plugged-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.355 221554 WARNING nova.compute.manager [req-388c6c01-a673-47d0-aa70-6022853840b6 req-04674707-a17a-4053-8016-4955908bf7a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received unexpected event network-vif-plugged-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 for instance with vm_state active and task_state None.
Jan 31 03:51:54 np0005603609 kernel: tap1d9a5de2-ec (unregistering): left promiscuous mode
Jan 31 03:51:54 np0005603609 NetworkManager[49064]: <info>  [1769849514.4044] device (tap1d9a5de2-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.409 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:54 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:54Z|00934|binding|INFO|Releasing lport 1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 from this chassis (sb_readonly=0)
Jan 31 03:51:54 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:54Z|00935|binding|INFO|Setting lport 1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 down in Southbound
Jan 31 03:51:54 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:54Z|00936|binding|INFO|Removing iface tap1d9a5de2-ec ovn-installed in OVS
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.411 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.414 221554 DEBUG nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Received event <DeviceRemovedEvent: 1769849514.41445, 14d8553f-067d-4188-8ef4-8bb7fd63c62c => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.415 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.417 221554 DEBUG nova.virt.libvirt.driver [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Start waiting for the detach event from libvirt for device tap1d9a5de2-ec with device alias net1 for instance 14d8553f-067d-4188-8ef4-8bb7fd63c62c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.417 221554 DEBUG nova.virt.libvirt.guest [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:73:b8:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d9a5de2-ec"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.420 221554 DEBUG nova.virt.libvirt.guest [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:73:b8:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d9a5de2-ec"/></interface>not found in domain: <domain type='kvm' id='110'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <name>instance-000000c6</name>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <uuid>14d8553f-067d-4188-8ef4-8bb7fd63c62c</uuid>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:name>tempest-TestNetworkBasicOps-server-2100946823</nova:name>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 08:51:51</nova:creationTime>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:port uuid="bb42c664-a169-4fcf-8369-07a9e967e305">
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:port uuid="1d9a5de2-ec4a-4259-960e-a5b2ff6e3209">
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 03:51:54 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <memory unit='KiB'>131072</memory>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <vcpu placement='static'>1</vcpu>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <resource>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <partition>/machine</partition>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </resource>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <sysinfo type='smbios'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <entry name='manufacturer'>RDO</entry>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <entry name='serial'>14d8553f-067d-4188-8ef4-8bb7fd63c62c</entry>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <entry name='uuid'>14d8553f-067d-4188-8ef4-8bb7fd63c62c</entry>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <entry name='family'>Virtual Machine</entry>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <boot dev='hd'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <smbios mode='sysinfo'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <vmcoreinfo state='on'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <model fallback='forbid'>Nehalem</model>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <feature policy='require' name='x2apic'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <feature policy='require' name='hypervisor'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <feature policy='require' name='vme'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <clock offset='utc'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <timer name='hpet' present='no'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <on_poweroff>destroy</on_poweroff>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <on_reboot>restart</on_reboot>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <on_crash>destroy</on_crash>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <disk type='network' device='disk'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk' index='2'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target dev='vda' bus='virtio'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='virtio-disk0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <disk type='network' device='cdrom'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk.config' index='1'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target dev='sda' bus='sata'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <readonly/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='sata0-0-0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pcie.0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='1' port='0x10'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='2' port='0x11'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.2'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='3' port='0x12'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.3'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='4' port='0x13'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.4'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='5' port='0x14'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.5'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='6' port='0x15'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.6'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='7' port='0x16'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.7'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='8' port='0x17'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.8'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='9' port='0x18'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.9'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='10' port='0x19'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.10'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='11' port='0x1a'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.11'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='12' port='0x1b'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.12'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='13' port='0x1c'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.13'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='14' port='0x1d'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.14'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='15' port='0x1e'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.15'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='16' port='0x1f'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.16'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='17' port='0x20'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.17'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='18' port='0x21'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.18'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='19' port='0x22'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.19'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='20' port='0x23'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.20'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='21' port='0x24'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.21'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='22' port='0x25'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.22'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='23' port='0x26'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.23'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='24' port='0x27'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.24'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target chassis='25' port='0x28'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.25'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model name='pcie-pci-bridge'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='pci.26'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='usb'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <controller type='sata' index='0'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='ide'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:9f:23:e6'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target dev='tapbb42c664-a1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='net0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <serial type='pty'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c/console.log' append='off'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target type='isa-serial' port='0'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:        <model name='isa-serial'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      </target>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c/console.log' append='off'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <target type='serial' port='0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </console>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <input type='tablet' bus='usb'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='input0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='usb' bus='0' port='1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <input type='mouse' bus='ps2'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='input1'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <input type='keyboard' bus='ps2'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='input2'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <listen type='address' address='::0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <audio id='1' type='none'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='video0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <watchdog model='itco' action='reset'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='watchdog0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </watchdog>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <memballoon model='virtio'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <stats period='10'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='balloon0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <rng model='virtio'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <backend model='random'>/dev/urandom</backend>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <alias name='rng0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <label>system_u:system_r:svirt_t:s0:c102,c698</label>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c102,c698</imagelabel>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <label>+107:+107</label>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <imagelabel>+107:+107</imagelabel>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 03:51:54 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:51:54 np0005603609 nova_compute[221550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.422 221554 INFO nova.virt.libvirt.driver [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully detached device tap1d9a5de2-ec from instance 14d8553f-067d-4188-8ef4-8bb7fd63c62c from the live domain config.
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.422 221554 DEBUG nova.virt.libvirt.vif [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:50:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2100946823',display_name='tempest-TestNetworkBasicOps-server-2100946823',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2100946823',id=198,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJNjBVgbErmT8//F7831upVgyo+TyYRA1eBt11/KsyCRN7pmelaqXV1IV1lMPiLAKBIPkzzpgcQRq/nQyirHPvnL1pzNnteQVi2Ibgfr0f1bEDWL1Jyb45CXLI7VI4BwSA==',key_name='tempest-TestNetworkBasicOps-1500474542',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:51:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-kizu2q26',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:51:06Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=14d8553f-067d-4188-8ef4-8bb7fd63c62c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "address": "fa:16:3e:73:b8:19", "network": {"id": "f113c108-4cd7-46b7-82a9-ddf257d911bb", "bridge": "br-int", "label": "tempest-network-smoke--1714582670", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d9a5de2-ec", "ovs_interfaceid": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.423 221554 DEBUG nova.network.os_vif_util [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "address": "fa:16:3e:73:b8:19", "network": {"id": "f113c108-4cd7-46b7-82a9-ddf257d911bb", "bridge": "br-int", "label": "tempest-network-smoke--1714582670", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d9a5de2-ec", "ovs_interfaceid": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.423 221554 DEBUG nova.network.os_vif_util [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:b8:19,bridge_name='br-int',has_traffic_filtering=True,id=1d9a5de2-ec4a-4259-960e-a5b2ff6e3209,network=Network(f113c108-4cd7-46b7-82a9-ddf257d911bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d9a5de2-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.423 221554 DEBUG os_vif [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:b8:19,bridge_name='br-int',has_traffic_filtering=True,id=1d9a5de2-ec4a-4259-960e-a5b2ff6e3209,network=Network(f113c108-4cd7-46b7-82a9-ddf257d911bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d9a5de2-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.425 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.425 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d9a5de2-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.426 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.427 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.429 221554 INFO os_vif [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:b8:19,bridge_name='br-int',has_traffic_filtering=True,id=1d9a5de2-ec4a-4259-960e-a5b2ff6e3209,network=Network(f113c108-4cd7-46b7-82a9-ddf257d911bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d9a5de2-ec')
Jan 31 03:51:54 np0005603609 nova_compute[221550]: 2026-01-31 08:51:54.429 221554 DEBUG nova.virt.libvirt.guest [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:name>tempest-TestNetworkBasicOps-server-2100946823</nova:name>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 08:51:54</nova:creationTime>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    <nova:port uuid="bb42c664-a169-4fcf-8369-07a9e967e305">
Jan 31 03:51:54 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:51:54 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 03:51:54 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 03:51:54 np0005603609 nova_compute[221550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 31 03:51:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:54.624 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:b8:19 10.100.0.29'], port_security=['fa:16:3e:73:b8:19 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '14d8553f-067d-4188-8ef4-8bb7fd63c62c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f113c108-4cd7-46b7-82a9-ddf257d911bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc812d40-ff23-43ac-a90f-2ee695b7bd6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cee5488-6c87-4d9d-9e11-ff7ade32604c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=1d9a5de2-ec4a-4259-960e-a5b2ff6e3209) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:51:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:54.625 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 in datapath f113c108-4cd7-46b7-82a9-ddf257d911bb unbound from our chassis
Jan 31 03:51:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:54.627 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f113c108-4cd7-46b7-82a9-ddf257d911bb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 03:51:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:54.628 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[29b886ca-5fd5-4cee-8383-02850ed3080c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:54.628 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb namespace which is not needed anymore
Jan 31 03:51:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:51:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:54.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:51:54 np0005603609 neutron-haproxy-ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb[308860]: [NOTICE]   (308864) : haproxy version is 2.8.14-c23fe91
Jan 31 03:51:54 np0005603609 neutron-haproxy-ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb[308860]: [NOTICE]   (308864) : path to executable is /usr/sbin/haproxy
Jan 31 03:51:54 np0005603609 neutron-haproxy-ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb[308860]: [WARNING]  (308864) : Exiting Master process...
Jan 31 03:51:54 np0005603609 neutron-haproxy-ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb[308860]: [ALERT]    (308864) : Current worker (308866) exited with code 143 (Terminated)
Jan 31 03:51:54 np0005603609 neutron-haproxy-ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb[308860]: [WARNING]  (308864) : All workers exited. Exiting... (0)
Jan 31 03:51:54 np0005603609 systemd[1]: libpod-4a8b13ecccda2533859fd25f47db2329693aee5b81501baf9c041d03739c3cd1.scope: Deactivated successfully.
Jan 31 03:51:54 np0005603609 podman[308898]: 2026-01-31 08:51:54.837941899 +0000 UTC m=+0.143470095 container died 4a8b13ecccda2533859fd25f47db2329693aee5b81501baf9c041d03739c3cd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 03:51:55 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4a8b13ecccda2533859fd25f47db2329693aee5b81501baf9c041d03739c3cd1-userdata-shm.mount: Deactivated successfully.
Jan 31 03:51:55 np0005603609 systemd[1]: var-lib-containers-storage-overlay-0170b6af1fefed1d50c20899648efe11366e292b7197b942cfd769f3f756d47d-merged.mount: Deactivated successfully.
Jan 31 03:51:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:55.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:55 np0005603609 podman[308898]: 2026-01-31 08:51:55.404171057 +0000 UTC m=+0.709699263 container cleanup 4a8b13ecccda2533859fd25f47db2329693aee5b81501baf9c041d03739c3cd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 03:51:55 np0005603609 systemd[1]: libpod-conmon-4a8b13ecccda2533859fd25f47db2329693aee5b81501baf9c041d03739c3cd1.scope: Deactivated successfully.
Jan 31 03:51:55 np0005603609 podman[308929]: 2026-01-31 08:51:55.560199425 +0000 UTC m=+0.138172066 container remove 4a8b13ecccda2533859fd25f47db2329693aee5b81501baf9c041d03739c3cd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:51:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:55.565 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[209a771d-4e2a-4162-87d3-59d22ce30293]: (4, ('Sat Jan 31 08:51:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb (4a8b13ecccda2533859fd25f47db2329693aee5b81501baf9c041d03739c3cd1)\n4a8b13ecccda2533859fd25f47db2329693aee5b81501baf9c041d03739c3cd1\nSat Jan 31 08:51:55 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb (4a8b13ecccda2533859fd25f47db2329693aee5b81501baf9c041d03739c3cd1)\n4a8b13ecccda2533859fd25f47db2329693aee5b81501baf9c041d03739c3cd1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:55.567 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1c314aea-f83c-4e1e-92a8-7c7c38347759]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:55.568 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf113c108-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:51:55 np0005603609 kernel: tapf113c108-40: left promiscuous mode
Jan 31 03:51:55 np0005603609 nova_compute[221550]: 2026-01-31 08:51:55.569 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:55 np0005603609 nova_compute[221550]: 2026-01-31 08:51:55.574 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:55.579 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3cec7249-5f7d-43b6-a3b1-da7d3e9cfc4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:55.592 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a6866ad4-71d4-40b5-9d54-214ceb327017]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:55.593 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[baa1dac4-3b52-4232-adb0-f55708273218]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:51:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:55.603 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[30e5f83b-1327-4c72-a5b9-223dabd37933]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 975062, 'reachable_time': 33100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308945, 'error': None, 'target': 'ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:55.605 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f113c108-4cd7-46b7-82a9-ddf257d911bb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:51:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:55.605 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[724e4264-ff5f-45ba-b80d-7355e0b1c8e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:55 np0005603609 systemd[1]: run-netns-ovnmeta\x2df113c108\x2d4cd7\x2d46b7\x2d82a9\x2dddf257d911bb.mount: Deactivated successfully.
Jan 31 03:51:55 np0005603609 nova_compute[221550]: 2026-01-31 08:51:55.738 221554 DEBUG oslo_concurrency.lockutils [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:51:55 np0005603609 nova_compute[221550]: 2026-01-31 08:51:55.738 221554 DEBUG oslo_concurrency.lockutils [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:51:55 np0005603609 nova_compute[221550]: 2026-01-31 08:51:55.738 221554 DEBUG nova.network.neutron [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:51:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.547 221554 DEBUG nova.compute.manager [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received event network-vif-unplugged-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.548 221554 DEBUG oslo_concurrency.lockutils [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.548 221554 DEBUG oslo_concurrency.lockutils [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.548 221554 DEBUG oslo_concurrency.lockutils [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.548 221554 DEBUG nova.compute.manager [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] No waiting events found dispatching network-vif-unplugged-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.549 221554 WARNING nova.compute.manager [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received unexpected event network-vif-unplugged-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.549 221554 DEBUG nova.compute.manager [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received event network-vif-plugged-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.549 221554 DEBUG oslo_concurrency.lockutils [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.549 221554 DEBUG oslo_concurrency.lockutils [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.550 221554 DEBUG oslo_concurrency.lockutils [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.550 221554 DEBUG nova.compute.manager [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] No waiting events found dispatching network-vif-plugged-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.550 221554 WARNING nova.compute.manager [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received unexpected event network-vif-plugged-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.550 221554 DEBUG nova.compute.manager [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received event network-vif-deleted-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.551 221554 INFO nova.compute.manager [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Neutron deleted interface 1d9a5de2-ec4a-4259-960e-a5b2ff6e3209; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.551 221554 DEBUG nova.network.neutron [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Updating instance_info_cache with network_info: [{"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:51:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:56.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.683 221554 DEBUG nova.objects.instance [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lazy-loading 'system_metadata' on Instance uuid 14d8553f-067d-4188-8ef4-8bb7fd63c62c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.746 221554 DEBUG nova.objects.instance [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lazy-loading 'flavor' on Instance uuid 14d8553f-067d-4188-8ef4-8bb7fd63c62c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.868 221554 DEBUG nova.virt.libvirt.vif [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:50:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2100946823',display_name='tempest-TestNetworkBasicOps-server-2100946823',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2100946823',id=198,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJNjBVgbErmT8//F7831upVgyo+TyYRA1eBt11/KsyCRN7pmelaqXV1IV1lMPiLAKBIPkzzpgcQRq/nQyirHPvnL1pzNnteQVi2Ibgfr0f1bEDWL1Jyb45CXLI7VI4BwSA==',key_name='tempest-TestNetworkBasicOps-1500474542',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:51:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-kizu2q26',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:51:06Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=14d8553f-067d-4188-8ef4-8bb7fd63c62c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "address": "fa:16:3e:73:b8:19", "network": {"id": "f113c108-4cd7-46b7-82a9-ddf257d911bb", "bridge": "br-int", "label": "tempest-network-smoke--1714582670", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d9a5de2-ec", "ovs_interfaceid": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.869 221554 DEBUG nova.network.os_vif_util [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converting VIF {"id": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "address": "fa:16:3e:73:b8:19", "network": {"id": "f113c108-4cd7-46b7-82a9-ddf257d911bb", "bridge": "br-int", "label": "tempest-network-smoke--1714582670", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d9a5de2-ec", "ovs_interfaceid": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.870 221554 DEBUG nova.network.os_vif_util [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:b8:19,bridge_name='br-int',has_traffic_filtering=True,id=1d9a5de2-ec4a-4259-960e-a5b2ff6e3209,network=Network(f113c108-4cd7-46b7-82a9-ddf257d911bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d9a5de2-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.873 221554 DEBUG nova.virt.libvirt.guest [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:73:b8:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d9a5de2-ec"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.877 221554 DEBUG nova.virt.libvirt.guest [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:73:b8:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d9a5de2-ec"/></interface>not found in domain: <domain type='kvm' id='110'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <name>instance-000000c6</name>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <uuid>14d8553f-067d-4188-8ef4-8bb7fd63c62c</uuid>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:name>tempest-TestNetworkBasicOps-server-2100946823</nova:name>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 08:51:54</nova:creationTime>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:port uuid="bb42c664-a169-4fcf-8369-07a9e967e305">
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 03:51:56 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <memory unit='KiB'>131072</memory>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <vcpu placement='static'>1</vcpu>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <resource>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <partition>/machine</partition>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </resource>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <sysinfo type='smbios'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <entry name='manufacturer'>RDO</entry>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <entry name='serial'>14d8553f-067d-4188-8ef4-8bb7fd63c62c</entry>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <entry name='uuid'>14d8553f-067d-4188-8ef4-8bb7fd63c62c</entry>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <entry name='family'>Virtual Machine</entry>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <boot dev='hd'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <smbios mode='sysinfo'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <vmcoreinfo state='on'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <model fallback='forbid'>Nehalem</model>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <feature policy='require' name='x2apic'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <feature policy='require' name='hypervisor'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <feature policy='require' name='vme'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <clock offset='utc'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <timer name='hpet' present='no'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <on_poweroff>destroy</on_poweroff>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <on_reboot>restart</on_reboot>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <on_crash>destroy</on_crash>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <disk type='network' device='disk'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk' index='2'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target dev='vda' bus='virtio'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='virtio-disk0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <disk type='network' device='cdrom'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk.config' index='1'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target dev='sda' bus='sata'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <readonly/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='sata0-0-0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pcie.0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='1' port='0x10'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.1'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='2' port='0x11'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.2'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='3' port='0x12'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.3'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='4' port='0x13'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.4'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='5' port='0x14'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.5'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='6' port='0x15'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.6'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='7' port='0x16'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.7'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='8' port='0x17'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.8'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='9' port='0x18'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.9'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='10' port='0x19'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.10'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='11' port='0x1a'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.11'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='12' port='0x1b'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.12'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='13' port='0x1c'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.13'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='14' port='0x1d'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.14'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='15' port='0x1e'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.15'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='16' port='0x1f'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.16'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='17' port='0x20'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.17'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='18' port='0x21'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.18'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='19' port='0x22'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.19'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='20' port='0x23'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.20'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='21' port='0x24'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.21'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='22' port='0x25'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.22'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='23' port='0x26'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.23'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='24' port='0x27'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.24'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='25' port='0x28'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.25'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-pci-bridge'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.26'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='usb'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='sata' index='0'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='ide'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:9f:23:e6'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target dev='tapbb42c664-a1'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='net0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <serial type='pty'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c/console.log' append='off'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target type='isa-serial' port='0'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <model name='isa-serial'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      </target>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c/console.log' append='off'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target type='serial' port='0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </console>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <input type='tablet' bus='usb'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='input0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='usb' bus='0' port='1'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <input type='mouse' bus='ps2'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='input1'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <input type='keyboard' bus='ps2'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='input2'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <listen type='address' address='::0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <audio id='1' type='none'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='video0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <watchdog model='itco' action='reset'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='watchdog0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </watchdog>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <memballoon model='virtio'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <stats period='10'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='balloon0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <rng model='virtio'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <backend model='random'>/dev/urandom</backend>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='rng0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <label>system_u:system_r:svirt_t:s0:c102,c698</label>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c102,c698</imagelabel>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <label>+107:+107</label>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <imagelabel>+107:+107</imagelabel>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 03:51:56 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:51:56 np0005603609 nova_compute[221550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.879 221554 DEBUG nova.virt.libvirt.guest [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:73:b8:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d9a5de2-ec"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.884 221554 DEBUG nova.virt.libvirt.guest [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:73:b8:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d9a5de2-ec"/></interface>not found in domain: <domain type='kvm' id='110'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <name>instance-000000c6</name>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <uuid>14d8553f-067d-4188-8ef4-8bb7fd63c62c</uuid>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:name>tempest-TestNetworkBasicOps-server-2100946823</nova:name>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 08:51:54</nova:creationTime>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:port uuid="bb42c664-a169-4fcf-8369-07a9e967e305">
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 03:51:56 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <memory unit='KiB'>131072</memory>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <vcpu placement='static'>1</vcpu>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <resource>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <partition>/machine</partition>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </resource>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <sysinfo type='smbios'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <entry name='manufacturer'>RDO</entry>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <entry name='serial'>14d8553f-067d-4188-8ef4-8bb7fd63c62c</entry>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <entry name='uuid'>14d8553f-067d-4188-8ef4-8bb7fd63c62c</entry>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <entry name='family'>Virtual Machine</entry>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <boot dev='hd'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <smbios mode='sysinfo'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <vmcoreinfo state='on'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <model fallback='forbid'>Nehalem</model>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <feature policy='require' name='x2apic'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <feature policy='require' name='hypervisor'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <feature policy='require' name='vme'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <clock offset='utc'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <timer name='hpet' present='no'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <on_poweroff>destroy</on_poweroff>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <on_reboot>restart</on_reboot>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <on_crash>destroy</on_crash>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <disk type='network' device='disk'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk' index='2'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target dev='vda' bus='virtio'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='virtio-disk0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <disk type='network' device='cdrom'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/14d8553f-067d-4188-8ef4-8bb7fd63c62c_disk.config' index='1'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target dev='sda' bus='sata'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <readonly/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='sata0-0-0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pcie.0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='1' port='0x10'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.1'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='2' port='0x11'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.2'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='3' port='0x12'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.3'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='4' port='0x13'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.4'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='5' port='0x14'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.5'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='6' port='0x15'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.6'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='7' port='0x16'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.7'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='8' port='0x17'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.8'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='9' port='0x18'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.9'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='10' port='0x19'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.10'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='11' port='0x1a'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.11'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='12' port='0x1b'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.12'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='13' port='0x1c'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.13'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='14' port='0x1d'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.14'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='15' port='0x1e'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.15'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='16' port='0x1f'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.16'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='17' port='0x20'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.17'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='18' port='0x21'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.18'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='19' port='0x22'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.19'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='20' port='0x23'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.20'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='21' port='0x24'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.21'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='22' port='0x25'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.22'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='23' port='0x26'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.23'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='24' port='0x27'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.24'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target chassis='25' port='0x28'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.25'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model name='pcie-pci-bridge'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='pci.26'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='usb'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <controller type='sata' index='0'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='ide'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:9f:23:e6'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target dev='tapbb42c664-a1'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='net0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <serial type='pty'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c/console.log' append='off'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target type='isa-serial' port='0'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:        <model name='isa-serial'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      </target>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c/console.log' append='off'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <target type='serial' port='0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </console>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <input type='tablet' bus='usb'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='input0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='usb' bus='0' port='1'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <input type='mouse' bus='ps2'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='input1'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <input type='keyboard' bus='ps2'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='input2'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <listen type='address' address='::0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <audio id='1' type='none'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='video0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <watchdog model='itco' action='reset'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='watchdog0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </watchdog>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <memballoon model='virtio'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <stats period='10'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='balloon0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <rng model='virtio'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <backend model='random'>/dev/urandom</backend>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <alias name='rng0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <label>system_u:system_r:svirt_t:s0:c102,c698</label>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c102,c698</imagelabel>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <label>+107:+107</label>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <imagelabel>+107:+107</imagelabel>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 03:51:56 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:51:56 np0005603609 nova_compute[221550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.886 221554 WARNING nova.virt.libvirt.driver [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Detaching interface fa:16:3e:73:b8:19 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap1d9a5de2-ec' not found.
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.887 221554 DEBUG nova.virt.libvirt.vif [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:50:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2100946823',display_name='tempest-TestNetworkBasicOps-server-2100946823',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2100946823',id=198,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJNjBVgbErmT8//F7831upVgyo+TyYRA1eBt11/KsyCRN7pmelaqXV1IV1lMPiLAKBIPkzzpgcQRq/nQyirHPvnL1pzNnteQVi2Ibgfr0f1bEDWL1Jyb45CXLI7VI4BwSA==',key_name='tempest-TestNetworkBasicOps-1500474542',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:51:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-kizu2q26',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:51:06Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=14d8553f-067d-4188-8ef4-8bb7fd63c62c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "address": "fa:16:3e:73:b8:19", "network": {"id": "f113c108-4cd7-46b7-82a9-ddf257d911bb", "bridge": "br-int", "label": "tempest-network-smoke--1714582670", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d9a5de2-ec", "ovs_interfaceid": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.887 221554 DEBUG nova.network.os_vif_util [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converting VIF {"id": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "address": "fa:16:3e:73:b8:19", "network": {"id": "f113c108-4cd7-46b7-82a9-ddf257d911bb", "bridge": "br-int", "label": "tempest-network-smoke--1714582670", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d9a5de2-ec", "ovs_interfaceid": "1d9a5de2-ec4a-4259-960e-a5b2ff6e3209", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.888 221554 DEBUG nova.network.os_vif_util [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:b8:19,bridge_name='br-int',has_traffic_filtering=True,id=1d9a5de2-ec4a-4259-960e-a5b2ff6e3209,network=Network(f113c108-4cd7-46b7-82a9-ddf257d911bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d9a5de2-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.889 221554 DEBUG os_vif [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:b8:19,bridge_name='br-int',has_traffic_filtering=True,id=1d9a5de2-ec4a-4259-960e-a5b2ff6e3209,network=Network(f113c108-4cd7-46b7-82a9-ddf257d911bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d9a5de2-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.890 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.891 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d9a5de2-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.891 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.894 221554 INFO os_vif [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:b8:19,bridge_name='br-int',has_traffic_filtering=True,id=1d9a5de2-ec4a-4259-960e-a5b2ff6e3209,network=Network(f113c108-4cd7-46b7-82a9-ddf257d911bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d9a5de2-ec')
Jan 31 03:51:56 np0005603609 nova_compute[221550]: 2026-01-31 08:51:56.895 221554 DEBUG nova.virt.libvirt.guest [req-c83f4fb0-18c5-4ebf-9eb1-97c599843c39 req-24419a9d-f67d-400b-9ad6-25e18d27c91c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:name>tempest-TestNetworkBasicOps-server-2100946823</nova:name>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 08:51:56</nova:creationTime>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    <nova:port uuid="bb42c664-a169-4fcf-8369-07a9e967e305">
Jan 31 03:51:56 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:51:56 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 03:51:56 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 03:51:56 np0005603609 nova_compute[221550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 03:51:57 np0005603609 nova_compute[221550]: 2026-01-31 08:51:57.111 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:57.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:57 np0005603609 nova_compute[221550]: 2026-01-31 08:51:57.634 221554 INFO nova.network.neutron [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Port 1d9a5de2-ec4a-4259-960e-a5b2ff6e3209 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 31 03:51:57 np0005603609 nova_compute[221550]: 2026-01-31 08:51:57.635 221554 DEBUG nova.network.neutron [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Updating instance_info_cache with network_info: [{"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:51:57 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:57Z|00937|binding|INFO|Releasing lport 84f1437f-a2d5-499e-9b9c-84740517da54 from this chassis (sb_readonly=0)
Jan 31 03:51:57 np0005603609 nova_compute[221550]: 2026-01-31 08:51:57.769 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:57 np0005603609 nova_compute[221550]: 2026-01-31 08:51:57.836 221554 DEBUG oslo_concurrency.lockutils [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:51:57 np0005603609 nova_compute[221550]: 2026-01-31 08:51:57.879 221554 DEBUG oslo_concurrency.lockutils [None req-54f4cff4-d105-467a-ad2c-410de6a00452 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "interface-14d8553f-067d-4188-8ef4-8bb7fd63c62c-1d9a5de2-ec4a-4259-960e-a5b2ff6e3209" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:58 np0005603609 nova_compute[221550]: 2026-01-31 08:51:58.615 221554 DEBUG nova.compute.manager [req-1bc7b152-c74c-4ba4-a165-7f4d5c02ebe1 req-a5deafe4-a38e-4098-b00a-f959b072a336 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received event network-changed-bb42c664-a169-4fcf-8369-07a9e967e305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:51:58 np0005603609 nova_compute[221550]: 2026-01-31 08:51:58.615 221554 DEBUG nova.compute.manager [req-1bc7b152-c74c-4ba4-a165-7f4d5c02ebe1 req-a5deafe4-a38e-4098-b00a-f959b072a336 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Refreshing instance network info cache due to event network-changed-bb42c664-a169-4fcf-8369-07a9e967e305. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:51:58 np0005603609 nova_compute[221550]: 2026-01-31 08:51:58.616 221554 DEBUG oslo_concurrency.lockutils [req-1bc7b152-c74c-4ba4-a165-7f4d5c02ebe1 req-a5deafe4-a38e-4098-b00a-f959b072a336 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:51:58 np0005603609 nova_compute[221550]: 2026-01-31 08:51:58.616 221554 DEBUG oslo_concurrency.lockutils [req-1bc7b152-c74c-4ba4-a165-7f4d5c02ebe1 req-a5deafe4-a38e-4098-b00a-f959b072a336 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:51:58 np0005603609 nova_compute[221550]: 2026-01-31 08:51:58.617 221554 DEBUG nova.network.neutron [req-1bc7b152-c74c-4ba4-a165-7f4d5c02ebe1 req-a5deafe4-a38e-4098-b00a-f959b072a336 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Refreshing network info cache for port bb42c664-a169-4fcf-8369-07a9e967e305 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:51:58 np0005603609 nova_compute[221550]: 2026-01-31 08:51:58.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:58 np0005603609 nova_compute[221550]: 2026-01-31 08:51:58.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:58.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:58 np0005603609 nova_compute[221550]: 2026-01-31 08:51:58.983 221554 DEBUG oslo_concurrency.lockutils [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:58 np0005603609 nova_compute[221550]: 2026-01-31 08:51:58.984 221554 DEBUG oslo_concurrency.lockutils [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:58 np0005603609 nova_compute[221550]: 2026-01-31 08:51:58.985 221554 DEBUG oslo_concurrency.lockutils [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:58 np0005603609 nova_compute[221550]: 2026-01-31 08:51:58.985 221554 DEBUG oslo_concurrency.lockutils [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:58 np0005603609 nova_compute[221550]: 2026-01-31 08:51:58.986 221554 DEBUG oslo_concurrency.lockutils [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:58 np0005603609 nova_compute[221550]: 2026-01-31 08:51:58.988 221554 INFO nova.compute.manager [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Terminating instance#033[00m
Jan 31 03:51:58 np0005603609 nova_compute[221550]: 2026-01-31 08:51:58.990 221554 DEBUG nova.compute.manager [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:51:59 np0005603609 kernel: tapbb42c664-a1 (unregistering): left promiscuous mode
Jan 31 03:51:59 np0005603609 NetworkManager[49064]: <info>  [1769849519.0466] device (tapbb42c664-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:51:59 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:59Z|00938|binding|INFO|Releasing lport bb42c664-a169-4fcf-8369-07a9e967e305 from this chassis (sb_readonly=0)
Jan 31 03:51:59 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:59Z|00939|binding|INFO|Setting lport bb42c664-a169-4fcf-8369-07a9e967e305 down in Southbound
Jan 31 03:51:59 np0005603609 nova_compute[221550]: 2026-01-31 08:51:59.051 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:59 np0005603609 ovn_controller[130359]: 2026-01-31T08:51:59Z|00940|binding|INFO|Removing iface tapbb42c664-a1 ovn-installed in OVS
Jan 31 03:51:59 np0005603609 nova_compute[221550]: 2026-01-31 08:51:59.065 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:59.066 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:23:e6 10.100.0.10'], port_security=['fa:16:3e:9f:23:e6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '14d8553f-067d-4188-8ef4-8bb7fd63c62c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6d64646-ef89-444c-96ac-3de221e3aa33', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1e825819-7398-4d75-a4d1-c086e5f05fef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1675a5e-20f4-4c0a-aae9-8afbbc711349, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=bb42c664-a169-4fcf-8369-07a9e967e305) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:51:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:59.068 140058 INFO neutron.agent.ovn.metadata.agent [-] Port bb42c664-a169-4fcf-8369-07a9e967e305 in datapath d6d64646-ef89-444c-96ac-3de221e3aa33 unbound from our chassis#033[00m
Jan 31 03:51:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:59.070 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d6d64646-ef89-444c-96ac-3de221e3aa33, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:51:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:59.071 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a60039eb-cca7-4253-944f-b7ec8dfaa558]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:59.071 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33 namespace which is not needed anymore#033[00m
Jan 31 03:51:59 np0005603609 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d000000c6.scope: Deactivated successfully.
Jan 31 03:51:59 np0005603609 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d000000c6.scope: Consumed 14.123s CPU time.
Jan 31 03:51:59 np0005603609 systemd-machined[190912]: Machine qemu-110-instance-000000c6 terminated.
Jan 31 03:51:59 np0005603609 neutron-haproxy-ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33[308578]: [NOTICE]   (308582) : haproxy version is 2.8.14-c23fe91
Jan 31 03:51:59 np0005603609 neutron-haproxy-ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33[308578]: [NOTICE]   (308582) : path to executable is /usr/sbin/haproxy
Jan 31 03:51:59 np0005603609 neutron-haproxy-ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33[308578]: [WARNING]  (308582) : Exiting Master process...
Jan 31 03:51:59 np0005603609 neutron-haproxy-ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33[308578]: [WARNING]  (308582) : Exiting Master process...
Jan 31 03:51:59 np0005603609 neutron-haproxy-ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33[308578]: [ALERT]    (308582) : Current worker (308584) exited with code 143 (Terminated)
Jan 31 03:51:59 np0005603609 neutron-haproxy-ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33[308578]: [WARNING]  (308582) : All workers exited. Exiting... (0)
Jan 31 03:51:59 np0005603609 systemd[1]: libpod-4e9fabdfb6d6f437ae621926085bab62005dac0f9f3d49cfcd31100289fe16a5.scope: Deactivated successfully.
Jan 31 03:51:59 np0005603609 podman[308971]: 2026-01-31 08:51:59.188772412 +0000 UTC m=+0.041300291 container died 4e9fabdfb6d6f437ae621926085bab62005dac0f9f3d49cfcd31100289fe16a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:51:59 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e9fabdfb6d6f437ae621926085bab62005dac0f9f3d49cfcd31100289fe16a5-userdata-shm.mount: Deactivated successfully.
Jan 31 03:51:59 np0005603609 systemd[1]: var-lib-containers-storage-overlay-6a9a0250ec5cddd04607ce00186fc70f4ee311ba9a8ebd905e20278d9a23c646-merged.mount: Deactivated successfully.
Jan 31 03:51:59 np0005603609 podman[308971]: 2026-01-31 08:51:59.234703024 +0000 UTC m=+0.087230903 container cleanup 4e9fabdfb6d6f437ae621926085bab62005dac0f9f3d49cfcd31100289fe16a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:51:59 np0005603609 nova_compute[221550]: 2026-01-31 08:51:59.235 221554 INFO nova.virt.libvirt.driver [-] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Instance destroyed successfully.#033[00m
Jan 31 03:51:59 np0005603609 nova_compute[221550]: 2026-01-31 08:51:59.235 221554 DEBUG nova.objects.instance [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'resources' on Instance uuid 14d8553f-067d-4188-8ef4-8bb7fd63c62c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:51:59 np0005603609 systemd[1]: libpod-conmon-4e9fabdfb6d6f437ae621926085bab62005dac0f9f3d49cfcd31100289fe16a5.scope: Deactivated successfully.
Jan 31 03:51:59 np0005603609 nova_compute[221550]: 2026-01-31 08:51:59.282 221554 DEBUG nova.virt.libvirt.vif [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:50:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2100946823',display_name='tempest-TestNetworkBasicOps-server-2100946823',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2100946823',id=198,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJNjBVgbErmT8//F7831upVgyo+TyYRA1eBt11/KsyCRN7pmelaqXV1IV1lMPiLAKBIPkzzpgcQRq/nQyirHPvnL1pzNnteQVi2Ibgfr0f1bEDWL1Jyb45CXLI7VI4BwSA==',key_name='tempest-TestNetworkBasicOps-1500474542',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:51:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-kizu2q26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:51:06Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=14d8553f-067d-4188-8ef4-8bb7fd63c62c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:51:59 np0005603609 nova_compute[221550]: 2026-01-31 08:51:59.282 221554 DEBUG nova.network.os_vif_util [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:51:59 np0005603609 nova_compute[221550]: 2026-01-31 08:51:59.283 221554 DEBUG nova.network.os_vif_util [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9f:23:e6,bridge_name='br-int',has_traffic_filtering=True,id=bb42c664-a169-4fcf-8369-07a9e967e305,network=Network(d6d64646-ef89-444c-96ac-3de221e3aa33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb42c664-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:51:59 np0005603609 nova_compute[221550]: 2026-01-31 08:51:59.283 221554 DEBUG os_vif [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:23:e6,bridge_name='br-int',has_traffic_filtering=True,id=bb42c664-a169-4fcf-8369-07a9e967e305,network=Network(d6d64646-ef89-444c-96ac-3de221e3aa33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb42c664-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:51:59 np0005603609 nova_compute[221550]: 2026-01-31 08:51:59.285 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:59 np0005603609 nova_compute[221550]: 2026-01-31 08:51:59.285 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb42c664-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:59 np0005603609 nova_compute[221550]: 2026-01-31 08:51:59.286 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:59 np0005603609 podman[309012]: 2026-01-31 08:51:59.289287695 +0000 UTC m=+0.038302998 container remove 4e9fabdfb6d6f437ae621926085bab62005dac0f9f3d49cfcd31100289fe16a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:51:59 np0005603609 nova_compute[221550]: 2026-01-31 08:51:59.289 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:51:59 np0005603609 nova_compute[221550]: 2026-01-31 08:51:59.291 221554 INFO os_vif [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:23:e6,bridge_name='br-int',has_traffic_filtering=True,id=bb42c664-a169-4fcf-8369-07a9e967e305,network=Network(d6d64646-ef89-444c-96ac-3de221e3aa33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb42c664-a1')#033[00m
Jan 31 03:51:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:59.292 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a373c6a6-aac4-471e-ab28-9258f4302829]: (4, ('Sat Jan 31 08:51:59 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33 (4e9fabdfb6d6f437ae621926085bab62005dac0f9f3d49cfcd31100289fe16a5)\n4e9fabdfb6d6f437ae621926085bab62005dac0f9f3d49cfcd31100289fe16a5\nSat Jan 31 08:51:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33 (4e9fabdfb6d6f437ae621926085bab62005dac0f9f3d49cfcd31100289fe16a5)\n4e9fabdfb6d6f437ae621926085bab62005dac0f9f3d49cfcd31100289fe16a5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:59.294 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f455a558-6b7a-4976-a99f-b0a57264a0de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:59.294 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6d64646-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:59 np0005603609 kernel: tapd6d64646-e0: left promiscuous mode
Jan 31 03:51:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:59.304 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4b908660-a125-452d-a2fe-102595d36e7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:59 np0005603609 nova_compute[221550]: 2026-01-31 08:51:59.309 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:51:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:59.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:59.317 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6d6cc6-84e3-460d-b2ca-24c656a82681]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:59.318 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8e285c19-4843-470d-9b56-91b1a8bef34a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:59.330 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bcdae3ad-cbdb-465d-8b87-17267d6c1b75]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 970459, 'reachable_time': 20151, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309042, 'error': None, 'target': 'ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:59.332 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d6d64646-ef89-444c-96ac-3de221e3aa33 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:51:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:51:59.332 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc5043e-5590-4549-8b09-db32a728091c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:59 np0005603609 systemd[1]: run-netns-ovnmeta\x2dd6d64646\x2def89\x2d444c\x2d96ac\x2d3de221e3aa33.mount: Deactivated successfully.
Jan 31 03:52:00 np0005603609 nova_compute[221550]: 2026-01-31 08:52:00.113 221554 INFO nova.virt.libvirt.driver [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Deleting instance files /var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c_del#033[00m
Jan 31 03:52:00 np0005603609 nova_compute[221550]: 2026-01-31 08:52:00.114 221554 INFO nova.virt.libvirt.driver [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Deletion of /var/lib/nova/instances/14d8553f-067d-4188-8ef4-8bb7fd63c62c_del complete#033[00m
Jan 31 03:52:00 np0005603609 nova_compute[221550]: 2026-01-31 08:52:00.264 221554 INFO nova.compute.manager [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Took 1.27 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:52:00 np0005603609 nova_compute[221550]: 2026-01-31 08:52:00.264 221554 DEBUG oslo.service.loopingcall [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:52:00 np0005603609 nova_compute[221550]: 2026-01-31 08:52:00.265 221554 DEBUG nova.compute.manager [-] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:52:00 np0005603609 nova_compute[221550]: 2026-01-31 08:52:00.265 221554 DEBUG nova.network.neutron [-] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:52:00 np0005603609 nova_compute[221550]: 2026-01-31 08:52:00.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:00.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:00 np0005603609 nova_compute[221550]: 2026-01-31 08:52:00.784 221554 DEBUG nova.compute.manager [req-00cf5821-3334-46d8-8240-b660ba1f2020 req-7262aff4-cd28-45a8-8ebe-154f258d042c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received event network-vif-unplugged-bb42c664-a169-4fcf-8369-07a9e967e305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:00 np0005603609 nova_compute[221550]: 2026-01-31 08:52:00.784 221554 DEBUG oslo_concurrency.lockutils [req-00cf5821-3334-46d8-8240-b660ba1f2020 req-7262aff4-cd28-45a8-8ebe-154f258d042c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:00 np0005603609 nova_compute[221550]: 2026-01-31 08:52:00.785 221554 DEBUG oslo_concurrency.lockutils [req-00cf5821-3334-46d8-8240-b660ba1f2020 req-7262aff4-cd28-45a8-8ebe-154f258d042c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:00 np0005603609 nova_compute[221550]: 2026-01-31 08:52:00.785 221554 DEBUG oslo_concurrency.lockutils [req-00cf5821-3334-46d8-8240-b660ba1f2020 req-7262aff4-cd28-45a8-8ebe-154f258d042c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:00 np0005603609 nova_compute[221550]: 2026-01-31 08:52:00.785 221554 DEBUG nova.compute.manager [req-00cf5821-3334-46d8-8240-b660ba1f2020 req-7262aff4-cd28-45a8-8ebe-154f258d042c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] No waiting events found dispatching network-vif-unplugged-bb42c664-a169-4fcf-8369-07a9e967e305 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:52:00 np0005603609 nova_compute[221550]: 2026-01-31 08:52:00.785 221554 DEBUG nova.compute.manager [req-00cf5821-3334-46d8-8240-b660ba1f2020 req-7262aff4-cd28-45a8-8ebe-154f258d042c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received event network-vif-unplugged-bb42c664-a169-4fcf-8369-07a9e967e305 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:52:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:01.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:01 np0005603609 nova_compute[221550]: 2026-01-31 08:52:01.581 221554 DEBUG nova.network.neutron [req-1bc7b152-c74c-4ba4-a165-7f4d5c02ebe1 req-a5deafe4-a38e-4098-b00a-f959b072a336 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Updated VIF entry in instance network info cache for port bb42c664-a169-4fcf-8369-07a9e967e305. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:52:01 np0005603609 nova_compute[221550]: 2026-01-31 08:52:01.582 221554 DEBUG nova.network.neutron [req-1bc7b152-c74c-4ba4-a165-7f4d5c02ebe1 req-a5deafe4-a38e-4098-b00a-f959b072a336 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Updating instance_info_cache with network_info: [{"id": "bb42c664-a169-4fcf-8369-07a9e967e305", "address": "fa:16:3e:9f:23:e6", "network": {"id": "d6d64646-ef89-444c-96ac-3de221e3aa33", "bridge": "br-int", "label": "tempest-network-smoke--404410181", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb42c664-a1", "ovs_interfaceid": "bb42c664-a169-4fcf-8369-07a9e967e305", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:52:01 np0005603609 nova_compute[221550]: 2026-01-31 08:52:01.695 221554 DEBUG oslo_concurrency.lockutils [req-1bc7b152-c74c-4ba4-a165-7f4d5c02ebe1 req-a5deafe4-a38e-4098-b00a-f959b072a336 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-14d8553f-067d-4188-8ef4-8bb7fd63c62c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:52:01 np0005603609 nova_compute[221550]: 2026-01-31 08:52:01.966 221554 DEBUG nova.network.neutron [-] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.015 221554 INFO nova.compute.manager [-] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Took 1.75 seconds to deallocate network for instance.#033[00m
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.084 221554 DEBUG oslo_concurrency.lockutils [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.085 221554 DEBUG oslo_concurrency.lockutils [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.097 221554 DEBUG nova.compute.manager [req-aaedba16-757c-4179-8bf0-63f4d7886647 req-d31ba65f-cf46-4622-bc3f-b7b4e1d50e60 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received event network-vif-deleted-bb42c664-a169-4fcf-8369-07a9e967e305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.113 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.162 221554 DEBUG oslo_concurrency.processutils [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:52:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:52:02 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1349914857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.584 221554 DEBUG oslo_concurrency.processutils [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.590 221554 DEBUG nova.compute.provider_tree [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.620 221554 DEBUG nova.scheduler.client.report [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.677 221554 DEBUG oslo_concurrency.lockutils [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:52:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:02.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.722 221554 INFO nova.scheduler.client.report [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Deleted allocations for instance 14d8553f-067d-4188-8ef4-8bb7fd63c62c#033[00m
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.839 221554 DEBUG oslo_concurrency.lockutils [None req-d232aca4-faef-4534-ad53-e9cc1454f3a0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.950 221554 DEBUG nova.compute.manager [req-347d0ee0-d580-4ca8-bdc9-bc52445b696c req-2cc9616a-700a-44ee-9cb4-d3167fee0d16 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received event network-vif-plugged-bb42c664-a169-4fcf-8369-07a9e967e305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.951 221554 DEBUG oslo_concurrency.lockutils [req-347d0ee0-d580-4ca8-bdc9-bc52445b696c req-2cc9616a-700a-44ee-9cb4-d3167fee0d16 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.951 221554 DEBUG oslo_concurrency.lockutils [req-347d0ee0-d580-4ca8-bdc9-bc52445b696c req-2cc9616a-700a-44ee-9cb4-d3167fee0d16 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.951 221554 DEBUG oslo_concurrency.lockutils [req-347d0ee0-d580-4ca8-bdc9-bc52445b696c req-2cc9616a-700a-44ee-9cb4-d3167fee0d16 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14d8553f-067d-4188-8ef4-8bb7fd63c62c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.951 221554 DEBUG nova.compute.manager [req-347d0ee0-d580-4ca8-bdc9-bc52445b696c req-2cc9616a-700a-44ee-9cb4-d3167fee0d16 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] No waiting events found dispatching network-vif-plugged-bb42c664-a169-4fcf-8369-07a9e967e305 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:52:02 np0005603609 nova_compute[221550]: 2026-01-31 08:52:02.952 221554 WARNING nova.compute.manager [req-347d0ee0-d580-4ca8-bdc9-bc52445b696c req-2cc9616a-700a-44ee-9cb4-d3167fee0d16 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Received unexpected event network-vif-plugged-bb42c664-a169-4fcf-8369-07a9e967e305 for instance with vm_state deleted and task_state None.#033[00m
Jan 31 03:52:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:03.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:03 np0005603609 nova_compute[221550]: 2026-01-31 08:52:03.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:03 np0005603609 nova_compute[221550]: 2026-01-31 08:52:03.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:52:04 np0005603609 nova_compute[221550]: 2026-01-31 08:52:04.289 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:52:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:04.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:52:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:05.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:05 np0005603609 nova_compute[221550]: 2026-01-31 08:52:05.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:05 np0005603609 nova_compute[221550]: 2026-01-31 08:52:05.705 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:05 np0005603609 nova_compute[221550]: 2026-01-31 08:52:05.706 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:05 np0005603609 nova_compute[221550]: 2026-01-31 08:52:05.707 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:05 np0005603609 nova_compute[221550]: 2026-01-31 08:52:05.707 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:52:05 np0005603609 nova_compute[221550]: 2026-01-31 08:52:05.707 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:52:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:52:06 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/711020738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:52:06 np0005603609 nova_compute[221550]: 2026-01-31 08:52:06.149 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:52:06 np0005603609 nova_compute[221550]: 2026-01-31 08:52:06.276 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:52:06 np0005603609 nova_compute[221550]: 2026-01-31 08:52:06.277 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4222MB free_disk=20.921977996826172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:52:06 np0005603609 nova_compute[221550]: 2026-01-31 08:52:06.277 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:06 np0005603609 nova_compute[221550]: 2026-01-31 08:52:06.277 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:06 np0005603609 nova_compute[221550]: 2026-01-31 08:52:06.347 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:52:06 np0005603609 nova_compute[221550]: 2026-01-31 08:52:06.347 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:52:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:06 np0005603609 nova_compute[221550]: 2026-01-31 08:52:06.367 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:52:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:06.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:52:06 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3662488891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:52:06 np0005603609 nova_compute[221550]: 2026-01-31 08:52:06.774 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:52:06 np0005603609 nova_compute[221550]: 2026-01-31 08:52:06.779 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:52:06 np0005603609 nova_compute[221550]: 2026-01-31 08:52:06.802 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:52:06 np0005603609 nova_compute[221550]: 2026-01-31 08:52:06.831 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:52:06 np0005603609 nova_compute[221550]: 2026-01-31 08:52:06.831 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:07 np0005603609 nova_compute[221550]: 2026-01-31 08:52:07.116 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:07.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:52:07.543 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:52:07.543 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:52:07.544 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:08.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:09 np0005603609 nova_compute[221550]: 2026-01-31 08:52:09.294 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:09.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:52:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:10.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:52:10 np0005603609 nova_compute[221550]: 2026-01-31 08:52:10.832 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:10 np0005603609 nova_compute[221550]: 2026-01-31 08:52:10.833 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:52:10 np0005603609 nova_compute[221550]: 2026-01-31 08:52:10.834 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:52:10 np0005603609 nova_compute[221550]: 2026-01-31 08:52:10.865 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:52:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:11.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:52:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:52:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:52:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:52:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:52:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:52:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:52:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:52:11 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:52:12 np0005603609 nova_compute[221550]: 2026-01-31 08:52:12.046 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:12 np0005603609 nova_compute[221550]: 2026-01-31 08:52:12.142 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:12 np0005603609 nova_compute[221550]: 2026-01-31 08:52:12.157 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:12 np0005603609 nova_compute[221550]: 2026-01-31 08:52:12.687 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:12.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:13.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:14 np0005603609 nova_compute[221550]: 2026-01-31 08:52:14.232 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849519.2313914, 14d8553f-067d-4188-8ef4-8bb7fd63c62c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:52:14 np0005603609 nova_compute[221550]: 2026-01-31 08:52:14.233 221554 INFO nova.compute.manager [-] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:52:14 np0005603609 nova_compute[221550]: 2026-01-31 08:52:14.266 221554 DEBUG nova.compute.manager [None req-22f9cc16-1ca5-4106-9e79-9e792a8c0088 - - - - - -] [instance: 14d8553f-067d-4188-8ef4-8bb7fd63c62c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:52:14 np0005603609 nova_compute[221550]: 2026-01-31 08:52:14.299 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:14.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:15.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:16 np0005603609 nova_compute[221550]: 2026-01-31 08:52:16.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:16.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:17 np0005603609 nova_compute[221550]: 2026-01-31 08:52:17.145 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:52:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:17.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:52:18 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:52:18 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:52:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:18.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:19.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:19 np0005603609 nova_compute[221550]: 2026-01-31 08:52:19.339 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:52:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:20.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:52:21 np0005603609 podman[309421]: 2026-01-31 08:52:21.162987981 +0000 UTC m=+0.050377130 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 03:52:21 np0005603609 podman[309420]: 2026-01-31 08:52:21.211328882 +0000 UTC m=+0.098738872 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 03:52:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:52:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:21.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:52:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:22 np0005603609 nova_compute[221550]: 2026-01-31 08:52:22.147 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:22.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:52:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:23.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:52:23 np0005603609 nova_compute[221550]: 2026-01-31 08:52:23.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:23 np0005603609 nova_compute[221550]: 2026-01-31 08:52:23.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:52:24 np0005603609 nova_compute[221550]: 2026-01-31 08:52:24.342 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:24.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:52:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:25.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:52:25 np0005603609 nova_compute[221550]: 2026-01-31 08:52:25.701 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:52:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:26.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:52:27 np0005603609 nova_compute[221550]: 2026-01-31 08:52:27.192 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:52:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:27.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:52:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:28.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:29.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:29 np0005603609 nova_compute[221550]: 2026-01-31 08:52:29.395 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:30.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:31.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:32 np0005603609 nova_compute[221550]: 2026-01-31 08:52:32.194 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:32.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:52:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:33.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:52:34 np0005603609 nova_compute[221550]: 2026-01-31 08:52:34.399 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:34.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:52:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:35.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:52:35 np0005603609 nova_compute[221550]: 2026-01-31 08:52:35.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:35 np0005603609 nova_compute[221550]: 2026-01-31 08:52:35.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:52:35 np0005603609 nova_compute[221550]: 2026-01-31 08:52:35.724 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:52:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:36.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:36 np0005603609 ceph-mgr[82033]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 03:52:37 np0005603609 nova_compute[221550]: 2026-01-31 08:52:37.196 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:52:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:37.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:52:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:38.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:39 np0005603609 nova_compute[221550]: 2026-01-31 08:52:39.401 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:39.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:40.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:41.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e385 e385: 3 total, 3 up, 3 in
Jan 31 03:52:42 np0005603609 nova_compute[221550]: 2026-01-31 08:52:42.198 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:42.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:52:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:43.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:52:44 np0005603609 nova_compute[221550]: 2026-01-31 08:52:44.404 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:44.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e386 e386: 3 total, 3 up, 3 in
Jan 31 03:52:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:45.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e387 e387: 3 total, 3 up, 3 in
Jan 31 03:52:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:46.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:47 np0005603609 nova_compute[221550]: 2026-01-31 08:52:47.201 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:47.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:48.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:49 np0005603609 nova_compute[221550]: 2026-01-31 08:52:49.407 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:52:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:49.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:52:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:50.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:51 np0005603609 nova_compute[221550]: 2026-01-31 08:52:51.382 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:52:51.381 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=93, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=92) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:52:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:52:51.383 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:52:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:52:51.383 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '93'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:51.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:52 np0005603609 podman[309467]: 2026-01-31 08:52:52.193248148 +0000 UTC m=+0.070900638 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 03:52:52 np0005603609 podman[309466]: 2026-01-31 08:52:52.202900722 +0000 UTC m=+0.087824588 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:52:52 np0005603609 nova_compute[221550]: 2026-01-31 08:52:52.202 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:52.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:53.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:54 np0005603609 nova_compute[221550]: 2026-01-31 08:52:54.411 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:54.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e388 e388: 3 total, 3 up, 3 in
Jan 31 03:52:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:55.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:56.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:57 np0005603609 nova_compute[221550]: 2026-01-31 08:52:57.204 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:52:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:57.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:52:58 np0005603609 nova_compute[221550]: 2026-01-31 08:52:58.371 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "3d81873b-9fc8-47da-8ff3-81d0b7715556" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:58 np0005603609 nova_compute[221550]: 2026-01-31 08:52:58.371 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "3d81873b-9fc8-47da-8ff3-81d0b7715556" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:58 np0005603609 nova_compute[221550]: 2026-01-31 08:52:58.414 221554 DEBUG nova.compute.manager [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:52:58 np0005603609 nova_compute[221550]: 2026-01-31 08:52:58.599 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:58 np0005603609 nova_compute[221550]: 2026-01-31 08:52:58.600 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:58 np0005603609 nova_compute[221550]: 2026-01-31 08:52:58.617 221554 DEBUG nova.virt.hardware [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:52:58 np0005603609 nova_compute[221550]: 2026-01-31 08:52:58.618 221554 INFO nova.compute.claims [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:52:58 np0005603609 nova_compute[221550]: 2026-01-31 08:52:58.724 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:52:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:58.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:52:59 np0005603609 nova_compute[221550]: 2026-01-31 08:52:59.346 221554 DEBUG oslo_concurrency.processutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:52:59 np0005603609 nova_compute[221550]: 2026-01-31 08:52:59.415 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:52:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:59.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:59 np0005603609 nova_compute[221550]: 2026-01-31 08:52:59.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:52:59 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3004284897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:52:59 np0005603609 nova_compute[221550]: 2026-01-31 08:52:59.775 221554 DEBUG oslo_concurrency.processutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:52:59 np0005603609 nova_compute[221550]: 2026-01-31 08:52:59.781 221554 DEBUG nova.compute.provider_tree [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:53:00 np0005603609 nova_compute[221550]: 2026-01-31 08:53:00.250 221554 DEBUG nova.scheduler.client.report [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:53:00 np0005603609 nova_compute[221550]: 2026-01-31 08:53:00.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:00.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:00 np0005603609 nova_compute[221550]: 2026-01-31 08:53:00.806 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:00 np0005603609 nova_compute[221550]: 2026-01-31 08:53:00.807 221554 DEBUG nova.compute.manager [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.041 221554 DEBUG nova.compute.manager [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.041 221554 DEBUG nova.network.neutron [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.076 221554 INFO nova.virt.libvirt.driver [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.106 221554 DEBUG nova.compute.manager [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.264 221554 DEBUG nova.compute.manager [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.266 221554 DEBUG nova.virt.libvirt.driver [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.266 221554 INFO nova.virt.libvirt.driver [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Creating image(s)#033[00m
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.292 221554 DEBUG nova.storage.rbd_utils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] rbd image 3d81873b-9fc8-47da-8ff3-81d0b7715556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.320 221554 DEBUG nova.storage.rbd_utils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] rbd image 3d81873b-9fc8-47da-8ff3-81d0b7715556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.347 221554 DEBUG nova.storage.rbd_utils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] rbd image 3d81873b-9fc8-47da-8ff3-81d0b7715556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.350 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "28bdc0f59e4bbeba08f2c454f71fc9f493241d92" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.351 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "28bdc0f59e4bbeba08f2c454f71fc9f493241d92" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:01.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.749 221554 DEBUG nova.virt.libvirt.imagebackend [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/e29674ab-eaa7-4f25-a795-4373e5f5c4e8/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/e29674ab-eaa7-4f25-a795-4373e5f5c4e8/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.812 221554 DEBUG nova.virt.libvirt.imagebackend [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Selected location: {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/e29674ab-eaa7-4f25-a795-4373e5f5c4e8/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.813 221554 DEBUG nova.storage.rbd_utils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] cloning images/e29674ab-eaa7-4f25-a795-4373e5f5c4e8@snap to None/3d81873b-9fc8-47da-8ff3-81d0b7715556_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.854 221554 DEBUG nova.policy [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5a37d71e432b45168339dde5abdbe7b6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '996284baaa2946258a0ab1be9a30d1f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:53:01 np0005603609 nova_compute[221550]: 2026-01-31 08:53:01.935 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "28bdc0f59e4bbeba08f2c454f71fc9f493241d92" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:02 np0005603609 nova_compute[221550]: 2026-01-31 08:53:02.087 221554 DEBUG nova.objects.instance [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 3d81873b-9fc8-47da-8ff3-81d0b7715556 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:53:02 np0005603609 nova_compute[221550]: 2026-01-31 08:53:02.128 221554 DEBUG nova.virt.libvirt.driver [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:53:02 np0005603609 nova_compute[221550]: 2026-01-31 08:53:02.129 221554 DEBUG nova.virt.libvirt.driver [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Ensure instance console log exists: /var/lib/nova/instances/3d81873b-9fc8-47da-8ff3-81d0b7715556/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:53:02 np0005603609 nova_compute[221550]: 2026-01-31 08:53:02.129 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:02 np0005603609 nova_compute[221550]: 2026-01-31 08:53:02.129 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:02 np0005603609 nova_compute[221550]: 2026-01-31 08:53:02.130 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:02 np0005603609 nova_compute[221550]: 2026-01-31 08:53:02.205 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:02.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:53:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:03.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:53:04 np0005603609 nova_compute[221550]: 2026-01-31 08:53:04.463 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:04 np0005603609 nova_compute[221550]: 2026-01-31 08:53:04.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:04 np0005603609 nova_compute[221550]: 2026-01-31 08:53:04.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:53:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:53:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:04.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:53:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:05.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:05 np0005603609 nova_compute[221550]: 2026-01-31 08:53:05.580 221554 DEBUG nova.network.neutron [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Successfully created port: 6e938ec2-4998-426a-850f-6a6a71f9ef6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:53:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:53:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:06.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:53:06 np0005603609 nova_compute[221550]: 2026-01-31 08:53:06.915 221554 DEBUG nova.network.neutron [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Successfully updated port: 6e938ec2-4998-426a-850f-6a6a71f9ef6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:53:06 np0005603609 nova_compute[221550]: 2026-01-31 08:53:06.941 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "refresh_cache-3d81873b-9fc8-47da-8ff3-81d0b7715556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:53:06 np0005603609 nova_compute[221550]: 2026-01-31 08:53:06.941 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquired lock "refresh_cache-3d81873b-9fc8-47da-8ff3-81d0b7715556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:53:06 np0005603609 nova_compute[221550]: 2026-01-31 08:53:06.942 221554 DEBUG nova.network.neutron [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:53:07 np0005603609 nova_compute[221550]: 2026-01-31 08:53:07.168 221554 DEBUG nova.network.neutron [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:53:07 np0005603609 nova_compute[221550]: 2026-01-31 08:53:07.207 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:53:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:07.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:53:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:07.544 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:07.545 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:07.545 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:07 np0005603609 nova_compute[221550]: 2026-01-31 08:53:07.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:07 np0005603609 nova_compute[221550]: 2026-01-31 08:53:07.718 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:07 np0005603609 nova_compute[221550]: 2026-01-31 08:53:07.718 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:07 np0005603609 nova_compute[221550]: 2026-01-31 08:53:07.718 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:07 np0005603609 nova_compute[221550]: 2026-01-31 08:53:07.719 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:53:07 np0005603609 nova_compute[221550]: 2026-01-31 08:53:07.719 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:53:07 np0005603609 nova_compute[221550]: 2026-01-31 08:53:07.947 221554 DEBUG nova.compute.manager [req-8c0cc1f0-2590-4e8f-ac3b-979a629e1610 req-1788a20d-4c1c-495b-a26d-1ef595c5a516 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Received event network-changed-6e938ec2-4998-426a-850f-6a6a71f9ef6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:53:07 np0005603609 nova_compute[221550]: 2026-01-31 08:53:07.949 221554 DEBUG nova.compute.manager [req-8c0cc1f0-2590-4e8f-ac3b-979a629e1610 req-1788a20d-4c1c-495b-a26d-1ef595c5a516 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Refreshing instance network info cache due to event network-changed-6e938ec2-4998-426a-850f-6a6a71f9ef6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:53:07 np0005603609 nova_compute[221550]: 2026-01-31 08:53:07.949 221554 DEBUG oslo_concurrency.lockutils [req-8c0cc1f0-2590-4e8f-ac3b-979a629e1610 req-1788a20d-4c1c-495b-a26d-1ef595c5a516 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-3d81873b-9fc8-47da-8ff3-81d0b7715556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:53:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:53:08 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1997707929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.104 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.256 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.257 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4204MB free_disk=20.921764373779297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.257 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.257 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.386 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 3d81873b-9fc8-47da-8ff3-81d0b7715556 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.386 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.387 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.507 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:53:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:53:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:08.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:53:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:53:08 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3637733675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.916 221554 DEBUG nova.network.neutron [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Updating instance_info_cache with network_info: [{"id": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "address": "fa:16:3e:e1:48:11", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e938ec2-49", "ovs_interfaceid": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.920 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.925 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.957 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.985 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Releasing lock "refresh_cache-3d81873b-9fc8-47da-8ff3-81d0b7715556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.986 221554 DEBUG nova.compute.manager [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Instance network_info: |[{"id": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "address": "fa:16:3e:e1:48:11", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e938ec2-49", "ovs_interfaceid": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.987 221554 DEBUG oslo_concurrency.lockutils [req-8c0cc1f0-2590-4e8f-ac3b-979a629e1610 req-1788a20d-4c1c-495b-a26d-1ef595c5a516 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-3d81873b-9fc8-47da-8ff3-81d0b7715556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.987 221554 DEBUG nova.network.neutron [req-8c0cc1f0-2590-4e8f-ac3b-979a629e1610 req-1788a20d-4c1c-495b-a26d-1ef595c5a516 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Refreshing network info cache for port 6e938ec2-4998-426a-850f-6a6a71f9ef6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.991 221554 DEBUG nova.virt.libvirt.driver [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Start _get_guest_xml network_info=[{"id": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "address": "fa:16:3e:e1:48:11", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e938ec2-49", "ovs_interfaceid": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T08:52:38Z,direct_url=<?>,disk_format='raw',id=e29674ab-eaa7-4f25-a795-4373e5f5c4e8,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-105993315',owner='996284baaa2946258a0ab1be9a30d1f6',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T08:52:50Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': 'e29674ab-eaa7-4f25-a795-4373e5f5c4e8'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.997 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.997 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:08 np0005603609 nova_compute[221550]: 2026-01-31 08:53:08.998 221554 WARNING nova.virt.libvirt.driver [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.003 221554 DEBUG nova.virt.libvirt.host [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.004 221554 DEBUG nova.virt.libvirt.host [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.010 221554 DEBUG nova.virt.libvirt.host [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.011 221554 DEBUG nova.virt.libvirt.host [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.012 221554 DEBUG nova.virt.libvirt.driver [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.013 221554 DEBUG nova.virt.hardware [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T08:52:38Z,direct_url=<?>,disk_format='raw',id=e29674ab-eaa7-4f25-a795-4373e5f5c4e8,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-105993315',owner='996284baaa2946258a0ab1be9a30d1f6',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T08:52:50Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.013 221554 DEBUG nova.virt.hardware [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.014 221554 DEBUG nova.virt.hardware [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.014 221554 DEBUG nova.virt.hardware [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.014 221554 DEBUG nova.virt.hardware [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.015 221554 DEBUG nova.virt.hardware [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.015 221554 DEBUG nova.virt.hardware [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.015 221554 DEBUG nova.virt.hardware [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.016 221554 DEBUG nova.virt.hardware [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.016 221554 DEBUG nova.virt.hardware [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.016 221554 DEBUG nova.virt.hardware [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.020 221554 DEBUG oslo_concurrency.processutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:53:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:53:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3390267310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.427 221554 DEBUG oslo_concurrency.processutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:53:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:09.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.461 221554 DEBUG nova.storage.rbd_utils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] rbd image 3d81873b-9fc8-47da-8ff3-81d0b7715556_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.466 221554 DEBUG oslo_concurrency.processutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.490 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:53:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3709395865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.898 221554 DEBUG oslo_concurrency.processutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.899 221554 DEBUG nova.virt.libvirt.vif [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:52:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1328608103',display_name='tempest-TestSnapshotPattern-server-1328608103',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1328608103',id=202,image_ref='e29674ab-eaa7-4f25-a795-4373e5f5c4e8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGk1avDtDRCkfi38WDQyZMb5AgNzJzpa/E7afrTWkN843bVEm67ME3qOwcYUGN5nJ8L7GrnzGumfCdBmJYfKJwV+0Rfyn9QiAVoGTFowygEpRUI24B1nMsGC8mTo9YdR3w==',key_name='tempest-TestSnapshotPattern-1326482889',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='996284baaa2946258a0ab1be9a30d1f6',ramdisk_id='',reservation_id='r-9et6zsad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='c304efec-22bc-408c-adea-b06aaf5fbe40',image_min_disk='1',image_min_ram='0',image_owner_id='996284baaa2946258a0ab1be9a30d1f6',image_owner_project_name='tempest-TestSnapshotPattern-920311190',image_owner_user_name='tempest-TestSnapshotPattern-920311190-project-member',image_user_id='5a37d71e432b45168339dde5abdbe7b6',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-920311190',owner_user_name='tempest-TestSnapshotPattern-920311190-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:53:01Z,user_data=None,user_id='5a37d71e432b45168339dde5abdbe7b6',uuid=3d81873b-9f
c8-47da-8ff3-81d0b7715556,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "address": "fa:16:3e:e1:48:11", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e938ec2-49", "ovs_interfaceid": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.900 221554 DEBUG nova.network.os_vif_util [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Converting VIF {"id": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "address": "fa:16:3e:e1:48:11", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e938ec2-49", "ovs_interfaceid": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.900 221554 DEBUG nova.network.os_vif_util [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:48:11,bridge_name='br-int',has_traffic_filtering=True,id=6e938ec2-4998-426a-850f-6a6a71f9ef6f,network=Network(5bcbd792-d73a-4177-857d-cca1f0044ec8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e938ec2-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.901 221554 DEBUG nova.objects.instance [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3d81873b-9fc8-47da-8ff3-81d0b7715556 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.929 221554 DEBUG nova.virt.libvirt.driver [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  <uuid>3d81873b-9fc8-47da-8ff3-81d0b7715556</uuid>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  <name>instance-000000ca</name>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestSnapshotPattern-server-1328608103</nova:name>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:53:08</nova:creationTime>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        <nova:user uuid="5a37d71e432b45168339dde5abdbe7b6">tempest-TestSnapshotPattern-920311190-project-member</nova:user>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        <nova:project uuid="996284baaa2946258a0ab1be9a30d1f6">tempest-TestSnapshotPattern-920311190</nova:project>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="e29674ab-eaa7-4f25-a795-4373e5f5c4e8"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        <nova:port uuid="6e938ec2-4998-426a-850f-6a6a71f9ef6f">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <entry name="serial">3d81873b-9fc8-47da-8ff3-81d0b7715556</entry>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <entry name="uuid">3d81873b-9fc8-47da-8ff3-81d0b7715556</entry>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/3d81873b-9fc8-47da-8ff3-81d0b7715556_disk">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/3d81873b-9fc8-47da-8ff3-81d0b7715556_disk.config">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:e1:48:11"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <target dev="tap6e938ec2-49"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/3d81873b-9fc8-47da-8ff3-81d0b7715556/console.log" append="off"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <input type="keyboard" bus="usb"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:53:09 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:53:09 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:53:09 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:53:09 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.930 221554 DEBUG nova.compute.manager [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Preparing to wait for external event network-vif-plugged-6e938ec2-4998-426a-850f-6a6a71f9ef6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.930 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.930 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.931 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.931 221554 DEBUG nova.virt.libvirt.vif [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:52:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1328608103',display_name='tempest-TestSnapshotPattern-server-1328608103',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1328608103',id=202,image_ref='e29674ab-eaa7-4f25-a795-4373e5f5c4e8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGk1avDtDRCkfi38WDQyZMb5AgNzJzpa/E7afrTWkN843bVEm67ME3qOwcYUGN5nJ8L7GrnzGumfCdBmJYfKJwV+0Rfyn9QiAVoGTFowygEpRUI24B1nMsGC8mTo9YdR3w==',key_name='tempest-TestSnapshotPattern-1326482889',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='996284baaa2946258a0ab1be9a30d1f6',ramdisk_id='',reservation_id='r-9et6zsad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='c304efec-22bc-408c-adea-b06aaf5fbe40',image_min_disk='1',image_min_ram='0',image_owner_id='996284baaa2946258a0ab1be9a30d1f6',image_owner_project_name='tempest-TestSnapshotPattern-920311190',image_owner_user_name='tempest-TestSnapshotPattern-920311190-project-member',image_user_id='5a37d71e432b45168339dde5abdbe7b6',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-920311190',owner_user_name='tempest-TestSnapshotPattern-920311190-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:53:01Z,user_data=None,user_id='5a37d71e432b45168339dde5abdbe7b6',uuid=3d81873b-9fc8-47da-8ff3-81d0b7715556,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "address": "fa:16:3e:e1:48:11", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e938ec2-49", "ovs_interfaceid": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.932 221554 DEBUG nova.network.os_vif_util [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Converting VIF {"id": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "address": "fa:16:3e:e1:48:11", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e938ec2-49", "ovs_interfaceid": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.932 221554 DEBUG nova.network.os_vif_util [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:48:11,bridge_name='br-int',has_traffic_filtering=True,id=6e938ec2-4998-426a-850f-6a6a71f9ef6f,network=Network(5bcbd792-d73a-4177-857d-cca1f0044ec8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e938ec2-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.933 221554 DEBUG os_vif [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:48:11,bridge_name='br-int',has_traffic_filtering=True,id=6e938ec2-4998-426a-850f-6a6a71f9ef6f,network=Network(5bcbd792-d73a-4177-857d-cca1f0044ec8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e938ec2-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.933 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.934 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.934 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.936 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.937 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e938ec2-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.937 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e938ec2-49, col_values=(('external_ids', {'iface-id': '6e938ec2-4998-426a-850f-6a6a71f9ef6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:48:11', 'vm-uuid': '3d81873b-9fc8-47da-8ff3-81d0b7715556'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.938 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:09 np0005603609 NetworkManager[49064]: <info>  [1769849589.9397] manager: (tap6e938ec2-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.941 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.945 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:09 np0005603609 nova_compute[221550]: 2026-01-31 08:53:09.946 221554 INFO os_vif [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:48:11,bridge_name='br-int',has_traffic_filtering=True,id=6e938ec2-4998-426a-850f-6a6a71f9ef6f,network=Network(5bcbd792-d73a-4177-857d-cca1f0044ec8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e938ec2-49')#033[00m
Jan 31 03:53:10 np0005603609 nova_compute[221550]: 2026-01-31 08:53:10.013 221554 DEBUG nova.virt.libvirt.driver [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:53:10 np0005603609 nova_compute[221550]: 2026-01-31 08:53:10.014 221554 DEBUG nova.virt.libvirt.driver [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:53:10 np0005603609 nova_compute[221550]: 2026-01-31 08:53:10.014 221554 DEBUG nova.virt.libvirt.driver [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] No VIF found with MAC fa:16:3e:e1:48:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:53:10 np0005603609 nova_compute[221550]: 2026-01-31 08:53:10.015 221554 INFO nova.virt.libvirt.driver [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Using config drive#033[00m
Jan 31 03:53:10 np0005603609 nova_compute[221550]: 2026-01-31 08:53:10.036 221554 DEBUG nova.storage.rbd_utils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] rbd image 3d81873b-9fc8-47da-8ff3-81d0b7715556_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:53:10 np0005603609 nova_compute[221550]: 2026-01-31 08:53:10.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:10 np0005603609 nova_compute[221550]: 2026-01-31 08:53:10.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:53:10 np0005603609 nova_compute[221550]: 2026-01-31 08:53:10.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:53:10 np0005603609 nova_compute[221550]: 2026-01-31 08:53:10.693 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:53:10 np0005603609 nova_compute[221550]: 2026-01-31 08:53:10.694 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:53:10 np0005603609 nova_compute[221550]: 2026-01-31 08:53:10.694 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:53:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:10.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:53:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:53:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:11.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:53:11 np0005603609 nova_compute[221550]: 2026-01-31 08:53:11.491 221554 INFO nova.virt.libvirt.driver [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Creating config drive at /var/lib/nova/instances/3d81873b-9fc8-47da-8ff3-81d0b7715556/disk.config#033[00m
Jan 31 03:53:11 np0005603609 nova_compute[221550]: 2026-01-31 08:53:11.495 221554 DEBUG oslo_concurrency.processutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3d81873b-9fc8-47da-8ff3-81d0b7715556/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp00dxf0sp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:53:11 np0005603609 nova_compute[221550]: 2026-01-31 08:53:11.624 221554 DEBUG oslo_concurrency.processutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3d81873b-9fc8-47da-8ff3-81d0b7715556/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp00dxf0sp" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:53:11 np0005603609 nova_compute[221550]: 2026-01-31 08:53:11.659 221554 DEBUG nova.storage.rbd_utils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] rbd image 3d81873b-9fc8-47da-8ff3-81d0b7715556_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:53:11 np0005603609 nova_compute[221550]: 2026-01-31 08:53:11.664 221554 DEBUG oslo_concurrency.processutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3d81873b-9fc8-47da-8ff3-81d0b7715556/disk.config 3d81873b-9fc8-47da-8ff3-81d0b7715556_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:53:11 np0005603609 nova_compute[221550]: 2026-01-31 08:53:11.811 221554 DEBUG oslo_concurrency.processutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3d81873b-9fc8-47da-8ff3-81d0b7715556/disk.config 3d81873b-9fc8-47da-8ff3-81d0b7715556_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:53:11 np0005603609 nova_compute[221550]: 2026-01-31 08:53:11.812 221554 INFO nova.virt.libvirt.driver [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Deleting local config drive /var/lib/nova/instances/3d81873b-9fc8-47da-8ff3-81d0b7715556/disk.config because it was imported into RBD.#033[00m
Jan 31 03:53:11 np0005603609 NetworkManager[49064]: <info>  [1769849591.8637] manager: (tap6e938ec2-49): new Tun device (/org/freedesktop/NetworkManager/Devices/432)
Jan 31 03:53:11 np0005603609 kernel: tap6e938ec2-49: entered promiscuous mode
Jan 31 03:53:11 np0005603609 ovn_controller[130359]: 2026-01-31T08:53:11Z|00941|binding|INFO|Claiming lport 6e938ec2-4998-426a-850f-6a6a71f9ef6f for this chassis.
Jan 31 03:53:11 np0005603609 ovn_controller[130359]: 2026-01-31T08:53:11Z|00942|binding|INFO|6e938ec2-4998-426a-850f-6a6a71f9ef6f: Claiming fa:16:3e:e1:48:11 10.100.0.3
Jan 31 03:53:11 np0005603609 nova_compute[221550]: 2026-01-31 08:53:11.865 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:11 np0005603609 nova_compute[221550]: 2026-01-31 08:53:11.880 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:11 np0005603609 nova_compute[221550]: 2026-01-31 08:53:11.882 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:11 np0005603609 NetworkManager[49064]: <info>  [1769849591.8831] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/433)
Jan 31 03:53:11 np0005603609 NetworkManager[49064]: <info>  [1769849591.8847] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Jan 31 03:53:11 np0005603609 systemd-udevd[309888]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:53:11 np0005603609 NetworkManager[49064]: <info>  [1769849591.8969] device (tap6e938ec2-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:53:11 np0005603609 NetworkManager[49064]: <info>  [1769849591.8974] device (tap6e938ec2-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:11.897 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:48:11 10.100.0.3'], port_security=['fa:16:3e:e1:48:11 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3d81873b-9fc8-47da-8ff3-81d0b7715556', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bcbd792-d73a-4177-857d-cca1f0044ec8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '996284baaa2946258a0ab1be9a30d1f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc7f4114-32da-4b90-986a-baaae713330e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6418939f-fc1e-4745-b073-d983621a4c28, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=6e938ec2-4998-426a-850f-6a6a71f9ef6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:11.899 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 6e938ec2-4998-426a-850f-6a6a71f9ef6f in datapath 5bcbd792-d73a-4177-857d-cca1f0044ec8 bound to our chassis#033[00m
Jan 31 03:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:11.900 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bcbd792-d73a-4177-857d-cca1f0044ec8#033[00m
Jan 31 03:53:11 np0005603609 systemd-machined[190912]: New machine qemu-111-instance-000000ca.
Jan 31 03:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:11.908 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[74e854aa-ff1b-4530-a9e4-44bd6e23ddd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:11.909 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5bcbd792-d1 in ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:11.910 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5bcbd792-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:11.911 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc019ec-be4e-448e-bb28-09d5cdf4dc7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:11.911 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[61205dc5-4cb2-4098-bd0d-4c0cf593a144]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:11.920 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[5370c9af-edab-417a-8af0-b74f764bfa4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:11 np0005603609 systemd[1]: Started Virtual Machine qemu-111-instance-000000ca.
Jan 31 03:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:11.942 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a6654046-a579-40a0-8d31-fdb81f83bb9a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:11 np0005603609 nova_compute[221550]: 2026-01-31 08:53:11.952 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:11.970 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[6e45c2b8-0b19-4076-8474-db75e24c9d91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:11.975 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[51d27026-4dc5-43df-8f71-5c2afffe2230]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:11 np0005603609 NetworkManager[49064]: <info>  [1769849591.9759] manager: (tap5bcbd792-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/435)
Jan 31 03:53:11 np0005603609 nova_compute[221550]: 2026-01-31 08:53:11.978 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:11 np0005603609 ovn_controller[130359]: 2026-01-31T08:53:11Z|00943|binding|INFO|Setting lport 6e938ec2-4998-426a-850f-6a6a71f9ef6f ovn-installed in OVS
Jan 31 03:53:11 np0005603609 ovn_controller[130359]: 2026-01-31T08:53:11Z|00944|binding|INFO|Setting lport 6e938ec2-4998-426a-850f-6a6a71f9ef6f up in Southbound
Jan 31 03:53:11 np0005603609 nova_compute[221550]: 2026-01-31 08:53:11.984 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:12.003 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[943be7f4-8389-4d57-b3ee-8de6352514b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:12.006 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b81fdc09-6c6b-481b-bf9c-f49101dbc41d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:12 np0005603609 NetworkManager[49064]: <info>  [1769849592.0210] device (tap5bcbd792-d0): carrier: link connected
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:12.023 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[bee78b11-2353-433c-bf51-a7dc504e4539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:12.036 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb24493-cf3f-4a90-961e-de2c4fd27b21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bcbd792-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:c4:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 983172, 'reachable_time': 42730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309922, 'error': None, 'target': 'ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:12.051 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed62742-2ccb-4da3-9283-3859b507d993]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:c4f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 983172, 'tstamp': 983172}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309923, 'error': None, 'target': 'ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:12.063 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe77bb4-3c71-4d90-8d6b-37b376d99e5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bcbd792-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:c4:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 983172, 'reachable_time': 42730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309924, 'error': None, 'target': 'ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:12.085 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6dd69843-9a8a-47f9-bcfc-5be930566c98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:12.129 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e01e370e-c239-48d7-bf20-b251c6f499ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:12.130 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bcbd792-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:12.130 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:12.131 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bcbd792-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:53:12 np0005603609 nova_compute[221550]: 2026-01-31 08:53:12.132 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:12 np0005603609 NetworkManager[49064]: <info>  [1769849592.1331] manager: (tap5bcbd792-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/436)
Jan 31 03:53:12 np0005603609 kernel: tap5bcbd792-d0: entered promiscuous mode
Jan 31 03:53:12 np0005603609 nova_compute[221550]: 2026-01-31 08:53:12.134 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:12.135 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bcbd792-d0, col_values=(('external_ids', {'iface-id': '5339b2f1-5a22-4487-8eff-180aa0b06c18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:53:12 np0005603609 nova_compute[221550]: 2026-01-31 08:53:12.136 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:12 np0005603609 ovn_controller[130359]: 2026-01-31T08:53:12Z|00945|binding|INFO|Releasing lport 5339b2f1-5a22-4487-8eff-180aa0b06c18 from this chassis (sb_readonly=0)
Jan 31 03:53:12 np0005603609 nova_compute[221550]: 2026-01-31 08:53:12.140 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:12.140 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5bcbd792-d73a-4177-857d-cca1f0044ec8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5bcbd792-d73a-4177-857d-cca1f0044ec8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:12.141 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c128ebcb-a36b-4cff-b79b-db9c870b5ee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:12.141 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-5bcbd792-d73a-4177-857d-cca1f0044ec8
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/5bcbd792-d73a-4177-857d-cca1f0044ec8.pid.haproxy
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 5bcbd792-d73a-4177-857d-cca1f0044ec8
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:53:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:12.142 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8', 'env', 'PROCESS_TAG=haproxy-5bcbd792-d73a-4177-857d-cca1f0044ec8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5bcbd792-d73a-4177-857d-cca1f0044ec8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:53:12 np0005603609 nova_compute[221550]: 2026-01-31 08:53:12.209 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:12 np0005603609 podman[309957]: 2026-01-31 08:53:12.460689298 +0000 UTC m=+0.045379229 container create 2f527edfddf05a5faba9bd64afdf28e867043946cff80765c897684268b1504b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:53:12 np0005603609 systemd[1]: Started libpod-conmon-2f527edfddf05a5faba9bd64afdf28e867043946cff80765c897684268b1504b.scope.
Jan 31 03:53:12 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:53:12 np0005603609 podman[309957]: 2026-01-31 08:53:12.433760477 +0000 UTC m=+0.018450428 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:53:12 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/409ba0e3dc5e0483a558f44ebec91475a6ce4a0e621a858d4293b848de314540/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:12 np0005603609 podman[309957]: 2026-01-31 08:53:12.540367447 +0000 UTC m=+0.125057378 container init 2f527edfddf05a5faba9bd64afdf28e867043946cff80765c897684268b1504b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:53:12 np0005603609 podman[309957]: 2026-01-31 08:53:12.546661749 +0000 UTC m=+0.131351680 container start 2f527edfddf05a5faba9bd64afdf28e867043946cff80765c897684268b1504b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:53:12 np0005603609 neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8[309972]: [NOTICE]   (309976) : New worker (309978) forked
Jan 31 03:53:12 np0005603609 neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8[309972]: [NOTICE]   (309976) : Loading success.
Jan 31 03:53:12 np0005603609 nova_compute[221550]: 2026-01-31 08:53:12.766 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849592.766075, 3d81873b-9fc8-47da-8ff3-81d0b7715556 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:53:12 np0005603609 nova_compute[221550]: 2026-01-31 08:53:12.768 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] VM Started (Lifecycle Event)#033[00m
Jan 31 03:53:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:53:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:12.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:53:12 np0005603609 nova_compute[221550]: 2026-01-31 08:53:12.814 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:53:12 np0005603609 nova_compute[221550]: 2026-01-31 08:53:12.816 221554 DEBUG nova.network.neutron [req-8c0cc1f0-2590-4e8f-ac3b-979a629e1610 req-1788a20d-4c1c-495b-a26d-1ef595c5a516 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Updated VIF entry in instance network info cache for port 6e938ec2-4998-426a-850f-6a6a71f9ef6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:53:12 np0005603609 nova_compute[221550]: 2026-01-31 08:53:12.817 221554 DEBUG nova.network.neutron [req-8c0cc1f0-2590-4e8f-ac3b-979a629e1610 req-1788a20d-4c1c-495b-a26d-1ef595c5a516 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Updating instance_info_cache with network_info: [{"id": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "address": "fa:16:3e:e1:48:11", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e938ec2-49", "ovs_interfaceid": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:53:12 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:53:12 np0005603609 nova_compute[221550]: 2026-01-31 08:53:12.820 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849592.7662342, 3d81873b-9fc8-47da-8ff3-81d0b7715556 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:53:12 np0005603609 nova_compute[221550]: 2026-01-31 08:53:12.821 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:53:12 np0005603609 nova_compute[221550]: 2026-01-31 08:53:12.908 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:53:12 np0005603609 nova_compute[221550]: 2026-01-31 08:53:12.909 221554 DEBUG oslo_concurrency.lockutils [req-8c0cc1f0-2590-4e8f-ac3b-979a629e1610 req-1788a20d-4c1c-495b-a26d-1ef595c5a516 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-3d81873b-9fc8-47da-8ff3-81d0b7715556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:53:12 np0005603609 nova_compute[221550]: 2026-01-31 08:53:12.913 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:53:12 np0005603609 nova_compute[221550]: 2026-01-31 08:53:12.992 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.228 221554 DEBUG nova.compute.manager [req-4c64b18b-a57f-4573-9132-d08336187aa8 req-6edfe168-7c4e-47c6-bc26-dfcef9b6b9ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Received event network-vif-plugged-6e938ec2-4998-426a-850f-6a6a71f9ef6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.228 221554 DEBUG oslo_concurrency.lockutils [req-4c64b18b-a57f-4573-9132-d08336187aa8 req-6edfe168-7c4e-47c6-bc26-dfcef9b6b9ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.229 221554 DEBUG oslo_concurrency.lockutils [req-4c64b18b-a57f-4573-9132-d08336187aa8 req-6edfe168-7c4e-47c6-bc26-dfcef9b6b9ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.229 221554 DEBUG oslo_concurrency.lockutils [req-4c64b18b-a57f-4573-9132-d08336187aa8 req-6edfe168-7c4e-47c6-bc26-dfcef9b6b9ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.229 221554 DEBUG nova.compute.manager [req-4c64b18b-a57f-4573-9132-d08336187aa8 req-6edfe168-7c4e-47c6-bc26-dfcef9b6b9ba 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Processing event network-vif-plugged-6e938ec2-4998-426a-850f-6a6a71f9ef6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.230 221554 DEBUG nova.compute.manager [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.233 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849593.2328668, 3d81873b-9fc8-47da-8ff3-81d0b7715556 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.233 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.234 221554 DEBUG nova.virt.libvirt.driver [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.238 221554 INFO nova.virt.libvirt.driver [-] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Instance spawned successfully.#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.239 221554 INFO nova.compute.manager [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Took 11.97 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.239 221554 DEBUG nova.compute.manager [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.273 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.277 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.343 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.401 221554 INFO nova.compute.manager [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Took 14.86 seconds to build instance.#033[00m
Jan 31 03:53:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:13.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.541 221554 DEBUG oslo_concurrency.lockutils [None req-2e83979c-8eba-47d6-8c3c-61a9176f9328 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "3d81873b-9fc8-47da-8ff3-81d0b7715556" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:13 np0005603609 nova_compute[221550]: 2026-01-31 08:53:13.718 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:53:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:14.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:53:14 np0005603609 nova_compute[221550]: 2026-01-31 08:53:14.939 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:15.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:15 np0005603609 nova_compute[221550]: 2026-01-31 08:53:15.471 221554 DEBUG nova.compute.manager [req-1e992d7d-9878-40af-869f-e76f41d92b8b req-911624ff-6b46-4338-9687-e4d35b3bb23c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Received event network-vif-plugged-6e938ec2-4998-426a-850f-6a6a71f9ef6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:53:15 np0005603609 nova_compute[221550]: 2026-01-31 08:53:15.472 221554 DEBUG oslo_concurrency.lockutils [req-1e992d7d-9878-40af-869f-e76f41d92b8b req-911624ff-6b46-4338-9687-e4d35b3bb23c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:15 np0005603609 nova_compute[221550]: 2026-01-31 08:53:15.472 221554 DEBUG oslo_concurrency.lockutils [req-1e992d7d-9878-40af-869f-e76f41d92b8b req-911624ff-6b46-4338-9687-e4d35b3bb23c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:15 np0005603609 nova_compute[221550]: 2026-01-31 08:53:15.473 221554 DEBUG oslo_concurrency.lockutils [req-1e992d7d-9878-40af-869f-e76f41d92b8b req-911624ff-6b46-4338-9687-e4d35b3bb23c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:15 np0005603609 nova_compute[221550]: 2026-01-31 08:53:15.473 221554 DEBUG nova.compute.manager [req-1e992d7d-9878-40af-869f-e76f41d92b8b req-911624ff-6b46-4338-9687-e4d35b3bb23c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] No waiting events found dispatching network-vif-plugged-6e938ec2-4998-426a-850f-6a6a71f9ef6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:53:15 np0005603609 nova_compute[221550]: 2026-01-31 08:53:15.473 221554 WARNING nova.compute.manager [req-1e992d7d-9878-40af-869f-e76f41d92b8b req-911624ff-6b46-4338-9687-e4d35b3bb23c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Received unexpected event network-vif-plugged-6e938ec2-4998-426a-850f-6a6a71f9ef6f for instance with vm_state active and task_state None.#033[00m
Jan 31 03:53:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:16.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:17 np0005603609 nova_compute[221550]: 2026-01-31 08:53:17.212 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:53:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:17.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:53:18 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:53:18 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:53:18 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:53:18 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:53:18 np0005603609 nova_compute[221550]: 2026-01-31 08:53:18.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:18.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:19.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:19 np0005603609 nova_compute[221550]: 2026-01-31 08:53:19.888 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:19 np0005603609 nova_compute[221550]: 2026-01-31 08:53:19.941 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:20.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 03:53:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:21.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 03:53:22 np0005603609 nova_compute[221550]: 2026-01-31 08:53:22.212 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:22 np0005603609 nova_compute[221550]: 2026-01-31 08:53:22.670 221554 DEBUG nova.compute.manager [req-c15d3111-b5cc-4cb1-bc69-63166cad9554 req-e2734f6f-7cab-494a-9f88-a3dda09d7f6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Received event network-changed-6e938ec2-4998-426a-850f-6a6a71f9ef6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:53:22 np0005603609 nova_compute[221550]: 2026-01-31 08:53:22.670 221554 DEBUG nova.compute.manager [req-c15d3111-b5cc-4cb1-bc69-63166cad9554 req-e2734f6f-7cab-494a-9f88-a3dda09d7f6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Refreshing instance network info cache due to event network-changed-6e938ec2-4998-426a-850f-6a6a71f9ef6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:53:22 np0005603609 nova_compute[221550]: 2026-01-31 08:53:22.670 221554 DEBUG oslo_concurrency.lockutils [req-c15d3111-b5cc-4cb1-bc69-63166cad9554 req-e2734f6f-7cab-494a-9f88-a3dda09d7f6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-3d81873b-9fc8-47da-8ff3-81d0b7715556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:53:22 np0005603609 nova_compute[221550]: 2026-01-31 08:53:22.671 221554 DEBUG oslo_concurrency.lockutils [req-c15d3111-b5cc-4cb1-bc69-63166cad9554 req-e2734f6f-7cab-494a-9f88-a3dda09d7f6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-3d81873b-9fc8-47da-8ff3-81d0b7715556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:53:22 np0005603609 nova_compute[221550]: 2026-01-31 08:53:22.671 221554 DEBUG nova.network.neutron [req-c15d3111-b5cc-4cb1-bc69-63166cad9554 req-e2734f6f-7cab-494a-9f88-a3dda09d7f6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Refreshing network info cache for port 6e938ec2-4998-426a-850f-6a6a71f9ef6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:53:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:22.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:23 np0005603609 podman[310163]: 2026-01-31 08:53:23.174529078 +0000 UTC m=+0.054757007 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 03:53:23 np0005603609 podman[310162]: 2026-01-31 08:53:23.225523192 +0000 UTC m=+0.106714765 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 03:53:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:23.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:53:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:24.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:53:24 np0005603609 nova_compute[221550]: 2026-01-31 08:53:24.943 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:25 np0005603609 nova_compute[221550]: 2026-01-31 08:53:25.063 221554 DEBUG nova.network.neutron [req-c15d3111-b5cc-4cb1-bc69-63166cad9554 req-e2734f6f-7cab-494a-9f88-a3dda09d7f6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Updated VIF entry in instance network info cache for port 6e938ec2-4998-426a-850f-6a6a71f9ef6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:53:25 np0005603609 nova_compute[221550]: 2026-01-31 08:53:25.063 221554 DEBUG nova.network.neutron [req-c15d3111-b5cc-4cb1-bc69-63166cad9554 req-e2734f6f-7cab-494a-9f88-a3dda09d7f6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Updating instance_info_cache with network_info: [{"id": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "address": "fa:16:3e:e1:48:11", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e938ec2-49", "ovs_interfaceid": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:53:25 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:53:25 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:53:25 np0005603609 nova_compute[221550]: 2026-01-31 08:53:25.441 221554 DEBUG oslo_concurrency.lockutils [req-c15d3111-b5cc-4cb1-bc69-63166cad9554 req-e2734f6f-7cab-494a-9f88-a3dda09d7f6a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-3d81873b-9fc8-47da-8ff3-81d0b7715556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:53:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:25.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.003000071s ======
Jan 31 03:53:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:26.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Jan 31 03:53:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:53:26Z|00124|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.3
Jan 31 03:53:26 np0005603609 ovn_controller[130359]: 2026-01-31T08:53:26Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:e1:48:11 10.100.0.3
Jan 31 03:53:27 np0005603609 nova_compute[221550]: 2026-01-31 08:53:27.213 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:53:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:27.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:53:27 np0005603609 nova_compute[221550]: 2026-01-31 08:53:27.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:28.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:29.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:29 np0005603609 nova_compute[221550]: 2026-01-31 08:53:29.945 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:30.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:31 np0005603609 ovn_controller[130359]: 2026-01-31T08:53:31Z|00126|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.3
Jan 31 03:53:31 np0005603609 ovn_controller[130359]: 2026-01-31T08:53:31Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:e1:48:11 10.100.0.3
Jan 31 03:53:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:31.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:53:32Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:48:11 10.100.0.3
Jan 31 03:53:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:53:32Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:48:11 10.100.0.3
Jan 31 03:53:32 np0005603609 nova_compute[221550]: 2026-01-31 08:53:32.216 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:32.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:53:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:33.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:53:33 np0005603609 nova_compute[221550]: 2026-01-31 08:53:33.643 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:53:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:34.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:53:34 np0005603609 nova_compute[221550]: 2026-01-31 08:53:34.947 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:35.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:36 np0005603609 nova_compute[221550]: 2026-01-31 08:53:36.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:36.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:37 np0005603609 nova_compute[221550]: 2026-01-31 08:53:37.218 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:37.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:53:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1264266940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:53:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:53:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:38.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:53:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:53:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:39.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:53:39 np0005603609 nova_compute[221550]: 2026-01-31 08:53:39.948 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:40 np0005603609 nova_compute[221550]: 2026-01-31 08:53:40.342 221554 DEBUG nova.compute.manager [None req-a43d6f8a-b919-4fcb-b75c-37e7232ceeac 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:53:40 np0005603609 nova_compute[221550]: 2026-01-31 08:53:40.479 221554 INFO nova.compute.manager [None req-a43d6f8a-b919-4fcb-b75c-37e7232ceeac 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] instance snapshotting#033[00m
Jan 31 03:53:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:40.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:41 np0005603609 nova_compute[221550]: 2026-01-31 08:53:41.351 221554 INFO nova.virt.libvirt.driver [None req-a43d6f8a-b919-4fcb-b75c-37e7232ceeac 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Beginning live snapshot process#033[00m
Jan 31 03:53:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:41.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:42 np0005603609 nova_compute[221550]: 2026-01-31 08:53:42.103 221554 DEBUG nova.storage.rbd_utils [None req-a43d6f8a-b919-4fcb-b75c-37e7232ceeac 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] creating snapshot(5e28910254a84810800634b0d1141b9c) on rbd image(3d81873b-9fc8-47da-8ff3-81d0b7715556_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:53:42 np0005603609 nova_compute[221550]: 2026-01-31 08:53:42.221 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e389 e389: 3 total, 3 up, 3 in
Jan 31 03:53:42 np0005603609 nova_compute[221550]: 2026-01-31 08:53:42.506 221554 DEBUG nova.storage.rbd_utils [None req-a43d6f8a-b919-4fcb-b75c-37e7232ceeac 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] cloning vms/3d81873b-9fc8-47da-8ff3-81d0b7715556_disk@5e28910254a84810800634b0d1141b9c to images/27f20f89-310d-4b9a-9cbf-b30cb0f65051 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:53:42 np0005603609 nova_compute[221550]: 2026-01-31 08:53:42.710 221554 DEBUG nova.storage.rbd_utils [None req-a43d6f8a-b919-4fcb-b75c-37e7232ceeac 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] flattening images/27f20f89-310d-4b9a-9cbf-b30cb0f65051 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:53:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:53:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:42.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:53:43 np0005603609 nova_compute[221550]: 2026-01-31 08:53:43.105 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:43 np0005603609 nova_compute[221550]: 2026-01-31 08:53:43.419 221554 DEBUG nova.storage.rbd_utils [None req-a43d6f8a-b919-4fcb-b75c-37e7232ceeac 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] removing snapshot(5e28910254a84810800634b0d1141b9c) on rbd image(3d81873b-9fc8-47da-8ff3-81d0b7715556_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:53:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:43.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e390 e390: 3 total, 3 up, 3 in
Jan 31 03:53:44 np0005603609 nova_compute[221550]: 2026-01-31 08:53:44.575 221554 DEBUG nova.storage.rbd_utils [None req-a43d6f8a-b919-4fcb-b75c-37e7232ceeac 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] creating snapshot(snap) on rbd image(27f20f89-310d-4b9a-9cbf-b30cb0f65051) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:53:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:44.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:44 np0005603609 nova_compute[221550]: 2026-01-31 08:53:44.950 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:53:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:45.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:53:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e391 e391: 3 total, 3 up, 3 in
Jan 31 03:53:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:46.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:47 np0005603609 nova_compute[221550]: 2026-01-31 08:53:47.224 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:47.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:48 np0005603609 nova_compute[221550]: 2026-01-31 08:53:48.276 221554 INFO nova.virt.libvirt.driver [None req-a43d6f8a-b919-4fcb-b75c-37e7232ceeac 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Snapshot image upload complete#033[00m
Jan 31 03:53:48 np0005603609 nova_compute[221550]: 2026-01-31 08:53:48.276 221554 INFO nova.compute.manager [None req-a43d6f8a-b919-4fcb-b75c-37e7232ceeac 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Took 7.80 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:53:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:48.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e392 e392: 3 total, 3 up, 3 in
Jan 31 03:53:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:53:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:49.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:53:49 np0005603609 nova_compute[221550]: 2026-01-31 08:53:49.954 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:50.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:51.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:52 np0005603609 nova_compute[221550]: 2026-01-31 08:53:52.224 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:52 np0005603609 ovn_controller[130359]: 2026-01-31T08:53:52Z|00946|binding|INFO|Releasing lport 5339b2f1-5a22-4487-8eff-180aa0b06c18 from this chassis (sb_readonly=0)
Jan 31 03:53:52 np0005603609 nova_compute[221550]: 2026-01-31 08:53:52.449 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:52.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:53.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:54 np0005603609 podman[310400]: 2026-01-31 08:53:54.171657504 +0000 UTC m=+0.051683453 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:53:54 np0005603609 podman[310399]: 2026-01-31 08:53:54.222566126 +0000 UTC m=+0.105154747 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:53:54 np0005603609 nova_compute[221550]: 2026-01-31 08:53:54.342 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:54.342 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=94, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=93) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:53:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:54.343 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:53:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e393 e393: 3 total, 3 up, 3 in
Jan 31 03:53:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:53:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:54.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:53:54 np0005603609 nova_compute[221550]: 2026-01-31 08:53:54.955 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:55.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:55 np0005603609 nova_compute[221550]: 2026-01-31 08:53:55.601 221554 DEBUG nova.compute.manager [req-b8413512-f3d9-42d7-8a6b-464d9d931070 req-f59ee75e-9a94-4646-9808-cb541f6b884b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Received event network-changed-6e938ec2-4998-426a-850f-6a6a71f9ef6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:53:55 np0005603609 nova_compute[221550]: 2026-01-31 08:53:55.602 221554 DEBUG nova.compute.manager [req-b8413512-f3d9-42d7-8a6b-464d9d931070 req-f59ee75e-9a94-4646-9808-cb541f6b884b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Refreshing instance network info cache due to event network-changed-6e938ec2-4998-426a-850f-6a6a71f9ef6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:53:55 np0005603609 nova_compute[221550]: 2026-01-31 08:53:55.602 221554 DEBUG oslo_concurrency.lockutils [req-b8413512-f3d9-42d7-8a6b-464d9d931070 req-f59ee75e-9a94-4646-9808-cb541f6b884b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-3d81873b-9fc8-47da-8ff3-81d0b7715556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:53:55 np0005603609 nova_compute[221550]: 2026-01-31 08:53:55.602 221554 DEBUG oslo_concurrency.lockutils [req-b8413512-f3d9-42d7-8a6b-464d9d931070 req-f59ee75e-9a94-4646-9808-cb541f6b884b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-3d81873b-9fc8-47da-8ff3-81d0b7715556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:53:55 np0005603609 nova_compute[221550]: 2026-01-31 08:53:55.602 221554 DEBUG nova.network.neutron [req-b8413512-f3d9-42d7-8a6b-464d9d931070 req-f59ee75e-9a94-4646-9808-cb541f6b884b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Refreshing network info cache for port 6e938ec2-4998-426a-850f-6a6a71f9ef6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.152 221554 DEBUG oslo_concurrency.lockutils [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "3d81873b-9fc8-47da-8ff3-81d0b7715556" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.153 221554 DEBUG oslo_concurrency.lockutils [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "3d81873b-9fc8-47da-8ff3-81d0b7715556" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.154 221554 DEBUG oslo_concurrency.lockutils [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.154 221554 DEBUG oslo_concurrency.lockutils [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.154 221554 DEBUG oslo_concurrency.lockutils [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.156 221554 INFO nova.compute.manager [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Terminating instance#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.158 221554 DEBUG nova.compute.manager [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:53:56 np0005603609 kernel: tap6e938ec2-49 (unregistering): left promiscuous mode
Jan 31 03:53:56 np0005603609 NetworkManager[49064]: <info>  [1769849636.2852] device (tap6e938ec2-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.286 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.293 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:56 np0005603609 ovn_controller[130359]: 2026-01-31T08:53:56Z|00947|binding|INFO|Releasing lport 6e938ec2-4998-426a-850f-6a6a71f9ef6f from this chassis (sb_readonly=0)
Jan 31 03:53:56 np0005603609 ovn_controller[130359]: 2026-01-31T08:53:56Z|00948|binding|INFO|Setting lport 6e938ec2-4998-426a-850f-6a6a71f9ef6f down in Southbound
Jan 31 03:53:56 np0005603609 ovn_controller[130359]: 2026-01-31T08:53:56Z|00949|binding|INFO|Removing iface tap6e938ec2-49 ovn-installed in OVS
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.296 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.302 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:56 np0005603609 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d000000ca.scope: Deactivated successfully.
Jan 31 03:53:56 np0005603609 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d000000ca.scope: Consumed 15.113s CPU time.
Jan 31 03:53:56 np0005603609 systemd-machined[190912]: Machine qemu-111-instance-000000ca terminated.
Jan 31 03:53:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:56.368 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:48:11 10.100.0.3'], port_security=['fa:16:3e:e1:48:11 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3d81873b-9fc8-47da-8ff3-81d0b7715556', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bcbd792-d73a-4177-857d-cca1f0044ec8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '996284baaa2946258a0ab1be9a30d1f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc7f4114-32da-4b90-986a-baaae713330e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6418939f-fc1e-4745-b073-d983621a4c28, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=6e938ec2-4998-426a-850f-6a6a71f9ef6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:53:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:56.369 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 6e938ec2-4998-426a-850f-6a6a71f9ef6f in datapath 5bcbd792-d73a-4177-857d-cca1f0044ec8 unbound from our chassis#033[00m
Jan 31 03:53:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:56.371 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5bcbd792-d73a-4177-857d-cca1f0044ec8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:53:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:56.372 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b825fadb-016c-48b0-8251-f7258f0330be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:56.373 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8 namespace which is not needed anymore#033[00m
Jan 31 03:53:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.395 221554 INFO nova.virt.libvirt.driver [-] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Instance destroyed successfully.#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.395 221554 DEBUG nova.objects.instance [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lazy-loading 'resources' on Instance uuid 3d81873b-9fc8-47da-8ff3-81d0b7715556 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:53:56 np0005603609 neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8[309972]: [NOTICE]   (309976) : haproxy version is 2.8.14-c23fe91
Jan 31 03:53:56 np0005603609 neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8[309972]: [NOTICE]   (309976) : path to executable is /usr/sbin/haproxy
Jan 31 03:53:56 np0005603609 neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8[309972]: [WARNING]  (309976) : Exiting Master process...
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.475 221554 DEBUG nova.virt.libvirt.vif [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:52:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1328608103',display_name='tempest-TestSnapshotPattern-server-1328608103',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1328608103',id=202,image_ref='e29674ab-eaa7-4f25-a795-4373e5f5c4e8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGk1avDtDRCkfi38WDQyZMb5AgNzJzpa/E7afrTWkN843bVEm67ME3qOwcYUGN5nJ8L7GrnzGumfCdBmJYfKJwV+0Rfyn9QiAVoGTFowygEpRUI24B1nMsGC8mTo9YdR3w==',key_name='tempest-TestSnapshotPattern-1326482889',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:53:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='996284baaa2946258a0ab1be9a30d1f6',ramdisk_id='',reservation_id='r-9et6zsad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='c304efec-22bc-408c-adea-b06aaf5fbe40',image_min_disk='1',image_min_ram='0',image_owner_id='996284baaa2946258a0ab1be9a30d1f6',image_owner_project_name='tempest-TestSnapshotPattern-920311190',image_owner_user_name='tempest-TestSnapshotPattern-920311190-project-member',image_user_id='5a37d71e432b45168339dde5abdbe7b6',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-920311190',owner_user_name='tempest-TestSnapshotPattern-920311190-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:53:48Z,user_data=None,user_id='5a37d71e432b45168339dde5abdbe7b6',uuid=3d81873b-9fc8-47da-8ff3-81d0b7715556,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "address": "fa:16:3e:e1:48:11", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e938ec2-49", "ovs_interfaceid": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:53:56 np0005603609 neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8[309972]: [ALERT]    (309976) : Current worker (309978) exited with code 143 (Terminated)
Jan 31 03:53:56 np0005603609 neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8[309972]: [WARNING]  (309976) : All workers exited. Exiting... (0)
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.477 221554 DEBUG nova.network.os_vif_util [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Converting VIF {"id": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "address": "fa:16:3e:e1:48:11", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e938ec2-49", "ovs_interfaceid": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.478 221554 DEBUG nova.network.os_vif_util [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:48:11,bridge_name='br-int',has_traffic_filtering=True,id=6e938ec2-4998-426a-850f-6a6a71f9ef6f,network=Network(5bcbd792-d73a-4177-857d-cca1f0044ec8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e938ec2-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.478 221554 DEBUG os_vif [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:48:11,bridge_name='br-int',has_traffic_filtering=True,id=6e938ec2-4998-426a-850f-6a6a71f9ef6f,network=Network(5bcbd792-d73a-4177-857d-cca1f0044ec8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e938ec2-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:53:56 np0005603609 systemd[1]: libpod-2f527edfddf05a5faba9bd64afdf28e867043946cff80765c897684268b1504b.scope: Deactivated successfully.
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.480 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.480 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e938ec2-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.482 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.483 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.485 221554 INFO os_vif [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:48:11,bridge_name='br-int',has_traffic_filtering=True,id=6e938ec2-4998-426a-850f-6a6a71f9ef6f,network=Network(5bcbd792-d73a-4177-857d-cca1f0044ec8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e938ec2-49')#033[00m
Jan 31 03:53:56 np0005603609 podman[310478]: 2026-01-31 08:53:56.487071458 +0000 UTC m=+0.041181847 container died 2f527edfddf05a5faba9bd64afdf28e867043946cff80765c897684268b1504b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 03:53:56 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f527edfddf05a5faba9bd64afdf28e867043946cff80765c897684268b1504b-userdata-shm.mount: Deactivated successfully.
Jan 31 03:53:56 np0005603609 systemd[1]: var-lib-containers-storage-overlay-409ba0e3dc5e0483a558f44ebec91475a6ce4a0e621a858d4293b848de314540-merged.mount: Deactivated successfully.
Jan 31 03:53:56 np0005603609 podman[310478]: 2026-01-31 08:53:56.532084488 +0000 UTC m=+0.086194877 container cleanup 2f527edfddf05a5faba9bd64afdf28e867043946cff80765c897684268b1504b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:53:56 np0005603609 systemd[1]: libpod-conmon-2f527edfddf05a5faba9bd64afdf28e867043946cff80765c897684268b1504b.scope: Deactivated successfully.
Jan 31 03:53:56 np0005603609 podman[310524]: 2026-01-31 08:53:56.587520931 +0000 UTC m=+0.038350270 container remove 2f527edfddf05a5faba9bd64afdf28e867043946cff80765c897684268b1504b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:53:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:56.591 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[df52d311-55e3-4bc4-9016-91e621b26384]: (4, ('Sat Jan 31 08:53:56 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8 (2f527edfddf05a5faba9bd64afdf28e867043946cff80765c897684268b1504b)\n2f527edfddf05a5faba9bd64afdf28e867043946cff80765c897684268b1504b\nSat Jan 31 08:53:56 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8 (2f527edfddf05a5faba9bd64afdf28e867043946cff80765c897684268b1504b)\n2f527edfddf05a5faba9bd64afdf28e867043946cff80765c897684268b1504b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:56.593 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[079ef9d3-a72a-4dcd-a03f-4ef1604fc2ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:56.595 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bcbd792-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.596 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:56 np0005603609 kernel: tap5bcbd792-d0: left promiscuous mode
Jan 31 03:53:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:56.600 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[734a5f41-18f2-4d53-8745-a64624e03c39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.604 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:56.617 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f3fb88-c2b9-4be3-afb3-c7cb6ac60702]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:56.618 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[937ef88a-8ba1-4a71-b6a2-6d1e06aedda2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:56.628 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[871f37b9-bee8-4f2b-b963-170e29ec4617]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 983166, 'reachable_time': 27339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310540, 'error': None, 'target': 'ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:56 np0005603609 systemd[1]: run-netns-ovnmeta\x2d5bcbd792\x2dd73a\x2d4177\x2d857d\x2dcca1f0044ec8.mount: Deactivated successfully.
Jan 31 03:53:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:56.631 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5bcbd792-d73a-4177-857d-cca1f0044ec8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:53:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:53:56.631 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c9e2fa-129f-4344-9eaa-6669c49daedd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:53:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:56.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.911 221554 INFO nova.virt.libvirt.driver [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Deleting instance files /var/lib/nova/instances/3d81873b-9fc8-47da-8ff3-81d0b7715556_del#033[00m
Jan 31 03:53:56 np0005603609 nova_compute[221550]: 2026-01-31 08:53:56.912 221554 INFO nova.virt.libvirt.driver [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Deletion of /var/lib/nova/instances/3d81873b-9fc8-47da-8ff3-81d0b7715556_del complete#033[00m
Jan 31 03:53:57 np0005603609 nova_compute[221550]: 2026-01-31 08:53:57.219 221554 INFO nova.compute.manager [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Took 1.06 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:53:57 np0005603609 nova_compute[221550]: 2026-01-31 08:53:57.220 221554 DEBUG oslo.service.loopingcall [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:53:57 np0005603609 nova_compute[221550]: 2026-01-31 08:53:57.221 221554 DEBUG nova.compute.manager [-] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:53:57 np0005603609 nova_compute[221550]: 2026-01-31 08:53:57.221 221554 DEBUG nova.network.neutron [-] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:53:57 np0005603609 nova_compute[221550]: 2026-01-31 08:53:57.229 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:57 np0005603609 nova_compute[221550]: 2026-01-31 08:53:57.357 221554 DEBUG nova.compute.manager [req-1ef6fc73-28d8-471e-ab3d-cbd4921dab8f req-f2261302-6bf6-40d7-9353-a2006044d58c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Received event network-vif-unplugged-6e938ec2-4998-426a-850f-6a6a71f9ef6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:53:57 np0005603609 nova_compute[221550]: 2026-01-31 08:53:57.357 221554 DEBUG oslo_concurrency.lockutils [req-1ef6fc73-28d8-471e-ab3d-cbd4921dab8f req-f2261302-6bf6-40d7-9353-a2006044d58c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:57 np0005603609 nova_compute[221550]: 2026-01-31 08:53:57.358 221554 DEBUG oslo_concurrency.lockutils [req-1ef6fc73-28d8-471e-ab3d-cbd4921dab8f req-f2261302-6bf6-40d7-9353-a2006044d58c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:57 np0005603609 nova_compute[221550]: 2026-01-31 08:53:57.358 221554 DEBUG oslo_concurrency.lockutils [req-1ef6fc73-28d8-471e-ab3d-cbd4921dab8f req-f2261302-6bf6-40d7-9353-a2006044d58c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:57 np0005603609 nova_compute[221550]: 2026-01-31 08:53:57.359 221554 DEBUG nova.compute.manager [req-1ef6fc73-28d8-471e-ab3d-cbd4921dab8f req-f2261302-6bf6-40d7-9353-a2006044d58c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] No waiting events found dispatching network-vif-unplugged-6e938ec2-4998-426a-850f-6a6a71f9ef6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:53:57 np0005603609 nova_compute[221550]: 2026-01-31 08:53:57.360 221554 DEBUG nova.compute.manager [req-1ef6fc73-28d8-471e-ab3d-cbd4921dab8f req-f2261302-6bf6-40d7-9353-a2006044d58c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Received event network-vif-unplugged-6e938ec2-4998-426a-850f-6a6a71f9ef6f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:53:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:53:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:57.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:53:58 np0005603609 nova_compute[221550]: 2026-01-31 08:53:58.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:58 np0005603609 nova_compute[221550]: 2026-01-31 08:53:58.773 221554 DEBUG nova.network.neutron [req-b8413512-f3d9-42d7-8a6b-464d9d931070 req-f59ee75e-9a94-4646-9808-cb541f6b884b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Updated VIF entry in instance network info cache for port 6e938ec2-4998-426a-850f-6a6a71f9ef6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:53:58 np0005603609 nova_compute[221550]: 2026-01-31 08:53:58.774 221554 DEBUG nova.network.neutron [req-b8413512-f3d9-42d7-8a6b-464d9d931070 req-f59ee75e-9a94-4646-9808-cb541f6b884b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Updating instance_info_cache with network_info: [{"id": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "address": "fa:16:3e:e1:48:11", "network": {"id": "5bcbd792-d73a-4177-857d-cca1f0044ec8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1702155659-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996284baaa2946258a0ab1be9a30d1f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e938ec2-49", "ovs_interfaceid": "6e938ec2-4998-426a-850f-6a6a71f9ef6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:53:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:58.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.134 221554 DEBUG nova.compute.manager [req-532977bc-fa14-464d-baec-08f3e08c4d80 req-43b1be16-555f-4536-ab13-3a5333fad50d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Received event network-vif-deleted-6e938ec2-4998-426a-850f-6a6a71f9ef6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.135 221554 INFO nova.compute.manager [req-532977bc-fa14-464d-baec-08f3e08c4d80 req-43b1be16-555f-4536-ab13-3a5333fad50d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Neutron deleted interface 6e938ec2-4998-426a-850f-6a6a71f9ef6f; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.135 221554 DEBUG nova.network.neutron [req-532977bc-fa14-464d-baec-08f3e08c4d80 req-43b1be16-555f-4536-ab13-3a5333fad50d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.139 221554 DEBUG oslo_concurrency.lockutils [req-b8413512-f3d9-42d7-8a6b-464d9d931070 req-f59ee75e-9a94-4646-9808-cb541f6b884b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-3d81873b-9fc8-47da-8ff3-81d0b7715556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.145 221554 DEBUG nova.network.neutron [-] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.462 221554 INFO nova.compute.manager [-] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Took 2.24 seconds to deallocate network for instance.#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.471 221554 DEBUG nova.compute.manager [req-532977bc-fa14-464d-baec-08f3e08c4d80 req-43b1be16-555f-4536-ab13-3a5333fad50d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Detach interface failed, port_id=6e938ec2-4998-426a-850f-6a6a71f9ef6f, reason: Instance 3d81873b-9fc8-47da-8ff3-81d0b7715556 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:53:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:53:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:59.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.604 221554 DEBUG nova.compute.manager [req-ee8328fd-6413-47fa-a914-17d7945a4b7d req-7c9fee0b-0fd9-427b-9112-8866b8c40ec4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Received event network-vif-plugged-6e938ec2-4998-426a-850f-6a6a71f9ef6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.605 221554 DEBUG oslo_concurrency.lockutils [req-ee8328fd-6413-47fa-a914-17d7945a4b7d req-7c9fee0b-0fd9-427b-9112-8866b8c40ec4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.605 221554 DEBUG oslo_concurrency.lockutils [req-ee8328fd-6413-47fa-a914-17d7945a4b7d req-7c9fee0b-0fd9-427b-9112-8866b8c40ec4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.605 221554 DEBUG oslo_concurrency.lockutils [req-ee8328fd-6413-47fa-a914-17d7945a4b7d req-7c9fee0b-0fd9-427b-9112-8866b8c40ec4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "3d81873b-9fc8-47da-8ff3-81d0b7715556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.605 221554 DEBUG nova.compute.manager [req-ee8328fd-6413-47fa-a914-17d7945a4b7d req-7c9fee0b-0fd9-427b-9112-8866b8c40ec4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] No waiting events found dispatching network-vif-plugged-6e938ec2-4998-426a-850f-6a6a71f9ef6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.605 221554 WARNING nova.compute.manager [req-ee8328fd-6413-47fa-a914-17d7945a4b7d req-7c9fee0b-0fd9-427b-9112-8866b8c40ec4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Received unexpected event network-vif-plugged-6e938ec2-4998-426a-850f-6a6a71f9ef6f for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.734 221554 DEBUG oslo_concurrency.lockutils [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.735 221554 DEBUG oslo_concurrency.lockutils [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.810 221554 DEBUG nova.scheduler.client.report [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.862 221554 DEBUG nova.scheduler.client.report [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.863 221554 DEBUG nova.compute.provider_tree [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.881 221554 DEBUG nova.scheduler.client.report [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.918 221554 DEBUG nova.scheduler.client.report [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:53:59 np0005603609 nova_compute[221550]: 2026-01-31 08:53:59.977 221554 DEBUG oslo_concurrency.processutils [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:54:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:54:00 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/243364613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:54:00 np0005603609 nova_compute[221550]: 2026-01-31 08:54:00.418 221554 DEBUG oslo_concurrency.processutils [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:54:00 np0005603609 nova_compute[221550]: 2026-01-31 08:54:00.423 221554 DEBUG nova.compute.provider_tree [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:54:00 np0005603609 nova_compute[221550]: 2026-01-31 08:54:00.447 221554 DEBUG nova.scheduler.client.report [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:54:00 np0005603609 nova_compute[221550]: 2026-01-31 08:54:00.486 221554 DEBUG oslo_concurrency.lockutils [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:00 np0005603609 nova_compute[221550]: 2026-01-31 08:54:00.536 221554 INFO nova.scheduler.client.report [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Deleted allocations for instance 3d81873b-9fc8-47da-8ff3-81d0b7715556#033[00m
Jan 31 03:54:00 np0005603609 nova_compute[221550]: 2026-01-31 08:54:00.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:54:00 np0005603609 nova_compute[221550]: 2026-01-31 08:54:00.662 221554 DEBUG oslo_concurrency.lockutils [None req-c1429fe7-06e1-4142-841b-752826bc10c6 5a37d71e432b45168339dde5abdbe7b6 996284baaa2946258a0ab1be9a30d1f6 - - default default] Lock "3d81873b-9fc8-47da-8ff3-81d0b7715556" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:00.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:00 np0005603609 nova_compute[221550]: 2026-01-31 08:54:00.917 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:01 np0005603609 nova_compute[221550]: 2026-01-31 08:54:01.484 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:01.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:02 np0005603609 nova_compute[221550]: 2026-01-31 08:54:02.227 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e394 e394: 3 total, 3 up, 3 in
Jan 31 03:54:02 np0005603609 nova_compute[221550]: 2026-01-31 08:54:02.664 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:54:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:02.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:03.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2018586957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:54:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:04.345 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '94'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:04.813325) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849644813353, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1924, "num_deletes": 259, "total_data_size": 4448864, "memory_usage": 4501264, "flush_reason": "Manual Compaction"}
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849644842826, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 2922273, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84735, "largest_seqno": 86654, "table_properties": {"data_size": 2914231, "index_size": 4855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16748, "raw_average_key_size": 20, "raw_value_size": 2898148, "raw_average_value_size": 3508, "num_data_blocks": 210, "num_entries": 826, "num_filter_entries": 826, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849489, "oldest_key_time": 1769849489, "file_creation_time": 1769849644, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 29554 microseconds, and 5842 cpu microseconds.
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:04.842874) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 2922273 bytes OK
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:04.842893) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:04.844833) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:04.844859) EVENT_LOG_v1 {"time_micros": 1769849644844851, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:04.844880) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 4440249, prev total WAL file size 4440249, number of live WAL files 2.
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:04.845898) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323635' seq:72057594037927935, type:22 .. '6C6F676D0033353138' seq:0, type:0; will stop at (end)
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(2853KB)], [174(11MB)]
Jan 31 03:54:04 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849644846098, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 14646153, "oldest_snapshot_seqno": -1}
Jan 31 03:54:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:54:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:04.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:54:05 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 10699 keys, 14496238 bytes, temperature: kUnknown
Jan 31 03:54:05 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849645043388, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 14496238, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14426992, "index_size": 41421, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26757, "raw_key_size": 282408, "raw_average_key_size": 26, "raw_value_size": 14239719, "raw_average_value_size": 1330, "num_data_blocks": 1583, "num_entries": 10699, "num_filter_entries": 10699, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769849644, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:54:05 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:54:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:05.043713) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 14496238 bytes
Jan 31 03:54:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:05.045112) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.2 rd, 73.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 11.2 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(10.0) write-amplify(5.0) OK, records in: 11236, records dropped: 537 output_compression: NoCompression
Jan 31 03:54:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:05.045143) EVENT_LOG_v1 {"time_micros": 1769849645045128, "job": 112, "event": "compaction_finished", "compaction_time_micros": 197365, "compaction_time_cpu_micros": 44924, "output_level": 6, "num_output_files": 1, "total_output_size": 14496238, "num_input_records": 11236, "num_output_records": 10699, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:54:05 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:54:05 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849645045665, "job": 112, "event": "table_file_deletion", "file_number": 176}
Jan 31 03:54:05 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:54:05 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849645047889, "job": 112, "event": "table_file_deletion", "file_number": 174}
Jan 31 03:54:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:04.845675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:05.047935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:05.047942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:05.047946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:05.047949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:05.047995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:05.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:05 np0005603609 nova_compute[221550]: 2026-01-31 08:54:05.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:54:05 np0005603609 nova_compute[221550]: 2026-01-31 08:54:05.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:54:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:06 np0005603609 nova_compute[221550]: 2026-01-31 08:54:06.487 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:06.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:07 np0005603609 nova_compute[221550]: 2026-01-31 08:54:07.229 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:54:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:07.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:54:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:07.546 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:07.546 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:07.546 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:08 np0005603609 nova_compute[221550]: 2026-01-31 08:54:08.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:54:08 np0005603609 nova_compute[221550]: 2026-01-31 08:54:08.750 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:08 np0005603609 nova_compute[221550]: 2026-01-31 08:54:08.750 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:08 np0005603609 nova_compute[221550]: 2026-01-31 08:54:08.751 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:08 np0005603609 nova_compute[221550]: 2026-01-31 08:54:08.751 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:54:08 np0005603609 nova_compute[221550]: 2026-01-31 08:54:08.751 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:54:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:08.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:54:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2686569800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:54:09 np0005603609 nova_compute[221550]: 2026-01-31 08:54:09.204 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:54:09 np0005603609 nova_compute[221550]: 2026-01-31 08:54:09.354 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:54:09 np0005603609 nova_compute[221550]: 2026-01-31 08:54:09.355 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4180MB free_disk=20.93897247314453GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:54:09 np0005603609 nova_compute[221550]: 2026-01-31 08:54:09.355 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:09 np0005603609 nova_compute[221550]: 2026-01-31 08:54:09.355 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:54:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:09.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:54:09 np0005603609 nova_compute[221550]: 2026-01-31 08:54:09.550 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:54:09 np0005603609 nova_compute[221550]: 2026-01-31 08:54:09.550 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:54:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e395 e395: 3 total, 3 up, 3 in
Jan 31 03:54:09 np0005603609 nova_compute[221550]: 2026-01-31 08:54:09.927 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:54:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:54:10 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/127726543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:54:10 np0005603609 nova_compute[221550]: 2026-01-31 08:54:10.361 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:54:10 np0005603609 nova_compute[221550]: 2026-01-31 08:54:10.367 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:54:10 np0005603609 nova_compute[221550]: 2026-01-31 08:54:10.404 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:54:10 np0005603609 nova_compute[221550]: 2026-01-31 08:54:10.449 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:54:10 np0005603609 nova_compute[221550]: 2026-01-31 08:54:10.450 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:54:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:10.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:54:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:11 np0005603609 nova_compute[221550]: 2026-01-31 08:54:11.392 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849636.3917716, 3d81873b-9fc8-47da-8ff3-81d0b7715556 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:54:11 np0005603609 nova_compute[221550]: 2026-01-31 08:54:11.393 221554 INFO nova.compute.manager [-] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:54:11 np0005603609 nova_compute[221550]: 2026-01-31 08:54:11.439 221554 DEBUG nova.compute.manager [None req-4f04697d-e2e0-4fcb-b431-e481c87e1e1b - - - - - -] [instance: 3d81873b-9fc8-47da-8ff3-81d0b7715556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:54:11 np0005603609 nova_compute[221550]: 2026-01-31 08:54:11.490 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:11.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:12 np0005603609 nova_compute[221550]: 2026-01-31 08:54:12.231 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:12 np0005603609 nova_compute[221550]: 2026-01-31 08:54:12.595 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:12.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:13 np0005603609 nova_compute[221550]: 2026-01-31 08:54:13.450 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:54:13 np0005603609 nova_compute[221550]: 2026-01-31 08:54:13.450 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:54:13 np0005603609 nova_compute[221550]: 2026-01-31 08:54:13.451 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:54:13 np0005603609 nova_compute[221550]: 2026-01-31 08:54:13.475 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:54:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:13.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:54:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:14.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:54:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:15.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:15 np0005603609 nova_compute[221550]: 2026-01-31 08:54:15.680 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:54:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:16 np0005603609 nova_compute[221550]: 2026-01-31 08:54:16.494 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:16.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:17 np0005603609 nova_compute[221550]: 2026-01-31 08:54:17.233 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:17.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:18.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:19.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:20 np0005603609 nova_compute[221550]: 2026-01-31 08:54:20.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:54:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:20.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:21 np0005603609 nova_compute[221550]: 2026-01-31 08:54:21.499 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:21.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:21 np0005603609 nova_compute[221550]: 2026-01-31 08:54:21.563 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:21 np0005603609 nova_compute[221550]: 2026-01-31 08:54:21.630 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:22 np0005603609 nova_compute[221550]: 2026-01-31 08:54:22.234 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:54:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:22.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:54:23 np0005603609 nova_compute[221550]: 2026-01-31 08:54:23.365 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "419232fb-c874-411a-b4a8-cad39708c3aa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:23 np0005603609 nova_compute[221550]: 2026-01-31 08:54:23.366 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "419232fb-c874-411a-b4a8-cad39708c3aa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:23 np0005603609 nova_compute[221550]: 2026-01-31 08:54:23.392 221554 DEBUG nova.compute.manager [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:54:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:54:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:23.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:54:23 np0005603609 nova_compute[221550]: 2026-01-31 08:54:23.561 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:23 np0005603609 nova_compute[221550]: 2026-01-31 08:54:23.561 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:23 np0005603609 nova_compute[221550]: 2026-01-31 08:54:23.567 221554 DEBUG nova.virt.hardware [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:54:23 np0005603609 nova_compute[221550]: 2026-01-31 08:54:23.567 221554 INFO nova.compute.claims [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:54:23 np0005603609 nova_compute[221550]: 2026-01-31 08:54:23.862 221554 DEBUG oslo_concurrency.processutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:54:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:54:24 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/658489239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.279 221554 DEBUG oslo_concurrency.processutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.286 221554 DEBUG nova.compute.provider_tree [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.317 221554 DEBUG nova.scheduler.client.report [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.349 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.350 221554 DEBUG nova.compute.manager [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.465 221554 DEBUG nova.compute.manager [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.466 221554 DEBUG nova.network.neutron [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.508 221554 INFO nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.551 221554 DEBUG nova.compute.manager [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:54:24 np0005603609 podman[310657]: 2026-01-31 08:54:24.65171333 +0000 UTC m=+0.045916403 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 03:54:24 np0005603609 podman[310656]: 2026-01-31 08:54:24.677757849 +0000 UTC m=+0.071351418 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.705 221554 DEBUG nova.compute.manager [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.707 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.707 221554 INFO nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Creating image(s)#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.738 221554 DEBUG nova.storage.rbd_utils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 419232fb-c874-411a-b4a8-cad39708c3aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.762 221554 DEBUG nova.storage.rbd_utils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 419232fb-c874-411a-b4a8-cad39708c3aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.786 221554 DEBUG nova.storage.rbd_utils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 419232fb-c874-411a-b4a8-cad39708c3aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.789 221554 DEBUG oslo_concurrency.processutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.814 221554 DEBUG nova.policy [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a56abd8fdd341ae88a99e102ab399de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.863 221554 DEBUG oslo_concurrency.processutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.865 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.866 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.866 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:24.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.894 221554 DEBUG nova.storage.rbd_utils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 419232fb-c874-411a-b4a8-cad39708c3aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:54:24 np0005603609 nova_compute[221550]: 2026-01-31 08:54:24.897 221554 DEBUG oslo_concurrency.processutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 419232fb-c874-411a-b4a8-cad39708c3aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:54:25 np0005603609 nova_compute[221550]: 2026-01-31 08:54:25.155 221554 DEBUG oslo_concurrency.processutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 419232fb-c874-411a-b4a8-cad39708c3aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:54:25 np0005603609 nova_compute[221550]: 2026-01-31 08:54:25.230 221554 DEBUG nova.storage.rbd_utils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] resizing rbd image 419232fb-c874-411a-b4a8-cad39708c3aa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:54:25 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:54:25 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:54:25 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:54:25 np0005603609 nova_compute[221550]: 2026-01-31 08:54:25.342 221554 DEBUG nova.objects.instance [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'migration_context' on Instance uuid 419232fb-c874-411a-b4a8-cad39708c3aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:54:25 np0005603609 nova_compute[221550]: 2026-01-31 08:54:25.379 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:54:25 np0005603609 nova_compute[221550]: 2026-01-31 08:54:25.380 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Ensure instance console log exists: /var/lib/nova/instances/419232fb-c874-411a-b4a8-cad39708c3aa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:54:25 np0005603609 nova_compute[221550]: 2026-01-31 08:54:25.380 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:25 np0005603609 nova_compute[221550]: 2026-01-31 08:54:25.380 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:25 np0005603609 nova_compute[221550]: 2026-01-31 08:54:25.381 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:25.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:26 np0005603609 nova_compute[221550]: 2026-01-31 08:54:26.424 221554 DEBUG nova.network.neutron [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Successfully created port: ffca14da-f5e6-48fb-820a-efe53de40c61 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:54:26 np0005603609 nova_compute[221550]: 2026-01-31 08:54:26.503 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:54:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:26.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:54:27 np0005603609 nova_compute[221550]: 2026-01-31 08:54:27.237 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:27.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:27 np0005603609 nova_compute[221550]: 2026-01-31 08:54:27.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:54:28 np0005603609 nova_compute[221550]: 2026-01-31 08:54:28.011 221554 DEBUG nova.network.neutron [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Successfully updated port: ffca14da-f5e6-48fb-820a-efe53de40c61 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:54:28 np0005603609 nova_compute[221550]: 2026-01-31 08:54:28.030 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-419232fb-c874-411a-b4a8-cad39708c3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:54:28 np0005603609 nova_compute[221550]: 2026-01-31 08:54:28.030 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-419232fb-c874-411a-b4a8-cad39708c3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:54:28 np0005603609 nova_compute[221550]: 2026-01-31 08:54:28.030 221554 DEBUG nova.network.neutron [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:54:28 np0005603609 nova_compute[221550]: 2026-01-31 08:54:28.193 221554 DEBUG nova.compute.manager [req-ec248afb-08f1-4009-acfc-a88f5a452983 req-2d6ca5de-e894-4924-bee5-0d03bcd0efee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Received event network-changed-ffca14da-f5e6-48fb-820a-efe53de40c61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:54:28 np0005603609 nova_compute[221550]: 2026-01-31 08:54:28.193 221554 DEBUG nova.compute.manager [req-ec248afb-08f1-4009-acfc-a88f5a452983 req-2d6ca5de-e894-4924-bee5-0d03bcd0efee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Refreshing instance network info cache due to event network-changed-ffca14da-f5e6-48fb-820a-efe53de40c61. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:54:28 np0005603609 nova_compute[221550]: 2026-01-31 08:54:28.193 221554 DEBUG oslo_concurrency.lockutils [req-ec248afb-08f1-4009-acfc-a88f5a452983 req-2d6ca5de-e894-4924-bee5-0d03bcd0efee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-419232fb-c874-411a-b4a8-cad39708c3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:54:28 np0005603609 nova_compute[221550]: 2026-01-31 08:54:28.554 221554 DEBUG nova.network.neutron [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:54:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:54:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:28.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:54:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:54:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:29.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.327 221554 DEBUG nova.network.neutron [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Updating instance_info_cache with network_info: [{"id": "ffca14da-f5e6-48fb-820a-efe53de40c61", "address": "fa:16:3e:20:a4:b8", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffca14da-f5", "ovs_interfaceid": "ffca14da-f5e6-48fb-820a-efe53de40c61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.428 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-419232fb-c874-411a-b4a8-cad39708c3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.428 221554 DEBUG nova.compute.manager [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Instance network_info: |[{"id": "ffca14da-f5e6-48fb-820a-efe53de40c61", "address": "fa:16:3e:20:a4:b8", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffca14da-f5", "ovs_interfaceid": "ffca14da-f5e6-48fb-820a-efe53de40c61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.428 221554 DEBUG oslo_concurrency.lockutils [req-ec248afb-08f1-4009-acfc-a88f5a452983 req-2d6ca5de-e894-4924-bee5-0d03bcd0efee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-419232fb-c874-411a-b4a8-cad39708c3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.429 221554 DEBUG nova.network.neutron [req-ec248afb-08f1-4009-acfc-a88f5a452983 req-2d6ca5de-e894-4924-bee5-0d03bcd0efee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Refreshing network info cache for port ffca14da-f5e6-48fb-820a-efe53de40c61 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.431 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Start _get_guest_xml network_info=[{"id": "ffca14da-f5e6-48fb-820a-efe53de40c61", "address": "fa:16:3e:20:a4:b8", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffca14da-f5", "ovs_interfaceid": "ffca14da-f5e6-48fb-820a-efe53de40c61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.436 221554 WARNING nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.443 221554 DEBUG nova.virt.libvirt.host [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.444 221554 DEBUG nova.virt.libvirt.host [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.447 221554 DEBUG nova.virt.libvirt.host [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.448 221554 DEBUG nova.virt.libvirt.host [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.449 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.450 221554 DEBUG nova.virt.hardware [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.450 221554 DEBUG nova.virt.hardware [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.451 221554 DEBUG nova.virt.hardware [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.451 221554 DEBUG nova.virt.hardware [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.451 221554 DEBUG nova.virt.hardware [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.451 221554 DEBUG nova.virt.hardware [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.452 221554 DEBUG nova.virt.hardware [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.452 221554 DEBUG nova.virt.hardware [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.452 221554 DEBUG nova.virt.hardware [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.453 221554 DEBUG nova.virt.hardware [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.453 221554 DEBUG nova.virt.hardware [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.456 221554 DEBUG oslo_concurrency.processutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:54:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:54:30 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1813203743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:54:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:30.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.895 221554 DEBUG oslo_concurrency.processutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.929 221554 DEBUG nova.storage.rbd_utils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 419232fb-c874-411a-b4a8-cad39708c3aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:54:30 np0005603609 nova_compute[221550]: 2026-01-31 08:54:30.934 221554 DEBUG oslo_concurrency.processutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:54:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:54:31 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/925435894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.425 221554 DEBUG oslo_concurrency.processutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.426 221554 DEBUG nova.virt.libvirt.vif [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:54:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1042869875',display_name='tempest-TestNetworkBasicOps-server-1042869875',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1042869875',id=204,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN1syU/vyioUizkyhRVHAKVOmhOYRuzzXHkZyKCpyq7hSq7kIGF92QtCU4tebgULvA5/eUsBZxKQwP+CZaKMYbrjmhq0RAYmqQmZng4wxVlg+sGfdgk+gIM/6U8k6r0hrA==',key_name='tempest-TestNetworkBasicOps-1090043757',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-6gr9v0z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:54:24Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=419232fb-c874-411a-b4a8-cad39708c3aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ffca14da-f5e6-48fb-820a-efe53de40c61", "address": "fa:16:3e:20:a4:b8", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffca14da-f5", "ovs_interfaceid": "ffca14da-f5e6-48fb-820a-efe53de40c61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.427 221554 DEBUG nova.network.os_vif_util [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "ffca14da-f5e6-48fb-820a-efe53de40c61", "address": "fa:16:3e:20:a4:b8", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffca14da-f5", "ovs_interfaceid": "ffca14da-f5e6-48fb-820a-efe53de40c61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.427 221554 DEBUG nova.network.os_vif_util [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:a4:b8,bridge_name='br-int',has_traffic_filtering=True,id=ffca14da-f5e6-48fb-820a-efe53de40c61,network=Network(e71e82f1-2476-4d79-9b64-3d07204593df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffca14da-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.428 221554 DEBUG nova.objects.instance [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 419232fb-c874-411a-b4a8-cad39708c3aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.446 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  <uuid>419232fb-c874-411a-b4a8-cad39708c3aa</uuid>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  <name>instance-000000cc</name>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestNetworkBasicOps-server-1042869875</nova:name>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:54:30</nova:creationTime>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        <nova:port uuid="ffca14da-f5e6-48fb-820a-efe53de40c61">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <entry name="serial">419232fb-c874-411a-b4a8-cad39708c3aa</entry>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <entry name="uuid">419232fb-c874-411a-b4a8-cad39708c3aa</entry>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/419232fb-c874-411a-b4a8-cad39708c3aa_disk">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/419232fb-c874-411a-b4a8-cad39708c3aa_disk.config">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:20:a4:b8"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <target dev="tapffca14da-f5"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/419232fb-c874-411a-b4a8-cad39708c3aa/console.log" append="off"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:54:31 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:54:31 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:54:31 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:54:31 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.448 221554 DEBUG nova.compute.manager [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Preparing to wait for external event network-vif-plugged-ffca14da-f5e6-48fb-820a-efe53de40c61 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.449 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.449 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.450 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.451 221554 DEBUG nova.virt.libvirt.vif [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:54:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1042869875',display_name='tempest-TestNetworkBasicOps-server-1042869875',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1042869875',id=204,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN1syU/vyioUizkyhRVHAKVOmhOYRuzzXHkZyKCpyq7hSq7kIGF92QtCU4tebgULvA5/eUsBZxKQwP+CZaKMYbrjmhq0RAYmqQmZng4wxVlg+sGfdgk+gIM/6U8k6r0hrA==',key_name='tempest-TestNetworkBasicOps-1090043757',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-6gr9v0z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:54:24Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=419232fb-c874-411a-b4a8-cad39708c3aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ffca14da-f5e6-48fb-820a-efe53de40c61", "address": "fa:16:3e:20:a4:b8", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffca14da-f5", "ovs_interfaceid": "ffca14da-f5e6-48fb-820a-efe53de40c61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.451 221554 DEBUG nova.network.os_vif_util [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "ffca14da-f5e6-48fb-820a-efe53de40c61", "address": "fa:16:3e:20:a4:b8", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffca14da-f5", "ovs_interfaceid": "ffca14da-f5e6-48fb-820a-efe53de40c61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.452 221554 DEBUG nova.network.os_vif_util [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:a4:b8,bridge_name='br-int',has_traffic_filtering=True,id=ffca14da-f5e6-48fb-820a-efe53de40c61,network=Network(e71e82f1-2476-4d79-9b64-3d07204593df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffca14da-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.454 221554 DEBUG os_vif [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:a4:b8,bridge_name='br-int',has_traffic_filtering=True,id=ffca14da-f5e6-48fb-820a-efe53de40c61,network=Network(e71e82f1-2476-4d79-9b64-3d07204593df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffca14da-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.455 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.455 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.456 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.461 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.462 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapffca14da-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.462 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapffca14da-f5, col_values=(('external_ids', {'iface-id': 'ffca14da-f5e6-48fb-820a-efe53de40c61', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:a4:b8', 'vm-uuid': '419232fb-c874-411a-b4a8-cad39708c3aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:54:31 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.513 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:31 np0005603609 NetworkManager[49064]: <info>  [1769849671.5138] manager: (tapffca14da-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.516 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.519 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.520 221554 INFO os_vif [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:a4:b8,bridge_name='br-int',has_traffic_filtering=True,id=ffca14da-f5e6-48fb-820a-efe53de40c61,network=Network(e71e82f1-2476-4d79-9b64-3d07204593df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffca14da-f5')#033[00m
Jan 31 03:54:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:31.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.568 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.568 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.569 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:20:a4:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.569 221554 INFO nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Using config drive#033[00m
Jan 31 03:54:31 np0005603609 nova_compute[221550]: 2026-01-31 08:54:31.594 221554 DEBUG nova.storage.rbd_utils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 419232fb-c874-411a-b4a8-cad39708c3aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.229 221554 INFO nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Creating config drive at /var/lib/nova/instances/419232fb-c874-411a-b4a8-cad39708c3aa/disk.config#033[00m
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.235 221554 DEBUG oslo_concurrency.processutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/419232fb-c874-411a-b4a8-cad39708c3aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpuubka9em execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.259 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.312 221554 DEBUG nova.network.neutron [req-ec248afb-08f1-4009-acfc-a88f5a452983 req-2d6ca5de-e894-4924-bee5-0d03bcd0efee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Updated VIF entry in instance network info cache for port ffca14da-f5e6-48fb-820a-efe53de40c61. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.313 221554 DEBUG nova.network.neutron [req-ec248afb-08f1-4009-acfc-a88f5a452983 req-2d6ca5de-e894-4924-bee5-0d03bcd0efee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Updating instance_info_cache with network_info: [{"id": "ffca14da-f5e6-48fb-820a-efe53de40c61", "address": "fa:16:3e:20:a4:b8", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffca14da-f5", "ovs_interfaceid": "ffca14da-f5e6-48fb-820a-efe53de40c61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.343 221554 DEBUG oslo_concurrency.lockutils [req-ec248afb-08f1-4009-acfc-a88f5a452983 req-2d6ca5de-e894-4924-bee5-0d03bcd0efee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-419232fb-c874-411a-b4a8-cad39708c3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.363 221554 DEBUG oslo_concurrency.processutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/419232fb-c874-411a-b4a8-cad39708c3aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpuubka9em" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.400 221554 DEBUG nova.storage.rbd_utils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 419232fb-c874-411a-b4a8-cad39708c3aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.405 221554 DEBUG oslo_concurrency.processutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/419232fb-c874-411a-b4a8-cad39708c3aa/disk.config 419232fb-c874-411a-b4a8-cad39708c3aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.564 221554 DEBUG oslo_concurrency.processutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/419232fb-c874-411a-b4a8-cad39708c3aa/disk.config 419232fb-c874-411a-b4a8-cad39708c3aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.565 221554 INFO nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Deleting local config drive /var/lib/nova/instances/419232fb-c874-411a-b4a8-cad39708c3aa/disk.config because it was imported into RBD.#033[00m
Jan 31 03:54:32 np0005603609 kernel: tapffca14da-f5: entered promiscuous mode
Jan 31 03:54:32 np0005603609 NetworkManager[49064]: <info>  [1769849672.6081] manager: (tapffca14da-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/438)
Jan 31 03:54:32 np0005603609 systemd-udevd[311155]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.654 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:54:32Z|00950|binding|INFO|Claiming lport ffca14da-f5e6-48fb-820a-efe53de40c61 for this chassis.
Jan 31 03:54:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:54:32Z|00951|binding|INFO|ffca14da-f5e6-48fb-820a-efe53de40c61: Claiming fa:16:3e:20:a4:b8 10.100.0.6
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.656 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.658 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:32 np0005603609 NetworkManager[49064]: <info>  [1769849672.6650] device (tapffca14da-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:54:32 np0005603609 NetworkManager[49064]: <info>  [1769849672.6658] device (tapffca14da-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.668 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:a4:b8 10.100.0.6'], port_security=['fa:16:3e:20:a4:b8 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '419232fb-c874-411a-b4a8-cad39708c3aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e71e82f1-2476-4d79-9b64-3d07204593df', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f12bee1-cae7-4483-84ca-fadfee414f23', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2f59b70-6149-48e7-81ee-ebb34275d7e4, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=ffca14da-f5e6-48fb-820a-efe53de40c61) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.670 140058 INFO neutron.agent.ovn.metadata.agent [-] Port ffca14da-f5e6-48fb-820a-efe53de40c61 in datapath e71e82f1-2476-4d79-9b64-3d07204593df bound to our chassis#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.671 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e71e82f1-2476-4d79-9b64-3d07204593df#033[00m
Jan 31 03:54:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:54:32Z|00952|binding|INFO|Setting lport ffca14da-f5e6-48fb-820a-efe53de40c61 ovn-installed in OVS
Jan 31 03:54:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:54:32Z|00953|binding|INFO|Setting lport ffca14da-f5e6-48fb-820a-efe53de40c61 up in Southbound
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.679 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5f47912a-3cfc-4c90-bf2d-95eba632d82e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.680 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape71e82f1-21 in ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:54:32 np0005603609 systemd-machined[190912]: New machine qemu-112-instance-000000cc.
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.681 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.682 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape71e82f1-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.683 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[30e894a4-dc07-433d-8771-5b1c12f99f92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.684 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[20a6ac26-3105-49fe-879c-f6d6e9feb9d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.692 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc470f5-a509-4003-bd72-168d3c829c07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:32 np0005603609 systemd[1]: Started Virtual Machine qemu-112-instance-000000cc.
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.714 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[32f8ece9-0407-4a3c-be77-612a379f72b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.740 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[5dde27aa-7b6f-4926-864a-fab3f57f4684]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:32 np0005603609 NetworkManager[49064]: <info>  [1769849672.7465] manager: (tape71e82f1-20): new Veth device (/org/freedesktop/NetworkManager/Devices/439)
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.745 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb39607-aac5-45c2-a0bb-cc511a6d6787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:32 np0005603609 systemd-udevd[311158]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.778 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[1143ce7e-a41a-4181-bb2d-ac2e70c07a3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.780 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[35f25755-b7d5-4a8e-bb48-d7b1a1747116]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:32 np0005603609 NetworkManager[49064]: <info>  [1769849672.7994] device (tape71e82f1-20): carrier: link connected
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.805 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[684cb82b-cf07-4117-9f21-9ecc5a607652]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.819 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8338699e-f377-4b65-9ec4-4af0c45437c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape71e82f1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:3c:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 991249, 'reachable_time': 31473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311191, 'error': None, 'target': 'ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.831 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0d07ca21-cb87-400d-9c99-f1ba0460d303]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:3c1c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 991249, 'tstamp': 991249}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311192, 'error': None, 'target': 'ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.844 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d7224b6f-d4c6-4c28-8db4-a78c233eb971]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape71e82f1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:3c:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 991249, 'reachable_time': 31473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311193, 'error': None, 'target': 'ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.872 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7497fd-783f-4690-9fcd-efb812211bde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:32.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.926 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1655d444-db1f-4148-8525-8db61555641a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.927 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape71e82f1-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.927 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.928 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape71e82f1-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:32 np0005603609 NetworkManager[49064]: <info>  [1769849672.9304] manager: (tape71e82f1-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/440)
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.929 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:32 np0005603609 kernel: tape71e82f1-20: entered promiscuous mode
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.935 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.938 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape71e82f1-20, col_values=(('external_ids', {'iface-id': '40798090-132f-4914-b176-6fbb940ac583'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:32 np0005603609 ovn_controller[130359]: 2026-01-31T08:54:32Z|00954|binding|INFO|Releasing lport 40798090-132f-4914-b176-6fbb940ac583 from this chassis (sb_readonly=0)
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.939 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.942 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e71e82f1-2476-4d79-9b64-3d07204593df.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e71e82f1-2476-4d79-9b64-3d07204593df.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.943 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[125b50c8-764e-438f-a787-020db0df6dba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.944 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-e71e82f1-2476-4d79-9b64-3d07204593df
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/e71e82f1-2476-4d79-9b64-3d07204593df.pid.haproxy
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID e71e82f1-2476-4d79-9b64-3d07204593df
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:54:32 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:32.944 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df', 'env', 'PROCESS_TAG=haproxy-e71e82f1-2476-4d79-9b64-3d07204593df', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e71e82f1-2476-4d79-9b64-3d07204593df.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:54:32 np0005603609 nova_compute[221550]: 2026-01-31 08:54:32.946 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:33 np0005603609 podman[311225]: 2026-01-31 08:54:33.316452571 +0000 UTC m=+0.053057655 container create 476bad2e637492b657c5a5c1c95f080238ac2c2b73b09fe8a235fe30872968d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:54:33 np0005603609 systemd[1]: Started libpod-conmon-476bad2e637492b657c5a5c1c95f080238ac2c2b73b09fe8a235fe30872968d2.scope.
Jan 31 03:54:33 np0005603609 podman[311225]: 2026-01-31 08:54:33.286053366 +0000 UTC m=+0.022658470 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:54:33 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:54:33 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec040367ca5befadb6556f8509951ce5bc07fa86c27d9bef35d941d2c59a424/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:54:33 np0005603609 podman[311225]: 2026-01-31 08:54:33.410861008 +0000 UTC m=+0.147466112 container init 476bad2e637492b657c5a5c1c95f080238ac2c2b73b09fe8a235fe30872968d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:54:33 np0005603609 podman[311225]: 2026-01-31 08:54:33.415220023 +0000 UTC m=+0.151825107 container start 476bad2e637492b657c5a5c1c95f080238ac2c2b73b09fe8a235fe30872968d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:54:33 np0005603609 neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df[311279]: [NOTICE]   (311288) : New worker (311290) forked
Jan 31 03:54:33 np0005603609 neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df[311279]: [NOTICE]   (311288) : Loading success.
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.457 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849673.4564893, 419232fb-c874-411a-b4a8-cad39708c3aa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.458 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] VM Started (Lifecycle Event)#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.502 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.507 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849673.456606, 419232fb-c874-411a-b4a8-cad39708c3aa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.507 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.513 221554 DEBUG nova.compute.manager [req-e761035b-8750-4ff0-8db4-7f22f6050310 req-73a614de-cfa1-40c4-a797-f2cde7b5ac5b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Received event network-vif-plugged-ffca14da-f5e6-48fb-820a-efe53de40c61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.517 221554 DEBUG oslo_concurrency.lockutils [req-e761035b-8750-4ff0-8db4-7f22f6050310 req-73a614de-cfa1-40c4-a797-f2cde7b5ac5b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.518 221554 DEBUG oslo_concurrency.lockutils [req-e761035b-8750-4ff0-8db4-7f22f6050310 req-73a614de-cfa1-40c4-a797-f2cde7b5ac5b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.518 221554 DEBUG oslo_concurrency.lockutils [req-e761035b-8750-4ff0-8db4-7f22f6050310 req-73a614de-cfa1-40c4-a797-f2cde7b5ac5b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.518 221554 DEBUG nova.compute.manager [req-e761035b-8750-4ff0-8db4-7f22f6050310 req-73a614de-cfa1-40c4-a797-f2cde7b5ac5b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Processing event network-vif-plugged-ffca14da-f5e6-48fb-820a-efe53de40c61 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.520 221554 DEBUG nova.compute.manager [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.523 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.525 221554 INFO nova.virt.libvirt.driver [-] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Instance spawned successfully.#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.526 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.544 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.548 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849673.5230386, 419232fb-c874-411a-b4a8-cad39708c3aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.549 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.569 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.569 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.570 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:54:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.570 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:54:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:33.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.571 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.571 221554 DEBUG nova.virt.libvirt.driver [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.576 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.579 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.614 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.646 221554 INFO nova.compute.manager [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Took 8.94 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.646 221554 DEBUG nova.compute.manager [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.808 221554 INFO nova.compute.manager [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Took 10.29 seconds to build instance.#033[00m
Jan 31 03:54:33 np0005603609 nova_compute[221550]: 2026-01-31 08:54:33.847 221554 DEBUG oslo_concurrency.lockutils [None req-f317bec7-0646-40df-85a7-feec6cab707d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "419232fb-c874-411a-b4a8-cad39708c3aa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:34.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:54:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:35.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:54:35 np0005603609 nova_compute[221550]: 2026-01-31 08:54:35.650 221554 DEBUG nova.compute.manager [req-8b4fd17d-579c-4d40-984f-da9cf2e98713 req-a57fdb4c-6aeb-4f0c-9d2c-9706da35d1f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Received event network-vif-plugged-ffca14da-f5e6-48fb-820a-efe53de40c61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:54:35 np0005603609 nova_compute[221550]: 2026-01-31 08:54:35.650 221554 DEBUG oslo_concurrency.lockutils [req-8b4fd17d-579c-4d40-984f-da9cf2e98713 req-a57fdb4c-6aeb-4f0c-9d2c-9706da35d1f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:35 np0005603609 nova_compute[221550]: 2026-01-31 08:54:35.651 221554 DEBUG oslo_concurrency.lockutils [req-8b4fd17d-579c-4d40-984f-da9cf2e98713 req-a57fdb4c-6aeb-4f0c-9d2c-9706da35d1f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:35 np0005603609 nova_compute[221550]: 2026-01-31 08:54:35.651 221554 DEBUG oslo_concurrency.lockutils [req-8b4fd17d-579c-4d40-984f-da9cf2e98713 req-a57fdb4c-6aeb-4f0c-9d2c-9706da35d1f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:35 np0005603609 nova_compute[221550]: 2026-01-31 08:54:35.652 221554 DEBUG nova.compute.manager [req-8b4fd17d-579c-4d40-984f-da9cf2e98713 req-a57fdb4c-6aeb-4f0c-9d2c-9706da35d1f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] No waiting events found dispatching network-vif-plugged-ffca14da-f5e6-48fb-820a-efe53de40c61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:54:35 np0005603609 nova_compute[221550]: 2026-01-31 08:54:35.652 221554 WARNING nova.compute.manager [req-8b4fd17d-579c-4d40-984f-da9cf2e98713 req-a57fdb4c-6aeb-4f0c-9d2c-9706da35d1f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Received unexpected event network-vif-plugged-ffca14da-f5e6-48fb-820a-efe53de40c61 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:54:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:36 np0005603609 nova_compute[221550]: 2026-01-31 08:54:36.514 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:36.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:37 np0005603609 nova_compute[221550]: 2026-01-31 08:54:37.241 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:37 np0005603609 nova_compute[221550]: 2026-01-31 08:54:37.436 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:37 np0005603609 NetworkManager[49064]: <info>  [1769849677.4383] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Jan 31 03:54:37 np0005603609 NetworkManager[49064]: <info>  [1769849677.4393] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/442)
Jan 31 03:54:37 np0005603609 nova_compute[221550]: 2026-01-31 08:54:37.477 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:37 np0005603609 ovn_controller[130359]: 2026-01-31T08:54:37Z|00955|binding|INFO|Releasing lport 40798090-132f-4914-b176-6fbb940ac583 from this chassis (sb_readonly=0)
Jan 31 03:54:37 np0005603609 nova_compute[221550]: 2026-01-31 08:54:37.493 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:37.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:38 np0005603609 nova_compute[221550]: 2026-01-31 08:54:38.045 221554 DEBUG nova.compute.manager [req-19d17317-49d3-4f7b-81b4-acbac219cd79 req-80590059-1700-4cbd-a9ee-9dfa3c6e9140 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Received event network-changed-ffca14da-f5e6-48fb-820a-efe53de40c61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:54:38 np0005603609 nova_compute[221550]: 2026-01-31 08:54:38.046 221554 DEBUG nova.compute.manager [req-19d17317-49d3-4f7b-81b4-acbac219cd79 req-80590059-1700-4cbd-a9ee-9dfa3c6e9140 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Refreshing instance network info cache due to event network-changed-ffca14da-f5e6-48fb-820a-efe53de40c61. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:54:38 np0005603609 nova_compute[221550]: 2026-01-31 08:54:38.046 221554 DEBUG oslo_concurrency.lockutils [req-19d17317-49d3-4f7b-81b4-acbac219cd79 req-80590059-1700-4cbd-a9ee-9dfa3c6e9140 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-419232fb-c874-411a-b4a8-cad39708c3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:54:38 np0005603609 nova_compute[221550]: 2026-01-31 08:54:38.046 221554 DEBUG oslo_concurrency.lockutils [req-19d17317-49d3-4f7b-81b4-acbac219cd79 req-80590059-1700-4cbd-a9ee-9dfa3c6e9140 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-419232fb-c874-411a-b4a8-cad39708c3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:54:38 np0005603609 nova_compute[221550]: 2026-01-31 08:54:38.046 221554 DEBUG nova.network.neutron [req-19d17317-49d3-4f7b-81b4-acbac219cd79 req-80590059-1700-4cbd-a9ee-9dfa3c6e9140 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Refreshing network info cache for port ffca14da-f5e6-48fb-820a-efe53de40c61 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:54:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:38.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:39 np0005603609 nova_compute[221550]: 2026-01-31 08:54:39.564 221554 DEBUG nova.network.neutron [req-19d17317-49d3-4f7b-81b4-acbac219cd79 req-80590059-1700-4cbd-a9ee-9dfa3c6e9140 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Updated VIF entry in instance network info cache for port ffca14da-f5e6-48fb-820a-efe53de40c61. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:54:39 np0005603609 nova_compute[221550]: 2026-01-31 08:54:39.565 221554 DEBUG nova.network.neutron [req-19d17317-49d3-4f7b-81b4-acbac219cd79 req-80590059-1700-4cbd-a9ee-9dfa3c6e9140 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Updating instance_info_cache with network_info: [{"id": "ffca14da-f5e6-48fb-820a-efe53de40c61", "address": "fa:16:3e:20:a4:b8", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffca14da-f5", "ovs_interfaceid": "ffca14da-f5e6-48fb-820a-efe53de40c61", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:54:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:39.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:39 np0005603609 nova_compute[221550]: 2026-01-31 08:54:39.600 221554 DEBUG oslo_concurrency.lockutils [req-19d17317-49d3-4f7b-81b4-acbac219cd79 req-80590059-1700-4cbd-a9ee-9dfa3c6e9140 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-419232fb-c874-411a-b4a8-cad39708c3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:54:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:40.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:41 np0005603609 nova_compute[221550]: 2026-01-31 08:54:41.519 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:54:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:41.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:54:42 np0005603609 nova_compute[221550]: 2026-01-31 08:54:42.243 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:42.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:54:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:43.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:54:44 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Jan 31 03:54:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:54:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:44.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:54:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:45.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:46 np0005603609 ovn_controller[130359]: 2026-01-31T08:54:46Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:a4:b8 10.100.0.6
Jan 31 03:54:46 np0005603609 ovn_controller[130359]: 2026-01-31T08:54:46Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:a4:b8 10.100.0.6
Jan 31 03:54:46 np0005603609 nova_compute[221550]: 2026-01-31 08:54:46.522 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:54:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:46.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:54:47 np0005603609 nova_compute[221550]: 2026-01-31 08:54:47.246 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:47.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:48.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:49.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:54:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:50.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #178. Immutable memtables: 0.
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:51.227118) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 178
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691227181, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 762, "num_deletes": 252, "total_data_size": 1355515, "memory_usage": 1375088, "flush_reason": "Manual Compaction"}
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #179: started
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e396 e396: 3 total, 3 up, 3 in
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691242784, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 179, "file_size": 894100, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86659, "largest_seqno": 87416, "table_properties": {"data_size": 890372, "index_size": 1507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8760, "raw_average_key_size": 19, "raw_value_size": 882718, "raw_average_value_size": 1997, "num_data_blocks": 66, "num_entries": 442, "num_filter_entries": 442, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849645, "oldest_key_time": 1769849645, "file_creation_time": 1769849691, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 15743 microseconds, and 2725 cpu microseconds.
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:51.242856) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #179: 894100 bytes OK
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:51.242895) [db/memtable_list.cc:519] [default] Level-0 commit table #179 started
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:51.245084) [db/memtable_list.cc:722] [default] Level-0 commit table #179: memtable #1 done
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:51.245105) EVENT_LOG_v1 {"time_micros": 1769849691245099, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:51.245133) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 1351465, prev total WAL file size 1351506, number of live WAL files 2.
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000175.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:51.245688) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [179(873KB)], [177(13MB)]
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691245718, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [179], "files_L6": [177], "score": -1, "input_data_size": 15390338, "oldest_snapshot_seqno": -1}
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #180: 10615 keys, 13406896 bytes, temperature: kUnknown
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691368174, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 180, "file_size": 13406896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13339111, "index_size": 40152, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26565, "raw_key_size": 281360, "raw_average_key_size": 26, "raw_value_size": 13154160, "raw_average_value_size": 1239, "num_data_blocks": 1523, "num_entries": 10615, "num_filter_entries": 10615, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769849691, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 180, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:51.368473) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 13406896 bytes
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:51.369759) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.6 rd, 109.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.8 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(32.2) write-amplify(15.0) OK, records in: 11141, records dropped: 526 output_compression: NoCompression
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:51.369777) EVENT_LOG_v1 {"time_micros": 1769849691369769, "job": 114, "event": "compaction_finished", "compaction_time_micros": 122576, "compaction_time_cpu_micros": 23390, "output_level": 6, "num_output_files": 1, "total_output_size": 13406896, "num_input_records": 11141, "num_output_records": 10615, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691369950, "job": 114, "event": "table_file_deletion", "file_number": 179}
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000177.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691371258, "job": 114, "event": "table_file_deletion", "file_number": 177}
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:51.245625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:51.371312) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:51.371316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:51.371317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:51.371318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:54:51.371320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:51 np0005603609 nova_compute[221550]: 2026-01-31 08:54:51.527 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:54:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:51.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:54:52 np0005603609 nova_compute[221550]: 2026-01-31 08:54:52.248 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:52 np0005603609 nova_compute[221550]: 2026-01-31 08:54:52.783 221554 INFO nova.compute.manager [None req-0c10d1b0-1357-4df2-8c84-93313a10a5b3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Get console output#033[00m
Jan 31 03:54:52 np0005603609 nova_compute[221550]: 2026-01-31 08:54:52.788 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:54:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:54:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:52.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:54:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:53.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e397 e397: 3 total, 3 up, 3 in
Jan 31 03:54:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:54.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:54 np0005603609 nova_compute[221550]: 2026-01-31 08:54:54.976 221554 DEBUG nova.compute.manager [req-bb88dfc8-7a5f-4acf-8289-62e35b7ab9be req-929d093c-af3f-4fd6-a222-93da12510eff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Received event network-changed-ffca14da-f5e6-48fb-820a-efe53de40c61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:54:54 np0005603609 nova_compute[221550]: 2026-01-31 08:54:54.976 221554 DEBUG nova.compute.manager [req-bb88dfc8-7a5f-4acf-8289-62e35b7ab9be req-929d093c-af3f-4fd6-a222-93da12510eff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Refreshing instance network info cache due to event network-changed-ffca14da-f5e6-48fb-820a-efe53de40c61. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:54:54 np0005603609 nova_compute[221550]: 2026-01-31 08:54:54.976 221554 DEBUG oslo_concurrency.lockutils [req-bb88dfc8-7a5f-4acf-8289-62e35b7ab9be req-929d093c-af3f-4fd6-a222-93da12510eff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-419232fb-c874-411a-b4a8-cad39708c3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:54:54 np0005603609 nova_compute[221550]: 2026-01-31 08:54:54.977 221554 DEBUG oslo_concurrency.lockutils [req-bb88dfc8-7a5f-4acf-8289-62e35b7ab9be req-929d093c-af3f-4fd6-a222-93da12510eff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-419232fb-c874-411a-b4a8-cad39708c3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:54:54 np0005603609 nova_compute[221550]: 2026-01-31 08:54:54.977 221554 DEBUG nova.network.neutron [req-bb88dfc8-7a5f-4acf-8289-62e35b7ab9be req-929d093c-af3f-4fd6-a222-93da12510eff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Refreshing network info cache for port ffca14da-f5e6-48fb-820a-efe53de40c61 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:54:55 np0005603609 podman[311305]: 2026-01-31 08:54:55.184712527 +0000 UTC m=+0.058029286 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Jan 31 03:54:55 np0005603609 podman[311304]: 2026-01-31 08:54:55.208636166 +0000 UTC m=+0.087893599 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 03:54:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:54:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:55.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:54:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e398 e398: 3 total, 3 up, 3 in
Jan 31 03:54:56 np0005603609 nova_compute[221550]: 2026-01-31 08:54:56.529 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:56.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:56 np0005603609 nova_compute[221550]: 2026-01-31 08:54:56.994 221554 DEBUG nova.network.neutron [req-bb88dfc8-7a5f-4acf-8289-62e35b7ab9be req-929d093c-af3f-4fd6-a222-93da12510eff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Updated VIF entry in instance network info cache for port ffca14da-f5e6-48fb-820a-efe53de40c61. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:54:56 np0005603609 nova_compute[221550]: 2026-01-31 08:54:56.994 221554 DEBUG nova.network.neutron [req-bb88dfc8-7a5f-4acf-8289-62e35b7ab9be req-929d093c-af3f-4fd6-a222-93da12510eff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Updating instance_info_cache with network_info: [{"id": "ffca14da-f5e6-48fb-820a-efe53de40c61", "address": "fa:16:3e:20:a4:b8", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffca14da-f5", "ovs_interfaceid": "ffca14da-f5e6-48fb-820a-efe53de40c61", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:54:57 np0005603609 nova_compute[221550]: 2026-01-31 08:54:57.049 221554 DEBUG oslo_concurrency.lockutils [req-bb88dfc8-7a5f-4acf-8289-62e35b7ab9be req-929d093c-af3f-4fd6-a222-93da12510eff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-419232fb-c874-411a-b4a8-cad39708c3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:54:57 np0005603609 nova_compute[221550]: 2026-01-31 08:54:57.251 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e399 e399: 3 total, 3 up, 3 in
Jan 31 03:54:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:57.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:58.055 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=95, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=94) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:54:58 np0005603609 nova_compute[221550]: 2026-01-31 08:54:58.056 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:58 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:54:58.057 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:54:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:58.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:59 np0005603609 nova_compute[221550]: 2026-01-31 08:54:59.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:54:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:54:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:59.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:00.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:01 np0005603609 nova_compute[221550]: 2026-01-31 08:55:01.533 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:01.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:02 np0005603609 nova_compute[221550]: 2026-01-31 08:55:02.252 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:02 np0005603609 nova_compute[221550]: 2026-01-31 08:55:02.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:02 np0005603609 nova_compute[221550]: 2026-01-31 08:55:02.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:02.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:03.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:04.059 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '95'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:55:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 e400: 3 total, 3 up, 3 in
Jan 31 03:55:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:55:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:04.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:55:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:05.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:06 np0005603609 nova_compute[221550]: 2026-01-31 08:55:06.536 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:06 np0005603609 nova_compute[221550]: 2026-01-31 08:55:06.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:06 np0005603609 nova_compute[221550]: 2026-01-31 08:55:06.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:55:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:55:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:06.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:55:07 np0005603609 nova_compute[221550]: 2026-01-31 08:55:07.253 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:07.547 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:55:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:07.547 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:55:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:07.548 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:55:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:07.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:08.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:09.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:10 np0005603609 nova_compute[221550]: 2026-01-31 08:55:10.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:10 np0005603609 nova_compute[221550]: 2026-01-31 08:55:10.771 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:55:10 np0005603609 nova_compute[221550]: 2026-01-31 08:55:10.772 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:55:10 np0005603609 nova_compute[221550]: 2026-01-31 08:55:10.772 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:55:10 np0005603609 nova_compute[221550]: 2026-01-31 08:55:10.773 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:55:10 np0005603609 nova_compute[221550]: 2026-01-31 08:55:10.773 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:55:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:55:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:10.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:55:11 np0005603609 nova_compute[221550]: 2026-01-31 08:55:11.202 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:55:11 np0005603609 nova_compute[221550]: 2026-01-31 08:55:11.338 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000cc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:55:11 np0005603609 nova_compute[221550]: 2026-01-31 08:55:11.339 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000cc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:55:11 np0005603609 nova_compute[221550]: 2026-01-31 08:55:11.484 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:55:11 np0005603609 nova_compute[221550]: 2026-01-31 08:55:11.486 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4016MB free_disk=20.876312255859375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:55:11 np0005603609 nova_compute[221550]: 2026-01-31 08:55:11.486 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:55:11 np0005603609 nova_compute[221550]: 2026-01-31 08:55:11.487 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:55:11 np0005603609 nova_compute[221550]: 2026-01-31 08:55:11.539 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:11 np0005603609 nova_compute[221550]: 2026-01-31 08:55:11.614 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 419232fb-c874-411a-b4a8-cad39708c3aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:55:11 np0005603609 nova_compute[221550]: 2026-01-31 08:55:11.614 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:55:11 np0005603609 nova_compute[221550]: 2026-01-31 08:55:11.614 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:55:11 np0005603609 nova_compute[221550]: 2026-01-31 08:55:11.675 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:55:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:11.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:55:12 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3939374713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:55:12 np0005603609 nova_compute[221550]: 2026-01-31 08:55:12.110 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:55:12 np0005603609 nova_compute[221550]: 2026-01-31 08:55:12.114 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:55:12 np0005603609 nova_compute[221550]: 2026-01-31 08:55:12.144 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:55:12 np0005603609 nova_compute[221550]: 2026-01-31 08:55:12.247 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:55:12 np0005603609 nova_compute[221550]: 2026-01-31 08:55:12.247 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:55:12 np0005603609 nova_compute[221550]: 2026-01-31 08:55:12.254 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:55:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:12.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:55:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:13.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:14 np0005603609 nova_compute[221550]: 2026-01-31 08:55:14.247 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:14 np0005603609 nova_compute[221550]: 2026-01-31 08:55:14.248 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:55:14 np0005603609 nova_compute[221550]: 2026-01-31 08:55:14.248 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:55:14 np0005603609 nova_compute[221550]: 2026-01-31 08:55:14.899 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-419232fb-c874-411a-b4a8-cad39708c3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:55:14 np0005603609 nova_compute[221550]: 2026-01-31 08:55:14.900 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-419232fb-c874-411a-b4a8-cad39708c3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:55:14 np0005603609 nova_compute[221550]: 2026-01-31 08:55:14.900 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:55:14 np0005603609 nova_compute[221550]: 2026-01-31 08:55:14.901 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 419232fb-c874-411a-b4a8-cad39708c3aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:55:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:14.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:15.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:16 np0005603609 nova_compute[221550]: 2026-01-31 08:55:16.542 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:16.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:17 np0005603609 nova_compute[221550]: 2026-01-31 08:55:17.256 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:55:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:17.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:55:18 np0005603609 nova_compute[221550]: 2026-01-31 08:55:18.257 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Updating instance_info_cache with network_info: [{"id": "ffca14da-f5e6-48fb-820a-efe53de40c61", "address": "fa:16:3e:20:a4:b8", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffca14da-f5", "ovs_interfaceid": "ffca14da-f5e6-48fb-820a-efe53de40c61", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:55:18 np0005603609 nova_compute[221550]: 2026-01-31 08:55:18.393 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-419232fb-c874-411a-b4a8-cad39708c3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:55:18 np0005603609 nova_compute[221550]: 2026-01-31 08:55:18.393 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:55:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:18.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:19.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:20.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:21 np0005603609 nova_compute[221550]: 2026-01-31 08:55:21.544 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:21 np0005603609 nova_compute[221550]: 2026-01-31 08:55:21.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:21 np0005603609 nova_compute[221550]: 2026-01-31 08:55:21.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:21.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:22 np0005603609 nova_compute[221550]: 2026-01-31 08:55:22.260 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:22.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:23.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:55:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:24.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:55:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:55:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:25.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:55:26 np0005603609 podman[311396]: 2026-01-31 08:55:26.162695243 +0000 UTC m=+0.050040993 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:55:26 np0005603609 podman[311395]: 2026-01-31 08:55:26.179578682 +0000 UTC m=+0.069097374 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:55:26 np0005603609 nova_compute[221550]: 2026-01-31 08:55:26.546 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:55:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:26.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:55:27 np0005603609 nova_compute[221550]: 2026-01-31 08:55:27.263 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:27 np0005603609 nova_compute[221550]: 2026-01-31 08:55:27.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:27.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:55:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:28.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:55:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:29.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:30.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:31 np0005603609 nova_compute[221550]: 2026-01-31 08:55:31.580 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:31.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:32 np0005603609 podman[311611]: 2026-01-31 08:55:32.09448013 +0000 UTC m=+0.053141407 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 03:55:32 np0005603609 podman[311611]: 2026-01-31 08:55:32.188313432 +0000 UTC m=+0.146974719 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:55:32 np0005603609 nova_compute[221550]: 2026-01-31 08:55:32.265 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:55:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:32.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:55:33 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:55:33 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:55:33 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:55:33 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:55:33 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:55:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:33.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:55:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:34.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:55:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:55:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:35.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:55:36 np0005603609 nova_compute[221550]: 2026-01-31 08:55:36.584 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:36.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:37 np0005603609 nova_compute[221550]: 2026-01-31 08:55:37.265 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:37 np0005603609 nova_compute[221550]: 2026-01-31 08:55:37.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:37.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:38.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:39.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:55:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:55:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:40.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:41 np0005603609 nova_compute[221550]: 2026-01-31 08:55:41.637 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:41.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:42 np0005603609 nova_compute[221550]: 2026-01-31 08:55:42.267 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:42.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:43.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:44.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:45.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:46 np0005603609 nova_compute[221550]: 2026-01-31 08:55:46.638 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:46.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:47 np0005603609 nova_compute[221550]: 2026-01-31 08:55:47.269 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:47.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:48.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:55:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:49.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:55:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:55:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:50.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:55:51 np0005603609 nova_compute[221550]: 2026-01-31 08:55:51.640 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:51.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:52 np0005603609 nova_compute[221550]: 2026-01-31 08:55:52.271 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:52 np0005603609 nova_compute[221550]: 2026-01-31 08:55:52.790 221554 DEBUG oslo_concurrency.lockutils [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "419232fb-c874-411a-b4a8-cad39708c3aa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:55:52 np0005603609 nova_compute[221550]: 2026-01-31 08:55:52.791 221554 DEBUG oslo_concurrency.lockutils [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "419232fb-c874-411a-b4a8-cad39708c3aa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:55:52 np0005603609 nova_compute[221550]: 2026-01-31 08:55:52.791 221554 DEBUG oslo_concurrency.lockutils [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:55:52 np0005603609 nova_compute[221550]: 2026-01-31 08:55:52.791 221554 DEBUG oslo_concurrency.lockutils [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:55:52 np0005603609 nova_compute[221550]: 2026-01-31 08:55:52.792 221554 DEBUG oslo_concurrency.lockutils [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:55:52 np0005603609 nova_compute[221550]: 2026-01-31 08:55:52.793 221554 INFO nova.compute.manager [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Terminating instance#033[00m
Jan 31 03:55:52 np0005603609 nova_compute[221550]: 2026-01-31 08:55:52.795 221554 DEBUG nova.compute.manager [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:55:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:53.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:53 np0005603609 kernel: tapffca14da-f5 (unregistering): left promiscuous mode
Jan 31 03:55:53 np0005603609 NetworkManager[49064]: <info>  [1769849753.5760] device (tapffca14da-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:55:53 np0005603609 ovn_controller[130359]: 2026-01-31T08:55:53Z|00956|binding|INFO|Releasing lport ffca14da-f5e6-48fb-820a-efe53de40c61 from this chassis (sb_readonly=0)
Jan 31 03:55:53 np0005603609 ovn_controller[130359]: 2026-01-31T08:55:53Z|00957|binding|INFO|Setting lport ffca14da-f5e6-48fb-820a-efe53de40c61 down in Southbound
Jan 31 03:55:53 np0005603609 ovn_controller[130359]: 2026-01-31T08:55:53Z|00958|binding|INFO|Removing iface tapffca14da-f5 ovn-installed in OVS
Jan 31 03:55:53 np0005603609 nova_compute[221550]: 2026-01-31 08:55:53.583 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:53 np0005603609 nova_compute[221550]: 2026-01-31 08:55:53.585 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:53 np0005603609 nova_compute[221550]: 2026-01-31 08:55:53.591 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:53 np0005603609 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d000000cc.scope: Deactivated successfully.
Jan 31 03:55:53 np0005603609 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d000000cc.scope: Consumed 15.329s CPU time.
Jan 31 03:55:53 np0005603609 systemd-machined[190912]: Machine qemu-112-instance-000000cc terminated.
Jan 31 03:55:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:53.669 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:a4:b8 10.100.0.6'], port_security=['fa:16:3e:20:a4:b8 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '419232fb-c874-411a-b4a8-cad39708c3aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e71e82f1-2476-4d79-9b64-3d07204593df', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f12bee1-cae7-4483-84ca-fadfee414f23', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2f59b70-6149-48e7-81ee-ebb34275d7e4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=ffca14da-f5e6-48fb-820a-efe53de40c61) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:55:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:53.670 140058 INFO neutron.agent.ovn.metadata.agent [-] Port ffca14da-f5e6-48fb-820a-efe53de40c61 in datapath e71e82f1-2476-4d79-9b64-3d07204593df unbound from our chassis#033[00m
Jan 31 03:55:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:53.671 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e71e82f1-2476-4d79-9b64-3d07204593df, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:55:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:53.672 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7b516d-1a9e-45ee-8a47-ad1586c6ec4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:53.672 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df namespace which is not needed anymore#033[00m
Jan 31 03:55:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:53.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:53 np0005603609 nova_compute[221550]: 2026-01-31 08:55:53.839 221554 INFO nova.virt.libvirt.driver [-] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Instance destroyed successfully.#033[00m
Jan 31 03:55:53 np0005603609 nova_compute[221550]: 2026-01-31 08:55:53.840 221554 DEBUG nova.objects.instance [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'resources' on Instance uuid 419232fb-c874-411a-b4a8-cad39708c3aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:55:53 np0005603609 nova_compute[221550]: 2026-01-31 08:55:53.922 221554 DEBUG nova.virt.libvirt.vif [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:54:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1042869875',display_name='tempest-TestNetworkBasicOps-server-1042869875',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1042869875',id=204,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN1syU/vyioUizkyhRVHAKVOmhOYRuzzXHkZyKCpyq7hSq7kIGF92QtCU4tebgULvA5/eUsBZxKQwP+CZaKMYbrjmhq0RAYmqQmZng4wxVlg+sGfdgk+gIM/6U8k6r0hrA==',key_name='tempest-TestNetworkBasicOps-1090043757',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:54:33Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-6gr9v0z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:54:33Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=419232fb-c874-411a-b4a8-cad39708c3aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ffca14da-f5e6-48fb-820a-efe53de40c61", "address": "fa:16:3e:20:a4:b8", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffca14da-f5", "ovs_interfaceid": "ffca14da-f5e6-48fb-820a-efe53de40c61", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:55:53 np0005603609 nova_compute[221550]: 2026-01-31 08:55:53.923 221554 DEBUG nova.network.os_vif_util [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "ffca14da-f5e6-48fb-820a-efe53de40c61", "address": "fa:16:3e:20:a4:b8", "network": {"id": "e71e82f1-2476-4d79-9b64-3d07204593df", "bridge": "br-int", "label": "tempest-network-smoke--1092340605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffca14da-f5", "ovs_interfaceid": "ffca14da-f5e6-48fb-820a-efe53de40c61", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:55:53 np0005603609 nova_compute[221550]: 2026-01-31 08:55:53.923 221554 DEBUG nova.network.os_vif_util [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:20:a4:b8,bridge_name='br-int',has_traffic_filtering=True,id=ffca14da-f5e6-48fb-820a-efe53de40c61,network=Network(e71e82f1-2476-4d79-9b64-3d07204593df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffca14da-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:55:53 np0005603609 nova_compute[221550]: 2026-01-31 08:55:53.924 221554 DEBUG os_vif [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:a4:b8,bridge_name='br-int',has_traffic_filtering=True,id=ffca14da-f5e6-48fb-820a-efe53de40c61,network=Network(e71e82f1-2476-4d79-9b64-3d07204593df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffca14da-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:55:53 np0005603609 nova_compute[221550]: 2026-01-31 08:55:53.925 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:53 np0005603609 nova_compute[221550]: 2026-01-31 08:55:53.925 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapffca14da-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:55:53 np0005603609 nova_compute[221550]: 2026-01-31 08:55:53.927 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:53 np0005603609 nova_compute[221550]: 2026-01-31 08:55:53.929 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:53 np0005603609 nova_compute[221550]: 2026-01-31 08:55:53.932 221554 INFO os_vif [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:a4:b8,bridge_name='br-int',has_traffic_filtering=True,id=ffca14da-f5e6-48fb-820a-efe53de40c61,network=Network(e71e82f1-2476-4d79-9b64-3d07204593df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffca14da-f5')#033[00m
Jan 31 03:55:53 np0005603609 neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df[311279]: [NOTICE]   (311288) : haproxy version is 2.8.14-c23fe91
Jan 31 03:55:53 np0005603609 neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df[311279]: [NOTICE]   (311288) : path to executable is /usr/sbin/haproxy
Jan 31 03:55:53 np0005603609 neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df[311279]: [WARNING]  (311288) : Exiting Master process...
Jan 31 03:55:53 np0005603609 neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df[311279]: [ALERT]    (311288) : Current worker (311290) exited with code 143 (Terminated)
Jan 31 03:55:53 np0005603609 neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df[311279]: [WARNING]  (311288) : All workers exited. Exiting... (0)
Jan 31 03:55:53 np0005603609 systemd[1]: libpod-476bad2e637492b657c5a5c1c95f080238ac2c2b73b09fe8a235fe30872968d2.scope: Deactivated successfully.
Jan 31 03:55:53 np0005603609 podman[311941]: 2026-01-31 08:55:53.976178412 +0000 UTC m=+0.228401192 container died 476bad2e637492b657c5a5c1c95f080238ac2c2b73b09fe8a235fe30872968d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 03:55:54 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-476bad2e637492b657c5a5c1c95f080238ac2c2b73b09fe8a235fe30872968d2-userdata-shm.mount: Deactivated successfully.
Jan 31 03:55:54 np0005603609 systemd[1]: var-lib-containers-storage-overlay-dec040367ca5befadb6556f8509951ce5bc07fa86c27d9bef35d941d2c59a424-merged.mount: Deactivated successfully.
Jan 31 03:55:54 np0005603609 podman[311941]: 2026-01-31 08:55:54.687495023 +0000 UTC m=+0.939717803 container cleanup 476bad2e637492b657c5a5c1c95f080238ac2c2b73b09fe8a235fe30872968d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:55:54 np0005603609 systemd[1]: libpod-conmon-476bad2e637492b657c5a5c1c95f080238ac2c2b73b09fe8a235fe30872968d2.scope: Deactivated successfully.
Jan 31 03:55:54 np0005603609 nova_compute[221550]: 2026-01-31 08:55:54.835 221554 DEBUG nova.compute.manager [req-3f2d7e91-c755-438c-8340-1773df0cb223 req-cb381f31-cba5-411a-a824-222645590ca7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Received event network-vif-unplugged-ffca14da-f5e6-48fb-820a-efe53de40c61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:55:54 np0005603609 nova_compute[221550]: 2026-01-31 08:55:54.835 221554 DEBUG oslo_concurrency.lockutils [req-3f2d7e91-c755-438c-8340-1773df0cb223 req-cb381f31-cba5-411a-a824-222645590ca7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:55:54 np0005603609 nova_compute[221550]: 2026-01-31 08:55:54.836 221554 DEBUG oslo_concurrency.lockutils [req-3f2d7e91-c755-438c-8340-1773df0cb223 req-cb381f31-cba5-411a-a824-222645590ca7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:55:54 np0005603609 nova_compute[221550]: 2026-01-31 08:55:54.836 221554 DEBUG oslo_concurrency.lockutils [req-3f2d7e91-c755-438c-8340-1773df0cb223 req-cb381f31-cba5-411a-a824-222645590ca7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:55:54 np0005603609 nova_compute[221550]: 2026-01-31 08:55:54.836 221554 DEBUG nova.compute.manager [req-3f2d7e91-c755-438c-8340-1773df0cb223 req-cb381f31-cba5-411a-a824-222645590ca7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] No waiting events found dispatching network-vif-unplugged-ffca14da-f5e6-48fb-820a-efe53de40c61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:55:54 np0005603609 nova_compute[221550]: 2026-01-31 08:55:54.837 221554 DEBUG nova.compute.manager [req-3f2d7e91-c755-438c-8340-1773df0cb223 req-cb381f31-cba5-411a-a824-222645590ca7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Received event network-vif-unplugged-ffca14da-f5e6-48fb-820a-efe53de40c61 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:55:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:55:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:55.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:55:55 np0005603609 podman[312001]: 2026-01-31 08:55:55.595656569 +0000 UTC m=+0.886752609 container remove 476bad2e637492b657c5a5c1c95f080238ac2c2b73b09fe8a235fe30872968d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:55:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:55.600 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ca7d06cb-e75b-4de5-be50-dddfc3efaf66]: (4, ('Sat Jan 31 08:55:53 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df (476bad2e637492b657c5a5c1c95f080238ac2c2b73b09fe8a235fe30872968d2)\n476bad2e637492b657c5a5c1c95f080238ac2c2b73b09fe8a235fe30872968d2\nSat Jan 31 08:55:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df (476bad2e637492b657c5a5c1c95f080238ac2c2b73b09fe8a235fe30872968d2)\n476bad2e637492b657c5a5c1c95f080238ac2c2b73b09fe8a235fe30872968d2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:55.602 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1a239f21-d220-45cb-bd59-6a12b623cb95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:55.603 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape71e82f1-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:55:55 np0005603609 nova_compute[221550]: 2026-01-31 08:55:55.606 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:55 np0005603609 kernel: tape71e82f1-20: left promiscuous mode
Jan 31 03:55:55 np0005603609 nova_compute[221550]: 2026-01-31 08:55:55.611 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:55.614 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[94ffa667-acef-450e-9d0f-308fcb346cf5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:55.630 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5b513314-68f9-4aff-ad78-cadee2e07081]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:55.631 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[93c22ffc-551c-4542-8f33-c0ec6ed2291d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:55.644 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[77c19312-fe4d-4768-b34f-93f9e9969d76]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 991243, 'reachable_time': 15733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312017, 'error': None, 'target': 'ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:55.646 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e71e82f1-2476-4d79-9b64-3d07204593df deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:55:55 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:55:55.646 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[4183db34-d3c9-48e2-aaec-0847df1a43bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:55 np0005603609 systemd[1]: run-netns-ovnmeta\x2de71e82f1\x2d2476\x2d4d79\x2d9b64\x2d3d07204593df.mount: Deactivated successfully.
Jan 31 03:55:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:55.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:56 np0005603609 nova_compute[221550]: 2026-01-31 08:55:56.448 221554 INFO nova.virt.libvirt.driver [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Deleting instance files /var/lib/nova/instances/419232fb-c874-411a-b4a8-cad39708c3aa_del#033[00m
Jan 31 03:55:56 np0005603609 nova_compute[221550]: 2026-01-31 08:55:56.450 221554 INFO nova.virt.libvirt.driver [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Deletion of /var/lib/nova/instances/419232fb-c874-411a-b4a8-cad39708c3aa_del complete#033[00m
Jan 31 03:55:56 np0005603609 nova_compute[221550]: 2026-01-31 08:55:56.643 221554 INFO nova.compute.manager [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Took 3.85 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:55:56 np0005603609 nova_compute[221550]: 2026-01-31 08:55:56.643 221554 DEBUG oslo.service.loopingcall [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:55:56 np0005603609 nova_compute[221550]: 2026-01-31 08:55:56.643 221554 DEBUG nova.compute.manager [-] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:55:56 np0005603609 nova_compute[221550]: 2026-01-31 08:55:56.644 221554 DEBUG nova.network.neutron [-] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:55:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:57.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:57 np0005603609 nova_compute[221550]: 2026-01-31 08:55:57.074 221554 DEBUG nova.compute.manager [req-81fe837b-acef-416a-9041-47842f740e39 req-95619335-4582-4887-80e7-4920fb2a229e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Received event network-vif-plugged-ffca14da-f5e6-48fb-820a-efe53de40c61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:55:57 np0005603609 nova_compute[221550]: 2026-01-31 08:55:57.074 221554 DEBUG oslo_concurrency.lockutils [req-81fe837b-acef-416a-9041-47842f740e39 req-95619335-4582-4887-80e7-4920fb2a229e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:55:57 np0005603609 nova_compute[221550]: 2026-01-31 08:55:57.074 221554 DEBUG oslo_concurrency.lockutils [req-81fe837b-acef-416a-9041-47842f740e39 req-95619335-4582-4887-80e7-4920fb2a229e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:55:57 np0005603609 nova_compute[221550]: 2026-01-31 08:55:57.074 221554 DEBUG oslo_concurrency.lockutils [req-81fe837b-acef-416a-9041-47842f740e39 req-95619335-4582-4887-80e7-4920fb2a229e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "419232fb-c874-411a-b4a8-cad39708c3aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:55:57 np0005603609 nova_compute[221550]: 2026-01-31 08:55:57.075 221554 DEBUG nova.compute.manager [req-81fe837b-acef-416a-9041-47842f740e39 req-95619335-4582-4887-80e7-4920fb2a229e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] No waiting events found dispatching network-vif-plugged-ffca14da-f5e6-48fb-820a-efe53de40c61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:55:57 np0005603609 nova_compute[221550]: 2026-01-31 08:55:57.075 221554 WARNING nova.compute.manager [req-81fe837b-acef-416a-9041-47842f740e39 req-95619335-4582-4887-80e7-4920fb2a229e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Received unexpected event network-vif-plugged-ffca14da-f5e6-48fb-820a-efe53de40c61 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:55:57 np0005603609 podman[312020]: 2026-01-31 08:55:57.163698191 +0000 UTC m=+0.050036263 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 03:55:57 np0005603609 podman[312019]: 2026-01-31 08:55:57.192200172 +0000 UTC m=+0.078205365 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 03:55:57 np0005603609 nova_compute[221550]: 2026-01-31 08:55:57.271 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:55:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:57.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:55:58 np0005603609 nova_compute[221550]: 2026-01-31 08:55:58.929 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:59.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:55:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:59.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:00 np0005603609 nova_compute[221550]: 2026-01-31 08:56:00.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:01.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:01 np0005603609 nova_compute[221550]: 2026-01-31 08:56:01.234 221554 DEBUG nova.network.neutron [-] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:56:01 np0005603609 nova_compute[221550]: 2026-01-31 08:56:01.268 221554 INFO nova.compute.manager [-] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Took 4.62 seconds to deallocate network for instance.#033[00m
Jan 31 03:56:01 np0005603609 nova_compute[221550]: 2026-01-31 08:56:01.350 221554 DEBUG nova.compute.manager [req-9815505f-a33e-4134-9001-a2a18b1d65d6 req-89dbfa02-c6d9-4e2c-adf4-e21bddb57772 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Received event network-vif-deleted-ffca14da-f5e6-48fb-820a-efe53de40c61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:56:01 np0005603609 nova_compute[221550]: 2026-01-31 08:56:01.352 221554 DEBUG oslo_concurrency.lockutils [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:01 np0005603609 nova_compute[221550]: 2026-01-31 08:56:01.352 221554 DEBUG oslo_concurrency.lockutils [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:01 np0005603609 nova_compute[221550]: 2026-01-31 08:56:01.434 221554 DEBUG oslo_concurrency.processutils [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:56:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:01.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:56:01 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3242168337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:56:01 np0005603609 nova_compute[221550]: 2026-01-31 08:56:01.864 221554 DEBUG oslo_concurrency.processutils [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:56:01 np0005603609 nova_compute[221550]: 2026-01-31 08:56:01.868 221554 DEBUG nova.compute.provider_tree [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:56:01 np0005603609 nova_compute[221550]: 2026-01-31 08:56:01.891 221554 DEBUG nova.scheduler.client.report [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:56:01 np0005603609 nova_compute[221550]: 2026-01-31 08:56:01.921 221554 DEBUG oslo_concurrency.lockutils [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:02 np0005603609 nova_compute[221550]: 2026-01-31 08:56:02.139 221554 INFO nova.scheduler.client.report [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Deleted allocations for instance 419232fb-c874-411a-b4a8-cad39708c3aa#033[00m
Jan 31 03:56:02 np0005603609 nova_compute[221550]: 2026-01-31 08:56:02.293 221554 DEBUG oslo_concurrency.lockutils [None req-bd879e76-1142-4196-a9ac-226802f296f9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "419232fb-c874-411a-b4a8-cad39708c3aa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:02 np0005603609 nova_compute[221550]: 2026-01-31 08:56:02.323 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:02 np0005603609 nova_compute[221550]: 2026-01-31 08:56:02.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:56:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:03.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:56:03 np0005603609 nova_compute[221550]: 2026-01-31 08:56:03.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:03.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:03 np0005603609 nova_compute[221550]: 2026-01-31 08:56:03.931 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:56:04.102 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=96, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=95) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:56:04 np0005603609 nova_compute[221550]: 2026-01-31 08:56:04.102 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:04 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:56:04.103 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:56:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:56:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1685330573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:56:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:05.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:05.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:07.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:07 np0005603609 nova_compute[221550]: 2026-01-31 08:56:07.325 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:56:07.548 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:56:07.548 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:56:07.549 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:07 np0005603609 nova_compute[221550]: 2026-01-31 08:56:07.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:07 np0005603609 nova_compute[221550]: 2026-01-31 08:56:07.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:56:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:56:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:07.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:56:08 np0005603609 nova_compute[221550]: 2026-01-31 08:56:08.837 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849753.8368423, 419232fb-c874-411a-b4a8-cad39708c3aa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:56:08 np0005603609 nova_compute[221550]: 2026-01-31 08:56:08.838 221554 INFO nova.compute.manager [-] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:56:08 np0005603609 nova_compute[221550]: 2026-01-31 08:56:08.977 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:09.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:56:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:09.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:56:10 np0005603609 nova_compute[221550]: 2026-01-31 08:56:10.077 221554 DEBUG nova.compute.manager [None req-677f8a63-2aae-4f0c-bd05-4cbd23acfca9 - - - - - -] [instance: 419232fb-c874-411a-b4a8-cad39708c3aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:56:10 np0005603609 nova_compute[221550]: 2026-01-31 08:56:10.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:10 np0005603609 nova_compute[221550]: 2026-01-31 08:56:10.808 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:10 np0005603609 nova_compute[221550]: 2026-01-31 08:56:10.808 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:10 np0005603609 nova_compute[221550]: 2026-01-31 08:56:10.809 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:10 np0005603609 nova_compute[221550]: 2026-01-31 08:56:10.809 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:56:10 np0005603609 nova_compute[221550]: 2026-01-31 08:56:10.809 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:56:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:56:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:56:11 np0005603609 nova_compute[221550]: 2026-01-31 08:56:11.160 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:11 np0005603609 nova_compute[221550]: 2026-01-31 08:56:11.201 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:56:11 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2553713859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:56:11 np0005603609 nova_compute[221550]: 2026-01-31 08:56:11.303 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:56:11 np0005603609 nova_compute[221550]: 2026-01-31 08:56:11.432 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:56:11 np0005603609 nova_compute[221550]: 2026-01-31 08:56:11.434 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4202MB free_disk=20.934207916259766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:56:11 np0005603609 nova_compute[221550]: 2026-01-31 08:56:11.434 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:11 np0005603609 nova_compute[221550]: 2026-01-31 08:56:11.434 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:11.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:12 np0005603609 nova_compute[221550]: 2026-01-31 08:56:12.008 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:56:12 np0005603609 nova_compute[221550]: 2026-01-31 08:56:12.009 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:56:12 np0005603609 nova_compute[221550]: 2026-01-31 08:56:12.028 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:56:12 np0005603609 nova_compute[221550]: 2026-01-31 08:56:12.327 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:56:12 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/249953271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:56:12 np0005603609 nova_compute[221550]: 2026-01-31 08:56:12.489 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:56:12 np0005603609 nova_compute[221550]: 2026-01-31 08:56:12.494 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:56:12 np0005603609 nova_compute[221550]: 2026-01-31 08:56:12.542 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:56:12 np0005603609 nova_compute[221550]: 2026-01-31 08:56:12.577 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:56:12 np0005603609 nova_compute[221550]: 2026-01-31 08:56:12.578 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:13.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:56:13.104 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '96'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:56:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:56:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:13.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:56:13 np0005603609 nova_compute[221550]: 2026-01-31 08:56:13.980 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:14 np0005603609 nova_compute[221550]: 2026-01-31 08:56:14.578 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:14 np0005603609 nova_compute[221550]: 2026-01-31 08:56:14.578 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:56:14 np0005603609 nova_compute[221550]: 2026-01-31 08:56:14.578 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:56:14 np0005603609 nova_compute[221550]: 2026-01-31 08:56:14.632 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:56:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:56:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:15.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:56:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:15.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:17.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:17 np0005603609 nova_compute[221550]: 2026-01-31 08:56:17.328 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:17.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:18 np0005603609 nova_compute[221550]: 2026-01-31 08:56:18.709 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:19 np0005603609 nova_compute[221550]: 2026-01-31 08:56:19.013 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:19.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:19.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:56:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:21.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:56:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:21.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:22 np0005603609 nova_compute[221550]: 2026-01-31 08:56:22.372 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:56:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:23.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:56:23 np0005603609 nova_compute[221550]: 2026-01-31 08:56:23.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:23.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:24 np0005603609 nova_compute[221550]: 2026-01-31 08:56:24.016 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e401 e401: 3 total, 3 up, 3 in
Jan 31 03:56:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:56:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:25.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:56:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:56:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:25.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:56:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e402 e402: 3 total, 3 up, 3 in
Jan 31 03:56:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:27.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:27 np0005603609 nova_compute[221550]: 2026-01-31 08:56:27.374 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:27 np0005603609 nova_compute[221550]: 2026-01-31 08:56:27.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:27.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:28 np0005603609 podman[312134]: 2026-01-31 08:56:28.193065955 +0000 UTC m=+0.067825724 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:56:28 np0005603609 podman[312133]: 2026-01-31 08:56:28.214454542 +0000 UTC m=+0.092080130 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:56:29 np0005603609 nova_compute[221550]: 2026-01-31 08:56:29.020 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:29.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:29.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:56:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 63K writes, 249K keys, 63K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.04 MB/s#012Cumulative WAL: 63K writes, 23K syncs, 2.71 writes per sync, written: 0.24 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4106 writes, 15K keys, 4106 commit groups, 1.0 writes per commit group, ingest: 15.38 MB, 0.03 MB/s#012Interval WAL: 4106 writes, 1694 syncs, 2.42 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:56:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e403 e403: 3 total, 3 up, 3 in
Jan 31 03:56:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:31.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:31.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:32 np0005603609 nova_compute[221550]: 2026-01-31 08:56:32.376 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:33.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:33.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:34 np0005603609 nova_compute[221550]: 2026-01-31 08:56:34.049 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:35.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 e404: 3 total, 3 up, 3 in
Jan 31 03:56:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:56:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:35.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:56:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:56:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:37.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:56:37 np0005603609 nova_compute[221550]: 2026-01-31 08:56:37.392 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:37.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:39 np0005603609 nova_compute[221550]: 2026-01-31 08:56:39.053 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:56:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:39.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:56:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:39.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:56:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:41.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:56:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:56:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:41.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:56:42 np0005603609 nova_compute[221550]: 2026-01-31 08:56:42.393 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:42 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:56:42 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:56:42 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:56:42 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:56:42 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:56:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:56:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:43.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:56:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:43.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:56:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1492873453' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:56:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:56:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1492873453' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:56:44 np0005603609 nova_compute[221550]: 2026-01-31 08:56:44.056 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:56:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:45.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:56:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:56:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:45.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:56:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:56:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:47.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:56:47 np0005603609 nova_compute[221550]: 2026-01-31 08:56:47.394 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:47.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:56:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:56:49 np0005603609 nova_compute[221550]: 2026-01-31 08:56:49.059 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:56:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:49.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:56:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:49.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:51.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:56:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:51.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:56:52 np0005603609 nova_compute[221550]: 2026-01-31 08:56:52.397 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:56:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:53.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:56:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:56:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2744281129' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:56:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:56:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2744281129' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:56:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:56:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:53.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:56:54 np0005603609 nova_compute[221550]: 2026-01-31 08:56:54.062 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:56:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:55.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:56:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:55.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:56:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:57.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:56:57 np0005603609 nova_compute[221550]: 2026-01-31 08:56:57.399 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:57.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:59 np0005603609 nova_compute[221550]: 2026-01-31 08:56:59.065 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:59.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:59 np0005603609 podman[312364]: 2026-01-31 08:56:59.160566031 +0000 UTC m=+0.045451640 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:56:59 np0005603609 podman[312363]: 2026-01-31 08:56:59.185184468 +0000 UTC m=+0.071559313 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 31 03:56:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:56:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:56:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:59.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:57:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:01.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:01 np0005603609 nova_compute[221550]: 2026-01-31 08:57:01.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:01.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:02 np0005603609 nova_compute[221550]: 2026-01-31 08:57:02.401 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:57:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.3 total, 600.0 interval#012Cumulative writes: 17K writes, 88K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1569 writes, 7392 keys, 1569 commit groups, 1.0 writes per commit group, ingest: 15.64 MB, 0.03 MB/s#012Interval WAL: 1569 writes, 1569 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     44.3      2.45              0.25        57    0.043       0      0       0.0       0.0#012  L6      1/0   12.79 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.4     76.0     65.2      8.92              1.36        56    0.159    437K    30K       0.0       0.0#012 Sum      1/0   12.79 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.4     59.6     60.7     11.37              1.61       113    0.101    437K    30K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.9     91.1     89.3      0.77              0.17        10    0.077     54K   2581       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0     76.0     65.2      8.92              1.36        56    0.159    437K    30K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     44.8      2.42              0.25        56    0.043       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6600.3 total, 600.0 interval#012Flush(GB): cumulative 0.106, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.67 GB write, 0.10 MB/s write, 0.66 GB read, 0.10 MB/s read, 11.4 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619ad8ed1f0#2 capacity: 304.00 MB usage: 73.19 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000541 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(4214,70.08 MB,23.051%) FilterBlock(113,1.17 MB,0.384617%) IndexBlock(113,1.94 MB,0.63969%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 03:57:02 np0005603609 nova_compute[221550]: 2026-01-31 08:57:02.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:57:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:03.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:57:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:57:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:03.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:57:04 np0005603609 nova_compute[221550]: 2026-01-31 08:57:04.069 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:04 np0005603609 nova_compute[221550]: 2026-01-31 08:57:04.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:05.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:57:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:05.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:57:06 np0005603609 nova_compute[221550]: 2026-01-31 08:57:06.615 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:06 np0005603609 nova_compute[221550]: 2026-01-31 08:57:06.616 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:06 np0005603609 nova_compute[221550]: 2026-01-31 08:57:06.905 221554 DEBUG nova.compute.manager [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:57:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:57:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:07.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:57:07 np0005603609 nova_compute[221550]: 2026-01-31 08:57:07.237 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:07.237 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=97, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=96) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:57:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:07.238 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:57:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:07.238 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '97'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:07 np0005603609 nova_compute[221550]: 2026-01-31 08:57:07.267 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:07 np0005603609 nova_compute[221550]: 2026-01-31 08:57:07.267 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:07 np0005603609 nova_compute[221550]: 2026-01-31 08:57:07.275 221554 DEBUG nova.virt.hardware [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:57:07 np0005603609 nova_compute[221550]: 2026-01-31 08:57:07.275 221554 INFO nova.compute.claims [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 03:57:07 np0005603609 nova_compute[221550]: 2026-01-31 08:57:07.403 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:07 np0005603609 nova_compute[221550]: 2026-01-31 08:57:07.467 221554 DEBUG oslo_concurrency.processutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:07.548 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:07.549 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:07.549 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:57:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3311638129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:57:07 np0005603609 nova_compute[221550]: 2026-01-31 08:57:07.890 221554 DEBUG oslo_concurrency.processutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:07 np0005603609 nova_compute[221550]: 2026-01-31 08:57:07.896 221554 DEBUG nova.compute.provider_tree [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:57:07 np0005603609 nova_compute[221550]: 2026-01-31 08:57:07.919 221554 DEBUG nova.scheduler.client.report [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:57:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:57:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:07.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:57:07 np0005603609 nova_compute[221550]: 2026-01-31 08:57:07.957 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:07 np0005603609 nova_compute[221550]: 2026-01-31 08:57:07.958 221554 DEBUG nova.compute.manager [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.033 221554 DEBUG nova.compute.manager [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.033 221554 DEBUG nova.network.neutron [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.083 221554 INFO nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.113 221554 DEBUG nova.compute.manager [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.246 221554 DEBUG nova.compute.manager [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.247 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.247 221554 INFO nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Creating image(s)#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.274 221554 DEBUG nova.storage.rbd_utils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.300 221554 DEBUG nova.storage.rbd_utils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.323 221554 DEBUG nova.storage.rbd_utils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.325 221554 DEBUG oslo_concurrency.processutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.390 221554 DEBUG oslo_concurrency.processutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.392 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.393 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.393 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.424 221554 DEBUG nova.storage.rbd_utils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.427 221554 DEBUG oslo_concurrency.processutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.450 221554 DEBUG nova.policy [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a56abd8fdd341ae88a99e102ab399de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.604 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.642 221554 WARNING nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.642 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Triggering sync for uuid c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.642 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.852 221554 DEBUG oslo_concurrency.processutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:08 np0005603609 nova_compute[221550]: 2026-01-31 08:57:08.930 221554 DEBUG nova.storage.rbd_utils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] resizing rbd image c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:57:09 np0005603609 nova_compute[221550]: 2026-01-31 08:57:09.053 221554 DEBUG nova.objects.instance [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'migration_context' on Instance uuid c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:57:09 np0005603609 nova_compute[221550]: 2026-01-31 08:57:09.071 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:09.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:09 np0005603609 nova_compute[221550]: 2026-01-31 08:57:09.123 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:57:09 np0005603609 nova_compute[221550]: 2026-01-31 08:57:09.124 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Ensure instance console log exists: /var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:57:09 np0005603609 nova_compute[221550]: 2026-01-31 08:57:09.125 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:09 np0005603609 nova_compute[221550]: 2026-01-31 08:57:09.125 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:09 np0005603609 nova_compute[221550]: 2026-01-31 08:57:09.125 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:09.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #181. Immutable memtables: 0.
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:57:10.283297) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 181
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830283328, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1732, "num_deletes": 253, "total_data_size": 3909550, "memory_usage": 3948864, "flush_reason": "Manual Compaction"}
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #182: started
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830296490, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 182, "file_size": 1592728, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87422, "largest_seqno": 89148, "table_properties": {"data_size": 1587089, "index_size": 2778, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14888, "raw_average_key_size": 21, "raw_value_size": 1574680, "raw_average_value_size": 2255, "num_data_blocks": 124, "num_entries": 698, "num_filter_entries": 698, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849691, "oldest_key_time": 1769849691, "file_creation_time": 1769849830, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 13234 microseconds, and 3681 cpu microseconds.
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:57:10.296531) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #182: 1592728 bytes OK
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:57:10.296550) [db/memtable_list.cc:519] [default] Level-0 commit table #182 started
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:57:10.297786) [db/memtable_list.cc:722] [default] Level-0 commit table #182: memtable #1 done
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:57:10.297804) EVENT_LOG_v1 {"time_micros": 1769849830297797, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:57:10.297821) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 3901709, prev total WAL file size 3901990, number of live WAL files 2.
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000178.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:57:10.298610) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303130' seq:72057594037927935, type:22 .. '6D6772737461740033323631' seq:0, type:0; will stop at (end)
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [182(1555KB)], [180(12MB)]
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830298679, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [182], "files_L6": [180], "score": -1, "input_data_size": 14999624, "oldest_snapshot_seqno": -1}
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #183: 10851 keys, 12099770 bytes, temperature: kUnknown
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830394833, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 183, "file_size": 12099770, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12033071, "index_size": 38490, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27141, "raw_key_size": 286582, "raw_average_key_size": 26, "raw_value_size": 11846778, "raw_average_value_size": 1091, "num_data_blocks": 1457, "num_entries": 10851, "num_filter_entries": 10851, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769849830, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 183, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:57:10.395197) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 12099770 bytes
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:57:10.396395) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.7 rd, 125.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 12.8 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(17.0) write-amplify(7.6) OK, records in: 11313, records dropped: 462 output_compression: NoCompression
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:57:10.396418) EVENT_LOG_v1 {"time_micros": 1769849830396408, "job": 116, "event": "compaction_finished", "compaction_time_micros": 96310, "compaction_time_cpu_micros": 21622, "output_level": 6, "num_output_files": 1, "total_output_size": 12099770, "num_input_records": 11313, "num_output_records": 10851, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830396781, "job": 116, "event": "table_file_deletion", "file_number": 182}
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000180.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830398806, "job": 116, "event": "table_file_deletion", "file_number": 180}
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:57:10.298442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:57:10.398837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:57:10.398841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:57:10.398844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:57:10.398846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:57:10 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:57:10.398848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:57:10 np0005603609 nova_compute[221550]: 2026-01-31 08:57:10.517 221554 DEBUG nova.network.neutron [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Successfully created port: 9ae8abb4-0619-4212-857c-5e62212e10f1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:57:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:11.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:57:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:11.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:57:12 np0005603609 nova_compute[221550]: 2026-01-31 08:57:12.404 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:12 np0005603609 nova_compute[221550]: 2026-01-31 08:57:12.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:12 np0005603609 nova_compute[221550]: 2026-01-31 08:57:12.826 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:12 np0005603609 nova_compute[221550]: 2026-01-31 08:57:12.826 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:12 np0005603609 nova_compute[221550]: 2026-01-31 08:57:12.827 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:12 np0005603609 nova_compute[221550]: 2026-01-31 08:57:12.827 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:57:12 np0005603609 nova_compute[221550]: 2026-01-31 08:57:12.827 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:12 np0005603609 nova_compute[221550]: 2026-01-31 08:57:12.969 221554 DEBUG nova.network.neutron [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Successfully updated port: 9ae8abb4-0619-4212-857c-5e62212e10f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:57:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:57:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:13.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:57:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:57:13 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1209241508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:57:13 np0005603609 nova_compute[221550]: 2026-01-31 08:57:13.250 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:13 np0005603609 nova_compute[221550]: 2026-01-31 08:57:13.382 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:57:13 np0005603609 nova_compute[221550]: 2026-01-31 08:57:13.383 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4189MB free_disk=20.974990844726562GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:57:13 np0005603609 nova_compute[221550]: 2026-01-31 08:57:13.384 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:13 np0005603609 nova_compute[221550]: 2026-01-31 08:57:13.384 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:57:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:13.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:57:14 np0005603609 nova_compute[221550]: 2026-01-31 08:57:14.045 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:57:14 np0005603609 nova_compute[221550]: 2026-01-31 08:57:14.045 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:57:14 np0005603609 nova_compute[221550]: 2026-01-31 08:57:14.045 221554 DEBUG nova.network.neutron [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:57:14 np0005603609 nova_compute[221550]: 2026-01-31 08:57:14.074 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:57:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:15.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:57:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:15.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:57:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:17.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:57:17 np0005603609 nova_compute[221550]: 2026-01-31 08:57:17.407 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:17 np0005603609 nova_compute[221550]: 2026-01-31 08:57:17.679 221554 DEBUG nova.compute.manager [req-4ccc72fe-33c1-4316-b3a9-65f0cf6e312f req-bd3ef964-43ca-4fcc-8460-294a74de21e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received event network-changed-9ae8abb4-0619-4212-857c-5e62212e10f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:57:17 np0005603609 nova_compute[221550]: 2026-01-31 08:57:17.679 221554 DEBUG nova.compute.manager [req-4ccc72fe-33c1-4316-b3a9-65f0cf6e312f req-bd3ef964-43ca-4fcc-8460-294a74de21e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Refreshing instance network info cache due to event network-changed-9ae8abb4-0619-4212-857c-5e62212e10f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:57:17 np0005603609 nova_compute[221550]: 2026-01-31 08:57:17.680 221554 DEBUG oslo_concurrency.lockutils [req-4ccc72fe-33c1-4316-b3a9-65f0cf6e312f req-bd3ef964-43ca-4fcc-8460-294a74de21e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:57:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:57:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:17.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:57:19 np0005603609 nova_compute[221550]: 2026-01-31 08:57:19.121 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:19.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:57:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:19.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:57:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:57:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:21.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:57:21 np0005603609 nova_compute[221550]: 2026-01-31 08:57:21.713 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:57:21 np0005603609 nova_compute[221550]: 2026-01-31 08:57:21.713 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:57:21 np0005603609 nova_compute[221550]: 2026-01-31 08:57:21.713 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:57:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:57:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:21.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:57:22 np0005603609 nova_compute[221550]: 2026-01-31 08:57:22.193 221554 DEBUG nova.network.neutron [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:57:22 np0005603609 nova_compute[221550]: 2026-01-31 08:57:22.196 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:22 np0005603609 nova_compute[221550]: 2026-01-31 08:57:22.409 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:57:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/666852656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:57:22 np0005603609 nova_compute[221550]: 2026-01-31 08:57:22.590 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:22 np0005603609 nova_compute[221550]: 2026-01-31 08:57:22.595 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:57:22 np0005603609 nova_compute[221550]: 2026-01-31 08:57:22.696 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:57:22 np0005603609 nova_compute[221550]: 2026-01-31 08:57:22.910 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:57:22 np0005603609 nova_compute[221550]: 2026-01-31 08:57:22.911 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 9.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:57:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:23.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:57:23 np0005603609 nova_compute[221550]: 2026-01-31 08:57:23.911 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:23 np0005603609 nova_compute[221550]: 2026-01-31 08:57:23.911 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:23 np0005603609 nova_compute[221550]: 2026-01-31 08:57:23.911 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:57:23 np0005603609 nova_compute[221550]: 2026-01-31 08:57:23.911 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:57:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:23.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:24 np0005603609 nova_compute[221550]: 2026-01-31 08:57:24.124 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:24 np0005603609 nova_compute[221550]: 2026-01-31 08:57:24.160 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:57:24 np0005603609 nova_compute[221550]: 2026-01-31 08:57:24.160 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:57:24 np0005603609 nova_compute[221550]: 2026-01-31 08:57:24.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:24 np0005603609 nova_compute[221550]: 2026-01-31 08:57:24.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:24 np0005603609 nova_compute[221550]: 2026-01-31 08:57:24.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:57:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:57:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:25.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:57:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:57:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:25.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.370 221554 DEBUG nova.network.neutron [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updating instance_info_cache with network_info: [{"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.520 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.520 221554 DEBUG nova.compute.manager [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Instance network_info: |[{"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.521 221554 DEBUG oslo_concurrency.lockutils [req-4ccc72fe-33c1-4316-b3a9-65f0cf6e312f req-bd3ef964-43ca-4fcc-8460-294a74de21e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.521 221554 DEBUG nova.network.neutron [req-4ccc72fe-33c1-4316-b3a9-65f0cf6e312f req-bd3ef964-43ca-4fcc-8460-294a74de21e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Refreshing network info cache for port 9ae8abb4-0619-4212-857c-5e62212e10f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.524 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Start _get_guest_xml network_info=[{"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.528 221554 WARNING nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.535 221554 DEBUG nova.virt.libvirt.host [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.536 221554 DEBUG nova.virt.libvirt.host [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.541 221554 DEBUG nova.virt.libvirt.host [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.541 221554 DEBUG nova.virt.libvirt.host [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.542 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.542 221554 DEBUG nova.virt.hardware [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.543 221554 DEBUG nova.virt.hardware [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.543 221554 DEBUG nova.virt.hardware [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.543 221554 DEBUG nova.virt.hardware [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.543 221554 DEBUG nova.virt.hardware [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.543 221554 DEBUG nova.virt.hardware [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.544 221554 DEBUG nova.virt.hardware [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.544 221554 DEBUG nova.virt.hardware [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.544 221554 DEBUG nova.virt.hardware [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.544 221554 DEBUG nova.virt.hardware [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.544 221554 DEBUG nova.virt.hardware [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.547 221554 DEBUG oslo_concurrency.processutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:57:26 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2818314949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:57:26 np0005603609 nova_compute[221550]: 2026-01-31 08:57:26.985 221554 DEBUG oslo_concurrency.processutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.011 221554 DEBUG nova.storage.rbd_utils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.015 221554 DEBUG oslo_concurrency.processutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:27.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.414 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:57:27 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2565149127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.441 221554 DEBUG oslo_concurrency.processutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.442 221554 DEBUG nova.virt.libvirt.vif [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:57:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-325444004',display_name='tempest-TestNetworkBasicOps-server-325444004',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-325444004',id=207,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKCg3ytR14Nf7CENayPR51bXFfzZWnxTZmB8lC8INqq2x6QgZpD/VZdNUwq0Udix1R8WQSQlmA1Enu1ua+RdsCZeuxctqBrfeJfW5tPxL9qkV7f/1FqN7Eat1NdVebjdgg==',key_name='tempest-TestNetworkBasicOps-271661057',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-bpsj0gn1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:57:08Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.443 221554 DEBUG nova.network.os_vif_util [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.443 221554 DEBUG nova.network.os_vif_util [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:67:50,bridge_name='br-int',has_traffic_filtering=True,id=9ae8abb4-0619-4212-857c-5e62212e10f1,network=Network(7136c4a9-529c-4978-b239-a8adc0348e1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ae8abb4-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.444 221554 DEBUG nova.objects.instance [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.595 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  <uuid>c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141</uuid>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  <name>instance-000000cf</name>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestNetworkBasicOps-server-325444004</nova:name>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 08:57:26</nova:creationTime>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        <nova:port uuid="9ae8abb4-0619-4212-857c-5e62212e10f1">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <entry name="serial">c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141</entry>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <entry name="uuid">c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141</entry>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk.config">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:2d:67:50"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <target dev="tap9ae8abb4-06"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141/console.log" append="off"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 03:57:27 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:57:27 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:57:27 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:57:27 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.597 221554 DEBUG nova.compute.manager [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Preparing to wait for external event network-vif-plugged-9ae8abb4-0619-4212-857c-5e62212e10f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.597 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.597 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.597 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.598 221554 DEBUG nova.virt.libvirt.vif [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:57:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-325444004',display_name='tempest-TestNetworkBasicOps-server-325444004',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-325444004',id=207,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKCg3ytR14Nf7CENayPR51bXFfzZWnxTZmB8lC8INqq2x6QgZpD/VZdNUwq0Udix1R8WQSQlmA1Enu1ua+RdsCZeuxctqBrfeJfW5tPxL9qkV7f/1FqN7Eat1NdVebjdgg==',key_name='tempest-TestNetworkBasicOps-271661057',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-bpsj0gn1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:57:08Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.598 221554 DEBUG nova.network.os_vif_util [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.599 221554 DEBUG nova.network.os_vif_util [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:67:50,bridge_name='br-int',has_traffic_filtering=True,id=9ae8abb4-0619-4212-857c-5e62212e10f1,network=Network(7136c4a9-529c-4978-b239-a8adc0348e1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ae8abb4-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.599 221554 DEBUG os_vif [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:67:50,bridge_name='br-int',has_traffic_filtering=True,id=9ae8abb4-0619-4212-857c-5e62212e10f1,network=Network(7136c4a9-529c-4978-b239-a8adc0348e1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ae8abb4-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.600 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.600 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.601 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.603 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.603 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ae8abb4-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.604 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9ae8abb4-06, col_values=(('external_ids', {'iface-id': '9ae8abb4-0619-4212-857c-5e62212e10f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:67:50', 'vm-uuid': 'c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.605 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:27 np0005603609 NetworkManager[49064]: <info>  [1769849847.6064] manager: (tap9ae8abb4-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/443)
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.607 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.611 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.611 221554 INFO os_vif [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:67:50,bridge_name='br-int',has_traffic_filtering=True,id=9ae8abb4-0619-4212-857c-5e62212e10f1,network=Network(7136c4a9-529c-4978-b239-a8adc0348e1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ae8abb4-06')#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.930 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.930 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.930 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:2d:67:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.931 221554 INFO nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Using config drive#033[00m
Jan 31 03:57:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:27.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:27 np0005603609 nova_compute[221550]: 2026-01-31 08:57:27.955 221554 DEBUG nova.storage.rbd_utils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:57:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:29.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:29 np0005603609 nova_compute[221550]: 2026-01-31 08:57:29.161 221554 INFO nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Creating config drive at /var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141/disk.config#033[00m
Jan 31 03:57:29 np0005603609 nova_compute[221550]: 2026-01-31 08:57:29.164 221554 DEBUG oslo_concurrency.processutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfb90mwpl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:29 np0005603609 nova_compute[221550]: 2026-01-31 08:57:29.289 221554 DEBUG oslo_concurrency.processutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfb90mwpl" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:29 np0005603609 nova_compute[221550]: 2026-01-31 08:57:29.316 221554 DEBUG nova.storage.rbd_utils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:57:29 np0005603609 nova_compute[221550]: 2026-01-31 08:57:29.320 221554 DEBUG oslo_concurrency.processutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141/disk.config c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:29 np0005603609 nova_compute[221550]: 2026-01-31 08:57:29.467 221554 DEBUG oslo_concurrency.processutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141/disk.config c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:29 np0005603609 nova_compute[221550]: 2026-01-31 08:57:29.468 221554 INFO nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Deleting local config drive /var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141/disk.config because it was imported into RBD.#033[00m
Jan 31 03:57:29 np0005603609 kernel: tap9ae8abb4-06: entered promiscuous mode
Jan 31 03:57:29 np0005603609 NetworkManager[49064]: <info>  [1769849849.5052] manager: (tap9ae8abb4-06): new Tun device (/org/freedesktop/NetworkManager/Devices/444)
Jan 31 03:57:29 np0005603609 nova_compute[221550]: 2026-01-31 08:57:29.507 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:29 np0005603609 nova_compute[221550]: 2026-01-31 08:57:29.509 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:29 np0005603609 ovn_controller[130359]: 2026-01-31T08:57:29Z|00959|binding|INFO|Claiming lport 9ae8abb4-0619-4212-857c-5e62212e10f1 for this chassis.
Jan 31 03:57:29 np0005603609 ovn_controller[130359]: 2026-01-31T08:57:29Z|00960|binding|INFO|9ae8abb4-0619-4212-857c-5e62212e10f1: Claiming fa:16:3e:2d:67:50 10.100.0.14
Jan 31 03:57:29 np0005603609 nova_compute[221550]: 2026-01-31 08:57:29.530 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:29 np0005603609 ovn_controller[130359]: 2026-01-31T08:57:29Z|00961|binding|INFO|Setting lport 9ae8abb4-0619-4212-857c-5e62212e10f1 ovn-installed in OVS
Jan 31 03:57:29 np0005603609 systemd-udevd[312795]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:57:29 np0005603609 nova_compute[221550]: 2026-01-31 08:57:29.536 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:29 np0005603609 NetworkManager[49064]: <info>  [1769849849.5472] device (tap9ae8abb4-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:57:29 np0005603609 NetworkManager[49064]: <info>  [1769849849.5486] device (tap9ae8abb4-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:57:29 np0005603609 podman[312760]: 2026-01-31 08:57:29.567576747 +0000 UTC m=+0.064806760 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:57:29 np0005603609 systemd-machined[190912]: New machine qemu-113-instance-000000cf.
Jan 31 03:57:29 np0005603609 podman[312768]: 2026-01-31 08:57:29.584787504 +0000 UTC m=+0.078288046 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
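An aside on reading the podman record above: the `config_data=` blob is a Python-literal dict (single quotes, bare `True`), so `ast.literal_eval` can recover it for inspection without `eval`. A minimal sketch, using an abbreviated fragment of the blob from the log (the full blob parses the same way):

```python
import ast

def parse_config_data(blob):
    """Parse the Python-literal config_data dict embedded in podman
    health_status log records (safe: literal_eval rejects expressions)."""
    return ast.literal_eval(blob)

# Abbreviated fragment of the config_data seen in the record above.
blob = ("{'cgroupns': 'host', 'depends_on': ['openvswitch.service'], "
        "'net': 'host', 'pid': 'host', 'privileged': True, "
        "'restart': 'always', 'user': 'root'}")
```

This is handy when grepping journals for a container's effective volumes or healthcheck settings.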
Jan 31 03:57:29 np0005603609 systemd[1]: Started Virtual Machine qemu-113-instance-000000cf.
Jan 31 03:57:29 np0005603609 ovn_controller[130359]: 2026-01-31T08:57:29Z|00962|binding|INFO|Setting lport 9ae8abb4-0619-4212-857c-5e62212e10f1 up in Southbound
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.737 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:67:50 10.100.0.14'], port_security=['fa:16:3e:2d:67:50 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7136c4a9-529c-4978-b239-a8adc0348e1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05fa5eca-e3e2-490f-93c3-a165799ce264', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc343683-1015-4ebe-a3f3-14c358ce76bf, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=9ae8abb4-0619-4212-857c-5e62212e10f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.738 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 9ae8abb4-0619-4212-857c-5e62212e10f1 in datapath 7136c4a9-529c-4978-b239-a8adc0348e1b bound to our chassis#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.738 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7136c4a9-529c-4978-b239-a8adc0348e1b#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.748 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[48131ed0-1d2b-4f04-b489-74805d2b79c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.749 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7136c4a9-51 in ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.751 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7136c4a9-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.751 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e6d896-8099-42a4-8e34-abd893144774]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.752 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[59e52f68-052a-41cb-8abe-257ff3e804a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.759 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c8a396-d39e-422d-9ecd-70de6c40af04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.770 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[740e85a5-43cb-457f-9b67-4a0954d6f864]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.791 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[bafaf05c-f267-440a-8dab-5ad3d43912d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.796 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[487d7576-0320-4c9b-b905-c6fa746c17bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:29 np0005603609 systemd-udevd[312805]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:57:29 np0005603609 NetworkManager[49064]: <info>  [1769849849.7996] manager: (tap7136c4a9-50): new Veth device (/org/freedesktop/NetworkManager/Devices/445)
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.821 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f9158013-ce20-491b-b9c8-c7820e3dd9c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.824 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[886f8d9b-40e1-4c08-97a0-713f78202ecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:29 np0005603609 NetworkManager[49064]: <info>  [1769849849.8412] device (tap7136c4a9-50): carrier: link connected
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.847 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[29acc5c1-8f14-462c-9e80-c814d48285e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.860 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8599e885-77e9-420b-83ec-fc32cbe64975]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7136c4a9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:2b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1008954, 'reachable_time': 36261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312847, 'error': None, 'target': 'ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.872 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[89a13998-b7c8-4220-b0e7-94484d61aa35]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:2b3f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1008954, 'tstamp': 1008954}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312848, 'error': None, 'target': 'ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.889 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[47bc97ac-7cae-4558-a10f-18ced3e1186a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7136c4a9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:2b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1008954, 'reachable_time': 36261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312849, 'error': None, 'target': 'ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
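The privsep replies above carry pyroute2-style `RTM_NEWLINK`/`RTM_NEWADDR` messages, where `attrs` is a list of `[name, value]` pairs rather than a dict. A small hedged helper (the function name is mine, though pyroute2 messages expose an equivalent `get_attr` method) for pulling named attributes out of such a dump:

```python
def get_attr(msg, name, default=None):
    """Return the first value stored under `name` in a pyroute2-style
    netlink message, whose 'attrs' field is a list of [name, value] pairs."""
    for key, value in msg.get('attrs', []):
        if key == name:
            return value
    return default

# Abbreviated message shaped like the RTM_NEWLINK replies logged above.
msg = {'attrs': [['IFLA_IFNAME', 'tap7136c4a9-51'],
                 ['IFLA_MTU', 1500],
                 ['IFLA_ADDRESS', 'fa:16:3e:d0:2b:3f']]}
```

Useful when triaging these logs to confirm, e.g., which interface name and MAC a given privsep reply describes.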
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.912 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca3f46d-0586-4e59-b316-069a76f1ea12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.953 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ef26e15f-f3e8-4f48-bf33-67f98cbf31c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:57:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:29.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
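The radosgw `beast:` lines above follow a fixed access-log layout (client, user, timestamp, request, status, bytes, then a `latency=` suffix). A sketch of a parser for that layout, with field names inferred from the log itself rather than from any radosgw documentation:

```python
import re

# Field layout read off the beast access lines in this journal; the
# group names are my own labels, not official radosgw terminology.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+).*latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line):
    """Return a dict of fields from one beast access-log line, or None."""
    m = BEAST_RE.search(line)
    if m is None:
        return None
    rec = m.groupdict()
    rec['status'] = int(rec['status'])
    rec['bytes'] = int(rec['bytes'])
    rec['latency'] = float(rec['latency'])
    return rec
```

Feeding it the `HEAD /` line above yields client `192.168.122.102`, status `200`, latency `0.001000023`.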
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.954 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7136c4a9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.955 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.955 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7136c4a9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:29 np0005603609 nova_compute[221550]: 2026-01-31 08:57:29.957 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:29 np0005603609 NetworkManager[49064]: <info>  [1769849849.9578] manager: (tap7136c4a9-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/446)
Jan 31 03:57:29 np0005603609 kernel: tap7136c4a9-50: entered promiscuous mode
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.959 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7136c4a9-50, col_values=(('external_ids', {'iface-id': 'c2e351b7-d03f-4dee-9d1a-9812c248f934'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:29 np0005603609 nova_compute[221550]: 2026-01-31 08:57:29.960 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:29 np0005603609 ovn_controller[130359]: 2026-01-31T08:57:29Z|00963|binding|INFO|Releasing lport c2e351b7-d03f-4dee-9d1a-9812c248f934 from this chassis (sb_readonly=0)
Jan 31 03:57:29 np0005603609 nova_compute[221550]: 2026-01-31 08:57:29.961 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.962 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7136c4a9-529c-4978-b239-a8adc0348e1b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7136c4a9-529c-4978-b239-a8adc0348e1b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:57:29 np0005603609 nova_compute[221550]: 2026-01-31 08:57:29.965 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.965 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[932900d2-2f3e-4452-b3eb-41ec52ed35c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.967 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-7136c4a9-529c-4978-b239-a8adc0348e1b
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/7136c4a9-529c-4978-b239-a8adc0348e1b.pid.haproxy
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 7136c4a9-529c-4978-b239-a8adc0348e1b
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
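The haproxy config dump and the rootwrap command that follows expose the naming conventions the agent uses for one network: namespace `ovnmeta-<net_id>`, pidfile `<net_id>.pid.haproxy`, config `<net_id>.conf`, and `PROCESS_TAG=haproxy-<net_id>`. A hedged helper reproducing those conventions as read off this log (the function and constants are mine, not neutron's actual code):

```python
# Path conventions observed in the ovn_metadata_agent lines above.
PID_DIR = '/var/lib/neutron/external/pids'
CONF_DIR = '/var/lib/neutron/ovn-metadata-proxy'

def metadata_proxy_paths(network_id):
    """Derive the per-network names the agent logs for its haproxy proxy."""
    return {
        'namespace': f'ovnmeta-{network_id}',
        'pidfile': f'{PID_DIR}/{network_id}.pid.haproxy',
        'config': f'{CONF_DIR}/{network_id}.conf',
        'process_tag': f'haproxy-{network_id}',
    }
```

Knowing these conventions lets you jump straight from a network UUID to the right namespace (`ip netns exec ovnmeta-<net_id> ...`) and pidfile when debugging metadata reachability.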
Jan 31 03:57:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:57:29.968 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b', 'env', 'PROCESS_TAG=haproxy-7136c4a9-529c-4978-b239-a8adc0348e1b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7136c4a9-529c-4978-b239-a8adc0348e1b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:57:29 np0005603609 nova_compute[221550]: 2026-01-31 08:57:29.971 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:30 np0005603609 podman[312918]: 2026-01-31 08:57:30.276397638 +0000 UTC m=+0.041283271 container create f02fb282cf8b55d568d5f083ed22656e559429c92f39bdf8a26584f012286224 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 31 03:57:30 np0005603609 systemd[1]: Started libpod-conmon-f02fb282cf8b55d568d5f083ed22656e559429c92f39bdf8a26584f012286224.scope.
Jan 31 03:57:30 np0005603609 nova_compute[221550]: 2026-01-31 08:57:30.339 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849850.3388147, c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:57:30 np0005603609 nova_compute[221550]: 2026-01-31 08:57:30.341 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] VM Started (Lifecycle Event)#033[00m
Jan 31 03:57:30 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:57:30 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135dc3bfc69f36e3506aa5659b19ca0ec08814903afb098e1591d33d54306665/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:57:30 np0005603609 podman[312918]: 2026-01-31 08:57:30.253506603 +0000 UTC m=+0.018392256 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:57:30 np0005603609 podman[312918]: 2026-01-31 08:57:30.358808522 +0000 UTC m=+0.123694155 container init f02fb282cf8b55d568d5f083ed22656e559429c92f39bdf8a26584f012286224 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:57:30 np0005603609 podman[312918]: 2026-01-31 08:57:30.363159229 +0000 UTC m=+0.128044852 container start f02fb282cf8b55d568d5f083ed22656e559429c92f39bdf8a26584f012286224 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:57:30 np0005603609 neutron-haproxy-ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b[312938]: [NOTICE]   (312944) : New worker (312946) forked
Jan 31 03:57:30 np0005603609 neutron-haproxy-ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b[312938]: [NOTICE]   (312944) : Loading success.
Jan 31 03:57:30 np0005603609 nova_compute[221550]: 2026-01-31 08:57:30.398 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:57:30 np0005603609 nova_compute[221550]: 2026-01-31 08:57:30.402 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849850.338991, c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:57:30 np0005603609 nova_compute[221550]: 2026-01-31 08:57:30.402 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:57:30 np0005603609 nova_compute[221550]: 2026-01-31 08:57:30.470 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:57:30 np0005603609 nova_compute[221550]: 2026-01-31 08:57:30.473 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:57:30 np0005603609 nova_compute[221550]: 2026-01-31 08:57:30.528 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:57:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:31.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.155 221554 DEBUG nova.network.neutron [req-4ccc72fe-33c1-4316-b3a9-65f0cf6e312f req-bd3ef964-43ca-4fcc-8460-294a74de21e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updated VIF entry in instance network info cache for port 9ae8abb4-0619-4212-857c-5e62212e10f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.156 221554 DEBUG nova.network.neutron [req-4ccc72fe-33c1-4316-b3a9-65f0cf6e312f req-bd3ef964-43ca-4fcc-8460-294a74de21e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updating instance_info_cache with network_info: [{"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.319 221554 DEBUG oslo_concurrency.lockutils [req-4ccc72fe-33c1-4316-b3a9-65f0cf6e312f req-bd3ef964-43ca-4fcc-8460-294a74de21e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.551 221554 DEBUG nova.compute.manager [req-92b2748e-d9b2-4c65-a4db-5ce135cdfc2b req-ee117ec3-0f6a-492e-82a0-082251122f08 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received event network-vif-plugged-9ae8abb4-0619-4212-857c-5e62212e10f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.552 221554 DEBUG oslo_concurrency.lockutils [req-92b2748e-d9b2-4c65-a4db-5ce135cdfc2b req-ee117ec3-0f6a-492e-82a0-082251122f08 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.552 221554 DEBUG oslo_concurrency.lockutils [req-92b2748e-d9b2-4c65-a4db-5ce135cdfc2b req-ee117ec3-0f6a-492e-82a0-082251122f08 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.552 221554 DEBUG oslo_concurrency.lockutils [req-92b2748e-d9b2-4c65-a4db-5ce135cdfc2b req-ee117ec3-0f6a-492e-82a0-082251122f08 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.553 221554 DEBUG nova.compute.manager [req-92b2748e-d9b2-4c65-a4db-5ce135cdfc2b req-ee117ec3-0f6a-492e-82a0-082251122f08 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Processing event network-vif-plugged-9ae8abb4-0619-4212-857c-5e62212e10f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.554 221554 DEBUG nova.compute.manager [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.558 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769849851.5573204, c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.558 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] VM Resumed (Lifecycle Event)
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.560 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.563 221554 INFO nova.virt.libvirt.driver [-] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Instance spawned successfully.
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.564 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:57:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.865 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.865 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.866 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.866 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.867 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.867 221554 DEBUG nova.virt.libvirt.driver [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.889 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.893 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:57:31 np0005603609 nova_compute[221550]: 2026-01-31 08:57:31.941 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:57:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:31.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:32 np0005603609 nova_compute[221550]: 2026-01-31 08:57:32.079 221554 INFO nova.compute.manager [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Took 23.83 seconds to spawn the instance on the hypervisor.
Jan 31 03:57:32 np0005603609 nova_compute[221550]: 2026-01-31 08:57:32.080 221554 DEBUG nova.compute.manager [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:57:32 np0005603609 nova_compute[221550]: 2026-01-31 08:57:32.370 221554 INFO nova.compute.manager [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Took 25.15 seconds to build instance.
Jan 31 03:57:32 np0005603609 nova_compute[221550]: 2026-01-31 08:57:32.413 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:57:32 np0005603609 nova_compute[221550]: 2026-01-31 08:57:32.532 221554 DEBUG oslo_concurrency.lockutils [None req-d269a97e-e668-4bec-b3cc-06bfaccf6918 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:57:32 np0005603609 nova_compute[221550]: 2026-01-31 08:57:32.533 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 23.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:57:32 np0005603609 nova_compute[221550]: 2026-01-31 08:57:32.533 221554 INFO nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:57:32 np0005603609 nova_compute[221550]: 2026-01-31 08:57:32.533 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:57:32 np0005603609 nova_compute[221550]: 2026-01-31 08:57:32.605 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:57:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:33.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:33 np0005603609 nova_compute[221550]: 2026-01-31 08:57:33.842 221554 DEBUG nova.compute.manager [req-bec0098f-21e5-4bfd-9279-02e7fd2b2179 req-263137e0-5e5f-43a6-829e-2a3528deff5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received event network-vif-plugged-9ae8abb4-0619-4212-857c-5e62212e10f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:57:33 np0005603609 nova_compute[221550]: 2026-01-31 08:57:33.842 221554 DEBUG oslo_concurrency.lockutils [req-bec0098f-21e5-4bfd-9279-02e7fd2b2179 req-263137e0-5e5f-43a6-829e-2a3528deff5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:57:33 np0005603609 nova_compute[221550]: 2026-01-31 08:57:33.842 221554 DEBUG oslo_concurrency.lockutils [req-bec0098f-21e5-4bfd-9279-02e7fd2b2179 req-263137e0-5e5f-43a6-829e-2a3528deff5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:57:33 np0005603609 nova_compute[221550]: 2026-01-31 08:57:33.843 221554 DEBUG oslo_concurrency.lockutils [req-bec0098f-21e5-4bfd-9279-02e7fd2b2179 req-263137e0-5e5f-43a6-829e-2a3528deff5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:57:33 np0005603609 nova_compute[221550]: 2026-01-31 08:57:33.843 221554 DEBUG nova.compute.manager [req-bec0098f-21e5-4bfd-9279-02e7fd2b2179 req-263137e0-5e5f-43a6-829e-2a3528deff5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] No waiting events found dispatching network-vif-plugged-9ae8abb4-0619-4212-857c-5e62212e10f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:57:33 np0005603609 nova_compute[221550]: 2026-01-31 08:57:33.843 221554 WARNING nova.compute.manager [req-bec0098f-21e5-4bfd-9279-02e7fd2b2179 req-263137e0-5e5f-43a6-829e-2a3528deff5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received unexpected event network-vif-plugged-9ae8abb4-0619-4212-857c-5e62212e10f1 for instance with vm_state active and task_state None.
Jan 31 03:57:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:57:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:33.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:57:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:35.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:35.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:36 np0005603609 nova_compute[221550]: 2026-01-31 08:57:36.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:57:36 np0005603609 nova_compute[221550]: 2026-01-31 08:57:36.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 03:57:36 np0005603609 nova_compute[221550]: 2026-01-31 08:57:36.750 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 03:57:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:37.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:37 np0005603609 nova_compute[221550]: 2026-01-31 08:57:37.414 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:57:37 np0005603609 nova_compute[221550]: 2026-01-31 08:57:37.607 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:57:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:37.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:38 np0005603609 ovn_controller[130359]: 2026-01-31T08:57:38Z|00964|binding|INFO|Releasing lport c2e351b7-d03f-4dee-9d1a-9812c248f934 from this chassis (sb_readonly=0)
Jan 31 03:57:38 np0005603609 NetworkManager[49064]: <info>  [1769849858.5730] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Jan 31 03:57:38 np0005603609 NetworkManager[49064]: <info>  [1769849858.5739] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/448)
Jan 31 03:57:38 np0005603609 nova_compute[221550]: 2026-01-31 08:57:38.576 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:57:38 np0005603609 ovn_controller[130359]: 2026-01-31T08:57:38Z|00965|binding|INFO|Releasing lport c2e351b7-d03f-4dee-9d1a-9812c248f934 from this chassis (sb_readonly=0)
Jan 31 03:57:38 np0005603609 nova_compute[221550]: 2026-01-31 08:57:38.587 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:57:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:39.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:39 np0005603609 nova_compute[221550]: 2026-01-31 08:57:39.651 221554 DEBUG nova.compute.manager [req-7d894fc7-72cd-47b9-b5d0-f5c1ffd02ecc req-48c1dcf6-b15c-4d65-96fe-ab2d11eabd2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received event network-changed-9ae8abb4-0619-4212-857c-5e62212e10f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:57:39 np0005603609 nova_compute[221550]: 2026-01-31 08:57:39.652 221554 DEBUG nova.compute.manager [req-7d894fc7-72cd-47b9-b5d0-f5c1ffd02ecc req-48c1dcf6-b15c-4d65-96fe-ab2d11eabd2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Refreshing instance network info cache due to event network-changed-9ae8abb4-0619-4212-857c-5e62212e10f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:57:39 np0005603609 nova_compute[221550]: 2026-01-31 08:57:39.653 221554 DEBUG oslo_concurrency.lockutils [req-7d894fc7-72cd-47b9-b5d0-f5c1ffd02ecc req-48c1dcf6-b15c-4d65-96fe-ab2d11eabd2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:57:39 np0005603609 nova_compute[221550]: 2026-01-31 08:57:39.653 221554 DEBUG oslo_concurrency.lockutils [req-7d894fc7-72cd-47b9-b5d0-f5c1ffd02ecc req-48c1dcf6-b15c-4d65-96fe-ab2d11eabd2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:57:39 np0005603609 nova_compute[221550]: 2026-01-31 08:57:39.653 221554 DEBUG nova.network.neutron [req-7d894fc7-72cd-47b9-b5d0-f5c1ffd02ecc req-48c1dcf6-b15c-4d65-96fe-ab2d11eabd2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Refreshing network info cache for port 9ae8abb4-0619-4212-857c-5e62212e10f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:57:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:39.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:40 np0005603609 nova_compute[221550]: 2026-01-31 08:57:40.977 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:57:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:57:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:41.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:57:41 np0005603609 nova_compute[221550]: 2026-01-31 08:57:41.430 221554 DEBUG nova.network.neutron [req-7d894fc7-72cd-47b9-b5d0-f5c1ffd02ecc req-48c1dcf6-b15c-4d65-96fe-ab2d11eabd2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updated VIF entry in instance network info cache for port 9ae8abb4-0619-4212-857c-5e62212e10f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 03:57:41 np0005603609 nova_compute[221550]: 2026-01-31 08:57:41.431 221554 DEBUG nova.network.neutron [req-7d894fc7-72cd-47b9-b5d0-f5c1ffd02ecc req-48c1dcf6-b15c-4d65-96fe-ab2d11eabd2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updating instance_info_cache with network_info: [{"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:57:41 np0005603609 nova_compute[221550]: 2026-01-31 08:57:41.746 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:57:41 np0005603609 nova_compute[221550]: 2026-01-31 08:57:41.765 221554 DEBUG oslo_concurrency.lockutils [req-7d894fc7-72cd-47b9-b5d0-f5c1ffd02ecc req-48c1dcf6-b15c-4d65-96fe-ab2d11eabd2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:57:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:57:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:41.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:57:42 np0005603609 nova_compute[221550]: 2026-01-31 08:57:42.415 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:57:42 np0005603609 nova_compute[221550]: 2026-01-31 08:57:42.608 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:57:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:57:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:43.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:57:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:57:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:43.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:57:44 np0005603609 nova_compute[221550]: 2026-01-31 08:57:44.661 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:57:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:57:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:45.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:57:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:57:45Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:67:50 10.100.0.14
Jan 31 03:57:45 np0005603609 ovn_controller[130359]: 2026-01-31T08:57:45Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:67:50 10.100.0.14
Jan 31 03:57:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:45.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:47.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:47 np0005603609 nova_compute[221550]: 2026-01-31 08:57:47.417 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:57:47 np0005603609 nova_compute[221550]: 2026-01-31 08:57:47.610 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:57:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:47.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:49.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:57:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:57:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:57:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:49.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:50 np0005603609 nova_compute[221550]: 2026-01-31 08:57:50.500 221554 INFO nova.compute.manager [None req-663432ea-7d1e-4595-b9f7-7aceabcfa1db 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Get console output#033[00m
Jan 31 03:57:50 np0005603609 nova_compute[221550]: 2026-01-31 08:57:50.505 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:57:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:51.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:51.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:52 np0005603609 nova_compute[221550]: 2026-01-31 08:57:52.419 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:52 np0005603609 nova_compute[221550]: 2026-01-31 08:57:52.612 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:53.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:57:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/361818439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:57:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:57:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/361818439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:57:53 np0005603609 nova_compute[221550]: 2026-01-31 08:57:53.662 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:53.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:54 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:57:54 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:57:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:55.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:55.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:57.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:57 np0005603609 nova_compute[221550]: 2026-01-31 08:57:57.422 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:57 np0005603609 nova_compute[221550]: 2026-01-31 08:57:57.614 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:57.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:58 np0005603609 nova_compute[221550]: 2026-01-31 08:57:58.363 221554 DEBUG oslo_concurrency.lockutils [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "interface-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:58 np0005603609 nova_compute[221550]: 2026-01-31 08:57:58.364 221554 DEBUG oslo_concurrency.lockutils [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "interface-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:58 np0005603609 nova_compute[221550]: 2026-01-31 08:57:58.365 221554 DEBUG nova.objects.instance [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'flavor' on Instance uuid c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:57:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:59.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:57:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:57:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:59.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:58:00 np0005603609 podman[313139]: 2026-01-31 08:58:00.154529293 +0000 UTC m=+0.042422448 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 03:58:00 np0005603609 podman[313138]: 2026-01-31 08:58:00.176629858 +0000 UTC m=+0.065946107 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:58:00 np0005603609 nova_compute[221550]: 2026-01-31 08:58:00.298 221554 DEBUG nova.objects.instance [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'pci_requests' on Instance uuid c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:58:00 np0005603609 nova_compute[221550]: 2026-01-31 08:58:00.320 221554 DEBUG nova.network.neutron [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:58:00 np0005603609 nova_compute[221550]: 2026-01-31 08:58:00.641 221554 DEBUG nova.policy [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a56abd8fdd341ae88a99e102ab399de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:58:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:01.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:01 np0005603609 nova_compute[221550]: 2026-01-31 08:58:01.915 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:58:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:01.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:58:02 np0005603609 nova_compute[221550]: 2026-01-31 08:58:02.238 221554 DEBUG nova.network.neutron [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Successfully created port: 1de0676f-0ca1-4874-bf60-193728802441 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:58:02 np0005603609 nova_compute[221550]: 2026-01-31 08:58:02.457 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:02 np0005603609 nova_compute[221550]: 2026-01-31 08:58:02.616 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:03.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:03 np0005603609 nova_compute[221550]: 2026-01-31 08:58:03.554 221554 DEBUG nova.network.neutron [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Successfully updated port: 1de0676f-0ca1-4874-bf60-193728802441 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:58:03 np0005603609 nova_compute[221550]: 2026-01-31 08:58:03.614 221554 DEBUG oslo_concurrency.lockutils [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:58:03 np0005603609 nova_compute[221550]: 2026-01-31 08:58:03.614 221554 DEBUG oslo_concurrency.lockutils [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:58:03 np0005603609 nova_compute[221550]: 2026-01-31 08:58:03.615 221554 DEBUG nova.network.neutron [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:58:03 np0005603609 nova_compute[221550]: 2026-01-31 08:58:03.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:03 np0005603609 nova_compute[221550]: 2026-01-31 08:58:03.677 221554 DEBUG nova.compute.manager [req-9fb8c46e-b40e-4c64-bb08-bc34733f4908 req-415b375b-e272-4d5f-81d2-565908009348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received event network-changed-1de0676f-0ca1-4874-bf60-193728802441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:58:03 np0005603609 nova_compute[221550]: 2026-01-31 08:58:03.678 221554 DEBUG nova.compute.manager [req-9fb8c46e-b40e-4c64-bb08-bc34733f4908 req-415b375b-e272-4d5f-81d2-565908009348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Refreshing instance network info cache due to event network-changed-1de0676f-0ca1-4874-bf60-193728802441. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:58:03 np0005603609 nova_compute[221550]: 2026-01-31 08:58:03.678 221554 DEBUG oslo_concurrency.lockutils [req-9fb8c46e-b40e-4c64-bb08-bc34733f4908 req-415b375b-e272-4d5f-81d2-565908009348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:58:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:03.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:04 np0005603609 nova_compute[221550]: 2026-01-31 08:58:04.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:58:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:05.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:58:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:06.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.151 221554 DEBUG nova.network.neutron [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updating instance_info_cache with network_info: [{"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1de0676f-0ca1-4874-bf60-193728802441", "address": "fa:16:3e:00:00:8f", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de0676f-0c", "ovs_interfaceid": "1de0676f-0ca1-4874-bf60-193728802441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.187 221554 DEBUG oslo_concurrency.lockutils [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.188 221554 DEBUG oslo_concurrency.lockutils [req-9fb8c46e-b40e-4c64-bb08-bc34733f4908 req-415b375b-e272-4d5f-81d2-565908009348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.188 221554 DEBUG nova.network.neutron [req-9fb8c46e-b40e-4c64-bb08-bc34733f4908 req-415b375b-e272-4d5f-81d2-565908009348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Refreshing network info cache for port 1de0676f-0ca1-4874-bf60-193728802441 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.191 221554 DEBUG nova.virt.libvirt.vif [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:57:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-325444004',display_name='tempest-TestNetworkBasicOps-server-325444004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-325444004',id=207,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKCg3ytR14Nf7CENayPR51bXFfzZWnxTZmB8lC8INqq2x6QgZpD/VZdNUwq0Udix1R8WQSQlmA1Enu1ua+RdsCZeuxctqBrfeJfW5tPxL9qkV7f/1FqN7Eat1NdVebjdgg==',key_name='tempest-TestNetworkBasicOps-271661057',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:57:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-bpsj0gn1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:57:32Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1de0676f-0ca1-4874-bf60-193728802441", "address": "fa:16:3e:00:00:8f", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de0676f-0c", "ovs_interfaceid": "1de0676f-0ca1-4874-bf60-193728802441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.191 221554 DEBUG nova.network.os_vif_util [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "1de0676f-0ca1-4874-bf60-193728802441", "address": "fa:16:3e:00:00:8f", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de0676f-0c", "ovs_interfaceid": "1de0676f-0ca1-4874-bf60-193728802441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.192 221554 DEBUG nova.network.os_vif_util [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:00:8f,bridge_name='br-int',has_traffic_filtering=True,id=1de0676f-0ca1-4874-bf60-193728802441,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de0676f-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.192 221554 DEBUG os_vif [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:00:8f,bridge_name='br-int',has_traffic_filtering=True,id=1de0676f-0ca1-4874-bf60-193728802441,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de0676f-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.193 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.193 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.193 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.196 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.196 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1de0676f-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.197 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1de0676f-0c, col_values=(('external_ids', {'iface-id': '1de0676f-0ca1-4874-bf60-193728802441', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:00:8f', 'vm-uuid': 'c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.198 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:06 np0005603609 NetworkManager[49064]: <info>  [1769849886.1993] manager: (tap1de0676f-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/449)
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.199 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.203 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.204 221554 INFO os_vif [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:00:8f,bridge_name='br-int',has_traffic_filtering=True,id=1de0676f-0ca1-4874-bf60-193728802441,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de0676f-0c')#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.204 221554 DEBUG nova.virt.libvirt.vif [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:57:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-325444004',display_name='tempest-TestNetworkBasicOps-server-325444004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-325444004',id=207,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKCg3ytR14Nf7CENayPR51bXFfzZWnxTZmB8lC8INqq2x6QgZpD/VZdNUwq0Udix1R8WQSQlmA1Enu1ua+RdsCZeuxctqBrfeJfW5tPxL9qkV7f/1FqN7Eat1NdVebjdgg==',key_name='tempest-TestNetworkBasicOps-271661057',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:57:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-bpsj0gn1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:57:32Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1de0676f-0ca1-4874-bf60-193728802441", "address": "fa:16:3e:00:00:8f", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de0676f-0c", "ovs_interfaceid": "1de0676f-0ca1-4874-bf60-193728802441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.205 221554 DEBUG nova.network.os_vif_util [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "1de0676f-0ca1-4874-bf60-193728802441", "address": "fa:16:3e:00:00:8f", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de0676f-0c", "ovs_interfaceid": "1de0676f-0ca1-4874-bf60-193728802441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.205 221554 DEBUG nova.network.os_vif_util [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:00:8f,bridge_name='br-int',has_traffic_filtering=True,id=1de0676f-0ca1-4874-bf60-193728802441,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de0676f-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.207 221554 DEBUG nova.virt.libvirt.guest [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] attach device xml: <interface type="ethernet">
Jan 31 03:58:06 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:00:00:8f"/>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:  <target dev="tap1de0676f-0c"/>
Jan 31 03:58:06 np0005603609 nova_compute[221550]: </interface>
Jan 31 03:58:06 np0005603609 nova_compute[221550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:58:06 np0005603609 kernel: tap1de0676f-0c: entered promiscuous mode
Jan 31 03:58:06 np0005603609 NetworkManager[49064]: <info>  [1769849886.2171] manager: (tap1de0676f-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/450)
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.218 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:58:06Z|00966|binding|INFO|Claiming lport 1de0676f-0ca1-4874-bf60-193728802441 for this chassis.
Jan 31 03:58:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:58:06Z|00967|binding|INFO|1de0676f-0ca1-4874-bf60-193728802441: Claiming fa:16:3e:00:00:8f 10.100.0.23
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.220 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.233 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.233 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:00:8f 10.100.0.23'], port_security=['fa:16:3e:00:00:8f 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5071208b-ad60-4226-80bf-e8d76f33781f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc812d40-ff23-43ac-a90f-2ee695b7bd6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82a39089-ebf2-4133-93ff-475b6e4ca7f0, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=1de0676f-0ca1-4874-bf60-193728802441) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.235 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 1de0676f-0ca1-4874-bf60-193728802441 in datapath 5071208b-ad60-4226-80bf-e8d76f33781f bound to our chassis#033[00m
Jan 31 03:58:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:58:06Z|00968|binding|INFO|Setting lport 1de0676f-0ca1-4874-bf60-193728802441 ovn-installed in OVS
Jan 31 03:58:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:58:06Z|00969|binding|INFO|Setting lport 1de0676f-0ca1-4874-bf60-193728802441 up in Southbound
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.236 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5071208b-ad60-4226-80bf-e8d76f33781f#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.236 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:06 np0005603609 systemd-udevd[313185]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.243 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d1abe14d-63f3-4f96-9c55-37d66224381e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.244 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5071208b-a1 in ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.245 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5071208b-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.245 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a3389214-07f6-41d5-ad2a-d6bc5042782d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.246 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[469a83be-baa6-4e84-8f9d-1e9b1978da5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:06 np0005603609 NetworkManager[49064]: <info>  [1769849886.2501] device (tap1de0676f-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:58:06 np0005603609 NetworkManager[49064]: <info>  [1769849886.2507] device (tap1de0676f-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.255 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[3dcd6f76-75a6-4d48-b367-4fae5bd534a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.266 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4e842677-8e3d-485f-a9c9-4bb1bf689442]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.285 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[81d25282-8e37-411e-b5b4-9f0ddd462646]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.289 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[dd82b4dc-6bc4-4bd7-bbb2-9a9d1cb36756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:06 np0005603609 NetworkManager[49064]: <info>  [1769849886.2906] manager: (tap5071208b-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/451)
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.315 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[0fbfe428-3777-4c5d-b8aa-d933561f164d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.318 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c3be0a-4484-48f0-a2e6-385481b84fe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.326 221554 DEBUG nova.virt.libvirt.driver [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.326 221554 DEBUG nova.virt.libvirt.driver [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.326 221554 DEBUG nova.virt.libvirt.driver [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:2d:67:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.327 221554 DEBUG nova.virt.libvirt.driver [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:00:00:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:58:06 np0005603609 NetworkManager[49064]: <info>  [1769849886.3335] device (tap5071208b-a0): carrier: link connected
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.337 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[71ad5ee6-6f80-48c6-b326-a5050710d69f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.348 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[808146ff-bd5d-4ed3-a1f4-1c9dd24d1a3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5071208b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:80:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1012603, 'reachable_time': 39575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313212, 'error': None, 'target': 'ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.353 221554 DEBUG nova.virt.libvirt.guest [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:58:06 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:  <nova:name>tempest-TestNetworkBasicOps-server-325444004</nova:name>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 08:58:06</nova:creationTime>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 03:58:06 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:    <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:    <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:    <nova:port uuid="9ae8abb4-0619-4212-857c-5e62212e10f1">
Jan 31 03:58:06 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:    <nova:port uuid="1de0676f-0ca1-4874-bf60-193728802441">
Jan 31 03:58:06 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:58:06 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 03:58:06 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 03:58:06 np0005603609 nova_compute[221550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.361 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1f348fe1-c893-4288-a62b-88942200d54e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe93:80c4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1012603, 'tstamp': 1012603}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313213, 'error': None, 'target': 'ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.371 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1eb6ec-5188-45d6-932d-b9a4a1f11d75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5071208b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:80:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1012603, 'reachable_time': 39575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313214, 'error': None, 'target': 'ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.389 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[691e5d4c-2b01-4908-af00-50c7773084d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.396 221554 DEBUG oslo_concurrency.lockutils [None req-5667fb02-2af2-43a6-815c-821531bd52c4 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "interface-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.417 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5d337092-d378-4a80-a1d4-4f0ede5e2180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.418 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5071208b-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.418 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.419 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5071208b-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.420 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:06 np0005603609 kernel: tap5071208b-a0: entered promiscuous mode
Jan 31 03:58:06 np0005603609 NetworkManager[49064]: <info>  [1769849886.4221] manager: (tap5071208b-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.423 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.424 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5071208b-a0, col_values=(('external_ids', {'iface-id': 'fa0a8913-d154-4437-aa1c-6e031382e325'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.425 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:06 np0005603609 ovn_controller[130359]: 2026-01-31T08:58:06Z|00970|binding|INFO|Releasing lport fa0a8913-d154-4437-aa1c-6e031382e325 from this chassis (sb_readonly=0)
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.426 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.427 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5071208b-ad60-4226-80bf-e8d76f33781f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5071208b-ad60-4226-80bf-e8d76f33781f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.427 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f127e39f-7cd2-494f-b55d-f5b3a65bc136]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.428 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-5071208b-ad60-4226-80bf-e8d76f33781f
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/5071208b-ad60-4226-80bf-e8d76f33781f.pid.haproxy
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 5071208b-ad60-4226-80bf-e8d76f33781f
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:58:06 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:06.428 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f', 'env', 'PROCESS_TAG=haproxy-5071208b-ad60-4226-80bf-e8d76f33781f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5071208b-ad60-4226-80bf-e8d76f33781f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.430 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.603 221554 DEBUG nova.compute.manager [req-d6341e11-4bef-4c56-9807-5063f5f885ea req-04bce8b4-0d97-464d-9423-5a27d24932d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received event network-vif-plugged-1de0676f-0ca1-4874-bf60-193728802441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.603 221554 DEBUG oslo_concurrency.lockutils [req-d6341e11-4bef-4c56-9807-5063f5f885ea req-04bce8b4-0d97-464d-9423-5a27d24932d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.604 221554 DEBUG oslo_concurrency.lockutils [req-d6341e11-4bef-4c56-9807-5063f5f885ea req-04bce8b4-0d97-464d-9423-5a27d24932d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.604 221554 DEBUG oslo_concurrency.lockutils [req-d6341e11-4bef-4c56-9807-5063f5f885ea req-04bce8b4-0d97-464d-9423-5a27d24932d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.604 221554 DEBUG nova.compute.manager [req-d6341e11-4bef-4c56-9807-5063f5f885ea req-04bce8b4-0d97-464d-9423-5a27d24932d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] No waiting events found dispatching network-vif-plugged-1de0676f-0ca1-4874-bf60-193728802441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.604 221554 WARNING nova.compute.manager [req-d6341e11-4bef-4c56-9807-5063f5f885ea req-04bce8b4-0d97-464d-9423-5a27d24932d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received unexpected event network-vif-plugged-1de0676f-0ca1-4874-bf60-193728802441 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:58:06 np0005603609 nova_compute[221550]: 2026-01-31 08:58:06.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:06 np0005603609 podman[313246]: 2026-01-31 08:58:06.744524256 +0000 UTC m=+0.047608484 container create 89a0cda7c84ca966e36e5512b2976620a7dc006582bbc38aa8344db554ef45c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 03:58:06 np0005603609 systemd[1]: Started libpod-conmon-89a0cda7c84ca966e36e5512b2976620a7dc006582bbc38aa8344db554ef45c6.scope.
Jan 31 03:58:06 np0005603609 systemd[1]: Started libcrun container.
Jan 31 03:58:06 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffe3643c2443fc951f923f17eba0b7257ba4c4dec2e16f09e844c8f3277c667/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:58:06 np0005603609 podman[313246]: 2026-01-31 08:58:06.718738961 +0000 UTC m=+0.021823189 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:58:06 np0005603609 podman[313246]: 2026-01-31 08:58:06.824005709 +0000 UTC m=+0.127089967 container init 89a0cda7c84ca966e36e5512b2976620a7dc006582bbc38aa8344db554ef45c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:58:06 np0005603609 podman[313246]: 2026-01-31 08:58:06.827627978 +0000 UTC m=+0.130712226 container start 89a0cda7c84ca966e36e5512b2976620a7dc006582bbc38aa8344db554ef45c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:58:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:06 np0005603609 neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f[313261]: [NOTICE]   (313265) : New worker (313267) forked
Jan 31 03:58:06 np0005603609 neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f[313261]: [NOTICE]   (313265) : Loading success.
Jan 31 03:58:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:07.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:07 np0005603609 nova_compute[221550]: 2026-01-31 08:58:07.459 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:07.549 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:07.550 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:07.551 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:58:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:08.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:58:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:58:08Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:00:8f 10.100.0.23
Jan 31 03:58:08 np0005603609 ovn_controller[130359]: 2026-01-31T08:58:08Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:00:8f 10.100.0.23
Jan 31 03:58:08 np0005603609 nova_compute[221550]: 2026-01-31 08:58:08.901 221554 DEBUG nova.compute.manager [req-86cae05c-2a82-4313-a5a0-07324032898d req-cd71e424-17e9-4e8d-bb58-e086c37ee483 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received event network-vif-plugged-1de0676f-0ca1-4874-bf60-193728802441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:58:08 np0005603609 nova_compute[221550]: 2026-01-31 08:58:08.902 221554 DEBUG oslo_concurrency.lockutils [req-86cae05c-2a82-4313-a5a0-07324032898d req-cd71e424-17e9-4e8d-bb58-e086c37ee483 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:08 np0005603609 nova_compute[221550]: 2026-01-31 08:58:08.902 221554 DEBUG oslo_concurrency.lockutils [req-86cae05c-2a82-4313-a5a0-07324032898d req-cd71e424-17e9-4e8d-bb58-e086c37ee483 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:08 np0005603609 nova_compute[221550]: 2026-01-31 08:58:08.902 221554 DEBUG oslo_concurrency.lockutils [req-86cae05c-2a82-4313-a5a0-07324032898d req-cd71e424-17e9-4e8d-bb58-e086c37ee483 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:08 np0005603609 nova_compute[221550]: 2026-01-31 08:58:08.902 221554 DEBUG nova.compute.manager [req-86cae05c-2a82-4313-a5a0-07324032898d req-cd71e424-17e9-4e8d-bb58-e086c37ee483 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] No waiting events found dispatching network-vif-plugged-1de0676f-0ca1-4874-bf60-193728802441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:58:08 np0005603609 nova_compute[221550]: 2026-01-31 08:58:08.903 221554 WARNING nova.compute.manager [req-86cae05c-2a82-4313-a5a0-07324032898d req-cd71e424-17e9-4e8d-bb58-e086c37ee483 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received unexpected event network-vif-plugged-1de0676f-0ca1-4874-bf60-193728802441 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:58:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:58:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:09.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:58:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:58:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:10.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:58:10 np0005603609 nova_compute[221550]: 2026-01-31 08:58:10.147 221554 DEBUG nova.network.neutron [req-9fb8c46e-b40e-4c64-bb08-bc34733f4908 req-415b375b-e272-4d5f-81d2-565908009348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updated VIF entry in instance network info cache for port 1de0676f-0ca1-4874-bf60-193728802441. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:58:10 np0005603609 nova_compute[221550]: 2026-01-31 08:58:10.148 221554 DEBUG nova.network.neutron [req-9fb8c46e-b40e-4c64-bb08-bc34733f4908 req-415b375b-e272-4d5f-81d2-565908009348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updating instance_info_cache with network_info: [{"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1de0676f-0ca1-4874-bf60-193728802441", "address": "fa:16:3e:00:00:8f", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de0676f-0c", "ovs_interfaceid": "1de0676f-0ca1-4874-bf60-193728802441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:58:10 np0005603609 nova_compute[221550]: 2026-01-31 08:58:10.315 221554 DEBUG oslo_concurrency.lockutils [req-9fb8c46e-b40e-4c64-bb08-bc34733f4908 req-415b375b-e272-4d5f-81d2-565908009348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:58:10 np0005603609 nova_compute[221550]: 2026-01-31 08:58:10.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:10 np0005603609 nova_compute[221550]: 2026-01-31 08:58:10.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:58:11 np0005603609 nova_compute[221550]: 2026-01-31 08:58:11.199 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:11.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:11 np0005603609 nova_compute[221550]: 2026-01-31 08:58:11.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:58:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:12.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:58:12 np0005603609 nova_compute[221550]: 2026-01-31 08:58:12.506 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:13.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:14.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:14 np0005603609 nova_compute[221550]: 2026-01-31 08:58:14.396 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:14 np0005603609 nova_compute[221550]: 2026-01-31 08:58:14.453 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:14 np0005603609 nova_compute[221550]: 2026-01-31 08:58:14.453 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:14 np0005603609 nova_compute[221550]: 2026-01-31 08:58:14.454 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:14 np0005603609 nova_compute[221550]: 2026-01-31 08:58:14.454 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:58:14 np0005603609 nova_compute[221550]: 2026-01-31 08:58:14.454 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:58:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:58:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3727841173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:58:14 np0005603609 nova_compute[221550]: 2026-01-31 08:58:14.935 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:58:15 np0005603609 nova_compute[221550]: 2026-01-31 08:58:15.078 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:58:15 np0005603609 nova_compute[221550]: 2026-01-31 08:58:15.078 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:58:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:15.229 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=98, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=97) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:58:15 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:15.231 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:58:15 np0005603609 nova_compute[221550]: 2026-01-31 08:58:15.230 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:15 np0005603609 nova_compute[221550]: 2026-01-31 08:58:15.242 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:58:15 np0005603609 nova_compute[221550]: 2026-01-31 08:58:15.243 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3972MB free_disk=20.94271469116211GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:58:15 np0005603609 nova_compute[221550]: 2026-01-31 08:58:15.243 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:15 np0005603609 nova_compute[221550]: 2026-01-31 08:58:15.243 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:15.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:15 np0005603609 nova_compute[221550]: 2026-01-31 08:58:15.493 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:58:15 np0005603609 nova_compute[221550]: 2026-01-31 08:58:15.494 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:58:15 np0005603609 nova_compute[221550]: 2026-01-31 08:58:15.494 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:58:15 np0005603609 nova_compute[221550]: 2026-01-31 08:58:15.542 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:58:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:58:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1143733964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:58:16 np0005603609 nova_compute[221550]: 2026-01-31 08:58:16.000 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:58:16 np0005603609 nova_compute[221550]: 2026-01-31 08:58:16.007 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:58:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:16.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:16 np0005603609 nova_compute[221550]: 2026-01-31 08:58:16.115 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:58:16 np0005603609 nova_compute[221550]: 2026-01-31 08:58:16.200 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:58:16.234 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '98'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:58:16 np0005603609 nova_compute[221550]: 2026-01-31 08:58:16.303 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:58:16 np0005603609 nova_compute[221550]: 2026-01-31 08:58:16.303 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:17.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:17 np0005603609 nova_compute[221550]: 2026-01-31 08:58:17.509 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:17 np0005603609 nova_compute[221550]: 2026-01-31 08:58:17.567 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:17 np0005603609 nova_compute[221550]: 2026-01-31 08:58:17.567 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:58:17 np0005603609 nova_compute[221550]: 2026-01-31 08:58:17.567 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:58:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:18.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:18 np0005603609 nova_compute[221550]: 2026-01-31 08:58:18.119 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:58:18 np0005603609 nova_compute[221550]: 2026-01-31 08:58:18.120 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:58:18 np0005603609 nova_compute[221550]: 2026-01-31 08:58:18.120 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:58:18 np0005603609 nova_compute[221550]: 2026-01-31 08:58:18.120 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:58:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:58:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:19.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:58:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:20.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #184. Immutable memtables: 0.
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:58:20.461322) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 184
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900461403, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 926, "num_deletes": 251, "total_data_size": 1843901, "memory_usage": 1862096, "flush_reason": "Manual Compaction"}
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #185: started
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e405 e405: 3 total, 3 up, 3 in
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900476667, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 185, "file_size": 1215924, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89154, "largest_seqno": 90074, "table_properties": {"data_size": 1211751, "index_size": 1888, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9550, "raw_average_key_size": 19, "raw_value_size": 1203218, "raw_average_value_size": 2480, "num_data_blocks": 84, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849830, "oldest_key_time": 1769849830, "file_creation_time": 1769849900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 15378 microseconds, and 3248 cpu microseconds.
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:58:20.476709) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #185: 1215924 bytes OK
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:58:20.476728) [db/memtable_list.cc:519] [default] Level-0 commit table #185 started
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:58:20.481048) [db/memtable_list.cc:722] [default] Level-0 commit table #185: memtable #1 done
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:58:20.481099) EVENT_LOG_v1 {"time_micros": 1769849900481087, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:58:20.481125) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 1839268, prev total WAL file size 1839309, number of live WAL files 2.
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000181.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:58:20.481797) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [185(1187KB)], [183(11MB)]
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900481863, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [185], "files_L6": [183], "score": -1, "input_data_size": 13315694, "oldest_snapshot_seqno": -1}
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #186: 10818 keys, 11381078 bytes, temperature: kUnknown
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900553619, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 186, "file_size": 11381078, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11315157, "index_size": 37776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27077, "raw_key_size": 286588, "raw_average_key_size": 26, "raw_value_size": 11129745, "raw_average_value_size": 1028, "num_data_blocks": 1421, "num_entries": 10818, "num_filter_entries": 10818, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769849900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 186, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:58:20.553888) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 11381078 bytes
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:58:20.559535) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.3 rd, 158.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.5 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(20.3) write-amplify(9.4) OK, records in: 11336, records dropped: 518 output_compression: NoCompression
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:58:20.559573) EVENT_LOG_v1 {"time_micros": 1769849900559558, "job": 118, "event": "compaction_finished", "compaction_time_micros": 71850, "compaction_time_cpu_micros": 22490, "output_level": 6, "num_output_files": 1, "total_output_size": 11381078, "num_input_records": 11336, "num_output_records": 10818, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900559810, "job": 118, "event": "table_file_deletion", "file_number": 185}
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000183.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900560908, "job": 118, "event": "table_file_deletion", "file_number": 183}
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:58:20.481709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:58:20.560983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:58:20.560987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:58:20.560989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:58:20.560991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:58:20 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-08:58:20.560992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:58:21 np0005603609 nova_compute[221550]: 2026-01-31 08:58:21.204 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:58:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:21.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:58:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e406 e406: 3 total, 3 up, 3 in
Jan 31 03:58:21 np0005603609 nova_compute[221550]: 2026-01-31 08:58:21.520 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updating instance_info_cache with network_info: [{"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1de0676f-0ca1-4874-bf60-193728802441", "address": "fa:16:3e:00:00:8f", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de0676f-0c", "ovs_interfaceid": "1de0676f-0ca1-4874-bf60-193728802441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:58:21 np0005603609 nova_compute[221550]: 2026-01-31 08:58:21.682 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:58:21 np0005603609 nova_compute[221550]: 2026-01-31 08:58:21.683 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:58:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:58:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:22.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:58:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e407 e407: 3 total, 3 up, 3 in
Jan 31 03:58:22 np0005603609 nova_compute[221550]: 2026-01-31 08:58:22.512 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:23.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e408 e408: 3 total, 3 up, 3 in
Jan 31 03:58:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:24.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:24 np0005603609 nova_compute[221550]: 2026-01-31 08:58:24.770 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:25.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:25 np0005603609 nova_compute[221550]: 2026-01-31 08:58:25.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:26.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:26 np0005603609 nova_compute[221550]: 2026-01-31 08:58:26.207 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:58:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:27.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:58:27 np0005603609 nova_compute[221550]: 2026-01-31 08:58:27.514 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:28.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:29.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e409 e409: 3 total, 3 up, 3 in
Jan 31 03:58:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:30.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e410 e410: 3 total, 3 up, 3 in
Jan 31 03:58:31 np0005603609 podman[313324]: 2026-01-31 08:58:31.162681355 +0000 UTC m=+0.047638614 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:58:31 np0005603609 podman[313323]: 2026-01-31 08:58:31.206863115 +0000 UTC m=+0.091752793 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 03:58:31 np0005603609 nova_compute[221550]: 2026-01-31 08:58:31.210 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:31.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:31 np0005603609 nova_compute[221550]: 2026-01-31 08:58:31.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:58:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:32.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:58:32 np0005603609 nova_compute[221550]: 2026-01-31 08:58:32.537 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:33.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:34.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:35.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:36.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:36 np0005603609 nova_compute[221550]: 2026-01-31 08:58:36.213 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:37.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:37 np0005603609 nova_compute[221550]: 2026-01-31 08:58:37.539 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:38.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:39.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:58:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:40.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:58:41 np0005603609 nova_compute[221550]: 2026-01-31 08:58:41.214 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:58:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:41.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:58:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:42.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:42 np0005603609 nova_compute[221550]: 2026-01-31 08:58:42.542 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:58:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:43.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:58:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:58:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:44.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:58:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:45.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:46.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:46 np0005603609 nova_compute[221550]: 2026-01-31 08:58:46.216 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:46 np0005603609 nova_compute[221550]: 2026-01-31 08:58:46.621 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:47.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:47 np0005603609 nova_compute[221550]: 2026-01-31 08:58:47.543 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:58:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:48.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:58:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:49.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:50.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:51 np0005603609 nova_compute[221550]: 2026-01-31 08:58:51.218 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:51.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:52.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:52 np0005603609 nova_compute[221550]: 2026-01-31 08:58:52.545 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:58:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:53.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:58:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:54.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:55.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:58:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:56.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:58:56 np0005603609 nova_compute[221550]: 2026-01-31 08:58:56.253 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:58:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:58:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:58:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:58:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:57.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:58:57 np0005603609 nova_compute[221550]: 2026-01-31 08:58:57.547 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:57 np0005603609 nova_compute[221550]: 2026-01-31 08:58:57.568 221554 DEBUG nova.compute.manager [req-8cbc5787-62ed-4597-9b5f-092684e384d5 req-a6112b9b-70e5-4089-aa78-04d92757f983 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received event network-changed-1de0676f-0ca1-4874-bf60-193728802441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:58:57 np0005603609 nova_compute[221550]: 2026-01-31 08:58:57.568 221554 DEBUG nova.compute.manager [req-8cbc5787-62ed-4597-9b5f-092684e384d5 req-a6112b9b-70e5-4089-aa78-04d92757f983 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Refreshing instance network info cache due to event network-changed-1de0676f-0ca1-4874-bf60-193728802441. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:58:57 np0005603609 nova_compute[221550]: 2026-01-31 08:58:57.569 221554 DEBUG oslo_concurrency.lockutils [req-8cbc5787-62ed-4597-9b5f-092684e384d5 req-a6112b9b-70e5-4089-aa78-04d92757f983 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:58:57 np0005603609 nova_compute[221550]: 2026-01-31 08:58:57.569 221554 DEBUG oslo_concurrency.lockutils [req-8cbc5787-62ed-4597-9b5f-092684e384d5 req-a6112b9b-70e5-4089-aa78-04d92757f983 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:58:57 np0005603609 nova_compute[221550]: 2026-01-31 08:58:57.569 221554 DEBUG nova.network.neutron [req-8cbc5787-62ed-4597-9b5f-092684e384d5 req-a6112b9b-70e5-4089-aa78-04d92757f983 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Refreshing network info cache for port 1de0676f-0ca1-4874-bf60-193728802441 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:58:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:58.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:58 np0005603609 nova_compute[221550]: 2026-01-31 08:58:58.902 221554 DEBUG nova.network.neutron [req-8cbc5787-62ed-4597-9b5f-092684e384d5 req-a6112b9b-70e5-4089-aa78-04d92757f983 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updated VIF entry in instance network info cache for port 1de0676f-0ca1-4874-bf60-193728802441. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:58:58 np0005603609 nova_compute[221550]: 2026-01-31 08:58:58.902 221554 DEBUG nova.network.neutron [req-8cbc5787-62ed-4597-9b5f-092684e384d5 req-a6112b9b-70e5-4089-aa78-04d92757f983 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updating instance_info_cache with network_info: [{"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1de0676f-0ca1-4874-bf60-193728802441", "address": "fa:16:3e:00:00:8f", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de0676f-0c", "ovs_interfaceid": "1de0676f-0ca1-4874-bf60-193728802441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:58:59 np0005603609 nova_compute[221550]: 2026-01-31 08:58:59.074 221554 DEBUG oslo_concurrency.lockutils [req-8cbc5787-62ed-4597-9b5f-092684e384d5 req-a6112b9b-70e5-4089-aa78-04d92757f983 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:58:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:58:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:58:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:59.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:59:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:00.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:01 np0005603609 nova_compute[221550]: 2026-01-31 08:59:01.256 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:01.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:02.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:02 np0005603609 podman[313521]: 2026-01-31 08:59:02.16386886 +0000 UTC m=+0.049049429 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 03:59:02 np0005603609 podman[313504]: 2026-01-31 08:59:02.188799984 +0000 UTC m=+0.073701566 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:59:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:59:02 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 03:59:02 np0005603609 nova_compute[221550]: 2026-01-31 08:59:02.549 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:59:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:03.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:59:03 np0005603609 nova_compute[221550]: 2026-01-31 08:59:03.681 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:04.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:05.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:05 np0005603609 nova_compute[221550]: 2026-01-31 08:59:05.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:06.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:06 np0005603609 nova_compute[221550]: 2026-01-31 08:59:06.259 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:59:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:07.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:59:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:07.550 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:07.550 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:07.551 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:07 np0005603609 nova_compute[221550]: 2026-01-31 08:59:07.579 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:07 np0005603609 nova_compute[221550]: 2026-01-31 08:59:07.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:08.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:59:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:09.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:59:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:10.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:10 np0005603609 nova_compute[221550]: 2026-01-31 08:59:10.677 221554 DEBUG oslo_concurrency.lockutils [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "interface-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-1de0676f-0ca1-4874-bf60-193728802441" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:10 np0005603609 nova_compute[221550]: 2026-01-31 08:59:10.678 221554 DEBUG oslo_concurrency.lockutils [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "interface-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-1de0676f-0ca1-4874-bf60-193728802441" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:10 np0005603609 nova_compute[221550]: 2026-01-31 08:59:10.761 221554 DEBUG nova.objects.instance [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'flavor' on Instance uuid c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:59:10 np0005603609 nova_compute[221550]: 2026-01-31 08:59:10.790 221554 DEBUG nova.virt.libvirt.vif [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:57:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-325444004',display_name='tempest-TestNetworkBasicOps-server-325444004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-325444004',id=207,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKCg3ytR14Nf7CENayPR51bXFfzZWnxTZmB8lC8INqq2x6QgZpD/VZdNUwq0Udix1R8WQSQlmA1Enu1ua+RdsCZeuxctqBrfeJfW5tPxL9qkV7f/1FqN7Eat1NdVebjdgg==',key_name='tempest-TestNetworkBasicOps-271661057',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:57:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-bpsj0gn1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:57:32Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1de0676f-0ca1-4874-bf60-193728802441", "address": "fa:16:3e:00:00:8f", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de0676f-0c", "ovs_interfaceid": "1de0676f-0ca1-4874-bf60-193728802441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:59:10 np0005603609 nova_compute[221550]: 2026-01-31 08:59:10.791 221554 DEBUG nova.network.os_vif_util [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "1de0676f-0ca1-4874-bf60-193728802441", "address": "fa:16:3e:00:00:8f", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de0676f-0c", "ovs_interfaceid": "1de0676f-0ca1-4874-bf60-193728802441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:59:10 np0005603609 nova_compute[221550]: 2026-01-31 08:59:10.793 221554 DEBUG nova.network.os_vif_util [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:00:8f,bridge_name='br-int',has_traffic_filtering=True,id=1de0676f-0ca1-4874-bf60-193728802441,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de0676f-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:59:10 np0005603609 nova_compute[221550]: 2026-01-31 08:59:10.797 221554 DEBUG nova.virt.libvirt.guest [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:00:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1de0676f-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:59:10 np0005603609 nova_compute[221550]: 2026-01-31 08:59:10.799 221554 DEBUG nova.virt.libvirt.guest [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:00:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1de0676f-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:59:10 np0005603609 nova_compute[221550]: 2026-01-31 08:59:10.801 221554 DEBUG nova.virt.libvirt.driver [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Attempting to detach device tap1de0676f-0c from instance c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:59:10 np0005603609 nova_compute[221550]: 2026-01-31 08:59:10.801 221554 DEBUG nova.virt.libvirt.guest [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] detach device xml: <interface type="ethernet">
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:00:00:8f"/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <target dev="tap1de0676f-0c"/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]: </interface>
Jan 31 03:59:10 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:59:10 np0005603609 nova_compute[221550]: 2026-01-31 08:59:10.918 221554 DEBUG nova.virt.libvirt.guest [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:00:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1de0676f-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:59:10 np0005603609 nova_compute[221550]: 2026-01-31 08:59:10.922 221554 DEBUG nova.virt.libvirt.guest [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:00:00:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1de0676f-0c"/></interface>not found in domain: <domain type='kvm' id='113'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <name>instance-000000cf</name>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <uuid>c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141</uuid>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <nova:name>tempest-TestNetworkBasicOps-server-325444004</nova:name>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 08:58:06</nova:creationTime>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <nova:port uuid="9ae8abb4-0619-4212-857c-5e62212e10f1">
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <nova:port uuid="1de0676f-0ca1-4874-bf60-193728802441">
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 03:59:10 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <memory unit='KiB'>131072</memory>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <vcpu placement='static'>1</vcpu>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <resource>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <partition>/machine</partition>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  </resource>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <sysinfo type='smbios'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <entry name='manufacturer'>RDO</entry>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <entry name='serial'>c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141</entry>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <entry name='uuid'>c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141</entry>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <entry name='family'>Virtual Machine</entry>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <boot dev='hd'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <smbios mode='sysinfo'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <vmcoreinfo state='on'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <model fallback='forbid'>Nehalem</model>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <feature policy='require' name='x2apic'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <feature policy='require' name='hypervisor'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <feature policy='require' name='vme'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <clock offset='utc'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <timer name='hpet' present='no'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <on_poweroff>destroy</on_poweroff>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <on_reboot>restart</on_reboot>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <on_crash>destroy</on_crash>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <disk type='network' device='disk'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk' index='2'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target dev='vda' bus='virtio'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='virtio-disk0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <disk type='network' device='cdrom'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk.config' index='1'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target dev='sda' bus='sata'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <readonly/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='sata0-0-0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pcie.0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='1' port='0x10'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.1'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='2' port='0x11'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.2'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='3' port='0x12'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.3'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='4' port='0x13'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.4'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='5' port='0x14'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.5'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='6' port='0x15'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.6'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='7' port='0x16'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.7'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='8' port='0x17'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.8'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='9' port='0x18'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.9'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='10' port='0x19'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.10'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='11' port='0x1a'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.11'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='12' port='0x1b'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.12'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='13' port='0x1c'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.13'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='14' port='0x1d'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.14'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='15' port='0x1e'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.15'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='16' port='0x1f'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.16'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='17' port='0x20'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.17'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='18' port='0x21'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.18'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='19' port='0x22'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.19'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='20' port='0x23'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.20'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='21' port='0x24'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.21'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='22' port='0x25'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.22'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='23' port='0x26'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.23'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='24' port='0x27'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.24'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target chassis='25' port='0x28'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.25'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model name='pcie-pci-bridge'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='pci.26'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='usb'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <controller type='sata' index='0'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='ide'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:2d:67:50'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target dev='tap9ae8abb4-06'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='net0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:00:00:8f'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target dev='tap1de0676f-0c'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='net1'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <serial type='pty'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141/console.log' append='off'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target type='isa-serial' port='0'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:        <model name='isa-serial'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      </target>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141/console.log' append='off'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <target type='serial' port='0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </console>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <input type='tablet' bus='usb'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='input0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='usb' bus='0' port='1'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <input type='mouse' bus='ps2'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='input1'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <input type='keyboard' bus='ps2'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='input2'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <listen type='address' address='::0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <audio id='1' type='none'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='video0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <watchdog model='itco' action='reset'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='watchdog0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </watchdog>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <memballoon model='virtio'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <stats period='10'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='balloon0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <rng model='virtio'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <backend model='random'>/dev/urandom</backend>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <alias name='rng0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <label>system_u:system_r:svirt_t:s0:c478,c672</label>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c478,c672</imagelabel>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <label>+107:+107</label>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:    <imagelabel>+107:+107</imagelabel>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 03:59:10 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:59:10 np0005603609 nova_compute[221550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 31 03:59:10 np0005603609 nova_compute[221550]: 2026-01-31 08:59:10.923 221554 INFO nova.virt.libvirt.driver [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully detached device tap1de0676f-0c from instance c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 from the persistent domain config.#033[00m
Jan 31 03:59:10 np0005603609 nova_compute[221550]: 2026-01-31 08:59:10.923 221554 DEBUG nova.virt.libvirt.driver [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] (1/8): Attempting to detach device tap1de0676f-0c with device alias net1 from instance c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 03:59:10 np0005603609 nova_compute[221550]: 2026-01-31 08:59:10.924 221554 DEBUG nova.virt.libvirt.guest [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] detach device xml: <interface type="ethernet">
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <mac address="fa:16:3e:00:00:8f"/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <model type="virtio"/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <mtu size="1442"/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]:  <target dev="tap1de0676f-0c"/>
Jan 31 03:59:10 np0005603609 nova_compute[221550]: </interface>
Jan 31 03:59:10 np0005603609 nova_compute[221550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:59:11 np0005603609 kernel: tap1de0676f-0c (unregistering): left promiscuous mode
Jan 31 03:59:11 np0005603609 NetworkManager[49064]: <info>  [1769849951.0344] device (tap1de0676f-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:59:11 np0005603609 ovn_controller[130359]: 2026-01-31T08:59:11Z|00971|binding|INFO|Releasing lport 1de0676f-0ca1-4874-bf60-193728802441 from this chassis (sb_readonly=0)
Jan 31 03:59:11 np0005603609 ovn_controller[130359]: 2026-01-31T08:59:11Z|00972|binding|INFO|Setting lport 1de0676f-0ca1-4874-bf60-193728802441 down in Southbound
Jan 31 03:59:11 np0005603609 ovn_controller[130359]: 2026-01-31T08:59:11Z|00973|binding|INFO|Removing iface tap1de0676f-0c ovn-installed in OVS
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.042 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.043 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.049 221554 DEBUG nova.virt.libvirt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Received event <DeviceRemovedEvent: 1769849951.0487432, c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.050 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.051 221554 DEBUG nova.virt.libvirt.driver [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Start waiting for the detach event from libvirt for device tap1de0676f-0c with device alias net1 for instance c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.051 221554 DEBUG nova.virt.libvirt.guest [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:00:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1de0676f-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.055 221554 DEBUG nova.virt.libvirt.guest [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:00:00:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1de0676f-0c"/></interface>not found in domain: <domain type='kvm' id='113'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <name>instance-000000cf</name>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <uuid>c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141</uuid>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <nova:name>tempest-TestNetworkBasicOps-server-325444004</nova:name>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 08:58:06</nova:creationTime>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:port uuid="9ae8abb4-0619-4212-857c-5e62212e10f1">
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:port uuid="1de0676f-0ca1-4874-bf60-193728802441">
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 03:59:11 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <memory unit='KiB'>131072</memory>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <vcpu placement='static'>1</vcpu>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <resource>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <partition>/machine</partition>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  </resource>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <sysinfo type='smbios'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <entry name='manufacturer'>RDO</entry>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <entry name='serial'>c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141</entry>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <entry name='uuid'>c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141</entry>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <entry name='family'>Virtual Machine</entry>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <boot dev='hd'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <smbios mode='sysinfo'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <vmcoreinfo state='on'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <model fallback='forbid'>Nehalem</model>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <feature policy='require' name='x2apic'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <feature policy='require' name='hypervisor'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <feature policy='require' name='vme'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <clock offset='utc'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <timer name='hpet' present='no'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <on_poweroff>destroy</on_poweroff>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <on_reboot>restart</on_reboot>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <on_crash>destroy</on_crash>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <disk type='network' device='disk'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk' index='2'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target dev='vda' bus='virtio'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='virtio-disk0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <disk type='network' device='cdrom'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk.config' index='1'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target dev='sda' bus='sata'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <readonly/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='sata0-0-0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pcie.0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='1' port='0x10'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.1'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='2' port='0x11'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.2'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='3' port='0x12'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.3'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='4' port='0x13'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.4'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='5' port='0x14'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.5'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='6' port='0x15'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.6'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='7' port='0x16'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.7'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='8' port='0x17'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.8'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='9' port='0x18'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.9'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='10' port='0x19'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.10'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='11' port='0x1a'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.11'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='12' port='0x1b'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.12'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='13' port='0x1c'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.13'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='14' port='0x1d'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.14'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='15' port='0x1e'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.15'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='16' port='0x1f'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.16'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='17' port='0x20'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.17'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='18' port='0x21'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.18'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='19' port='0x22'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.19'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='20' port='0x23'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.20'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='21' port='0x24'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.21'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='22' port='0x25'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.22'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='23' port='0x26'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.23'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='24' port='0x27'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.24'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target chassis='25' port='0x28'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.25'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model name='pcie-pci-bridge'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='pci.26'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='usb'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <controller type='sata' index='0'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='ide'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:2d:67:50'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target dev='tap9ae8abb4-06'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='net0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <serial type='pty'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141/console.log' append='off'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target type='isa-serial' port='0'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:        <model name='isa-serial'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      </target>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141/console.log' append='off'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <target type='serial' port='0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </console>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <input type='tablet' bus='usb'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='input0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='usb' bus='0' port='1'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <input type='mouse' bus='ps2'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='input1'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <input type='keyboard' bus='ps2'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='input2'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <listen type='address' address='::0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <audio id='1' type='none'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='video0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <watchdog model='itco' action='reset'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='watchdog0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </watchdog>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <memballoon model='virtio'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <stats period='10'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='balloon0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <rng model='virtio'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <backend model='random'>/dev/urandom</backend>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <alias name='rng0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <label>system_u:system_r:svirt_t:s0:c478,c672</label>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c478,c672</imagelabel>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <label>+107:+107</label>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <imagelabel>+107:+107</imagelabel>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 03:59:11 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:59:11 np0005603609 nova_compute[221550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.055 221554 INFO nova.virt.libvirt.driver [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully detached device tap1de0676f-0c from instance c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 from the live domain config.
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.055 221554 DEBUG nova.virt.libvirt.vif [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:57:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-325444004',display_name='tempest-TestNetworkBasicOps-server-325444004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-325444004',id=207,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKCg3ytR14Nf7CENayPR51bXFfzZWnxTZmB8lC8INqq2x6QgZpD/VZdNUwq0Udix1R8WQSQlmA1Enu1ua+RdsCZeuxctqBrfeJfW5tPxL9qkV7f/1FqN7Eat1NdVebjdgg==',key_name='tempest-TestNetworkBasicOps-271661057',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:57:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-bpsj0gn1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:57:32Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1de0676f-0ca1-4874-bf60-193728802441", "address": "fa:16:3e:00:00:8f", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de0676f-0c", "ovs_interfaceid": "1de0676f-0ca1-4874-bf60-193728802441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.056 221554 DEBUG nova.network.os_vif_util [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "1de0676f-0ca1-4874-bf60-193728802441", "address": "fa:16:3e:00:00:8f", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de0676f-0c", "ovs_interfaceid": "1de0676f-0ca1-4874-bf60-193728802441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.056 221554 DEBUG nova.network.os_vif_util [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:00:8f,bridge_name='br-int',has_traffic_filtering=True,id=1de0676f-0ca1-4874-bf60-193728802441,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de0676f-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.056 221554 DEBUG os_vif [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:00:8f,bridge_name='br-int',has_traffic_filtering=True,id=1de0676f-0ca1-4874-bf60-193728802441,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de0676f-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.058 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.058 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1de0676f-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.061 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.063 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.073 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.075 221554 INFO os_vif [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:00:8f,bridge_name='br-int',has_traffic_filtering=True,id=1de0676f-0ca1-4874-bf60-193728802441,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de0676f-0c')#033[00m
Jan 31 03:59:11 np0005603609 nova_compute[221550]: 2026-01-31 08:59:11.076 221554 DEBUG nova.virt.libvirt.guest [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <nova:name>tempest-TestNetworkBasicOps-server-325444004</nova:name>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 08:59:11</nova:creationTime>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    <nova:port uuid="9ae8abb4-0619-4212-857c-5e62212e10f1">
Jan 31 03:59:11 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:59:11 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 03:59:11 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 03:59:11 np0005603609 nova_compute[221550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 03:59:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:11.116 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:00:8f 10.100.0.23', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5071208b-ad60-4226-80bf-e8d76f33781f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82a39089-ebf2-4133-93ff-475b6e4ca7f0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=1de0676f-0ca1-4874-bf60-193728802441) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:59:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:11.117 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 1de0676f-0ca1-4874-bf60-193728802441 in datapath 5071208b-ad60-4226-80bf-e8d76f33781f unbound from our chassis#033[00m
Jan 31 03:59:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:11.118 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5071208b-ad60-4226-80bf-e8d76f33781f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:59:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:11.119 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e67c3b-1f15-4dd2-bb9e-45541135747f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:11.119 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f namespace which is not needed anymore#033[00m
Jan 31 03:59:11 np0005603609 neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f[313261]: [NOTICE]   (313265) : haproxy version is 2.8.14-c23fe91
Jan 31 03:59:11 np0005603609 neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f[313261]: [NOTICE]   (313265) : path to executable is /usr/sbin/haproxy
Jan 31 03:59:11 np0005603609 neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f[313261]: [WARNING]  (313265) : Exiting Master process...
Jan 31 03:59:11 np0005603609 neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f[313261]: [ALERT]    (313265) : Current worker (313267) exited with code 143 (Terminated)
Jan 31 03:59:11 np0005603609 neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f[313261]: [WARNING]  (313265) : All workers exited. Exiting... (0)
Jan 31 03:59:11 np0005603609 systemd[1]: libpod-89a0cda7c84ca966e36e5512b2976620a7dc006582bbc38aa8344db554ef45c6.scope: Deactivated successfully.
Jan 31 03:59:11 np0005603609 podman[313621]: 2026-01-31 08:59:11.298349424 +0000 UTC m=+0.118296845 container died 89a0cda7c84ca966e36e5512b2976620a7dc006582bbc38aa8344db554ef45c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:59:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:11.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:11 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-89a0cda7c84ca966e36e5512b2976620a7dc006582bbc38aa8344db554ef45c6-userdata-shm.mount: Deactivated successfully.
Jan 31 03:59:11 np0005603609 systemd[1]: var-lib-containers-storage-overlay-5ffe3643c2443fc951f923f17eba0b7257ba4c4dec2e16f09e844c8f3277c667-merged.mount: Deactivated successfully.
Jan 31 03:59:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:12.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:12 np0005603609 podman[313621]: 2026-01-31 08:59:12.132837897 +0000 UTC m=+0.952785318 container cleanup 89a0cda7c84ca966e36e5512b2976620a7dc006582bbc38aa8344db554ef45c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:59:12 np0005603609 systemd[1]: libpod-conmon-89a0cda7c84ca966e36e5512b2976620a7dc006582bbc38aa8344db554ef45c6.scope: Deactivated successfully.
Jan 31 03:59:12 np0005603609 nova_compute[221550]: 2026-01-31 08:59:12.582 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:12 np0005603609 nova_compute[221550]: 2026-01-31 08:59:12.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:12 np0005603609 nova_compute[221550]: 2026-01-31 08:59:12.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:59:12 np0005603609 podman[313653]: 2026-01-31 08:59:12.664864167 +0000 UTC m=+0.516223049 container remove 89a0cda7c84ca966e36e5512b2976620a7dc006582bbc38aa8344db554ef45c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:59:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:12.669 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[98958721-9951-4609-96f3-4647f9e7d989]: (4, ('Sat Jan 31 08:59:11 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f (89a0cda7c84ca966e36e5512b2976620a7dc006582bbc38aa8344db554ef45c6)\n89a0cda7c84ca966e36e5512b2976620a7dc006582bbc38aa8344db554ef45c6\nSat Jan 31 08:59:12 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f (89a0cda7c84ca966e36e5512b2976620a7dc006582bbc38aa8344db554ef45c6)\n89a0cda7c84ca966e36e5512b2976620a7dc006582bbc38aa8344db554ef45c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:12.671 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[52a45e7a-f665-4244-829c-366f8e5821c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:12.672 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5071208b-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:12 np0005603609 nova_compute[221550]: 2026-01-31 08:59:12.673 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:12 np0005603609 kernel: tap5071208b-a0: left promiscuous mode
Jan 31 03:59:12 np0005603609 nova_compute[221550]: 2026-01-31 08:59:12.678 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:12.680 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[98d8ab2d-dab5-4c50-a421-411055a6286f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:12.694 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6457a27c-ce96-489e-8dd4-cba44ced1d1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:12.696 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[75a2e71c-64b6-468b-a76e-75355365140f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:12.705 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6fdfc561-8deb-4ab2-8c29-ae1ff9a45fc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1012598, 'reachable_time': 15214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313670, 'error': None, 'target': 'ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:12 np0005603609 systemd[1]: run-netns-ovnmeta\x2d5071208b\x2dad60\x2d4226\x2d80bf\x2de8d76f33781f.mount: Deactivated successfully.
Jan 31 03:59:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:12.706 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5071208b-ad60-4226-80bf-e8d76f33781f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:59:12 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:12.707 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[242c0249-1d84-4a6c-a920-3f0afeb856cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:12 np0005603609 nova_compute[221550]: 2026-01-31 08:59:12.854 221554 DEBUG nova.compute.manager [req-46ded030-f3fa-44e3-93a1-f74e6c430612 req-e4674eee-ab80-4d63-a0e1-bf12b2d41458 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received event network-vif-unplugged-1de0676f-0ca1-4874-bf60-193728802441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:59:12 np0005603609 nova_compute[221550]: 2026-01-31 08:59:12.854 221554 DEBUG oslo_concurrency.lockutils [req-46ded030-f3fa-44e3-93a1-f74e6c430612 req-e4674eee-ab80-4d63-a0e1-bf12b2d41458 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:12 np0005603609 nova_compute[221550]: 2026-01-31 08:59:12.855 221554 DEBUG oslo_concurrency.lockutils [req-46ded030-f3fa-44e3-93a1-f74e6c430612 req-e4674eee-ab80-4d63-a0e1-bf12b2d41458 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:12 np0005603609 nova_compute[221550]: 2026-01-31 08:59:12.855 221554 DEBUG oslo_concurrency.lockutils [req-46ded030-f3fa-44e3-93a1-f74e6c430612 req-e4674eee-ab80-4d63-a0e1-bf12b2d41458 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:12 np0005603609 nova_compute[221550]: 2026-01-31 08:59:12.855 221554 DEBUG nova.compute.manager [req-46ded030-f3fa-44e3-93a1-f74e6c430612 req-e4674eee-ab80-4d63-a0e1-bf12b2d41458 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] No waiting events found dispatching network-vif-unplugged-1de0676f-0ca1-4874-bf60-193728802441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:59:12 np0005603609 nova_compute[221550]: 2026-01-31 08:59:12.855 221554 WARNING nova.compute.manager [req-46ded030-f3fa-44e3-93a1-f74e6c430612 req-e4674eee-ab80-4d63-a0e1-bf12b2d41458 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received unexpected event network-vif-unplugged-1de0676f-0ca1-4874-bf60-193728802441 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:59:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:13.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:14.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.273 221554 DEBUG oslo_concurrency.lockutils [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.273 221554 DEBUG oslo_concurrency.lockutils [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.274 221554 DEBUG nova.network.neutron [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.288 221554 DEBUG nova.compute.manager [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received event network-vif-deleted-1de0676f-0ca1-4874-bf60-193728802441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.289 221554 INFO nova.compute.manager [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Neutron deleted interface 1de0676f-0ca1-4874-bf60-193728802441; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.289 221554 DEBUG nova.network.neutron [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updating instance_info_cache with network_info: [{"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.388 221554 DEBUG nova.objects.instance [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lazy-loading 'system_metadata' on Instance uuid c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.560 221554 DEBUG nova.objects.instance [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lazy-loading 'flavor' on Instance uuid c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.893 221554 DEBUG nova.virt.libvirt.vif [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:57:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-325444004',display_name='tempest-TestNetworkBasicOps-server-325444004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-325444004',id=207,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKCg3ytR14Nf7CENayPR51bXFfzZWnxTZmB8lC8INqq2x6QgZpD/VZdNUwq0Udix1R8WQSQlmA1Enu1ua+RdsCZeuxctqBrfeJfW5tPxL9qkV7f/1FqN7Eat1NdVebjdgg==',key_name='tempest-TestNetworkBasicOps-271661057',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:57:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-bpsj0gn1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:57:32Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1de0676f-0ca1-4874-bf60-193728802441", "address": "fa:16:3e:00:00:8f", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de0676f-0c", "ovs_interfaceid": "1de0676f-0ca1-4874-bf60-193728802441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.894 221554 DEBUG nova.network.os_vif_util [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converting VIF {"id": "1de0676f-0ca1-4874-bf60-193728802441", "address": "fa:16:3e:00:00:8f", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de0676f-0c", "ovs_interfaceid": "1de0676f-0ca1-4874-bf60-193728802441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.894 221554 DEBUG nova.network.os_vif_util [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:00:8f,bridge_name='br-int',has_traffic_filtering=True,id=1de0676f-0ca1-4874-bf60-193728802441,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de0676f-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.897 221554 DEBUG nova.virt.libvirt.guest [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:00:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1de0676f-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.899 221554 DEBUG nova.virt.libvirt.guest [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:00:00:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1de0676f-0c"/></interface>not found in domain: <domain type='kvm' id='113'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <name>instance-000000cf</name>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <uuid>c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141</uuid>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:name>tempest-TestNetworkBasicOps-server-325444004</nova:name>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 08:59:11</nova:creationTime>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:port uuid="9ae8abb4-0619-4212-857c-5e62212e10f1">
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 03:59:14 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <memory unit='KiB'>131072</memory>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <vcpu placement='static'>1</vcpu>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <resource>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <partition>/machine</partition>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </resource>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <sysinfo type='smbios'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <entry name='manufacturer'>RDO</entry>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <entry name='serial'>c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141</entry>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <entry name='uuid'>c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141</entry>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <entry name='family'>Virtual Machine</entry>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <boot dev='hd'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <smbios mode='sysinfo'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <vmcoreinfo state='on'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <model fallback='forbid'>Nehalem</model>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <feature policy='require' name='x2apic'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <feature policy='require' name='hypervisor'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <feature policy='require' name='vme'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <clock offset='utc'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <timer name='hpet' present='no'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <on_poweroff>destroy</on_poweroff>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <on_reboot>restart</on_reboot>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <on_crash>destroy</on_crash>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <disk type='network' device='disk'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk' index='2'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target dev='vda' bus='virtio'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='virtio-disk0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <disk type='network' device='cdrom'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk.config' index='1'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target dev='sda' bus='sata'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <readonly/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='sata0-0-0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pcie.0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='1' port='0x10'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.1'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='2' port='0x11'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.2'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='3' port='0x12'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.3'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='4' port='0x13'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.4'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='5' port='0x14'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.5'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='6' port='0x15'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.6'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='7' port='0x16'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.7'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='8' port='0x17'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.8'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='9' port='0x18'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.9'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='10' port='0x19'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.10'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='11' port='0x1a'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.11'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='12' port='0x1b'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.12'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='13' port='0x1c'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.13'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='14' port='0x1d'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.14'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='15' port='0x1e'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.15'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='16' port='0x1f'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.16'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='17' port='0x20'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.17'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='18' port='0x21'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.18'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='19' port='0x22'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.19'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='20' port='0x23'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.20'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='21' port='0x24'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.21'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='22' port='0x25'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.22'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='23' port='0x26'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.23'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='24' port='0x27'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.24'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='25' port='0x28'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.25'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-pci-bridge'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.26'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='usb'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='sata' index='0'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='ide'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:2d:67:50'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target dev='tap9ae8abb4-06'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='net0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <serial type='pty'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141/console.log' append='off'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target type='isa-serial' port='0'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <model name='isa-serial'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      </target>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141/console.log' append='off'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target type='serial' port='0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </console>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <input type='tablet' bus='usb'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='input0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='usb' bus='0' port='1'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <input type='mouse' bus='ps2'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='input1'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <input type='keyboard' bus='ps2'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='input2'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <listen type='address' address='::0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <audio id='1' type='none'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='video0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <watchdog model='itco' action='reset'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='watchdog0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </watchdog>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <memballoon model='virtio'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <stats period='10'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='balloon0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <rng model='virtio'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <backend model='random'>/dev/urandom</backend>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='rng0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <label>system_u:system_r:svirt_t:s0:c478,c672</label>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c478,c672</imagelabel>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <label>+107:+107</label>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <imagelabel>+107:+107</imagelabel>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 03:59:14 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:59:14 np0005603609 nova_compute[221550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.901 221554 DEBUG nova.virt.libvirt.guest [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:00:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1de0676f-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.903 221554 DEBUG nova.virt.libvirt.guest [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:00:00:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1de0676f-0c"/></interface> not found in domain: <domain type='kvm' id='113'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <name>instance-000000cf</name>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <uuid>c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141</uuid>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:name>tempest-TestNetworkBasicOps-server-325444004</nova:name>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 08:59:11</nova:creationTime>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:port uuid="9ae8abb4-0619-4212-857c-5e62212e10f1">
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 03:59:14 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <memory unit='KiB'>131072</memory>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <vcpu placement='static'>1</vcpu>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <resource>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <partition>/machine</partition>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </resource>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <sysinfo type='smbios'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <system>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <entry name='manufacturer'>RDO</entry>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <entry name='serial'>c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141</entry>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <entry name='uuid'>c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141</entry>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <entry name='family'>Virtual Machine</entry>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </system>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <os>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <boot dev='hd'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <smbios mode='sysinfo'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </os>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <features>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <vmcoreinfo state='on'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </features>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <model fallback='forbid'>Nehalem</model>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <feature policy='require' name='x2apic'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <feature policy='require' name='hypervisor'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <feature policy='require' name='vme'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <clock offset='utc'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <timer name='hpet' present='no'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </clock>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <on_poweroff>destroy</on_poweroff>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <on_reboot>restart</on_reboot>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <on_crash>destroy</on_crash>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <devices>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <disk type='network' device='disk'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk' index='2'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target dev='vda' bus='virtio'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='virtio-disk0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <disk type='network' device='cdrom'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <auth username='openstack'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      </auth>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <source protocol='rbd' name='vms/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_disk.config' index='1'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      </source>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target dev='sda' bus='sata'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <readonly/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='sata0-0-0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </disk>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pcie.0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='1' port='0x10'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.1'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='2' port='0x11'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.2'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='3' port='0x12'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.3'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='4' port='0x13'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.4'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='5' port='0x14'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.5'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='6' port='0x15'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.6'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='7' port='0x16'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.7'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='8' port='0x17'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.8'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='9' port='0x18'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.9'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='10' port='0x19'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.10'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='11' port='0x1a'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.11'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='12' port='0x1b'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.12'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='13' port='0x1c'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.13'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='14' port='0x1d'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.14'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='15' port='0x1e'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.15'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='16' port='0x1f'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.16'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='17' port='0x20'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.17'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='18' port='0x21'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.18'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='19' port='0x22'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.19'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='20' port='0x23'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.20'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='21' port='0x24'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.21'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='22' port='0x25'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.22'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='23' port='0x26'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.23'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='24' port='0x27'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.24'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-root-port'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target chassis='25' port='0x28'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.25'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model name='pcie-pci-bridge'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='pci.26'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='usb'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <controller type='sata' index='0'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='ide'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </controller>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <interface type='ethernet'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <mac address='fa:16:3e:2d:67:50'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target dev='tap9ae8abb4-06'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model type='virtio'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <mtu size='1442'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='net0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </interface>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <serial type='pty'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141/console.log' append='off'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target type='isa-serial' port='0'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:        <model name='isa-serial'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      </target>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </serial>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <source path='/dev/pts/0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <log file='/var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141/console.log' append='off'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <target type='serial' port='0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='serial0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </console>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <input type='tablet' bus='usb'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='input0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='usb' bus='0' port='1'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <input type='mouse' bus='ps2'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='input1'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <input type='keyboard' bus='ps2'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='input2'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </input>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <listen type='address' address='::0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </graphics>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <audio id='1' type='none'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <video>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='video0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </video>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <watchdog model='itco' action='reset'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='watchdog0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </watchdog>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <memballoon model='virtio'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <stats period='10'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='balloon0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <rng model='virtio'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <backend model='random'>/dev/urandom</backend>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <alias name='rng0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </rng>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </devices>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <label>system_u:system_r:svirt_t:s0:c478,c672</label>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c478,c672</imagelabel>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <label>+107:+107</label>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <imagelabel>+107:+107</imagelabel>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </seclabel>
Jan 31 03:59:14 np0005603609 nova_compute[221550]: </domain>
Jan 31 03:59:14 np0005603609 nova_compute[221550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.905 221554 WARNING nova.virt.libvirt.driver [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Detaching interface fa:16:3e:00:00:8f failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap1de0676f-0c' not found.#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.906 221554 DEBUG nova.virt.libvirt.vif [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:57:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-325444004',display_name='tempest-TestNetworkBasicOps-server-325444004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-325444004',id=207,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKCg3ytR14Nf7CENayPR51bXFfzZWnxTZmB8lC8INqq2x6QgZpD/VZdNUwq0Udix1R8WQSQlmA1Enu1ua+RdsCZeuxctqBrfeJfW5tPxL9qkV7f/1FqN7Eat1NdVebjdgg==',key_name='tempest-TestNetworkBasicOps-271661057',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:57:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-bpsj0gn1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:57:32Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1de0676f-0ca1-4874-bf60-193728802441", "address": "fa:16:3e:00:00:8f", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de0676f-0c", "ovs_interfaceid": "1de0676f-0ca1-4874-bf60-193728802441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.906 221554 DEBUG nova.network.os_vif_util [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converting VIF {"id": "1de0676f-0ca1-4874-bf60-193728802441", "address": "fa:16:3e:00:00:8f", "network": {"id": "5071208b-ad60-4226-80bf-e8d76f33781f", "bridge": "br-int", "label": "tempest-network-smoke--458584935", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1de0676f-0c", "ovs_interfaceid": "1de0676f-0ca1-4874-bf60-193728802441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.907 221554 DEBUG nova.network.os_vif_util [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:00:8f,bridge_name='br-int',has_traffic_filtering=True,id=1de0676f-0ca1-4874-bf60-193728802441,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de0676f-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.907 221554 DEBUG os_vif [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:00:8f,bridge_name='br-int',has_traffic_filtering=True,id=1de0676f-0ca1-4874-bf60-193728802441,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de0676f-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.908 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.909 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1de0676f-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.909 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.911 221554 INFO os_vif [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:00:8f,bridge_name='br-int',has_traffic_filtering=True,id=1de0676f-0ca1-4874-bf60-193728802441,network=Network(5071208b-ad60-4226-80bf-e8d76f33781f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1de0676f-0c')#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.912 221554 DEBUG nova.virt.libvirt.guest [req-f401e371-9518-4366-9014-fd1a7951bdb2 req-20bc9510-bd64-40e7-b353-7fa654df0396 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:name>tempest-TestNetworkBasicOps-server-325444004</nova:name>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:creationTime>2026-01-31 08:59:14</nova:creationTime>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:flavor name="m1.nano">
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:memory>128</nova:memory>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:disk>1</nova:disk>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:swap>0</nova:swap>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </nova:flavor>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:owner>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </nova:owner>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  <nova:ports>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    <nova:port uuid="9ae8abb4-0619-4212-857c-5e62212e10f1">
Jan 31 03:59:14 np0005603609 nova_compute[221550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:    </nova:port>
Jan 31 03:59:14 np0005603609 nova_compute[221550]:  </nova:ports>
Jan 31 03:59:14 np0005603609 nova_compute[221550]: </nova:instance>
Jan 31 03:59:14 np0005603609 nova_compute[221550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.991 221554 DEBUG nova.compute.manager [req-d07ddd0b-ab4f-4542-818a-821e752c9884 req-8cb9bba9-d513-4071-91cb-29be1089c386 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received event network-vif-plugged-1de0676f-0ca1-4874-bf60-193728802441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.991 221554 DEBUG oslo_concurrency.lockutils [req-d07ddd0b-ab4f-4542-818a-821e752c9884 req-8cb9bba9-d513-4071-91cb-29be1089c386 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.992 221554 DEBUG oslo_concurrency.lockutils [req-d07ddd0b-ab4f-4542-818a-821e752c9884 req-8cb9bba9-d513-4071-91cb-29be1089c386 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.992 221554 DEBUG oslo_concurrency.lockutils [req-d07ddd0b-ab4f-4542-818a-821e752c9884 req-8cb9bba9-d513-4071-91cb-29be1089c386 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.992 221554 DEBUG nova.compute.manager [req-d07ddd0b-ab4f-4542-818a-821e752c9884 req-8cb9bba9-d513-4071-91cb-29be1089c386 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] No waiting events found dispatching network-vif-plugged-1de0676f-0ca1-4874-bf60-193728802441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:59:14 np0005603609 nova_compute[221550]: 2026-01-31 08:59:14.993 221554 WARNING nova.compute.manager [req-d07ddd0b-ab4f-4542-818a-821e752c9884 req-8cb9bba9-d513-4071-91cb-29be1089c386 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received unexpected event network-vif-plugged-1de0676f-0ca1-4874-bf60-193728802441 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:59:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:15.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:15 np0005603609 nova_compute[221550]: 2026-01-31 08:59:15.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:15 np0005603609 nova_compute[221550]: 2026-01-31 08:59:15.995 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:15 np0005603609 nova_compute[221550]: 2026-01-31 08:59:15.995 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:15 np0005603609 nova_compute[221550]: 2026-01-31 08:59:15.996 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:15 np0005603609 nova_compute[221550]: 2026-01-31 08:59:15.996 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:59:15 np0005603609 nova_compute[221550]: 2026-01-31 08:59:15.996 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:59:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:16.015 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=99, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=98) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:59:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:16.017 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:59:16 np0005603609 nova_compute[221550]: 2026-01-31 08:59:16.017 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:16 np0005603609 nova_compute[221550]: 2026-01-31 08:59:16.061 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:16.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:59:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1437794474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:59:16 np0005603609 nova_compute[221550]: 2026-01-31 08:59:16.438 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:16 np0005603609 ovn_controller[130359]: 2026-01-31T08:59:16Z|00974|binding|INFO|Releasing lport c2e351b7-d03f-4dee-9d1a-9812c248f934 from this chassis (sb_readonly=0)
Jan 31 03:59:16 np0005603609 nova_compute[221550]: 2026-01-31 08:59:16.640 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:16 np0005603609 nova_compute[221550]: 2026-01-31 08:59:16.723 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:59:16 np0005603609 nova_compute[221550]: 2026-01-31 08:59:16.723 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:59:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:16 np0005603609 nova_compute[221550]: 2026-01-31 08:59:16.855 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:59:16 np0005603609 nova_compute[221550]: 2026-01-31 08:59:16.857 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3985MB free_disk=20.94240951538086GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:59:16 np0005603609 nova_compute[221550]: 2026-01-31 08:59:16.857 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:16 np0005603609 nova_compute[221550]: 2026-01-31 08:59:16.857 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:16 np0005603609 nova_compute[221550]: 2026-01-31 08:59:16.874 221554 INFO nova.network.neutron [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Port 1de0676f-0ca1-4874-bf60-193728802441 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 31 03:59:16 np0005603609 nova_compute[221550]: 2026-01-31 08:59:16.874 221554 DEBUG nova.network.neutron [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updating instance_info_cache with network_info: [{"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:59:16 np0005603609 nova_compute[221550]: 2026-01-31 08:59:16.937 221554 DEBUG oslo_concurrency.lockutils [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:59:17 np0005603609 nova_compute[221550]: 2026-01-31 08:59:17.008 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:59:17 np0005603609 nova_compute[221550]: 2026-01-31 08:59:17.008 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:59:17 np0005603609 nova_compute[221550]: 2026-01-31 08:59:17.008 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:59:17 np0005603609 nova_compute[221550]: 2026-01-31 08:59:17.066 221554 DEBUG oslo_concurrency.lockutils [None req-e42355fe-18b1-46a1-83f7-917010fc8564 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "interface-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-1de0676f-0ca1-4874-bf60-193728802441" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 6.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:17 np0005603609 nova_compute[221550]: 2026-01-31 08:59:17.135 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:59:17 np0005603609 nova_compute[221550]: 2026-01-31 08:59:17.167 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:59:17 np0005603609 nova_compute[221550]: 2026-01-31 08:59:17.168 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:59:17 np0005603609 nova_compute[221550]: 2026-01-31 08:59:17.181 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:59:17 np0005603609 nova_compute[221550]: 2026-01-31 08:59:17.208 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:59:17 np0005603609 nova_compute[221550]: 2026-01-31 08:59:17.246 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:59:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:17.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:17 np0005603609 nova_compute[221550]: 2026-01-31 08:59:17.584 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:59:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/929412592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:59:17 np0005603609 nova_compute[221550]: 2026-01-31 08:59:17.668 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:17 np0005603609 nova_compute[221550]: 2026-01-31 08:59:17.673 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:59:17 np0005603609 nova_compute[221550]: 2026-01-31 08:59:17.717 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:59:17 np0005603609 nova_compute[221550]: 2026-01-31 08:59:17.873 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:59:17 np0005603609 nova_compute[221550]: 2026-01-31 08:59:17.873 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:18.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.487 221554 DEBUG oslo_concurrency.lockutils [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.487 221554 DEBUG oslo_concurrency.lockutils [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.488 221554 DEBUG oslo_concurrency.lockutils [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.488 221554 DEBUG oslo_concurrency.lockutils [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.488 221554 DEBUG oslo_concurrency.lockutils [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.489 221554 INFO nova.compute.manager [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Terminating instance#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.490 221554 DEBUG nova.compute.manager [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:59:18 np0005603609 kernel: tap9ae8abb4-06 (unregistering): left promiscuous mode
Jan 31 03:59:18 np0005603609 NetworkManager[49064]: <info>  [1769849958.5506] device (tap9ae8abb4-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:59:18 np0005603609 ovn_controller[130359]: 2026-01-31T08:59:18Z|00975|binding|INFO|Releasing lport 9ae8abb4-0619-4212-857c-5e62212e10f1 from this chassis (sb_readonly=0)
Jan 31 03:59:18 np0005603609 ovn_controller[130359]: 2026-01-31T08:59:18Z|00976|binding|INFO|Setting lport 9ae8abb4-0619-4212-857c-5e62212e10f1 down in Southbound
Jan 31 03:59:18 np0005603609 ovn_controller[130359]: 2026-01-31T08:59:18Z|00977|binding|INFO|Removing iface tap9ae8abb4-06 ovn-installed in OVS
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.557 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.558 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.567 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:18 np0005603609 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d000000cf.scope: Deactivated successfully.
Jan 31 03:59:18 np0005603609 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d000000cf.scope: Consumed 17.388s CPU time.
Jan 31 03:59:18 np0005603609 systemd-machined[190912]: Machine qemu-113-instance-000000cf terminated.
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.725 221554 INFO nova.virt.libvirt.driver [-] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Instance destroyed successfully.#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.726 221554 DEBUG nova.objects.instance [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'resources' on Instance uuid c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:59:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:18.808 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:67:50 10.100.0.14'], port_security=['fa:16:3e:2d:67:50 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7136c4a9-529c-4978-b239-a8adc0348e1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05fa5eca-e3e2-490f-93c3-a165799ce264', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc343683-1015-4ebe-a3f3-14c358ce76bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=9ae8abb4-0619-4212-857c-5e62212e10f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:59:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:18.809 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 9ae8abb4-0619-4212-857c-5e62212e10f1 in datapath 7136c4a9-529c-4978-b239-a8adc0348e1b unbound from our chassis#033[00m
Jan 31 03:59:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:18.810 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7136c4a9-529c-4978-b239-a8adc0348e1b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:59:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:18.811 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fb9179bb-43db-46c6-b8df-27bcf37cb89b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:18 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:18.811 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b namespace which is not needed anymore#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.873 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.873 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.874 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.884 221554 DEBUG nova.virt.libvirt.vif [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:57:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-325444004',display_name='tempest-TestNetworkBasicOps-server-325444004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-325444004',id=207,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKCg3ytR14Nf7CENayPR51bXFfzZWnxTZmB8lC8INqq2x6QgZpD/VZdNUwq0Udix1R8WQSQlmA1Enu1ua+RdsCZeuxctqBrfeJfW5tPxL9qkV7f/1FqN7Eat1NdVebjdgg==',key_name='tempest-TestNetworkBasicOps-271661057',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:57:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-bpsj0gn1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:57:32Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.884 221554 DEBUG nova.network.os_vif_util [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.885 221554 DEBUG nova.network.os_vif_util [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:67:50,bridge_name='br-int',has_traffic_filtering=True,id=9ae8abb4-0619-4212-857c-5e62212e10f1,network=Network(7136c4a9-529c-4978-b239-a8adc0348e1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ae8abb4-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.885 221554 DEBUG os_vif [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:67:50,bridge_name='br-int',has_traffic_filtering=True,id=9ae8abb4-0619-4212-857c-5e62212e10f1,network=Network(7136c4a9-529c-4978-b239-a8adc0348e1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ae8abb4-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.888 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.888 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ae8abb4-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.890 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.891 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:18 np0005603609 nova_compute[221550]: 2026-01-31 08:59:18.893 221554 INFO os_vif [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:67:50,bridge_name='br-int',has_traffic_filtering=True,id=9ae8abb4-0619-4212-857c-5e62212e10f1,network=Network(7136c4a9-529c-4978-b239-a8adc0348e1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ae8abb4-06')#033[00m
Jan 31 03:59:18 np0005603609 neutron-haproxy-ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b[312938]: [NOTICE]   (312944) : haproxy version is 2.8.14-c23fe91
Jan 31 03:59:18 np0005603609 neutron-haproxy-ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b[312938]: [NOTICE]   (312944) : path to executable is /usr/sbin/haproxy
Jan 31 03:59:18 np0005603609 neutron-haproxy-ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b[312938]: [WARNING]  (312944) : Exiting Master process...
Jan 31 03:59:18 np0005603609 neutron-haproxy-ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b[312938]: [WARNING]  (312944) : Exiting Master process...
Jan 31 03:59:18 np0005603609 neutron-haproxy-ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b[312938]: [ALERT]    (312944) : Current worker (312946) exited with code 143 (Terminated)
Jan 31 03:59:18 np0005603609 neutron-haproxy-ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b[312938]: [WARNING]  (312944) : All workers exited. Exiting... (0)
Jan 31 03:59:18 np0005603609 systemd[1]: libpod-f02fb282cf8b55d568d5f083ed22656e559429c92f39bdf8a26584f012286224.scope: Deactivated successfully.
Jan 31 03:59:18 np0005603609 podman[313754]: 2026-01-31 08:59:18.928708704 +0000 UTC m=+0.046803345 container died f02fb282cf8b55d568d5f083ed22656e559429c92f39bdf8a26584f012286224 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:59:18 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f02fb282cf8b55d568d5f083ed22656e559429c92f39bdf8a26584f012286224-userdata-shm.mount: Deactivated successfully.
Jan 31 03:59:18 np0005603609 systemd[1]: var-lib-containers-storage-overlay-135dc3bfc69f36e3506aa5659b19ca0ec08814903afb098e1591d33d54306665-merged.mount: Deactivated successfully.
Jan 31 03:59:18 np0005603609 podman[313754]: 2026-01-31 08:59:18.96575864 +0000 UTC m=+0.083853301 container cleanup f02fb282cf8b55d568d5f083ed22656e559429c92f39bdf8a26584f012286224 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 03:59:18 np0005603609 systemd[1]: libpod-conmon-f02fb282cf8b55d568d5f083ed22656e559429c92f39bdf8a26584f012286224.scope: Deactivated successfully.
Jan 31 03:59:19 np0005603609 podman[313802]: 2026-01-31 08:59:19.020124216 +0000 UTC m=+0.039772284 container remove f02fb282cf8b55d568d5f083ed22656e559429c92f39bdf8a26584f012286224 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:59:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:19.023 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[cc70984a-4338-4432-93a1-86c7c151d279]: (4, ('Sat Jan 31 08:59:18 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b (f02fb282cf8b55d568d5f083ed22656e559429c92f39bdf8a26584f012286224)\nf02fb282cf8b55d568d5f083ed22656e559429c92f39bdf8a26584f012286224\nSat Jan 31 08:59:18 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b (f02fb282cf8b55d568d5f083ed22656e559429c92f39bdf8a26584f012286224)\nf02fb282cf8b55d568d5f083ed22656e559429c92f39bdf8a26584f012286224\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:19.026 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2833cd-21c6-46d0-a140-32bba5a706f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:19.027 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7136c4a9-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:19 np0005603609 kernel: tap7136c4a9-50: left promiscuous mode
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.029 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.033 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:19.037 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[59e19b97-8b3b-4c36-9e70-f3dca0a0cee5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:19.053 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[446e9f56-d8ee-4a08-84a7-92762593d434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:19.055 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1540dd8d-8606-458e-a1a9-8d455db4f00e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:19.072 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[37cbf670-cbed-40f2-834e-58d7e5cbda7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1008948, 'reachable_time': 44499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313817, 'error': None, 'target': 'ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:19 np0005603609 systemd[1]: run-netns-ovnmeta\x2d7136c4a9\x2d529c\x2d4978\x2db239\x2da8adc0348e1b.mount: Deactivated successfully.
Jan 31 03:59:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:19.077 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7136c4a9-529c-4978-b239-a8adc0348e1b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:59:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:19.077 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb80d9e-05e3-4997-8186-9a790cc4fe61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.280 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.280 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:59:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:59:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:19.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.344 221554 INFO nova.virt.libvirt.driver [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Deleting instance files /var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_del#033[00m
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.345 221554 INFO nova.virt.libvirt.driver [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Deletion of /var/lib/nova/instances/c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141_del complete#033[00m
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.351 221554 DEBUG nova.compute.manager [req-5555e3dc-f8f1-454c-8602-932f18a9397f req-2617c9bf-d7d6-41fb-be31-63749757ed30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received event network-vif-unplugged-9ae8abb4-0619-4212-857c-5e62212e10f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.351 221554 DEBUG oslo_concurrency.lockutils [req-5555e3dc-f8f1-454c-8602-932f18a9397f req-2617c9bf-d7d6-41fb-be31-63749757ed30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.351 221554 DEBUG oslo_concurrency.lockutils [req-5555e3dc-f8f1-454c-8602-932f18a9397f req-2617c9bf-d7d6-41fb-be31-63749757ed30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.351 221554 DEBUG oslo_concurrency.lockutils [req-5555e3dc-f8f1-454c-8602-932f18a9397f req-2617c9bf-d7d6-41fb-be31-63749757ed30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.352 221554 DEBUG nova.compute.manager [req-5555e3dc-f8f1-454c-8602-932f18a9397f req-2617c9bf-d7d6-41fb-be31-63749757ed30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] No waiting events found dispatching network-vif-unplugged-9ae8abb4-0619-4212-857c-5e62212e10f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.352 221554 DEBUG nova.compute.manager [req-5555e3dc-f8f1-454c-8602-932f18a9397f req-2617c9bf-d7d6-41fb-be31-63749757ed30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received event network-vif-unplugged-9ae8abb4-0619-4212-857c-5e62212e10f1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.469 221554 DEBUG nova.compute.manager [req-2099b218-412d-4302-a46f-43c75cf7d068 req-dd4a9551-1619-45e2-9f02-58ac2446b770 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received event network-changed-9ae8abb4-0619-4212-857c-5e62212e10f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.470 221554 DEBUG nova.compute.manager [req-2099b218-412d-4302-a46f-43c75cf7d068 req-dd4a9551-1619-45e2-9f02-58ac2446b770 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Refreshing instance network info cache due to event network-changed-9ae8abb4-0619-4212-857c-5e62212e10f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.470 221554 DEBUG oslo_concurrency.lockutils [req-2099b218-412d-4302-a46f-43c75cf7d068 req-dd4a9551-1619-45e2-9f02-58ac2446b770 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.471 221554 DEBUG oslo_concurrency.lockutils [req-2099b218-412d-4302-a46f-43c75cf7d068 req-dd4a9551-1619-45e2-9f02-58ac2446b770 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:59:19 np0005603609 nova_compute[221550]: 2026-01-31 08:59:19.471 221554 DEBUG nova.network.neutron [req-2099b218-412d-4302-a46f-43c75cf7d068 req-dd4a9551-1619-45e2-9f02-58ac2446b770 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Refreshing network info cache for port 9ae8abb4-0619-4212-857c-5e62212e10f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:59:20 np0005603609 nova_compute[221550]: 2026-01-31 08:59:20.015 221554 INFO nova.compute.manager [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Took 1.52 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:59:20 np0005603609 nova_compute[221550]: 2026-01-31 08:59:20.016 221554 DEBUG oslo.service.loopingcall [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:59:20 np0005603609 nova_compute[221550]: 2026-01-31 08:59:20.016 221554 DEBUG nova.compute.manager [-] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:59:20 np0005603609 nova_compute[221550]: 2026-01-31 08:59:20.016 221554 DEBUG nova.network.neutron [-] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:59:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:20.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 08:59:21.018 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '99'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:21.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:22 np0005603609 nova_compute[221550]: 2026-01-31 08:59:22.061 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:22.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:22 np0005603609 nova_compute[221550]: 2026-01-31 08:59:22.098 221554 DEBUG nova.compute.manager [req-79870f68-9bb7-491c-9d54-ff95aa4a0d95 req-5fe078e3-abf4-49cb-beee-db29606847cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received event network-vif-plugged-9ae8abb4-0619-4212-857c-5e62212e10f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:59:22 np0005603609 nova_compute[221550]: 2026-01-31 08:59:22.099 221554 DEBUG oslo_concurrency.lockutils [req-79870f68-9bb7-491c-9d54-ff95aa4a0d95 req-5fe078e3-abf4-49cb-beee-db29606847cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:22 np0005603609 nova_compute[221550]: 2026-01-31 08:59:22.100 221554 DEBUG oslo_concurrency.lockutils [req-79870f68-9bb7-491c-9d54-ff95aa4a0d95 req-5fe078e3-abf4-49cb-beee-db29606847cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:22 np0005603609 nova_compute[221550]: 2026-01-31 08:59:22.100 221554 DEBUG oslo_concurrency.lockutils [req-79870f68-9bb7-491c-9d54-ff95aa4a0d95 req-5fe078e3-abf4-49cb-beee-db29606847cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:22 np0005603609 nova_compute[221550]: 2026-01-31 08:59:22.100 221554 DEBUG nova.compute.manager [req-79870f68-9bb7-491c-9d54-ff95aa4a0d95 req-5fe078e3-abf4-49cb-beee-db29606847cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] No waiting events found dispatching network-vif-plugged-9ae8abb4-0619-4212-857c-5e62212e10f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:59:22 np0005603609 nova_compute[221550]: 2026-01-31 08:59:22.100 221554 WARNING nova.compute.manager [req-79870f68-9bb7-491c-9d54-ff95aa4a0d95 req-5fe078e3-abf4-49cb-beee-db29606847cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received unexpected event network-vif-plugged-9ae8abb4-0619-4212-857c-5e62212e10f1 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:59:22 np0005603609 nova_compute[221550]: 2026-01-31 08:59:22.586 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:59:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:23.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:59:23 np0005603609 nova_compute[221550]: 2026-01-31 08:59:23.472 221554 DEBUG nova.network.neutron [-] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:59:23 np0005603609 nova_compute[221550]: 2026-01-31 08:59:23.891 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:23 np0005603609 nova_compute[221550]: 2026-01-31 08:59:23.912 221554 INFO nova.compute.manager [-] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Took 3.90 seconds to deallocate network for instance.#033[00m
Jan 31 03:59:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:59:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:24.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:59:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e411 e411: 3 total, 3 up, 3 in
Jan 31 03:59:25 np0005603609 nova_compute[221550]: 2026-01-31 08:59:25.250 221554 DEBUG oslo_concurrency.lockutils [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:25 np0005603609 nova_compute[221550]: 2026-01-31 08:59:25.251 221554 DEBUG oslo_concurrency.lockutils [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:25.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:25 np0005603609 nova_compute[221550]: 2026-01-31 08:59:25.562 221554 DEBUG nova.compute.manager [req-7bc3b357-fdf9-4d6e-a39c-0e676cf5efd7 req-39356c06-6d14-44d5-82de-331608a1c645 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Received event network-vif-deleted-9ae8abb4-0619-4212-857c-5e62212e10f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:59:26 np0005603609 nova_compute[221550]: 2026-01-31 08:59:26.040 221554 DEBUG oslo_concurrency.processutils [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:59:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:59:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:26.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:59:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:59:26 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1659983838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:59:26 np0005603609 nova_compute[221550]: 2026-01-31 08:59:26.488 221554 DEBUG oslo_concurrency.processutils [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:26 np0005603609 nova_compute[221550]: 2026-01-31 08:59:26.493 221554 DEBUG nova.compute.provider_tree [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:59:26 np0005603609 nova_compute[221550]: 2026-01-31 08:59:26.629 221554 DEBUG nova.scheduler.client.report [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:59:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:26 np0005603609 nova_compute[221550]: 2026-01-31 08:59:26.868 221554 DEBUG nova.network.neutron [req-2099b218-412d-4302-a46f-43c75cf7d068 req-dd4a9551-1619-45e2-9f02-58ac2446b770 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updated VIF entry in instance network info cache for port 9ae8abb4-0619-4212-857c-5e62212e10f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:59:26 np0005603609 nova_compute[221550]: 2026-01-31 08:59:26.868 221554 DEBUG nova.network.neutron [req-2099b218-412d-4302-a46f-43c75cf7d068 req-dd4a9551-1619-45e2-9f02-58ac2446b770 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Updating instance_info_cache with network_info: [{"id": "9ae8abb4-0619-4212-857c-5e62212e10f1", "address": "fa:16:3e:2d:67:50", "network": {"id": "7136c4a9-529c-4978-b239-a8adc0348e1b", "bridge": "br-int", "label": "tempest-network-smoke--1511896950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ae8abb4-06", "ovs_interfaceid": "9ae8abb4-0619-4212-857c-5e62212e10f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:59:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e412 e412: 3 total, 3 up, 3 in
Jan 31 03:59:27 np0005603609 nova_compute[221550]: 2026-01-31 08:59:27.168 221554 DEBUG oslo_concurrency.lockutils [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:59:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:27.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:59:27 np0005603609 nova_compute[221550]: 2026-01-31 08:59:27.588 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:27 np0005603609 nova_compute[221550]: 2026-01-31 08:59:27.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:27 np0005603609 nova_compute[221550]: 2026-01-31 08:59:27.849 221554 INFO nova.scheduler.client.report [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Deleted allocations for instance c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141#033[00m
Jan 31 03:59:27 np0005603609 nova_compute[221550]: 2026-01-31 08:59:27.851 221554 DEBUG oslo_concurrency.lockutils [req-2099b218-412d-4302-a46f-43c75cf7d068 req-dd4a9551-1619-45e2-9f02-58ac2446b770 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:59:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e413 e413: 3 total, 3 up, 3 in
Jan 31 03:59:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:59:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:28.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:59:28 np0005603609 nova_compute[221550]: 2026-01-31 08:59:28.353 221554 DEBUG oslo_concurrency.lockutils [None req-f91b0848-d98c-40fa-8439-c4187b4010bd 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:28 np0005603609 nova_compute[221550]: 2026-01-31 08:59:28.894 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:29.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:30.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:31.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:32.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:32 np0005603609 nova_compute[221550]: 2026-01-31 08:59:32.590 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:32 np0005603609 nova_compute[221550]: 2026-01-31 08:59:32.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:33 np0005603609 podman[313842]: 2026-01-31 08:59:33.163781121 +0000 UTC m=+0.052460301 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:59:33 np0005603609 podman[313841]: 2026-01-31 08:59:33.184763009 +0000 UTC m=+0.073520041 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:59:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:33.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:33 np0005603609 nova_compute[221550]: 2026-01-31 08:59:33.724 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849958.7228105, c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:59:33 np0005603609 nova_compute[221550]: 2026-01-31 08:59:33.725 221554 INFO nova.compute.manager [-] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:59:33 np0005603609 nova_compute[221550]: 2026-01-31 08:59:33.765 221554 DEBUG nova.compute.manager [None req-e4437552-4aab-4c05-a087-4cfc12910ac7 - - - - - -] [instance: c3ed3ce5-e0c3-4a00-95fb-5f1ecbc54141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:59:33 np0005603609 nova_compute[221550]: 2026-01-31 08:59:33.896 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:59:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:34.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:59:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 e414: 3 total, 3 up, 3 in
Jan 31 03:59:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:59:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:35.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:59:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:59:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:36.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:59:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:37.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:37 np0005603609 nova_compute[221550]: 2026-01-31 08:59:37.592 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:38.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:38 np0005603609 nova_compute[221550]: 2026-01-31 08:59:38.899 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:39.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:59:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:40.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:59:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:41.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:59:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:42.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:59:42 np0005603609 nova_compute[221550]: 2026-01-31 08:59:42.594 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:43 np0005603609 nova_compute[221550]: 2026-01-31 08:59:43.324 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:59:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:43.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:59:43 np0005603609 nova_compute[221550]: 2026-01-31 08:59:43.414 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:43 np0005603609 nova_compute[221550]: 2026-01-31 08:59:43.901 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:59:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:44.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:59:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:45.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:59:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:46.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:59:46 np0005603609 nova_compute[221550]: 2026-01-31 08:59:46.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:47.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:47 np0005603609 nova_compute[221550]: 2026-01-31 08:59:47.596 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:48.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:48 np0005603609 nova_compute[221550]: 2026-01-31 08:59:48.904 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 03:59:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:49.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 03:59:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:50.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:51.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:59:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:52.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 03:59:52 np0005603609 nova_compute[221550]: 2026-01-31 08:59:52.597 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:53.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:53 np0005603609 nova_compute[221550]: 2026-01-31 08:59:53.907 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:54.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:55.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:56.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:57.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:57 np0005603609 nova_compute[221550]: 2026-01-31 08:59:57.600 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:58.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:58 np0005603609 nova_compute[221550]: 2026-01-31 08:59:58.910 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 03:59:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 03:59:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:59.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:00:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:00.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:00 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 04:00:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:01.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:02.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:02 np0005603609 nova_compute[221550]: 2026-01-31 09:00:02.602 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:03.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:00:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:00:03 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:00:03 np0005603609 nova_compute[221550]: 2026-01-31 09:00:03.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:03 np0005603609 nova_compute[221550]: 2026-01-31 09:00:03.912 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:04.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:04 np0005603609 podman[314023]: 2026-01-31 09:00:04.159205033 +0000 UTC m=+0.048168827 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:00:04 np0005603609 podman[314022]: 2026-01-31 09:00:04.176731287 +0000 UTC m=+0.066875009 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 31 04:00:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:05.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:05 np0005603609 nova_compute[221550]: 2026-01-31 09:00:05.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:00:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:06.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:00:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:00:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:07.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:00:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:00:07.551 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:00:07.551 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:00:07.551 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:07 np0005603609 nova_compute[221550]: 2026-01-31 09:00:07.604 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:08.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:08 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:00:08 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:00:08 np0005603609 nova_compute[221550]: 2026-01-31 09:00:08.915 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:00:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:09.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:00:09 np0005603609 nova_compute[221550]: 2026-01-31 09:00:09.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:10.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:00:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:11.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:00:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:12.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:12 np0005603609 nova_compute[221550]: 2026-01-31 09:00:12.606 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:00:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:13.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:00:13 np0005603609 nova_compute[221550]: 2026-01-31 09:00:13.918 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:00:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:14.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:00:14 np0005603609 nova_compute[221550]: 2026-01-31 09:00:14.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:14 np0005603609 nova_compute[221550]: 2026-01-31 09:00:14.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:00:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:00:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:15.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:00:15 np0005603609 nova_compute[221550]: 2026-01-31 09:00:15.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:15 np0005603609 nova_compute[221550]: 2026-01-31 09:00:15.689 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:15 np0005603609 nova_compute[221550]: 2026-01-31 09:00:15.690 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:15 np0005603609 nova_compute[221550]: 2026-01-31 09:00:15.690 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:15 np0005603609 nova_compute[221550]: 2026-01-31 09:00:15.690 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:00:15 np0005603609 nova_compute[221550]: 2026-01-31 09:00:15.691 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:00:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:00:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3029930112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:00:16 np0005603609 nova_compute[221550]: 2026-01-31 09:00:16.149 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:00:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:16.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:16 np0005603609 nova_compute[221550]: 2026-01-31 09:00:16.365 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:00:16 np0005603609 nova_compute[221550]: 2026-01-31 09:00:16.367 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4208MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:00:16 np0005603609 nova_compute[221550]: 2026-01-31 09:00:16.367 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:16 np0005603609 nova_compute[221550]: 2026-01-31 09:00:16.367 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:16 np0005603609 nova_compute[221550]: 2026-01-31 09:00:16.463 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:00:16 np0005603609 nova_compute[221550]: 2026-01-31 09:00:16.464 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:00:16 np0005603609 nova_compute[221550]: 2026-01-31 09:00:16.492 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:00:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:00:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/33126419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:00:16 np0005603609 nova_compute[221550]: 2026-01-31 09:00:16.880 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:00:16 np0005603609 nova_compute[221550]: 2026-01-31 09:00:16.886 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:00:16 np0005603609 nova_compute[221550]: 2026-01-31 09:00:16.913 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:00:16 np0005603609 nova_compute[221550]: 2026-01-31 09:00:16.953 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:00:16 np0005603609 nova_compute[221550]: 2026-01-31 09:00:16.953 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:17.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:17 np0005603609 nova_compute[221550]: 2026-01-31 09:00:17.606 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:17 np0005603609 nova_compute[221550]: 2026-01-31 09:00:17.954 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:17 np0005603609 nova_compute[221550]: 2026-01-31 09:00:17.955 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:00:17 np0005603609 nova_compute[221550]: 2026-01-31 09:00:17.955 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:00:17 np0005603609 nova_compute[221550]: 2026-01-31 09:00:17.976 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:00:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:00:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:18.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:00:18 np0005603609 nova_compute[221550]: 2026-01-31 09:00:18.919 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:00:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:19.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:00:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:00:19.543 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=100, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=99) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:00:19 np0005603609 nova_compute[221550]: 2026-01-31 09:00:19.543 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:00:19.544 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:00:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:20.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:21.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:21 np0005603609 nova_compute[221550]: 2026-01-31 09:00:21.676 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:22.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:22 np0005603609 nova_compute[221550]: 2026-01-31 09:00:22.641 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:23.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:23 np0005603609 nova_compute[221550]: 2026-01-31 09:00:23.921 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:00:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:24.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:00:24 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:00:24.546 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '100'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:00:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:00:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:25.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:00:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:26.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:27 np0005603609 ovn_controller[130359]: 2026-01-31T09:00:27Z|00978|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 04:00:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:27.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:27 np0005603609 nova_compute[221550]: 2026-01-31 09:00:27.642 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:28.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:28 np0005603609 nova_compute[221550]: 2026-01-31 09:00:28.924 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:00:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:29.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:00:29 np0005603609 nova_compute[221550]: 2026-01-31 09:00:29.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:30.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:31.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:32.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:32 np0005603609 nova_compute[221550]: 2026-01-31 09:00:32.644 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:33.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:33 np0005603609 nova_compute[221550]: 2026-01-31 09:00:33.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:33 np0005603609 nova_compute[221550]: 2026-01-31 09:00:33.927 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:34.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:35 np0005603609 podman[314164]: 2026-01-31 09:00:35.166807173 +0000 UTC m=+0.054069210 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 04:00:35 np0005603609 podman[314163]: 2026-01-31 09:00:35.193794656 +0000 UTC m=+0.080673733 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:00:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:35.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:36.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:37.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:37 np0005603609 nova_compute[221550]: 2026-01-31 09:00:37.646 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:38.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:38 np0005603609 nova_compute[221550]: 2026-01-31 09:00:38.930 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:39.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:40.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:41.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:42.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:42 np0005603609 nova_compute[221550]: 2026-01-31 09:00:42.647 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:43.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:43 np0005603609 nova_compute[221550]: 2026-01-31 09:00:43.933 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:44.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:45.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:46.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:00:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:47.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:00:47 np0005603609 nova_compute[221550]: 2026-01-31 09:00:47.649 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:48.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:48 np0005603609 nova_compute[221550]: 2026-01-31 09:00:48.936 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:49.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:50.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:51 np0005603609 nova_compute[221550]: 2026-01-31 09:00:51.079 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "a2022698-76e1-4402-9fb8-7482be4ecf6a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:51 np0005603609 nova_compute[221550]: 2026-01-31 09:00:51.079 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "a2022698-76e1-4402-9fb8-7482be4ecf6a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:51 np0005603609 nova_compute[221550]: 2026-01-31 09:00:51.360 221554 DEBUG nova.compute.manager [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 04:00:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:51.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:52.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:52 np0005603609 nova_compute[221550]: 2026-01-31 09:00:52.651 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:53 np0005603609 nova_compute[221550]: 2026-01-31 09:00:53.160 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:53 np0005603609 nova_compute[221550]: 2026-01-31 09:00:53.160 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:53 np0005603609 nova_compute[221550]: 2026-01-31 09:00:53.174 221554 DEBUG nova.virt.hardware [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:00:53 np0005603609 nova_compute[221550]: 2026-01-31 09:00:53.175 221554 INFO nova.compute.claims [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 04:00:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:53.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:53 np0005603609 nova_compute[221550]: 2026-01-31 09:00:53.939 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:54.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:55.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:56.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:57.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:57 np0005603609 nova_compute[221550]: 2026-01-31 09:00:57.654 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:57 np0005603609 nova_compute[221550]: 2026-01-31 09:00:57.852 221554 DEBUG oslo_concurrency.processutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:00:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:58.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:00:58 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2174453895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:00:58 np0005603609 nova_compute[221550]: 2026-01-31 09:00:58.279 221554 DEBUG oslo_concurrency.processutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:00:58 np0005603609 nova_compute[221550]: 2026-01-31 09:00:58.284 221554 DEBUG nova.compute.provider_tree [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:00:58 np0005603609 nova_compute[221550]: 2026-01-31 09:00:58.537 221554 DEBUG nova.scheduler.client.report [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:00:58 np0005603609 nova_compute[221550]: 2026-01-31 09:00:58.941 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:59 np0005603609 nova_compute[221550]: 2026-01-31 09:00:59.077 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 5.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:59 np0005603609 nova_compute[221550]: 2026-01-31 09:00:59.078 221554 DEBUG nova.compute.manager [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 04:00:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:00:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:59.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:59 np0005603609 nova_compute[221550]: 2026-01-31 09:00:59.617 221554 DEBUG nova.compute.manager [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 04:00:59 np0005603609 nova_compute[221550]: 2026-01-31 09:00:59.618 221554 DEBUG nova.network.neutron [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 04:00:59 np0005603609 nova_compute[221550]: 2026-01-31 09:00:59.849 221554 INFO nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 04:01:00 np0005603609 nova_compute[221550]: 2026-01-31 09:01:00.014 221554 DEBUG nova.compute.manager [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 04:01:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:01:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:00.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:01:00 np0005603609 nova_compute[221550]: 2026-01-31 09:01:00.564 221554 DEBUG nova.policy [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a56abd8fdd341ae88a99e102ab399de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 04:01:01 np0005603609 nova_compute[221550]: 2026-01-31 09:01:01.129 221554 DEBUG nova.compute.manager [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 04:01:01 np0005603609 nova_compute[221550]: 2026-01-31 09:01:01.130 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 04:01:01 np0005603609 nova_compute[221550]: 2026-01-31 09:01:01.131 221554 INFO nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Creating image(s)#033[00m
Jan 31 04:01:01 np0005603609 nova_compute[221550]: 2026-01-31 09:01:01.163 221554 DEBUG nova.storage.rbd_utils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image a2022698-76e1-4402-9fb8-7482be4ecf6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:01:01 np0005603609 nova_compute[221550]: 2026-01-31 09:01:01.191 221554 DEBUG nova.storage.rbd_utils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image a2022698-76e1-4402-9fb8-7482be4ecf6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:01:01 np0005603609 nova_compute[221550]: 2026-01-31 09:01:01.217 221554 DEBUG nova.storage.rbd_utils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image a2022698-76e1-4402-9fb8-7482be4ecf6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:01:01 np0005603609 nova_compute[221550]: 2026-01-31 09:01:01.220 221554 DEBUG oslo_concurrency.processutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:01 np0005603609 nova_compute[221550]: 2026-01-31 09:01:01.276 221554 DEBUG oslo_concurrency.processutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:01 np0005603609 nova_compute[221550]: 2026-01-31 09:01:01.277 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:01 np0005603609 nova_compute[221550]: 2026-01-31 09:01:01.277 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:01 np0005603609 nova_compute[221550]: 2026-01-31 09:01:01.278 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:01 np0005603609 nova_compute[221550]: 2026-01-31 09:01:01.299 221554 DEBUG nova.storage.rbd_utils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image a2022698-76e1-4402-9fb8-7482be4ecf6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:01:01 np0005603609 nova_compute[221550]: 2026-01-31 09:01:01.303 221554 DEBUG oslo_concurrency.processutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 a2022698-76e1-4402-9fb8-7482be4ecf6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:01.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:01 np0005603609 nova_compute[221550]: 2026-01-31 09:01:01.994 221554 DEBUG oslo_concurrency.processutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 a2022698-76e1-4402-9fb8-7482be4ecf6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.691s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:02 np0005603609 nova_compute[221550]: 2026-01-31 09:01:02.059 221554 DEBUG nova.storage.rbd_utils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] resizing rbd image a2022698-76e1-4402-9fb8-7482be4ecf6a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 04:01:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:02.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:02 np0005603609 nova_compute[221550]: 2026-01-31 09:01:02.654 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:02.712 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=101, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=100) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:01:02 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:02.713 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:01:02 np0005603609 nova_compute[221550]: 2026-01-31 09:01:02.740 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:02 np0005603609 nova_compute[221550]: 2026-01-31 09:01:02.745 221554 DEBUG nova.objects.instance [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'migration_context' on Instance uuid a2022698-76e1-4402-9fb8-7482be4ecf6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:01:02 np0005603609 nova_compute[221550]: 2026-01-31 09:01:02.813 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 04:01:02 np0005603609 nova_compute[221550]: 2026-01-31 09:01:02.814 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Ensure instance console log exists: /var/lib/nova/instances/a2022698-76e1-4402-9fb8-7482be4ecf6a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:01:02 np0005603609 nova_compute[221550]: 2026-01-31 09:01:02.814 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:02 np0005603609 nova_compute[221550]: 2026-01-31 09:01:02.815 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:02 np0005603609 nova_compute[221550]: 2026-01-31 09:01:02.815 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:01:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:03.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:01:03 np0005603609 nova_compute[221550]: 2026-01-31 09:01:03.944 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:04.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:01:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:05.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #187. Immutable memtables: 0.
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:05.494570) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 187
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065494622, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1970, "num_deletes": 261, "total_data_size": 4517608, "memory_usage": 4585232, "flush_reason": "Manual Compaction"}
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #188: started
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065598790, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 188, "file_size": 2967205, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90080, "largest_seqno": 92044, "table_properties": {"data_size": 2959131, "index_size": 4887, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17064, "raw_average_key_size": 20, "raw_value_size": 2942727, "raw_average_value_size": 3490, "num_data_blocks": 213, "num_entries": 843, "num_filter_entries": 843, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849900, "oldest_key_time": 1769849900, "file_creation_time": 1769850065, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 104280 microseconds, and 6300 cpu microseconds.
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:01:05 np0005603609 nova_compute[221550]: 2026-01-31 09:01:05.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:05.598849) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #188: 2967205 bytes OK
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:05.598869) [db/memtable_list.cc:519] [default] Level-0 commit table #188 started
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:05.689045) [db/memtable_list.cc:722] [default] Level-0 commit table #188: memtable #1 done
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:05.689104) EVENT_LOG_v1 {"time_micros": 1769850065689090, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:05.689134) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 4508801, prev total WAL file size 4508801, number of live WAL files 2.
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000184.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:05.690751) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353137' seq:72057594037927935, type:22 .. '6C6F676D0033373731' seq:0, type:0; will stop at (end)
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [188(2897KB)], [186(10MB)]
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065690857, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [188], "files_L6": [186], "score": -1, "input_data_size": 14348283, "oldest_snapshot_seqno": -1}
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #189: 11122 keys, 14206413 bytes, temperature: kUnknown
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065915388, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 189, "file_size": 14206413, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14135383, "index_size": 42117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27845, "raw_key_size": 294001, "raw_average_key_size": 26, "raw_value_size": 13941872, "raw_average_value_size": 1253, "num_data_blocks": 1603, "num_entries": 11122, "num_filter_entries": 11122, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769850065, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 189, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:05.915656) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 14206413 bytes
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:05.933884) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.9 rd, 63.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 10.9 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(9.6) write-amplify(4.8) OK, records in: 11661, records dropped: 539 output_compression: NoCompression
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:05.933933) EVENT_LOG_v1 {"time_micros": 1769850065933915, "job": 120, "event": "compaction_finished", "compaction_time_micros": 224601, "compaction_time_cpu_micros": 27160, "output_level": 6, "num_output_files": 1, "total_output_size": 14206413, "num_input_records": 11661, "num_output_records": 11122, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065934434, "job": 120, "event": "table_file_deletion", "file_number": 188}
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000186.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065935795, "job": 120, "event": "table_file_deletion", "file_number": 186}
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:05.690550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:05.935830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:05.935835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:05.935837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:05.935839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:01:05 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:05.935841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:01:06 np0005603609 podman[314410]: 2026-01-31 09:01:06.157556124 +0000 UTC m=+0.044021868 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 31 04:01:06 np0005603609 podman[314409]: 2026-01-31 09:01:06.202120723 +0000 UTC m=+0.092328297 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:01:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:06.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:06 np0005603609 nova_compute[221550]: 2026-01-31 09:01:06.624 221554 DEBUG nova.network.neutron [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Successfully updated port: a9477ae0-b9b1-427b-b136-9017671bc84e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 04:01:06 np0005603609 nova_compute[221550]: 2026-01-31 09:01:06.709 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-a2022698-76e1-4402-9fb8-7482be4ecf6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:01:06 np0005603609 nova_compute[221550]: 2026-01-31 09:01:06.710 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-a2022698-76e1-4402-9fb8-7482be4ecf6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:01:06 np0005603609 nova_compute[221550]: 2026-01-31 09:01:06.710 221554 DEBUG nova.network.neutron [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:01:06 np0005603609 nova_compute[221550]: 2026-01-31 09:01:06.834 221554 DEBUG nova.compute.manager [req-9d90cbe5-5e15-4644-a457-be958630edcb req-4e48075e-9d23-4334-9cc1-dbb1f6d4cd32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Received event network-changed-a9477ae0-b9b1-427b-b136-9017671bc84e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:01:06 np0005603609 nova_compute[221550]: 2026-01-31 09:01:06.834 221554 DEBUG nova.compute.manager [req-9d90cbe5-5e15-4644-a457-be958630edcb req-4e48075e-9d23-4334-9cc1-dbb1f6d4cd32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Refreshing instance network info cache due to event network-changed-a9477ae0-b9b1-427b-b136-9017671bc84e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:01:06 np0005603609 nova_compute[221550]: 2026-01-31 09:01:06.835 221554 DEBUG oslo_concurrency.lockutils [req-9d90cbe5-5e15-4644-a457-be958630edcb req-4e48075e-9d23-4334-9cc1-dbb1f6d4cd32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-a2022698-76e1-4402-9fb8-7482be4ecf6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:01:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:07 np0005603609 nova_compute[221550]: 2026-01-31 09:01:07.275 221554 DEBUG nova.network.neutron [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 04:01:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:07.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:07.552 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:07.552 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:07.553 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:07 np0005603609 nova_compute[221550]: 2026-01-31 09:01:07.656 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:07 np0005603609 nova_compute[221550]: 2026-01-31 09:01:07.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:07.715 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '101'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:01:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:08.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:01:08 np0005603609 nova_compute[221550]: 2026-01-31 09:01:08.713 221554 DEBUG nova.network.neutron [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Updating instance_info_cache with network_info: [{"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:01:08 np0005603609 nova_compute[221550]: 2026-01-31 09:01:08.947 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.026 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-a2022698-76e1-4402-9fb8-7482be4ecf6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.026 221554 DEBUG nova.compute.manager [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Instance network_info: |[{"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.027 221554 DEBUG oslo_concurrency.lockutils [req-9d90cbe5-5e15-4644-a457-be958630edcb req-4e48075e-9d23-4334-9cc1-dbb1f6d4cd32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-a2022698-76e1-4402-9fb8-7482be4ecf6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.027 221554 DEBUG nova.network.neutron [req-9d90cbe5-5e15-4644-a457-be958630edcb req-4e48075e-9d23-4334-9cc1-dbb1f6d4cd32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Refreshing network info cache for port a9477ae0-b9b1-427b-b136-9017671bc84e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.030 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Start _get_guest_xml network_info=[{"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.033 221554 WARNING nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.046 221554 DEBUG nova.virt.libvirt.host [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.046 221554 DEBUG nova.virt.libvirt.host [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.049 221554 DEBUG nova.virt.libvirt.host [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.049 221554 DEBUG nova.virt.libvirt.host [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.050 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.050 221554 DEBUG nova.virt.hardware [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.051 221554 DEBUG nova.virt.hardware [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.051 221554 DEBUG nova.virt.hardware [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.051 221554 DEBUG nova.virt.hardware [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.051 221554 DEBUG nova.virt.hardware [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.052 221554 DEBUG nova.virt.hardware [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.052 221554 DEBUG nova.virt.hardware [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.052 221554 DEBUG nova.virt.hardware [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.052 221554 DEBUG nova.virt.hardware [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.052 221554 DEBUG nova.virt.hardware [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.052 221554 DEBUG nova.virt.hardware [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.055 221554 DEBUG oslo_concurrency.processutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:01:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3591830032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:01:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:01:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:09.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.487 221554 DEBUG oslo_concurrency.processutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.523 221554 DEBUG nova.storage.rbd_utils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image a2022698-76e1-4402-9fb8-7482be4ecf6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.527 221554 DEBUG oslo_concurrency.processutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:01:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3119043435' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.954 221554 DEBUG oslo_concurrency.processutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.955 221554 DEBUG nova.virt.libvirt.vif [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:00:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-705261740',display_name='tempest-TestNetworkBasicOps-server-705261740',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-705261740',id=211,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLh5jo2ZOHpJNEc7sEy+Rh5ouZixH202Znqm9OxjLl4trLDQqRG/VTdVc+98/HmllDWcHP2C0z8kKaRyRWwnytDPAoRwjNLcX9ByQoYSJ9lg6nfKkGayPhHM26LvB3YjAg==',key_name='tempest-TestNetworkBasicOps-654870968',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-ohkx7k1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:01:00Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=a2022698-76e1-4402-9fb8-7482be4ecf6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.956 221554 DEBUG nova.network.os_vif_util [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.956 221554 DEBUG nova.network.os_vif_util [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:d1:e5,bridge_name='br-int',has_traffic_filtering=True,id=a9477ae0-b9b1-427b-b136-9017671bc84e,network=Network(4e170bdb-6ef8-49b3-bd1f-9130dcc7a216),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9477ae0-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:01:09 np0005603609 nova_compute[221550]: 2026-01-31 09:01:09.957 221554 DEBUG nova.objects.instance [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid a2022698-76e1-4402-9fb8-7482be4ecf6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.077 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  <uuid>a2022698-76e1-4402-9fb8-7482be4ecf6a</uuid>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  <name>instance-000000d3</name>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestNetworkBasicOps-server-705261740</nova:name>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 09:01:09</nova:creationTime>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        <nova:port uuid="a9477ae0-b9b1-427b-b136-9017671bc84e">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <system>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <entry name="serial">a2022698-76e1-4402-9fb8-7482be4ecf6a</entry>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <entry name="uuid">a2022698-76e1-4402-9fb8-7482be4ecf6a</entry>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    </system>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  <os>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  </os>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  <features>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  </features>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  </clock>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  <devices>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/a2022698-76e1-4402-9fb8-7482be4ecf6a_disk">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      </source>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      </auth>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    </disk>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/a2022698-76e1-4402-9fb8-7482be4ecf6a_disk.config">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      </source>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      </auth>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    </disk>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:73:d1:e5"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <target dev="tapa9477ae0-b9"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    </interface>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/a2022698-76e1-4402-9fb8-7482be4ecf6a/console.log" append="off"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    </serial>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <video>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    </video>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    </rng>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 04:01:10 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 04:01:10 np0005603609 nova_compute[221550]:  </devices>
Jan 31 04:01:10 np0005603609 nova_compute[221550]: </domain>
Jan 31 04:01:10 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.078 221554 DEBUG nova.compute.manager [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Preparing to wait for external event network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.078 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.079 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.079 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.080 221554 DEBUG nova.virt.libvirt.vif [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:00:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-705261740',display_name='tempest-TestNetworkBasicOps-server-705261740',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-705261740',id=211,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLh5jo2ZOHpJNEc7sEy+Rh5ouZixH202Znqm9OxjLl4trLDQqRG/VTdVc+98/HmllDWcHP2C0z8kKaRyRWwnytDPAoRwjNLcX9ByQoYSJ9lg6nfKkGayPhHM26LvB3YjAg==',key_name='tempest-TestNetworkBasicOps-654870968',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-ohkx7k1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:01:00Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=a2022698-76e1-4402-9fb8-7482be4ecf6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.080 221554 DEBUG nova.network.os_vif_util [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.081 221554 DEBUG nova.network.os_vif_util [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:d1:e5,bridge_name='br-int',has_traffic_filtering=True,id=a9477ae0-b9b1-427b-b136-9017671bc84e,network=Network(4e170bdb-6ef8-49b3-bd1f-9130dcc7a216),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9477ae0-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.081 221554 DEBUG os_vif [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:d1:e5,bridge_name='br-int',has_traffic_filtering=True,id=a9477ae0-b9b1-427b-b136-9017671bc84e,network=Network(4e170bdb-6ef8-49b3-bd1f-9130dcc7a216),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9477ae0-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.081 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.082 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.082 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.085 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.085 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa9477ae0-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.085 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa9477ae0-b9, col_values=(('external_ids', {'iface-id': 'a9477ae0-b9b1-427b-b136-9017671bc84e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:d1:e5', 'vm-uuid': 'a2022698-76e1-4402-9fb8-7482be4ecf6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:10 np0005603609 NetworkManager[49064]: <info>  [1769850070.1158] manager: (tapa9477ae0-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/453)
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.115 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.117 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.121 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.121 221554 INFO os_vif [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:d1:e5,bridge_name='br-int',has_traffic_filtering=True,id=a9477ae0-b9b1-427b-b136-9017671bc84e,network=Network(4e170bdb-6ef8-49b3-bd1f-9130dcc7a216),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9477ae0-b9')#033[00m
Jan 31 04:01:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:10.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:01:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:01:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:01:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:01:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.442 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.443 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.443 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:73:d1:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.444 221554 INFO nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Using config drive#033[00m
Jan 31 04:01:10 np0005603609 nova_compute[221550]: 2026-01-31 09:01:10.568 221554 DEBUG nova.storage.rbd_utils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image a2022698-76e1-4402-9fb8-7482be4ecf6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:01:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:11.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:11 np0005603609 nova_compute[221550]: 2026-01-31 09:01:11.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:12.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:12 np0005603609 nova_compute[221550]: 2026-01-31 09:01:12.313 221554 INFO nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Creating config drive at /var/lib/nova/instances/a2022698-76e1-4402-9fb8-7482be4ecf6a/disk.config#033[00m
Jan 31 04:01:12 np0005603609 nova_compute[221550]: 2026-01-31 09:01:12.316 221554 DEBUG oslo_concurrency.processutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a2022698-76e1-4402-9fb8-7482be4ecf6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpcixmcz3x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:12 np0005603609 nova_compute[221550]: 2026-01-31 09:01:12.446 221554 DEBUG oslo_concurrency.processutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a2022698-76e1-4402-9fb8-7482be4ecf6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpcixmcz3x" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:12 np0005603609 nova_compute[221550]: 2026-01-31 09:01:12.487 221554 DEBUG nova.storage.rbd_utils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image a2022698-76e1-4402-9fb8-7482be4ecf6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:01:12 np0005603609 nova_compute[221550]: 2026-01-31 09:01:12.492 221554 DEBUG oslo_concurrency.processutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a2022698-76e1-4402-9fb8-7482be4ecf6a/disk.config a2022698-76e1-4402-9fb8-7482be4ecf6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:12 np0005603609 nova_compute[221550]: 2026-01-31 09:01:12.659 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:13 np0005603609 nova_compute[221550]: 2026-01-31 09:01:13.429 221554 DEBUG oslo_concurrency.processutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a2022698-76e1-4402-9fb8-7482be4ecf6a/disk.config a2022698-76e1-4402-9fb8-7482be4ecf6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.937s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:13 np0005603609 nova_compute[221550]: 2026-01-31 09:01:13.430 221554 INFO nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Deleting local config drive /var/lib/nova/instances/a2022698-76e1-4402-9fb8-7482be4ecf6a/disk.config because it was imported into RBD.#033[00m
Jan 31 04:01:13 np0005603609 kernel: tapa9477ae0-b9: entered promiscuous mode
Jan 31 04:01:13 np0005603609 NetworkManager[49064]: <info>  [1769850073.4779] manager: (tapa9477ae0-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/454)
Jan 31 04:01:13 np0005603609 ovn_controller[130359]: 2026-01-31T09:01:13Z|00979|binding|INFO|Claiming lport a9477ae0-b9b1-427b-b136-9017671bc84e for this chassis.
Jan 31 04:01:13 np0005603609 ovn_controller[130359]: 2026-01-31T09:01:13Z|00980|binding|INFO|a9477ae0-b9b1-427b-b136-9017671bc84e: Claiming fa:16:3e:73:d1:e5 10.100.0.5
Jan 31 04:01:13 np0005603609 nova_compute[221550]: 2026-01-31 09:01:13.478 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:13 np0005603609 NetworkManager[49064]: <info>  [1769850073.4891] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/455)
Jan 31 04:01:13 np0005603609 NetworkManager[49064]: <info>  [1769850073.4897] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Jan 31 04:01:13 np0005603609 nova_compute[221550]: 2026-01-31 09:01:13.488 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:13.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:13 np0005603609 systemd-udevd[314718]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:01:13 np0005603609 systemd-machined[190912]: New machine qemu-114-instance-000000d3.
Jan 31 04:01:13 np0005603609 NetworkManager[49064]: <info>  [1769850073.5080] device (tapa9477ae0-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:01:13 np0005603609 NetworkManager[49064]: <info>  [1769850073.5088] device (tapa9477ae0-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:01:13 np0005603609 nova_compute[221550]: 2026-01-31 09:01:13.513 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:13 np0005603609 systemd[1]: Started Virtual Machine qemu-114-instance-000000d3.
Jan 31 04:01:13 np0005603609 nova_compute[221550]: 2026-01-31 09:01:13.515 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:13 np0005603609 nova_compute[221550]: 2026-01-31 09:01:13.519 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.701 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:d1:e5 10.100.0.5'], port_security=['fa:16:3e:73:d1:e5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1274203822', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a2022698-76e1-4402-9fb8-7482be4ecf6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1274203822', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'cc812d40-ff23-43ac-a90f-2ee695b7bd6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07659edf-94fb-4be4-adbf-22b52c034c40, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=a9477ae0-b9b1-427b-b136-9017671bc84e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.703 140058 INFO neutron.agent.ovn.metadata.agent [-] Port a9477ae0-b9b1-427b-b136-9017671bc84e in datapath 4e170bdb-6ef8-49b3-bd1f-9130dcc7a216 bound to our chassis#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.704 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4e170bdb-6ef8-49b3-bd1f-9130dcc7a216#033[00m
Jan 31 04:01:13 np0005603609 ovn_controller[130359]: 2026-01-31T09:01:13Z|00981|binding|INFO|Setting lport a9477ae0-b9b1-427b-b136-9017671bc84e ovn-installed in OVS
Jan 31 04:01:13 np0005603609 ovn_controller[130359]: 2026-01-31T09:01:13Z|00982|binding|INFO|Setting lport a9477ae0-b9b1-427b-b136-9017671bc84e up in Southbound
Jan 31 04:01:13 np0005603609 nova_compute[221550]: 2026-01-31 09:01:13.707 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:13 np0005603609 nova_compute[221550]: 2026-01-31 09:01:13.712 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.712 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[0fbf5ad9-62d0-4433-9228-ff606c936cdd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.713 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4e170bdb-61 in ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.714 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4e170bdb-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.714 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5fdc0b7e-278e-4500-a400-ef8d97e35843]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.716 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab2eaa6-24bb-4c98-8164-5f889ed5555f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.722 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[216526b2-e0c3-4db6-b522-7d00861a9b30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.733 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b098ad3e-ba04-4946-bf25-a24fac7df778]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.754 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[5a149669-781e-4e9c-99ef-b7580a9f01a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:13 np0005603609 systemd-udevd[314721]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:01:13 np0005603609 NetworkManager[49064]: <info>  [1769850073.7603] manager: (tap4e170bdb-60): new Veth device (/org/freedesktop/NetworkManager/Devices/457)
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.760 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[64cc8e9c-ee91-4cc1-9355-c72b48b3ecc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.787 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[134b0148-0454-47e2-af82-6e8a5dc2a76f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.790 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[b2aeb7ed-c024-4eb4-aebc-c33ee6e02ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:13 np0005603609 NetworkManager[49064]: <info>  [1769850073.8112] device (tap4e170bdb-60): carrier: link connected
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.815 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad6a19e-6e07-4906-aa5c-79892f6d371a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.831 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[223f3d87-53c2-4201-86a8-ea75afbba4a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4e170bdb-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:fc:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1031351, 'reachable_time': 19212, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314752, 'error': None, 'target': 'ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.844 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f9fd6a-75c3-440f-9ed7-17a3b5f88904]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:fcfc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1031351, 'tstamp': 1031351}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314760, 'error': None, 'target': 'ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.859 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[13b7114c-b7df-4be5-b0f8-df4e614aaed6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4e170bdb-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:fc:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1031351, 'reachable_time': 19212, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314769, 'error': None, 'target': 'ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.879 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ebb2d746-7259-4621-bf58-7e366c8261c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.919 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[803a3936-6bc1-4683-bcac-126750eb61eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.920 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e170bdb-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.921 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.921 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e170bdb-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:13 np0005603609 nova_compute[221550]: 2026-01-31 09:01:13.922 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:13 np0005603609 NetworkManager[49064]: <info>  [1769850073.9235] manager: (tap4e170bdb-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/458)
Jan 31 04:01:13 np0005603609 kernel: tap4e170bdb-60: entered promiscuous mode
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.925 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4e170bdb-60, col_values=(('external_ids', {'iface-id': '58845f69-5ae5-46ec-8d5b-7dca32eb756b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:13 np0005603609 nova_compute[221550]: 2026-01-31 09:01:13.926 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:13 np0005603609 ovn_controller[130359]: 2026-01-31T09:01:13Z|00983|binding|INFO|Releasing lport 58845f69-5ae5-46ec-8d5b-7dca32eb756b from this chassis (sb_readonly=1)
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.927 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4e170bdb-6ef8-49b3-bd1f-9130dcc7a216.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4e170bdb-6ef8-49b3-bd1f-9130dcc7a216.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.928 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc41dd6-e8cb-46b9-942d-dbfa6073ecd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.928 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/4e170bdb-6ef8-49b3-bd1f-9130dcc7a216.pid.haproxy
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 4e170bdb-6ef8-49b3-bd1f-9130dcc7a216
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:01:13 np0005603609 nova_compute[221550]: 2026-01-31 09:01:13.930 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:13.929 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216', 'env', 'PROCESS_TAG=haproxy-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4e170bdb-6ef8-49b3-bd1f-9130dcc7a216.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:01:14 np0005603609 nova_compute[221550]: 2026-01-31 09:01:14.079 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850074.0790744, a2022698-76e1-4402-9fb8-7482be4ecf6a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:01:14 np0005603609 nova_compute[221550]: 2026-01-31 09:01:14.079 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] VM Started (Lifecycle Event)#033[00m
Jan 31 04:01:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:14.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:14 np0005603609 nova_compute[221550]: 2026-01-31 09:01:14.294 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:01:14 np0005603609 nova_compute[221550]: 2026-01-31 09:01:14.297 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850074.0798187, a2022698-76e1-4402-9fb8-7482be4ecf6a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:01:14 np0005603609 nova_compute[221550]: 2026-01-31 09:01:14.298 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:01:14 np0005603609 podman[314828]: 2026-01-31 09:01:14.272289771 +0000 UTC m=+0.017405323 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:01:14 np0005603609 nova_compute[221550]: 2026-01-31 09:01:14.439 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:01:14 np0005603609 nova_compute[221550]: 2026-01-31 09:01:14.442 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:01:14 np0005603609 nova_compute[221550]: 2026-01-31 09:01:14.541 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:01:14 np0005603609 podman[314828]: 2026-01-31 09:01:14.635014782 +0000 UTC m=+0.380130314 container create 7cfd81862cbde004415f9d85eb39ac0a6dd113cf8eee8fc10d71711a56955a06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 04:01:14 np0005603609 systemd[1]: Started libpod-conmon-7cfd81862cbde004415f9d85eb39ac0a6dd113cf8eee8fc10d71711a56955a06.scope.
Jan 31 04:01:14 np0005603609 systemd[1]: Started libcrun container.
Jan 31 04:01:14 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dba59780d4c374b97f4f53aa68b5ae3115c9ea088fa6ffc3bc6d2fb3e6abf85/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:01:14 np0005603609 podman[314828]: 2026-01-31 09:01:14.996221617 +0000 UTC m=+0.741337169 container init 7cfd81862cbde004415f9d85eb39ac0a6dd113cf8eee8fc10d71711a56955a06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 04:01:15 np0005603609 podman[314828]: 2026-01-31 09:01:15.000591692 +0000 UTC m=+0.745707254 container start 7cfd81862cbde004415f9d85eb39ac0a6dd113cf8eee8fc10d71711a56955a06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:01:15 np0005603609 neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216[314843]: [NOTICE]   (314847) : New worker (314849) forked
Jan 31 04:01:15 np0005603609 neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216[314843]: [NOTICE]   (314847) : Loading success.
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.153 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.211 221554 DEBUG nova.compute.manager [req-c2e5829c-c48f-4aeb-8a7e-585fcf4d1cc0 req-199b9515-f4b1-49d9-a58a-8535deed95ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Received event network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.212 221554 DEBUG oslo_concurrency.lockutils [req-c2e5829c-c48f-4aeb-8a7e-585fcf4d1cc0 req-199b9515-f4b1-49d9-a58a-8535deed95ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.213 221554 DEBUG oslo_concurrency.lockutils [req-c2e5829c-c48f-4aeb-8a7e-585fcf4d1cc0 req-199b9515-f4b1-49d9-a58a-8535deed95ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.213 221554 DEBUG oslo_concurrency.lockutils [req-c2e5829c-c48f-4aeb-8a7e-585fcf4d1cc0 req-199b9515-f4b1-49d9-a58a-8535deed95ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.214 221554 DEBUG nova.compute.manager [req-c2e5829c-c48f-4aeb-8a7e-585fcf4d1cc0 req-199b9515-f4b1-49d9-a58a-8535deed95ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Processing event network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.215 221554 DEBUG nova.compute.manager [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.220 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850075.2197177, a2022698-76e1-4402-9fb8-7482be4ecf6a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.220 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.224 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.228 221554 INFO nova.virt.libvirt.driver [-] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Instance spawned successfully.#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.229 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.311 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.316 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.316 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.317 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.317 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.318 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.318 221554 DEBUG nova.virt.libvirt.driver [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.322 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.418 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.468 221554 INFO nova.compute.manager [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Took 14.34 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.468 221554 DEBUG nova.compute.manager [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:01:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:15.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.619 221554 INFO nova.compute.manager [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Took 22.61 seconds to build instance.#033[00m
Jan 31 04:01:15 np0005603609 nova_compute[221550]: 2026-01-31 09:01:15.681 221554 DEBUG oslo_concurrency.lockutils [None req-9ccbea3a-0c3e-46a9-9135-2dea8c4797e9 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "a2022698-76e1-4402-9fb8-7482be4ecf6a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:16.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:16 np0005603609 nova_compute[221550]: 2026-01-31 09:01:16.328 221554 DEBUG nova.network.neutron [req-9d90cbe5-5e15-4644-a457-be958630edcb req-4e48075e-9d23-4334-9cc1-dbb1f6d4cd32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Updated VIF entry in instance network info cache for port a9477ae0-b9b1-427b-b136-9017671bc84e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:01:16 np0005603609 nova_compute[221550]: 2026-01-31 09:01:16.329 221554 DEBUG nova.network.neutron [req-9d90cbe5-5e15-4644-a457-be958630edcb req-4e48075e-9d23-4334-9cc1-dbb1f6d4cd32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Updating instance_info_cache with network_info: [{"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:01:16 np0005603609 nova_compute[221550]: 2026-01-31 09:01:16.537 221554 DEBUG oslo_concurrency.lockutils [req-9d90cbe5-5e15-4644-a457-be958630edcb req-4e48075e-9d23-4334-9cc1-dbb1f6d4cd32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-a2022698-76e1-4402-9fb8-7482be4ecf6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:01:16 np0005603609 nova_compute[221550]: 2026-01-31 09:01:16.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:16 np0005603609 nova_compute[221550]: 2026-01-31 09:01:16.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:01:16 np0005603609 nova_compute[221550]: 2026-01-31 09:01:16.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:01:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:17 np0005603609 nova_compute[221550]: 2026-01-31 09:01:17.281 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-a2022698-76e1-4402-9fb8-7482be4ecf6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:01:17 np0005603609 nova_compute[221550]: 2026-01-31 09:01:17.281 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-a2022698-76e1-4402-9fb8-7482be4ecf6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:01:17 np0005603609 nova_compute[221550]: 2026-01-31 09:01:17.281 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 04:01:17 np0005603609 nova_compute[221550]: 2026-01-31 09:01:17.281 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a2022698-76e1-4402-9fb8-7482be4ecf6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:01:17 np0005603609 nova_compute[221550]: 2026-01-31 09:01:17.397 221554 DEBUG nova.compute.manager [req-f26b1b56-3b3e-4155-b258-1a120dfbeb0f req-64a1999f-30d1-4948-b68e-4055f3095727 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Received event network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:01:17 np0005603609 nova_compute[221550]: 2026-01-31 09:01:17.398 221554 DEBUG oslo_concurrency.lockutils [req-f26b1b56-3b3e-4155-b258-1a120dfbeb0f req-64a1999f-30d1-4948-b68e-4055f3095727 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:17 np0005603609 nova_compute[221550]: 2026-01-31 09:01:17.398 221554 DEBUG oslo_concurrency.lockutils [req-f26b1b56-3b3e-4155-b258-1a120dfbeb0f req-64a1999f-30d1-4948-b68e-4055f3095727 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:17 np0005603609 nova_compute[221550]: 2026-01-31 09:01:17.398 221554 DEBUG oslo_concurrency.lockutils [req-f26b1b56-3b3e-4155-b258-1a120dfbeb0f req-64a1999f-30d1-4948-b68e-4055f3095727 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:17 np0005603609 nova_compute[221550]: 2026-01-31 09:01:17.399 221554 DEBUG nova.compute.manager [req-f26b1b56-3b3e-4155-b258-1a120dfbeb0f req-64a1999f-30d1-4948-b68e-4055f3095727 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] No waiting events found dispatching network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:01:17 np0005603609 nova_compute[221550]: 2026-01-31 09:01:17.399 221554 WARNING nova.compute.manager [req-f26b1b56-3b3e-4155-b258-1a120dfbeb0f req-64a1999f-30d1-4948-b68e-4055f3095727 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Received unexpected event network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e for instance with vm_state active and task_state None.#033[00m
Jan 31 04:01:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:01:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:17.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:01:17 np0005603609 nova_compute[221550]: 2026-01-31 09:01:17.684 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:18.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:19.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:20 np0005603609 nova_compute[221550]: 2026-01-31 09:01:20.198 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:20.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:21 np0005603609 nova_compute[221550]: 2026-01-31 09:01:21.497 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Updating instance_info_cache with network_info: [{"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:01:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:21.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.210 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-a2022698-76e1-4402-9fb8-7482be4ecf6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.211 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.211 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.211 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.212 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.238 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.239 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.239 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.239 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.240 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:22.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.264 221554 DEBUG oslo_concurrency.lockutils [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "a2022698-76e1-4402-9fb8-7482be4ecf6a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.268 221554 DEBUG oslo_concurrency.lockutils [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "a2022698-76e1-4402-9fb8-7482be4ecf6a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.269 221554 DEBUG oslo_concurrency.lockutils [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.270 221554 DEBUG oslo_concurrency.lockutils [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.271 221554 DEBUG oslo_concurrency.lockutils [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.275 221554 INFO nova.compute.manager [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Terminating instance#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.277 221554 DEBUG nova.compute.manager [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:01:22 np0005603609 kernel: tapa9477ae0-b9 (unregistering): left promiscuous mode
Jan 31 04:01:22 np0005603609 NetworkManager[49064]: <info>  [1769850082.3439] device (tapa9477ae0-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.349 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:22 np0005603609 ovn_controller[130359]: 2026-01-31T09:01:22Z|00984|binding|INFO|Releasing lport a9477ae0-b9b1-427b-b136-9017671bc84e from this chassis (sb_readonly=0)
Jan 31 04:01:22 np0005603609 ovn_controller[130359]: 2026-01-31T09:01:22Z|00985|binding|INFO|Setting lport a9477ae0-b9b1-427b-b136-9017671bc84e down in Southbound
Jan 31 04:01:22 np0005603609 ovn_controller[130359]: 2026-01-31T09:01:22Z|00986|binding|INFO|Removing iface tapa9477ae0-b9 ovn-installed in OVS
Jan 31 04:01:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:22.363 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:d1:e5 10.100.0.5'], port_security=['fa:16:3e:73:d1:e5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1274203822', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a2022698-76e1-4402-9fb8-7482be4ecf6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1274203822', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'cc812d40-ff23-43ac-a90f-2ee695b7bd6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07659edf-94fb-4be4-adbf-22b52c034c40, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=a9477ae0-b9b1-427b-b136-9017671bc84e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:01:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:22.365 140058 INFO neutron.agent.ovn.metadata.agent [-] Port a9477ae0-b9b1-427b-b136-9017671bc84e in datapath 4e170bdb-6ef8-49b3-bd1f-9130dcc7a216 unbound from our chassis#033[00m
Jan 31 04:01:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:22.366 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4e170bdb-6ef8-49b3-bd1f-9130dcc7a216, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:01:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:22.404 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4341557a-2d65-4f46-9e9d-59bc3e13d69c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.405 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:22 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:22.405 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216 namespace which is not needed anymore#033[00m
Jan 31 04:01:22 np0005603609 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d000000d3.scope: Deactivated successfully.
Jan 31 04:01:22 np0005603609 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d000000d3.scope: Consumed 7.674s CPU time.
Jan 31 04:01:22 np0005603609 systemd-machined[190912]: Machine qemu-114-instance-000000d3 terminated.
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.515 221554 INFO nova.virt.libvirt.driver [-] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Instance destroyed successfully.#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.517 221554 DEBUG nova.objects.instance [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'resources' on Instance uuid a2022698-76e1-4402-9fb8-7482be4ecf6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.543 221554 DEBUG nova.virt.libvirt.vif [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:00:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-705261740',display_name='tempest-TestNetworkBasicOps-server-705261740',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-705261740',id=211,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLh5jo2ZOHpJNEc7sEy+Rh5ouZixH202Znqm9OxjLl4trLDQqRG/VTdVc+98/HmllDWcHP2C0z8kKaRyRWwnytDPAoRwjNLcX9ByQoYSJ9lg6nfKkGayPhHM26LvB3YjAg==',key_name='tempest-TestNetworkBasicOps-654870968',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:01:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-ohkx7k1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:01:15Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=a2022698-76e1-4402-9fb8-7482be4ecf6a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.545 221554 DEBUG nova.network.os_vif_util [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.546 221554 DEBUG nova.network.os_vif_util [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:d1:e5,bridge_name='br-int',has_traffic_filtering=True,id=a9477ae0-b9b1-427b-b136-9017671bc84e,network=Network(4e170bdb-6ef8-49b3-bd1f-9130dcc7a216),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9477ae0-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.546 221554 DEBUG os_vif [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:d1:e5,bridge_name='br-int',has_traffic_filtering=True,id=a9477ae0-b9b1-427b-b136-9017671bc84e,network=Network(4e170bdb-6ef8-49b3-bd1f-9130dcc7a216),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9477ae0-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.548 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.548 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa9477ae0-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.552 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.555 221554 INFO os_vif [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:d1:e5,bridge_name='br-int',has_traffic_filtering=True,id=a9477ae0-b9b1-427b-b136-9017671bc84e,network=Network(4e170bdb-6ef8-49b3-bd1f-9130dcc7a216),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9477ae0-b9')#033[00m
Jan 31 04:01:22 np0005603609 neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216[314843]: [NOTICE]   (314847) : haproxy version is 2.8.14-c23fe91
Jan 31 04:01:22 np0005603609 neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216[314843]: [NOTICE]   (314847) : path to executable is /usr/sbin/haproxy
Jan 31 04:01:22 np0005603609 neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216[314843]: [WARNING]  (314847) : Exiting Master process...
Jan 31 04:01:22 np0005603609 neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216[314843]: [ALERT]    (314847) : Current worker (314849) exited with code 143 (Terminated)
Jan 31 04:01:22 np0005603609 neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216[314843]: [WARNING]  (314847) : All workers exited. Exiting... (0)
Jan 31 04:01:22 np0005603609 systemd[1]: libpod-7cfd81862cbde004415f9d85eb39ac0a6dd113cf8eee8fc10d71711a56955a06.scope: Deactivated successfully.
Jan 31 04:01:22 np0005603609 podman[314909]: 2026-01-31 09:01:22.64041954 +0000 UTC m=+0.148868785 container died 7cfd81862cbde004415f9d85eb39ac0a6dd113cf8eee8fc10d71711a56955a06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:01:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:01:22 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.684 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:01:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1502050569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.743 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.817 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000d3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.817 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000d3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:01:22 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7cfd81862cbde004415f9d85eb39ac0a6dd113cf8eee8fc10d71711a56955a06-userdata-shm.mount: Deactivated successfully.
Jan 31 04:01:22 np0005603609 systemd[1]: var-lib-containers-storage-overlay-3dba59780d4c374b97f4f53aa68b5ae3115c9ea088fa6ffc3bc6d2fb3e6abf85-merged.mount: Deactivated successfully.
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.947 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.948 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4125MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.948 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:01:22 np0005603609 nova_compute[221550]: 2026-01-31 09:01:22.949 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:01:23 np0005603609 podman[314909]: 2026-01-31 09:01:23.039780618 +0000 UTC m=+0.548229873 container cleanup 7cfd81862cbde004415f9d85eb39ac0a6dd113cf8eee8fc10d71711a56955a06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 04:01:23 np0005603609 systemd[1]: libpod-conmon-7cfd81862cbde004415f9d85eb39ac0a6dd113cf8eee8fc10d71711a56955a06.scope: Deactivated successfully.
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.132 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance a2022698-76e1-4402-9fb8-7482be4ecf6a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.132 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.132 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.214 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 04:01:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:23.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.548 221554 DEBUG nova.compute.manager [req-78b77d04-d7c2-426f-a503-211bbaed8102 req-423671f3-0363-485f-aaec-557dde767168 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Received event network-vif-unplugged-a9477ae0-b9b1-427b-b136-9017671bc84e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.549 221554 DEBUG oslo_concurrency.lockutils [req-78b77d04-d7c2-426f-a503-211bbaed8102 req-423671f3-0363-485f-aaec-557dde767168 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.550 221554 DEBUG oslo_concurrency.lockutils [req-78b77d04-d7c2-426f-a503-211bbaed8102 req-423671f3-0363-485f-aaec-557dde767168 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.550 221554 DEBUG oslo_concurrency.lockutils [req-78b77d04-d7c2-426f-a503-211bbaed8102 req-423671f3-0363-485f-aaec-557dde767168 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.551 221554 DEBUG nova.compute.manager [req-78b77d04-d7c2-426f-a503-211bbaed8102 req-423671f3-0363-485f-aaec-557dde767168 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] No waiting events found dispatching network-vif-unplugged-a9477ae0-b9b1-427b-b136-9017671bc84e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.551 221554 DEBUG nova.compute.manager [req-78b77d04-d7c2-426f-a503-211bbaed8102 req-423671f3-0363-485f-aaec-557dde767168 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Received event network-vif-unplugged-a9477ae0-b9b1-427b-b136-9017671bc84e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 04:01:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:01:23 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3584786760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.612 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.618 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.639 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.685 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.685 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:01:23 np0005603609 podman[315013]: 2026-01-31 09:01:23.744981651 +0000 UTC m=+0.689960225 container remove 7cfd81862cbde004415f9d85eb39ac0a6dd113cf8eee8fc10d71711a56955a06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 04:01:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:23.748 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d04b3fc7-e667-4e2a-80a5-4cc1bddc1e53]: (4, ('Sat Jan 31 09:01:22 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216 (7cfd81862cbde004415f9d85eb39ac0a6dd113cf8eee8fc10d71711a56955a06)\n7cfd81862cbde004415f9d85eb39ac0a6dd113cf8eee8fc10d71711a56955a06\nSat Jan 31 09:01:23 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216 (7cfd81862cbde004415f9d85eb39ac0a6dd113cf8eee8fc10d71711a56955a06)\n7cfd81862cbde004415f9d85eb39ac0a6dd113cf8eee8fc10d71711a56955a06\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:01:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:23.749 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8a671405-91b5-48d7-a5d3-c63ffa352315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:01:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:23.750 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e170bdb-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.752 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:01:23 np0005603609 kernel: tap4e170bdb-60: left promiscuous mode
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.755 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:01:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:23.757 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ce867c5b-f727-4d34-9965-a91441f6b6aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:01:23 np0005603609 nova_compute[221550]: 2026-01-31 09:01:23.758 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:01:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:23.778 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7924f34f-470e-4348-a2cc-b899ca13539e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:01:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:23.779 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee5989a-db24-47cd-96f5-198a3736ed8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:01:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:23.788 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[207e0373-689d-49ed-9602-741e266abecd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1031345, 'reachable_time': 42051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315050, 'error': None, 'target': 'ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:01:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:23.791 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 04:01:23 np0005603609 systemd[1]: run-netns-ovnmeta\x2d4e170bdb\x2d6ef8\x2d49b3\x2dbd1f\x2d9130dcc7a216.mount: Deactivated successfully.
Jan 31 04:01:23 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:01:23.791 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[60209e48-81ca-4dae-960d-b11f9ccb9c3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:01:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:24.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:25.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:25 np0005603609 nova_compute[221550]: 2026-01-31 09:01:25.719 221554 INFO nova.virt.libvirt.driver [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Deleting instance files /var/lib/nova/instances/a2022698-76e1-4402-9fb8-7482be4ecf6a_del
Jan 31 04:01:25 np0005603609 nova_compute[221550]: 2026-01-31 09:01:25.719 221554 INFO nova.virt.libvirt.driver [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Deletion of /var/lib/nova/instances/a2022698-76e1-4402-9fb8-7482be4ecf6a_del complete
Jan 31 04:01:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:26.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:27 np0005603609 nova_compute[221550]: 2026-01-31 09:01:27.038 221554 DEBUG nova.compute.manager [req-74feb641-cad6-4bb9-b5f9-b840f9ad3917 req-f26352f6-2ef2-4edc-92a2-9fa7380f7255 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Received event network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 04:01:27 np0005603609 nova_compute[221550]: 2026-01-31 09:01:27.038 221554 DEBUG oslo_concurrency.lockutils [req-74feb641-cad6-4bb9-b5f9-b840f9ad3917 req-f26352f6-2ef2-4edc-92a2-9fa7380f7255 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:01:27 np0005603609 nova_compute[221550]: 2026-01-31 09:01:27.039 221554 DEBUG oslo_concurrency.lockutils [req-74feb641-cad6-4bb9-b5f9-b840f9ad3917 req-f26352f6-2ef2-4edc-92a2-9fa7380f7255 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:01:27 np0005603609 nova_compute[221550]: 2026-01-31 09:01:27.039 221554 DEBUG oslo_concurrency.lockutils [req-74feb641-cad6-4bb9-b5f9-b840f9ad3917 req-f26352f6-2ef2-4edc-92a2-9fa7380f7255 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a2022698-76e1-4402-9fb8-7482be4ecf6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:01:27 np0005603609 nova_compute[221550]: 2026-01-31 09:01:27.039 221554 DEBUG nova.compute.manager [req-74feb641-cad6-4bb9-b5f9-b840f9ad3917 req-f26352f6-2ef2-4edc-92a2-9fa7380f7255 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] No waiting events found dispatching network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 04:01:27 np0005603609 nova_compute[221550]: 2026-01-31 09:01:27.039 221554 WARNING nova.compute.manager [req-74feb641-cad6-4bb9-b5f9-b840f9ad3917 req-f26352f6-2ef2-4edc-92a2-9fa7380f7255 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Received unexpected event network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e for instance with vm_state active and task_state deleting.
Jan 31 04:01:27 np0005603609 nova_compute[221550]: 2026-01-31 09:01:27.099 221554 INFO nova.compute.manager [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Took 4.82 seconds to destroy the instance on the hypervisor.
Jan 31 04:01:27 np0005603609 nova_compute[221550]: 2026-01-31 09:01:27.100 221554 DEBUG oslo.service.loopingcall [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 04:01:27 np0005603609 nova_compute[221550]: 2026-01-31 09:01:27.100 221554 DEBUG nova.compute.manager [-] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 04:01:27 np0005603609 nova_compute[221550]: 2026-01-31 09:01:27.100 221554 DEBUG nova.network.neutron [-] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 04:01:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:01:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:27.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:01:27 np0005603609 nova_compute[221550]: 2026-01-31 09:01:27.553 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:01:27 np0005603609 nova_compute[221550]: 2026-01-31 09:01:27.688 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:01:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:28.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:29 np0005603609 nova_compute[221550]: 2026-01-31 09:01:29.120 221554 DEBUG nova.network.neutron [-] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 04:01:29 np0005603609 nova_compute[221550]: 2026-01-31 09:01:29.148 221554 INFO nova.compute.manager [-] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Took 2.05 seconds to deallocate network for instance.
Jan 31 04:01:29 np0005603609 nova_compute[221550]: 2026-01-31 09:01:29.208 221554 DEBUG oslo_concurrency.lockutils [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:01:29 np0005603609 nova_compute[221550]: 2026-01-31 09:01:29.208 221554 DEBUG oslo_concurrency.lockutils [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:01:29 np0005603609 nova_compute[221550]: 2026-01-31 09:01:29.277 221554 DEBUG oslo_concurrency.processutils [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 04:01:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:01:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:29.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:01:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:01:29 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/444433900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:01:29 np0005603609 nova_compute[221550]: 2026-01-31 09:01:29.665 221554 DEBUG oslo_concurrency.processutils [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 04:01:29 np0005603609 nova_compute[221550]: 2026-01-31 09:01:29.670 221554 DEBUG nova.compute.provider_tree [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 04:01:29 np0005603609 nova_compute[221550]: 2026-01-31 09:01:29.764 221554 DEBUG nova.scheduler.client.report [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 04:01:29 np0005603609 nova_compute[221550]: 2026-01-31 09:01:29.785 221554 DEBUG oslo_concurrency.lockutils [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:01:30 np0005603609 nova_compute[221550]: 2026-01-31 09:01:29.999 221554 INFO nova.scheduler.client.report [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Deleted allocations for instance a2022698-76e1-4402-9fb8-7482be4ecf6a
Jan 31 04:01:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:30.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:30 np0005603609 nova_compute[221550]: 2026-01-31 09:01:30.400 221554 DEBUG oslo_concurrency.lockutils [None req-56b71091-3586-4554-a62a-eca505b7a69e 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "a2022698-76e1-4402-9fb8-7482be4ecf6a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:30 np0005603609 nova_compute[221550]: 2026-01-31 09:01:30.681 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:30 np0005603609 nova_compute[221550]: 2026-01-31 09:01:30.681 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:01:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:31.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:01:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:32.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:32 np0005603609 nova_compute[221550]: 2026-01-31 09:01:32.556 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:32 np0005603609 nova_compute[221550]: 2026-01-31 09:01:32.687 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:33.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:34.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:34 np0005603609 nova_compute[221550]: 2026-01-31 09:01:34.700 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:01:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:35.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:01:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:36.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:37 np0005603609 podman[315079]: 2026-01-31 09:01:37.162720114 +0000 UTC m=+0.044838837 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:01:37 np0005603609 podman[315078]: 2026-01-31 09:01:37.210854769 +0000 UTC m=+0.092829129 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 04:01:37 np0005603609 nova_compute[221550]: 2026-01-31 09:01:37.513 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850082.5104842, a2022698-76e1-4402-9fb8-7482be4ecf6a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:01:37 np0005603609 nova_compute[221550]: 2026-01-31 09:01:37.514 221554 INFO nova.compute.manager [-] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:01:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:37.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:37 np0005603609 nova_compute[221550]: 2026-01-31 09:01:37.554 221554 DEBUG nova.compute.manager [None req-6b0ce366-1477-4dae-b057-448e51549f69 - - - - - -] [instance: a2022698-76e1-4402-9fb8-7482be4ecf6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:01:37 np0005603609 nova_compute[221550]: 2026-01-31 09:01:37.558 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:37 np0005603609 nova_compute[221550]: 2026-01-31 09:01:37.688 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:01:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:38.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:01:38 np0005603609 nova_compute[221550]: 2026-01-31 09:01:38.622 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:38 np0005603609 nova_compute[221550]: 2026-01-31 09:01:38.664 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:39.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:01:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:40.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:01:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:41.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:42.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:42 np0005603609 nova_compute[221550]: 2026-01-31 09:01:42.561 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:42 np0005603609 nova_compute[221550]: 2026-01-31 09:01:42.689 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:01:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:43.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:01:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:01:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:44.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:01:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:01:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:45.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:01:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:01:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:46.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:01:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:47.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:47 np0005603609 nova_compute[221550]: 2026-01-31 09:01:47.564 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:47 np0005603609 nova_compute[221550]: 2026-01-31 09:01:47.692 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:48.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:01:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:49.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:01:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:50.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:50 np0005603609 nova_compute[221550]: 2026-01-31 09:01:50.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #190. Immutable memtables: 0.
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:51.533552) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 190
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111533579, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 710, "num_deletes": 251, "total_data_size": 1349110, "memory_usage": 1364520, "flush_reason": "Manual Compaction"}
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #191: started
Jan 31 04:01:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:51.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111541879, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 191, "file_size": 879489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92049, "largest_seqno": 92754, "table_properties": {"data_size": 875920, "index_size": 1412, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8076, "raw_average_key_size": 19, "raw_value_size": 868866, "raw_average_value_size": 2098, "num_data_blocks": 63, "num_entries": 414, "num_filter_entries": 414, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850066, "oldest_key_time": 1769850066, "file_creation_time": 1769850111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 8359 microseconds, and 2163 cpu microseconds.
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:51.541910) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #191: 879489 bytes OK
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:51.541929) [db/memtable_list.cc:519] [default] Level-0 commit table #191 started
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:51.544419) [db/memtable_list.cc:722] [default] Level-0 commit table #191: memtable #1 done
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:51.544431) EVENT_LOG_v1 {"time_micros": 1769850111544427, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:51.544445) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 1345293, prev total WAL file size 1345293, number of live WAL files 2.
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000187.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:51.544822) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [191(858KB)], [189(13MB)]
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111544846, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [191], "files_L6": [189], "score": -1, "input_data_size": 15085902, "oldest_snapshot_seqno": -1}
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #192: 11021 keys, 13133573 bytes, temperature: kUnknown
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111676355, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 192, "file_size": 13133573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13064138, "index_size": 40778, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27589, "raw_key_size": 292615, "raw_average_key_size": 26, "raw_value_size": 12873191, "raw_average_value_size": 1168, "num_data_blocks": 1541, "num_entries": 11021, "num_filter_entries": 11021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769850111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 192, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:51.676616) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 13133573 bytes
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:51.685277) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.6 rd, 99.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 13.5 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(32.1) write-amplify(14.9) OK, records in: 11536, records dropped: 515 output_compression: NoCompression
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:51.685294) EVENT_LOG_v1 {"time_micros": 1769850111685287, "job": 122, "event": "compaction_finished", "compaction_time_micros": 131591, "compaction_time_cpu_micros": 22698, "output_level": 6, "num_output_files": 1, "total_output_size": 13133573, "num_input_records": 11536, "num_output_records": 11021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111685460, "job": 122, "event": "table_file_deletion", "file_number": 191}
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000189.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111686722, "job": 122, "event": "table_file_deletion", "file_number": 189}
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:51.544778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:51.686773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:51.686776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:51.686778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:51.686780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:01:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:01:51.686782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:01:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:01:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:52.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:01:52 np0005603609 nova_compute[221550]: 2026-01-31 09:01:52.568 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:52 np0005603609 nova_compute[221550]: 2026-01-31 09:01:52.693 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:01:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2693558940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:01:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:01:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2693558940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:01:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:01:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:53.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:01:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:01:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:54.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:01:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:55.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:01:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:56.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:01:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:57.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:57 np0005603609 nova_compute[221550]: 2026-01-31 09:01:57.572 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:57 np0005603609 nova_compute[221550]: 2026-01-31 09:01:57.696 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:58.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:01:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:59.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:00.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:02:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:01.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:02:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:02.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:02 np0005603609 nova_compute[221550]: 2026-01-31 09:02:02.604 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:02 np0005603609 nova_compute[221550]: 2026-01-31 09:02:02.698 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:03.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:04.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:05.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:06 np0005603609 nova_compute[221550]: 2026-01-31 09:02:06.301 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:06 np0005603609 nova_compute[221550]: 2026-01-31 09:02:06.302 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:06.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:06 np0005603609 nova_compute[221550]: 2026-01-31 09:02:06.339 221554 DEBUG nova.compute.manager [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 04:02:06 np0005603609 nova_compute[221550]: 2026-01-31 09:02:06.503 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:06 np0005603609 nova_compute[221550]: 2026-01-31 09:02:06.504 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:06 np0005603609 nova_compute[221550]: 2026-01-31 09:02:06.524 221554 DEBUG nova.virt.hardware [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:02:06 np0005603609 nova_compute[221550]: 2026-01-31 09:02:06.524 221554 INFO nova.compute.claims [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 04:02:06 np0005603609 nova_compute[221550]: 2026-01-31 09:02:06.697 221554 DEBUG oslo_concurrency.processutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:02:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/349253811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.131 221554 DEBUG oslo_concurrency.processutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.138 221554 DEBUG nova.compute.provider_tree [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.163 221554 DEBUG nova.scheduler.client.report [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.185 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.186 221554 DEBUG nova.compute.manager [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.237 221554 DEBUG nova.compute.manager [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.237 221554 DEBUG nova.network.neutron [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.263 221554 INFO nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.286 221554 DEBUG nova.compute.manager [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.449 221554 DEBUG nova.compute.manager [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.451 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.451 221554 INFO nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Creating image(s)#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.480 221554 DEBUG nova.storage.rbd_utils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image a11f6d7a-3f71-4519-9604-2cbfbc5b888e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.513 221554 DEBUG nova.storage.rbd_utils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image a11f6d7a-3f71-4519-9604-2cbfbc5b888e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.543 221554 DEBUG nova.storage.rbd_utils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image a11f6d7a-3f71-4519-9604-2cbfbc5b888e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.550 221554 DEBUG oslo_concurrency.processutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:07.553 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:07.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:07.554 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:07.554 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.586 221554 DEBUG nova.policy [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a56abd8fdd341ae88a99e102ab399de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.650 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.652 221554 DEBUG oslo_concurrency.processutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.653 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.653 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.654 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.679 221554 DEBUG nova.storage.rbd_utils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image a11f6d7a-3f71-4519-9604-2cbfbc5b888e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.683 221554 DEBUG oslo_concurrency.processutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 a11f6d7a-3f71-4519-9604-2cbfbc5b888e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.703 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.706 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:07 np0005603609 nova_compute[221550]: 2026-01-31 09:02:07.707 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:08 np0005603609 podman[315240]: 2026-01-31 09:02:08.178707272 +0000 UTC m=+0.055442063 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 04:02:08 np0005603609 podman[315239]: 2026-01-31 09:02:08.233856358 +0000 UTC m=+0.112449254 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 04:02:08 np0005603609 nova_compute[221550]: 2026-01-31 09:02:08.285 221554 DEBUG oslo_concurrency.processutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 a11f6d7a-3f71-4519-9604-2cbfbc5b888e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:02:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:08.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:02:08 np0005603609 nova_compute[221550]: 2026-01-31 09:02:08.348 221554 DEBUG nova.storage.rbd_utils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] resizing rbd image a11f6d7a-3f71-4519-9604-2cbfbc5b888e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 04:02:09 np0005603609 nova_compute[221550]: 2026-01-31 09:02:09.155 221554 DEBUG nova.objects.instance [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'migration_context' on Instance uuid a11f6d7a-3f71-4519-9604-2cbfbc5b888e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:02:09 np0005603609 nova_compute[221550]: 2026-01-31 09:02:09.197 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 04:02:09 np0005603609 nova_compute[221550]: 2026-01-31 09:02:09.198 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Ensure instance console log exists: /var/lib/nova/instances/a11f6d7a-3f71-4519-9604-2cbfbc5b888e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:02:09 np0005603609 nova_compute[221550]: 2026-01-31 09:02:09.198 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:09 np0005603609 nova_compute[221550]: 2026-01-31 09:02:09.198 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:09 np0005603609 nova_compute[221550]: 2026-01-31 09:02:09.198 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:02:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:09.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:02:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:10.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:10 np0005603609 nova_compute[221550]: 2026-01-31 09:02:10.680 221554 DEBUG nova.network.neutron [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Successfully created port: 10ec3184-8343-4827-91c5-b4627821beca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 04:02:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:11.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:02:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:12.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:02:12 np0005603609 nova_compute[221550]: 2026-01-31 09:02:12.618 221554 DEBUG nova.network.neutron [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Successfully updated port: 10ec3184-8343-4827-91c5-b4627821beca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 04:02:12 np0005603609 nova_compute[221550]: 2026-01-31 09:02:12.638 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-a11f6d7a-3f71-4519-9604-2cbfbc5b888e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:02:12 np0005603609 nova_compute[221550]: 2026-01-31 09:02:12.639 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-a11f6d7a-3f71-4519-9604-2cbfbc5b888e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:02:12 np0005603609 nova_compute[221550]: 2026-01-31 09:02:12.639 221554 DEBUG nova.network.neutron [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:02:12 np0005603609 nova_compute[221550]: 2026-01-31 09:02:12.657 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:12 np0005603609 nova_compute[221550]: 2026-01-31 09:02:12.701 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:12 np0005603609 nova_compute[221550]: 2026-01-31 09:02:12.786 221554 DEBUG nova.compute.manager [req-a77215e4-e149-457d-a566-2135cb2bd2d0 req-5d42480f-cacd-49d1-b913-4c05912a3c53 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Received event network-changed-10ec3184-8343-4827-91c5-b4627821beca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:02:12 np0005603609 nova_compute[221550]: 2026-01-31 09:02:12.786 221554 DEBUG nova.compute.manager [req-a77215e4-e149-457d-a566-2135cb2bd2d0 req-5d42480f-cacd-49d1-b913-4c05912a3c53 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Refreshing instance network info cache due to event network-changed-10ec3184-8343-4827-91c5-b4627821beca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:02:12 np0005603609 nova_compute[221550]: 2026-01-31 09:02:12.787 221554 DEBUG oslo_concurrency.lockutils [req-a77215e4-e149-457d-a566-2135cb2bd2d0 req-5d42480f-cacd-49d1-b913-4c05912a3c53 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-a11f6d7a-3f71-4519-9604-2cbfbc5b888e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:02:12 np0005603609 nova_compute[221550]: 2026-01-31 09:02:12.856 221554 DEBUG nova.network.neutron [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 04:02:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:02:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:13.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:02:13 np0005603609 nova_compute[221550]: 2026-01-31 09:02:13.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:13 np0005603609 nova_compute[221550]: 2026-01-31 09:02:13.966 221554 DEBUG nova.network.neutron [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Updating instance_info_cache with network_info: [{"id": "10ec3184-8343-4827-91c5-b4627821beca", "address": "fa:16:3e:73:0b:17", "network": {"id": "43fe9c37-3066-4674-8300-45954f140375", "bridge": "br-int", "label": "tempest-network-smoke--1079484700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ec3184-83", "ovs_interfaceid": "10ec3184-8343-4827-91c5-b4627821beca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:02:13 np0005603609 nova_compute[221550]: 2026-01-31 09:02:13.993 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-a11f6d7a-3f71-4519-9604-2cbfbc5b888e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:02:13 np0005603609 nova_compute[221550]: 2026-01-31 09:02:13.993 221554 DEBUG nova.compute.manager [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Instance network_info: |[{"id": "10ec3184-8343-4827-91c5-b4627821beca", "address": "fa:16:3e:73:0b:17", "network": {"id": "43fe9c37-3066-4674-8300-45954f140375", "bridge": "br-int", "label": "tempest-network-smoke--1079484700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ec3184-83", "ovs_interfaceid": "10ec3184-8343-4827-91c5-b4627821beca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 04:02:13 np0005603609 nova_compute[221550]: 2026-01-31 09:02:13.993 221554 DEBUG oslo_concurrency.lockutils [req-a77215e4-e149-457d-a566-2135cb2bd2d0 req-5d42480f-cacd-49d1-b913-4c05912a3c53 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-a11f6d7a-3f71-4519-9604-2cbfbc5b888e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:02:13 np0005603609 nova_compute[221550]: 2026-01-31 09:02:13.994 221554 DEBUG nova.network.neutron [req-a77215e4-e149-457d-a566-2135cb2bd2d0 req-5d42480f-cacd-49d1-b913-4c05912a3c53 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Refreshing network info cache for port 10ec3184-8343-4827-91c5-b4627821beca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:02:13 np0005603609 nova_compute[221550]: 2026-01-31 09:02:13.996 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Start _get_guest_xml network_info=[{"id": "10ec3184-8343-4827-91c5-b4627821beca", "address": "fa:16:3e:73:0b:17", "network": {"id": "43fe9c37-3066-4674-8300-45954f140375", "bridge": "br-int", "label": "tempest-network-smoke--1079484700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ec3184-83", "ovs_interfaceid": "10ec3184-8343-4827-91c5-b4627821beca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.000 221554 WARNING nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.008 221554 DEBUG nova.virt.libvirt.host [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.008 221554 DEBUG nova.virt.libvirt.host [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.011 221554 DEBUG nova.virt.libvirt.host [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.012 221554 DEBUG nova.virt.libvirt.host [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.013 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.013 221554 DEBUG nova.virt.hardware [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.013 221554 DEBUG nova.virt.hardware [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.014 221554 DEBUG nova.virt.hardware [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.014 221554 DEBUG nova.virt.hardware [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.014 221554 DEBUG nova.virt.hardware [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.014 221554 DEBUG nova.virt.hardware [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.015 221554 DEBUG nova.virt.hardware [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.015 221554 DEBUG nova.virt.hardware [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.015 221554 DEBUG nova.virt.hardware [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.015 221554 DEBUG nova.virt.hardware [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.015 221554 DEBUG nova.virt.hardware [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.018 221554 DEBUG oslo_concurrency.processutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:14.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:02:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3830990779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.413 221554 DEBUG oslo_concurrency.processutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.435 221554 DEBUG nova.storage.rbd_utils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image a11f6d7a-3f71-4519-9604-2cbfbc5b888e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.439 221554 DEBUG oslo_concurrency.processutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:02:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2146965649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.838 221554 DEBUG oslo_concurrency.processutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.840 221554 DEBUG nova.virt.libvirt.vif [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:02:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1147534750',display_name='tempest-TestNetworkBasicOps-server-1147534750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1147534750',id=212,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOk46nSq8WG44FR7TgFRI3xXdxBOYNIUOf8OL4cX2tAm4auQr60qM60bhnRGE8WCnD1l6YZnieZRfd6RMYjLQTqZJLUbFa7Lmv2J0ZaW8AP+qfNxIDfxUqfwxVYNaNeGHA==',key_name='tempest-TestNetworkBasicOps-911323969',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-naplw6aw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:02:07Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=a11f6d7a-3f71-4519-9604-2cbfbc5b888e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10ec3184-8343-4827-91c5-b4627821beca", "address": "fa:16:3e:73:0b:17", "network": {"id": "43fe9c37-3066-4674-8300-45954f140375", "bridge": "br-int", "label": "tempest-network-smoke--1079484700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ec3184-83", "ovs_interfaceid": "10ec3184-8343-4827-91c5-b4627821beca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.840 221554 DEBUG nova.network.os_vif_util [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "10ec3184-8343-4827-91c5-b4627821beca", "address": "fa:16:3e:73:0b:17", "network": {"id": "43fe9c37-3066-4674-8300-45954f140375", "bridge": "br-int", "label": "tempest-network-smoke--1079484700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ec3184-83", "ovs_interfaceid": "10ec3184-8343-4827-91c5-b4627821beca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.841 221554 DEBUG nova.network.os_vif_util [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:0b:17,bridge_name='br-int',has_traffic_filtering=True,id=10ec3184-8343-4827-91c5-b4627821beca,network=Network(43fe9c37-3066-4674-8300-45954f140375),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10ec3184-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.842 221554 DEBUG nova.objects.instance [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid a11f6d7a-3f71-4519-9604-2cbfbc5b888e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.872 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  <uuid>a11f6d7a-3f71-4519-9604-2cbfbc5b888e</uuid>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  <name>instance-000000d4</name>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestNetworkBasicOps-server-1147534750</nova:name>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 09:02:14</nova:creationTime>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        <nova:port uuid="10ec3184-8343-4827-91c5-b4627821beca">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <system>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <entry name="serial">a11f6d7a-3f71-4519-9604-2cbfbc5b888e</entry>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <entry name="uuid">a11f6d7a-3f71-4519-9604-2cbfbc5b888e</entry>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    </system>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  <os>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  </os>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  <features>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  </features>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  </clock>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  <devices>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/a11f6d7a-3f71-4519-9604-2cbfbc5b888e_disk">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      </source>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      </auth>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    </disk>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/a11f6d7a-3f71-4519-9604-2cbfbc5b888e_disk.config">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      </source>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      </auth>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    </disk>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:73:0b:17"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <target dev="tap10ec3184-83"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    </interface>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/a11f6d7a-3f71-4519-9604-2cbfbc5b888e/console.log" append="off"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    </serial>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <video>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    </video>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    </rng>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 04:02:14 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 04:02:14 np0005603609 nova_compute[221550]:  </devices>
Jan 31 04:02:14 np0005603609 nova_compute[221550]: </domain>
Jan 31 04:02:14 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.873 221554 DEBUG nova.compute.manager [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Preparing to wait for external event network-vif-plugged-10ec3184-8343-4827-91c5-b4627821beca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.873 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.874 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.874 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.874 221554 DEBUG nova.virt.libvirt.vif [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:02:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1147534750',display_name='tempest-TestNetworkBasicOps-server-1147534750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1147534750',id=212,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOk46nSq8WG44FR7TgFRI3xXdxBOYNIUOf8OL4cX2tAm4auQr60qM60bhnRGE8WCnD1l6YZnieZRfd6RMYjLQTqZJLUbFa7Lmv2J0ZaW8AP+qfNxIDfxUqfwxVYNaNeGHA==',key_name='tempest-TestNetworkBasicOps-911323969',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-naplw6aw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:02:07Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=a11f6d7a-3f71-4519-9604-2cbfbc5b888e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10ec3184-8343-4827-91c5-b4627821beca", "address": "fa:16:3e:73:0b:17", "network": {"id": "43fe9c37-3066-4674-8300-45954f140375", "bridge": "br-int", "label": "tempest-network-smoke--1079484700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ec3184-83", "ovs_interfaceid": "10ec3184-8343-4827-91c5-b4627821beca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.875 221554 DEBUG nova.network.os_vif_util [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "10ec3184-8343-4827-91c5-b4627821beca", "address": "fa:16:3e:73:0b:17", "network": {"id": "43fe9c37-3066-4674-8300-45954f140375", "bridge": "br-int", "label": "tempest-network-smoke--1079484700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ec3184-83", "ovs_interfaceid": "10ec3184-8343-4827-91c5-b4627821beca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.875 221554 DEBUG nova.network.os_vif_util [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:0b:17,bridge_name='br-int',has_traffic_filtering=True,id=10ec3184-8343-4827-91c5-b4627821beca,network=Network(43fe9c37-3066-4674-8300-45954f140375),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10ec3184-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.875 221554 DEBUG os_vif [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:0b:17,bridge_name='br-int',has_traffic_filtering=True,id=10ec3184-8343-4827-91c5-b4627821beca,network=Network(43fe9c37-3066-4674-8300-45954f140375),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10ec3184-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.876 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.876 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.877 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.878 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.879 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10ec3184-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.879 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap10ec3184-83, col_values=(('external_ids', {'iface-id': '10ec3184-8343-4827-91c5-b4627821beca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:0b:17', 'vm-uuid': 'a11f6d7a-3f71-4519-9604-2cbfbc5b888e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.880 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:14 np0005603609 NetworkManager[49064]: <info>  [1769850134.8819] manager: (tap10ec3184-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/459)
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.883 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.886 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.887 221554 INFO os_vif [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:0b:17,bridge_name='br-int',has_traffic_filtering=True,id=10ec3184-8343-4827-91c5-b4627821beca,network=Network(43fe9c37-3066-4674-8300-45954f140375),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10ec3184-83')#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.962 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.962 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.963 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:73:0b:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.963 221554 INFO nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Using config drive#033[00m
Jan 31 04:02:14 np0005603609 nova_compute[221550]: 2026-01-31 09:02:14.985 221554 DEBUG nova.storage.rbd_utils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image a11f6d7a-3f71-4519-9604-2cbfbc5b888e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:02:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:02:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:15.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:02:15 np0005603609 nova_compute[221550]: 2026-01-31 09:02:15.640 221554 INFO nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Creating config drive at /var/lib/nova/instances/a11f6d7a-3f71-4519-9604-2cbfbc5b888e/disk.config#033[00m
Jan 31 04:02:15 np0005603609 nova_compute[221550]: 2026-01-31 09:02:15.644 221554 DEBUG oslo_concurrency.processutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a11f6d7a-3f71-4519-9604-2cbfbc5b888e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmjypil6x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:15 np0005603609 nova_compute[221550]: 2026-01-31 09:02:15.769 221554 DEBUG oslo_concurrency.processutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a11f6d7a-3f71-4519-9604-2cbfbc5b888e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmjypil6x" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:15 np0005603609 nova_compute[221550]: 2026-01-31 09:02:15.813 221554 DEBUG nova.storage.rbd_utils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image a11f6d7a-3f71-4519-9604-2cbfbc5b888e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:02:15 np0005603609 nova_compute[221550]: 2026-01-31 09:02:15.816 221554 DEBUG oslo_concurrency.processutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a11f6d7a-3f71-4519-9604-2cbfbc5b888e/disk.config a11f6d7a-3f71-4519-9604-2cbfbc5b888e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:16.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.368 221554 DEBUG oslo_concurrency.processutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a11f6d7a-3f71-4519-9604-2cbfbc5b888e/disk.config a11f6d7a-3f71-4519-9604-2cbfbc5b888e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.368 221554 INFO nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Deleting local config drive /var/lib/nova/instances/a11f6d7a-3f71-4519-9604-2cbfbc5b888e/disk.config because it was imported into RBD.#033[00m
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.388 221554 DEBUG nova.network.neutron [req-a77215e4-e149-457d-a566-2135cb2bd2d0 req-5d42480f-cacd-49d1-b913-4c05912a3c53 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Updated VIF entry in instance network info cache for port 10ec3184-8343-4827-91c5-b4627821beca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.388 221554 DEBUG nova.network.neutron [req-a77215e4-e149-457d-a566-2135cb2bd2d0 req-5d42480f-cacd-49d1-b913-4c05912a3c53 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Updating instance_info_cache with network_info: [{"id": "10ec3184-8343-4827-91c5-b4627821beca", "address": "fa:16:3e:73:0b:17", "network": {"id": "43fe9c37-3066-4674-8300-45954f140375", "bridge": "br-int", "label": "tempest-network-smoke--1079484700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ec3184-83", "ovs_interfaceid": "10ec3184-8343-4827-91c5-b4627821beca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:02:16 np0005603609 kernel: tap10ec3184-83: entered promiscuous mode
Jan 31 04:02:16 np0005603609 NetworkManager[49064]: <info>  [1769850136.4053] manager: (tap10ec3184-83): new Tun device (/org/freedesktop/NetworkManager/Devices/460)
Jan 31 04:02:16 np0005603609 ovn_controller[130359]: 2026-01-31T09:02:16Z|00987|binding|INFO|Claiming lport 10ec3184-8343-4827-91c5-b4627821beca for this chassis.
Jan 31 04:02:16 np0005603609 ovn_controller[130359]: 2026-01-31T09:02:16Z|00988|binding|INFO|10ec3184-8343-4827-91c5-b4627821beca: Claiming fa:16:3e:73:0b:17 10.100.0.4
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.405 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.408 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.412 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.414 221554 DEBUG oslo_concurrency.lockutils [req-a77215e4-e149-457d-a566-2135cb2bd2d0 req-5d42480f-cacd-49d1-b913-4c05912a3c53 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-a11f6d7a-3f71-4519-9604-2cbfbc5b888e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.421 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:0b:17 10.100.0.4'], port_security=['fa:16:3e:73:0b:17 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a11f6d7a-3f71-4519-9604-2cbfbc5b888e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43fe9c37-3066-4674-8300-45954f140375', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3e0cf4b6-8432-4ff6-8cb2-d4523b39f989', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a96d510-e9fa-4749-a164-3a7b6824e79a, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=10ec3184-8343-4827-91c5-b4627821beca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.423 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 10ec3184-8343-4827-91c5-b4627821beca in datapath 43fe9c37-3066-4674-8300-45954f140375 bound to our chassis#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.423 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43fe9c37-3066-4674-8300-45954f140375#033[00m
Jan 31 04:02:16 np0005603609 systemd-machined[190912]: New machine qemu-115-instance-000000d4.
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.431 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[81b7f408-60bd-4763-830e-ae98199b58ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.432 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43fe9c37-31 in ovnmeta-43fe9c37-3066-4674-8300-45954f140375 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:02:16 np0005603609 ovn_controller[130359]: 2026-01-31T09:02:16Z|00989|binding|INFO|Setting lport 10ec3184-8343-4827-91c5-b4627821beca ovn-installed in OVS
Jan 31 04:02:16 np0005603609 ovn_controller[130359]: 2026-01-31T09:02:16Z|00990|binding|INFO|Setting lport 10ec3184-8343-4827-91c5-b4627821beca up in Southbound
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.434 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.434 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43fe9c37-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.434 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a9cd4501-3843-4938-be3b-3475d1ec9718]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.435 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2794bad2-f03c-45e6-9ab3-30afea74c71c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.442 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[dba5d642-516c-494c-89af-07bcbea36272]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:16 np0005603609 systemd[1]: Started Virtual Machine qemu-115-instance-000000d4.
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.451 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fea11ec7-8fc7-4d0b-949f-a14143115a72]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:16 np0005603609 systemd-udevd[315494]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:02:16 np0005603609 NetworkManager[49064]: <info>  [1769850136.4659] device (tap10ec3184-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:02:16 np0005603609 NetworkManager[49064]: <info>  [1769850136.4667] device (tap10ec3184-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.472 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[88526ebd-f53e-4f29-9cb9-7ae87361f2b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.476 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b4fe5643-383c-4c99-994f-bf5b9348127c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:16 np0005603609 NetworkManager[49064]: <info>  [1769850136.4773] manager: (tap43fe9c37-30): new Veth device (/org/freedesktop/NetworkManager/Devices/461)
Jan 31 04:02:16 np0005603609 systemd-udevd[315498]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.497 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[8732a317-f3c1-4466-b7d7-f6f7df1440d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.500 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[cb95f70b-372a-4740-863e-33f7294e911f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:16 np0005603609 NetworkManager[49064]: <info>  [1769850136.5158] device (tap43fe9c37-30): carrier: link connected
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.517 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[7b36b788-90d1-4a2a-9238-dea3e4c9bdc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.529 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7a6481-d9d8-40d5-9a69-f74472f3f6e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43fe9c37-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:73:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1037621, 'reachable_time': 37486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315524, 'error': None, 'target': 'ovnmeta-43fe9c37-3066-4674-8300-45954f140375', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.542 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0fb225-7954-4c54-ab2e-8c6b2e26ad60]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:7330'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1037621, 'tstamp': 1037621}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315525, 'error': None, 'target': 'ovnmeta-43fe9c37-3066-4674-8300-45954f140375', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.554 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd40114-68fb-44aa-98b5-3e3c7b846e13]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43fe9c37-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:73:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1037621, 'reachable_time': 37486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315526, 'error': None, 'target': 'ovnmeta-43fe9c37-3066-4674-8300-45954f140375', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.574 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a2fbd4-22e9-4eeb-b505-d1ec21348cc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.614 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e1f149-01b6-4bd5-be71-cd690b3df128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.615 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43fe9c37-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.615 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.616 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43fe9c37-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:02:16 np0005603609 kernel: tap43fe9c37-30: entered promiscuous mode
Jan 31 04:02:16 np0005603609 NetworkManager[49064]: <info>  [1769850136.6185] manager: (tap43fe9c37-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/462)
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.618 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.620 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.621 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43fe9c37-30, col_values=(('external_ids', {'iface-id': 'a5655e84-4616-4f34-8e20-c95316f0adfa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.624 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43fe9c37-3066-4674-8300-45954f140375.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43fe9c37-3066-4674-8300-45954f140375.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:02:16 np0005603609 ovn_controller[130359]: 2026-01-31T09:02:16Z|00991|binding|INFO|Releasing lport a5655e84-4616-4f34-8e20-c95316f0adfa from this chassis (sb_readonly=0)
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.622 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.623 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.624 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[01cfdd83-245e-457a-aab6-dbba761357e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.625 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-43fe9c37-3066-4674-8300-45954f140375
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/43fe9c37-3066-4674-8300-45954f140375.pid.haproxy
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 43fe9c37-3066-4674-8300-45954f140375
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.626 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43fe9c37-3066-4674-8300-45954f140375', 'env', 'PROCESS_TAG=haproxy-43fe9c37-3066-4674-8300-45954f140375', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43fe9c37-3066-4674-8300-45954f140375.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.629 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.809 221554 DEBUG nova.compute.manager [req-4bfab0b5-4103-4216-b0aa-63dddab39fae req-ceea63c6-7217-49bc-80bb-6dcceb030817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Received event network-vif-plugged-10ec3184-8343-4827-91c5-b4627821beca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.809 221554 DEBUG oslo_concurrency.lockutils [req-4bfab0b5-4103-4216-b0aa-63dddab39fae req-ceea63c6-7217-49bc-80bb-6dcceb030817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.810 221554 DEBUG oslo_concurrency.lockutils [req-4bfab0b5-4103-4216-b0aa-63dddab39fae req-ceea63c6-7217-49bc-80bb-6dcceb030817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.810 221554 DEBUG oslo_concurrency.lockutils [req-4bfab0b5-4103-4216-b0aa-63dddab39fae req-ceea63c6-7217-49bc-80bb-6dcceb030817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.810 221554 DEBUG nova.compute.manager [req-4bfab0b5-4103-4216-b0aa-63dddab39fae req-ceea63c6-7217-49bc-80bb-6dcceb030817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Processing event network-vif-plugged-10ec3184-8343-4827-91c5-b4627821beca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:02:16 np0005603609 podman[315556]: 2026-01-31 09:02:16.920450137 +0000 UTC m=+0.041168898 container create 81f1e86de3de587112d2d1b4dd93f1157b9080b0c2459b92a1c2adc166b670f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43fe9c37-3066-4674-8300-45954f140375, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 04:02:16 np0005603609 systemd[1]: Started libpod-conmon-81f1e86de3de587112d2d1b4dd93f1157b9080b0c2459b92a1c2adc166b670f0.scope.
Jan 31 04:02:16 np0005603609 systemd[1]: Started libcrun container.
Jan 31 04:02:16 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fb2ccc46e2e29b32bf5dc171731754e1534f2dd5c302803ff68082e1741d50/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:02:16 np0005603609 podman[315556]: 2026-01-31 09:02:16.979349693 +0000 UTC m=+0.100068464 container init 81f1e86de3de587112d2d1b4dd93f1157b9080b0c2459b92a1c2adc166b670f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43fe9c37-3066-4674-8300-45954f140375, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 04:02:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:16.982 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=102, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=101) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:02:16 np0005603609 podman[315556]: 2026-01-31 09:02:16.983796171 +0000 UTC m=+0.104514922 container start 81f1e86de3de587112d2d1b4dd93f1157b9080b0c2459b92a1c2adc166b670f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43fe9c37-3066-4674-8300-45954f140375, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 04:02:16 np0005603609 nova_compute[221550]: 2026-01-31 09:02:16.984 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:16 np0005603609 podman[315556]: 2026-01-31 09:02:16.896512978 +0000 UTC m=+0.017231749 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:02:17 np0005603609 neutron-haproxy-ovnmeta-43fe9c37-3066-4674-8300-45954f140375[315604]: [NOTICE]   (315615) : New worker (315618) forked
Jan 31 04:02:17 np0005603609 neutron-haproxy-ovnmeta-43fe9c37-3066-4674-8300-45954f140375[315604]: [NOTICE]   (315615) : Loading success.
Jan 31 04:02:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:17 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:17.029 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.059 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850137.0585635, a11f6d7a-3f71-4519-9604-2cbfbc5b888e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.059 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] VM Started (Lifecycle Event)#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.061 221554 DEBUG nova.compute.manager [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.063 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.066 221554 INFO nova.virt.libvirt.driver [-] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Instance spawned successfully.#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.067 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.090 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.094 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.094 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.094 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.095 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.095 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.095 221554 DEBUG nova.virt.libvirt.driver [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.100 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.130 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.130 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850137.058726, a11f6d7a-3f71-4519-9604-2cbfbc5b888e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.131 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.159 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.161 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850137.0629292, a11f6d7a-3f71-4519-9604-2cbfbc5b888e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.162 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.173 221554 INFO nova.compute.manager [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Took 9.72 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.173 221554 DEBUG nova.compute.manager [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.209 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.213 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.243 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.258 221554 INFO nova.compute.manager [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Took 10.79 seconds to build instance.#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.275 221554 DEBUG oslo_concurrency.lockutils [None req-c3b8610a-c60c-45ec-9ec0-326dd2501cb3 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:17.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:02:17 np0005603609 nova_compute[221550]: 2026-01-31 09:02:17.704 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:18.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:18 np0005603609 nova_compute[221550]: 2026-01-31 09:02:18.347 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-a11f6d7a-3f71-4519-9604-2cbfbc5b888e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:02:18 np0005603609 nova_compute[221550]: 2026-01-31 09:02:18.347 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-a11f6d7a-3f71-4519-9604-2cbfbc5b888e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:02:18 np0005603609 nova_compute[221550]: 2026-01-31 09:02:18.348 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 04:02:18 np0005603609 nova_compute[221550]: 2026-01-31 09:02:18.348 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a11f6d7a-3f71-4519-9604-2cbfbc5b888e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:02:18 np0005603609 nova_compute[221550]: 2026-01-31 09:02:18.949 221554 DEBUG nova.compute.manager [req-317add1a-70ad-4983-8597-4d44ce264d2c req-6d5fd5b3-1ad2-4170-8765-689d6ac9f080 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Received event network-vif-plugged-10ec3184-8343-4827-91c5-b4627821beca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:02:18 np0005603609 nova_compute[221550]: 2026-01-31 09:02:18.950 221554 DEBUG oslo_concurrency.lockutils [req-317add1a-70ad-4983-8597-4d44ce264d2c req-6d5fd5b3-1ad2-4170-8765-689d6ac9f080 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:18 np0005603609 nova_compute[221550]: 2026-01-31 09:02:18.950 221554 DEBUG oslo_concurrency.lockutils [req-317add1a-70ad-4983-8597-4d44ce264d2c req-6d5fd5b3-1ad2-4170-8765-689d6ac9f080 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:18 np0005603609 nova_compute[221550]: 2026-01-31 09:02:18.951 221554 DEBUG oslo_concurrency.lockutils [req-317add1a-70ad-4983-8597-4d44ce264d2c req-6d5fd5b3-1ad2-4170-8765-689d6ac9f080 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:18 np0005603609 nova_compute[221550]: 2026-01-31 09:02:18.951 221554 DEBUG nova.compute.manager [req-317add1a-70ad-4983-8597-4d44ce264d2c req-6d5fd5b3-1ad2-4170-8765-689d6ac9f080 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] No waiting events found dispatching network-vif-plugged-10ec3184-8343-4827-91c5-b4627821beca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:02:18 np0005603609 nova_compute[221550]: 2026-01-31 09:02:18.951 221554 WARNING nova.compute.manager [req-317add1a-70ad-4983-8597-4d44ce264d2c req-6d5fd5b3-1ad2-4170-8765-689d6ac9f080 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Received unexpected event network-vif-plugged-10ec3184-8343-4827-91c5-b4627821beca for instance with vm_state active and task_state None.#033[00m
Jan 31 04:02:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:02:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:19.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:02:19 np0005603609 nova_compute[221550]: 2026-01-31 09:02:19.931 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:20 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:20.031 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '102'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:02:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:20.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:21 np0005603609 nova_compute[221550]: 2026-01-31 09:02:21.420 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Updating instance_info_cache with network_info: [{"id": "10ec3184-8343-4827-91c5-b4627821beca", "address": "fa:16:3e:73:0b:17", "network": {"id": "43fe9c37-3066-4674-8300-45954f140375", "bridge": "br-int", "label": "tempest-network-smoke--1079484700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ec3184-83", "ovs_interfaceid": "10ec3184-8343-4827-91c5-b4627821beca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:02:21 np0005603609 nova_compute[221550]: 2026-01-31 09:02:21.464 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-a11f6d7a-3f71-4519-9604-2cbfbc5b888e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:02:21 np0005603609 nova_compute[221550]: 2026-01-31 09:02:21.465 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 04:02:21 np0005603609 nova_compute[221550]: 2026-01-31 09:02:21.465 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:21 np0005603609 nova_compute[221550]: 2026-01-31 09:02:21.465 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:02:21 np0005603609 nova_compute[221550]: 2026-01-31 09:02:21.465 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:21 np0005603609 nova_compute[221550]: 2026-01-31 09:02:21.490 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:21 np0005603609 nova_compute[221550]: 2026-01-31 09:02:21.491 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:21 np0005603609 nova_compute[221550]: 2026-01-31 09:02:21.491 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:21 np0005603609 nova_compute[221550]: 2026-01-31 09:02:21.491 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:02:21 np0005603609 nova_compute[221550]: 2026-01-31 09:02:21.492 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:02:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:21.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:02:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:02:21 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2934368418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:02:21 np0005603609 nova_compute[221550]: 2026-01-31 09:02:21.913 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:21 np0005603609 nova_compute[221550]: 2026-01-31 09:02:21.985 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000d4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:02:21 np0005603609 nova_compute[221550]: 2026-01-31 09:02:21.985 221554 DEBUG nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] skipping disk for instance-000000d4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:02:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:22 np0005603609 ovn_controller[130359]: 2026-01-31T09:02:22Z|00992|binding|INFO|Releasing lport a5655e84-4616-4f34-8e20-c95316f0adfa from this chassis (sb_readonly=0)
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.116 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:22 np0005603609 NetworkManager[49064]: <info>  [1769850142.1177] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Jan 31 04:02:22 np0005603609 NetworkManager[49064]: <info>  [1769850142.1186] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/464)
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.119 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.119 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4044MB free_disk=20.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.120 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.120 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:22 np0005603609 ovn_controller[130359]: 2026-01-31T09:02:22Z|00993|binding|INFO|Releasing lport a5655e84-4616-4f34-8e20-c95316f0adfa from this chassis (sb_readonly=0)
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.131 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.135 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.216 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance a11f6d7a-3f71-4519-9604-2cbfbc5b888e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.216 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.217 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.262 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:22.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:02:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1838854291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.658 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.664 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.677 221554 DEBUG nova.compute.manager [req-51af3f0d-56d8-4fb8-96c8-fdb8e87e3870 req-2931d331-c88d-4a02-b851-90d66bb32684 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Received event network-changed-10ec3184-8343-4827-91c5-b4627821beca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.678 221554 DEBUG nova.compute.manager [req-51af3f0d-56d8-4fb8-96c8-fdb8e87e3870 req-2931d331-c88d-4a02-b851-90d66bb32684 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Refreshing instance network info cache due to event network-changed-10ec3184-8343-4827-91c5-b4627821beca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.678 221554 DEBUG oslo_concurrency.lockutils [req-51af3f0d-56d8-4fb8-96c8-fdb8e87e3870 req-2931d331-c88d-4a02-b851-90d66bb32684 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-a11f6d7a-3f71-4519-9604-2cbfbc5b888e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.679 221554 DEBUG oslo_concurrency.lockutils [req-51af3f0d-56d8-4fb8-96c8-fdb8e87e3870 req-2931d331-c88d-4a02-b851-90d66bb32684 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-a11f6d7a-3f71-4519-9604-2cbfbc5b888e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.679 221554 DEBUG nova.network.neutron [req-51af3f0d-56d8-4fb8-96c8-fdb8e87e3870 req-2931d331-c88d-4a02-b851-90d66bb32684 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Refreshing network info cache for port 10ec3184-8343-4827-91c5-b4627821beca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.703 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.709 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.740 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:02:22 np0005603609 nova_compute[221550]: 2026-01-31 09:02:22.741 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:02:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:23.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:02:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:02:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:02:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:02:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:02:23 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 04:02:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:02:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:24.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:02:24 np0005603609 nova_compute[221550]: 2026-01-31 09:02:24.388 221554 DEBUG nova.network.neutron [req-51af3f0d-56d8-4fb8-96c8-fdb8e87e3870 req-2931d331-c88d-4a02-b851-90d66bb32684 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Updated VIF entry in instance network info cache for port 10ec3184-8343-4827-91c5-b4627821beca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:02:24 np0005603609 nova_compute[221550]: 2026-01-31 09:02:24.389 221554 DEBUG nova.network.neutron [req-51af3f0d-56d8-4fb8-96c8-fdb8e87e3870 req-2931d331-c88d-4a02-b851-90d66bb32684 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Updating instance_info_cache with network_info: [{"id": "10ec3184-8343-4827-91c5-b4627821beca", "address": "fa:16:3e:73:0b:17", "network": {"id": "43fe9c37-3066-4674-8300-45954f140375", "bridge": "br-int", "label": "tempest-network-smoke--1079484700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ec3184-83", "ovs_interfaceid": "10ec3184-8343-4827-91c5-b4627821beca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:02:24 np0005603609 nova_compute[221550]: 2026-01-31 09:02:24.415 221554 DEBUG oslo_concurrency.lockutils [req-51af3f0d-56d8-4fb8-96c8-fdb8e87e3870 req-2931d331-c88d-4a02-b851-90d66bb32684 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-a11f6d7a-3f71-4519-9604-2cbfbc5b888e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:02:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 04:02:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:02:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:02:24 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:02:24 np0005603609 nova_compute[221550]: 2026-01-31 09:02:24.934 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:02:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:25.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:02:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:26.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:27.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:27 np0005603609 nova_compute[221550]: 2026-01-31 09:02:27.743 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:28.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:29 np0005603609 ovn_controller[130359]: 2026-01-31T09:02:29Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:73:0b:17 10.100.0.4
Jan 31 04:02:29 np0005603609 ovn_controller[130359]: 2026-01-31T09:02:29Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:0b:17 10.100.0.4
Jan 31 04:02:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:29.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:29 np0005603609 nova_compute[221550]: 2026-01-31 09:02:29.937 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:02:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:02:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:30.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:30 np0005603609 nova_compute[221550]: 2026-01-31 09:02:30.737 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:30 np0005603609 nova_compute[221550]: 2026-01-31 09:02:30.738 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:31.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:02:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:32.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:02:32 np0005603609 nova_compute[221550]: 2026-01-31 09:02:32.745 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:33.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:34.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:34 np0005603609 nova_compute[221550]: 2026-01-31 09:02:34.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:34 np0005603609 nova_compute[221550]: 2026-01-31 09:02:34.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 04:02:34 np0005603609 nova_compute[221550]: 2026-01-31 09:02:34.940 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:35.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:35 np0005603609 nova_compute[221550]: 2026-01-31 09:02:35.663 221554 INFO nova.compute.manager [None req-0a79cff1-c56f-49fb-b9bc-716a27bbf4fa 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Get console output#033[00m
Jan 31 04:02:35 np0005603609 nova_compute[221550]: 2026-01-31 09:02:35.668 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 04:02:35 np0005603609 nova_compute[221550]: 2026-01-31 09:02:35.681 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:36.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:02:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:37.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:02:37 np0005603609 nova_compute[221550]: 2026-01-31 09:02:37.747 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:38.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:38 np0005603609 ovn_controller[130359]: 2026-01-31T09:02:38Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:0b:17 10.100.0.4
Jan 31 04:02:39 np0005603609 podman[315977]: 2026-01-31 09:02:39.167885322 +0000 UTC m=+0.053438325 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:02:39 np0005603609 podman[315976]: 2026-01-31 09:02:39.190578821 +0000 UTC m=+0.076343619 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:02:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:02:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:39.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:02:39 np0005603609 nova_compute[221550]: 2026-01-31 09:02:39.943 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:02:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:40.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:02:41 np0005603609 ovn_controller[130359]: 2026-01-31T09:02:41Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:0b:17 10.100.0.4
Jan 31 04:02:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:41.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:41 np0005603609 nova_compute[221550]: 2026-01-31 09:02:41.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:41 np0005603609 nova_compute[221550]: 2026-01-31 09:02:41.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 04:02:41 np0005603609 nova_compute[221550]: 2026-01-31 09:02:41.686 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 04:02:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:42 np0005603609 nova_compute[221550]: 2026-01-31 09:02:42.108 221554 DEBUG nova.compute.manager [req-8f8e2144-9856-44e8-87ac-91f4f764f80a req-f1e93d57-3e92-4b5e-8b77-467cbaf4900b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Received event network-changed-10ec3184-8343-4827-91c5-b4627821beca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:02:42 np0005603609 nova_compute[221550]: 2026-01-31 09:02:42.109 221554 DEBUG nova.compute.manager [req-8f8e2144-9856-44e8-87ac-91f4f764f80a req-f1e93d57-3e92-4b5e-8b77-467cbaf4900b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Refreshing instance network info cache due to event network-changed-10ec3184-8343-4827-91c5-b4627821beca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:02:42 np0005603609 nova_compute[221550]: 2026-01-31 09:02:42.109 221554 DEBUG oslo_concurrency.lockutils [req-8f8e2144-9856-44e8-87ac-91f4f764f80a req-f1e93d57-3e92-4b5e-8b77-467cbaf4900b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-a11f6d7a-3f71-4519-9604-2cbfbc5b888e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:02:42 np0005603609 nova_compute[221550]: 2026-01-31 09:02:42.110 221554 DEBUG oslo_concurrency.lockutils [req-8f8e2144-9856-44e8-87ac-91f4f764f80a req-f1e93d57-3e92-4b5e-8b77-467cbaf4900b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-a11f6d7a-3f71-4519-9604-2cbfbc5b888e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:02:42 np0005603609 nova_compute[221550]: 2026-01-31 09:02:42.110 221554 DEBUG nova.network.neutron [req-8f8e2144-9856-44e8-87ac-91f4f764f80a req-f1e93d57-3e92-4b5e-8b77-467cbaf4900b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Refreshing network info cache for port 10ec3184-8343-4827-91c5-b4627821beca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:02:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:42.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:42 np0005603609 nova_compute[221550]: 2026-01-31 09:02:42.750 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:42 np0005603609 nova_compute[221550]: 2026-01-31 09:02:42.836 221554 DEBUG oslo_concurrency.lockutils [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:42 np0005603609 nova_compute[221550]: 2026-01-31 09:02:42.838 221554 DEBUG oslo_concurrency.lockutils [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:42 np0005603609 nova_compute[221550]: 2026-01-31 09:02:42.838 221554 DEBUG oslo_concurrency.lockutils [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:42 np0005603609 nova_compute[221550]: 2026-01-31 09:02:42.839 221554 DEBUG oslo_concurrency.lockutils [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:42 np0005603609 nova_compute[221550]: 2026-01-31 09:02:42.839 221554 DEBUG oslo_concurrency.lockutils [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:42 np0005603609 nova_compute[221550]: 2026-01-31 09:02:42.843 221554 INFO nova.compute.manager [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Terminating instance#033[00m
Jan 31 04:02:42 np0005603609 nova_compute[221550]: 2026-01-31 09:02:42.845 221554 DEBUG nova.compute.manager [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:02:42 np0005603609 kernel: tap10ec3184-83 (unregistering): left promiscuous mode
Jan 31 04:02:42 np0005603609 NetworkManager[49064]: <info>  [1769850162.9195] device (tap10ec3184-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:02:42 np0005603609 nova_compute[221550]: 2026-01-31 09:02:42.928 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:42 np0005603609 ovn_controller[130359]: 2026-01-31T09:02:42Z|00994|binding|INFO|Releasing lport 10ec3184-8343-4827-91c5-b4627821beca from this chassis (sb_readonly=0)
Jan 31 04:02:42 np0005603609 ovn_controller[130359]: 2026-01-31T09:02:42Z|00995|binding|INFO|Setting lport 10ec3184-8343-4827-91c5-b4627821beca down in Southbound
Jan 31 04:02:42 np0005603609 ovn_controller[130359]: 2026-01-31T09:02:42Z|00996|binding|INFO|Removing iface tap10ec3184-83 ovn-installed in OVS
Jan 31 04:02:42 np0005603609 nova_compute[221550]: 2026-01-31 09:02:42.933 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:42.938 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:0b:17 10.100.0.4'], port_security=['fa:16:3e:73:0b:17 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a11f6d7a-3f71-4519-9604-2cbfbc5b888e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43fe9c37-3066-4674-8300-45954f140375', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3e0cf4b6-8432-4ff6-8cb2-d4523b39f989', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a96d510-e9fa-4749-a164-3a7b6824e79a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=10ec3184-8343-4827-91c5-b4627821beca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:02:42 np0005603609 nova_compute[221550]: 2026-01-31 09:02:42.940 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:42.940 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 10ec3184-8343-4827-91c5-b4627821beca in datapath 43fe9c37-3066-4674-8300-45954f140375 unbound from our chassis#033[00m
Jan 31 04:02:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:42.942 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43fe9c37-3066-4674-8300-45954f140375, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:02:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:42.943 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[36005bf3-686b-4d38-ab7f-5c812eea4463]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:42 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:42.945 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43fe9c37-3066-4674-8300-45954f140375 namespace which is not needed anymore#033[00m
Jan 31 04:02:42 np0005603609 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d000000d4.scope: Deactivated successfully.
Jan 31 04:02:42 np0005603609 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d000000d4.scope: Consumed 13.328s CPU time.
Jan 31 04:02:42 np0005603609 systemd-machined[190912]: Machine qemu-115-instance-000000d4 terminated.
Jan 31 04:02:43 np0005603609 neutron-haproxy-ovnmeta-43fe9c37-3066-4674-8300-45954f140375[315604]: [NOTICE]   (315615) : haproxy version is 2.8.14-c23fe91
Jan 31 04:02:43 np0005603609 neutron-haproxy-ovnmeta-43fe9c37-3066-4674-8300-45954f140375[315604]: [NOTICE]   (315615) : path to executable is /usr/sbin/haproxy
Jan 31 04:02:43 np0005603609 neutron-haproxy-ovnmeta-43fe9c37-3066-4674-8300-45954f140375[315604]: [WARNING]  (315615) : Exiting Master process...
Jan 31 04:02:43 np0005603609 neutron-haproxy-ovnmeta-43fe9c37-3066-4674-8300-45954f140375[315604]: [WARNING]  (315615) : Exiting Master process...
Jan 31 04:02:43 np0005603609 neutron-haproxy-ovnmeta-43fe9c37-3066-4674-8300-45954f140375[315604]: [ALERT]    (315615) : Current worker (315618) exited with code 143 (Terminated)
Jan 31 04:02:43 np0005603609 neutron-haproxy-ovnmeta-43fe9c37-3066-4674-8300-45954f140375[315604]: [WARNING]  (315615) : All workers exited. Exiting... (0)
Jan 31 04:02:43 np0005603609 systemd[1]: libpod-81f1e86de3de587112d2d1b4dd93f1157b9080b0c2459b92a1c2adc166b670f0.scope: Deactivated successfully.
Jan 31 04:02:43 np0005603609 podman[316043]: 2026-01-31 09:02:43.066363243 +0000 UTC m=+0.041917086 container died 81f1e86de3de587112d2d1b4dd93f1157b9080b0c2459b92a1c2adc166b670f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43fe9c37-3066-4674-8300-45954f140375, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.102 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.118 221554 INFO nova.virt.libvirt.driver [-] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Instance destroyed successfully.#033[00m
Jan 31 04:02:43 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-81f1e86de3de587112d2d1b4dd93f1157b9080b0c2459b92a1c2adc166b670f0-userdata-shm.mount: Deactivated successfully.
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.122 221554 DEBUG nova.objects.instance [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'resources' on Instance uuid a11f6d7a-3f71-4519-9604-2cbfbc5b888e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:02:43 np0005603609 systemd[1]: var-lib-containers-storage-overlay-98fb2ccc46e2e29b32bf5dc171731754e1534f2dd5c302803ff68082e1741d50-merged.mount: Deactivated successfully.
Jan 31 04:02:43 np0005603609 podman[316043]: 2026-01-31 09:02:43.126035497 +0000 UTC m=+0.101589320 container cleanup 81f1e86de3de587112d2d1b4dd93f1157b9080b0c2459b92a1c2adc166b670f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43fe9c37-3066-4674-8300-45954f140375, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 04:02:43 np0005603609 systemd[1]: libpod-conmon-81f1e86de3de587112d2d1b4dd93f1157b9080b0c2459b92a1c2adc166b670f0.scope: Deactivated successfully.
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.142 221554 DEBUG nova.virt.libvirt.vif [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:02:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1147534750',display_name='tempest-TestNetworkBasicOps-server-1147534750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1147534750',id=212,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOk46nSq8WG44FR7TgFRI3xXdxBOYNIUOf8OL4cX2tAm4auQr60qM60bhnRGE8WCnD1l6YZnieZRfd6RMYjLQTqZJLUbFa7Lmv2J0ZaW8AP+qfNxIDfxUqfwxVYNaNeGHA==',key_name='tempest-TestNetworkBasicOps-911323969',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:02:17Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-naplw6aw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:02:17Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=a11f6d7a-3f71-4519-9604-2cbfbc5b888e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "10ec3184-8343-4827-91c5-b4627821beca", "address": "fa:16:3e:73:0b:17", "network": {"id": "43fe9c37-3066-4674-8300-45954f140375", "bridge": "br-int", "label": "tempest-network-smoke--1079484700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ec3184-83", "ovs_interfaceid": "10ec3184-8343-4827-91c5-b4627821beca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.142 221554 DEBUG nova.network.os_vif_util [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "10ec3184-8343-4827-91c5-b4627821beca", "address": "fa:16:3e:73:0b:17", "network": {"id": "43fe9c37-3066-4674-8300-45954f140375", "bridge": "br-int", "label": "tempest-network-smoke--1079484700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ec3184-83", "ovs_interfaceid": "10ec3184-8343-4827-91c5-b4627821beca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.143 221554 DEBUG nova.network.os_vif_util [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:73:0b:17,bridge_name='br-int',has_traffic_filtering=True,id=10ec3184-8343-4827-91c5-b4627821beca,network=Network(43fe9c37-3066-4674-8300-45954f140375),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10ec3184-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.143 221554 DEBUG os_vif [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:0b:17,bridge_name='br-int',has_traffic_filtering=True,id=10ec3184-8343-4827-91c5-b4627821beca,network=Network(43fe9c37-3066-4674-8300-45954f140375),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10ec3184-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.144 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.145 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10ec3184-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.146 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.147 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.149 221554 INFO os_vif [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:0b:17,bridge_name='br-int',has_traffic_filtering=True,id=10ec3184-8343-4827-91c5-b4627821beca,network=Network(43fe9c37-3066-4674-8300-45954f140375),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10ec3184-83')#033[00m
Jan 31 04:02:43 np0005603609 podman[316083]: 2026-01-31 09:02:43.193672714 +0000 UTC m=+0.050266447 container remove 81f1e86de3de587112d2d1b4dd93f1157b9080b0c2459b92a1c2adc166b670f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43fe9c37-3066-4674-8300-45954f140375, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 04:02:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:43.200 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d45363b5-705b-4e9a-bd97-1d25c6a52d18]: (4, ('Sat Jan 31 09:02:43 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-43fe9c37-3066-4674-8300-45954f140375 (81f1e86de3de587112d2d1b4dd93f1157b9080b0c2459b92a1c2adc166b670f0)\n81f1e86de3de587112d2d1b4dd93f1157b9080b0c2459b92a1c2adc166b670f0\nSat Jan 31 09:02:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-43fe9c37-3066-4674-8300-45954f140375 (81f1e86de3de587112d2d1b4dd93f1157b9080b0c2459b92a1c2adc166b670f0)\n81f1e86de3de587112d2d1b4dd93f1157b9080b0c2459b92a1c2adc166b670f0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:43.201 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1788e14f-cffb-440a-8c87-36c5a9683eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:43.202 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43fe9c37-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.203 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:43 np0005603609 kernel: tap43fe9c37-30: left promiscuous mode
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.208 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:43.210 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1603ef-b50b-4724-8302-b0456c7f5708]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:43.225 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[61b61564-66bb-47a7-84fa-6dbcc3196dd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:43.227 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a660d3fa-0ba8-4450-8f26-5db40eb60a9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.240 221554 DEBUG nova.compute.manager [req-3498571a-477b-4a4b-8cf5-0237030492dd req-7bcf7339-ea6d-484f-bbcd-4e6267ab95f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Received event network-vif-unplugged-10ec3184-8343-4827-91c5-b4627821beca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.240 221554 DEBUG oslo_concurrency.lockutils [req-3498571a-477b-4a4b-8cf5-0237030492dd req-7bcf7339-ea6d-484f-bbcd-4e6267ab95f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.240 221554 DEBUG oslo_concurrency.lockutils [req-3498571a-477b-4a4b-8cf5-0237030492dd req-7bcf7339-ea6d-484f-bbcd-4e6267ab95f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.241 221554 DEBUG oslo_concurrency.lockutils [req-3498571a-477b-4a4b-8cf5-0237030492dd req-7bcf7339-ea6d-484f-bbcd-4e6267ab95f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.241 221554 DEBUG nova.compute.manager [req-3498571a-477b-4a4b-8cf5-0237030492dd req-7bcf7339-ea6d-484f-bbcd-4e6267ab95f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] No waiting events found dispatching network-vif-unplugged-10ec3184-8343-4827-91c5-b4627821beca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:02:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:43.240 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa5b7e1-fe89-468e-aff1-c57ac55e8ccf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1037616, 'reachable_time': 23732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316116, 'error': None, 'target': 'ovnmeta-43fe9c37-3066-4674-8300-45954f140375', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.241 221554 DEBUG nova.compute.manager [req-3498571a-477b-4a4b-8cf5-0237030492dd req-7bcf7339-ea6d-484f-bbcd-4e6267ab95f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Received event network-vif-unplugged-10ec3184-8343-4827-91c5-b4627821beca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:02:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:43.243 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43fe9c37-3066-4674-8300-45954f140375 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:02:43 np0005603609 systemd[1]: run-netns-ovnmeta\x2d43fe9c37\x2d3066\x2d4674\x2d8300\x2d45954f140375.mount: Deactivated successfully.
Jan 31 04:02:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:02:43.243 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[d061d9cb-40e4-4a93-b274-daf90472ccf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:43.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.601 221554 INFO nova.virt.libvirt.driver [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Deleting instance files /var/lib/nova/instances/a11f6d7a-3f71-4519-9604-2cbfbc5b888e_del#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.602 221554 INFO nova.virt.libvirt.driver [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Deletion of /var/lib/nova/instances/a11f6d7a-3f71-4519-9604-2cbfbc5b888e_del complete#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.729 221554 INFO nova.compute.manager [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.729 221554 DEBUG oslo.service.loopingcall [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.730 221554 DEBUG nova.compute.manager [-] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.730 221554 DEBUG nova.network.neutron [-] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.918 221554 DEBUG nova.network.neutron [req-8f8e2144-9856-44e8-87ac-91f4f764f80a req-f1e93d57-3e92-4b5e-8b77-467cbaf4900b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Updated VIF entry in instance network info cache for port 10ec3184-8343-4827-91c5-b4627821beca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.919 221554 DEBUG nova.network.neutron [req-8f8e2144-9856-44e8-87ac-91f4f764f80a req-f1e93d57-3e92-4b5e-8b77-467cbaf4900b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Updating instance_info_cache with network_info: [{"id": "10ec3184-8343-4827-91c5-b4627821beca", "address": "fa:16:3e:73:0b:17", "network": {"id": "43fe9c37-3066-4674-8300-45954f140375", "bridge": "br-int", "label": "tempest-network-smoke--1079484700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10ec3184-83", "ovs_interfaceid": "10ec3184-8343-4827-91c5-b4627821beca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:02:43 np0005603609 nova_compute[221550]: 2026-01-31 09:02:43.958 221554 DEBUG oslo_concurrency.lockutils [req-8f8e2144-9856-44e8-87ac-91f4f764f80a req-f1e93d57-3e92-4b5e-8b77-467cbaf4900b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-a11f6d7a-3f71-4519-9604-2cbfbc5b888e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:02:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:44.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:44 np0005603609 nova_compute[221550]: 2026-01-31 09:02:44.710 221554 DEBUG nova.network.neutron [-] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:02:44 np0005603609 nova_compute[221550]: 2026-01-31 09:02:44.743 221554 INFO nova.compute.manager [-] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Took 1.01 seconds to deallocate network for instance.#033[00m
Jan 31 04:02:44 np0005603609 nova_compute[221550]: 2026-01-31 09:02:44.803 221554 DEBUG nova.compute.manager [req-de524b35-4013-4583-9bcd-619aec794a74 req-54027b2a-5c4c-4edb-b549-e235a9d77026 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Received event network-vif-deleted-10ec3184-8343-4827-91c5-b4627821beca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:02:44 np0005603609 nova_compute[221550]: 2026-01-31 09:02:44.807 221554 DEBUG oslo_concurrency.lockutils [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:44 np0005603609 nova_compute[221550]: 2026-01-31 09:02:44.808 221554 DEBUG oslo_concurrency.lockutils [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:44 np0005603609 nova_compute[221550]: 2026-01-31 09:02:44.921 221554 DEBUG oslo_concurrency.processutils [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:02:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2159508482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:02:45 np0005603609 nova_compute[221550]: 2026-01-31 09:02:45.329 221554 DEBUG oslo_concurrency.processutils [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:45 np0005603609 nova_compute[221550]: 2026-01-31 09:02:45.334 221554 DEBUG nova.compute.provider_tree [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:02:45 np0005603609 nova_compute[221550]: 2026-01-31 09:02:45.366 221554 DEBUG nova.scheduler.client.report [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:02:45 np0005603609 nova_compute[221550]: 2026-01-31 09:02:45.410 221554 DEBUG nova.compute.manager [req-16b7ab2f-6b3e-4ed9-8c31-b5a4dce057f7 req-d28f04fb-5aa2-45f2-886e-4883207aab66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Received event network-vif-plugged-10ec3184-8343-4827-91c5-b4627821beca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:02:45 np0005603609 nova_compute[221550]: 2026-01-31 09:02:45.410 221554 DEBUG oslo_concurrency.lockutils [req-16b7ab2f-6b3e-4ed9-8c31-b5a4dce057f7 req-d28f04fb-5aa2-45f2-886e-4883207aab66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:45 np0005603609 nova_compute[221550]: 2026-01-31 09:02:45.411 221554 DEBUG oslo_concurrency.lockutils [req-16b7ab2f-6b3e-4ed9-8c31-b5a4dce057f7 req-d28f04fb-5aa2-45f2-886e-4883207aab66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:45 np0005603609 nova_compute[221550]: 2026-01-31 09:02:45.411 221554 DEBUG oslo_concurrency.lockutils [req-16b7ab2f-6b3e-4ed9-8c31-b5a4dce057f7 req-d28f04fb-5aa2-45f2-886e-4883207aab66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:45 np0005603609 nova_compute[221550]: 2026-01-31 09:02:45.411 221554 DEBUG nova.compute.manager [req-16b7ab2f-6b3e-4ed9-8c31-b5a4dce057f7 req-d28f04fb-5aa2-45f2-886e-4883207aab66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] No waiting events found dispatching network-vif-plugged-10ec3184-8343-4827-91c5-b4627821beca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:02:45 np0005603609 nova_compute[221550]: 2026-01-31 09:02:45.411 221554 WARNING nova.compute.manager [req-16b7ab2f-6b3e-4ed9-8c31-b5a4dce057f7 req-d28f04fb-5aa2-45f2-886e-4883207aab66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Received unexpected event network-vif-plugged-10ec3184-8343-4827-91c5-b4627821beca for instance with vm_state deleted and task_state None.#033[00m
Jan 31 04:02:45 np0005603609 nova_compute[221550]: 2026-01-31 09:02:45.422 221554 DEBUG oslo_concurrency.lockutils [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:45 np0005603609 nova_compute[221550]: 2026-01-31 09:02:45.463 221554 INFO nova.scheduler.client.report [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Deleted allocations for instance a11f6d7a-3f71-4519-9604-2cbfbc5b888e#033[00m
Jan 31 04:02:45 np0005603609 nova_compute[221550]: 2026-01-31 09:02:45.555 221554 DEBUG oslo_concurrency.lockutils [None req-6b3c3cfb-9594-4072-99f8-63b69a7ecc1d 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "a11f6d7a-3f71-4519-9604-2cbfbc5b888e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:02:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:45.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:02:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:46.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:47.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:47 np0005603609 nova_compute[221550]: 2026-01-31 09:02:47.752 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:48 np0005603609 nova_compute[221550]: 2026-01-31 09:02:48.146 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:48.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:49 np0005603609 nova_compute[221550]: 2026-01-31 09:02:49.165 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:49 np0005603609 nova_compute[221550]: 2026-01-31 09:02:49.194 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:49.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:50.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:51.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:52.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:52 np0005603609 nova_compute[221550]: 2026-01-31 09:02:52.754 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:53 np0005603609 nova_compute[221550]: 2026-01-31 09:02:53.147 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:53.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:54.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:55.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:56.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:57.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:57 np0005603609 nova_compute[221550]: 2026-01-31 09:02:57.756 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:58 np0005603609 nova_compute[221550]: 2026-01-31 09:02:58.118 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850163.11651, a11f6d7a-3f71-4519-9604-2cbfbc5b888e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:02:58 np0005603609 nova_compute[221550]: 2026-01-31 09:02:58.118 221554 INFO nova.compute.manager [-] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:02:58 np0005603609 nova_compute[221550]: 2026-01-31 09:02:58.143 221554 DEBUG nova.compute.manager [None req-1a8355d4-0ad9-4e3f-80dc-5b7500a61b75 - - - - - -] [instance: a11f6d7a-3f71-4519-9604-2cbfbc5b888e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:02:58 np0005603609 nova_compute[221550]: 2026-01-31 09:02:58.149 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:58.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:02:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:59.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:03:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:00.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:03:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:03:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:01.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:03:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:02.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:02 np0005603609 nova_compute[221550]: 2026-01-31 09:03:02.757 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:03 np0005603609 nova_compute[221550]: 2026-01-31 09:03:03.151 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:03.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:04.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:05.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:06.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:07.555 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:07.555 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:07.556 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:03:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:07.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:03:07 np0005603609 nova_compute[221550]: 2026-01-31 09:03:07.686 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:07 np0005603609 nova_compute[221550]: 2026-01-31 09:03:07.759 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:08 np0005603609 nova_compute[221550]: 2026-01-31 09:03:08.153 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:03:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:08.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:03:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:09.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:09 np0005603609 nova_compute[221550]: 2026-01-31 09:03:09.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:10 np0005603609 podman[316146]: 2026-01-31 09:03:10.234065027 +0000 UTC m=+0.109170264 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:03:10 np0005603609 podman[316145]: 2026-01-31 09:03:10.270009687 +0000 UTC m=+0.144728715 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 04:03:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:03:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:10.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:03:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:03:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:11.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:03:11 np0005603609 nova_compute[221550]: 2026-01-31 09:03:11.748 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:11 np0005603609 nova_compute[221550]: 2026-01-31 09:03:11.748 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:11 np0005603609 nova_compute[221550]: 2026-01-31 09:03:11.772 221554 DEBUG nova.compute.manager [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 04:03:11 np0005603609 nova_compute[221550]: 2026-01-31 09:03:11.870 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:11 np0005603609 nova_compute[221550]: 2026-01-31 09:03:11.870 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:11 np0005603609 nova_compute[221550]: 2026-01-31 09:03:11.877 221554 DEBUG nova.virt.hardware [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:03:11 np0005603609 nova_compute[221550]: 2026-01-31 09:03:11.877 221554 INFO nova.compute.claims [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 04:03:11 np0005603609 nova_compute[221550]: 2026-01-31 09:03:11.969 221554 DEBUG oslo_concurrency.processutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:03:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.397 221554 DEBUG oslo_concurrency.processutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.403 221554 DEBUG nova.compute.provider_tree [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:03:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:12.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.418 221554 DEBUG nova.scheduler.client.report [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.438 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.439 221554 DEBUG nova.compute.manager [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.493 221554 DEBUG nova.compute.manager [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.494 221554 DEBUG nova.network.neutron [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.512 221554 INFO nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.538 221554 DEBUG nova.compute.manager [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.631 221554 DEBUG nova.compute.manager [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.633 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.633 221554 INFO nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Creating image(s)#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.658 221554 DEBUG nova.storage.rbd_utils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 507b81ef-dd4f-409b-95ac-9e19b1ca7a71_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.685 221554 DEBUG nova.storage.rbd_utils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 507b81ef-dd4f-409b-95ac-9e19b1ca7a71_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.708 221554 DEBUG nova.storage.rbd_utils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 507b81ef-dd4f-409b-95ac-9e19b1ca7a71_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.712 221554 DEBUG oslo_concurrency.processutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.760 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.772 221554 DEBUG oslo_concurrency.processutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.773 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.774 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.774 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.801 221554 DEBUG nova.storage.rbd_utils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 507b81ef-dd4f-409b-95ac-9e19b1ca7a71_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:03:12 np0005603609 nova_compute[221550]: 2026-01-31 09:03:12.804 221554 DEBUG oslo_concurrency.processutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 507b81ef-dd4f-409b-95ac-9e19b1ca7a71_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:03:13 np0005603609 nova_compute[221550]: 2026-01-31 09:03:13.156 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:13 np0005603609 nova_compute[221550]: 2026-01-31 09:03:13.268 221554 DEBUG nova.policy [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a56abd8fdd341ae88a99e102ab399de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 04:03:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:03:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:13.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:03:14 np0005603609 nova_compute[221550]: 2026-01-31 09:03:14.303 221554 DEBUG oslo_concurrency.processutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 507b81ef-dd4f-409b-95ac-9e19b1ca7a71_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:03:14 np0005603609 nova_compute[221550]: 2026-01-31 09:03:14.373 221554 DEBUG nova.storage.rbd_utils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] resizing rbd image 507b81ef-dd4f-409b-95ac-9e19b1ca7a71_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 04:03:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:14.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:14 np0005603609 nova_compute[221550]: 2026-01-31 09:03:14.863 221554 DEBUG nova.objects.instance [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'migration_context' on Instance uuid 507b81ef-dd4f-409b-95ac-9e19b1ca7a71 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:03:14 np0005603609 nova_compute[221550]: 2026-01-31 09:03:14.941 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 04:03:14 np0005603609 nova_compute[221550]: 2026-01-31 09:03:14.941 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Ensure instance console log exists: /var/lib/nova/instances/507b81ef-dd4f-409b-95ac-9e19b1ca7a71/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:03:14 np0005603609 nova_compute[221550]: 2026-01-31 09:03:14.942 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:14 np0005603609 nova_compute[221550]: 2026-01-31 09:03:14.942 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:14 np0005603609 nova_compute[221550]: 2026-01-31 09:03:14.942 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:03:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:15.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:03:15 np0005603609 nova_compute[221550]: 2026-01-31 09:03:15.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:15 np0005603609 nova_compute[221550]: 2026-01-31 09:03:15.693 221554 DEBUG nova.network.neutron [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Successfully created port: 24bc56a6-c7f4-41dd-8977-ab50c86debd1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 04:03:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:16.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:17 np0005603609 nova_compute[221550]: 2026-01-31 09:03:17.594 221554 DEBUG nova.network.neutron [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Successfully updated port: 24bc56a6-c7f4-41dd-8977-ab50c86debd1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 04:03:17 np0005603609 nova_compute[221550]: 2026-01-31 09:03:17.616 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:03:17 np0005603609 nova_compute[221550]: 2026-01-31 09:03:17.616 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:03:17 np0005603609 nova_compute[221550]: 2026-01-31 09:03:17.616 221554 DEBUG nova.network.neutron [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:03:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:03:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:17.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:03:17 np0005603609 nova_compute[221550]: 2026-01-31 09:03:17.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:17 np0005603609 nova_compute[221550]: 2026-01-31 09:03:17.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:03:17 np0005603609 nova_compute[221550]: 2026-01-31 09:03:17.696 221554 DEBUG nova.compute.manager [req-7d4fb2fd-6b6a-4657-84fd-5a1327fee251 req-b98171f9-aa7b-406c-a63d-faeb6c6551ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received event network-changed-24bc56a6-c7f4-41dd-8977-ab50c86debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:03:17 np0005603609 nova_compute[221550]: 2026-01-31 09:03:17.696 221554 DEBUG nova.compute.manager [req-7d4fb2fd-6b6a-4657-84fd-5a1327fee251 req-b98171f9-aa7b-406c-a63d-faeb6c6551ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Refreshing instance network info cache due to event network-changed-24bc56a6-c7f4-41dd-8977-ab50c86debd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:03:17 np0005603609 nova_compute[221550]: 2026-01-31 09:03:17.697 221554 DEBUG oslo_concurrency.lockutils [req-7d4fb2fd-6b6a-4657-84fd-5a1327fee251 req-b98171f9-aa7b-406c-a63d-faeb6c6551ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:03:17 np0005603609 nova_compute[221550]: 2026-01-31 09:03:17.746 221554 DEBUG nova.network.neutron [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 04:03:17 np0005603609 nova_compute[221550]: 2026-01-31 09:03:17.762 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:18 np0005603609 nova_compute[221550]: 2026-01-31 09:03:18.158 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:18.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:18 np0005603609 nova_compute[221550]: 2026-01-31 09:03:18.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:18 np0005603609 nova_compute[221550]: 2026-01-31 09:03:18.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:03:18 np0005603609 nova_compute[221550]: 2026-01-31 09:03:18.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:03:18 np0005603609 nova_compute[221550]: 2026-01-31 09:03:18.681 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 04:03:18 np0005603609 nova_compute[221550]: 2026-01-31 09:03:18.682 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:03:18 np0005603609 nova_compute[221550]: 2026-01-31 09:03:18.682 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:18 np0005603609 nova_compute[221550]: 2026-01-31 09:03:18.715 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:18 np0005603609 nova_compute[221550]: 2026-01-31 09:03:18.715 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:18 np0005603609 nova_compute[221550]: 2026-01-31 09:03:18.715 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:18 np0005603609 nova_compute[221550]: 2026-01-31 09:03:18.716 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:03:18 np0005603609 nova_compute[221550]: 2026-01-31 09:03:18.716 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:03:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:03:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2283409527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.125 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.264 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.265 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4199MB free_disk=20.97614288330078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.266 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.266 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.333 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 507b81ef-dd4f-409b-95ac-9e19b1ca7a71 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.333 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.333 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.363 221554 DEBUG nova.network.neutron [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Updating instance_info_cache with network_info: [{"id": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "address": "fa:16:3e:25:57:08", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bc56a6-c7", "ovs_interfaceid": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.375 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.395 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.396 221554 DEBUG nova.compute.manager [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Instance network_info: |[{"id": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "address": "fa:16:3e:25:57:08", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bc56a6-c7", "ovs_interfaceid": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.397 221554 DEBUG oslo_concurrency.lockutils [req-7d4fb2fd-6b6a-4657-84fd-5a1327fee251 req-b98171f9-aa7b-406c-a63d-faeb6c6551ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.397 221554 DEBUG nova.network.neutron [req-7d4fb2fd-6b6a-4657-84fd-5a1327fee251 req-b98171f9-aa7b-406c-a63d-faeb6c6551ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Refreshing network info cache for port 24bc56a6-c7f4-41dd-8977-ab50c86debd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.400 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Start _get_guest_xml network_info=[{"id": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "address": "fa:16:3e:25:57:08", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bc56a6-c7", "ovs_interfaceid": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.403 221554 WARNING nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.408 221554 DEBUG nova.virt.libvirt.host [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.408 221554 DEBUG nova.virt.libvirt.host [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.411 221554 DEBUG nova.virt.libvirt.host [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.411 221554 DEBUG nova.virt.libvirt.host [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.412 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.412 221554 DEBUG nova.virt.hardware [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.412 221554 DEBUG nova.virt.hardware [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.413 221554 DEBUG nova.virt.hardware [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.413 221554 DEBUG nova.virt.hardware [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.413 221554 DEBUG nova.virt.hardware [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.413 221554 DEBUG nova.virt.hardware [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.414 221554 DEBUG nova.virt.hardware [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.414 221554 DEBUG nova.virt.hardware [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.414 221554 DEBUG nova.virt.hardware [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.415 221554 DEBUG nova.virt.hardware [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.415 221554 DEBUG nova.virt.hardware [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.418 221554 DEBUG oslo_concurrency.processutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:03:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:19.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:03:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2384576305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.784 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.789 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.810 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:03:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:03:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/26884125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.850 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.850 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.853 221554 DEBUG oslo_concurrency.processutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.879 221554 DEBUG nova.storage.rbd_utils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 507b81ef-dd4f-409b-95ac-9e19b1ca7a71_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:03:19 np0005603609 nova_compute[221550]: 2026-01-31 09:03:19.883 221554 DEBUG oslo_concurrency.processutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:03:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:03:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3981064920' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:03:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:20.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.448 221554 DEBUG oslo_concurrency.processutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.450 221554 DEBUG nova.virt.libvirt.vif [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:03:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-181972363',display_name='tempest-TestNetworkBasicOps-server-181972363',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-181972363',id=213,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHyCsO9U2u28zbNQ2wDexqHXg5PlUvBASpDV198Do2yfuHVMN5dHZEY+biSdpfTk3cwlfimVuuXN+VPnmdszLZ+Z7DHy2QbDeJvL62GQkjssmr+ckCtf9HK/c8PjilH1Kw==',key_name='tempest-TestNetworkBasicOps-692812669',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-kiiznuqv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:03:12Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=507b81ef-dd4f-409b-95ac-9e19b1ca7a71,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "address": "fa:16:3e:25:57:08", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bc56a6-c7", "ovs_interfaceid": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.450 221554 DEBUG nova.network.os_vif_util [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "address": "fa:16:3e:25:57:08", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bc56a6-c7", "ovs_interfaceid": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.451 221554 DEBUG nova.network.os_vif_util [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:57:08,bridge_name='br-int',has_traffic_filtering=True,id=24bc56a6-c7f4-41dd-8977-ab50c86debd1,network=Network(c7019d0b-5031-4941-b812-751bbbab3ab4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24bc56a6-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.452 221554 DEBUG nova.objects.instance [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 507b81ef-dd4f-409b-95ac-9e19b1ca7a71 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.473 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  <uuid>507b81ef-dd4f-409b-95ac-9e19b1ca7a71</uuid>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  <name>instance-000000d5</name>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestNetworkBasicOps-server-181972363</nova:name>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 09:03:19</nova:creationTime>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        <nova:port uuid="24bc56a6-c7f4-41dd-8977-ab50c86debd1">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <system>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <entry name="serial">507b81ef-dd4f-409b-95ac-9e19b1ca7a71</entry>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <entry name="uuid">507b81ef-dd4f-409b-95ac-9e19b1ca7a71</entry>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    </system>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  <os>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  </os>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  <features>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  </features>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  </clock>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  <devices>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/507b81ef-dd4f-409b-95ac-9e19b1ca7a71_disk">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      </source>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      </auth>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    </disk>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/507b81ef-dd4f-409b-95ac-9e19b1ca7a71_disk.config">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      </source>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      </auth>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    </disk>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:25:57:08"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <target dev="tap24bc56a6-c7"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    </interface>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/507b81ef-dd4f-409b-95ac-9e19b1ca7a71/console.log" append="off"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    </serial>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <video>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    </video>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    </rng>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 04:03:20 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 04:03:20 np0005603609 nova_compute[221550]:  </devices>
Jan 31 04:03:20 np0005603609 nova_compute[221550]: </domain>
Jan 31 04:03:20 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.474 221554 DEBUG nova.compute.manager [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Preparing to wait for external event network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.474 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.475 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.475 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.476 221554 DEBUG nova.virt.libvirt.vif [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:03:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-181972363',display_name='tempest-TestNetworkBasicOps-server-181972363',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-181972363',id=213,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHyCsO9U2u28zbNQ2wDexqHXg5PlUvBASpDV198Do2yfuHVMN5dHZEY+biSdpfTk3cwlfimVuuXN+VPnmdszLZ+Z7DHy2QbDeJvL62GQkjssmr+ckCtf9HK/c8PjilH1Kw==',key_name='tempest-TestNetworkBasicOps-692812669',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-kiiznuqv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:03:12Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=507b81ef-dd4f-409b-95ac-9e19b1ca7a71,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "address": "fa:16:3e:25:57:08", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bc56a6-c7", "ovs_interfaceid": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.476 221554 DEBUG nova.network.os_vif_util [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "address": "fa:16:3e:25:57:08", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bc56a6-c7", "ovs_interfaceid": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.476 221554 DEBUG nova.network.os_vif_util [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:57:08,bridge_name='br-int',has_traffic_filtering=True,id=24bc56a6-c7f4-41dd-8977-ab50c86debd1,network=Network(c7019d0b-5031-4941-b812-751bbbab3ab4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24bc56a6-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.477 221554 DEBUG os_vif [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:57:08,bridge_name='br-int',has_traffic_filtering=True,id=24bc56a6-c7f4-41dd-8977-ab50c86debd1,network=Network(c7019d0b-5031-4941-b812-751bbbab3ab4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24bc56a6-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.477 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.478 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.478 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.480 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.480 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24bc56a6-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.481 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24bc56a6-c7, col_values=(('external_ids', {'iface-id': '24bc56a6-c7f4-41dd-8977-ab50c86debd1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:57:08', 'vm-uuid': '507b81ef-dd4f-409b-95ac-9e19b1ca7a71'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.482 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:20 np0005603609 NetworkManager[49064]: <info>  [1769850200.4833] manager: (tap24bc56a6-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/465)
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.484 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.487 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.488 221554 INFO os_vif [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:57:08,bridge_name='br-int',has_traffic_filtering=True,id=24bc56a6-c7f4-41dd-8977-ab50c86debd1,network=Network(c7019d0b-5031-4941-b812-751bbbab3ab4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24bc56a6-c7')#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.544 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.545 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.545 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:25:57:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.545 221554 INFO nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Using config drive#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.566 221554 DEBUG nova.storage.rbd_utils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 507b81ef-dd4f-409b-95ac-9e19b1ca7a71_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.884 221554 INFO nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Creating config drive at /var/lib/nova/instances/507b81ef-dd4f-409b-95ac-9e19b1ca7a71/disk.config#033[00m
Jan 31 04:03:20 np0005603609 nova_compute[221550]: 2026-01-31 09:03:20.889 221554 DEBUG oslo_concurrency.processutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/507b81ef-dd4f-409b-95ac-9e19b1ca7a71/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmuupw725 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.015 221554 DEBUG oslo_concurrency.processutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/507b81ef-dd4f-409b-95ac-9e19b1ca7a71/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmuupw725" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.044 221554 DEBUG nova.storage.rbd_utils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 507b81ef-dd4f-409b-95ac-9e19b1ca7a71_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.048 221554 DEBUG oslo_concurrency.processutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/507b81ef-dd4f-409b-95ac-9e19b1ca7a71/disk.config 507b81ef-dd4f-409b-95ac-9e19b1ca7a71_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.463 221554 DEBUG nova.network.neutron [req-7d4fb2fd-6b6a-4657-84fd-5a1327fee251 req-b98171f9-aa7b-406c-a63d-faeb6c6551ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Updated VIF entry in instance network info cache for port 24bc56a6-c7f4-41dd-8977-ab50c86debd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.464 221554 DEBUG nova.network.neutron [req-7d4fb2fd-6b6a-4657-84fd-5a1327fee251 req-b98171f9-aa7b-406c-a63d-faeb6c6551ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Updating instance_info_cache with network_info: [{"id": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "address": "fa:16:3e:25:57:08", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bc56a6-c7", "ovs_interfaceid": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.480 221554 DEBUG oslo_concurrency.lockutils [req-7d4fb2fd-6b6a-4657-84fd-5a1327fee251 req-b98171f9-aa7b-406c-a63d-faeb6c6551ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:03:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:21.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.653 221554 DEBUG oslo_concurrency.processutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/507b81ef-dd4f-409b-95ac-9e19b1ca7a71/disk.config 507b81ef-dd4f-409b-95ac-9e19b1ca7a71_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.654 221554 INFO nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Deleting local config drive /var/lib/nova/instances/507b81ef-dd4f-409b-95ac-9e19b1ca7a71/disk.config because it was imported into RBD.#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:21 np0005603609 kernel: tap24bc56a6-c7: entered promiscuous mode
Jan 31 04:03:21 np0005603609 NetworkManager[49064]: <info>  [1769850201.6958] manager: (tap24bc56a6-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/466)
Jan 31 04:03:21 np0005603609 ovn_controller[130359]: 2026-01-31T09:03:21Z|00997|binding|INFO|Claiming lport 24bc56a6-c7f4-41dd-8977-ab50c86debd1 for this chassis.
Jan 31 04:03:21 np0005603609 ovn_controller[130359]: 2026-01-31T09:03:21Z|00998|binding|INFO|24bc56a6-c7f4-41dd-8977-ab50c86debd1: Claiming fa:16:3e:25:57:08 10.100.0.13
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.696 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.699 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.704 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.712 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:57:08 10.100.0.13'], port_security=['fa:16:3e:25:57:08 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '507b81ef-dd4f-409b-95ac-9e19b1ca7a71', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c7019d0b-5031-4941-b812-751bbbab3ab4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '995f3a57-15aa-4ffd-8d92-52db4b28c494', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7722569a-62de-45e9-b1af-a80aadef9b5c, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=24bc56a6-c7f4-41dd-8977-ab50c86debd1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.713 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 24bc56a6-c7f4-41dd-8977-ab50c86debd1 in datapath c7019d0b-5031-4941-b812-751bbbab3ab4 bound to our chassis#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.714 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c7019d0b-5031-4941-b812-751bbbab3ab4#033[00m
Jan 31 04:03:21 np0005603609 systemd-udevd[316556]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:03:21 np0005603609 systemd-machined[190912]: New machine qemu-116-instance-000000d5.
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.722 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[93b9d636-5217-415c-8a83-64c42abe7b0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.723 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc7019d0b-51 in ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.723 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:21 np0005603609 NetworkManager[49064]: <info>  [1769850201.7251] device (tap24bc56a6-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.724 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc7019d0b-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.724 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[abb01a96-36e5-4f1a-a437-13d33bdf81c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:21 np0005603609 NetworkManager[49064]: <info>  [1769850201.7258] device (tap24bc56a6-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.725 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4dcdc6-491d-4ed8-b59a-48b85a5d1685]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:21 np0005603609 systemd[1]: Started Virtual Machine qemu-116-instance-000000d5.
Jan 31 04:03:21 np0005603609 ovn_controller[130359]: 2026-01-31T09:03:21Z|00999|binding|INFO|Setting lport 24bc56a6-c7f4-41dd-8977-ab50c86debd1 ovn-installed in OVS
Jan 31 04:03:21 np0005603609 ovn_controller[130359]: 2026-01-31T09:03:21Z|01000|binding|INFO|Setting lport 24bc56a6-c7f4-41dd-8977-ab50c86debd1 up in Southbound
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.728 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.735 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[f435f5ad-aa77-4e5b-96cf-c67f52a00487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.744 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4e10e90e-50a5-42ea-a487-fa15a3514754]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.763 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[22f0d5ee-ebbd-4c6f-942c-7f30a7277a31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:21 np0005603609 systemd-udevd[316560]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:03:21 np0005603609 NetworkManager[49064]: <info>  [1769850201.7687] manager: (tapc7019d0b-50): new Veth device (/org/freedesktop/NetworkManager/Devices/467)
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.767 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[158f56d7-ee60-4b45-9998-2503baeb4d9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.788 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1f3f38-8ba5-4153-96e5-1cd800586245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.791 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[21def940-21fe-4037-984b-e242ab4f6b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:21 np0005603609 NetworkManager[49064]: <info>  [1769850201.8043] device (tapc7019d0b-50): carrier: link connected
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.809 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[164761ec-aae5-43df-b9c4-716f5fc88d83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.822 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d950de12-3cf6-4fea-8e2e-bda2618c09b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc7019d0b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:07:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 305], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1044150, 'reachable_time': 31926, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316590, 'error': None, 'target': 'ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.833 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2f00eb9a-4334-4c5b-a099-dc65e75d637c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:745'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1044150, 'tstamp': 1044150}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316591, 'error': None, 'target': 'ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.845 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c83c6f58-4517-4065-85d4-6c9e2b7753c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc7019d0b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:07:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 305], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1044150, 'reachable_time': 31926, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316592, 'error': None, 'target': 'ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.868 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b536e83d-2431-40b6-95db-cc9cf8ba90bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.914 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b3210c23-48c2-4b4d-9821-aa26b92c6fa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.915 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7019d0b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.915 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.916 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7019d0b-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.917 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:21 np0005603609 NetworkManager[49064]: <info>  [1769850201.9187] manager: (tapc7019d0b-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Jan 31 04:03:21 np0005603609 kernel: tapc7019d0b-50: entered promiscuous mode
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.919 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.920 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc7019d0b-50, col_values=(('external_ids', {'iface-id': '7680dcda-ece7-42a1-b1cc-bd8687a1f61f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.921 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:21 np0005603609 ovn_controller[130359]: 2026-01-31T09:03:21Z|01001|binding|INFO|Releasing lport 7680dcda-ece7-42a1-b1cc-bd8687a1f61f from this chassis (sb_readonly=0)
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.921 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.922 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c7019d0b-5031-4941-b812-751bbbab3ab4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c7019d0b-5031-4941-b812-751bbbab3ab4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.925 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e74fe0ea-30e6-4e50-9a32-4c5e1e1118b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.926 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.926 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-c7019d0b-5031-4941-b812-751bbbab3ab4
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/c7019d0b-5031-4941-b812-751bbbab3ab4.pid.haproxy
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID c7019d0b-5031-4941-b812-751bbbab3ab4
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:03:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:21.928 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4', 'env', 'PROCESS_TAG=haproxy-c7019d0b-5031-4941-b812-751bbbab3ab4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c7019d0b-5031-4941-b812-751bbbab3ab4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.997 221554 DEBUG nova.compute.manager [req-0a590716-a0f1-412d-b205-52cc5385e739 req-e7424753-a3dc-4dde-89f1-d0148b5f8ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received event network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.998 221554 DEBUG oslo_concurrency.lockutils [req-0a590716-a0f1-412d-b205-52cc5385e739 req-e7424753-a3dc-4dde-89f1-d0148b5f8ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.998 221554 DEBUG oslo_concurrency.lockutils [req-0a590716-a0f1-412d-b205-52cc5385e739 req-e7424753-a3dc-4dde-89f1-d0148b5f8ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.998 221554 DEBUG oslo_concurrency.lockutils [req-0a590716-a0f1-412d-b205-52cc5385e739 req-e7424753-a3dc-4dde-89f1-d0148b5f8ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:21 np0005603609 nova_compute[221550]: 2026-01-31 09:03:21.999 221554 DEBUG nova.compute.manager [req-0a590716-a0f1-412d-b205-52cc5385e739 req-e7424753-a3dc-4dde-89f1-d0148b5f8ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Processing event network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:03:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:22 np0005603609 podman[316661]: 2026-01-31 09:03:22.240750796 +0000 UTC m=+0.047056010 container create d84da0e02a344789b2fac4fede3dcf47ea9407c55fe6eb7d8f55bc58cbafa44e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.262 221554 DEBUG nova.compute.manager [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.263 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850202.2620225, 507b81ef-dd4f-409b-95ac-9e19b1ca7a71 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.263 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] VM Started (Lifecycle Event)#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.265 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.268 221554 INFO nova.virt.libvirt.driver [-] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Instance spawned successfully.#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.269 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 04:03:22 np0005603609 systemd[1]: Started libpod-conmon-d84da0e02a344789b2fac4fede3dcf47ea9407c55fe6eb7d8f55bc58cbafa44e.scope.
Jan 31 04:03:22 np0005603609 systemd[1]: Started libcrun container.
Jan 31 04:03:22 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5569bc1269183e05e73f9d880a4406c08ec6969a1bede0bf0cf9a6a9df24ec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:03:22 np0005603609 podman[316661]: 2026-01-31 09:03:22.301648251 +0000 UTC m=+0.107953495 container init d84da0e02a344789b2fac4fede3dcf47ea9407c55fe6eb7d8f55bc58cbafa44e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:03:22 np0005603609 podman[316661]: 2026-01-31 09:03:22.305413782 +0000 UTC m=+0.111718996 container start d84da0e02a344789b2fac4fede3dcf47ea9407c55fe6eb7d8f55bc58cbafa44e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 31 04:03:22 np0005603609 podman[316661]: 2026-01-31 09:03:22.211652142 +0000 UTC m=+0.017957376 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:03:22 np0005603609 neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4[316681]: [NOTICE]   (316685) : New worker (316687) forked
Jan 31 04:03:22 np0005603609 neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4[316681]: [NOTICE]   (316685) : Loading success.
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.324 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.329 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.333 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.334 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.334 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.335 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.335 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.335 221554 DEBUG nova.virt.libvirt.driver [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.385 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.386 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850202.2622106, 507b81ef-dd4f-409b-95ac-9e19b1ca7a71 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.386 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:03:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:22.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.424 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.426 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850202.264764, 507b81ef-dd4f-409b-95ac-9e19b1ca7a71 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.427 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.439 221554 INFO nova.compute.manager [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Took 9.81 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.440 221554 DEBUG nova.compute.manager [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.455 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.458 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.486 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.526 221554 INFO nova.compute.manager [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Took 10.69 seconds to build instance.#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.545 221554 DEBUG oslo_concurrency.lockutils [None req-310a9b39-0941-4ee8-9ef8-11853902cd60 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:22 np0005603609 nova_compute[221550]: 2026-01-31 09:03:22.804 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:23.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:24 np0005603609 nova_compute[221550]: 2026-01-31 09:03:24.072 221554 DEBUG nova.compute.manager [req-ccd95b5f-53c1-4d18-8111-75b0fe50e5a4 req-f6e090ae-7c73-4b04-859c-ceb65cc4f7c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received event network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:03:24 np0005603609 nova_compute[221550]: 2026-01-31 09:03:24.072 221554 DEBUG oslo_concurrency.lockutils [req-ccd95b5f-53c1-4d18-8111-75b0fe50e5a4 req-f6e090ae-7c73-4b04-859c-ceb65cc4f7c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:24 np0005603609 nova_compute[221550]: 2026-01-31 09:03:24.072 221554 DEBUG oslo_concurrency.lockutils [req-ccd95b5f-53c1-4d18-8111-75b0fe50e5a4 req-f6e090ae-7c73-4b04-859c-ceb65cc4f7c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:24 np0005603609 nova_compute[221550]: 2026-01-31 09:03:24.072 221554 DEBUG oslo_concurrency.lockutils [req-ccd95b5f-53c1-4d18-8111-75b0fe50e5a4 req-f6e090ae-7c73-4b04-859c-ceb65cc4f7c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:24 np0005603609 nova_compute[221550]: 2026-01-31 09:03:24.073 221554 DEBUG nova.compute.manager [req-ccd95b5f-53c1-4d18-8111-75b0fe50e5a4 req-f6e090ae-7c73-4b04-859c-ceb65cc4f7c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] No waiting events found dispatching network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:03:24 np0005603609 nova_compute[221550]: 2026-01-31 09:03:24.073 221554 WARNING nova.compute.manager [req-ccd95b5f-53c1-4d18-8111-75b0fe50e5a4 req-f6e090ae-7c73-4b04-859c-ceb65cc4f7c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received unexpected event network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:03:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:24.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:25 np0005603609 nova_compute[221550]: 2026-01-31 09:03:25.482 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:25.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:25.750 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=103, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=102) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:03:25 np0005603609 nova_compute[221550]: 2026-01-31 09:03:25.751 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:25.752 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:03:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:26.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:26 np0005603609 NetworkManager[49064]: <info>  [1769850206.6465] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/469)
Jan 31 04:03:26 np0005603609 nova_compute[221550]: 2026-01-31 09:03:26.646 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:26 np0005603609 NetworkManager[49064]: <info>  [1769850206.6473] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/470)
Jan 31 04:03:26 np0005603609 ovn_controller[130359]: 2026-01-31T09:03:26Z|01002|binding|INFO|Releasing lport 7680dcda-ece7-42a1-b1cc-bd8687a1f61f from this chassis (sb_readonly=0)
Jan 31 04:03:26 np0005603609 ovn_controller[130359]: 2026-01-31T09:03:26Z|01003|binding|INFO|Releasing lport 7680dcda-ece7-42a1-b1cc-bd8687a1f61f from this chassis (sb_readonly=0)
Jan 31 04:03:26 np0005603609 nova_compute[221550]: 2026-01-31 09:03:26.655 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:26 np0005603609 nova_compute[221550]: 2026-01-31 09:03:26.659 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:26 np0005603609 nova_compute[221550]: 2026-01-31 09:03:26.669 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:26 np0005603609 nova_compute[221550]: 2026-01-31 09:03:26.941 221554 DEBUG nova.compute.manager [req-4198303f-89c5-4989-97d4-b775a0af2307 req-2043a6ce-eade-4b1e-b9a3-1003991d90a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received event network-changed-24bc56a6-c7f4-41dd-8977-ab50c86debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:03:26 np0005603609 nova_compute[221550]: 2026-01-31 09:03:26.942 221554 DEBUG nova.compute.manager [req-4198303f-89c5-4989-97d4-b775a0af2307 req-2043a6ce-eade-4b1e-b9a3-1003991d90a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Refreshing instance network info cache due to event network-changed-24bc56a6-c7f4-41dd-8977-ab50c86debd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:03:26 np0005603609 nova_compute[221550]: 2026-01-31 09:03:26.942 221554 DEBUG oslo_concurrency.lockutils [req-4198303f-89c5-4989-97d4-b775a0af2307 req-2043a6ce-eade-4b1e-b9a3-1003991d90a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:03:26 np0005603609 nova_compute[221550]: 2026-01-31 09:03:26.942 221554 DEBUG oslo_concurrency.lockutils [req-4198303f-89c5-4989-97d4-b775a0af2307 req-2043a6ce-eade-4b1e-b9a3-1003991d90a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:03:26 np0005603609 nova_compute[221550]: 2026-01-31 09:03:26.942 221554 DEBUG nova.network.neutron [req-4198303f-89c5-4989-97d4-b775a0af2307 req-2043a6ce-eade-4b1e-b9a3-1003991d90a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Refreshing network info cache for port 24bc56a6-c7f4-41dd-8977-ab50c86debd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:03:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:03:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:27.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:03:27 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:03:27.753 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '103'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:27 np0005603609 nova_compute[221550]: 2026-01-31 09:03:27.806 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:28.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:28 np0005603609 nova_compute[221550]: 2026-01-31 09:03:28.993 221554 DEBUG nova.network.neutron [req-4198303f-89c5-4989-97d4-b775a0af2307 req-2043a6ce-eade-4b1e-b9a3-1003991d90a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Updated VIF entry in instance network info cache for port 24bc56a6-c7f4-41dd-8977-ab50c86debd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:03:28 np0005603609 nova_compute[221550]: 2026-01-31 09:03:28.994 221554 DEBUG nova.network.neutron [req-4198303f-89c5-4989-97d4-b775a0af2307 req-2043a6ce-eade-4b1e-b9a3-1003991d90a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Updating instance_info_cache with network_info: [{"id": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "address": "fa:16:3e:25:57:08", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bc56a6-c7", "ovs_interfaceid": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:03:29 np0005603609 nova_compute[221550]: 2026-01-31 09:03:29.024 221554 DEBUG oslo_concurrency.lockutils [req-4198303f-89c5-4989-97d4-b775a0af2307 req-2043a6ce-eade-4b1e-b9a3-1003991d90a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:03:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:03:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:29.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:03:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:30.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:30 np0005603609 nova_compute[221550]: 2026-01-31 09:03:30.486 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 04:03:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:03:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:03:30 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:03:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:31.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:31 np0005603609 nova_compute[221550]: 2026-01-31 09:03:31.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:03:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:32.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:03:32 np0005603609 nova_compute[221550]: 2026-01-31 09:03:32.808 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:33.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:34.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:35 np0005603609 ovn_controller[130359]: 2026-01-31T09:03:35Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:25:57:08 10.100.0.13
Jan 31 04:03:35 np0005603609 ovn_controller[130359]: 2026-01-31T09:03:35Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:25:57:08 10.100.0.13
Jan 31 04:03:35 np0005603609 nova_compute[221550]: 2026-01-31 09:03:35.487 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:35.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:36.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:36 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:03:36 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:03:36 np0005603609 nova_compute[221550]: 2026-01-31 09:03:36.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:37.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:37 np0005603609 nova_compute[221550]: 2026-01-31 09:03:37.846 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:38.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:03:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:39.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:03:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:03:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:40.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:03:40 np0005603609 nova_compute[221550]: 2026-01-31 09:03:40.488 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:41 np0005603609 podman[316880]: 2026-01-31 09:03:41.150540979 +0000 UTC m=+0.038601305 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 31 04:03:41 np0005603609 podman[316879]: 2026-01-31 09:03:41.169687963 +0000 UTC m=+0.060249510 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 04:03:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:41.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:42.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:42 np0005603609 nova_compute[221550]: 2026-01-31 09:03:42.849 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:03:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:43.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:03:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:44.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:45 np0005603609 nova_compute[221550]: 2026-01-31 09:03:45.490 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:45.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:46.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:47 np0005603609 nova_compute[221550]: 2026-01-31 09:03:47.852 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:03:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:47.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:03:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:48.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:03:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:49.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:03:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:03:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:50.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:03:50 np0005603609 nova_compute[221550]: 2026-01-31 09:03:50.551 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:50 np0005603609 nova_compute[221550]: 2026-01-31 09:03:50.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:03:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:51.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:03:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:03:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:52.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:03:52 np0005603609 nova_compute[221550]: 2026-01-31 09:03:52.854 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:03:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1116350203' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:03:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:03:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1116350203' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:03:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:03:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:53.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:03:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:03:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:54.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:03:55 np0005603609 nova_compute[221550]: 2026-01-31 09:03:55.552 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:55.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:56.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:57 np0005603609 nova_compute[221550]: 2026-01-31 09:03:57.856 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:03:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:57.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:03:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:58.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:03:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:59.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:04:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:00.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:04:00 np0005603609 nova_compute[221550]: 2026-01-31 09:04:00.554 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:00 np0005603609 nova_compute[221550]: 2026-01-31 09:04:00.962 221554 INFO nova.compute.manager [None req-49b0348a-7045-4019-8483-f1b5dae17153 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Get console output#033[00m
Jan 31 04:04:00 np0005603609 nova_compute[221550]: 2026-01-31 09:04:00.967 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 04:04:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:01.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:01 np0005603609 nova_compute[221550]: 2026-01-31 09:04:01.907 221554 DEBUG nova.compute.manager [req-c7cc12c2-3808-4763-ae52-4efbdb4b7472 req-7a0eb7b1-7804-4fe1-880d-ec3fa2f448b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received event network-changed-24bc56a6-c7f4-41dd-8977-ab50c86debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:04:01 np0005603609 nova_compute[221550]: 2026-01-31 09:04:01.908 221554 DEBUG nova.compute.manager [req-c7cc12c2-3808-4763-ae52-4efbdb4b7472 req-7a0eb7b1-7804-4fe1-880d-ec3fa2f448b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Refreshing instance network info cache due to event network-changed-24bc56a6-c7f4-41dd-8977-ab50c86debd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:04:01 np0005603609 nova_compute[221550]: 2026-01-31 09:04:01.908 221554 DEBUG oslo_concurrency.lockutils [req-c7cc12c2-3808-4763-ae52-4efbdb4b7472 req-7a0eb7b1-7804-4fe1-880d-ec3fa2f448b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:04:01 np0005603609 nova_compute[221550]: 2026-01-31 09:04:01.908 221554 DEBUG oslo_concurrency.lockutils [req-c7cc12c2-3808-4763-ae52-4efbdb4b7472 req-7a0eb7b1-7804-4fe1-880d-ec3fa2f448b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:04:01 np0005603609 nova_compute[221550]: 2026-01-31 09:04:01.908 221554 DEBUG nova.network.neutron [req-c7cc12c2-3808-4763-ae52-4efbdb4b7472 req-7a0eb7b1-7804-4fe1-880d-ec3fa2f448b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Refreshing network info cache for port 24bc56a6-c7f4-41dd-8977-ab50c86debd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:04:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:02.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:02 np0005603609 nova_compute[221550]: 2026-01-31 09:04:02.859 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:03 np0005603609 nova_compute[221550]: 2026-01-31 09:04:03.174 221554 INFO nova.compute.manager [None req-93941633-8110-4cdf-8f65-061456ea472c 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Get console output#033[00m
Jan 31 04:04:03 np0005603609 nova_compute[221550]: 2026-01-31 09:04:03.179 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 04:04:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:03.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:04 np0005603609 nova_compute[221550]: 2026-01-31 09:04:04.007 221554 DEBUG nova.compute.manager [req-b2eb41b6-ecf5-4ed5-a50a-a6fa7647abc0 req-5c40ddba-ce7b-440a-b042-1d059de568f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received event network-vif-unplugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:04:04 np0005603609 nova_compute[221550]: 2026-01-31 09:04:04.008 221554 DEBUG oslo_concurrency.lockutils [req-b2eb41b6-ecf5-4ed5-a50a-a6fa7647abc0 req-5c40ddba-ce7b-440a-b042-1d059de568f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:04 np0005603609 nova_compute[221550]: 2026-01-31 09:04:04.008 221554 DEBUG oslo_concurrency.lockutils [req-b2eb41b6-ecf5-4ed5-a50a-a6fa7647abc0 req-5c40ddba-ce7b-440a-b042-1d059de568f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:04 np0005603609 nova_compute[221550]: 2026-01-31 09:04:04.009 221554 DEBUG oslo_concurrency.lockutils [req-b2eb41b6-ecf5-4ed5-a50a-a6fa7647abc0 req-5c40ddba-ce7b-440a-b042-1d059de568f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:04 np0005603609 nova_compute[221550]: 2026-01-31 09:04:04.010 221554 DEBUG nova.compute.manager [req-b2eb41b6-ecf5-4ed5-a50a-a6fa7647abc0 req-5c40ddba-ce7b-440a-b042-1d059de568f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] No waiting events found dispatching network-vif-unplugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:04:04 np0005603609 nova_compute[221550]: 2026-01-31 09:04:04.010 221554 WARNING nova.compute.manager [req-b2eb41b6-ecf5-4ed5-a50a-a6fa7647abc0 req-5c40ddba-ce7b-440a-b042-1d059de568f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received unexpected event network-vif-unplugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:04:04 np0005603609 nova_compute[221550]: 2026-01-31 09:04:04.011 221554 DEBUG nova.compute.manager [req-b2eb41b6-ecf5-4ed5-a50a-a6fa7647abc0 req-5c40ddba-ce7b-440a-b042-1d059de568f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received event network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:04:04 np0005603609 nova_compute[221550]: 2026-01-31 09:04:04.011 221554 DEBUG oslo_concurrency.lockutils [req-b2eb41b6-ecf5-4ed5-a50a-a6fa7647abc0 req-5c40ddba-ce7b-440a-b042-1d059de568f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:04 np0005603609 nova_compute[221550]: 2026-01-31 09:04:04.012 221554 DEBUG oslo_concurrency.lockutils [req-b2eb41b6-ecf5-4ed5-a50a-a6fa7647abc0 req-5c40ddba-ce7b-440a-b042-1d059de568f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:04 np0005603609 nova_compute[221550]: 2026-01-31 09:04:04.012 221554 DEBUG oslo_concurrency.lockutils [req-b2eb41b6-ecf5-4ed5-a50a-a6fa7647abc0 req-5c40ddba-ce7b-440a-b042-1d059de568f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:04 np0005603609 nova_compute[221550]: 2026-01-31 09:04:04.012 221554 DEBUG nova.compute.manager [req-b2eb41b6-ecf5-4ed5-a50a-a6fa7647abc0 req-5c40ddba-ce7b-440a-b042-1d059de568f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] No waiting events found dispatching network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:04:04 np0005603609 nova_compute[221550]: 2026-01-31 09:04:04.013 221554 WARNING nova.compute.manager [req-b2eb41b6-ecf5-4ed5-a50a-a6fa7647abc0 req-5c40ddba-ce7b-440a-b042-1d059de568f1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received unexpected event network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:04:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:04.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:04 np0005603609 nova_compute[221550]: 2026-01-31 09:04:04.592 221554 DEBUG nova.network.neutron [req-c7cc12c2-3808-4763-ae52-4efbdb4b7472 req-7a0eb7b1-7804-4fe1-880d-ec3fa2f448b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Updated VIF entry in instance network info cache for port 24bc56a6-c7f4-41dd-8977-ab50c86debd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:04:04 np0005603609 nova_compute[221550]: 2026-01-31 09:04:04.593 221554 DEBUG nova.network.neutron [req-c7cc12c2-3808-4763-ae52-4efbdb4b7472 req-7a0eb7b1-7804-4fe1-880d-ec3fa2f448b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Updating instance_info_cache with network_info: [{"id": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "address": "fa:16:3e:25:57:08", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bc56a6-c7", "ovs_interfaceid": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:04:04 np0005603609 nova_compute[221550]: 2026-01-31 09:04:04.613 221554 DEBUG oslo_concurrency.lockutils [req-c7cc12c2-3808-4763-ae52-4efbdb4b7472 req-7a0eb7b1-7804-4fe1-880d-ec3fa2f448b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:04:05 np0005603609 nova_compute[221550]: 2026-01-31 09:04:05.556 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:05 np0005603609 nova_compute[221550]: 2026-01-31 09:04:05.682 221554 INFO nova.compute.manager [None req-200dc66c-1cd1-4cf8-b3fe-88e5f8fe5be0 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Get console output#033[00m
Jan 31 04:04:05 np0005603609 nova_compute[221550]: 2026-01-31 09:04:05.688 285517 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 04:04:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:04:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:05.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:04:06 np0005603609 nova_compute[221550]: 2026-01-31 09:04:06.231 221554 DEBUG nova.compute.manager [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received event network-changed-24bc56a6-c7f4-41dd-8977-ab50c86debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:04:06 np0005603609 nova_compute[221550]: 2026-01-31 09:04:06.232 221554 DEBUG nova.compute.manager [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Refreshing instance network info cache due to event network-changed-24bc56a6-c7f4-41dd-8977-ab50c86debd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:04:06 np0005603609 nova_compute[221550]: 2026-01-31 09:04:06.233 221554 DEBUG oslo_concurrency.lockutils [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:04:06 np0005603609 nova_compute[221550]: 2026-01-31 09:04:06.233 221554 DEBUG oslo_concurrency.lockutils [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:04:06 np0005603609 nova_compute[221550]: 2026-01-31 09:04:06.233 221554 DEBUG nova.network.neutron [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Refreshing network info cache for port 24bc56a6-c7f4-41dd-8977-ab50c86debd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:04:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:04:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:06.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:04:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.507 221554 DEBUG nova.network.neutron [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Updated VIF entry in instance network info cache for port 24bc56a6-c7f4-41dd-8977-ab50c86debd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.507 221554 DEBUG nova.network.neutron [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Updating instance_info_cache with network_info: [{"id": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "address": "fa:16:3e:25:57:08", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bc56a6-c7", "ovs_interfaceid": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.526 221554 DEBUG oslo_concurrency.lockutils [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.527 221554 DEBUG nova.compute.manager [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received event network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.527 221554 DEBUG oslo_concurrency.lockutils [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.527 221554 DEBUG oslo_concurrency.lockutils [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.528 221554 DEBUG oslo_concurrency.lockutils [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.528 221554 DEBUG nova.compute.manager [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] No waiting events found dispatching network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.528 221554 WARNING nova.compute.manager [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received unexpected event network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.529 221554 DEBUG nova.compute.manager [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received event network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.529 221554 DEBUG oslo_concurrency.lockutils [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.529 221554 DEBUG oslo_concurrency.lockutils [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.529 221554 DEBUG oslo_concurrency.lockutils [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.530 221554 DEBUG nova.compute.manager [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] No waiting events found dispatching network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.530 221554 WARNING nova.compute.manager [req-ef31e7ee-f529-4443-871d-4ed8064660dc req-fa9d5572-3f52-4d29-b6c9-3f068fe20de2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received unexpected event network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:04:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:07.556 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:07.556 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:07.557 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:07 np0005603609 nova_compute[221550]: 2026-01-31 09:04:07.861 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:07.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:08.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:04:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:09.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:04:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:10.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:10 np0005603609 nova_compute[221550]: 2026-01-31 09:04:10.558 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:10 np0005603609 nova_compute[221550]: 2026-01-31 09:04:10.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:11.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:12 np0005603609 podman[316928]: 2026-01-31 09:04:12.16288198 +0000 UTC m=+0.045336189 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Jan 31 04:04:12 np0005603609 podman[316927]: 2026-01-31 09:04:12.18358138 +0000 UTC m=+0.067970406 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 04:04:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:12.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:12 np0005603609 nova_compute[221550]: 2026-01-31 09:04:12.863 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.608 221554 DEBUG nova.compute.manager [req-c0ec0c2e-1485-486c-b16d-f641dac40da2 req-7ea900d0-95eb-43ff-b974-501507cf8541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received event network-changed-24bc56a6-c7f4-41dd-8977-ab50c86debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.609 221554 DEBUG nova.compute.manager [req-c0ec0c2e-1485-486c-b16d-f641dac40da2 req-7ea900d0-95eb-43ff-b974-501507cf8541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Refreshing instance network info cache due to event network-changed-24bc56a6-c7f4-41dd-8977-ab50c86debd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.609 221554 DEBUG oslo_concurrency.lockutils [req-c0ec0c2e-1485-486c-b16d-f641dac40da2 req-7ea900d0-95eb-43ff-b974-501507cf8541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.609 221554 DEBUG oslo_concurrency.lockutils [req-c0ec0c2e-1485-486c-b16d-f641dac40da2 req-7ea900d0-95eb-43ff-b974-501507cf8541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.610 221554 DEBUG nova.network.neutron [req-c0ec0c2e-1485-486c-b16d-f641dac40da2 req-7ea900d0-95eb-43ff-b974-501507cf8541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Refreshing network info cache for port 24bc56a6-c7f4-41dd-8977-ab50c86debd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.640 221554 DEBUG oslo_concurrency.lockutils [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.641 221554 DEBUG oslo_concurrency.lockutils [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.642 221554 DEBUG oslo_concurrency.lockutils [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.642 221554 DEBUG oslo_concurrency.lockutils [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.642 221554 DEBUG oslo_concurrency.lockutils [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.644 221554 INFO nova.compute.manager [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Terminating instance#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.645 221554 DEBUG nova.compute.manager [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:04:13 np0005603609 kernel: tap24bc56a6-c7 (unregistering): left promiscuous mode
Jan 31 04:04:13 np0005603609 NetworkManager[49064]: <info>  [1769850253.7738] device (tap24bc56a6-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.780 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:13 np0005603609 ovn_controller[130359]: 2026-01-31T09:04:13Z|01004|binding|INFO|Releasing lport 24bc56a6-c7f4-41dd-8977-ab50c86debd1 from this chassis (sb_readonly=0)
Jan 31 04:04:13 np0005603609 ovn_controller[130359]: 2026-01-31T09:04:13Z|01005|binding|INFO|Setting lport 24bc56a6-c7f4-41dd-8977-ab50c86debd1 down in Southbound
Jan 31 04:04:13 np0005603609 ovn_controller[130359]: 2026-01-31T09:04:13Z|01006|binding|INFO|Removing iface tap24bc56a6-c7 ovn-installed in OVS
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.783 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:13.789 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:57:08 10.100.0.13'], port_security=['fa:16:3e:25:57:08 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '507b81ef-dd4f-409b-95ac-9e19b1ca7a71', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c7019d0b-5031-4941-b812-751bbbab3ab4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '8', 'neutron:security_group_ids': '995f3a57-15aa-4ffd-8d92-52db4b28c494', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7722569a-62de-45e9-b1af-a80aadef9b5c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=24bc56a6-c7f4-41dd-8977-ab50c86debd1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.792 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:13.793 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 24bc56a6-c7f4-41dd-8977-ab50c86debd1 in datapath c7019d0b-5031-4941-b812-751bbbab3ab4 unbound from our chassis#033[00m
Jan 31 04:04:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:13.794 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c7019d0b-5031-4941-b812-751bbbab3ab4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:04:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:13.795 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d31846-48e2-4144-8297-b063700fc389]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:13 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:13.796 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4 namespace which is not needed anymore#033[00m
Jan 31 04:04:13 np0005603609 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d000000d5.scope: Deactivated successfully.
Jan 31 04:04:13 np0005603609 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d000000d5.scope: Consumed 14.043s CPU time.
Jan 31 04:04:13 np0005603609 systemd-machined[190912]: Machine qemu-116-instance-000000d5 terminated.
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.863 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.870 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.877 221554 INFO nova.virt.libvirt.driver [-] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Instance destroyed successfully.#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.878 221554 DEBUG nova.objects.instance [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'resources' on Instance uuid 507b81ef-dd4f-409b-95ac-9e19b1ca7a71 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:04:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:13.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.899 221554 DEBUG nova.virt.libvirt.vif [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:03:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-181972363',display_name='tempest-TestNetworkBasicOps-server-181972363',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-181972363',id=213,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHyCsO9U2u28zbNQ2wDexqHXg5PlUvBASpDV198Do2yfuHVMN5dHZEY+biSdpfTk3cwlfimVuuXN+VPnmdszLZ+Z7DHy2QbDeJvL62GQkjssmr+ckCtf9HK/c8PjilH1Kw==',key_name='tempest-TestNetworkBasicOps-692812669',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:03:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-kiiznuqv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:03:22Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=507b81ef-dd4f-409b-95ac-9e19b1ca7a71,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "address": "fa:16:3e:25:57:08", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bc56a6-c7", "ovs_interfaceid": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.900 221554 DEBUG nova.network.os_vif_util [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "address": "fa:16:3e:25:57:08", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bc56a6-c7", "ovs_interfaceid": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.900 221554 DEBUG nova.network.os_vif_util [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:25:57:08,bridge_name='br-int',has_traffic_filtering=True,id=24bc56a6-c7f4-41dd-8977-ab50c86debd1,network=Network(c7019d0b-5031-4941-b812-751bbbab3ab4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24bc56a6-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.901 221554 DEBUG os_vif [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:57:08,bridge_name='br-int',has_traffic_filtering=True,id=24bc56a6-c7f4-41dd-8977-ab50c86debd1,network=Network(c7019d0b-5031-4941-b812-751bbbab3ab4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24bc56a6-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.902 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.903 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24bc56a6-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.904 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.907 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:04:13 np0005603609 neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4[316681]: [NOTICE]   (316685) : haproxy version is 2.8.14-c23fe91
Jan 31 04:04:13 np0005603609 neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4[316681]: [NOTICE]   (316685) : path to executable is /usr/sbin/haproxy
Jan 31 04:04:13 np0005603609 neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4[316681]: [WARNING]  (316685) : Exiting Master process...
Jan 31 04:04:13 np0005603609 neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4[316681]: [ALERT]    (316685) : Current worker (316687) exited with code 143 (Terminated)
Jan 31 04:04:13 np0005603609 neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4[316681]: [WARNING]  (316685) : All workers exited. Exiting... (0)
Jan 31 04:04:13 np0005603609 nova_compute[221550]: 2026-01-31 09:04:13.910 221554 INFO os_vif [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:57:08,bridge_name='br-int',has_traffic_filtering=True,id=24bc56a6-c7f4-41dd-8977-ab50c86debd1,network=Network(c7019d0b-5031-4941-b812-751bbbab3ab4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24bc56a6-c7')#033[00m
Jan 31 04:04:13 np0005603609 systemd[1]: libpod-d84da0e02a344789b2fac4fede3dcf47ea9407c55fe6eb7d8f55bc58cbafa44e.scope: Deactivated successfully.
Jan 31 04:04:13 np0005603609 podman[316997]: 2026-01-31 09:04:13.92003205 +0000 UTC m=+0.046790465 container died d84da0e02a344789b2fac4fede3dcf47ea9407c55fe6eb7d8f55bc58cbafa44e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 04:04:13 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d84da0e02a344789b2fac4fede3dcf47ea9407c55fe6eb7d8f55bc58cbafa44e-userdata-shm.mount: Deactivated successfully.
Jan 31 04:04:13 np0005603609 systemd[1]: var-lib-containers-storage-overlay-8c5569bc1269183e05e73f9d880a4406c08ec6969a1bede0bf0cf9a6a9df24ec-merged.mount: Deactivated successfully.
Jan 31 04:04:13 np0005603609 podman[316997]: 2026-01-31 09:04:13.9857777 +0000 UTC m=+0.112536085 container cleanup d84da0e02a344789b2fac4fede3dcf47ea9407c55fe6eb7d8f55bc58cbafa44e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:04:13 np0005603609 systemd[1]: libpod-conmon-d84da0e02a344789b2fac4fede3dcf47ea9407c55fe6eb7d8f55bc58cbafa44e.scope: Deactivated successfully.
Jan 31 04:04:14 np0005603609 podman[317054]: 2026-01-31 09:04:14.059187718 +0000 UTC m=+0.058165420 container remove d84da0e02a344789b2fac4fede3dcf47ea9407c55fe6eb7d8f55bc58cbafa44e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 04:04:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:14.062 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a174512d-9bc3-4ca6-a5d7-16f5df56e943]: (4, ('Sat Jan 31 09:04:13 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4 (d84da0e02a344789b2fac4fede3dcf47ea9407c55fe6eb7d8f55bc58cbafa44e)\nd84da0e02a344789b2fac4fede3dcf47ea9407c55fe6eb7d8f55bc58cbafa44e\nSat Jan 31 09:04:13 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4 (d84da0e02a344789b2fac4fede3dcf47ea9407c55fe6eb7d8f55bc58cbafa44e)\nd84da0e02a344789b2fac4fede3dcf47ea9407c55fe6eb7d8f55bc58cbafa44e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:14.064 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[107eb6d4-2e8a-4425-8744-a921a5170a19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:14.065 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7019d0b-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:04:14 np0005603609 kernel: tapc7019d0b-50: left promiscuous mode
Jan 31 04:04:14 np0005603609 nova_compute[221550]: 2026-01-31 09:04:14.066 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:14 np0005603609 nova_compute[221550]: 2026-01-31 09:04:14.071 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:14.074 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6686a0cb-f937-4a9d-b6bd-e21f27ef560c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:14.094 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8d92e4a6-2499-42dd-8a57-4423095b9798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:14.095 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[3d672b04-dbc7-42aa-b800-a33ee25c4dc3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:14.106 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[34d98e28-0f54-4b75-9ce2-a7ae47bdefd2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1044146, 'reachable_time': 15888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317069, 'error': None, 'target': 'ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:14.109 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c7019d0b-5031-4941-b812-751bbbab3ab4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:04:14 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:14.109 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[653026c4-3e0e-4621-877e-1194a98c728d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:14 np0005603609 systemd[1]: run-netns-ovnmeta\x2dc7019d0b\x2d5031\x2d4941\x2db812\x2d751bbbab3ab4.mount: Deactivated successfully.
Jan 31 04:04:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:14.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:14 np0005603609 nova_compute[221550]: 2026-01-31 09:04:14.821 221554 DEBUG nova.network.neutron [req-c0ec0c2e-1485-486c-b16d-f641dac40da2 req-7ea900d0-95eb-43ff-b974-501507cf8541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Updated VIF entry in instance network info cache for port 24bc56a6-c7f4-41dd-8977-ab50c86debd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:04:14 np0005603609 nova_compute[221550]: 2026-01-31 09:04:14.822 221554 DEBUG nova.network.neutron [req-c0ec0c2e-1485-486c-b16d-f641dac40da2 req-7ea900d0-95eb-43ff-b974-501507cf8541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Updating instance_info_cache with network_info: [{"id": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "address": "fa:16:3e:25:57:08", "network": {"id": "c7019d0b-5031-4941-b812-751bbbab3ab4", "bridge": "br-int", "label": "tempest-network-smoke--1956104466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bc56a6-c7", "ovs_interfaceid": "24bc56a6-c7f4-41dd-8977-ab50c86debd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:04:14 np0005603609 nova_compute[221550]: 2026-01-31 09:04:14.873 221554 DEBUG oslo_concurrency.lockutils [req-c0ec0c2e-1485-486c-b16d-f641dac40da2 req-7ea900d0-95eb-43ff-b974-501507cf8541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-507b81ef-dd4f-409b-95ac-9e19b1ca7a71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:04:14 np0005603609 nova_compute[221550]: 2026-01-31 09:04:14.999 221554 INFO nova.virt.libvirt.driver [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Deleting instance files /var/lib/nova/instances/507b81ef-dd4f-409b-95ac-9e19b1ca7a71_del#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.000 221554 INFO nova.virt.libvirt.driver [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Deletion of /var/lib/nova/instances/507b81ef-dd4f-409b-95ac-9e19b1ca7a71_del complete#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.061 221554 INFO nova.compute.manager [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Took 1.42 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.062 221554 DEBUG oslo.service.loopingcall [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.062 221554 DEBUG nova.compute.manager [-] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.062 221554 DEBUG nova.network.neutron [-] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.696 221554 DEBUG nova.compute.manager [req-ac2bc8a3-7323-4d91-91ad-9876903f6bbc req-a2fd21a6-cd0e-4cb5-8937-a3efda7b2e87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received event network-vif-unplugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.697 221554 DEBUG oslo_concurrency.lockutils [req-ac2bc8a3-7323-4d91-91ad-9876903f6bbc req-a2fd21a6-cd0e-4cb5-8937-a3efda7b2e87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.697 221554 DEBUG oslo_concurrency.lockutils [req-ac2bc8a3-7323-4d91-91ad-9876903f6bbc req-a2fd21a6-cd0e-4cb5-8937-a3efda7b2e87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.697 221554 DEBUG oslo_concurrency.lockutils [req-ac2bc8a3-7323-4d91-91ad-9876903f6bbc req-a2fd21a6-cd0e-4cb5-8937-a3efda7b2e87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.698 221554 DEBUG nova.compute.manager [req-ac2bc8a3-7323-4d91-91ad-9876903f6bbc req-a2fd21a6-cd0e-4cb5-8937-a3efda7b2e87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] No waiting events found dispatching network-vif-unplugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.698 221554 DEBUG nova.compute.manager [req-ac2bc8a3-7323-4d91-91ad-9876903f6bbc req-a2fd21a6-cd0e-4cb5-8937-a3efda7b2e87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received event network-vif-unplugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.698 221554 DEBUG nova.compute.manager [req-ac2bc8a3-7323-4d91-91ad-9876903f6bbc req-a2fd21a6-cd0e-4cb5-8937-a3efda7b2e87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received event network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.698 221554 DEBUG oslo_concurrency.lockutils [req-ac2bc8a3-7323-4d91-91ad-9876903f6bbc req-a2fd21a6-cd0e-4cb5-8937-a3efda7b2e87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.698 221554 DEBUG oslo_concurrency.lockutils [req-ac2bc8a3-7323-4d91-91ad-9876903f6bbc req-a2fd21a6-cd0e-4cb5-8937-a3efda7b2e87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.699 221554 DEBUG oslo_concurrency.lockutils [req-ac2bc8a3-7323-4d91-91ad-9876903f6bbc req-a2fd21a6-cd0e-4cb5-8937-a3efda7b2e87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.699 221554 DEBUG nova.compute.manager [req-ac2bc8a3-7323-4d91-91ad-9876903f6bbc req-a2fd21a6-cd0e-4cb5-8937-a3efda7b2e87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] No waiting events found dispatching network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:04:15 np0005603609 nova_compute[221550]: 2026-01-31 09:04:15.699 221554 WARNING nova.compute.manager [req-ac2bc8a3-7323-4d91-91ad-9876903f6bbc req-a2fd21a6-cd0e-4cb5-8937-a3efda7b2e87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received unexpected event network-vif-plugged-24bc56a6-c7f4-41dd-8977-ab50c86debd1 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 04:04:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:15.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:16.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:17 np0005603609 nova_compute[221550]: 2026-01-31 09:04:17.865 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:17.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:18.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:18 np0005603609 nova_compute[221550]: 2026-01-31 09:04:18.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:18 np0005603609 nova_compute[221550]: 2026-01-31 09:04:18.694 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:18 np0005603609 nova_compute[221550]: 2026-01-31 09:04:18.694 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:18 np0005603609 nova_compute[221550]: 2026-01-31 09:04:18.694 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:18 np0005603609 nova_compute[221550]: 2026-01-31 09:04:18.695 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:04:18 np0005603609 nova_compute[221550]: 2026-01-31 09:04:18.695 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:04:18 np0005603609 nova_compute[221550]: 2026-01-31 09:04:18.906 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:04:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1720794115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.094 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:04:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:19.222 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=104, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=103) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:04:19 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:19.223 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.264 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.277 221554 DEBUG nova.network.neutron [-] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.299 221554 INFO nova.compute.manager [-] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Took 4.24 seconds to deallocate network for instance.#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.318 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.319 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4199MB free_disk=20.975173950195312GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.320 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.320 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.384 221554 DEBUG oslo_concurrency.lockutils [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.395 221554 DEBUG nova.compute.manager [req-ff4d548e-f310-4f02-909e-788c72525963 req-7be6800a-f5d3-4ba3-acd6-8365785fc594 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Received event network-vif-deleted-24bc56a6-c7f4-41dd-8977-ab50c86debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.452 221554 WARNING nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 507b81ef-dd4f-409b-95ac-9e19b1ca7a71 is not being actively managed by this compute host but has allocations referencing this compute host: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. Skipping heal of allocation because we do not know what to do.#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.452 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.453 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.517 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.535 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.536 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.546 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.571 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 04:04:19 np0005603609 nova_compute[221550]: 2026-01-31 09:04:19.609 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:04:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:04:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:19.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:04:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:04:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2909094674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:04:20 np0005603609 nova_compute[221550]: 2026-01-31 09:04:20.165 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:04:20 np0005603609 nova_compute[221550]: 2026-01-31 09:04:20.171 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:04:20 np0005603609 nova_compute[221550]: 2026-01-31 09:04:20.190 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:04:20 np0005603609 nova_compute[221550]: 2026-01-31 09:04:20.224 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:04:20 np0005603609 nova_compute[221550]: 2026-01-31 09:04:20.225 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:20 np0005603609 nova_compute[221550]: 2026-01-31 09:04:20.225 221554 DEBUG oslo_concurrency.lockutils [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:20 np0005603609 nova_compute[221550]: 2026-01-31 09:04:20.232 221554 DEBUG oslo_concurrency.lockutils [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:20 np0005603609 nova_compute[221550]: 2026-01-31 09:04:20.269 221554 INFO nova.scheduler.client.report [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Deleted allocations for instance 507b81ef-dd4f-409b-95ac-9e19b1ca7a71#033[00m
Jan 31 04:04:20 np0005603609 nova_compute[221550]: 2026-01-31 09:04:20.341 221554 DEBUG oslo_concurrency.lockutils [None req-2725ad70-7c0a-4fd5-954c-0d28891475be 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "507b81ef-dd4f-409b-95ac-9e19b1ca7a71" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:04:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:20.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:04:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:04:21.224 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '104'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:04:21 np0005603609 nova_compute[221550]: 2026-01-31 09:04:21.225 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:21 np0005603609 nova_compute[221550]: 2026-01-31 09:04:21.226 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:04:21 np0005603609 nova_compute[221550]: 2026-01-31 09:04:21.226 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:04:21 np0005603609 nova_compute[221550]: 2026-01-31 09:04:21.245 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:04:21 np0005603609 nova_compute[221550]: 2026-01-31 09:04:21.246 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:21 np0005603609 nova_compute[221550]: 2026-01-31 09:04:21.246 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:04:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:21.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:04:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:22.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:04:22 np0005603609 nova_compute[221550]: 2026-01-31 09:04:22.866 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:04:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:23.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:04:23 np0005603609 nova_compute[221550]: 2026-01-31 09:04:23.958 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:24.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:25.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:26.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:27 np0005603609 nova_compute[221550]: 2026-01-31 09:04:27.675 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:27 np0005603609 nova_compute[221550]: 2026-01-31 09:04:27.868 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:27.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:28.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:28 np0005603609 nova_compute[221550]: 2026-01-31 09:04:28.876 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850253.8745165, 507b81ef-dd4f-409b-95ac-9e19b1ca7a71 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:04:28 np0005603609 nova_compute[221550]: 2026-01-31 09:04:28.876 221554 INFO nova.compute.manager [-] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:04:28 np0005603609 nova_compute[221550]: 2026-01-31 09:04:28.898 221554 DEBUG nova.compute.manager [None req-3af0e72b-0c3e-4cd1-adc4-92808e4b3f33 - - - - - -] [instance: 507b81ef-dd4f-409b-95ac-9e19b1ca7a71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:04:28 np0005603609 nova_compute[221550]: 2026-01-31 09:04:28.960 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:29 np0005603609 nova_compute[221550]: 2026-01-31 09:04:29.757 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:29.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:30.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:31.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:32.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:32 np0005603609 nova_compute[221550]: 2026-01-31 09:04:32.870 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:33 np0005603609 nova_compute[221550]: 2026-01-31 09:04:33.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:04:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:33.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:04:33 np0005603609 nova_compute[221550]: 2026-01-31 09:04:33.963 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:34.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:35.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:36.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:36 np0005603609 nova_compute[221550]: 2026-01-31 09:04:36.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:04:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:04:37 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:04:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:37.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:37 np0005603609 nova_compute[221550]: 2026-01-31 09:04:37.915 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:38.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:38 np0005603609 nova_compute[221550]: 2026-01-31 09:04:38.965 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:39.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:40.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:41.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:42.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:42 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:04:42 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:04:42 np0005603609 nova_compute[221550]: 2026-01-31 09:04:42.915 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:43 np0005603609 podman[317300]: 2026-01-31 09:04:43.160932698 +0000 UTC m=+0.049269864 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 04:04:43 np0005603609 podman[317299]: 2026-01-31 09:04:43.204636276 +0000 UTC m=+0.092810888 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:04:43 np0005603609 nova_compute[221550]: 2026-01-31 09:04:43.979 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:04:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:45.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:45 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:45.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:04:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:47 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:47.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:47.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:47 np0005603609 nova_compute[221550]: 2026-01-31 09:04:47.917 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:48 np0005603609 nova_compute[221550]: 2026-01-31 09:04:48.982 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:04:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:49.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:49 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:49.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:51.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:51.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:52 np0005603609 nova_compute[221550]: 2026-01-31 09:04:52.918 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:53.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:04:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:53.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:04:53 np0005603609 nova_compute[221550]: 2026-01-31 09:04:53.983 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:55.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:55.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:57.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:57.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:57 np0005603609 nova_compute[221550]: 2026-01-31 09:04:57.922 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:58 np0005603609 nova_compute[221550]: 2026-01-31 09:04:58.985 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:04:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:59.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:04:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:04:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:04:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:59.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:05:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:01.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:01.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:02 np0005603609 nova_compute[221550]: 2026-01-31 09:05:02.922 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:03.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:03.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:03 np0005603609 nova_compute[221550]: 2026-01-31 09:05:03.987 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:05:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:05 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:05.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:05.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:05:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:07.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:05:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:05:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:05:07 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:07.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:05:07 np0005603609 ovn_controller[130359]: 2026-01-31T09:05:07Z|01007|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Jan 31 04:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:05:07.557 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:05:07.557 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:05:07.557 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:07 np0005603609 nova_compute[221550]: 2026-01-31 09:05:07.924 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:08 np0005603609 nova_compute[221550]: 2026-01-31 09:05:08.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:09 np0005603609 nova_compute[221550]: 2026-01-31 09:05:09.028 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:09.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:09.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:10 np0005603609 nova_compute[221550]: 2026-01-31 09:05:10.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:11.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:11.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:12 np0005603609 nova_compute[221550]: 2026-01-31 09:05:12.925 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:13.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:13.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:14 np0005603609 nova_compute[221550]: 2026-01-31 09:05:14.031 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:14 np0005603609 podman[317347]: 2026-01-31 09:05:14.194858553 +0000 UTC m=+0.074538477 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Jan 31 04:05:14 np0005603609 podman[317346]: 2026-01-31 09:05:14.194849642 +0000 UTC m=+0.083335899 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 04:05:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:05:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:15.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:05:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:15.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:05:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:17.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:17 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:17.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:17 np0005603609 nova_compute[221550]: 2026-01-31 09:05:17.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:17 np0005603609 nova_compute[221550]: 2026-01-31 09:05:17.927 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:18 np0005603609 nova_compute[221550]: 2026-01-31 09:05:18.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:18 np0005603609 nova_compute[221550]: 2026-01-31 09:05:18.725 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:18 np0005603609 nova_compute[221550]: 2026-01-31 09:05:18.725 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:18 np0005603609 nova_compute[221550]: 2026-01-31 09:05:18.725 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:18 np0005603609 nova_compute[221550]: 2026-01-31 09:05:18.726 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:05:18 np0005603609 nova_compute[221550]: 2026-01-31 09:05:18.726 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:19 np0005603609 nova_compute[221550]: 2026-01-31 09:05:19.071 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:05:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3269783881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:05:19 np0005603609 nova_compute[221550]: 2026-01-31 09:05:19.116 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:19 np0005603609 nova_compute[221550]: 2026-01-31 09:05:19.241 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:05:19 np0005603609 nova_compute[221550]: 2026-01-31 09:05:19.242 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4191MB free_disk=20.942890167236328GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:05:19 np0005603609 nova_compute[221550]: 2026-01-31 09:05:19.242 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:19 np0005603609 nova_compute[221550]: 2026-01-31 09:05:19.242 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:19 np0005603609 nova_compute[221550]: 2026-01-31 09:05:19.313 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:05:19 np0005603609 nova_compute[221550]: 2026-01-31 09:05:19.314 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:05:19 np0005603609 nova_compute[221550]: 2026-01-31 09:05:19.328 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:05:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:19.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:19 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:19.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:05:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2974409001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:05:19 np0005603609 nova_compute[221550]: 2026-01-31 09:05:19.765 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:19 np0005603609 nova_compute[221550]: 2026-01-31 09:05:19.769 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:05:19 np0005603609 nova_compute[221550]: 2026-01-31 09:05:19.790 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:05:19 np0005603609 nova_compute[221550]: 2026-01-31 09:05:19.819 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:05:19 np0005603609 nova_compute[221550]: 2026-01-31 09:05:19.819 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:05:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:05:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:21.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:05:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:05:21 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:21.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:05:21 np0005603609 nova_compute[221550]: 2026-01-31 09:05:21.820 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:21 np0005603609 nova_compute[221550]: 2026-01-31 09:05:21.821 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:05:21 np0005603609 nova_compute[221550]: 2026-01-31 09:05:21.821 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:05:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:05:21.851 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=105, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=104) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:05:21 np0005603609 nova_compute[221550]: 2026-01-31 09:05:21.851 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:21 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:05:21.852 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:05:21 np0005603609 nova_compute[221550]: 2026-01-31 09:05:21.871 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:05:21 np0005603609 nova_compute[221550]: 2026-01-31 09:05:21.871 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:21 np0005603609 nova_compute[221550]: 2026-01-31 09:05:21.872 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:05:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:22 np0005603609 nova_compute[221550]: 2026-01-31 09:05:22.928 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:05:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:23.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:05:23 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:23.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:05:24 np0005603609 nova_compute[221550]: 2026-01-31 09:05:24.073 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:05:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:05:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:25.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:05:25 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:25.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:25 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:05:25.853 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '105'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:05:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:05:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:27.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:05:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:27.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:27 np0005603609 nova_compute[221550]: 2026-01-31 09:05:27.930 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #193. Immutable memtables: 0.
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:28.095137) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 193
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328095182, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2364, "num_deletes": 251, "total_data_size": 5710629, "memory_usage": 5770320, "flush_reason": "Manual Compaction"}
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #194: started
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328164653, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 194, "file_size": 3744645, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92759, "largest_seqno": 95118, "table_properties": {"data_size": 3735148, "index_size": 5990, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19595, "raw_average_key_size": 20, "raw_value_size": 3716158, "raw_average_value_size": 3866, "num_data_blocks": 261, "num_entries": 961, "num_filter_entries": 961, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850112, "oldest_key_time": 1769850112, "file_creation_time": 1769850328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 69596 microseconds, and 5876 cpu microseconds.
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:28.164723) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #194: 3744645 bytes OK
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:28.164753) [db/memtable_list.cc:519] [default] Level-0 commit table #194 started
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:28.176219) [db/memtable_list.cc:722] [default] Level-0 commit table #194: memtable #1 done
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:28.176270) EVENT_LOG_v1 {"time_micros": 1769850328176260, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:28.176298) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 5700370, prev total WAL file size 5700370, number of live WAL files 2.
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000190.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:28.177442) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [194(3656KB)], [192(12MB)]
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328177512, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [194], "files_L6": [192], "score": -1, "input_data_size": 16878218, "oldest_snapshot_seqno": -1}
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #195: 11460 keys, 14922761 bytes, temperature: kUnknown
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328373797, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 195, "file_size": 14922761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14848994, "index_size": 44022, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28677, "raw_key_size": 302459, "raw_average_key_size": 26, "raw_value_size": 14648927, "raw_average_value_size": 1278, "num_data_blocks": 1675, "num_entries": 11460, "num_filter_entries": 11460, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769850328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 195, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:28.374254) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 14922761 bytes
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:28.376739) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 85.9 rd, 76.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 12.5 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(8.5) write-amplify(4.0) OK, records in: 11982, records dropped: 522 output_compression: NoCompression
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:28.376769) EVENT_LOG_v1 {"time_micros": 1769850328376756, "job": 124, "event": "compaction_finished", "compaction_time_micros": 196407, "compaction_time_cpu_micros": 29104, "output_level": 6, "num_output_files": 1, "total_output_size": 14922761, "num_input_records": 11982, "num_output_records": 11460, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328377255, "job": 124, "event": "table_file_deletion", "file_number": 194}
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000192.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328379134, "job": 124, "event": "table_file_deletion", "file_number": 192}
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:28.177311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:28.379180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:28.379186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:28.379188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:28.379190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:28 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:28.379193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:28 np0005603609 nova_compute[221550]: 2026-01-31 09:05:28.705 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:29 np0005603609 nova_compute[221550]: 2026-01-31 09:05:29.076 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:05:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:29.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:05:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:05:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:29 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:29.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #196. Immutable memtables: 0.
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:31.006701) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 196
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850331006928, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 273, "num_deletes": 250, "total_data_size": 71449, "memory_usage": 77136, "flush_reason": "Manual Compaction"}
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #197: started
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850331011149, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 197, "file_size": 46197, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 95119, "largest_seqno": 95391, "table_properties": {"data_size": 44312, "index_size": 113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5340, "raw_average_key_size": 20, "raw_value_size": 40642, "raw_average_value_size": 153, "num_data_blocks": 5, "num_entries": 264, "num_filter_entries": 264, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850329, "oldest_key_time": 1769850329, "file_creation_time": 1769850331, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 4482 microseconds, and 837 cpu microseconds.
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:31.011193) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #197: 46197 bytes OK
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:31.011209) [db/memtable_list.cc:519] [default] Level-0 commit table #197 started
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:31.016404) [db/memtable_list.cc:722] [default] Level-0 commit table #197: memtable #1 done
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:31.016434) EVENT_LOG_v1 {"time_micros": 1769850331016426, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:31.016454) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 69372, prev total WAL file size 69372, number of live WAL files 2.
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000193.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:31.016997) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323630' seq:72057594037927935, type:22 .. '6D6772737461740033353131' seq:0, type:0; will stop at (end)
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [197(45KB)], [195(14MB)]
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850331017228, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [197], "files_L6": [195], "score": -1, "input_data_size": 14968958, "oldest_snapshot_seqno": -1}
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #198: 11216 keys, 11145575 bytes, temperature: kUnknown
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850331176233, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 198, "file_size": 11145575, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11078288, "index_size": 38118, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28101, "raw_key_size": 297579, "raw_average_key_size": 26, "raw_value_size": 10887325, "raw_average_value_size": 970, "num_data_blocks": 1428, "num_entries": 11216, "num_filter_entries": 11216, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769850331, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 198, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:31.176522) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 11145575 bytes
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:31.196673) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 94.1 rd, 70.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 14.2 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(565.3) write-amplify(241.3) OK, records in: 11724, records dropped: 508 output_compression: NoCompression
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:31.196720) EVENT_LOG_v1 {"time_micros": 1769850331196704, "job": 126, "event": "compaction_finished", "compaction_time_micros": 159051, "compaction_time_cpu_micros": 21373, "output_level": 6, "num_output_files": 1, "total_output_size": 11145575, "num_input_records": 11724, "num_output_records": 11216, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850331197119, "job": 126, "event": "table_file_deletion", "file_number": 197}
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000195.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850331198946, "job": 126, "event": "table_file_deletion", "file_number": 195}
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:31.016876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:31.199149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:31.199156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:31.199157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:31.199159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:31 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:05:31.199162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:05:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:31.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:05:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:05:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:31 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:31.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:32 np0005603609 nova_compute[221550]: 2026-01-31 09:05:32.933 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:05:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:33 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:33.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:05:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:33.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:05:34 np0005603609 nova_compute[221550]: 2026-01-31 09:05:34.078 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:05:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:35 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:35.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:35.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:35 np0005603609 nova_compute[221550]: 2026-01-31 09:05:35.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:37.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:37.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:37 np0005603609 nova_compute[221550]: 2026-01-31 09:05:37.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:37 np0005603609 nova_compute[221550]: 2026-01-31 09:05:37.933 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:39 np0005603609 nova_compute[221550]: 2026-01-31 09:05:39.080 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:39.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:39.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:41.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:41.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:42 np0005603609 podman[317607]: 2026-01-31 09:05:42.91821684 +0000 UTC m=+0.108912758 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:05:42 np0005603609 nova_compute[221550]: 2026-01-31 09:05:42.935 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:43 np0005603609 podman[317607]: 2026-01-31 09:05:43.011370316 +0000 UTC m=+0.202066204 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:05:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:05:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:43.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:05:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:43.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:44 np0005603609 nova_compute[221550]: 2026-01-31 09:05:44.082 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:05:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:05:44 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:05:45 np0005603609 podman[317862]: 2026-01-31 09:05:45.167363452 +0000 UTC m=+0.049837368 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Jan 31 04:05:45 np0005603609 podman[317861]: 2026-01-31 09:05:45.205610368 +0000 UTC m=+0.091981499 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 04:05:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:45.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:45.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:05:45 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:05:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:47.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:47.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:47 np0005603609 nova_compute[221550]: 2026-01-31 09:05:47.937 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:49 np0005603609 nova_compute[221550]: 2026-01-31 09:05:49.084 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:49.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:49.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:05:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:51.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:05:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:05:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:51.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:05:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:52 np0005603609 nova_compute[221550]: 2026-01-31 09:05:52.975 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:05:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:53.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:05:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:53.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:05:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/825411166' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:05:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:05:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/825411166' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:05:54 np0005603609 nova_compute[221550]: 2026-01-31 09:05:54.129 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:05:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:55.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:05:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:55.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:55 np0005603609 nova_compute[221550]: 2026-01-31 09:05:55.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:05:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:57.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:05:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:05:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:57 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:57.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:57 np0005603609 nova_compute[221550]: 2026-01-31 09:05:57.977 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:05:58 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:05:59 np0005603609 nova_compute[221550]: 2026-01-31 09:05:59.131 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:05:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:59.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:05:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:59 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:59.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:06:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:01.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:06:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:06:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:01 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:01.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:02 np0005603609 nova_compute[221550]: 2026-01-31 09:06:02.979 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:06:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:06:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:03.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:06:03 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:03.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:04 np0005603609 nova_compute[221550]: 2026-01-31 09:06:04.187 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:06:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:05.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:05 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:05.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:06:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:06:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:07.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:06:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:07 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:07.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:06:07.558 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:06:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:06:07.559 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:06:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:06:07.559 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:06:07 np0005603609 nova_compute[221550]: 2026-01-31 09:06:07.982 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:09 np0005603609 nova_compute[221550]: 2026-01-31 09:06:09.236 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:06:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:06:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:09.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:06:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:09 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:09.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:10 np0005603609 nova_compute[221550]: 2026-01-31 09:06:10.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:06:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:11 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:11.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:11.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:11 np0005603609 nova_compute[221550]: 2026-01-31 09:06:11.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:12 np0005603609 nova_compute[221550]: 2026-01-31 09:06:12.983 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:06:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:13.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:06:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:13.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:14 np0005603609 nova_compute[221550]: 2026-01-31 09:06:14.238 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:15.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:15.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:16 np0005603609 podman[317961]: 2026-01-31 09:06:16.167543199 +0000 UTC m=+0.051896787 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:06:16 np0005603609 podman[317960]: 2026-01-31 09:06:16.193218441 +0000 UTC m=+0.074857073 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 04:06:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:06:16.303 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=106, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=105) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:06:16 np0005603609 nova_compute[221550]: 2026-01-31 09:06:16.304 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:16 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:06:16.304 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:06:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:06:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:17.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:17 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:17.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:17 np0005603609 nova_compute[221550]: 2026-01-31 09:06:17.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:17 np0005603609 nova_compute[221550]: 2026-01-31 09:06:17.985 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:18 np0005603609 nova_compute[221550]: 2026-01-31 09:06:18.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:18 np0005603609 nova_compute[221550]: 2026-01-31 09:06:18.703 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:06:18 np0005603609 nova_compute[221550]: 2026-01-31 09:06:18.704 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:06:18 np0005603609 nova_compute[221550]: 2026-01-31 09:06:18.704 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:06:18 np0005603609 nova_compute[221550]: 2026-01-31 09:06:18.704 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:06:18 np0005603609 nova_compute[221550]: 2026-01-31 09:06:18.704 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:06:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:06:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2691059760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:06:19 np0005603609 nova_compute[221550]: 2026-01-31 09:06:19.108 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:06:19 np0005603609 nova_compute[221550]: 2026-01-31 09:06:19.240 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:19 np0005603609 nova_compute[221550]: 2026-01-31 09:06:19.245 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:06:19 np0005603609 nova_compute[221550]: 2026-01-31 09:06:19.246 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4174MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:06:19 np0005603609 nova_compute[221550]: 2026-01-31 09:06:19.246 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:06:19 np0005603609 nova_compute[221550]: 2026-01-31 09:06:19.246 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:06:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:19.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:19.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:19 np0005603609 nova_compute[221550]: 2026-01-31 09:06:19.475 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:06:19 np0005603609 nova_compute[221550]: 2026-01-31 09:06:19.476 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:06:19 np0005603609 nova_compute[221550]: 2026-01-31 09:06:19.905 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:06:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:06:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/69904375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:06:20 np0005603609 nova_compute[221550]: 2026-01-31 09:06:20.332 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:06:20 np0005603609 nova_compute[221550]: 2026-01-31 09:06:20.336 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:06:20 np0005603609 nova_compute[221550]: 2026-01-31 09:06:20.370 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:06:20 np0005603609 nova_compute[221550]: 2026-01-31 09:06:20.371 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:06:20 np0005603609 nova_compute[221550]: 2026-01-31 09:06:20.372 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:06:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:06:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:06:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:21.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:06:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:21 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:21.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:22 np0005603609 nova_compute[221550]: 2026-01-31 09:06:22.372 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:22 np0005603609 nova_compute[221550]: 2026-01-31 09:06:22.373 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:06:22 np0005603609 nova_compute[221550]: 2026-01-31 09:06:22.373 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:06:22 np0005603609 nova_compute[221550]: 2026-01-31 09:06:22.395 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:06:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:22 np0005603609 nova_compute[221550]: 2026-01-31 09:06:22.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:22 np0005603609 nova_compute[221550]: 2026-01-31 09:06:22.658 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:06:22 np0005603609 nova_compute[221550]: 2026-01-31 09:06:22.987 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:06:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:23.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:06:23 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:23.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:06:24 np0005603609 nova_compute[221550]: 2026-01-31 09:06:24.303 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:25.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:06:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:25 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:25.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:26 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:06:26.306 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '106'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:06:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:06:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:27.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:06:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:27.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:27 np0005603609 nova_compute[221550]: 2026-01-31 09:06:27.988 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:29 np0005603609 nova_compute[221550]: 2026-01-31 09:06:29.305 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:29.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:06:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:29.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:06:29 np0005603609 nova_compute[221550]: 2026-01-31 09:06:29.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:06:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 66K writes, 258K keys, 66K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.04 MB/s#012Cumulative WAL: 66K writes, 24K syncs, 2.70 writes per sync, written: 0.25 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2663 writes, 9401 keys, 2663 commit groups, 1.0 writes per commit group, ingest: 8.75 MB, 0.01 MB/s#012Interval WAL: 2663 writes, 1081 syncs, 2.46 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 04:06:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:06:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:31.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:06:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:31.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:32 np0005603609 nova_compute[221550]: 2026-01-31 09:06:32.991 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:33.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:33.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:34 np0005603609 nova_compute[221550]: 2026-01-31 09:06:34.308 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:35.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:35.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:37.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:37.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:37 np0005603609 nova_compute[221550]: 2026-01-31 09:06:37.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:37 np0005603609 nova_compute[221550]: 2026-01-31 09:06:37.992 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:38 np0005603609 nova_compute[221550]: 2026-01-31 09:06:38.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:39 np0005603609 nova_compute[221550]: 2026-01-31 09:06:39.310 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:39.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:39.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:41.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:41.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:42 np0005603609 nova_compute[221550]: 2026-01-31 09:06:42.994 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:06:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:43.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:06:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:43.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:44 np0005603609 nova_compute[221550]: 2026-01-31 09:06:44.311 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:45.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:06:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:45.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:06:47 np0005603609 podman[318050]: 2026-01-31 09:06:47.164957283 +0000 UTC m=+0.049638812 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:06:47 np0005603609 podman[318049]: 2026-01-31 09:06:47.216356857 +0000 UTC m=+0.102157304 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 04:06:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:06:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:47.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:06:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:06:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:47.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:06:47 np0005603609 nova_compute[221550]: 2026-01-31 09:06:47.997 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:49 np0005603609 nova_compute[221550]: 2026-01-31 09:06:49.316 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:06:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:49.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:06:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:49.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:51.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:06:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:51.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:06:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:53 np0005603609 nova_compute[221550]: 2026-01-31 09:06:53.000 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:06:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/77891392' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:06:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:06:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/77891392' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:06:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:06:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:53.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:06:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:53.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:54 np0005603609 nova_compute[221550]: 2026-01-31 09:06:54.318 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:55.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:06:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:55 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:55.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:06:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:57.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:57 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:57.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:58 np0005603609 nova_compute[221550]: 2026-01-31 09:06:58.003 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:59 np0005603609 nova_compute[221550]: 2026-01-31 09:06:59.319 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:06:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:06:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:06:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:59.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:06:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:06:59 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:59.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:07:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:07:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:07:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:07:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:07:00 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:07:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:07:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:01.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:07:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:01.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:07:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.3 total, 600.0 interval#012Cumulative writes: 19K writes, 96K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s#012Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.19 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1524 writes, 7439 keys, 1524 commit groups, 1.0 writes per commit group, ingest: 15.39 MB, 0.03 MB/s#012Interval WAL: 1524 writes, 1524 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     44.4      2.67              0.27        63    0.042       0      0       0.0       0.0#012  L6      1/0   10.63 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.5     77.9     66.9      9.80              1.51        62    0.158    507K    33K       0.0       0.0#012 Sum      1/0   10.63 MB   0.0      0.7     0.1      0.6       0.8      0.1       0.0   6.5     61.2     62.1     12.47              1.78       125    0.100    507K    33K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4     78.0     76.1      1.10              0.17        12    0.091     69K   3064       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0     77.9     66.9      9.80              1.51        62    0.158    507K    33K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     45.0      2.64              0.27        62    0.043       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7200.3 total, 600.0 interval#012Flush(GB): cumulative 0.116, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.76 GB write, 0.11 MB/s write, 0.75 GB read, 0.11 MB/s read, 12.5 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619ad8ed1f0#2 capacity: 304.00 MB usage: 81.50 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000648 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4677,77.92 MB,25.6316%) FilterBlock(125,1.37 MB,0.449587%) IndexBlock(125,2.21 MB,0.727809%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 04:07:03 np0005603609 nova_compute[221550]: 2026-01-31 09:07:03.003 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e415 e415: 3 total, 3 up, 3 in
Jan 31 04:07:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:03.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:03.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:04 np0005603609 nova_compute[221550]: 2026-01-31 09:07:04.322 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:05.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:05.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e416 e416: 3 total, 3 up, 3 in
Jan 31 04:07:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:07.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:07:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:07.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:07:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:07.559 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:07.560 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:07.560 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:08 np0005603609 nova_compute[221550]: 2026-01-31 09:07:08.005 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e417 e417: 3 total, 3 up, 3 in
Jan 31 04:07:09 np0005603609 nova_compute[221550]: 2026-01-31 09:07:09.324 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:07:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:09.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:07:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:09.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:07:10 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:07:10 np0005603609 nova_compute[221550]: 2026-01-31 09:07:10.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:07:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:11.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:11 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:11.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:12 np0005603609 nova_compute[221550]: 2026-01-31 09:07:12.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:13 np0005603609 nova_compute[221550]: 2026-01-31 09:07:13.007 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:13.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:13.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:14 np0005603609 nova_compute[221550]: 2026-01-31 09:07:14.326 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:07:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:15.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:07:15 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:15.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:07:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e418 e418: 3 total, 3 up, 3 in
Jan 31 04:07:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:07:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:17 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:17.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:17.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:18 np0005603609 nova_compute[221550]: 2026-01-31 09:07:18.009 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:18 np0005603609 podman[318274]: 2026-01-31 09:07:18.162585516 +0000 UTC m=+0.046787873 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 04:07:18 np0005603609 podman[318273]: 2026-01-31 09:07:18.209682947 +0000 UTC m=+0.095145994 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:07:18 np0005603609 nova_compute[221550]: 2026-01-31 09:07:18.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:19 np0005603609 nova_compute[221550]: 2026-01-31 09:07:19.371 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:07:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:19.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:07:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:07:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:19 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:19.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:20 np0005603609 nova_compute[221550]: 2026-01-31 09:07:20.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:20 np0005603609 nova_compute[221550]: 2026-01-31 09:07:20.716 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:20 np0005603609 nova_compute[221550]: 2026-01-31 09:07:20.717 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:20 np0005603609 nova_compute[221550]: 2026-01-31 09:07:20.717 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:20 np0005603609 nova_compute[221550]: 2026-01-31 09:07:20.717 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:07:20 np0005603609 nova_compute[221550]: 2026-01-31 09:07:20.717 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:07:21 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3941672233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:07:21 np0005603609 nova_compute[221550]: 2026-01-31 09:07:21.137 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:21 np0005603609 nova_compute[221550]: 2026-01-31 09:07:21.275 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:07:21 np0005603609 nova_compute[221550]: 2026-01-31 09:07:21.277 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4170MB free_disk=20.953540802001953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:07:21 np0005603609 nova_compute[221550]: 2026-01-31 09:07:21.277 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:21 np0005603609 nova_compute[221550]: 2026-01-31 09:07:21.277 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:21.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:21.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:21 np0005603609 nova_compute[221550]: 2026-01-31 09:07:21.572 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:07:21 np0005603609 nova_compute[221550]: 2026-01-31 09:07:21.573 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:07:21 np0005603609 nova_compute[221550]: 2026-01-31 09:07:21.617 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:21 np0005603609 nova_compute[221550]: 2026-01-31 09:07:21.933 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:21 np0005603609 nova_compute[221550]: 2026-01-31 09:07:21.934 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:21 np0005603609 nova_compute[221550]: 2026-01-31 09:07:21.934 221554 INFO nova.compute.manager [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Unshelving#033[00m
Jan 31 04:07:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:07:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1871789765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:07:22 np0005603609 nova_compute[221550]: 2026-01-31 09:07:22.041 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:22 np0005603609 nova_compute[221550]: 2026-01-31 09:07:22.045 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:07:22 np0005603609 nova_compute[221550]: 2026-01-31 09:07:22.159 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:22 np0005603609 nova_compute[221550]: 2026-01-31 09:07:22.224 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:07:22 np0005603609 nova_compute[221550]: 2026-01-31 09:07:22.226 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:07:22 np0005603609 nova_compute[221550]: 2026-01-31 09:07:22.227 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:22 np0005603609 nova_compute[221550]: 2026-01-31 09:07:22.227 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:22 np0005603609 nova_compute[221550]: 2026-01-31 09:07:22.232 221554 DEBUG nova.objects.instance [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'pci_requests' on Instance uuid 1fb94592-4c46-41d2-990b-7d5d8d1a7fce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:07:22 np0005603609 nova_compute[221550]: 2026-01-31 09:07:22.296 221554 DEBUG nova.objects.instance [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'numa_topology' on Instance uuid 1fb94592-4c46-41d2-990b-7d5d8d1a7fce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:07:22 np0005603609 nova_compute[221550]: 2026-01-31 09:07:22.346 221554 DEBUG nova.virt.hardware [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:07:22 np0005603609 nova_compute[221550]: 2026-01-31 09:07:22.347 221554 INFO nova.compute.claims [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 04:07:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:22 np0005603609 nova_compute[221550]: 2026-01-31 09:07:22.678 221554 DEBUG oslo_concurrency.processutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:23 np0005603609 nova_compute[221550]: 2026-01-31 09:07:23.010 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:07:23 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3951575014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:07:23 np0005603609 nova_compute[221550]: 2026-01-31 09:07:23.096 221554 DEBUG oslo_concurrency.processutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:23 np0005603609 nova_compute[221550]: 2026-01-31 09:07:23.100 221554 DEBUG nova.compute.provider_tree [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:07:23 np0005603609 nova_compute[221550]: 2026-01-31 09:07:23.116 221554 DEBUG nova.scheduler.client.report [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:07:23 np0005603609 nova_compute[221550]: 2026-01-31 09:07:23.136 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:23 np0005603609 nova_compute[221550]: 2026-01-31 09:07:23.510 221554 INFO nova.network.neutron [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updating port 32656689-8c91-4c26-8aea-d5aaac071876 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 31 04:07:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:07:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:07:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:23.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:07:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:23 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:23.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:24 np0005603609 nova_compute[221550]: 2026-01-31 09:07:24.168 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:07:24 np0005603609 nova_compute[221550]: 2026-01-31 09:07:24.168 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquired lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:07:24 np0005603609 nova_compute[221550]: 2026-01-31 09:07:24.169 221554 DEBUG nova.network.neutron [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:07:24 np0005603609 nova_compute[221550]: 2026-01-31 09:07:24.227 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:24 np0005603609 nova_compute[221550]: 2026-01-31 09:07:24.227 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:07:24 np0005603609 nova_compute[221550]: 2026-01-31 09:07:24.228 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:07:24 np0005603609 nova_compute[221550]: 2026-01-31 09:07:24.244 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:07:24 np0005603609 nova_compute[221550]: 2026-01-31 09:07:24.255 221554 DEBUG nova.compute.manager [req-a7bc0563-dbd6-461c-ba0f-89c1060cf234 req-fb405180-d5b3-464b-88fe-29cb28a9f895 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received event network-changed-32656689-8c91-4c26-8aea-d5aaac071876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:07:24 np0005603609 nova_compute[221550]: 2026-01-31 09:07:24.256 221554 DEBUG nova.compute.manager [req-a7bc0563-dbd6-461c-ba0f-89c1060cf234 req-fb405180-d5b3-464b-88fe-29cb28a9f895 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Refreshing instance network info cache due to event network-changed-32656689-8c91-4c26-8aea-d5aaac071876. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:07:24 np0005603609 nova_compute[221550]: 2026-01-31 09:07:24.256 221554 DEBUG oslo_concurrency.lockutils [req-a7bc0563-dbd6-461c-ba0f-89c1060cf234 req-fb405180-d5b3-464b-88fe-29cb28a9f895 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:07:24 np0005603609 nova_compute[221550]: 2026-01-31 09:07:24.373 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:25.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:25.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:25 np0005603609 nova_compute[221550]: 2026-01-31 09:07:25.563 221554 DEBUG nova.network.neutron [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updating instance_info_cache with network_info: [{"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:07:25 np0005603609 nova_compute[221550]: 2026-01-31 09:07:25.581 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Releasing lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:07:25 np0005603609 nova_compute[221550]: 2026-01-31 09:07:25.583 221554 DEBUG nova.virt.libvirt.driver [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 04:07:25 np0005603609 nova_compute[221550]: 2026-01-31 09:07:25.584 221554 INFO nova.virt.libvirt.driver [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Creating image(s)#033[00m
Jan 31 04:07:25 np0005603609 nova_compute[221550]: 2026-01-31 09:07:25.609 221554 DEBUG nova.storage.rbd_utils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:07:25 np0005603609 nova_compute[221550]: 2026-01-31 09:07:25.613 221554 DEBUG nova.objects.instance [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1fb94592-4c46-41d2-990b-7d5d8d1a7fce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:07:25 np0005603609 nova_compute[221550]: 2026-01-31 09:07:25.615 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquired lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:07:25 np0005603609 nova_compute[221550]: 2026-01-31 09:07:25.615 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 04:07:25 np0005603609 nova_compute[221550]: 2026-01-31 09:07:25.615 221554 DEBUG nova.objects.instance [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1fb94592-4c46-41d2-990b-7d5d8d1a7fce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:07:25 np0005603609 nova_compute[221550]: 2026-01-31 09:07:25.656 221554 DEBUG nova.storage.rbd_utils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:07:25 np0005603609 nova_compute[221550]: 2026-01-31 09:07:25.684 221554 DEBUG nova.storage.rbd_utils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:07:25 np0005603609 nova_compute[221550]: 2026-01-31 09:07:25.687 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "56da76d137cbe2a259f8d3613eac84071d007003" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:25 np0005603609 nova_compute[221550]: 2026-01-31 09:07:25.688 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "56da76d137cbe2a259f8d3613eac84071d007003" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:25 np0005603609 nova_compute[221550]: 2026-01-31 09:07:25.975 221554 DEBUG nova.virt.libvirt.imagebackend [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/f336042a-a032-4a9a-8976-bb911c951591/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/f336042a-a032-4a9a-8976-bb911c951591/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.032 221554 DEBUG nova.virt.libvirt.imagebackend [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Selected location: {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/f336042a-a032-4a9a-8976-bb911c951591/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.032 221554 DEBUG nova.storage.rbd_utils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] cloning images/f336042a-a032-4a9a-8976-bb911c951591@snap to None/1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.170 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "56da76d137cbe2a259f8d3613eac84071d007003" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.326 221554 DEBUG nova.objects.instance [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'migration_context' on Instance uuid 1fb94592-4c46-41d2-990b-7d5d8d1a7fce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.404 221554 DEBUG nova.storage.rbd_utils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] flattening vms/1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.726 221554 DEBUG nova.virt.libvirt.driver [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Image rbd:vms/1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.727 221554 DEBUG nova.virt.libvirt.driver [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.727 221554 DEBUG nova.virt.libvirt.driver [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Ensure instance console log exists: /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.727 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.727 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.728 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.729 221554 DEBUG nova.virt.libvirt.driver [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Start _get_guest_xml network_info=[{"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T09:06:58Z,direct_url=<?>,disk_format='raw',id=f336042a-a032-4a9a-8976-bb911c951591,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-863769904-shelved',owner='496e06c7521f45c994e6426c4313acea',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T09:07:11Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.732 221554 WARNING nova.virt.libvirt.driver [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.736 221554 DEBUG nova.virt.libvirt.host [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.736 221554 DEBUG nova.virt.libvirt.host [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.739 221554 DEBUG nova.virt.libvirt.host [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.739 221554 DEBUG nova.virt.libvirt.host [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.740 221554 DEBUG nova.virt.libvirt.driver [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.741 221554 DEBUG nova.virt.hardware [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T09:06:58Z,direct_url=<?>,disk_format='raw',id=f336042a-a032-4a9a-8976-bb911c951591,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-863769904-shelved',owner='496e06c7521f45c994e6426c4313acea',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T09:07:11Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.741 221554 DEBUG nova.virt.hardware [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.741 221554 DEBUG nova.virt.hardware [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.741 221554 DEBUG nova.virt.hardware [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.741 221554 DEBUG nova.virt.hardware [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.741 221554 DEBUG nova.virt.hardware [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.742 221554 DEBUG nova.virt.hardware [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.742 221554 DEBUG nova.virt.hardware [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.742 221554 DEBUG nova.virt.hardware [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.742 221554 DEBUG nova.virt.hardware [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.742 221554 DEBUG nova.virt.hardware [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.742 221554 DEBUG nova.objects.instance [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1fb94592-4c46-41d2-990b-7d5d8d1a7fce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 04:07:26 np0005603609 nova_compute[221550]: 2026-01-31 09:07:26.760 221554 DEBUG oslo_concurrency.processutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.109 221554 DEBUG nova.network.neutron [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updating instance_info_cache with network_info: [{"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.127 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Releasing lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.127 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.127 221554 DEBUG oslo_concurrency.lockutils [req-a7bc0563-dbd6-461c-ba0f-89c1060cf234 req-fb405180-d5b3-464b-88fe-29cb28a9f895 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.128 221554 DEBUG nova.network.neutron [req-a7bc0563-dbd6-461c-ba0f-89c1060cf234 req-fb405180-d5b3-464b-88fe-29cb28a9f895 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Refreshing network info cache for port 32656689-8c91-4c26-8aea-d5aaac071876 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.129 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:07:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:07:27 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1621740039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.152 221554 DEBUG oslo_concurrency.processutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.181 221554 DEBUG nova.storage.rbd_utils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.185 221554 DEBUG oslo_concurrency.processutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.222 221554 WARNING nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.222 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Triggering sync for uuid 1fb94592-4c46-41d2-990b-7d5d8d1a7fce _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.223 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.223 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.224 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 04:07:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:27.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:27.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:07:27 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/993917815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.601 221554 DEBUG oslo_concurrency.processutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.602 221554 DEBUG nova.virt.libvirt.vif [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T09:06:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-863769904',display_name='tempest-TestShelveInstance-server-863769904',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-863769904',id=216,image_ref='f336042a-a032-4a9a-8976-bb911c951591',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1892807111',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:06:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='496e06c7521f45c994e6426c4313acea',ramdisk_id='',reservation_id='r-wmbqilfc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1485729988',owner_user_name='tempest-TestShelveInstance-1485729988-project-member',shelved_at='2026-01-31T09:07:12.102703',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='f336042a-a032-4a9a-8976-bb911c951591'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:07:22Z,user_data=None,user_id='4883c0d4a7f54a6898eba5bfdbb41266',uuid=1fb94592-4c46-41d2-990b-7d5d8d1a7fce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.603 221554 DEBUG nova.network.os_vif_util [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converting VIF {"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.604 221554 DEBUG nova.network.os_vif_util [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:2a:d0,bridge_name='br-int',has_traffic_filtering=True,id=32656689-8c91-4c26-8aea-d5aaac071876,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32656689-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.605 221554 DEBUG nova.objects.instance [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'pci_devices' on Instance uuid 1fb94592-4c46-41d2-990b-7d5d8d1a7fce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.620 221554 DEBUG nova.virt.libvirt.driver [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  <uuid>1fb94592-4c46-41d2-990b-7d5d8d1a7fce</uuid>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  <name>instance-000000d8</name>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestShelveInstance-server-863769904</nova:name>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 09:07:26</nova:creationTime>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        <nova:user uuid="4883c0d4a7f54a6898eba5bfdbb41266">tempest-TestShelveInstance-1485729988-project-member</nova:user>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        <nova:project uuid="496e06c7521f45c994e6426c4313acea">tempest-TestShelveInstance-1485729988</nova:project>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <nova:root type="image" uuid="f336042a-a032-4a9a-8976-bb911c951591"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        <nova:port uuid="32656689-8c91-4c26-8aea-d5aaac071876">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <system>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <entry name="serial">1fb94592-4c46-41d2-990b-7d5d8d1a7fce</entry>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <entry name="uuid">1fb94592-4c46-41d2-990b-7d5d8d1a7fce</entry>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    </system>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  <os>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  </os>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  <features>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  </features>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  </clock>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  <devices>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      </source>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      </auth>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    </disk>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk.config">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      </source>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      </auth>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    </disk>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:3d:2a:d0"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <target dev="tap32656689-8c"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    </interface>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce/console.log" append="off"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    </serial>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <video>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    </video>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <input type="keyboard" bus="usb"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    </rng>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 04:07:27 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 04:07:27 np0005603609 nova_compute[221550]:  </devices>
Jan 31 04:07:27 np0005603609 nova_compute[221550]: </domain>
Jan 31 04:07:27 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.621 221554 DEBUG nova.compute.manager [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Preparing to wait for external event network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.622 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.622 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.623 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.623 221554 DEBUG nova.virt.libvirt.vif [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T09:06:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-863769904',display_name='tempest-TestShelveInstance-server-863769904',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-863769904',id=216,image_ref='f336042a-a032-4a9a-8976-bb911c951591',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1892807111',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:06:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='496e06c7521f45c994e6426c4313acea',ramdisk_id='',reservation_id='r-wmbqilfc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='vir
tio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1485729988',owner_user_name='tempest-TestShelveInstance-1485729988-project-member',shelved_at='2026-01-31T09:07:12.102703',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='f336042a-a032-4a9a-8976-bb911c951591'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:07:22Z,user_data=None,user_id='4883c0d4a7f54a6898eba5bfdbb41266',uuid=1fb94592-4c46-41d2-990b-7d5d8d1a7fce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.623 221554 DEBUG nova.network.os_vif_util [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converting VIF {"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.624 221554 DEBUG nova.network.os_vif_util [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:2a:d0,bridge_name='br-int',has_traffic_filtering=True,id=32656689-8c91-4c26-8aea-d5aaac071876,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32656689-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.624 221554 DEBUG os_vif [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:2a:d0,bridge_name='br-int',has_traffic_filtering=True,id=32656689-8c91-4c26-8aea-d5aaac071876,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32656689-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.625 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.625 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.626 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.629 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.629 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32656689-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.630 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap32656689-8c, col_values=(('external_ids', {'iface-id': '32656689-8c91-4c26-8aea-d5aaac071876', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:2a:d0', 'vm-uuid': '1fb94592-4c46-41d2-990b-7d5d8d1a7fce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.631 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:27 np0005603609 NetworkManager[49064]: <info>  [1769850447.6328] manager: (tap32656689-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/471)
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.634 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.637 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.637 221554 INFO os_vif [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:2a:d0,bridge_name='br-int',has_traffic_filtering=True,id=32656689-8c91-4c26-8aea-d5aaac071876,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32656689-8c')#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.725 221554 DEBUG nova.virt.libvirt.driver [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.725 221554 DEBUG nova.virt.libvirt.driver [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.726 221554 DEBUG nova.virt.libvirt.driver [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] No VIF found with MAC fa:16:3e:3d:2a:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.726 221554 INFO nova.virt.libvirt.driver [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Using config drive#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.751 221554 DEBUG nova.storage.rbd_utils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.778 221554 DEBUG nova.objects.instance [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1fb94592-4c46-41d2-990b-7d5d8d1a7fce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:07:27 np0005603609 nova_compute[221550]: 2026-01-31 09:07:27.838 221554 DEBUG nova.objects.instance [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'keypairs' on Instance uuid 1fb94592-4c46-41d2-990b-7d5d8d1a7fce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.049 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.212 221554 INFO nova.virt.libvirt.driver [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Creating config drive at /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce/disk.config#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.215 221554 DEBUG oslo_concurrency.processutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpt4pq7mh4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.342 221554 DEBUG oslo_concurrency.processutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpt4pq7mh4" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.369 221554 DEBUG nova.storage.rbd_utils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.373 221554 DEBUG oslo_concurrency.processutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce/disk.config 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.468 221554 DEBUG nova.network.neutron [req-a7bc0563-dbd6-461c-ba0f-89c1060cf234 req-fb405180-d5b3-464b-88fe-29cb28a9f895 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updated VIF entry in instance network info cache for port 32656689-8c91-4c26-8aea-d5aaac071876. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.469 221554 DEBUG nova.network.neutron [req-a7bc0563-dbd6-461c-ba0f-89c1060cf234 req-fb405180-d5b3-464b-88fe-29cb28a9f895 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updating instance_info_cache with network_info: [{"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.486 221554 DEBUG oslo_concurrency.lockutils [req-a7bc0563-dbd6-461c-ba0f-89c1060cf234 req-fb405180-d5b3-464b-88fe-29cb28a9f895 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.540 221554 DEBUG oslo_concurrency.processutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce/disk.config 1fb94592-4c46-41d2-990b-7d5d8d1a7fce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.541 221554 INFO nova.virt.libvirt.driver [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Deleting local config drive /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce/disk.config because it was imported into RBD.#033[00m
Jan 31 04:07:28 np0005603609 kernel: tap32656689-8c: entered promiscuous mode
Jan 31 04:07:28 np0005603609 NetworkManager[49064]: <info>  [1769850448.5935] manager: (tap32656689-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/472)
Jan 31 04:07:28 np0005603609 ovn_controller[130359]: 2026-01-31T09:07:28Z|01008|binding|INFO|Claiming lport 32656689-8c91-4c26-8aea-d5aaac071876 for this chassis.
Jan 31 04:07:28 np0005603609 ovn_controller[130359]: 2026-01-31T09:07:28Z|01009|binding|INFO|32656689-8c91-4c26-8aea-d5aaac071876: Claiming fa:16:3e:3d:2a:d0 10.100.0.12
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.594 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.599 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.606 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:28 np0005603609 NetworkManager[49064]: <info>  [1769850448.6070] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/473)
Jan 31 04:07:28 np0005603609 NetworkManager[49064]: <info>  [1769850448.6082] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/474)
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.610 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:2a:d0 10.100.0.12'], port_security=['fa:16:3e:3d:2a:d0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1fb94592-4c46-41d2-990b-7d5d8d1a7fce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '496e06c7521f45c994e6426c4313acea', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'e909f648-e644-4ddc-8790-deca52a25b73', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62ab62d6-b781-4309-8775-90b43197869a, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=32656689-8c91-4c26-8aea-d5aaac071876) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.612 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 32656689-8c91-4c26-8aea-d5aaac071876 in datapath 1d742d07-ac6a-4870-9712-15a33a8a1e71 bound to our chassis#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.613 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1d742d07-ac6a-4870-9712-15a33a8a1e71#033[00m
Jan 31 04:07:28 np0005603609 systemd-udevd[318727]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.622 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4492538a-2fc2-414a-a3d2-b1f6adc0ff32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.624 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1d742d07-a1 in ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.627 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1d742d07-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.627 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[01540eda-9a73-4523-bf49-e94702bf0739]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.628 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[95c965f8-1920-4d87-b3c0-fc5c4af72059]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:28 np0005603609 NetworkManager[49064]: <info>  [1769850448.6314] device (tap32656689-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:07:28 np0005603609 NetworkManager[49064]: <info>  [1769850448.6322] device (tap32656689-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:07:28 np0005603609 systemd-machined[190912]: New machine qemu-117-instance-000000d8.
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.635 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.637 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.639 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.637 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[3230af61-80ce-483c-8e20-f7deada2e5fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:28 np0005603609 systemd[1]: Started Virtual Machine qemu-117-instance-000000d8.
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.649 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:28 np0005603609 ovn_controller[130359]: 2026-01-31T09:07:28Z|01010|binding|INFO|Setting lport 32656689-8c91-4c26-8aea-d5aaac071876 ovn-installed in OVS
Jan 31 04:07:28 np0005603609 ovn_controller[130359]: 2026-01-31T09:07:28Z|01011|binding|INFO|Setting lport 32656689-8c91-4c26-8aea-d5aaac071876 up in Southbound
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.652 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.662 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6eeed270-7e80-4444-9bed-239b0c3b4e98]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.682 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[8ac7819b-1815-4904-9881-ef476c148f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.687 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b898a8c5-b884-48a7-acfa-1a28d2675fa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:28 np0005603609 NetworkManager[49064]: <info>  [1769850448.6881] manager: (tap1d742d07-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/475)
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.709 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f3950bb0-b963-4f8e-9029-6d2294ce8b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.712 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[f87f0aa8-459c-4bc6-b401-f86f646475a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:28 np0005603609 NetworkManager[49064]: <info>  [1769850448.7292] device (tap1d742d07-a0): carrier: link connected
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.733 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[cb891b1b-d957-4d16-83f3-c517242becfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.744 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e147f61d-7798-4cf4-8571-f45942c89f5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d742d07-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:3d:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 308], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1068842, 'reachable_time': 22856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318763, 'error': None, 'target': 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.755 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[85b1750d-1112-47c0-bd56-53ee99e4593f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:3d77'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1068842, 'tstamp': 1068842}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318764, 'error': None, 'target': 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.772 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9e6391-3908-4698-a4e4-8c63a22cb8f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d742d07-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:3d:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 308], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1068842, 'reachable_time': 22856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318765, 'error': None, 'target': 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.797 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2618d5-103f-4782-95b8-14dd22a91bbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.841 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bef878b1-f230-4dc9-aa2b-5811321617cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.843 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d742d07-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.843 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.843 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d742d07-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:28 np0005603609 kernel: tap1d742d07-a0: entered promiscuous mode
Jan 31 04:07:28 np0005603609 NetworkManager[49064]: <info>  [1769850448.8458] manager: (tap1d742d07-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/476)
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.845 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.853 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1d742d07-a0, col_values=(('external_ids', {'iface-id': 'ff7f72f7-b69e-4d38-bd70-12b9fe05b593'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:28 np0005603609 ovn_controller[130359]: 2026-01-31T09:07:28Z|01012|binding|INFO|Releasing lport ff7f72f7-b69e-4d38-bd70-12b9fe05b593 from this chassis (sb_readonly=0)
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.856 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1d742d07-ac6a-4870-9712-15a33a8a1e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1d742d07-ac6a-4870-9712-15a33a8a1e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.857 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8c6f5d17-9afb-4f45-bf7d-4aaf4ee36876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.857 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-1d742d07-ac6a-4870-9712-15a33a8a1e71
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/1d742d07-ac6a-4870-9712-15a33a8a1e71.pid.haproxy
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 1d742d07-ac6a-4870-9712-15a33a8a1e71
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:07:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:28.858 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'env', 'PROCESS_TAG=haproxy-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1d742d07-ac6a-4870-9712-15a33a8a1e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.859 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.864 221554 DEBUG nova.compute.manager [req-42886733-9b25-4cfb-aace-68c630c7dcad req-78a640a2-1860-4da2-a237-3931983dd783 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received event network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.865 221554 DEBUG oslo_concurrency.lockutils [req-42886733-9b25-4cfb-aace-68c630c7dcad req-78a640a2-1860-4da2-a237-3931983dd783 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.865 221554 DEBUG oslo_concurrency.lockutils [req-42886733-9b25-4cfb-aace-68c630c7dcad req-78a640a2-1860-4da2-a237-3931983dd783 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.865 221554 DEBUG oslo_concurrency.lockutils [req-42886733-9b25-4cfb-aace-68c630c7dcad req-78a640a2-1860-4da2-a237-3931983dd783 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.866 221554 DEBUG nova.compute.manager [req-42886733-9b25-4cfb-aace-68c630c7dcad req-78a640a2-1860-4da2-a237-3931983dd783 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Processing event network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.977 221554 DEBUG nova.compute.manager [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.978 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850448.97724, 1fb94592-4c46-41d2-990b-7d5d8d1a7fce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.979 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] VM Started (Lifecycle Event)#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.984 221554 DEBUG nova.virt.libvirt.driver [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:07:28 np0005603609 nova_compute[221550]: 2026-01-31 09:07:28.988 221554 INFO nova.virt.libvirt.driver [-] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Instance spawned successfully.#033[00m
Jan 31 04:07:29 np0005603609 nova_compute[221550]: 2026-01-31 09:07:29.002 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:07:29 np0005603609 nova_compute[221550]: 2026-01-31 09:07:29.005 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:07:29 np0005603609 nova_compute[221550]: 2026-01-31 09:07:29.023 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:07:29 np0005603609 nova_compute[221550]: 2026-01-31 09:07:29.024 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850448.978297, 1fb94592-4c46-41d2-990b-7d5d8d1a7fce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:07:29 np0005603609 nova_compute[221550]: 2026-01-31 09:07:29.024 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:07:29 np0005603609 nova_compute[221550]: 2026-01-31 09:07:29.082 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:07:29 np0005603609 nova_compute[221550]: 2026-01-31 09:07:29.088 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850448.9827387, 1fb94592-4c46-41d2-990b-7d5d8d1a7fce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:07:29 np0005603609 nova_compute[221550]: 2026-01-31 09:07:29.088 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:07:29 np0005603609 nova_compute[221550]: 2026-01-31 09:07:29.123 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:07:29 np0005603609 nova_compute[221550]: 2026-01-31 09:07:29.130 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:07:29 np0005603609 nova_compute[221550]: 2026-01-31 09:07:29.175 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:07:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:29.190 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=107, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=106) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:07:29 np0005603609 podman[318837]: 2026-01-31 09:07:29.206753043 +0000 UTC m=+0.048229218 container create 1db2333324fe2ae03abcbd922a24d92e653757877da68d7ec38dd2e5bf9893ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 04:07:29 np0005603609 nova_compute[221550]: 2026-01-31 09:07:29.237 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:29 np0005603609 systemd[1]: Started libpod-conmon-1db2333324fe2ae03abcbd922a24d92e653757877da68d7ec38dd2e5bf9893ef.scope.
Jan 31 04:07:29 np0005603609 podman[318837]: 2026-01-31 09:07:29.182512367 +0000 UTC m=+0.023988572 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:07:29 np0005603609 systemd[1]: Started libcrun container.
Jan 31 04:07:29 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69904dfb9938716790ef58d4af1072d82602fa7e334d02b87b7d82a6591b33de/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:07:29 np0005603609 podman[318837]: 2026-01-31 09:07:29.293833652 +0000 UTC m=+0.135309857 container init 1db2333324fe2ae03abcbd922a24d92e653757877da68d7ec38dd2e5bf9893ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 04:07:29 np0005603609 podman[318837]: 2026-01-31 09:07:29.299303394 +0000 UTC m=+0.140779569 container start 1db2333324fe2ae03abcbd922a24d92e653757877da68d7ec38dd2e5bf9893ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:07:29 np0005603609 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[318850]: [NOTICE]   (318854) : New worker (318856) forked
Jan 31 04:07:29 np0005603609 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[318850]: [NOTICE]   (318854) : Loading success.
Jan 31 04:07:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:29.348 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:07:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:07:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:29.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:07:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:07:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:29.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:07:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e419 e419: 3 total, 3 up, 3 in
Jan 31 04:07:30 np0005603609 nova_compute[221550]: 2026-01-31 09:07:30.283 221554 DEBUG nova.compute.manager [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:07:30 np0005603609 nova_compute[221550]: 2026-01-31 09:07:30.356 221554 DEBUG oslo_concurrency.lockutils [None req-a01483e3-7a57-4f78-856f-5ecb6c36ecf5 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 8.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:30 np0005603609 nova_compute[221550]: 2026-01-31 09:07:30.358 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 3.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:30 np0005603609 nova_compute[221550]: 2026-01-31 09:07:30.358 221554 INFO nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:07:30 np0005603609 nova_compute[221550]: 2026-01-31 09:07:30.358 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:30 np0005603609 nova_compute[221550]: 2026-01-31 09:07:30.997 221554 DEBUG nova.compute.manager [req-eb48b852-bc90-45bb-a3c2-092ea00751e5 req-376ce82f-cb7b-43c5-b377-799684b34704 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received event network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:07:30 np0005603609 nova_compute[221550]: 2026-01-31 09:07:30.998 221554 DEBUG oslo_concurrency.lockutils [req-eb48b852-bc90-45bb-a3c2-092ea00751e5 req-376ce82f-cb7b-43c5-b377-799684b34704 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:30 np0005603609 nova_compute[221550]: 2026-01-31 09:07:30.998 221554 DEBUG oslo_concurrency.lockutils [req-eb48b852-bc90-45bb-a3c2-092ea00751e5 req-376ce82f-cb7b-43c5-b377-799684b34704 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:30 np0005603609 nova_compute[221550]: 2026-01-31 09:07:30.998 221554 DEBUG oslo_concurrency.lockutils [req-eb48b852-bc90-45bb-a3c2-092ea00751e5 req-376ce82f-cb7b-43c5-b377-799684b34704 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:30 np0005603609 nova_compute[221550]: 2026-01-31 09:07:30.999 221554 DEBUG nova.compute.manager [req-eb48b852-bc90-45bb-a3c2-092ea00751e5 req-376ce82f-cb7b-43c5-b377-799684b34704 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] No waiting events found dispatching network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:07:30 np0005603609 nova_compute[221550]: 2026-01-31 09:07:30.999 221554 WARNING nova.compute.manager [req-eb48b852-bc90-45bb-a3c2-092ea00751e5 req-376ce82f-cb7b-43c5-b377-799684b34704 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received unexpected event network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:07:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:07:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:31 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:31.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:31.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:32 np0005603609 nova_compute[221550]: 2026-01-31 09:07:32.633 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:33 np0005603609 nova_compute[221550]: 2026-01-31 09:07:33.051 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:33.350 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '107'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:33.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:07:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:07:33 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:33.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:07:34 np0005603609 nova_compute[221550]: 2026-01-31 09:07:34.650 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:07:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:07:35 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:35.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:07:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:35.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 e420: 3 total, 3 up, 3 in
Jan 31 04:07:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:07:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:37.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:07:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:07:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:37 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:37.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:37 np0005603609 nova_compute[221550]: 2026-01-31 09:07:37.636 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:38 np0005603609 nova_compute[221550]: 2026-01-31 09:07:38.053 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:07:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:07:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:39.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:07:39 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:39.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:39 np0005603609 nova_compute[221550]: 2026-01-31 09:07:39.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:39 np0005603609 nova_compute[221550]: 2026-01-31 09:07:39.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:39 np0005603609 nova_compute[221550]: 2026-01-31 09:07:39.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:39 np0005603609 nova_compute[221550]: 2026-01-31 09:07:39.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 04:07:41 np0005603609 ovn_controller[130359]: 2026-01-31T09:07:41Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:2a:d0 10.100.0.12
Jan 31 04:07:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:07:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:07:41 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:41.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:07:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:41.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:42 np0005603609 nova_compute[221550]: 2026-01-31 09:07:42.639 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:43 np0005603609 nova_compute[221550]: 2026-01-31 09:07:43.572 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:07:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:43.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:07:43 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:43.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:07:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:45.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:07:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:45 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:45.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:07:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:47.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:07:47 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:47.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:07:47 np0005603609 nova_compute[221550]: 2026-01-31 09:07:47.643 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:47 np0005603609 nova_compute[221550]: 2026-01-31 09:07:47.693 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:47 np0005603609 nova_compute[221550]: 2026-01-31 09:07:47.693 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 04:07:47 np0005603609 nova_compute[221550]: 2026-01-31 09:07:47.777 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 04:07:48 np0005603609 nova_compute[221550]: 2026-01-31 09:07:48.573 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:49 np0005603609 podman[318870]: 2026-01-31 09:07:49.181772904 +0000 UTC m=+0.065959128 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:07:49 np0005603609 podman[318869]: 2026-01-31 09:07:49.21670509 +0000 UTC m=+0.102170635 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:07:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:07:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:49.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:07:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:07:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:49.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:07:50 np0005603609 nova_compute[221550]: 2026-01-31 09:07:50.627 221554 DEBUG nova.compute.manager [req-2741e34d-df0d-4079-86da-655ea340a459 req-a58bc0b4-9dc9-46ed-be4b-156edb352069 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received event network-changed-32656689-8c91-4c26-8aea-d5aaac071876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:07:50 np0005603609 nova_compute[221550]: 2026-01-31 09:07:50.627 221554 DEBUG nova.compute.manager [req-2741e34d-df0d-4079-86da-655ea340a459 req-a58bc0b4-9dc9-46ed-be4b-156edb352069 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Refreshing instance network info cache due to event network-changed-32656689-8c91-4c26-8aea-d5aaac071876. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:07:50 np0005603609 nova_compute[221550]: 2026-01-31 09:07:50.628 221554 DEBUG oslo_concurrency.lockutils [req-2741e34d-df0d-4079-86da-655ea340a459 req-a58bc0b4-9dc9-46ed-be4b-156edb352069 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:07:50 np0005603609 nova_compute[221550]: 2026-01-31 09:07:50.628 221554 DEBUG oslo_concurrency.lockutils [req-2741e34d-df0d-4079-86da-655ea340a459 req-a58bc0b4-9dc9-46ed-be4b-156edb352069 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:07:50 np0005603609 nova_compute[221550]: 2026-01-31 09:07:50.628 221554 DEBUG nova.network.neutron [req-2741e34d-df0d-4079-86da-655ea340a459 req-a58bc0b4-9dc9-46ed-be4b-156edb352069 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Refreshing network info cache for port 32656689-8c91-4c26-8aea-d5aaac071876 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:07:51 np0005603609 nova_compute[221550]: 2026-01-31 09:07:51.126 221554 DEBUG oslo_concurrency.lockutils [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:51 np0005603609 nova_compute[221550]: 2026-01-31 09:07:51.127 221554 DEBUG oslo_concurrency.lockutils [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:51 np0005603609 nova_compute[221550]: 2026-01-31 09:07:51.127 221554 DEBUG oslo_concurrency.lockutils [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:51 np0005603609 nova_compute[221550]: 2026-01-31 09:07:51.127 221554 DEBUG oslo_concurrency.lockutils [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:51 np0005603609 nova_compute[221550]: 2026-01-31 09:07:51.127 221554 DEBUG oslo_concurrency.lockutils [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:51 np0005603609 nova_compute[221550]: 2026-01-31 09:07:51.128 221554 INFO nova.compute.manager [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Terminating instance#033[00m
Jan 31 04:07:51 np0005603609 nova_compute[221550]: 2026-01-31 09:07:51.129 221554 DEBUG nova.compute.manager [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:07:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:51.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:07:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:51.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:07:52 np0005603609 kernel: tap32656689-8c (unregistering): left promiscuous mode
Jan 31 04:07:52 np0005603609 NetworkManager[49064]: <info>  [1769850472.0545] device (tap32656689-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:07:52 np0005603609 ovn_controller[130359]: 2026-01-31T09:07:52Z|01013|binding|INFO|Releasing lport 32656689-8c91-4c26-8aea-d5aaac071876 from this chassis (sb_readonly=0)
Jan 31 04:07:52 np0005603609 ovn_controller[130359]: 2026-01-31T09:07:52Z|01014|binding|INFO|Setting lport 32656689-8c91-4c26-8aea-d5aaac071876 down in Southbound
Jan 31 04:07:52 np0005603609 ovn_controller[130359]: 2026-01-31T09:07:52Z|01015|binding|INFO|Removing iface tap32656689-8c ovn-installed in OVS
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.058 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.069 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603609 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d000000d8.scope: Deactivated successfully.
Jan 31 04:07:52 np0005603609 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d000000d8.scope: Consumed 13.052s CPU time.
Jan 31 04:07:52 np0005603609 systemd-machined[190912]: Machine qemu-117-instance-000000d8 terminated.
Jan 31 04:07:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:52.144 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:2a:d0 10.100.0.12'], port_security=['fa:16:3e:3d:2a:d0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1fb94592-4c46-41d2-990b-7d5d8d1a7fce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '496e06c7521f45c994e6426c4313acea', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'e909f648-e644-4ddc-8790-deca52a25b73', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62ab62d6-b781-4309-8775-90b43197869a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=32656689-8c91-4c26-8aea-d5aaac071876) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:07:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:52.145 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 32656689-8c91-4c26-8aea-d5aaac071876 in datapath 1d742d07-ac6a-4870-9712-15a33a8a1e71 unbound from our chassis#033[00m
Jan 31 04:07:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:52.146 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1d742d07-ac6a-4870-9712-15a33a8a1e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.147 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:52.147 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1e67037e-ce20-4856-9e54-47b4b6bd6497]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:52.148 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 namespace which is not needed anymore#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.151 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.172 221554 INFO nova.virt.libvirt.driver [-] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Instance destroyed successfully.#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.173 221554 DEBUG nova.objects.instance [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'resources' on Instance uuid 1fb94592-4c46-41d2-990b-7d5d8d1a7fce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.349 221554 DEBUG nova.virt.libvirt.vif [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T09:06:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-863769904',display_name='tempest-TestShelveInstance-server-863769904',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-863769904',id=216,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO97h7ZlFklCcOMhjjvfpHATJVRthvPNErjo8kSCbjC05jkmSen4F2QOXVYh+JGW6zHXigZL/fAl9BFzIpJo57WrH+bNNyDWYa77kRt0fmFgaUAwVEYhYw4aHgZ+3T7mqQ==',key_name='tempest-TestShelveInstance-1892807111',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:07:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='496e06c7521f45c994e6426c4313acea',ramdisk_id='',reservation_id='r-wmbqilfc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1485729988',owner_user_name='tempest-TestShelveInstance-1485729988-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:07:30Z,user_data=None,user_id='4883c0d4a7f54a6898eba5bfdbb41266',uuid=1fb94592-4c46-41d2-990b-7d5d8d1a7fce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.350 221554 DEBUG nova.network.os_vif_util [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converting VIF {"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.350 221554 DEBUG nova.network.os_vif_util [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:2a:d0,bridge_name='br-int',has_traffic_filtering=True,id=32656689-8c91-4c26-8aea-d5aaac071876,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32656689-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.351 221554 DEBUG os_vif [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:2a:d0,bridge_name='br-int',has_traffic_filtering=True,id=32656689-8c91-4c26-8aea-d5aaac071876,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32656689-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.352 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.353 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32656689-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.354 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.357 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.359 221554 INFO os_vif [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:2a:d0,bridge_name='br-int',has_traffic_filtering=True,id=32656689-8c91-4c26-8aea-d5aaac071876,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32656689-8c')#033[00m
Jan 31 04:07:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:52 np0005603609 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[318850]: [NOTICE]   (318854) : haproxy version is 2.8.14-c23fe91
Jan 31 04:07:52 np0005603609 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[318850]: [NOTICE]   (318854) : path to executable is /usr/sbin/haproxy
Jan 31 04:07:52 np0005603609 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[318850]: [WARNING]  (318854) : Exiting Master process...
Jan 31 04:07:52 np0005603609 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[318850]: [WARNING]  (318854) : Exiting Master process...
Jan 31 04:07:52 np0005603609 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[318850]: [ALERT]    (318854) : Current worker (318856) exited with code 143 (Terminated)
Jan 31 04:07:52 np0005603609 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[318850]: [WARNING]  (318854) : All workers exited. Exiting... (0)
Jan 31 04:07:52 np0005603609 systemd[1]: libpod-1db2333324fe2ae03abcbd922a24d92e653757877da68d7ec38dd2e5bf9893ef.scope: Deactivated successfully.
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.886 221554 DEBUG nova.compute.manager [req-d2528966-05c4-41d7-8195-c2c9fa3c6dd2 req-d8938dd0-3b1c-446f-858c-03fc7a3efc05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received event network-vif-unplugged-32656689-8c91-4c26-8aea-d5aaac071876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.886 221554 DEBUG oslo_concurrency.lockutils [req-d2528966-05c4-41d7-8195-c2c9fa3c6dd2 req-d8938dd0-3b1c-446f-858c-03fc7a3efc05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.887 221554 DEBUG oslo_concurrency.lockutils [req-d2528966-05c4-41d7-8195-c2c9fa3c6dd2 req-d8938dd0-3b1c-446f-858c-03fc7a3efc05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.887 221554 DEBUG oslo_concurrency.lockutils [req-d2528966-05c4-41d7-8195-c2c9fa3c6dd2 req-d8938dd0-3b1c-446f-858c-03fc7a3efc05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.887 221554 DEBUG nova.compute.manager [req-d2528966-05c4-41d7-8195-c2c9fa3c6dd2 req-d8938dd0-3b1c-446f-858c-03fc7a3efc05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] No waiting events found dispatching network-vif-unplugged-32656689-8c91-4c26-8aea-d5aaac071876 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:07:52 np0005603609 nova_compute[221550]: 2026-01-31 09:07:52.887 221554 DEBUG nova.compute.manager [req-d2528966-05c4-41d7-8195-c2c9fa3c6dd2 req-d8938dd0-3b1c-446f-858c-03fc7a3efc05 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received event network-vif-unplugged-32656689-8c91-4c26-8aea-d5aaac071876 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:07:52 np0005603609 podman[318951]: 2026-01-31 09:07:52.888059284 +0000 UTC m=+0.669554872 container died 1db2333324fe2ae03abcbd922a24d92e653757877da68d7ec38dd2e5bf9893ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:07:53 np0005603609 nova_compute[221550]: 2026-01-31 09:07:53.151 221554 DEBUG nova.network.neutron [req-2741e34d-df0d-4079-86da-655ea340a459 req-a58bc0b4-9dc9-46ed-be4b-156edb352069 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updated VIF entry in instance network info cache for port 32656689-8c91-4c26-8aea-d5aaac071876. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:07:53 np0005603609 nova_compute[221550]: 2026-01-31 09:07:53.152 221554 DEBUG nova.network.neutron [req-2741e34d-df0d-4079-86da-655ea340a459 req-a58bc0b4-9dc9-46ed-be4b-156edb352069 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updating instance_info_cache with network_info: [{"id": "32656689-8c91-4c26-8aea-d5aaac071876", "address": "fa:16:3e:3d:2a:d0", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32656689-8c", "ovs_interfaceid": "32656689-8c91-4c26-8aea-d5aaac071876", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:07:53 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1db2333324fe2ae03abcbd922a24d92e653757877da68d7ec38dd2e5bf9893ef-userdata-shm.mount: Deactivated successfully.
Jan 31 04:07:53 np0005603609 systemd[1]: var-lib-containers-storage-overlay-69904dfb9938716790ef58d4af1072d82602fa7e334d02b87b7d82a6591b33de-merged.mount: Deactivated successfully.
Jan 31 04:07:53 np0005603609 podman[318951]: 2026-01-31 09:07:53.397845846 +0000 UTC m=+1.179341444 container cleanup 1db2333324fe2ae03abcbd922a24d92e653757877da68d7ec38dd2e5bf9893ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:07:53 np0005603609 systemd[1]: libpod-conmon-1db2333324fe2ae03abcbd922a24d92e653757877da68d7ec38dd2e5bf9893ef.scope: Deactivated successfully.
Jan 31 04:07:53 np0005603609 nova_compute[221550]: 2026-01-31 09:07:53.409 221554 DEBUG oslo_concurrency.lockutils [req-2741e34d-df0d-4079-86da-655ea340a459 req-a58bc0b4-9dc9-46ed-be4b-156edb352069 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-1fb94592-4c46-41d2-990b-7d5d8d1a7fce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:07:53 np0005603609 nova_compute[221550]: 2026-01-31 09:07:53.575 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:07:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:07:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:07:53 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:53.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:07:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:53.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:07:53 np0005603609 podman[319002]: 2026-01-31 09:07:53.625777823 +0000 UTC m=+0.209972334 container remove 1db2333324fe2ae03abcbd922a24d92e653757877da68d7ec38dd2e5bf9893ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:07:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:53.629 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e2d46c-28d0-46a6-8cdb-cf1bc0375bd8]: (4, ('Sat Jan 31 09:07:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 (1db2333324fe2ae03abcbd922a24d92e653757877da68d7ec38dd2e5bf9893ef)\n1db2333324fe2ae03abcbd922a24d92e653757877da68d7ec38dd2e5bf9893ef\nSat Jan 31 09:07:53 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 (1db2333324fe2ae03abcbd922a24d92e653757877da68d7ec38dd2e5bf9893ef)\n1db2333324fe2ae03abcbd922a24d92e653757877da68d7ec38dd2e5bf9893ef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:53.631 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8c9527ed-57d4-4e56-8cc5-4ad0afc33342]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:53.632 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d742d07-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:53 np0005603609 nova_compute[221550]: 2026-01-31 09:07:53.634 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:53 np0005603609 kernel: tap1d742d07-a0: left promiscuous mode
Jan 31 04:07:53 np0005603609 nova_compute[221550]: 2026-01-31 09:07:53.641 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:53.644 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d2095c-ea50-48a4-8de7-26444b173a77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:53.657 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[efa845a4-a222-4a17-8b82-d57d60e45f89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:53.659 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1684f225-7d05-49bd-8c18-26f2c56835cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:53.677 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[14862429-1961-4dd9-b8cb-43a1bc3b7a86]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1068837, 'reachable_time': 41664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319018, 'error': None, 'target': 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:53 np0005603609 systemd[1]: run-netns-ovnmeta\x2d1d742d07\x2dac6a\x2d4870\x2d9712\x2d15a33a8a1e71.mount: Deactivated successfully.
Jan 31 04:07:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:53.680 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:07:53 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:07:53.680 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8f8ad3-5417-4321-b5d4-762a3d749dba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:54 np0005603609 nova_compute[221550]: 2026-01-31 09:07:54.952 221554 DEBUG nova.compute.manager [req-6b000306-e6c2-42d5-b1b9-e25643ff1e98 req-651004c3-4f9f-4837-b2db-5f725fd8291c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received event network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:07:54 np0005603609 nova_compute[221550]: 2026-01-31 09:07:54.952 221554 DEBUG oslo_concurrency.lockutils [req-6b000306-e6c2-42d5-b1b9-e25643ff1e98 req-651004c3-4f9f-4837-b2db-5f725fd8291c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:54 np0005603609 nova_compute[221550]: 2026-01-31 09:07:54.954 221554 DEBUG oslo_concurrency.lockutils [req-6b000306-e6c2-42d5-b1b9-e25643ff1e98 req-651004c3-4f9f-4837-b2db-5f725fd8291c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:54 np0005603609 nova_compute[221550]: 2026-01-31 09:07:54.954 221554 DEBUG oslo_concurrency.lockutils [req-6b000306-e6c2-42d5-b1b9-e25643ff1e98 req-651004c3-4f9f-4837-b2db-5f725fd8291c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:54 np0005603609 nova_compute[221550]: 2026-01-31 09:07:54.954 221554 DEBUG nova.compute.manager [req-6b000306-e6c2-42d5-b1b9-e25643ff1e98 req-651004c3-4f9f-4837-b2db-5f725fd8291c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] No waiting events found dispatching network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:07:54 np0005603609 nova_compute[221550]: 2026-01-31 09:07:54.954 221554 WARNING nova.compute.manager [req-6b000306-e6c2-42d5-b1b9-e25643ff1e98 req-651004c3-4f9f-4837-b2db-5f725fd8291c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received unexpected event network-vif-plugged-32656689-8c91-4c26-8aea-d5aaac071876 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 04:07:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:07:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:55.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:07:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:07:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:55.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:07:56 np0005603609 nova_compute[221550]: 2026-01-31 09:07:56.581 221554 INFO nova.virt.libvirt.driver [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Deleting instance files /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce_del#033[00m
Jan 31 04:07:56 np0005603609 nova_compute[221550]: 2026-01-31 09:07:56.582 221554 INFO nova.virt.libvirt.driver [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Deletion of /var/lib/nova/instances/1fb94592-4c46-41d2-990b-7d5d8d1a7fce_del complete#033[00m
Jan 31 04:07:56 np0005603609 nova_compute[221550]: 2026-01-31 09:07:56.775 221554 INFO nova.compute.manager [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Took 5.65 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:07:56 np0005603609 nova_compute[221550]: 2026-01-31 09:07:56.775 221554 DEBUG oslo.service.loopingcall [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:07:56 np0005603609 nova_compute[221550]: 2026-01-31 09:07:56.776 221554 DEBUG nova.compute.manager [-] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:07:56 np0005603609 nova_compute[221550]: 2026-01-31 09:07:56.776 221554 DEBUG nova.network.neutron [-] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:07:57 np0005603609 nova_compute[221550]: 2026-01-31 09:07:57.409 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:07:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:57.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:07:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:07:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:57 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:57.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:58 np0005603609 nova_compute[221550]: 2026-01-31 09:07:58.529 221554 DEBUG nova.network.neutron [-] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:07:58 np0005603609 nova_compute[221550]: 2026-01-31 09:07:58.576 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:58 np0005603609 nova_compute[221550]: 2026-01-31 09:07:58.668 221554 DEBUG nova.compute.manager [req-c8e5c47d-bd6c-4285-ba0e-d9195bd1c001 req-36b4ffd1-c00c-4ecf-b896-4fba2eaa7f4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Received event network-vif-deleted-32656689-8c91-4c26-8aea-d5aaac071876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:07:58 np0005603609 nova_compute[221550]: 2026-01-31 09:07:58.669 221554 INFO nova.compute.manager [req-c8e5c47d-bd6c-4285-ba0e-d9195bd1c001 req-36b4ffd1-c00c-4ecf-b896-4fba2eaa7f4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Neutron deleted interface 32656689-8c91-4c26-8aea-d5aaac071876; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 04:07:58 np0005603609 nova_compute[221550]: 2026-01-31 09:07:58.669 221554 DEBUG nova.network.neutron [req-c8e5c47d-bd6c-4285-ba0e-d9195bd1c001 req-36b4ffd1-c00c-4ecf-b896-4fba2eaa7f4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:07:58 np0005603609 nova_compute[221550]: 2026-01-31 09:07:58.772 221554 INFO nova.compute.manager [-] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Took 2.00 seconds to deallocate network for instance.#033[00m
Jan 31 04:07:58 np0005603609 nova_compute[221550]: 2026-01-31 09:07:58.788 221554 DEBUG nova.compute.manager [req-c8e5c47d-bd6c-4285-ba0e-d9195bd1c001 req-36b4ffd1-c00c-4ecf-b896-4fba2eaa7f4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Detach interface failed, port_id=32656689-8c91-4c26-8aea-d5aaac071876, reason: Instance 1fb94592-4c46-41d2-990b-7d5d8d1a7fce could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 04:07:59 np0005603609 nova_compute[221550]: 2026-01-31 09:07:59.028 221554 DEBUG oslo_concurrency.lockutils [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:59 np0005603609 nova_compute[221550]: 2026-01-31 09:07:59.029 221554 DEBUG oslo_concurrency.lockutils [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:59 np0005603609 nova_compute[221550]: 2026-01-31 09:07:59.081 221554 DEBUG oslo_concurrency.processutils [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:07:59 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2616368524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:07:59 np0005603609 nova_compute[221550]: 2026-01-31 09:07:59.482 221554 DEBUG oslo_concurrency.processutils [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:59 np0005603609 nova_compute[221550]: 2026-01-31 09:07:59.488 221554 DEBUG nova.compute.provider_tree [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:07:59 np0005603609 nova_compute[221550]: 2026-01-31 09:07:59.553 221554 DEBUG nova.scheduler.client.report [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:07:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:59.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:07:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:59.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:59 np0005603609 nova_compute[221550]: 2026-01-31 09:07:59.839 221554 DEBUG oslo_concurrency.lockutils [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:59 np0005603609 nova_compute[221550]: 2026-01-31 09:07:59.895 221554 INFO nova.scheduler.client.report [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Deleted allocations for instance 1fb94592-4c46-41d2-990b-7d5d8d1a7fce#033[00m
Jan 31 04:08:00 np0005603609 nova_compute[221550]: 2026-01-31 09:08:00.225 221554 DEBUG oslo_concurrency.lockutils [None req-2e0d9385-0f3f-4590-be9f-529bc447da64 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "1fb94592-4c46-41d2-990b-7d5d8d1a7fce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:00 np0005603609 nova_compute[221550]: 2026-01-31 09:08:00.738 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:08:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:01.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:08:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:01.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:02 np0005603609 nova_compute[221550]: 2026-01-31 09:08:02.411 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:03 np0005603609 nova_compute[221550]: 2026-01-31 09:08:03.578 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:03.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:03 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:03.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:05.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:08:05 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:05.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:08:07 np0005603609 nova_compute[221550]: 2026-01-31 09:08:07.169 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850472.168513, 1fb94592-4c46-41d2-990b-7d5d8d1a7fce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:08:07 np0005603609 nova_compute[221550]: 2026-01-31 09:08:07.170 221554 INFO nova.compute.manager [-] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:08:07 np0005603609 nova_compute[221550]: 2026-01-31 09:08:07.199 221554 DEBUG nova.compute.manager [None req-78302ab7-8738-4cc7-8225-c19385b34048 - - - - - -] [instance: 1fb94592-4c46-41d2-990b-7d5d8d1a7fce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:08:07 np0005603609 nova_compute[221550]: 2026-01-31 09:08:07.441 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:07.561 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:07.562 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:07.562 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:08:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:07.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:08:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:08:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:07.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:08:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:08 np0005603609 nova_compute[221550]: 2026-01-31 09:08:08.581 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:09.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:09.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:10 np0005603609 nova_compute[221550]: 2026-01-31 09:08:10.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:08:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:11.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:08:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:11 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:11.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:08:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:08:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:08:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:08:12 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:08:12 np0005603609 nova_compute[221550]: 2026-01-31 09:08:12.444 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:13 np0005603609 nova_compute[221550]: 2026-01-31 09:08:13.582 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:13.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:13 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:13.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:13 np0005603609 nova_compute[221550]: 2026-01-31 09:08:13.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:08:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:08:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:15.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:08:15 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:15.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:08:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:08:16 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:08:17 np0005603609 nova_compute[221550]: 2026-01-31 09:08:17.446 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:08:17 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:17.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:08:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:08:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:17.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:08:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:18 np0005603609 nova_compute[221550]: 2026-01-31 09:08:18.584 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:08:19 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:19.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:19.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:08:20 np0005603609 podman[319225]: 2026-01-31 09:08:20.161833837 +0000 UTC m=+0.046425565 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 04:08:20 np0005603609 podman[319224]: 2026-01-31 09:08:20.186483183 +0000 UTC m=+0.071092442 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:08:20 np0005603609 nova_compute[221550]: 2026-01-31 09:08:20.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:20 np0005603609 nova_compute[221550]: 2026-01-31 09:08:20.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:20 np0005603609 nova_compute[221550]: 2026-01-31 09:08:20.682 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:20 np0005603609 nova_compute[221550]: 2026-01-31 09:08:20.682 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:20 np0005603609 nova_compute[221550]: 2026-01-31 09:08:20.682 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:20 np0005603609 nova_compute[221550]: 2026-01-31 09:08:20.683 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:08:20 np0005603609 nova_compute[221550]: 2026-01-31 09:08:20.683 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1243239800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.132 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.261 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.263 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4144MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.263 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.263 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.347 221554 INFO nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Instance 9a579d21-79af-418a-a1d6-756329428431 has allocations against this compute host but is not found in the database.#033[00m
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.348 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.348 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.382 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #199. Immutable memtables: 0.
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:08:21.466669) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 199
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501466738, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 1985, "num_deletes": 257, "total_data_size": 4792307, "memory_usage": 4855160, "flush_reason": "Manual Compaction"}
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #200: started
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.486 221554 DEBUG oslo_concurrency.lockutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "9a579d21-79af-418a-a1d6-756329428431" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.486 221554 DEBUG oslo_concurrency.lockutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.501 221554 DEBUG nova.compute.manager [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501550262, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 200, "file_size": 3117258, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 95396, "largest_seqno": 97376, "table_properties": {"data_size": 3109048, "index_size": 5023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16910, "raw_average_key_size": 20, "raw_value_size": 3092580, "raw_average_value_size": 3681, "num_data_blocks": 220, "num_entries": 840, "num_filter_entries": 840, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850332, "oldest_key_time": 1769850332, "file_creation_time": 1769850501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 83675 microseconds, and 4757 cpu microseconds.
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.568 221554 DEBUG oslo_concurrency.lockutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:08:21.550359) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #200: 3117258 bytes OK
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:08:21.550381) [db/memtable_list.cc:519] [default] Level-0 commit table #200 started
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:08:21.601931) [db/memtable_list.cc:722] [default] Level-0 commit table #200: memtable #1 done
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:08:21.602019) EVENT_LOG_v1 {"time_micros": 1769850501602007, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:08:21.602047) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 4783459, prev total WAL file size 4783459, number of live WAL files 2.
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000196.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:08:21.602926) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373730' seq:72057594037927935, type:22 .. '6C6F676D0034303231' seq:0, type:0; will stop at (end)
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [200(3044KB)], [198(10MB)]
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501603003, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [200], "files_L6": [198], "score": -1, "input_data_size": 14262833, "oldest_snapshot_seqno": -1}
Jan 31 04:08:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:21.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:08:21 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:21.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:08:21 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3430568507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #201: 11523 keys, 14131268 bytes, temperature: kUnknown
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501770789, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 201, "file_size": 14131268, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14058682, "index_size": 42649, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28869, "raw_key_size": 304940, "raw_average_key_size": 26, "raw_value_size": 13859088, "raw_average_value_size": 1202, "num_data_blocks": 1619, "num_entries": 11523, "num_filter_entries": 11523, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769850501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 201, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:08:21.771253) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 14131268 bytes
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:08:21.773025) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 85.0 rd, 84.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.6 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(9.1) write-amplify(4.5) OK, records in: 12056, records dropped: 533 output_compression: NoCompression
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:08:21.773042) EVENT_LOG_v1 {"time_micros": 1769850501773035, "job": 128, "event": "compaction_finished", "compaction_time_micros": 167862, "compaction_time_cpu_micros": 24896, "output_level": 6, "num_output_files": 1, "total_output_size": 14131268, "num_input_records": 12056, "num_output_records": 11523, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501773528, "job": 128, "event": "table_file_deletion", "file_number": 200}
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000198.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501774654, "job": 128, "event": "table_file_deletion", "file_number": 198}
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:08:21.602830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:08:21.774726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:08:21.774730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:08:21.774732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:08:21.774734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:08:21 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:08:21.774735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.785 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:08:21 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.790 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.806 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:08:21 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.834 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.835 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.835 221554 DEBUG oslo_concurrency.lockutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.840 221554 DEBUG nova.virt.hardware [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.841 221554 INFO nova.compute.claims [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 04:08:21 np0005603609 nova_compute[221550]: 2026-01-31 09:08:21.932 221554 DEBUG oslo_concurrency.processutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:08:22 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 31 04:08:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:08:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3031132125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.379 221554 DEBUG oslo_concurrency.processutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.385 221554 DEBUG nova.compute.provider_tree [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.400 221554 DEBUG nova.scheduler.client.report [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.422 221554 DEBUG oslo_concurrency.lockutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.423 221554 DEBUG nova.compute.manager [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.448 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.474 221554 DEBUG nova.compute.manager [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.474 221554 DEBUG nova.network.neutron [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.493 221554 INFO nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.513 221554 DEBUG nova.compute.manager [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 04:08:22 np0005603609 radosgw[84443]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.575 221554 INFO nova.virt.block_device [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Booting with volume cf361715-504e-40e5-87bf-5f49366009aa at /dev/vda#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.703 221554 DEBUG os_brick.utils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.705 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.716 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.717 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[2bcf6c93-6402-4083-a114-e5b47396141e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.718 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.723 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.723 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[3934f197-0aa6-42ae-8460-5f4ccddd5816]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.724 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.731 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.732 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[3103246a-05c4-4352-9c52-0290499958b4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.733 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c472fa-af29-4014-a745-6bb780dd623b]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.733 221554 DEBUG oslo_concurrency.processutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.760 221554 DEBUG oslo_concurrency.processutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.761 221554 DEBUG os_brick.initiator.connectors.lightos [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.762 221554 DEBUG os_brick.initiator.connectors.lightos [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.762 221554 DEBUG os_brick.initiator.connectors.lightos [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.762 221554 DEBUG os_brick.utils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] <== get_connector_properties: return (59ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 04:08:22 np0005603609 nova_compute[221550]: 2026-01-31 09:08:22.762 221554 DEBUG nova.virt.block_device [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updating existing volume attachment record: c8db5bd0-1a98-4aee-9c13-632937fd835a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 04:08:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:23 np0005603609 nova_compute[221550]: 2026-01-31 09:08:23.174 221554 DEBUG nova.policy [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4883c0d4a7f54a6898eba5bfdbb41266', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '496e06c7521f45c994e6426c4313acea', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 04:08:23 np0005603609 nova_compute[221550]: 2026-01-31 09:08:23.587 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:23.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:23 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:23.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.045 221554 DEBUG nova.network.neutron [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Successfully created port: 5ca89d89-0532-424d-9005-27c4b82e9793 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.068 221554 DEBUG nova.compute.manager [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.070 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.070 221554 INFO nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Creating image(s)#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.071 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.071 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Ensure instance console log exists: /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.072 221554 DEBUG oslo_concurrency.lockutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.072 221554 DEBUG oslo_concurrency.lockutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.072 221554 DEBUG oslo_concurrency.lockutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.777 221554 DEBUG nova.network.neutron [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Successfully updated port: 5ca89d89-0532-424d-9005-27c4b82e9793 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.803 221554 DEBUG oslo_concurrency.lockutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.803 221554 DEBUG oslo_concurrency.lockutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquired lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.803 221554 DEBUG nova.network.neutron [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.835 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.835 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.836 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.852 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.853 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.854 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.855 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.899 221554 DEBUG nova.compute.manager [req-2bb81149-6c71-4487-b27c-82a336ced7ca req-54268cb1-6c83-455b-b3d3-e35aa52a0f58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received event network-changed-5ca89d89-0532-424d-9005-27c4b82e9793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.899 221554 DEBUG nova.compute.manager [req-2bb81149-6c71-4487-b27c-82a336ced7ca req-54268cb1-6c83-455b-b3d3-e35aa52a0f58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Refreshing instance network info cache due to event network-changed-5ca89d89-0532-424d-9005-27c4b82e9793. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.899 221554 DEBUG oslo_concurrency.lockutils [req-2bb81149-6c71-4487-b27c-82a336ced7ca req-54268cb1-6c83-455b-b3d3-e35aa52a0f58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:08:24 np0005603609 nova_compute[221550]: 2026-01-31 09:08:24.959 221554 DEBUG nova.network.neutron [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 04:08:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:25 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:25.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:08:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:25.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.349 221554 DEBUG nova.network.neutron [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updating instance_info_cache with network_info: [{"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.582 221554 DEBUG oslo_concurrency.lockutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Releasing lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.583 221554 DEBUG nova.compute.manager [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Instance network_info: |[{"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.585 221554 DEBUG oslo_concurrency.lockutils [req-2bb81149-6c71-4487-b27c-82a336ced7ca req-54268cb1-6c83-455b-b3d3-e35aa52a0f58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.586 221554 DEBUG nova.network.neutron [req-2bb81149-6c71-4487-b27c-82a336ced7ca req-54268cb1-6c83-455b-b3d3-e35aa52a0f58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Refreshing network info cache for port 5ca89d89-0532-424d-9005-27c4b82e9793 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.591 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Start _get_guest_xml network_info=[{"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': 'c8db5bd0-1a98-4aee-9c13-632937fd835a', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-cf361715-504e-40e5-87bf-5f49366009aa', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'cf361715-504e-40e5-87bf-5f49366009aa', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '9a579d21-79af-418a-a1d6-756329428431', 'attached_at': '', 'detached_at': '', 'volume_id': 'cf361715-504e-40e5-87bf-5f49366009aa', 'serial': 'cf361715-504e-40e5-87bf-5f49366009aa'}, 'delete_on_termination': True, 'device_type': 'disk', 'disk_bus': 'virtio', 'guest_format': None, 'mount_device': '/dev/vda', 'boot_index': 0, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.596 221554 WARNING nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.604 221554 DEBUG nova.virt.libvirt.host [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.604 221554 DEBUG nova.virt.libvirt.host [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.608 221554 DEBUG nova.virt.libvirt.host [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.608 221554 DEBUG nova.virt.libvirt.host [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.610 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.610 221554 DEBUG nova.virt.hardware [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.610 221554 DEBUG nova.virt.hardware [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.611 221554 DEBUG nova.virt.hardware [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.611 221554 DEBUG nova.virt.hardware [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.611 221554 DEBUG nova.virt.hardware [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.612 221554 DEBUG nova.virt.hardware [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.612 221554 DEBUG nova.virt.hardware [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.612 221554 DEBUG nova.virt.hardware [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.612 221554 DEBUG nova.virt.hardware [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.613 221554 DEBUG nova.virt.hardware [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.613 221554 DEBUG nova.virt.hardware [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.646 221554 DEBUG nova.storage.rbd_utils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 9a579d21-79af-418a-a1d6-756329428431_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.650 221554 DEBUG oslo_concurrency.processutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:08:26 np0005603609 nova_compute[221550]: 2026-01-31 09:08:26.671 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:08:27 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3963900475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.090 221554 DEBUG oslo_concurrency.processutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.117 221554 DEBUG nova.virt.libvirt.vif [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:08:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-573915944',display_name='tempest-TestShelveInstance-server-573915944',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-573915944',id=217,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHS4QCUVvfZkmnlZau8lua//iz8/NvQiFwIgU+DK5hCJmShHpaL0dY344ffU+61pAJtdb//6v7ovdut7evpHBD6pN1yMnlHNxMsL0uM0TRO0xjCGQzhCeySiVn/Uz1Je8Q==',key_name='tempest-TestShelveInstance-385640846',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='496e06c7521f45c994e6426c4313acea',ramdisk_id='',reservation_id='r-1rp3uu0q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1485729988',owner_user_name='tempest-TestShelveInstance-1485729988-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:08:22Z,user_data=None,user_id='4883c0d4a7f54a6898eba5bfdbb41266',uuid=9a579d21-79af-418a-a1d6-756329428431,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.117 221554 DEBUG nova.network.os_vif_util [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converting VIF {"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.118 221554 DEBUG nova.network.os_vif_util [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:8f:3e,bridge_name='br-int',has_traffic_filtering=True,id=5ca89d89-0532-424d-9005-27c4b82e9793,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca89d89-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.119 221554 DEBUG nova.objects.instance [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a579d21-79af-418a-a1d6-756329428431 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.140 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  <uuid>9a579d21-79af-418a-a1d6-756329428431</uuid>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  <name>instance-000000d9</name>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestShelveInstance-server-573915944</nova:name>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 09:08:26</nova:creationTime>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        <nova:user uuid="4883c0d4a7f54a6898eba5bfdbb41266">tempest-TestShelveInstance-1485729988-project-member</nova:user>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        <nova:project uuid="496e06c7521f45c994e6426c4313acea">tempest-TestShelveInstance-1485729988</nova:project>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        <nova:port uuid="5ca89d89-0532-424d-9005-27c4b82e9793">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <system>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <entry name="serial">9a579d21-79af-418a-a1d6-756329428431</entry>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <entry name="uuid">9a579d21-79af-418a-a1d6-756329428431</entry>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    </system>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  <os>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  </os>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  <features>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  </features>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  </clock>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  <devices>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/9a579d21-79af-418a-a1d6-756329428431_disk.config">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      </source>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      </auth>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    </disk>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="volumes/volume-cf361715-504e-40e5-87bf-5f49366009aa">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      </source>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      </auth>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <serial>cf361715-504e-40e5-87bf-5f49366009aa</serial>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    </disk>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:cc:8f:3e"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <target dev="tap5ca89d89-05"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    </interface>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431/console.log" append="off"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    </serial>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <video>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    </video>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    </rng>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 04:08:27 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 04:08:27 np0005603609 nova_compute[221550]:  </devices>
Jan 31 04:08:27 np0005603609 nova_compute[221550]: </domain>
Jan 31 04:08:27 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.141 221554 DEBUG nova.compute.manager [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Preparing to wait for external event network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.142 221554 DEBUG oslo_concurrency.lockutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "9a579d21-79af-418a-a1d6-756329428431-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.142 221554 DEBUG oslo_concurrency.lockutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.142 221554 DEBUG oslo_concurrency.lockutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.143 221554 DEBUG nova.virt.libvirt.vif [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:08:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-573915944',display_name='tempest-TestShelveInstance-server-573915944',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-573915944',id=217,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHS4QCUVvfZkmnlZau8lua//iz8/NvQiFwIgU+DK5hCJmShHpaL0dY344ffU+61pAJtdb//6v7ovdut7evpHBD6pN1yMnlHNxMsL0uM0TRO0xjCGQzhCeySiVn/Uz1Je8Q==',key_name='tempest-TestShelveInstance-385640846',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='496e06c7521f45c994e6426c4313acea',ramdisk_id='',reservation_id='r-1rp3uu0q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1485729988',owner_user_name='tempest-TestShelveInstance-1485729988-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:08:22Z,user_data=None,user_id='4883c0d4a7f54a6898eba5bfdbb41266',uuid=9a579d21-79af-418a-a1d6-756329428431,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.143 221554 DEBUG nova.network.os_vif_util [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converting VIF {"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.144 221554 DEBUG nova.network.os_vif_util [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:8f:3e,bridge_name='br-int',has_traffic_filtering=True,id=5ca89d89-0532-424d-9005-27c4b82e9793,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca89d89-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.144 221554 DEBUG os_vif [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:8f:3e,bridge_name='br-int',has_traffic_filtering=True,id=5ca89d89-0532-424d-9005-27c4b82e9793,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca89d89-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.145 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.145 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.145 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.148 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.148 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ca89d89-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.149 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ca89d89-05, col_values=(('external_ids', {'iface-id': '5ca89d89-0532-424d-9005-27c4b82e9793', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:8f:3e', 'vm-uuid': '9a579d21-79af-418a-a1d6-756329428431'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.150 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:27 np0005603609 NetworkManager[49064]: <info>  [1769850507.1516] manager: (tap5ca89d89-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/477)
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.152 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.157 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.158 221554 INFO os_vif [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:8f:3e,bridge_name='br-int',has_traffic_filtering=True,id=5ca89d89-0532-424d-9005-27c4b82e9793,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca89d89-05')#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.211 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.211 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.212 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] No VIF found with MAC fa:16:3e:cc:8f:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.212 221554 INFO nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Using config drive#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.239 221554 DEBUG nova.storage.rbd_utils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 9a579d21-79af-418a-a1d6-756329428431_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.546 221554 INFO nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Creating config drive at /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431/disk.config#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.549 221554 DEBUG oslo_concurrency.processutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpz50k_6hf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:08:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.003000072s ======
Jan 31 04:08:27 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:27.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000072s
Jan 31 04:08:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.003000072s ======
Jan 31 04:08:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:27.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000072s
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.674 221554 DEBUG oslo_concurrency.processutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpz50k_6hf" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.700 221554 DEBUG nova.storage.rbd_utils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] rbd image 9a579d21-79af-418a-a1d6-756329428431_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:08:27 np0005603609 nova_compute[221550]: 2026-01-31 09:08:27.704 221554 DEBUG oslo_concurrency.processutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431/disk.config 9a579d21-79af-418a-a1d6-756329428431_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:08:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.237 221554 DEBUG oslo_concurrency.processutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431/disk.config 9a579d21-79af-418a-a1d6-756329428431_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.237 221554 INFO nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Deleting local config drive /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431/disk.config because it was imported into RBD.#033[00m
Jan 31 04:08:28 np0005603609 kernel: tap5ca89d89-05: entered promiscuous mode
Jan 31 04:08:28 np0005603609 NetworkManager[49064]: <info>  [1769850508.2777] manager: (tap5ca89d89-05): new Tun device (/org/freedesktop/NetworkManager/Devices/478)
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.277 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:28 np0005603609 ovn_controller[130359]: 2026-01-31T09:08:28Z|01016|binding|INFO|Claiming lport 5ca89d89-0532-424d-9005-27c4b82e9793 for this chassis.
Jan 31 04:08:28 np0005603609 ovn_controller[130359]: 2026-01-31T09:08:28Z|01017|binding|INFO|5ca89d89-0532-424d-9005-27c4b82e9793: Claiming fa:16:3e:cc:8f:3e 10.100.0.8
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.280 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:28 np0005603609 ovn_controller[130359]: 2026-01-31T09:08:28Z|01018|binding|INFO|Setting lport 5ca89d89-0532-424d-9005-27c4b82e9793 ovn-installed in OVS
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.285 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:28 np0005603609 ovn_controller[130359]: 2026-01-31T09:08:28Z|01019|binding|INFO|Setting lport 5ca89d89-0532-424d-9005-27c4b82e9793 up in Southbound
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.287 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.288 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:8f:3e 10.100.0.8'], port_security=['fa:16:3e:cc:8f:3e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9a579d21-79af-418a-a1d6-756329428431', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '496e06c7521f45c994e6426c4313acea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be46c5ac-d5d7-434f-9c7a-f07c0d88092e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62ab62d6-b781-4309-8775-90b43197869a, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=5ca89d89-0532-424d-9005-27c4b82e9793) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.289 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 5ca89d89-0532-424d-9005-27c4b82e9793 in datapath 1d742d07-ac6a-4870-9712-15a33a8a1e71 bound to our chassis#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.291 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1d742d07-ac6a-4870-9712-15a33a8a1e71#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.300 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbe22bd-8342-4c7b-97ef-d852d715ed75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.301 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1d742d07-a1 in ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.303 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1d742d07-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.303 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c366d93e-ca12-4376-bb34-eed2220fc6ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.304 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c03e5698-b681-481e-bcf8-dc5bbba52b36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:28 np0005603609 systemd-udevd[319458]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.314 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[798c3a48-352c-43c3-bb67-67f79fd2f31b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:28 np0005603609 systemd-machined[190912]: New machine qemu-118-instance-000000d9.
Jan 31 04:08:28 np0005603609 NetworkManager[49064]: <info>  [1769850508.3208] device (tap5ca89d89-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:08:28 np0005603609 NetworkManager[49064]: <info>  [1769850508.3216] device (tap5ca89d89-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:08:28 np0005603609 systemd[1]: Started Virtual Machine qemu-118-instance-000000d9.
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.327 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[bb20f505-d3b0-45b6-bdb3-cf6000becba9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.357 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc9aaf6-a187-4886-9001-4fa8bc4d6b1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.361 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[63bd4e84-2691-4116-b937-4f1c9673d189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:28 np0005603609 systemd-udevd[319463]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:08:28 np0005603609 NetworkManager[49064]: <info>  [1769850508.3634] manager: (tap1d742d07-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/479)
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.384 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb439ea-a1b4-4015-9510-36aedbb7d308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.387 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b0f75f-8f93-4c7f-86ad-405091ffd064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:28 np0005603609 NetworkManager[49064]: <info>  [1769850508.4020] device (tap1d742d07-a0): carrier: link connected
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.406 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[38e3bc8d-99b8-496f-badf-e55cf6d09967]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.417 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[63ec4d86-1753-4334-87bf-0b29b8e765a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d742d07-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:3d:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 311], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1074810, 'reachable_time': 31557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319491, 'error': None, 'target': 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.430 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e993d4-d077-4214-a2b4-32a19eeae0b8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:3d77'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1074810, 'tstamp': 1074810}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319492, 'error': None, 'target': 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.442 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7780e578-5f11-463f-914c-72bab077f7be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d742d07-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:3d:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 311], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1074810, 'reachable_time': 31557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319493, 'error': None, 'target': 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.443 221554 DEBUG nova.network.neutron [req-2bb81149-6c71-4487-b27c-82a336ced7ca req-54268cb1-6c83-455b-b3d3-e35aa52a0f58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updated VIF entry in instance network info cache for port 5ca89d89-0532-424d-9005-27c4b82e9793. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.444 221554 DEBUG nova.network.neutron [req-2bb81149-6c71-4487-b27c-82a336ced7ca req-54268cb1-6c83-455b-b3d3-e35aa52a0f58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updating instance_info_cache with network_info: [{"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.463 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[19a9bfcd-0483-45d0-95a7-c6474c71ccba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.464 221554 DEBUG oslo_concurrency.lockutils [req-2bb81149-6c71-4487-b27c-82a336ced7ca req-54268cb1-6c83-455b-b3d3-e35aa52a0f58 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.505 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8c08a3da-b46a-4194-b597-826d578fc843]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.507 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d742d07-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.507 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.508 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d742d07-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:08:28 np0005603609 kernel: tap1d742d07-a0: entered promiscuous mode
Jan 31 04:08:28 np0005603609 NetworkManager[49064]: <info>  [1769850508.5112] manager: (tap1d742d07-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/480)
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.514 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1d742d07-a0, col_values=(('external_ids', {'iface-id': 'ff7f72f7-b69e-4d38-bd70-12b9fe05b593'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.516 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1d742d07-ac6a-4870-9712-15a33a8a1e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1d742d07-ac6a-4870-9712-15a33a8a1e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:08:28 np0005603609 ovn_controller[130359]: 2026-01-31T09:08:28Z|01020|binding|INFO|Releasing lport ff7f72f7-b69e-4d38-bd70-12b9fe05b593 from this chassis (sb_readonly=0)
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.517 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d51d21-f3da-4f31-83ae-7636b087b20a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.518 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-1d742d07-ac6a-4870-9712-15a33a8a1e71
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/1d742d07-ac6a-4870-9712-15a33a8a1e71.pid.haproxy
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 1d742d07-ac6a-4870-9712-15a33a8a1e71
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:08:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:28.519 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'env', 'PROCESS_TAG=haproxy-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1d742d07-ac6a-4870-9712-15a33a8a1e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.520 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.589 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.607 221554 DEBUG nova.compute.manager [req-4bfebe85-7e90-446f-9b8a-27ecb338b974 req-51fef77f-ce42-44ba-b3d7-18a48f926b09 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received event network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.607 221554 DEBUG oslo_concurrency.lockutils [req-4bfebe85-7e90-446f-9b8a-27ecb338b974 req-51fef77f-ce42-44ba-b3d7-18a48f926b09 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9a579d21-79af-418a-a1d6-756329428431-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.608 221554 DEBUG oslo_concurrency.lockutils [req-4bfebe85-7e90-446f-9b8a-27ecb338b974 req-51fef77f-ce42-44ba-b3d7-18a48f926b09 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.608 221554 DEBUG oslo_concurrency.lockutils [req-4bfebe85-7e90-446f-9b8a-27ecb338b974 req-51fef77f-ce42-44ba-b3d7-18a48f926b09 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.608 221554 DEBUG nova.compute.manager [req-4bfebe85-7e90-446f-9b8a-27ecb338b974 req-51fef77f-ce42-44ba-b3d7-18a48f926b09 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Processing event network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.814 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850508.8144972, 9a579d21-79af-418a-a1d6-756329428431 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.817 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] VM Started (Lifecycle Event)#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.820 221554 DEBUG nova.compute.manager [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.824 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.828 221554 INFO nova.virt.libvirt.driver [-] [instance: 9a579d21-79af-418a-a1d6-756329428431] Instance spawned successfully.#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.828 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.842 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.848 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.852 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.852 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.852 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.853 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:08:28 np0005603609 podman[319567]: 2026-01-31 09:08:28.853583992 +0000 UTC m=+0.054558974 container create 1e6e55d86793b99b09d1e6d56a360310fafd96a2fa73bea5331d1ac900f66337 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.853 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.854 221554 DEBUG nova.virt.libvirt.driver [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.886 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.887 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850508.8146806, 9a579d21-79af-418a-a1d6-756329428431 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.887 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:08:28 np0005603609 systemd[1]: Started libpod-conmon-1e6e55d86793b99b09d1e6d56a360310fafd96a2fa73bea5331d1ac900f66337.scope.
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.913 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.916 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850508.8228235, 9a579d21-79af-418a-a1d6-756329428431 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.916 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:08:28 np0005603609 systemd[1]: Started libcrun container.
Jan 31 04:08:28 np0005603609 podman[319567]: 2026-01-31 09:08:28.823608753 +0000 UTC m=+0.024583745 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:08:28 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad3bb05eea9021f142d88b19b0f1d64af15dc0efa40911bebdd8dcc4ee7aa0dd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.927 221554 INFO nova.compute.manager [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Took 4.86 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.928 221554 DEBUG nova.compute.manager [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:08:28 np0005603609 podman[319567]: 2026-01-31 09:08:28.934645157 +0000 UTC m=+0.135620139 container init 1e6e55d86793b99b09d1e6d56a360310fafd96a2fa73bea5331d1ac900f66337 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.936 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.938 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:08:28 np0005603609 podman[319567]: 2026-01-31 09:08:28.939862325 +0000 UTC m=+0.140837307 container start 1e6e55d86793b99b09d1e6d56a360310fafd96a2fa73bea5331d1ac900f66337 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 04:08:28 np0005603609 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[319582]: [NOTICE]   (319586) : New worker (319588) forked
Jan 31 04:08:28 np0005603609 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[319582]: [NOTICE]   (319586) : Loading success.
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.962 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:08:28 np0005603609 nova_compute[221550]: 2026-01-31 09:08:28.995 221554 INFO nova.compute.manager [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Took 7.44 seconds to build instance.#033[00m
Jan 31 04:08:29 np0005603609 nova_compute[221550]: 2026-01-31 09:08:29.011 221554 DEBUG oslo_concurrency.lockutils [None req-40392894-9721-4b1b-8fc9-bab1d43ea7ac 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:29.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:29.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:30 np0005603609 nova_compute[221550]: 2026-01-31 09:08:30.691 221554 DEBUG nova.compute.manager [req-1c83db5d-c1c5-4fa1-b563-18f945f2c5d8 req-f99f3ea9-f86d-41d7-90a6-0cb8ffcbc49b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received event network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:30 np0005603609 nova_compute[221550]: 2026-01-31 09:08:30.692 221554 DEBUG oslo_concurrency.lockutils [req-1c83db5d-c1c5-4fa1-b563-18f945f2c5d8 req-f99f3ea9-f86d-41d7-90a6-0cb8ffcbc49b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9a579d21-79af-418a-a1d6-756329428431-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:30 np0005603609 nova_compute[221550]: 2026-01-31 09:08:30.692 221554 DEBUG oslo_concurrency.lockutils [req-1c83db5d-c1c5-4fa1-b563-18f945f2c5d8 req-f99f3ea9-f86d-41d7-90a6-0cb8ffcbc49b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:30 np0005603609 nova_compute[221550]: 2026-01-31 09:08:30.692 221554 DEBUG oslo_concurrency.lockutils [req-1c83db5d-c1c5-4fa1-b563-18f945f2c5d8 req-f99f3ea9-f86d-41d7-90a6-0cb8ffcbc49b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:30 np0005603609 nova_compute[221550]: 2026-01-31 09:08:30.692 221554 DEBUG nova.compute.manager [req-1c83db5d-c1c5-4fa1-b563-18f945f2c5d8 req-f99f3ea9-f86d-41d7-90a6-0cb8ffcbc49b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] No waiting events found dispatching network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:08:30 np0005603609 nova_compute[221550]: 2026-01-31 09:08:30.693 221554 WARNING nova.compute.manager [req-1c83db5d-c1c5-4fa1-b563-18f945f2c5d8 req-f99f3ea9-f86d-41d7-90a6-0cb8ffcbc49b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received unexpected event network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:08:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:31.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:31 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:31.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:32 np0005603609 nova_compute[221550]: 2026-01-31 09:08:32.152 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:33 np0005603609 nova_compute[221550]: 2026-01-31 09:08:33.591 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:08:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:33.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:08:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:33 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:33.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:33 np0005603609 nova_compute[221550]: 2026-01-31 09:08:33.783 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:35 np0005603609 nova_compute[221550]: 2026-01-31 09:08:35.638 221554 DEBUG nova.compute.manager [req-00610b0e-caa1-4f0e-99b5-d497b6e0509a req-c78d2197-9dd3-4711-a6e0-049d2cee4d2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received event network-changed-5ca89d89-0532-424d-9005-27c4b82e9793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:35 np0005603609 nova_compute[221550]: 2026-01-31 09:08:35.639 221554 DEBUG nova.compute.manager [req-00610b0e-caa1-4f0e-99b5-d497b6e0509a req-c78d2197-9dd3-4711-a6e0-049d2cee4d2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Refreshing instance network info cache due to event network-changed-5ca89d89-0532-424d-9005-27c4b82e9793. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:08:35 np0005603609 nova_compute[221550]: 2026-01-31 09:08:35.639 221554 DEBUG oslo_concurrency.lockutils [req-00610b0e-caa1-4f0e-99b5-d497b6e0509a req-c78d2197-9dd3-4711-a6e0-049d2cee4d2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:08:35 np0005603609 nova_compute[221550]: 2026-01-31 09:08:35.639 221554 DEBUG oslo_concurrency.lockutils [req-00610b0e-caa1-4f0e-99b5-d497b6e0509a req-c78d2197-9dd3-4711-a6e0-049d2cee4d2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:08:35 np0005603609 nova_compute[221550]: 2026-01-31 09:08:35.640 221554 DEBUG nova.network.neutron [req-00610b0e-caa1-4f0e-99b5-d497b6e0509a req-c78d2197-9dd3-4711-a6e0-049d2cee4d2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Refreshing network info cache for port 5ca89d89-0532-424d-9005-27c4b82e9793 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:08:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:35.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:35 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:35.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:37 np0005603609 nova_compute[221550]: 2026-01-31 09:08:37.155 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:37.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:37 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:37.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:38 np0005603609 nova_compute[221550]: 2026-01-31 09:08:38.154 221554 DEBUG nova.network.neutron [req-00610b0e-caa1-4f0e-99b5-d497b6e0509a req-c78d2197-9dd3-4711-a6e0-049d2cee4d2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updated VIF entry in instance network info cache for port 5ca89d89-0532-424d-9005-27c4b82e9793. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:08:38 np0005603609 nova_compute[221550]: 2026-01-31 09:08:38.154 221554 DEBUG nova.network.neutron [req-00610b0e-caa1-4f0e-99b5-d497b6e0509a req-c78d2197-9dd3-4711-a6e0-049d2cee4d2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updating instance_info_cache with network_info: [{"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:08:38 np0005603609 nova_compute[221550]: 2026-01-31 09:08:38.180 221554 DEBUG oslo_concurrency.lockutils [req-00610b0e-caa1-4f0e-99b5-d497b6e0509a req-c78d2197-9dd3-4711-a6e0-049d2cee4d2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:08:38 np0005603609 nova_compute[221550]: 2026-01-31 09:08:38.592 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:39 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:39.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:39.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:40 np0005603609 nova_compute[221550]: 2026-01-31 09:08:40.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:08:40 np0005603609 nova_compute[221550]: 2026-01-31 09:08:40.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:08:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:41.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:08:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:41.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:08:42 np0005603609 nova_compute[221550]: 2026-01-31 09:08:42.157 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:08:42 np0005603609 ovn_controller[130359]: 2026-01-31T09:08:42Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cc:8f:3e 10.100.0.8
Jan 31 04:08:42 np0005603609 ovn_controller[130359]: 2026-01-31T09:08:42Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:8f:3e 10.100.0.8
Jan 31 04:08:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:43 np0005603609 nova_compute[221550]: 2026-01-31 09:08:43.594 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:08:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:43.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:43.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:45.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:08:45 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:45.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:08:47 np0005603609 nova_compute[221550]: 2026-01-31 09:08:47.159 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:08:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:47.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:47 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:47.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:48 np0005603609 nova_compute[221550]: 2026-01-31 09:08:48.596 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:08:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 04:08:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:49.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 04:08:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:08:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:49.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:08:51 np0005603609 podman[319600]: 2026-01-31 09:08:51.165756811 +0000 UTC m=+0.049690464 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 04:08:51 np0005603609 podman[319599]: 2026-01-31 09:08:51.190801058 +0000 UTC m=+0.074810402 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 04:08:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:51.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe55d66f0 =====
Jan 31 04:08:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe55d66f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:08:51 np0005603609 radosgw[84443]: beast: 0x7f4fe55d66f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:51.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:08:51 np0005603609 nova_compute[221550]: 2026-01-31 09:08:51.720 221554 DEBUG oslo_concurrency.lockutils [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "9a579d21-79af-418a-a1d6-756329428431" by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:08:51 np0005603609 nova_compute[221550]: 2026-01-31 09:08:51.720 221554 DEBUG oslo_concurrency.lockutils [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431" acquired by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:08:51 np0005603609 nova_compute[221550]: 2026-01-31 09:08:51.721 221554 INFO nova.compute.manager [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Shelve offloading
Jan 31 04:08:51 np0005603609 nova_compute[221550]: 2026-01-31 09:08:51.739 221554 DEBUG nova.virt.libvirt.driver [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 04:08:52 np0005603609 nova_compute[221550]: 2026-01-31 09:08:52.162 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:08:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:53 np0005603609 nova_compute[221550]: 2026-01-31 09:08:53.599 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:08:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:53.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:53.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:54 np0005603609 kernel: tap5ca89d89-05 (unregistering): left promiscuous mode
Jan 31 04:08:54 np0005603609 NetworkManager[49064]: <info>  [1769850534.2023] device (tap5ca89d89-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:08:54 np0005603609 ovn_controller[130359]: 2026-01-31T09:08:54Z|01021|binding|INFO|Releasing lport 5ca89d89-0532-424d-9005-27c4b82e9793 from this chassis (sb_readonly=0)
Jan 31 04:08:54 np0005603609 ovn_controller[130359]: 2026-01-31T09:08:54Z|01022|binding|INFO|Setting lport 5ca89d89-0532-424d-9005-27c4b82e9793 down in Southbound
Jan 31 04:08:54 np0005603609 ovn_controller[130359]: 2026-01-31T09:08:54Z|01023|binding|INFO|Removing iface tap5ca89d89-05 ovn-installed in OVS
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.208 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.210 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:08:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:54.216 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:8f:3e 10.100.0.8'], port_security=['fa:16:3e:cc:8f:3e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9a579d21-79af-418a-a1d6-756329428431', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '496e06c7521f45c994e6426c4313acea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be46c5ac-d5d7-434f-9c7a-f07c0d88092e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62ab62d6-b781-4309-8775-90b43197869a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=5ca89d89-0532-424d-9005-27c4b82e9793) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 04:08:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:54.217 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 5ca89d89-0532-424d-9005-27c4b82e9793 in datapath 1d742d07-ac6a-4870-9712-15a33a8a1e71 unbound from our chassis
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.217 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:08:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:54.218 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1d742d07-ac6a-4870-9712-15a33a8a1e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 04:08:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:54.219 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[469df94b-96be-4c8f-8170-9f25dfd5aa02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:08:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:54.220 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 namespace which is not needed anymore
Jan 31 04:08:54 np0005603609 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d000000d9.scope: Deactivated successfully.
Jan 31 04:08:54 np0005603609 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d000000d9.scope: Consumed 14.266s CPU time.
Jan 31 04:08:54 np0005603609 systemd-machined[190912]: Machine qemu-118-instance-000000d9 terminated.
Jan 31 04:08:54 np0005603609 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[319582]: [NOTICE]   (319586) : haproxy version is 2.8.14-c23fe91
Jan 31 04:08:54 np0005603609 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[319582]: [NOTICE]   (319586) : path to executable is /usr/sbin/haproxy
Jan 31 04:08:54 np0005603609 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[319582]: [WARNING]  (319586) : Exiting Master process...
Jan 31 04:08:54 np0005603609 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[319582]: [ALERT]    (319586) : Current worker (319588) exited with code 143 (Terminated)
Jan 31 04:08:54 np0005603609 neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71[319582]: [WARNING]  (319586) : All workers exited. Exiting... (0)
Jan 31 04:08:54 np0005603609 systemd[1]: libpod-1e6e55d86793b99b09d1e6d56a360310fafd96a2fa73bea5331d1ac900f66337.scope: Deactivated successfully.
Jan 31 04:08:54 np0005603609 podman[319663]: 2026-01-31 09:08:54.319901335 +0000 UTC m=+0.040093448 container died 1e6e55d86793b99b09d1e6d56a360310fafd96a2fa73bea5331d1ac900f66337 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 04:08:54 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1e6e55d86793b99b09d1e6d56a360310fafd96a2fa73bea5331d1ac900f66337-userdata-shm.mount: Deactivated successfully.
Jan 31 04:08:54 np0005603609 systemd[1]: var-lib-containers-storage-overlay-ad3bb05eea9021f142d88b19b0f1d64af15dc0efa40911bebdd8dcc4ee7aa0dd-merged.mount: Deactivated successfully.
Jan 31 04:08:54 np0005603609 podman[319663]: 2026-01-31 09:08:54.369804794 +0000 UTC m=+0.089996887 container cleanup 1e6e55d86793b99b09d1e6d56a360310fafd96a2fa73bea5331d1ac900f66337 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 04:08:54 np0005603609 systemd[1]: libpod-conmon-1e6e55d86793b99b09d1e6d56a360310fafd96a2fa73bea5331d1ac900f66337.scope: Deactivated successfully.
Jan 31 04:08:54 np0005603609 podman[319695]: 2026-01-31 09:08:54.422750276 +0000 UTC m=+0.039397380 container remove 1e6e55d86793b99b09d1e6d56a360310fafd96a2fa73bea5331d1ac900f66337 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.423 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.426 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:08:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:54.426 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[6311c910-f8eb-4b32-99a5-4c9a5cb2cc3b]: (4, ('Sat Jan 31 09:08:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 (1e6e55d86793b99b09d1e6d56a360310fafd96a2fa73bea5331d1ac900f66337)\n1e6e55d86793b99b09d1e6d56a360310fafd96a2fa73bea5331d1ac900f66337\nSat Jan 31 09:08:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 (1e6e55d86793b99b09d1e6d56a360310fafd96a2fa73bea5331d1ac900f66337)\n1e6e55d86793b99b09d1e6d56a360310fafd96a2fa73bea5331d1ac900f66337\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:08:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:54.428 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2c3fcb-f3e4-4934-84ee-a9a1a90f4ceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:08:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:54.428 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d742d07-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 04:08:54 np0005603609 kernel: tap1d742d07-a0: left promiscuous mode
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.429 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.436 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:08:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:54.438 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[4dff8cd1-0dfa-48ad-b502-5a56c5cc50ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:08:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:54.456 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[f24611cd-f6be-4f38-b662-bd186034056d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:08:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:54.458 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[dac4033f-5bdd-41c7-bc92-0ee059ec544b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:08:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:54.470 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[71e15931-bebf-4fbd-8878-e8acc036e6d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1074805, 'reachable_time': 30210, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319723, 'error': None, 'target': 'ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:08:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:54.472 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1d742d07-ac6a-4870-9712-15a33a8a1e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 04:08:54 np0005603609 systemd[1]: run-netns-ovnmeta\x2d1d742d07\x2dac6a\x2d4870\x2d9712\x2d15a33a8a1e71.mount: Deactivated successfully.
Jan 31 04:08:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:54.472 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ed9304-e5e2-45ea-880e-d4addddc2960]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.473 221554 DEBUG nova.compute.manager [req-d5ba1889-b547-48f3-ac7f-b9ded0678f94 req-ad0cc55c-7dd1-46d6-a942-31880b7b7cb0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received event network-vif-unplugged-5ca89d89-0532-424d-9005-27c4b82e9793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.473 221554 DEBUG oslo_concurrency.lockutils [req-d5ba1889-b547-48f3-ac7f-b9ded0678f94 req-ad0cc55c-7dd1-46d6-a942-31880b7b7cb0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9a579d21-79af-418a-a1d6-756329428431-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.473 221554 DEBUG oslo_concurrency.lockutils [req-d5ba1889-b547-48f3-ac7f-b9ded0678f94 req-ad0cc55c-7dd1-46d6-a942-31880b7b7cb0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.474 221554 DEBUG oslo_concurrency.lockutils [req-d5ba1889-b547-48f3-ac7f-b9ded0678f94 req-ad0cc55c-7dd1-46d6-a942-31880b7b7cb0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.474 221554 DEBUG nova.compute.manager [req-d5ba1889-b547-48f3-ac7f-b9ded0678f94 req-ad0cc55c-7dd1-46d6-a942-31880b7b7cb0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] No waiting events found dispatching network-vif-unplugged-5ca89d89-0532-424d-9005-27c4b82e9793 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.474 221554 WARNING nova.compute.manager [req-d5ba1889-b547-48f3-ac7f-b9ded0678f94 req-ad0cc55c-7dd1-46d6-a942-31880b7b7cb0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received unexpected event network-vif-unplugged-5ca89d89-0532-424d-9005-27c4b82e9793 for instance with vm_state active and task_state shelving.
Jan 31 04:08:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:54.668 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=108, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=107) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:08:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:54.669 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.693 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.753 221554 INFO nova.virt.libvirt.driver [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.757 221554 INFO nova.virt.libvirt.driver [-] [instance: 9a579d21-79af-418a-a1d6-756329428431] Instance destroyed successfully.#033[00m
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.758 221554 DEBUG nova.objects.instance [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'numa_topology' on Instance uuid 9a579d21-79af-418a-a1d6-756329428431 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.784 221554 DEBUG nova.compute.manager [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.787 221554 DEBUG oslo_concurrency.lockutils [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.787 221554 DEBUG oslo_concurrency.lockutils [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquired lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:08:54 np0005603609 nova_compute[221550]: 2026-01-31 09:08:54.787 221554 DEBUG nova.network.neutron [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:08:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:08:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:55.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:08:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:55.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:56 np0005603609 nova_compute[221550]: 2026-01-31 09:08:56.420 221554 DEBUG nova.network.neutron [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updating instance_info_cache with network_info: [{"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:08:56 np0005603609 nova_compute[221550]: 2026-01-31 09:08:56.448 221554 DEBUG oslo_concurrency.lockutils [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Releasing lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:08:56 np0005603609 nova_compute[221550]: 2026-01-31 09:08:56.561 221554 DEBUG nova.compute.manager [req-b989844d-7395-4685-8ac2-ee7472685cd1 req-a60576c6-4063-49e0-84c1-9c37c40c0e45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received event network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:56 np0005603609 nova_compute[221550]: 2026-01-31 09:08:56.561 221554 DEBUG oslo_concurrency.lockutils [req-b989844d-7395-4685-8ac2-ee7472685cd1 req-a60576c6-4063-49e0-84c1-9c37c40c0e45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9a579d21-79af-418a-a1d6-756329428431-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:56 np0005603609 nova_compute[221550]: 2026-01-31 09:08:56.561 221554 DEBUG oslo_concurrency.lockutils [req-b989844d-7395-4685-8ac2-ee7472685cd1 req-a60576c6-4063-49e0-84c1-9c37c40c0e45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:56 np0005603609 nova_compute[221550]: 2026-01-31 09:08:56.562 221554 DEBUG oslo_concurrency.lockutils [req-b989844d-7395-4685-8ac2-ee7472685cd1 req-a60576c6-4063-49e0-84c1-9c37c40c0e45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:56 np0005603609 nova_compute[221550]: 2026-01-31 09:08:56.562 221554 DEBUG nova.compute.manager [req-b989844d-7395-4685-8ac2-ee7472685cd1 req-a60576c6-4063-49e0-84c1-9c37c40c0e45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] No waiting events found dispatching network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:08:56 np0005603609 nova_compute[221550]: 2026-01-31 09:08:56.562 221554 WARNING nova.compute.manager [req-b989844d-7395-4685-8ac2-ee7472685cd1 req-a60576c6-4063-49e0-84c1-9c37c40c0e45 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received unexpected event network-vif-plugged-5ca89d89-0532-424d-9005-27c4b82e9793 for instance with vm_state active and task_state shelving.#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.164 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.253 221554 INFO nova.virt.libvirt.driver [-] [instance: 9a579d21-79af-418a-a1d6-756329428431] Instance destroyed successfully.#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.253 221554 DEBUG nova.objects.instance [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lazy-loading 'resources' on Instance uuid 9a579d21-79af-418a-a1d6-756329428431 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.268 221554 DEBUG nova.virt.libvirt.vif [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:08:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-573915944',display_name='tempest-TestShelveInstance-server-573915944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-573915944',id=217,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHS4QCUVvfZkmnlZau8lua//iz8/NvQiFwIgU+DK5hCJmShHpaL0dY344ffU+61pAJtdb//6v7ovdut7evpHBD6pN1yMnlHNxMsL0uM0TRO0xjCGQzhCeySiVn/Uz1Je8Q==',key_name='tempest-TestShelveInstance-385640846',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:08:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='496e06c7521f45c994e6426c4313acea',ramdisk_id='',reservation_id='r-1rp3uu0q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-1485729988',owner_user_name='tempest-TestShelveInstance-1485729988-project-member'},tags=<?>,task_state='shelving',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:08:28Z,user_data=None,user_id='4883c0d4a7f54a6898eba5bfdbb41266',uuid=9a579d21-79af-418a-a1d6-756329428431,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": 
"192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.268 221554 DEBUG nova.network.os_vif_util [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converting VIF {"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ca89d89-05", "ovs_interfaceid": "5ca89d89-0532-424d-9005-27c4b82e9793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.269 221554 DEBUG nova.network.os_vif_util [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:8f:3e,bridge_name='br-int',has_traffic_filtering=True,id=5ca89d89-0532-424d-9005-27c4b82e9793,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca89d89-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.269 221554 DEBUG os_vif [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:8f:3e,bridge_name='br-int',has_traffic_filtering=True,id=5ca89d89-0532-424d-9005-27c4b82e9793,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca89d89-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.271 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.272 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ca89d89-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.273 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.274 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.276 221554 INFO os_vif [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:8f:3e,bridge_name='br-int',has_traffic_filtering=True,id=5ca89d89-0532-424d-9005-27c4b82e9793,network=Network(1d742d07-ac6a-4870-9712-15a33a8a1e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ca89d89-05')#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.366 221554 DEBUG nova.compute.manager [req-db5b8cdd-3eb5-46aa-b729-5089a4109a96 req-1876bbcf-6625-4427-b2b7-1732ecdba2c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Received event network-changed-5ca89d89-0532-424d-9005-27c4b82e9793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.366 221554 DEBUG nova.compute.manager [req-db5b8cdd-3eb5-46aa-b729-5089a4109a96 req-1876bbcf-6625-4427-b2b7-1732ecdba2c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Refreshing instance network info cache due to event network-changed-5ca89d89-0532-424d-9005-27c4b82e9793. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.366 221554 DEBUG oslo_concurrency.lockutils [req-db5b8cdd-3eb5-46aa-b729-5089a4109a96 req-1876bbcf-6625-4427-b2b7-1732ecdba2c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.367 221554 DEBUG oslo_concurrency.lockutils [req-db5b8cdd-3eb5-46aa-b729-5089a4109a96 req-1876bbcf-6625-4427-b2b7-1732ecdba2c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.367 221554 DEBUG nova.network.neutron [req-db5b8cdd-3eb5-46aa-b729-5089a4109a96 req-1876bbcf-6625-4427-b2b7-1732ecdba2c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Refreshing network info cache for port 5ca89d89-0532-424d-9005-27c4b82e9793 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.545 221554 INFO nova.virt.libvirt.driver [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Deleting instance files /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431_del#033[00m
Jan 31 04:08:57 np0005603609 nova_compute[221550]: 2026-01-31 09:08:57.545 221554 INFO nova.virt.libvirt.driver [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Deletion of /var/lib/nova/instances/9a579d21-79af-418a-a1d6-756329428431_del complete#033[00m
Jan 31 04:08:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:57.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:08:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:57.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:08:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:58 np0005603609 nova_compute[221550]: 2026-01-31 09:08:58.601 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:58 np0005603609 nova_compute[221550]: 2026-01-31 09:08:58.603 221554 INFO nova.scheduler.client.report [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Deleted allocations for instance 9a579d21-79af-418a-a1d6-756329428431#033[00m
Jan 31 04:08:58 np0005603609 nova_compute[221550]: 2026-01-31 09:08:58.646 221554 DEBUG oslo_concurrency.lockutils [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:58 np0005603609 nova_compute[221550]: 2026-01-31 09:08:58.646 221554 DEBUG oslo_concurrency.lockutils [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:58 np0005603609 nova_compute[221550]: 2026-01-31 09:08:58.677 221554 DEBUG oslo_concurrency.processutils [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:08:58 np0005603609 nova_compute[221550]: 2026-01-31 09:08:58.795 221554 DEBUG nova.network.neutron [req-db5b8cdd-3eb5-46aa-b729-5089a4109a96 req-1876bbcf-6625-4427-b2b7-1732ecdba2c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updated VIF entry in instance network info cache for port 5ca89d89-0532-424d-9005-27c4b82e9793. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:08:58 np0005603609 nova_compute[221550]: 2026-01-31 09:08:58.797 221554 DEBUG nova.network.neutron [req-db5b8cdd-3eb5-46aa-b729-5089a4109a96 req-1876bbcf-6625-4427-b2b7-1732ecdba2c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9a579d21-79af-418a-a1d6-756329428431] Updating instance_info_cache with network_info: [{"id": "5ca89d89-0532-424d-9005-27c4b82e9793", "address": "fa:16:3e:cc:8f:3e", "network": {"id": "1d742d07-ac6a-4870-9712-15a33a8a1e71", "bridge": null, "label": "tempest-TestShelveInstance-1957663658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "496e06c7521f45c994e6426c4313acea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap5ca89d89-05", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:08:58 np0005603609 nova_compute[221550]: 2026-01-31 09:08:58.842 221554 DEBUG oslo_concurrency.lockutils [req-db5b8cdd-3eb5-46aa-b729-5089a4109a96 req-1876bbcf-6625-4427-b2b7-1732ecdba2c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9a579d21-79af-418a-a1d6-756329428431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:08:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:08:59 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4113501718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:08:59 np0005603609 nova_compute[221550]: 2026-01-31 09:08:59.077 221554 DEBUG oslo_concurrency.processutils [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:08:59 np0005603609 nova_compute[221550]: 2026-01-31 09:08:59.083 221554 DEBUG nova.compute.provider_tree [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:08:59 np0005603609 nova_compute[221550]: 2026-01-31 09:08:59.141 221554 DEBUG nova.scheduler.client.report [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:08:59 np0005603609 nova_compute[221550]: 2026-01-31 09:08:59.168 221554 DEBUG oslo_concurrency.lockutils [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:59 np0005603609 nova_compute[221550]: 2026-01-31 09:08:59.211 221554 DEBUG oslo_concurrency.lockutils [None req-ba9febb4-bd14-49e9-8e15-fb6f84777e6a 4883c0d4a7f54a6898eba5bfdbb41266 496e06c7521f45c994e6426c4313acea - - default default] Lock "9a579d21-79af-418a-a1d6-756329428431" "released" by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" :: held 7.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:59 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:08:59.672 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '108'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:08:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:59.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:08:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:59.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:01.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:01.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #202. Immutable memtables: 0.
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:09:01.835333) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 202
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850541835398, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 644, "num_deletes": 251, "total_data_size": 1076086, "memory_usage": 1093832, "flush_reason": "Manual Compaction"}
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #203: started
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850541850785, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 203, "file_size": 709428, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 97381, "largest_seqno": 98020, "table_properties": {"data_size": 706255, "index_size": 1080, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7396, "raw_average_key_size": 19, "raw_value_size": 699981, "raw_average_value_size": 1808, "num_data_blocks": 48, "num_entries": 387, "num_filter_entries": 387, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850502, "oldest_key_time": 1769850502, "file_creation_time": 1769850541, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 15508 microseconds, and 2551 cpu microseconds.
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:09:01.850842) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #203: 709428 bytes OK
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:09:01.850863) [db/memtable_list.cc:519] [default] Level-0 commit table #203 started
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:09:01.855367) [db/memtable_list.cc:722] [default] Level-0 commit table #203: memtable #1 done
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:09:01.855397) EVENT_LOG_v1 {"time_micros": 1769850541855389, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:09:01.855417) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 1072551, prev total WAL file size 1072551, number of live WAL files 2.
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000199.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:09:01.856188) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [203(692KB)], [201(13MB)]
Jan 31 04:09:01 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850541856223, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [203], "files_L6": [201], "score": -1, "input_data_size": 14840696, "oldest_snapshot_seqno": -1}
Jan 31 04:09:02 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #204: 11399 keys, 12878944 bytes, temperature: kUnknown
Jan 31 04:09:02 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850542036695, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 204, "file_size": 12878944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12808308, "index_size": 41019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28549, "raw_key_size": 303058, "raw_average_key_size": 26, "raw_value_size": 12611929, "raw_average_value_size": 1106, "num_data_blocks": 1543, "num_entries": 11399, "num_filter_entries": 11399, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769850541, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 204, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:09:02 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:09:02 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:09:02.036929) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 12878944 bytes
Jan 31 04:09:02 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:09:02.051038) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 82.2 rd, 71.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 13.5 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(39.1) write-amplify(18.2) OK, records in: 11910, records dropped: 511 output_compression: NoCompression
Jan 31 04:09:02 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:09:02.051074) EVENT_LOG_v1 {"time_micros": 1769850542051061, "job": 130, "event": "compaction_finished", "compaction_time_micros": 180544, "compaction_time_cpu_micros": 31353, "output_level": 6, "num_output_files": 1, "total_output_size": 12878944, "num_input_records": 11910, "num_output_records": 11399, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:09:02 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:09:02 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850542051291, "job": 130, "event": "table_file_deletion", "file_number": 203}
Jan 31 04:09:02 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000201.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:09:02 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850542052739, "job": 130, "event": "table_file_deletion", "file_number": 201}
Jan 31 04:09:02 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:09:01.856103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:09:02 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:09:02.052861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:09:02 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:09:02.052867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:09:02 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:09:02.052869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:09:02 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:09:02.052871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:09:02 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:09:02.052872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:09:02 np0005603609 nova_compute[221550]: 2026-01-31 09:09:02.274 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:03 np0005603609 nova_compute[221550]: 2026-01-31 09:09:03.603 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:03.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:03.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:05.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:05.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:07 np0005603609 nova_compute[221550]: 2026-01-31 09:09:07.276 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:09:07.563 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:09:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:09:07.564 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:09:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:09:07.564 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:09:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:07.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:07.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:08 np0005603609 nova_compute[221550]: 2026-01-31 09:09:08.602 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:09 np0005603609 nova_compute[221550]: 2026-01-31 09:09:09.437 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850534.4352443, 9a579d21-79af-418a-a1d6-756329428431 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:09:09 np0005603609 nova_compute[221550]: 2026-01-31 09:09:09.437 221554 INFO nova.compute.manager [-] [instance: 9a579d21-79af-418a-a1d6-756329428431] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:09:09 np0005603609 nova_compute[221550]: 2026-01-31 09:09:09.464 221554 DEBUG nova.compute.manager [None req-c7a86b57-a587-4b71-a1ce-21e4a33774a7 - - - - - -] [instance: 9a579d21-79af-418a-a1d6-756329428431] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:09:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:09.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:09.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:11.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:11.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:12 np0005603609 nova_compute[221550]: 2026-01-31 09:09:12.313 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:12 np0005603609 nova_compute[221550]: 2026-01-31 09:09:12.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:13 np0005603609 nova_compute[221550]: 2026-01-31 09:09:13.604 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:13.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:13.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:14 np0005603609 nova_compute[221550]: 2026-01-31 09:09:14.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:15.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:15.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:17 np0005603609 nova_compute[221550]: 2026-01-31 09:09:17.315 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:17.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:17.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:18 np0005603609 podman[320155]: 2026-01-31 09:09:18.354826935 +0000 UTC m=+0.015716608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:09:18 np0005603609 nova_compute[221550]: 2026-01-31 09:09:18.605 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:18 np0005603609 podman[320155]: 2026-01-31 09:09:18.776217058 +0000 UTC m=+0.437106751 container create ad30193ffd79053ad62016bd5f9804ec21424ebf81fe1e3a370797b005a7e986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:09:18 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:09:18 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:09:18 np0005603609 systemd[1]: Started libpod-conmon-ad30193ffd79053ad62016bd5f9804ec21424ebf81fe1e3a370797b005a7e986.scope.
Jan 31 04:09:19 np0005603609 systemd[1]: Started libcrun container.
Jan 31 04:09:19 np0005603609 podman[320155]: 2026-01-31 09:09:19.040485304 +0000 UTC m=+0.701374957 container init ad30193ffd79053ad62016bd5f9804ec21424ebf81fe1e3a370797b005a7e986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:09:19 np0005603609 podman[320155]: 2026-01-31 09:09:19.045196139 +0000 UTC m=+0.706085792 container start ad30193ffd79053ad62016bd5f9804ec21424ebf81fe1e3a370797b005a7e986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 31 04:09:19 np0005603609 exciting_cori[320171]: 167 167
Jan 31 04:09:19 np0005603609 systemd[1]: libpod-ad30193ffd79053ad62016bd5f9804ec21424ebf81fe1e3a370797b005a7e986.scope: Deactivated successfully.
Jan 31 04:09:19 np0005603609 podman[320155]: 2026-01-31 09:09:19.06147783 +0000 UTC m=+0.722367483 container attach ad30193ffd79053ad62016bd5f9804ec21424ebf81fe1e3a370797b005a7e986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:09:19 np0005603609 podman[320155]: 2026-01-31 09:09:19.06186351 +0000 UTC m=+0.722753183 container died ad30193ffd79053ad62016bd5f9804ec21424ebf81fe1e3a370797b005a7e986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:09:19 np0005603609 systemd[1]: var-lib-containers-storage-overlay-d0a560e421b363a04d07a9d0fc8d66a7c8bddb27920c6fc40be2d6d86b64e6f5-merged.mount: Deactivated successfully.
Jan 31 04:09:19 np0005603609 podman[320155]: 2026-01-31 09:09:19.126449349 +0000 UTC m=+0.787339002 container remove ad30193ffd79053ad62016bd5f9804ec21424ebf81fe1e3a370797b005a7e986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 04:09:19 np0005603609 systemd[1]: libpod-conmon-ad30193ffd79053ad62016bd5f9804ec21424ebf81fe1e3a370797b005a7e986.scope: Deactivated successfully.
Jan 31 04:09:19 np0005603609 podman[320194]: 2026-01-31 09:09:19.224021812 +0000 UTC m=+0.030301758 container create de88988d080cae821953f1aaca37f310a0e58b01d4812736f1d2111b32e9aa2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 04:09:19 np0005603609 systemd[1]: Started libpod-conmon-de88988d080cae821953f1aaca37f310a0e58b01d4812736f1d2111b32e9aa2c.scope.
Jan 31 04:09:19 np0005603609 systemd[1]: Started libcrun container.
Jan 31 04:09:19 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fd843024763f851b84e87c3137b9671a5c680c3e561e0bf1d32436fad78665/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:19 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fd843024763f851b84e87c3137b9671a5c680c3e561e0bf1d32436fad78665/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:19 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fd843024763f851b84e87c3137b9671a5c680c3e561e0bf1d32436fad78665/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:19 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fd843024763f851b84e87c3137b9671a5c680c3e561e0bf1d32436fad78665/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:19 np0005603609 podman[320194]: 2026-01-31 09:09:19.210479078 +0000 UTC m=+0.016759024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:09:19 np0005603609 podman[320194]: 2026-01-31 09:09:19.308532901 +0000 UTC m=+0.114812877 container init de88988d080cae821953f1aaca37f310a0e58b01d4812736f1d2111b32e9aa2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:09:19 np0005603609 podman[320194]: 2026-01-31 09:09:19.314301253 +0000 UTC m=+0.120581199 container start de88988d080cae821953f1aaca37f310a0e58b01d4812736f1d2111b32e9aa2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Jan 31 04:09:19 np0005603609 podman[320194]: 2026-01-31 09:09:19.317566094 +0000 UTC m=+0.123846040 container attach de88988d080cae821953f1aaca37f310a0e58b01d4812736f1d2111b32e9aa2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meitner, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:09:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:19.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:19.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:09:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:09:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:09:19 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]: [
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:    {
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:        "available": false,
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:        "ceph_device": false,
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:        "lsm_data": {},
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:        "lvs": [],
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:        "path": "/dev/sr0",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:        "rejected_reasons": [
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "Has a FileSystem",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "Insufficient space (<5GB)"
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:        ],
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:        "sys_api": {
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "actuators": null,
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "device_nodes": "sr0",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "devname": "sr0",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "human_readable_size": "482.00 KB",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "id_bus": "ata",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "model": "QEMU DVD-ROM",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "nr_requests": "2",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "parent": "/dev/sr0",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "partitions": {},
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "path": "/dev/sr0",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "removable": "1",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "rev": "2.5+",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "ro": "0",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "rotational": "1",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "sas_address": "",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "sas_device_handle": "",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "scheduler_mode": "mq-deadline",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "sectors": 0,
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "sectorsize": "2048",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "size": 493568.0,
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "support_discard": "2048",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "type": "disk",
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:            "vendor": "QEMU"
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:        }
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]:    }
Jan 31 04:09:20 np0005603609 adoring_meitner[320210]: ]
Jan 31 04:09:20 np0005603609 systemd[1]: libpod-de88988d080cae821953f1aaca37f310a0e58b01d4812736f1d2111b32e9aa2c.scope: Deactivated successfully.
Jan 31 04:09:20 np0005603609 systemd[1]: libpod-de88988d080cae821953f1aaca37f310a0e58b01d4812736f1d2111b32e9aa2c.scope: Consumed 1.039s CPU time.
Jan 31 04:09:20 np0005603609 podman[320194]: 2026-01-31 09:09:20.363838629 +0000 UTC m=+1.170118575 container died de88988d080cae821953f1aaca37f310a0e58b01d4812736f1d2111b32e9aa2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meitner, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 04:09:20 np0005603609 systemd[1]: var-lib-containers-storage-overlay-25fd843024763f851b84e87c3137b9671a5c680c3e561e0bf1d32436fad78665-merged.mount: Deactivated successfully.
Jan 31 04:09:20 np0005603609 podman[320194]: 2026-01-31 09:09:20.424597685 +0000 UTC m=+1.230877631 container remove de88988d080cae821953f1aaca37f310a0e58b01d4812736f1d2111b32e9aa2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:09:20 np0005603609 systemd[1]: libpod-conmon-de88988d080cae821953f1aaca37f310a0e58b01d4812736f1d2111b32e9aa2c.scope: Deactivated successfully.
Jan 31 04:09:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:09:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:09:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:09:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:09:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:09:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:09:21 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:09:21 np0005603609 nova_compute[221550]: 2026-01-31 09:09:21.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:21 np0005603609 nova_compute[221550]: 2026-01-31 09:09:21.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:21 np0005603609 nova_compute[221550]: 2026-01-31 09:09:21.690 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:09:21 np0005603609 nova_compute[221550]: 2026-01-31 09:09:21.690 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:09:21 np0005603609 nova_compute[221550]: 2026-01-31 09:09:21.690 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:09:21 np0005603609 nova_compute[221550]: 2026-01-31 09:09:21.691 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:09:21 np0005603609 nova_compute[221550]: 2026-01-31 09:09:21.691 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:09:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 04:09:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:21.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 04:09:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:21.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:09:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4073190936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:09:22 np0005603609 nova_compute[221550]: 2026-01-31 09:09:22.109 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:09:22 np0005603609 podman[321385]: 2026-01-31 09:09:22.172651916 +0000 UTC m=+0.056528093 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 04:09:22 np0005603609 podman[321383]: 2026-01-31 09:09:22.196528033 +0000 UTC m=+0.081669991 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 04:09:22 np0005603609 nova_compute[221550]: 2026-01-31 09:09:22.286 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:09:22 np0005603609 nova_compute[221550]: 2026-01-31 09:09:22.288 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4083MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:09:22 np0005603609 nova_compute[221550]: 2026-01-31 09:09:22.288 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:09:22 np0005603609 nova_compute[221550]: 2026-01-31 09:09:22.288 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:09:22 np0005603609 nova_compute[221550]: 2026-01-31 09:09:22.315 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:22 np0005603609 nova_compute[221550]: 2026-01-31 09:09:22.430 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:09:22 np0005603609 nova_compute[221550]: 2026-01-31 09:09:22.430 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:09:22 np0005603609 nova_compute[221550]: 2026-01-31 09:09:22.444 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 04:09:22 np0005603609 nova_compute[221550]: 2026-01-31 09:09:22.464 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 04:09:22 np0005603609 nova_compute[221550]: 2026-01-31 09:09:22.465 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 04:09:22 np0005603609 nova_compute[221550]: 2026-01-31 09:09:22.481 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 04:09:22 np0005603609 nova_compute[221550]: 2026-01-31 09:09:22.508 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 04:09:22 np0005603609 nova_compute[221550]: 2026-01-31 09:09:22.533 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:09:22 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:09:22 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2880264468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:09:22 np0005603609 nova_compute[221550]: 2026-01-31 09:09:22.991 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:09:22 np0005603609 nova_compute[221550]: 2026-01-31 09:09:22.997 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:09:23 np0005603609 nova_compute[221550]: 2026-01-31 09:09:23.012 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:09:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:23 np0005603609 nova_compute[221550]: 2026-01-31 09:09:23.039 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:09:23 np0005603609 nova_compute[221550]: 2026-01-31 09:09:23.039 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:09:23 np0005603609 nova_compute[221550]: 2026-01-31 09:09:23.608 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:23.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:23.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:25.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:25.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:26 np0005603609 nova_compute[221550]: 2026-01-31 09:09:26.037 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:26 np0005603609 nova_compute[221550]: 2026-01-31 09:09:26.038 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:09:26 np0005603609 nova_compute[221550]: 2026-01-31 09:09:26.039 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:09:26 np0005603609 nova_compute[221550]: 2026-01-31 09:09:26.057 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:09:26 np0005603609 nova_compute[221550]: 2026-01-31 09:09:26.058 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:26 np0005603609 nova_compute[221550]: 2026-01-31 09:09:26.058 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:09:27 np0005603609 ovn_controller[130359]: 2026-01-31T09:09:27Z|01024|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Jan 31 04:09:27 np0005603609 nova_compute[221550]: 2026-01-31 09:09:27.317 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:27.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:27.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:28 np0005603609 nova_compute[221550]: 2026-01-31 09:09:28.609 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:09:28 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:09:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:29.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:29.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:31.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:31.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:32 np0005603609 nova_compute[221550]: 2026-01-31 09:09:32.320 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:33 np0005603609 nova_compute[221550]: 2026-01-31 09:09:33.612 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:33.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:33.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:35 np0005603609 nova_compute[221550]: 2026-01-31 09:09:35.676 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:35.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:35.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:37 np0005603609 nova_compute[221550]: 2026-01-31 09:09:37.323 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:37.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:37.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:38 np0005603609 nova_compute[221550]: 2026-01-31 09:09:38.613 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:09:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:39.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:09:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:39.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:40 np0005603609 nova_compute[221550]: 2026-01-31 09:09:40.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:41 np0005603609 nova_compute[221550]: 2026-01-31 09:09:41.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:41.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:09:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:41.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:09:42 np0005603609 nova_compute[221550]: 2026-01-31 09:09:42.326 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:43 np0005603609 nova_compute[221550]: 2026-01-31 09:09:43.615 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:43.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:43.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:45.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:45.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:47 np0005603609 nova_compute[221550]: 2026-01-31 09:09:47.329 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:47.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:47.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:48 np0005603609 nova_compute[221550]: 2026-01-31 09:09:48.618 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:49.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:49.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:51.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:51.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:52 np0005603609 nova_compute[221550]: 2026-01-31 09:09:52.332 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:53 np0005603609 podman[321507]: 2026-01-31 09:09:53.171859926 +0000 UTC m=+0.057824514 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 04:09:53 np0005603609 podman[321506]: 2026-01-31 09:09:53.231778111 +0000 UTC m=+0.113880865 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible)
Jan 31 04:09:53 np0005603609 nova_compute[221550]: 2026-01-31 09:09:53.620 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:53.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:53.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:55.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:55.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:56 np0005603609 nova_compute[221550]: 2026-01-31 09:09:56.441 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:09:56.443 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=109, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=108) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:09:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:09:56.444 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:09:57 np0005603609 nova_compute[221550]: 2026-01-31 09:09:57.334 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:57.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:09:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:57.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:58 np0005603609 nova_compute[221550]: 2026-01-31 09:09:58.622 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:59.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:09:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:09:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:59.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:10:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:10:00.446 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '109'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:10:00 np0005603609 nova_compute[221550]: 2026-01-31 09:10:00.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:01 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 04:10:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:01.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:01.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:02 np0005603609 nova_compute[221550]: 2026-01-31 09:10:02.336 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:03 np0005603609 nova_compute[221550]: 2026-01-31 09:10:03.649 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:03 np0005603609 nova_compute[221550]: 2026-01-31 09:10:03.693 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:03 np0005603609 nova_compute[221550]: 2026-01-31 09:10:03.722 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:03.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:03.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:10:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:05.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:10:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:10:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:05.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:10:07 np0005603609 nova_compute[221550]: 2026-01-31 09:10:07.339 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:10:07.565 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:10:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:10:07.565 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:10:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:10:07.565 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:10:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:07.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:07.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:08 np0005603609 nova_compute[221550]: 2026-01-31 09:10:08.651 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:10:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:09.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:10:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:09.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:11.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:10:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:11.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:10:12 np0005603609 nova_compute[221550]: 2026-01-31 09:10:12.341 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:13 np0005603609 nova_compute[221550]: 2026-01-31 09:10:13.653 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:13 np0005603609 nova_compute[221550]: 2026-01-31 09:10:13.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:13.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:13.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:14 np0005603609 nova_compute[221550]: 2026-01-31 09:10:14.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:15.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:15.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:17 np0005603609 nova_compute[221550]: 2026-01-31 09:10:17.344 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:17.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:17.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:18 np0005603609 nova_compute[221550]: 2026-01-31 09:10:18.655 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:19.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:10:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:19.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:10:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:21.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:21.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:22 np0005603609 nova_compute[221550]: 2026-01-31 09:10:22.346 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:22 np0005603609 nova_compute[221550]: 2026-01-31 09:10:22.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:22 np0005603609 nova_compute[221550]: 2026-01-31 09:10:22.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:23 np0005603609 nova_compute[221550]: 2026-01-31 09:10:23.562 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:10:23 np0005603609 nova_compute[221550]: 2026-01-31 09:10:23.562 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:10:23 np0005603609 nova_compute[221550]: 2026-01-31 09:10:23.563 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:10:23 np0005603609 nova_compute[221550]: 2026-01-31 09:10:23.563 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:10:23 np0005603609 nova_compute[221550]: 2026-01-31 09:10:23.563 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:10:23 np0005603609 nova_compute[221550]: 2026-01-31 09:10:23.657 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:23.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:23.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:10:23 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/722512109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:10:23 np0005603609 nova_compute[221550]: 2026-01-31 09:10:23.996 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:10:24 np0005603609 nova_compute[221550]: 2026-01-31 09:10:24.170 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:10:24 np0005603609 nova_compute[221550]: 2026-01-31 09:10:24.172 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4146MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:10:24 np0005603609 nova_compute[221550]: 2026-01-31 09:10:24.172 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:10:24 np0005603609 nova_compute[221550]: 2026-01-31 09:10:24.172 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:10:24 np0005603609 podman[321579]: 2026-01-31 09:10:24.188944708 +0000 UTC m=+0.060131041 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:10:24 np0005603609 podman[321578]: 2026-01-31 09:10:24.216897506 +0000 UTC m=+0.094124889 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:10:24 np0005603609 nova_compute[221550]: 2026-01-31 09:10:24.506 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:10:24 np0005603609 nova_compute[221550]: 2026-01-31 09:10:24.506 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:10:24 np0005603609 nova_compute[221550]: 2026-01-31 09:10:24.530 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:10:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:10:25 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/644742043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:10:25 np0005603609 nova_compute[221550]: 2026-01-31 09:10:25.057 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:10:25 np0005603609 nova_compute[221550]: 2026-01-31 09:10:25.064 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:10:25 np0005603609 nova_compute[221550]: 2026-01-31 09:10:25.178 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:10:25 np0005603609 nova_compute[221550]: 2026-01-31 09:10:25.181 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:10:25 np0005603609 nova_compute[221550]: 2026-01-31 09:10:25.182 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:10:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:25.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:10:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:25.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:10:27 np0005603609 nova_compute[221550]: 2026-01-31 09:10:27.349 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:27.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:10:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:27.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:10:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:28 np0005603609 nova_compute[221550]: 2026-01-31 09:10:28.659 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:29 np0005603609 nova_compute[221550]: 2026-01-31 09:10:29.182 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:29 np0005603609 nova_compute[221550]: 2026-01-31 09:10:29.183 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:10:29 np0005603609 nova_compute[221550]: 2026-01-31 09:10:29.183 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:10:29 np0005603609 nova_compute[221550]: 2026-01-31 09:10:29.279 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:10:29 np0005603609 nova_compute[221550]: 2026-01-31 09:10:29.280 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:29 np0005603609 nova_compute[221550]: 2026-01-31 09:10:29.280 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:10:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:29.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:29.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:31.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:31.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:32 np0005603609 nova_compute[221550]: 2026-01-31 09:10:32.352 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:32 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:10:32 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:10:32 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:10:32 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:10:32 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:10:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:33 np0005603609 nova_compute[221550]: 2026-01-31 09:10:33.711 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:10:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:33.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:10:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:10:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:33.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:10:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:35.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:35.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:36 np0005603609 nova_compute[221550]: 2026-01-31 09:10:36.752 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:37 np0005603609 nova_compute[221550]: 2026-01-31 09:10:37.372 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:37.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:37.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:10:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:10:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:38 np0005603609 nova_compute[221550]: 2026-01-31 09:10:38.714 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:39.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:39.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:40 np0005603609 nova_compute[221550]: 2026-01-31 09:10:40.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:41.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:41.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:42 np0005603609 nova_compute[221550]: 2026-01-31 09:10:42.375 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:10:43.300 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=110, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=109) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:10:43 np0005603609 nova_compute[221550]: 2026-01-31 09:10:43.300 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:43 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:10:43.301 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:10:43 np0005603609 nova_compute[221550]: 2026-01-31 09:10:43.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:43 np0005603609 nova_compute[221550]: 2026-01-31 09:10:43.715 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:10:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:43.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:10:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:43.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:44 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:10:44.303 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '110'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:10:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:45.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:45.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:47 np0005603609 nova_compute[221550]: 2026-01-31 09:10:47.378 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:47.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:10:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:47.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:10:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:48 np0005603609 nova_compute[221550]: 2026-01-31 09:10:48.716 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:10:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:49.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:10:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:10:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:49.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:10:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:51.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:51.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:52 np0005603609 nova_compute[221550]: 2026-01-31 09:10:52.381 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:10:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3570653807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:10:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:10:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3570653807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:10:53 np0005603609 nova_compute[221550]: 2026-01-31 09:10:53.718 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:10:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:53.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:10:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:10:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:53.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:10:55 np0005603609 podman[321826]: 2026-01-31 09:10:55.204744599 +0000 UTC m=+0.086540062 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:10:55 np0005603609 podman[321827]: 2026-01-31 09:10:55.204984644 +0000 UTC m=+0.085122236 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:10:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:55.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:55.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:57 np0005603609 nova_compute[221550]: 2026-01-31 09:10:57.384 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:10:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:57.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:10:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:57.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:58 np0005603609 nova_compute[221550]: 2026-01-31 09:10:58.748 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:59.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:10:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:10:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:59.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:01.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:01.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:02 np0005603609 nova_compute[221550]: 2026-01-31 09:11:02.387 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:03 np0005603609 nova_compute[221550]: 2026-01-31 09:11:03.750 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:03.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:11:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:03.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:11:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:05.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:11:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:05.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:07 np0005603609 nova_compute[221550]: 2026-01-31 09:11:07.390 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:07.566 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:07.566 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:07.566 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:07.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:07.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:08 np0005603609 nova_compute[221550]: 2026-01-31 09:11:08.752 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:11:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:09.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:11:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:09.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:11.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:11.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:12 np0005603609 nova_compute[221550]: 2026-01-31 09:11:12.393 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:13 np0005603609 nova_compute[221550]: 2026-01-31 09:11:13.754 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:13.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:13.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:15 np0005603609 nova_compute[221550]: 2026-01-31 09:11:15.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:15 np0005603609 nova_compute[221550]: 2026-01-31 09:11:15.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:11:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:15.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:15.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:17 np0005603609 nova_compute[221550]: 2026-01-31 09:11:17.394 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:17.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:17.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:18 np0005603609 nova_compute[221550]: 2026-01-31 09:11:18.756 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:19.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:19.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:20 np0005603609 ovn_controller[130359]: 2026-01-31T09:11:20Z|01025|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Jan 31 04:11:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:21.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:21.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:22 np0005603609 nova_compute[221550]: 2026-01-31 09:11:22.398 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:22 np0005603609 nova_compute[221550]: 2026-01-31 09:11:22.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:22 np0005603609 nova_compute[221550]: 2026-01-31 09:11:22.857 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:22 np0005603609 nova_compute[221550]: 2026-01-31 09:11:22.858 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:22 np0005603609 nova_compute[221550]: 2026-01-31 09:11:22.858 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:22 np0005603609 nova_compute[221550]: 2026-01-31 09:11:22.858 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:11:22 np0005603609 nova_compute[221550]: 2026-01-31 09:11:22.858 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:11:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:11:23 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3332761948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:11:23 np0005603609 nova_compute[221550]: 2026-01-31 09:11:23.264 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:11:23 np0005603609 nova_compute[221550]: 2026-01-31 09:11:23.392 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:11:23 np0005603609 nova_compute[221550]: 2026-01-31 09:11:23.394 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4156MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:11:23 np0005603609 nova_compute[221550]: 2026-01-31 09:11:23.394 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:23 np0005603609 nova_compute[221550]: 2026-01-31 09:11:23.394 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:23 np0005603609 nova_compute[221550]: 2026-01-31 09:11:23.673 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:11:23 np0005603609 nova_compute[221550]: 2026-01-31 09:11:23.673 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:11:23 np0005603609 nova_compute[221550]: 2026-01-31 09:11:23.698 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:11:23 np0005603609 nova_compute[221550]: 2026-01-31 09:11:23.758 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:11:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:23.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:23.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:11:24 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1859109925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:11:24 np0005603609 nova_compute[221550]: 2026-01-31 09:11:24.122 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:11:24 np0005603609 nova_compute[221550]: 2026-01-31 09:11:24.127 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:11:24 np0005603609 nova_compute[221550]: 2026-01-31 09:11:24.194 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:11:24 np0005603609 nova_compute[221550]: 2026-01-31 09:11:24.195 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:11:24 np0005603609 nova_compute[221550]: 2026-01-31 09:11:24.195 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:25.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:25.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:26 np0005603609 podman[321916]: 2026-01-31 09:11:26.175884666 +0000 UTC m=+0.057327982 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:11:26 np0005603609 nova_compute[221550]: 2026-01-31 09:11:26.196 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:26 np0005603609 podman[321915]: 2026-01-31 09:11:26.263884602 +0000 UTC m=+0.146664851 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 04:11:26 np0005603609 nova_compute[221550]: 2026-01-31 09:11:26.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:26 np0005603609 nova_compute[221550]: 2026-01-31 09:11:26.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:11:27 np0005603609 nova_compute[221550]: 2026-01-31 09:11:27.401 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:27 np0005603609 nova_compute[221550]: 2026-01-31 09:11:27.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:27 np0005603609 nova_compute[221550]: 2026-01-31 09:11:27.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:11:27 np0005603609 nova_compute[221550]: 2026-01-31 09:11:27.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:11:27 np0005603609 nova_compute[221550]: 2026-01-31 09:11:27.710 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:11:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:11:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:27.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:11:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:27.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:28 np0005603609 nova_compute[221550]: 2026-01-31 09:11:28.070 221554 DEBUG oslo_concurrency.lockutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "8a396bde-ca33-406e-a877-1d3e33fec3e7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:28 np0005603609 nova_compute[221550]: 2026-01-31 09:11:28.071 221554 DEBUG oslo_concurrency.lockutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "8a396bde-ca33-406e-a877-1d3e33fec3e7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:28 np0005603609 nova_compute[221550]: 2026-01-31 09:11:28.139 221554 DEBUG nova.compute.manager [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 04:11:28 np0005603609 nova_compute[221550]: 2026-01-31 09:11:28.699 221554 DEBUG oslo_concurrency.lockutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:28 np0005603609 nova_compute[221550]: 2026-01-31 09:11:28.699 221554 DEBUG oslo_concurrency.lockutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:28 np0005603609 nova_compute[221550]: 2026-01-31 09:11:28.707 221554 DEBUG nova.virt.hardware [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:11:28 np0005603609 nova_compute[221550]: 2026-01-31 09:11:28.707 221554 INFO nova.compute.claims [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Claim successful on node compute-1.ctlplane.example.com#033[00m
Jan 31 04:11:28 np0005603609 nova_compute[221550]: 2026-01-31 09:11:28.761 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:29 np0005603609 nova_compute[221550]: 2026-01-31 09:11:29.045 221554 DEBUG oslo_concurrency.processutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:11:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:11:29 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2631700456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:11:29 np0005603609 nova_compute[221550]: 2026-01-31 09:11:29.500 221554 DEBUG oslo_concurrency.processutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:11:29 np0005603609 nova_compute[221550]: 2026-01-31 09:11:29.506 221554 DEBUG nova.compute.provider_tree [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:11:29 np0005603609 nova_compute[221550]: 2026-01-31 09:11:29.533 221554 DEBUG nova.scheduler.client.report [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:11:29 np0005603609 nova_compute[221550]: 2026-01-31 09:11:29.594 221554 DEBUG oslo_concurrency.lockutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:29 np0005603609 nova_compute[221550]: 2026-01-31 09:11:29.595 221554 DEBUG nova.compute.manager [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 04:11:29 np0005603609 nova_compute[221550]: 2026-01-31 09:11:29.734 221554 DEBUG nova.compute.manager [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 04:11:29 np0005603609 nova_compute[221550]: 2026-01-31 09:11:29.734 221554 DEBUG nova.network.neutron [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 04:11:29 np0005603609 nova_compute[221550]: 2026-01-31 09:11:29.805 221554 INFO nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 04:11:29 np0005603609 nova_compute[221550]: 2026-01-31 09:11:29.879 221554 DEBUG nova.compute.manager [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 04:11:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:29.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:29.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.015 221554 INFO nova.virt.block_device [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Booting with volume 2af2ded2-cbd7-44f4-b8e4-72d974e773dd at /dev/vda#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.345 221554 DEBUG os_brick.utils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.349 221554 DEBUG nova.policy [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecd39871d7fd438f88b36601f25d6eb6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98d10c0290e340a08e9d1726bf0066bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.347 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.354 226739 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.355 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc63f4e-12d3-426b-8bf8-1f8e5362a02e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.356 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.360 226739 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.360 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[918c3003-5553-4275-8eba-bad9d35471e7]: (4, ('InitiatorName=iqn.1994-05.com.redhat:1765b9b6275c', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.362 226739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.368 226739 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.368 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[6e23f014-c9ff-4964-bab9-c9258d724abf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.370 226739 DEBUG oslo.privsep.daemon [-] privsep: reply[e7322c3f-3ddd-4771-bb83-174abc1c787a]: (4, '231927d4-1ded-4b84-843c-456d697af567') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.371 221554 DEBUG oslo_concurrency.processutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.391 221554 DEBUG oslo_concurrency.processutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.393 221554 DEBUG os_brick.initiator.connectors.lightos [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.393 221554 DEBUG os_brick.initiator.connectors.lightos [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.394 221554 DEBUG os_brick.initiator.connectors.lightos [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.394 221554 DEBUG os_brick.utils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] <== get_connector_properties: return (47ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:1765b9b6275c', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '231927d4-1ded-4b84-843c-456d697af567', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 04:11:30 np0005603609 nova_compute[221550]: 2026-01-31 09:11:30.394 221554 DEBUG nova.virt.block_device [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Updating existing volume attachment record: db0d205e-31f6-4e20-96ba-6275d6869991 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 04:11:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:11:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:31.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:31.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:32 np0005603609 nova_compute[221550]: 2026-01-31 09:11:32.403 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:33.227 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=111, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=110) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:11:33 np0005603609 nova_compute[221550]: 2026-01-31 09:11:33.227 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:33.228 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:11:33 np0005603609 nova_compute[221550]: 2026-01-31 09:11:33.489 221554 DEBUG nova.compute.manager [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 04:11:33 np0005603609 nova_compute[221550]: 2026-01-31 09:11:33.491 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 04:11:33 np0005603609 nova_compute[221550]: 2026-01-31 09:11:33.491 221554 INFO nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Creating image(s)#033[00m
Jan 31 04:11:33 np0005603609 nova_compute[221550]: 2026-01-31 09:11:33.492 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 04:11:33 np0005603609 nova_compute[221550]: 2026-01-31 09:11:33.492 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Ensure instance console log exists: /var/lib/nova/instances/8a396bde-ca33-406e-a877-1d3e33fec3e7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:11:33 np0005603609 nova_compute[221550]: 2026-01-31 09:11:33.492 221554 DEBUG oslo_concurrency.lockutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:33 np0005603609 nova_compute[221550]: 2026-01-31 09:11:33.492 221554 DEBUG oslo_concurrency.lockutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:33 np0005603609 nova_compute[221550]: 2026-01-31 09:11:33.493 221554 DEBUG oslo_concurrency.lockutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:33 np0005603609 nova_compute[221550]: 2026-01-31 09:11:33.635 221554 DEBUG nova.network.neutron [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Successfully created port: 0d4abe9d-8315-4af6-bd71-3d285948fcfa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 04:11:33 np0005603609 nova_compute[221550]: 2026-01-31 09:11:33.763 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:11:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:33.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:11:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:33.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:35.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:35.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:36 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:36.230 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '111'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:11:37 np0005603609 nova_compute[221550]: 2026-01-31 09:11:37.405 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:37 np0005603609 nova_compute[221550]: 2026-01-31 09:11:37.704 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:11:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:37.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:37.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:38 np0005603609 nova_compute[221550]: 2026-01-31 09:11:38.764 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:11:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:11:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:11:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:11:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:11:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:11:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:39.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:39.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:41 np0005603609 nova_compute[221550]: 2026-01-31 09:11:41.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:11:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:41.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:41.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:42 np0005603609 nova_compute[221550]: 2026-01-31 09:11:42.408 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:43 np0005603609 nova_compute[221550]: 2026-01-31 09:11:43.713 221554 DEBUG nova.network.neutron [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Successfully updated port: 0d4abe9d-8315-4af6-bd71-3d285948fcfa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 04:11:43 np0005603609 nova_compute[221550]: 2026-01-31 09:11:43.766 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:43 np0005603609 nova_compute[221550]: 2026-01-31 09:11:43.803 221554 DEBUG oslo_concurrency.lockutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "refresh_cache-8a396bde-ca33-406e-a877-1d3e33fec3e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:11:43 np0005603609 nova_compute[221550]: 2026-01-31 09:11:43.804 221554 DEBUG oslo_concurrency.lockutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquired lock "refresh_cache-8a396bde-ca33-406e-a877-1d3e33fec3e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:11:43 np0005603609 nova_compute[221550]: 2026-01-31 09:11:43.804 221554 DEBUG nova.network.neutron [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:11:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:11:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:43.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:43 np0005603609 nova_compute[221550]: 2026-01-31 09:11:43.934 221554 DEBUG nova.compute.manager [req-2947f390-64e2-471c-adf6-2589ceccba5f req-4a376798-6e58-4305-87de-a1703fa7b0df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Received event network-changed-0d4abe9d-8315-4af6-bd71-3d285948fcfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:11:43 np0005603609 nova_compute[221550]: 2026-01-31 09:11:43.935 221554 DEBUG nova.compute.manager [req-2947f390-64e2-471c-adf6-2589ceccba5f req-4a376798-6e58-4305-87de-a1703fa7b0df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Refreshing instance network info cache due to event network-changed-0d4abe9d-8315-4af6-bd71-3d285948fcfa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:11:43 np0005603609 nova_compute[221550]: 2026-01-31 09:11:43.935 221554 DEBUG oslo_concurrency.lockutils [req-2947f390-64e2-471c-adf6-2589ceccba5f req-4a376798-6e58-4305-87de-a1703fa7b0df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-8a396bde-ca33-406e-a877-1d3e33fec3e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:11:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:11:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:43.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:44 np0005603609 nova_compute[221550]: 2026-01-31 09:11:44.063 221554 DEBUG nova.network.neutron [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.359 221554 DEBUG nova.network.neutron [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Updating instance_info_cache with network_info: [{"id": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "address": "fa:16:3e:db:66:ae", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d4abe9d-83", "ovs_interfaceid": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.414 221554 DEBUG oslo_concurrency.lockutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Releasing lock "refresh_cache-8a396bde-ca33-406e-a877-1d3e33fec3e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.415 221554 DEBUG nova.compute.manager [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Instance network_info: |[{"id": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "address": "fa:16:3e:db:66:ae", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d4abe9d-83", "ovs_interfaceid": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.415 221554 DEBUG oslo_concurrency.lockutils [req-2947f390-64e2-471c-adf6-2589ceccba5f req-4a376798-6e58-4305-87de-a1703fa7b0df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-8a396bde-ca33-406e-a877-1d3e33fec3e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.415 221554 DEBUG nova.network.neutron [req-2947f390-64e2-471c-adf6-2589ceccba5f req-4a376798-6e58-4305-87de-a1703fa7b0df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Refreshing network info cache for port 0d4abe9d-8315-4af6-bd71-3d285948fcfa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.419 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Start _get_guest_xml network_info=[{"id": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "address": "fa:16:3e:db:66:ae", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d4abe9d-83", "ovs_interfaceid": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': 'db0d205e-31f6-4e20-96ba-6275d6869991', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2af2ded2-cbd7-44f4-b8e4-72d974e773dd', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2af2ded2-cbd7-44f4-b8e4-72d974e773dd', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '8a396bde-ca33-406e-a877-1d3e33fec3e7', 'attached_at': '', 'detached_at': '', 'volume_id': '2af2ded2-cbd7-44f4-b8e4-72d974e773dd', 'serial': '2af2ded2-cbd7-44f4-b8e4-72d974e773dd'}, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'guest_format': None, 'mount_device': '/dev/vda', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.423 221554 WARNING nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.427 221554 DEBUG nova.virt.libvirt.host [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.428 221554 DEBUG nova.virt.libvirt.host [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.430 221554 DEBUG nova.virt.libvirt.host [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.431 221554 DEBUG nova.virt.libvirt.host [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.432 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.432 221554 DEBUG nova.virt.hardware [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.433 221554 DEBUG nova.virt.hardware [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.433 221554 DEBUG nova.virt.hardware [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.433 221554 DEBUG nova.virt.hardware [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.434 221554 DEBUG nova.virt.hardware [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.434 221554 DEBUG nova.virt.hardware [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.434 221554 DEBUG nova.virt.hardware [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.435 221554 DEBUG nova.virt.hardware [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.435 221554 DEBUG nova.virt.hardware [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.435 221554 DEBUG nova.virt.hardware [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.436 221554 DEBUG nova.virt.hardware [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.467 221554 DEBUG nova.storage.rbd_utils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image 8a396bde-ca33-406e-a877-1d3e33fec3e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.471 221554 DEBUG oslo_concurrency.processutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:11:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3893306248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:11:45 np0005603609 nova_compute[221550]: 2026-01-31 09:11:45.887 221554 DEBUG oslo_concurrency.processutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:11:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:45.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:45.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.048 221554 DEBUG os_brick.encryptors [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Using volume encryption metadata '{'encryption_key_id': 'ec92ef3b-edd6-40a0-a6ad-12d162206d8b', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2af2ded2-cbd7-44f4-b8e4-72d974e773dd', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2af2ded2-cbd7-44f4-b8e4-72d974e773dd', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '8a396bde-ca33-406e-a877-1d3e33fec3e7', 'attached_at': '', 'detached_at': '', 'volume_id': '2af2ded2-cbd7-44f4-b8e4-72d974e773dd', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.050 221554 DEBUG barbicanclient.client [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.077 221554 DEBUG barbicanclient.v1.secrets [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/ec92ef3b-edd6-40a0-a6ad-12d162206d8b get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.077 221554 INFO barbicanclient.base [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Calculated Secrets uuid ref: secrets/ec92ef3b-edd6-40a0-a6ad-12d162206d8b#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.117 221554 DEBUG barbicanclient.client [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.118 221554 INFO barbicanclient.base [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Calculated Secrets uuid ref: secrets/ec92ef3b-edd6-40a0-a6ad-12d162206d8b#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.173 221554 DEBUG barbicanclient.client [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.174 221554 INFO barbicanclient.base [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Calculated Secrets uuid ref: secrets/ec92ef3b-edd6-40a0-a6ad-12d162206d8b#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.223 221554 DEBUG barbicanclient.client [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.224 221554 INFO barbicanclient.base [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Calculated Secrets uuid ref: secrets/ec92ef3b-edd6-40a0-a6ad-12d162206d8b#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.246 221554 DEBUG barbicanclient.client [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.247 221554 INFO barbicanclient.base [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Calculated Secrets uuid ref: secrets/ec92ef3b-edd6-40a0-a6ad-12d162206d8b#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.280 221554 DEBUG barbicanclient.client [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.280 221554 INFO barbicanclient.base [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Calculated Secrets uuid ref: secrets/ec92ef3b-edd6-40a0-a6ad-12d162206d8b#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.315 221554 DEBUG barbicanclient.client [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.316 221554 INFO barbicanclient.base [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Calculated Secrets uuid ref: secrets/ec92ef3b-edd6-40a0-a6ad-12d162206d8b#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.354 221554 DEBUG barbicanclient.client [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.354 221554 INFO barbicanclient.base [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Calculated Secrets uuid ref: secrets/ec92ef3b-edd6-40a0-a6ad-12d162206d8b#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.385 221554 DEBUG barbicanclient.client [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.387 221554 INFO barbicanclient.base [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Calculated Secrets uuid ref: secrets/ec92ef3b-edd6-40a0-a6ad-12d162206d8b#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.413 221554 DEBUG barbicanclient.client [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.414 221554 INFO barbicanclient.base [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Calculated Secrets uuid ref: secrets/ec92ef3b-edd6-40a0-a6ad-12d162206d8b#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.435 221554 DEBUG barbicanclient.client [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.436 221554 INFO barbicanclient.base [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Calculated Secrets uuid ref: secrets/ec92ef3b-edd6-40a0-a6ad-12d162206d8b#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.481 221554 DEBUG barbicanclient.client [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.482 221554 INFO barbicanclient.base [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Calculated Secrets uuid ref: secrets/ec92ef3b-edd6-40a0-a6ad-12d162206d8b#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.533 221554 DEBUG barbicanclient.client [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.534 221554 INFO barbicanclient.base [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Calculated Secrets uuid ref: secrets/ec92ef3b-edd6-40a0-a6ad-12d162206d8b#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.565 221554 DEBUG barbicanclient.client [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.566 221554 INFO barbicanclient.base [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Calculated Secrets uuid ref: secrets/ec92ef3b-edd6-40a0-a6ad-12d162206d8b#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.587 221554 DEBUG barbicanclient.client [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.588 221554 INFO barbicanclient.base [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Calculated Secrets uuid ref: secrets/ec92ef3b-edd6-40a0-a6ad-12d162206d8b#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.608 221554 DEBUG barbicanclient.client [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 04:11:46 np0005603609 nova_compute[221550]: 2026-01-31 09:11:46.609 221554 DEBUG nova.virt.libvirt.host [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 31 04:11:46 np0005603609 nova_compute[221550]:  <usage type="volume">
Jan 31 04:11:46 np0005603609 nova_compute[221550]:    <volume>2af2ded2-cbd7-44f4-b8e4-72d974e773dd</volume>
Jan 31 04:11:46 np0005603609 nova_compute[221550]:  </usage>
Jan 31 04:11:46 np0005603609 nova_compute[221550]: </secret>
Jan 31 04:11:46 np0005603609 nova_compute[221550]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 31 04:11:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:11:46 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:11:47 np0005603609 nova_compute[221550]: 2026-01-31 09:11:47.411 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:11:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:47.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:11:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:11:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:47.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.125 221554 DEBUG nova.virt.libvirt.vif [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:11:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1485656873',display_name='tempest-TestVolumeBootPattern-server-1485656873',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1485656873',id=218,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-0w9rey0s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:11:29Z,user_data=None,user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=8a396bde-ca33-406e-a877-1d3e33fec3e7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "address": "fa:16:3e:db:66:ae", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d4abe9d-83", "ovs_interfaceid": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.126 221554 DEBUG nova.network.os_vif_util [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "address": "fa:16:3e:db:66:ae", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d4abe9d-83", "ovs_interfaceid": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.127 221554 DEBUG nova.network.os_vif_util [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:66:ae,bridge_name='br-int',has_traffic_filtering=True,id=0d4abe9d-8315-4af6-bd71-3d285948fcfa,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d4abe9d-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.128 221554 DEBUG nova.objects.instance [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 8a396bde-ca33-406e-a877-1d3e33fec3e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:11:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.159 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  <uuid>8a396bde-ca33-406e-a877-1d3e33fec3e7</uuid>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  <name>instance-000000da</name>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  <memory>131072</memory>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  <vcpu>1</vcpu>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  <metadata>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <nova:name>tempest-TestVolumeBootPattern-server-1485656873</nova:name>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <nova:creationTime>2026-01-31 09:11:45</nova:creationTime>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <nova:flavor name="m1.nano">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <nova:memory>128</nova:memory>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <nova:disk>1</nova:disk>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <nova:swap>0</nova:swap>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      </nova:flavor>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <nova:owner>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <nova:user uuid="ecd39871d7fd438f88b36601f25d6eb6">tempest-TestVolumeBootPattern-1294459393-project-member</nova:user>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <nova:project uuid="98d10c0290e340a08e9d1726bf0066bf">tempest-TestVolumeBootPattern-1294459393</nova:project>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      </nova:owner>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <nova:ports>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <nova:port uuid="0d4abe9d-8315-4af6-bd71-3d285948fcfa">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        </nova:port>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      </nova:ports>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    </nova:instance>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  </metadata>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  <sysinfo type="smbios">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <system>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <entry name="serial">8a396bde-ca33-406e-a877-1d3e33fec3e7</entry>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <entry name="uuid">8a396bde-ca33-406e-a877-1d3e33fec3e7</entry>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    </system>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  </sysinfo>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  <os>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <boot dev="hd"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <smbios mode="sysinfo"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  </os>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  <features>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <acpi/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <apic/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <vmcoreinfo/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  </features>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  <clock offset="utc">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <timer name="hpet" present="no"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  </clock>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  <cpu mode="custom" match="exact">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <model>Nehalem</model>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  </cpu>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  <devices>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <disk type="network" device="cdrom">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <driver type="raw" cache="none"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="vms/8a396bde-ca33-406e-a877-1d3e33fec3e7_disk.config">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      </source>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      </auth>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <target dev="sda" bus="sata"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    </disk>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <disk type="network" device="disk">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <source protocol="rbd" name="volumes/volume-2af2ded2-cbd7-44f4-b8e4-72d974e773dd">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      </source>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <auth username="openstack">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      </auth>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <target dev="vda" bus="virtio"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <serial>2af2ded2-cbd7-44f4-b8e4-72d974e773dd</serial>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <encryption format="luks">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:        <secret type="passphrase" uuid="1b3a3e27-b907-4a3a-97dc-5719f14d58a2"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      </encryption>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    </disk>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <interface type="ethernet">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <mac address="fa:16:3e:db:66:ae"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <mtu size="1442"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <target dev="tap0d4abe9d-83"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    </interface>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <serial type="pty">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <log file="/var/lib/nova/instances/8a396bde-ca33-406e-a877-1d3e33fec3e7/console.log" append="off"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    </serial>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <video>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <model type="virtio"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    </video>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <input type="tablet" bus="usb"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <rng model="virtio">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    </rng>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <controller type="usb" index="0"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    <memballoon model="virtio">
Jan 31 04:11:48 np0005603609 nova_compute[221550]:      <stats period="10"/>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:    </memballoon>
Jan 31 04:11:48 np0005603609 nova_compute[221550]:  </devices>
Jan 31 04:11:48 np0005603609 nova_compute[221550]: </domain>
Jan 31 04:11:48 np0005603609 nova_compute[221550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.161 221554 DEBUG nova.compute.manager [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Preparing to wait for external event network-vif-plugged-0d4abe9d-8315-4af6-bd71-3d285948fcfa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.162 221554 DEBUG oslo_concurrency.lockutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.162 221554 DEBUG oslo_concurrency.lockutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.163 221554 DEBUG oslo_concurrency.lockutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.164 221554 DEBUG nova.virt.libvirt.vif [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:11:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1485656873',display_name='tempest-TestVolumeBootPattern-server-1485656873',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1485656873',id=218,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-0w9rey0s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:11:29Z,user_data=
None,user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=8a396bde-ca33-406e-a877-1d3e33fec3e7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "address": "fa:16:3e:db:66:ae", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d4abe9d-83", "ovs_interfaceid": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.165 221554 DEBUG nova.network.os_vif_util [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "address": "fa:16:3e:db:66:ae", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d4abe9d-83", "ovs_interfaceid": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.166 221554 DEBUG nova.network.os_vif_util [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:66:ae,bridge_name='br-int',has_traffic_filtering=True,id=0d4abe9d-8315-4af6-bd71-3d285948fcfa,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d4abe9d-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.167 221554 DEBUG os_vif [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:66:ae,bridge_name='br-int',has_traffic_filtering=True,id=0d4abe9d-8315-4af6-bd71-3d285948fcfa,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d4abe9d-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.168 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.169 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.170 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.175 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.175 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d4abe9d-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.176 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d4abe9d-83, col_values=(('external_ids', {'iface-id': '0d4abe9d-8315-4af6-bd71-3d285948fcfa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:66:ae', 'vm-uuid': '8a396bde-ca33-406e-a877-1d3e33fec3e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.178 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:48 np0005603609 NetworkManager[49064]: <info>  [1769850708.1800] manager: (tap0d4abe9d-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.180 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.186 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.188 221554 INFO os_vif [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:66:ae,bridge_name='br-int',has_traffic_filtering=True,id=0d4abe9d-8315-4af6-bd71-3d285948fcfa,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d4abe9d-83')#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.292 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.293 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.293 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No VIF found with MAC fa:16:3e:db:66:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.294 221554 INFO nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Using config drive#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.329 221554 DEBUG nova.storage.rbd_utils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image 8a396bde-ca33-406e-a877-1d3e33fec3e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.768 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.839 221554 DEBUG nova.network.neutron [req-2947f390-64e2-471c-adf6-2589ceccba5f req-4a376798-6e58-4305-87de-a1703fa7b0df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Updated VIF entry in instance network info cache for port 0d4abe9d-8315-4af6-bd71-3d285948fcfa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.839 221554 DEBUG nova.network.neutron [req-2947f390-64e2-471c-adf6-2589ceccba5f req-4a376798-6e58-4305-87de-a1703fa7b0df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Updating instance_info_cache with network_info: [{"id": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "address": "fa:16:3e:db:66:ae", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d4abe9d-83", "ovs_interfaceid": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:11:48 np0005603609 nova_compute[221550]: 2026-01-31 09:11:48.875 221554 DEBUG oslo_concurrency.lockutils [req-2947f390-64e2-471c-adf6-2589ceccba5f req-4a376798-6e58-4305-87de-a1703fa7b0df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-8a396bde-ca33-406e-a877-1d3e33fec3e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:11:49 np0005603609 nova_compute[221550]: 2026-01-31 09:11:49.327 221554 INFO nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Creating config drive at /var/lib/nova/instances/8a396bde-ca33-406e-a877-1d3e33fec3e7/disk.config#033[00m
Jan 31 04:11:49 np0005603609 nova_compute[221550]: 2026-01-31 09:11:49.336 221554 DEBUG oslo_concurrency.processutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8a396bde-ca33-406e-a877-1d3e33fec3e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpayp81ls6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:11:49 np0005603609 nova_compute[221550]: 2026-01-31 09:11:49.478 221554 DEBUG oslo_concurrency.processutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8a396bde-ca33-406e-a877-1d3e33fec3e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpayp81ls6" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:11:49 np0005603609 nova_compute[221550]: 2026-01-31 09:11:49.519 221554 DEBUG nova.storage.rbd_utils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image 8a396bde-ca33-406e-a877-1d3e33fec3e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:11:49 np0005603609 nova_compute[221550]: 2026-01-31 09:11:49.524 221554 DEBUG oslo_concurrency.processutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8a396bde-ca33-406e-a877-1d3e33fec3e7/disk.config 8a396bde-ca33-406e-a877-1d3e33fec3e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:11:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:49.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:50.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:50 np0005603609 nova_compute[221550]: 2026-01-31 09:11:50.936 221554 DEBUG oslo_concurrency.processutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8a396bde-ca33-406e-a877-1d3e33fec3e7/disk.config 8a396bde-ca33-406e-a877-1d3e33fec3e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:11:50 np0005603609 nova_compute[221550]: 2026-01-31 09:11:50.937 221554 INFO nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Deleting local config drive /var/lib/nova/instances/8a396bde-ca33-406e-a877-1d3e33fec3e7/disk.config because it was imported into RBD.#033[00m
Jan 31 04:11:50 np0005603609 kernel: tap0d4abe9d-83: entered promiscuous mode
Jan 31 04:11:50 np0005603609 NetworkManager[49064]: <info>  [1769850710.9826] manager: (tap0d4abe9d-83): new Tun device (/org/freedesktop/NetworkManager/Devices/482)
Jan 31 04:11:50 np0005603609 nova_compute[221550]: 2026-01-31 09:11:50.982 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:50 np0005603609 ovn_controller[130359]: 2026-01-31T09:11:50Z|01026|binding|INFO|Claiming lport 0d4abe9d-8315-4af6-bd71-3d285948fcfa for this chassis.
Jan 31 04:11:50 np0005603609 ovn_controller[130359]: 2026-01-31T09:11:50Z|01027|binding|INFO|0d4abe9d-8315-4af6-bd71-3d285948fcfa: Claiming fa:16:3e:db:66:ae 10.100.0.13
Jan 31 04:11:50 np0005603609 nova_compute[221550]: 2026-01-31 09:11:50.987 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:50 np0005603609 nova_compute[221550]: 2026-01-31 09:11:50.990 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:51 np0005603609 systemd-udevd[322284]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:11:51 np0005603609 systemd-machined[190912]: New machine qemu-119-instance-000000da.
Jan 31 04:11:51 np0005603609 ovn_controller[130359]: 2026-01-31T09:11:51Z|01028|binding|INFO|Setting lport 0d4abe9d-8315-4af6-bd71-3d285948fcfa ovn-installed in OVS
Jan 31 04:11:51 np0005603609 nova_compute[221550]: 2026-01-31 09:11:51.015 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:51 np0005603609 NetworkManager[49064]: <info>  [1769850711.0177] device (tap0d4abe9d-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:11:51 np0005603609 ovn_controller[130359]: 2026-01-31T09:11:51Z|01029|binding|INFO|Setting lport 0d4abe9d-8315-4af6-bd71-3d285948fcfa up in Southbound
Jan 31 04:11:51 np0005603609 systemd[1]: Started Virtual Machine qemu-119-instance-000000da.
Jan 31 04:11:51 np0005603609 NetworkManager[49064]: <info>  [1769850711.0196] device (tap0d4abe9d-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.018 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:66:ae 10.100.0.13'], port_security=['fa:16:3e:db:66:ae 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8a396bde-ca33-406e-a877-1d3e33fec3e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c9ca540-57e7-412d-8ef3-af923db0a265', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98d10c0290e340a08e9d1726bf0066bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bffa9ac-c434-4d26-bfda-d8e35c99a8ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e88fefe4-17cc-4664-bc86-8614a5f025ec, chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=0d4abe9d-8315-4af6-bd71-3d285948fcfa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.019 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 0d4abe9d-8315-4af6-bd71-3d285948fcfa in datapath 5c9ca540-57e7-412d-8ef3-af923db0a265 bound to our chassis#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.020 140058 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c9ca540-57e7-412d-8ef3-af923db0a265#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.028 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb9dda0-3514-49de-8ead-c814bdc6eae4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.029 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c9ca540-51 in ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.031 224823 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c9ca540-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.031 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8f99bb-9e55-4f4c-a57a-783dc928866d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.032 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ac68aa-f60d-48bb-9549-d6a9b9c68c86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.041 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc91ba3-39b7-45eb-9ffe-95de3a02dcb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.063 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e219b0e7-e198-4489-8363-c461d3d0ebbb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.081 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[6a24807d-ca2c-490a-b8d7-27aef5284cb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.088 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5511426d-aa72-4d48-b9a8-aa1b5a597fc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:51 np0005603609 NetworkManager[49064]: <info>  [1769850711.0891] manager: (tap5c9ca540-50): new Veth device (/org/freedesktop/NetworkManager/Devices/483)
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.109 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[532ffe05-44da-49ac-949c-e35012bb0bf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.116 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc17c40-e03c-465f-b9c1-56067c1f2dab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:51 np0005603609 NetworkManager[49064]: <info>  [1769850711.1350] device (tap5c9ca540-50): carrier: link connected
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.139 224837 DEBUG oslo.privsep.daemon [-] privsep: reply[ba643277-1030-4a61-9d6a-cfcb8a45b645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.151 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[73d78aa2-ca56-4ac4-b6e6-a475b4ce0ced]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c9ca540-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:dc:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1095083, 'reachable_time': 34183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322317, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.162 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[59a2942b-578f-4d77-987d-d58840856ef8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:dcf7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1095083, 'tstamp': 1095083}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322318, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.173 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd3ad23-d112-4b7d-b30f-eb23a6cdcc93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c9ca540-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:dc:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1095083, 'reachable_time': 34183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322319, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.192 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7055a487-949c-412f-990e-2ca556250d90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.231 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd5c709-e4f0-4b99-9a1e-fb189e0a1b0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.232 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c9ca540-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.232 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.232 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c9ca540-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:11:51 np0005603609 nova_compute[221550]: 2026-01-31 09:11:51.234 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:51 np0005603609 kernel: tap5c9ca540-50: entered promiscuous mode
Jan 31 04:11:51 np0005603609 NetworkManager[49064]: <info>  [1769850711.2429] manager: (tap5c9ca540-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/484)
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.246 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c9ca540-50, col_values=(('external_ids', {'iface-id': '016c97be-36ee-470a-8bac-28db98577a8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:11:51 np0005603609 ovn_controller[130359]: 2026-01-31T09:11:51Z|01030|binding|INFO|Releasing lport 016c97be-36ee-470a-8bac-28db98577a8c from this chassis (sb_readonly=0)
Jan 31 04:11:51 np0005603609 nova_compute[221550]: 2026-01-31 09:11:51.247 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.248 140058 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.252 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[435ac2e7-c248-45ec-8d14-6db8f9c37491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.252 140058 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: global
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    log         /dev/log local0 debug
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    log-tag     haproxy-metadata-proxy-5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    user        root
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    group       root
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    maxconn     1024
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    pidfile     /var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    daemon
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: defaults
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    log global
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    mode http
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    option httplog
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    option dontlognull
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    option http-server-close
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    option forwardfor
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    retries                 3
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    timeout http-request    30s
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    timeout connect         30s
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    timeout client          32s
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    timeout server          32s
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    timeout http-keep-alive 30s
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: listen listener
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    bind 169.254.169.254:80
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]:    http-request add-header X-OVN-Network-ID 5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:11:51 np0005603609 nova_compute[221550]: 2026-01-31 09:11:51.253 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:11:51.253 140058 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'env', 'PROCESS_TAG=haproxy-5c9ca540-57e7-412d-8ef3-af923db0a265', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c9ca540-57e7-412d-8ef3-af923db0a265.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:11:51 np0005603609 podman[322369]: 2026-01-31 09:11:51.562153416 +0000 UTC m=+0.073821958 container create 8a5a1237a5e3c16409cdaed00da900f3074b3cac47754f9f233cbeaa75099472 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:11:51 np0005603609 podman[322369]: 2026-01-31 09:11:51.508418323 +0000 UTC m=+0.020086905 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:11:51 np0005603609 systemd[1]: Started libpod-conmon-8a5a1237a5e3c16409cdaed00da900f3074b3cac47754f9f233cbeaa75099472.scope.
Jan 31 04:11:51 np0005603609 systemd[1]: Started libcrun container.
Jan 31 04:11:51 np0005603609 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2c787e7909f18cec3e873064dcf2406d335809338304056cdc4367a69aab94/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:11:51 np0005603609 podman[322369]: 2026-01-31 09:11:51.650492161 +0000 UTC m=+0.162160743 container init 8a5a1237a5e3c16409cdaed00da900f3074b3cac47754f9f233cbeaa75099472 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:11:51 np0005603609 podman[322369]: 2026-01-31 09:11:51.654465339 +0000 UTC m=+0.166133901 container start 8a5a1237a5e3c16409cdaed00da900f3074b3cac47754f9f233cbeaa75099472 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:11:51 np0005603609 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[322401]: [NOTICE]   (322405) : New worker (322407) forked
Jan 31 04:11:51 np0005603609 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[322401]: [NOTICE]   (322405) : Loading success.
Jan 31 04:11:51 np0005603609 nova_compute[221550]: 2026-01-31 09:11:51.746 221554 DEBUG nova.compute.manager [req-4b0581e0-b10c-4e02-9191-ef598837c38b req-d221bf6c-0181-4754-b3b4-2ce3cbeca051 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Received event network-vif-plugged-0d4abe9d-8315-4af6-bd71-3d285948fcfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:11:51 np0005603609 nova_compute[221550]: 2026-01-31 09:11:51.747 221554 DEBUG oslo_concurrency.lockutils [req-4b0581e0-b10c-4e02-9191-ef598837c38b req-d221bf6c-0181-4754-b3b4-2ce3cbeca051 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:51 np0005603609 nova_compute[221550]: 2026-01-31 09:11:51.747 221554 DEBUG oslo_concurrency.lockutils [req-4b0581e0-b10c-4e02-9191-ef598837c38b req-d221bf6c-0181-4754-b3b4-2ce3cbeca051 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:51 np0005603609 nova_compute[221550]: 2026-01-31 09:11:51.747 221554 DEBUG oslo_concurrency.lockutils [req-4b0581e0-b10c-4e02-9191-ef598837c38b req-d221bf6c-0181-4754-b3b4-2ce3cbeca051 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:51 np0005603609 nova_compute[221550]: 2026-01-31 09:11:51.747 221554 DEBUG nova.compute.manager [req-4b0581e0-b10c-4e02-9191-ef598837c38b req-d221bf6c-0181-4754-b3b4-2ce3cbeca051 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Processing event network-vif-plugged-0d4abe9d-8315-4af6-bd71-3d285948fcfa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:11:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:51.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:52.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.180 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:11:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4140901243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:11:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:11:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4140901243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.770 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.817 221554 DEBUG nova.compute.manager [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.817 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850713.816559, 8a396bde-ca33-406e-a877-1d3e33fec3e7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.817 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] VM Started (Lifecycle Event)#033[00m
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.821 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.824 221554 INFO nova.virt.libvirt.driver [-] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Instance spawned successfully.#033[00m
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.825 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.863 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.867 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.867 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.868 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.868 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.868 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.869 221554 DEBUG nova.virt.libvirt.driver [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.873 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:11:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:53.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.965 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.965 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850713.8166673, 8a396bde-ca33-406e-a877-1d3e33fec3e7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:11:53 np0005603609 nova_compute[221550]: 2026-01-31 09:11:53.966 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:11:54 np0005603609 nova_compute[221550]: 2026-01-31 09:11:54.001 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:11:54 np0005603609 nova_compute[221550]: 2026-01-31 09:11:54.004 221554 DEBUG nova.virt.driver [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] Emitting event <LifecycleEvent: 1769850713.8207572, 8a396bde-ca33-406e-a877-1d3e33fec3e7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:11:54 np0005603609 nova_compute[221550]: 2026-01-31 09:11:54.004 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:11:54 np0005603609 nova_compute[221550]: 2026-01-31 09:11:54.007 221554 DEBUG nova.compute.manager [req-931f2485-1131-43b2-b88b-1cf80f8dc488 req-1c56fc65-160d-49f2-aff7-b51adc191ea3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Received event network-vif-plugged-0d4abe9d-8315-4af6-bd71-3d285948fcfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:11:54 np0005603609 nova_compute[221550]: 2026-01-31 09:11:54.007 221554 DEBUG oslo_concurrency.lockutils [req-931f2485-1131-43b2-b88b-1cf80f8dc488 req-1c56fc65-160d-49f2-aff7-b51adc191ea3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:54 np0005603609 nova_compute[221550]: 2026-01-31 09:11:54.007 221554 DEBUG oslo_concurrency.lockutils [req-931f2485-1131-43b2-b88b-1cf80f8dc488 req-1c56fc65-160d-49f2-aff7-b51adc191ea3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:54 np0005603609 nova_compute[221550]: 2026-01-31 09:11:54.007 221554 DEBUG oslo_concurrency.lockutils [req-931f2485-1131-43b2-b88b-1cf80f8dc488 req-1c56fc65-160d-49f2-aff7-b51adc191ea3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:54 np0005603609 nova_compute[221550]: 2026-01-31 09:11:54.008 221554 DEBUG nova.compute.manager [req-931f2485-1131-43b2-b88b-1cf80f8dc488 req-1c56fc65-160d-49f2-aff7-b51adc191ea3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] No waiting events found dispatching network-vif-plugged-0d4abe9d-8315-4af6-bd71-3d285948fcfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:11:54 np0005603609 nova_compute[221550]: 2026-01-31 09:11:54.008 221554 WARNING nova.compute.manager [req-931f2485-1131-43b2-b88b-1cf80f8dc488 req-1c56fc65-160d-49f2-aff7-b51adc191ea3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Received unexpected event network-vif-plugged-0d4abe9d-8315-4af6-bd71-3d285948fcfa for instance with vm_state building and task_state spawning.#033[00m
Jan 31 04:11:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:54.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:54 np0005603609 nova_compute[221550]: 2026-01-31 09:11:54.076 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:11:54 np0005603609 nova_compute[221550]: 2026-01-31 09:11:54.081 221554 DEBUG nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:11:54 np0005603609 nova_compute[221550]: 2026-01-31 09:11:54.097 221554 INFO nova.compute.manager [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Took 20.61 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 04:11:54 np0005603609 nova_compute[221550]: 2026-01-31 09:11:54.098 221554 DEBUG nova.compute.manager [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:11:54 np0005603609 nova_compute[221550]: 2026-01-31 09:11:54.134 221554 INFO nova.compute.manager [None req-9da331b8-140b-4e5a-9669-f26f52cbc51a - - - - - -] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:11:55 np0005603609 nova_compute[221550]: 2026-01-31 09:11:55.497 221554 INFO nova.compute.manager [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Took 27.25 seconds to build instance.#033[00m
Jan 31 04:11:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 04:11:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:55.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 04:11:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:56.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:56 np0005603609 nova_compute[221550]: 2026-01-31 09:11:56.480 221554 DEBUG oslo_concurrency.lockutils [None req-ce3c351f-52e1-4641-9a4c-38adb6b5a7bb ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "8a396bde-ca33-406e-a877-1d3e33fec3e7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 28.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:57 np0005603609 podman[322425]: 2026-01-31 09:11:57.173470977 +0000 UTC m=+0.049991112 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 04:11:57 np0005603609 podman[322424]: 2026-01-31 09:11:57.197860917 +0000 UTC m=+0.077250012 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:11:57 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:57 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:57 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:57.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:58.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:58 np0005603609 nova_compute[221550]: 2026-01-31 09:11:58.183 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:58 np0005603609 nova_compute[221550]: 2026-01-31 09:11:58.773 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:59 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:11:59 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:59 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:59.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:00.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.506 221554 DEBUG oslo_concurrency.lockutils [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "8a396bde-ca33-406e-a877-1d3e33fec3e7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.507 221554 DEBUG oslo_concurrency.lockutils [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "8a396bde-ca33-406e-a877-1d3e33fec3e7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.507 221554 DEBUG oslo_concurrency.lockutils [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.508 221554 DEBUG oslo_concurrency.lockutils [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.508 221554 DEBUG oslo_concurrency.lockutils [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.509 221554 INFO nova.compute.manager [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Terminating instance#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.510 221554 DEBUG nova.compute.manager [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:12:00 np0005603609 kernel: tap0d4abe9d-83 (unregistering): left promiscuous mode
Jan 31 04:12:00 np0005603609 NetworkManager[49064]: <info>  [1769850720.6518] device (tap0d4abe9d-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:12:00 np0005603609 ovn_controller[130359]: 2026-01-31T09:12:00Z|01031|binding|INFO|Releasing lport 0d4abe9d-8315-4af6-bd71-3d285948fcfa from this chassis (sb_readonly=0)
Jan 31 04:12:00 np0005603609 ovn_controller[130359]: 2026-01-31T09:12:00Z|01032|binding|INFO|Setting lport 0d4abe9d-8315-4af6-bd71-3d285948fcfa down in Southbound
Jan 31 04:12:00 np0005603609 ovn_controller[130359]: 2026-01-31T09:12:00Z|01033|binding|INFO|Removing iface tap0d4abe9d-83 ovn-installed in OVS
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.657 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:00.670 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:66:ae 10.100.0.13'], port_security=['fa:16:3e:db:66:ae 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8a396bde-ca33-406e-a877-1d3e33fec3e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c9ca540-57e7-412d-8ef3-af923db0a265', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98d10c0290e340a08e9d1726bf0066bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bffa9ac-c434-4d26-bfda-d8e35c99a8ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e88fefe4-17cc-4664-bc86-8614a5f025ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>], logical_port=0d4abe9d-8315-4af6-bd71-3d285948fcfa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2fcd7d9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.671 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:00.672 140058 INFO neutron.agent.ovn.metadata.agent [-] Port 0d4abe9d-8315-4af6-bd71-3d285948fcfa in datapath 5c9ca540-57e7-412d-8ef3-af923db0a265 unbound from our chassis#033[00m
Jan 31 04:12:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:00.673 140058 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c9ca540-57e7-412d-8ef3-af923db0a265, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:12:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:00.674 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8a1af0-43c8-4128-bbc2-c5e10e18b383]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:12:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:00.674 140058 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 namespace which is not needed anymore#033[00m
Jan 31 04:12:00 np0005603609 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d000000da.scope: Deactivated successfully.
Jan 31 04:12:00 np0005603609 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d000000da.scope: Consumed 3.155s CPU time.
Jan 31 04:12:00 np0005603609 systemd-machined[190912]: Machine qemu-119-instance-000000da terminated.
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.744 221554 INFO nova.virt.libvirt.driver [-] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Instance destroyed successfully.#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.744 221554 DEBUG nova.objects.instance [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lazy-loading 'resources' on Instance uuid 8a396bde-ca33-406e-a877-1d3e33fec3e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.767 221554 DEBUG nova.virt.libvirt.vif [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:11:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1485656873',display_name='tempest-TestVolumeBootPattern-server-1485656873',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1485656873',id=218,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:11:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-0w9rey0s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-proj
ect-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:11:54Z,user_data=None,user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=8a396bde-ca33-406e-a877-1d3e33fec3e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "address": "fa:16:3e:db:66:ae", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d4abe9d-83", "ovs_interfaceid": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.768 221554 DEBUG nova.network.os_vif_util [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "address": "fa:16:3e:db:66:ae", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d4abe9d-83", "ovs_interfaceid": "0d4abe9d-8315-4af6-bd71-3d285948fcfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.768 221554 DEBUG nova.network.os_vif_util [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:66:ae,bridge_name='br-int',has_traffic_filtering=True,id=0d4abe9d-8315-4af6-bd71-3d285948fcfa,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d4abe9d-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.769 221554 DEBUG os_vif [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:66:ae,bridge_name='br-int',has_traffic_filtering=True,id=0d4abe9d-8315-4af6-bd71-3d285948fcfa,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d4abe9d-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.770 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.770 221554 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d4abe9d-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.771 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.773 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.775 221554 INFO os_vif [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:66:ae,bridge_name='br-int',has_traffic_filtering=True,id=0d4abe9d-8315-4af6-bd71-3d285948fcfa,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d4abe9d-83')#033[00m
Jan 31 04:12:00 np0005603609 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[322401]: [NOTICE]   (322405) : haproxy version is 2.8.14-c23fe91
Jan 31 04:12:00 np0005603609 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[322401]: [NOTICE]   (322405) : path to executable is /usr/sbin/haproxy
Jan 31 04:12:00 np0005603609 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[322401]: [WARNING]  (322405) : Exiting Master process...
Jan 31 04:12:00 np0005603609 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[322401]: [ALERT]    (322405) : Current worker (322407) exited with code 143 (Terminated)
Jan 31 04:12:00 np0005603609 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[322401]: [WARNING]  (322405) : All workers exited. Exiting... (0)
Jan 31 04:12:00 np0005603609 systemd[1]: libpod-8a5a1237a5e3c16409cdaed00da900f3074b3cac47754f9f233cbeaa75099472.scope: Deactivated successfully.
Jan 31 04:12:00 np0005603609 podman[322495]: 2026-01-31 09:12:00.807404381 +0000 UTC m=+0.049185471 container died 8a5a1237a5e3c16409cdaed00da900f3074b3cac47754f9f233cbeaa75099472 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 04:12:00 np0005603609 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a5a1237a5e3c16409cdaed00da900f3074b3cac47754f9f233cbeaa75099472-userdata-shm.mount: Deactivated successfully.
Jan 31 04:12:00 np0005603609 systemd[1]: var-lib-containers-storage-overlay-7c2c787e7909f18cec3e873064dcf2406d335809338304056cdc4367a69aab94-merged.mount: Deactivated successfully.
Jan 31 04:12:00 np0005603609 podman[322495]: 2026-01-31 09:12:00.877426945 +0000 UTC m=+0.119208035 container cleanup 8a5a1237a5e3c16409cdaed00da900f3074b3cac47754f9f233cbeaa75099472 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:12:00 np0005603609 systemd[1]: libpod-conmon-8a5a1237a5e3c16409cdaed00da900f3074b3cac47754f9f233cbeaa75099472.scope: Deactivated successfully.
Jan 31 04:12:00 np0005603609 podman[322543]: 2026-01-31 09:12:00.953663421 +0000 UTC m=+0.053634281 container remove 8a5a1237a5e3c16409cdaed00da900f3074b3cac47754f9f233cbeaa75099472 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 04:12:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:00.957 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[fffcdf57-7371-4376-9bc3-a4b52a3c44f1]: (4, ('Sat Jan 31 09:12:00 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 (8a5a1237a5e3c16409cdaed00da900f3074b3cac47754f9f233cbeaa75099472)\n8a5a1237a5e3c16409cdaed00da900f3074b3cac47754f9f233cbeaa75099472\nSat Jan 31 09:12:00 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 (8a5a1237a5e3c16409cdaed00da900f3074b3cac47754f9f233cbeaa75099472)\n8a5a1237a5e3c16409cdaed00da900f3074b3cac47754f9f233cbeaa75099472\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:12:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:00.959 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[7da11305-df42-45e8-8e14-3529cdd5e9fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:12:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:00.960 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c9ca540-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.962 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:00 np0005603609 kernel: tap5c9ca540-50: left promiscuous mode
Jan 31 04:12:00 np0005603609 nova_compute[221550]: 2026-01-31 09:12:00.968 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:00.972 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[01c7f1aa-f7a4-45db-8bab-9a53959084d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:01.002 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[e97e62b3-c8c5-4275-a869-a8bf3bebedec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:01.004 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[8e9809d2-2f72-4553-bdcc-f25645a88a62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:01.023 224823 DEBUG oslo.privsep.daemon [-] privsep: reply[11ae3464-6d8b-460c-bb03-7e33cb0e1914]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1095077, 'reachable_time': 22434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322558, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:01.027 140172 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:12:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:01.027 140172 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb00207-807c-47cf-87b0-c3e7bb72eec1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:12:01 np0005603609 systemd[1]: run-netns-ovnmeta\x2d5c9ca540\x2d57e7\x2d412d\x2d8ef3\x2daf923db0a265.mount: Deactivated successfully.
Jan 31 04:12:01 np0005603609 nova_compute[221550]: 2026-01-31 09:12:01.117 221554 DEBUG nova.compute.manager [req-5576c70d-36f8-4057-a065-a7fbf7f8be9d req-6f801e87-1621-4bbb-aa0a-3f2fae19af1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Received event network-vif-unplugged-0d4abe9d-8315-4af6-bd71-3d285948fcfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:12:01 np0005603609 nova_compute[221550]: 2026-01-31 09:12:01.118 221554 DEBUG oslo_concurrency.lockutils [req-5576c70d-36f8-4057-a065-a7fbf7f8be9d req-6f801e87-1621-4bbb-aa0a-3f2fae19af1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:12:01 np0005603609 nova_compute[221550]: 2026-01-31 09:12:01.118 221554 DEBUG oslo_concurrency.lockutils [req-5576c70d-36f8-4057-a065-a7fbf7f8be9d req-6f801e87-1621-4bbb-aa0a-3f2fae19af1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:12:01 np0005603609 nova_compute[221550]: 2026-01-31 09:12:01.119 221554 DEBUG oslo_concurrency.lockutils [req-5576c70d-36f8-4057-a065-a7fbf7f8be9d req-6f801e87-1621-4bbb-aa0a-3f2fae19af1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:12:01 np0005603609 nova_compute[221550]: 2026-01-31 09:12:01.119 221554 DEBUG nova.compute.manager [req-5576c70d-36f8-4057-a065-a7fbf7f8be9d req-6f801e87-1621-4bbb-aa0a-3f2fae19af1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] No waiting events found dispatching network-vif-unplugged-0d4abe9d-8315-4af6-bd71-3d285948fcfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:12:01 np0005603609 nova_compute[221550]: 2026-01-31 09:12:01.119 221554 DEBUG nova.compute.manager [req-5576c70d-36f8-4057-a065-a7fbf7f8be9d req-6f801e87-1621-4bbb-aa0a-3f2fae19af1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Received event network-vif-unplugged-0d4abe9d-8315-4af6-bd71-3d285948fcfa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:12:01 np0005603609 nova_compute[221550]: 2026-01-31 09:12:01.231 221554 INFO nova.virt.libvirt.driver [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Deleting instance files /var/lib/nova/instances/8a396bde-ca33-406e-a877-1d3e33fec3e7_del#033[00m
Jan 31 04:12:01 np0005603609 nova_compute[221550]: 2026-01-31 09:12:01.232 221554 INFO nova.virt.libvirt.driver [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Deletion of /var/lib/nova/instances/8a396bde-ca33-406e-a877-1d3e33fec3e7_del complete#033[00m
Jan 31 04:12:01 np0005603609 nova_compute[221550]: 2026-01-31 09:12:01.389 221554 INFO nova.compute.manager [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:12:01 np0005603609 nova_compute[221550]: 2026-01-31 09:12:01.390 221554 DEBUG oslo.service.loopingcall [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:12:01 np0005603609 nova_compute[221550]: 2026-01-31 09:12:01.390 221554 DEBUG nova.compute.manager [-] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:12:01 np0005603609 nova_compute[221550]: 2026-01-31 09:12:01.390 221554 DEBUG nova.network.neutron [-] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:12:01 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:01 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:01 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:01.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:02.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:03 np0005603609 nova_compute[221550]: 2026-01-31 09:12:03.342 221554 DEBUG nova.compute.manager [req-0b72277b-c4e9-471c-811a-c5a3e8894618 req-b0decdcf-e916-4476-ab78-1acf216f0837 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Received event network-vif-plugged-0d4abe9d-8315-4af6-bd71-3d285948fcfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:12:03 np0005603609 nova_compute[221550]: 2026-01-31 09:12:03.343 221554 DEBUG oslo_concurrency.lockutils [req-0b72277b-c4e9-471c-811a-c5a3e8894618 req-b0decdcf-e916-4476-ab78-1acf216f0837 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:12:03 np0005603609 nova_compute[221550]: 2026-01-31 09:12:03.344 221554 DEBUG oslo_concurrency.lockutils [req-0b72277b-c4e9-471c-811a-c5a3e8894618 req-b0decdcf-e916-4476-ab78-1acf216f0837 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:12:03 np0005603609 nova_compute[221550]: 2026-01-31 09:12:03.344 221554 DEBUG oslo_concurrency.lockutils [req-0b72277b-c4e9-471c-811a-c5a3e8894618 req-b0decdcf-e916-4476-ab78-1acf216f0837 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8a396bde-ca33-406e-a877-1d3e33fec3e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:12:03 np0005603609 nova_compute[221550]: 2026-01-31 09:12:03.344 221554 DEBUG nova.compute.manager [req-0b72277b-c4e9-471c-811a-c5a3e8894618 req-b0decdcf-e916-4476-ab78-1acf216f0837 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] No waiting events found dispatching network-vif-plugged-0d4abe9d-8315-4af6-bd71-3d285948fcfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:12:03 np0005603609 nova_compute[221550]: 2026-01-31 09:12:03.345 221554 WARNING nova.compute.manager [req-0b72277b-c4e9-471c-811a-c5a3e8894618 req-b0decdcf-e916-4476-ab78-1acf216f0837 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Received unexpected event network-vif-plugged-0d4abe9d-8315-4af6-bd71-3d285948fcfa for instance with vm_state active and task_state deleting.#033[00m
Jan 31 04:12:03 np0005603609 nova_compute[221550]: 2026-01-31 09:12:03.776 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:03 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:03 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:03 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:03.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:03 np0005603609 nova_compute[221550]: 2026-01-31 09:12:03.948 221554 DEBUG nova.network.neutron [-] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:12:03 np0005603609 nova_compute[221550]: 2026-01-31 09:12:03.965 221554 INFO nova.compute.manager [-] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Took 2.57 seconds to deallocate network for instance.#033[00m
Jan 31 04:12:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:04.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:04 np0005603609 nova_compute[221550]: 2026-01-31 09:12:04.074 221554 DEBUG nova.compute.manager [req-f38c0067-41bd-4821-baa8-0d89af97d43f req-3cd1413f-e547-4d77-b3da-d3e65eef74a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Received event network-vif-deleted-0d4abe9d-8315-4af6-bd71-3d285948fcfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:12:04 np0005603609 nova_compute[221550]: 2026-01-31 09:12:04.254 221554 INFO nova.compute.manager [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Took 0.29 seconds to detach 1 volumes for instance.#033[00m
Jan 31 04:12:04 np0005603609 nova_compute[221550]: 2026-01-31 09:12:04.391 221554 DEBUG oslo_concurrency.lockutils [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:12:04 np0005603609 nova_compute[221550]: 2026-01-31 09:12:04.392 221554 DEBUG oslo_concurrency.lockutils [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:12:04 np0005603609 nova_compute[221550]: 2026-01-31 09:12:04.458 221554 DEBUG oslo_concurrency.processutils [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:12:04 np0005603609 nova_compute[221550]: 2026-01-31 09:12:04.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:12:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3309363941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:12:04 np0005603609 nova_compute[221550]: 2026-01-31 09:12:04.845 221554 DEBUG oslo_concurrency.processutils [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:12:04 np0005603609 nova_compute[221550]: 2026-01-31 09:12:04.850 221554 DEBUG nova.compute.provider_tree [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:12:04 np0005603609 nova_compute[221550]: 2026-01-31 09:12:04.873 221554 DEBUG nova.scheduler.client.report [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:12:04 np0005603609 nova_compute[221550]: 2026-01-31 09:12:04.909 221554 DEBUG oslo_concurrency.lockutils [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:12:04 np0005603609 nova_compute[221550]: 2026-01-31 09:12:04.935 221554 INFO nova.scheduler.client.report [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Deleted allocations for instance 8a396bde-ca33-406e-a877-1d3e33fec3e7#033[00m
Jan 31 04:12:05 np0005603609 nova_compute[221550]: 2026-01-31 09:12:05.030 221554 DEBUG oslo_concurrency.lockutils [None req-bed6b02b-7718-4899-a954-4ecd72050d73 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "8a396bde-ca33-406e-a877-1d3e33fec3e7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:12:05 np0005603609 nova_compute[221550]: 2026-01-31 09:12:05.774 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:05 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:05 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:05 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:05.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:06.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:07.567 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:12:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:07.567 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:12:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:07.567 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:12:07 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:07 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:07 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:07.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:08.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:08 np0005603609 nova_compute[221550]: 2026-01-31 09:12:08.778 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:12:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/734212161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:12:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:12:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/734212161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:12:09 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:09 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:09 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:09.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:10.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:10 np0005603609 nova_compute[221550]: 2026-01-31 09:12:10.776 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:10 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 04:12:11 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:11 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:11 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:11.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:12.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:13 np0005603609 nova_compute[221550]: 2026-01-31 09:12:13.781 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:13 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:13 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:13 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:13.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:14.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:15 np0005603609 nova_compute[221550]: 2026-01-31 09:12:15.743 221554 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850720.7420354, 8a396bde-ca33-406e-a877-1d3e33fec3e7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:12:15 np0005603609 nova_compute[221550]: 2026-01-31 09:12:15.744 221554 INFO nova.compute.manager [-] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:12:15 np0005603609 nova_compute[221550]: 2026-01-31 09:12:15.780 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:15 np0005603609 nova_compute[221550]: 2026-01-31 09:12:15.784 221554 DEBUG nova.compute.manager [None req-0962c774-d890-4ea0-a945-5c78c459e568 - - - - - -] [instance: 8a396bde-ca33-406e-a877-1d3e33fec3e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:12:15 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:15 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:15 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:15.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:16.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:16 np0005603609 nova_compute[221550]: 2026-01-31 09:12:16.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:17 np0005603609 nova_compute[221550]: 2026-01-31 09:12:17.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:17 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:17 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:17 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:17.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:18.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:18 np0005603609 nova_compute[221550]: 2026-01-31 09:12:18.821 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:19 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:19 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:19 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:19.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:20.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:20 np0005603609 nova_compute[221550]: 2026-01-31 09:12:20.783 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:21 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:21 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:21 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:21.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:22.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:23 np0005603609 nova_compute[221550]: 2026-01-31 09:12:23.872 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:23 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:23 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:23 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:23.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:24.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:24 np0005603609 nova_compute[221550]: 2026-01-31 09:12:24.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:24 np0005603609 nova_compute[221550]: 2026-01-31 09:12:24.692 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:12:24 np0005603609 nova_compute[221550]: 2026-01-31 09:12:24.692 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:12:24 np0005603609 nova_compute[221550]: 2026-01-31 09:12:24.693 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:12:24 np0005603609 nova_compute[221550]: 2026-01-31 09:12:24.693 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:12:24 np0005603609 nova_compute[221550]: 2026-01-31 09:12:24.694 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:12:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:12:25 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1538677618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:12:25 np0005603609 nova_compute[221550]: 2026-01-31 09:12:25.098 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:12:25 np0005603609 nova_compute[221550]: 2026-01-31 09:12:25.262 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:12:25 np0005603609 nova_compute[221550]: 2026-01-31 09:12:25.263 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4148MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:12:25 np0005603609 nova_compute[221550]: 2026-01-31 09:12:25.263 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:12:25 np0005603609 nova_compute[221550]: 2026-01-31 09:12:25.263 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:12:25 np0005603609 nova_compute[221550]: 2026-01-31 09:12:25.531 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:12:25 np0005603609 nova_compute[221550]: 2026-01-31 09:12:25.531 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:12:25 np0005603609 nova_compute[221550]: 2026-01-31 09:12:25.560 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:12:25 np0005603609 nova_compute[221550]: 2026-01-31 09:12:25.787 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:25 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:25 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:25 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:25.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:12:25 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2139921524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:12:26 np0005603609 nova_compute[221550]: 2026-01-31 09:12:25.999 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:12:26 np0005603609 nova_compute[221550]: 2026-01-31 09:12:26.006 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:12:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:26.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:26 np0005603609 nova_compute[221550]: 2026-01-31 09:12:26.111 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:12:26 np0005603609 nova_compute[221550]: 2026-01-31 09:12:26.303 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:12:26 np0005603609 nova_compute[221550]: 2026-01-31 09:12:26.304 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:12:27 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:27 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:27 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:27.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:28.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:28 np0005603609 podman[322631]: 2026-01-31 09:12:28.170388041 +0000 UTC m=+0.054745688 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 04:12:28 np0005603609 podman[322630]: 2026-01-31 09:12:28.201922657 +0000 UTC m=+0.085925606 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 31 04:12:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:28 np0005603609 nova_compute[221550]: 2026-01-31 09:12:28.304 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:28 np0005603609 nova_compute[221550]: 2026-01-31 09:12:28.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:28 np0005603609 nova_compute[221550]: 2026-01-31 09:12:28.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:12:28 np0005603609 nova_compute[221550]: 2026-01-31 09:12:28.873 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e421 e421: 3 total, 3 up, 3 in
Jan 31 04:12:29 np0005603609 nova_compute[221550]: 2026-01-31 09:12:29.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:29 np0005603609 nova_compute[221550]: 2026-01-31 09:12:29.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:12:29 np0005603609 nova_compute[221550]: 2026-01-31 09:12:29.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:12:29 np0005603609 nova_compute[221550]: 2026-01-31 09:12:29.680 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:12:29 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:29 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:29 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:29.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:30.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:30 np0005603609 nova_compute[221550]: 2026-01-31 09:12:30.791 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:31 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:31 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:31 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:31.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:32.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #205. Immutable memtables: 0.
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:12:32.175310) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 205
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752175362, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 2373, "num_deletes": 251, "total_data_size": 5906219, "memory_usage": 5985696, "flush_reason": "Manual Compaction"}
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #206: started
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752221701, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 206, "file_size": 3821959, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 98025, "largest_seqno": 100393, "table_properties": {"data_size": 3812383, "index_size": 6069, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19564, "raw_average_key_size": 20, "raw_value_size": 3793277, "raw_average_value_size": 3943, "num_data_blocks": 265, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850543, "oldest_key_time": 1769850543, "file_creation_time": 1769850752, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 46432 microseconds, and 6519 cpu microseconds.
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:12:32.221748) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #206: 3821959 bytes OK
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:12:32.221767) [db/memtable_list.cc:519] [default] Level-0 commit table #206 started
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:12:32.228030) [db/memtable_list.cc:722] [default] Level-0 commit table #206: memtable #1 done
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:12:32.228062) EVENT_LOG_v1 {"time_micros": 1769850752228054, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:12:32.228083) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 5895897, prev total WAL file size 5895897, number of live WAL files 2.
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000202.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:12:32.228908) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [206(3732KB)], [204(12MB)]
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752228999, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [206], "files_L6": [204], "score": -1, "input_data_size": 16700903, "oldest_snapshot_seqno": -1}
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #207: 11838 keys, 14760880 bytes, temperature: kUnknown
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752422532, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 207, "file_size": 14760880, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14685771, "index_size": 44404, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29637, "raw_key_size": 312836, "raw_average_key_size": 26, "raw_value_size": 14480200, "raw_average_value_size": 1223, "num_data_blocks": 1684, "num_entries": 11838, "num_filter_entries": 11838, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769850752, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 207, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:12:32.422820) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 14760880 bytes
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:12:32.426100) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 86.3 rd, 76.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 12.3 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(8.2) write-amplify(3.9) OK, records in: 12361, records dropped: 523 output_compression: NoCompression
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:12:32.426128) EVENT_LOG_v1 {"time_micros": 1769850752426115, "job": 132, "event": "compaction_finished", "compaction_time_micros": 193621, "compaction_time_cpu_micros": 31873, "output_level": 6, "num_output_files": 1, "total_output_size": 14760880, "num_input_records": 12361, "num_output_records": 11838, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752426880, "job": 132, "event": "table_file_deletion", "file_number": 206}
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000204.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752428683, "job": 132, "event": "table_file_deletion", "file_number": 204}
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:12:32.228764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:12:32.428724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:12:32.428729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:12:32.428732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:12:32.428734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:12:32 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:12:32.428737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:12:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:33 np0005603609 ovn_controller[130359]: 2026-01-31T09:12:33Z|01034|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 04:12:33 np0005603609 nova_compute[221550]: 2026-01-31 09:12:33.875 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:33 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:33 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:33 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:33.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:34.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:35 np0005603609 nova_compute[221550]: 2026-01-31 09:12:35.792 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:35 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:35 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:35 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:35.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:36.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:37 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:37 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:37 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:37.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:38.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:38 np0005603609 nova_compute[221550]: 2026-01-31 09:12:38.877 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:39 np0005603609 nova_compute[221550]: 2026-01-31 09:12:39.675 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:39 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:39 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:39 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:39.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:40.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:40 np0005603609 nova_compute[221550]: 2026-01-31 09:12:40.794 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:41 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:41 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:41 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:41.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:42.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:42 np0005603609 nova_compute[221550]: 2026-01-31 09:12:42.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:43 np0005603609 nova_compute[221550]: 2026-01-31 09:12:43.913 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:43 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:43 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:43 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:43.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:44.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:44 np0005603609 nova_compute[221550]: 2026-01-31 09:12:44.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:44 np0005603609 nova_compute[221550]: 2026-01-31 09:12:44.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 04:12:45 np0005603609 nova_compute[221550]: 2026-01-31 09:12:45.677 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:45 np0005603609 nova_compute[221550]: 2026-01-31 09:12:45.798 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:45 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:45 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:45 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:45.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:46.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:47 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:12:47 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 04:12:47 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:47 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:47 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:47.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:48.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:12:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:12:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:12:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 04:12:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:12:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:12:48 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:12:48 np0005603609 nova_compute[221550]: 2026-01-31 09:12:48.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:48 np0005603609 nova_compute[221550]: 2026-01-31 09:12:48.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 04:12:48 np0005603609 nova_compute[221550]: 2026-01-31 09:12:48.687 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 04:12:48 np0005603609 nova_compute[221550]: 2026-01-31 09:12:48.917 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:49 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:49 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:49 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:49.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:50.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:50 np0005603609 nova_compute[221550]: 2026-01-31 09:12:50.849 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:51.720 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=112, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=111) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 04:12:51 np0005603609 nova_compute[221550]: 2026-01-31 09:12:51.722 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:12:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:51.723 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 04:12:51 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:51 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:12:51 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:51.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:12:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:52.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:53 np0005603609 nova_compute[221550]: 2026-01-31 09:12:53.918 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:12:53 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:53 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:53 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:53.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:54.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:54 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:12:54.726 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '112'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 04:12:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:12:55 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:12:55 np0005603609 nova_compute[221550]: 2026-01-31 09:12:55.853 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:12:55 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:55 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:55 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:55.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:56.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:12:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:57.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:12:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:12:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:58.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:58 np0005603609 nova_compute[221550]: 2026-01-31 09:12:58.920 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:12:59 np0005603609 podman[322979]: 2026-01-31 09:12:59.163195127 +0000 UTC m=+0.050292139 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 04:12:59 np0005603609 podman[322978]: 2026-01-31 09:12:59.183337923 +0000 UTC m=+0.070145718 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:13:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:00.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:00.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:00 np0005603609 nova_compute[221550]: 2026-01-31 09:13:00.857 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:13:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:02.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:02.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:03 np0005603609 nova_compute[221550]: 2026-01-31 09:13:03.921 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:13:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:13:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:04.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:13:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:04.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:13:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2207572594' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:13:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:13:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2207572594' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:13:05 np0005603609 nova_compute[221550]: 2026-01-31 09:13:05.860 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:13:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:06.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:13:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:06.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:13:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e422 e422: 3 total, 3 up, 3 in
Jan 31 04:13:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:13:07.568 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:13:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:13:07.568 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:13:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:13:07.568 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:13:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:08.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:13:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:08.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:13:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:08 np0005603609 nova_compute[221550]: 2026-01-31 09:13:08.922 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:13:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:10.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:10.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:10 np0005603609 nova_compute[221550]: 2026-01-31 09:13:10.863 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:13:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:13:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:12.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:13:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:13:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:12.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:13:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:13 np0005603609 nova_compute[221550]: 2026-01-31 09:13:13.924 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:13:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:14.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:14.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:15 np0005603609 nova_compute[221550]: 2026-01-31 09:13:15.865 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:13:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:16.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:16.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e423 e423: 3 total, 3 up, 3 in
Jan 31 04:13:17 np0005603609 nova_compute[221550]: 2026-01-31 09:13:17.688 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:13:17 np0005603609 nova_compute[221550]: 2026-01-31 09:13:17.689 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:13:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:13:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:18.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:13:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:18.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:18 np0005603609 nova_compute[221550]: 2026-01-31 09:13:18.927 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:13:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:20.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:20.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:20 np0005603609 nova_compute[221550]: 2026-01-31 09:13:20.868 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:13:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:22.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:22.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:23 np0005603609 nova_compute[221550]: 2026-01-31 09:13:23.929 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:13:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:24.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:24.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:25 np0005603609 nova_compute[221550]: 2026-01-31 09:13:25.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:13:25 np0005603609 nova_compute[221550]: 2026-01-31 09:13:25.687 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:13:25 np0005603609 nova_compute[221550]: 2026-01-31 09:13:25.688 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:13:25 np0005603609 nova_compute[221550]: 2026-01-31 09:13:25.688 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:13:25 np0005603609 nova_compute[221550]: 2026-01-31 09:13:25.688 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 04:13:25 np0005603609 nova_compute[221550]: 2026-01-31 09:13:25.688 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 04:13:25 np0005603609 nova_compute[221550]: 2026-01-31 09:13:25.871 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:13:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 04:13:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:26.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 04:13:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:13:26 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2455795959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:13:26 np0005603609 nova_compute[221550]: 2026-01-31 09:13:26.115 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 04:13:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:26.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:26 np0005603609 nova_compute[221550]: 2026-01-31 09:13:26.261 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 04:13:26 np0005603609 nova_compute[221550]: 2026-01-31 09:13:26.262 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4190MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 04:13:26 np0005603609 nova_compute[221550]: 2026-01-31 09:13:26.262 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:13:26 np0005603609 nova_compute[221550]: 2026-01-31 09:13:26.263 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:13:26 np0005603609 nova_compute[221550]: 2026-01-31 09:13:26.335 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:13:26 np0005603609 nova_compute[221550]: 2026-01-31 09:13:26.336 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:13:26 np0005603609 nova_compute[221550]: 2026-01-31 09:13:26.386 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:13:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:13:26 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3879334382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:13:26 np0005603609 nova_compute[221550]: 2026-01-31 09:13:26.814 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:13:26 np0005603609 nova_compute[221550]: 2026-01-31 09:13:26.818 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:13:26 np0005603609 nova_compute[221550]: 2026-01-31 09:13:26.862 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:13:26 np0005603609 nova_compute[221550]: 2026-01-31 09:13:26.864 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:13:26 np0005603609 nova_compute[221550]: 2026-01-31 09:13:26.864 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:13:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:13:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:28.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:13:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:28.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:28 np0005603609 nova_compute[221550]: 2026-01-31 09:13:28.864 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:28 np0005603609 nova_compute[221550]: 2026-01-31 09:13:28.929 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:29 np0005603609 nova_compute[221550]: 2026-01-31 09:13:29.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:29 np0005603609 nova_compute[221550]: 2026-01-31 09:13:29.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:13:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:30.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:30.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:30 np0005603609 podman[323072]: 2026-01-31 09:13:30.197726837 +0000 UTC m=+0.079886898 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:13:30 np0005603609 podman[323071]: 2026-01-31 09:13:30.20398136 +0000 UTC m=+0.084907740 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, 
org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 04:13:30 np0005603609 ovn_controller[130359]: 2026-01-31T09:13:30Z|01035|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Jan 31 04:13:30 np0005603609 nova_compute[221550]: 2026-01-31 09:13:30.897 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:31 np0005603609 nova_compute[221550]: 2026-01-31 09:13:31.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:31 np0005603609 nova_compute[221550]: 2026-01-31 09:13:31.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:13:31 np0005603609 nova_compute[221550]: 2026-01-31 09:13:31.661 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:13:31 np0005603609 nova_compute[221550]: 2026-01-31 09:13:31.752 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:13:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:13:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:32.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:13:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:32.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:13:32 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3699418978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:13:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:33 np0005603609 nova_compute[221550]: 2026-01-31 09:13:33.932 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:34.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:34.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:35 np0005603609 nova_compute[221550]: 2026-01-31 09:13:35.901 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:36.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:36.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:37 np0005603609 nova_compute[221550]: 2026-01-31 09:13:37.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:38.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:38.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:38 np0005603609 nova_compute[221550]: 2026-01-31 09:13:38.933 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:39 np0005603609 nova_compute[221550]: 2026-01-31 09:13:39.687 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:40.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:13:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:40.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:13:40 np0005603609 nova_compute[221550]: 2026-01-31 09:13:40.906 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:42.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:42.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:42 np0005603609 nova_compute[221550]: 2026-01-31 09:13:42.662 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:43 np0005603609 nova_compute[221550]: 2026-01-31 09:13:43.935 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 04:13:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:44.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 04:13:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:44.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:45 np0005603609 nova_compute[221550]: 2026-01-31 09:13:45.911 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:46.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:46.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:46 np0005603609 nova_compute[221550]: 2026-01-31 09:13:46.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:13:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:48.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:13:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:13:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:48.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:13:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:48 np0005603609 nova_compute[221550]: 2026-01-31 09:13:48.937 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:50 np0005603609 nova_compute[221550]: 2026-01-31 09:13:50.050 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:50 np0005603609 NetworkManager[49064]: <info>  [1769850830.0518] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/485)
Jan 31 04:13:50 np0005603609 NetworkManager[49064]: <info>  [1769850830.0540] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/486)
Jan 31 04:13:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:13:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:50.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:13:50 np0005603609 nova_compute[221550]: 2026-01-31 09:13:50.077 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:50 np0005603609 nova_compute[221550]: 2026-01-31 09:13:50.082 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:50 np0005603609 nova_compute[221550]: 2026-01-31 09:13:50.126 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:50.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:50 np0005603609 nova_compute[221550]: 2026-01-31 09:13:50.837 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:13:50.838 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=113, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=112) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:13:50 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:13:50.839 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:13:50 np0005603609 nova_compute[221550]: 2026-01-31 09:13:50.913 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #208. Immutable memtables: 0.
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:13:51.828927) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 208
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850831828992, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 1087, "num_deletes": 251, "total_data_size": 2215995, "memory_usage": 2240968, "flush_reason": "Manual Compaction"}
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #209: started
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850831943214, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 209, "file_size": 956587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 100398, "largest_seqno": 101480, "table_properties": {"data_size": 952499, "index_size": 1675, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10982, "raw_average_key_size": 21, "raw_value_size": 943687, "raw_average_value_size": 1825, "num_data_blocks": 72, "num_entries": 517, "num_filter_entries": 517, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850752, "oldest_key_time": 1769850752, "file_creation_time": 1769850831, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 114342 microseconds, and 2939 cpu microseconds.
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:13:51.943264) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #209: 956587 bytes OK
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:13:51.943285) [db/memtable_list.cc:519] [default] Level-0 commit table #209 started
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:13:51.960239) [db/memtable_list.cc:722] [default] Level-0 commit table #209: memtable #1 done
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:13:51.960292) EVENT_LOG_v1 {"time_micros": 1769850831960281, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:13:51.960319) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 2210679, prev total WAL file size 2210679, number of live WAL files 2.
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000205.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:13:51.960894) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353130' seq:72057594037927935, type:22 .. '6D6772737461740033373631' seq:0, type:0; will stop at (end)
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [209(934KB)], [207(14MB)]
Jan 31 04:13:51 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850831960925, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [209], "files_L6": [207], "score": -1, "input_data_size": 15717467, "oldest_snapshot_seqno": -1}
Jan 31 04:13:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:52.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:13:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:52.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:13:52 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #210: 11865 keys, 12350696 bytes, temperature: kUnknown
Jan 31 04:13:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850832365524, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 210, "file_size": 12350696, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12279127, "index_size": 40800, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29701, "raw_key_size": 313632, "raw_average_key_size": 26, "raw_value_size": 12077032, "raw_average_value_size": 1017, "num_data_blocks": 1535, "num_entries": 11865, "num_filter_entries": 11865, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769850831, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 210, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:13:52 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:13:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:13:52.366141) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 12350696 bytes
Jan 31 04:13:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:13:52.429137) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 38.8 rd, 30.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.1 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(29.3) write-amplify(12.9) OK, records in: 12355, records dropped: 490 output_compression: NoCompression
Jan 31 04:13:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:13:52.429197) EVENT_LOG_v1 {"time_micros": 1769850832429176, "job": 134, "event": "compaction_finished", "compaction_time_micros": 404896, "compaction_time_cpu_micros": 32276, "output_level": 6, "num_output_files": 1, "total_output_size": 12350696, "num_input_records": 12355, "num_output_records": 11865, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:13:52 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:13:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850832429869, "job": 134, "event": "table_file_deletion", "file_number": 209}
Jan 31 04:13:52 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000207.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:13:52 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850832432249, "job": 134, "event": "table_file_deletion", "file_number": 207}
Jan 31 04:13:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:13:51.960856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:13:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:13:52.432460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:13:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:13:52.432471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:13:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:13:52.432474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:13:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:13:52.432477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:13:52 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:13:52.432479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:13:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:13:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/455054707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:13:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:13:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/455054707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:13:53 np0005603609 nova_compute[221550]: 2026-01-31 09:13:53.939 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:13:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:54.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:13:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:54.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:55 np0005603609 nova_compute[221550]: 2026-01-31 09:13:55.917 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:56.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 04:13:56 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:13:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:56.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:13:57 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:13:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:58.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:13:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:13:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:58.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:13:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:58 np0005603609 nova_compute[221550]: 2026-01-31 09:13:58.942 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:00.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:14:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:00.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:14:00 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:14:00.842 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '113'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:14:00 np0005603609 nova_compute[221550]: 2026-01-31 09:14:00.958 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:01 np0005603609 podman[323253]: 2026-01-31 09:14:01.194804383 +0000 UTC m=+0.082918032 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 04:14:01 np0005603609 podman[323254]: 2026-01-31 09:14:01.201300333 +0000 UTC m=+0.083093906 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:14:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:14:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:02.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:14:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:02.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:03 np0005603609 nova_compute[221550]: 2026-01-31 09:14:03.943 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:04.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:14:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:04.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:14:04 np0005603609 nova_compute[221550]: 2026-01-31 09:14:04.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:05 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:14:05 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:14:05 np0005603609 nova_compute[221550]: 2026-01-31 09:14:05.961 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:06.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:06.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:14:07.570 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:14:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:14:07.571 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:14:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:14:07.572 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:14:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:08.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:14:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:08.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:14:08 np0005603609 nova_compute[221550]: 2026-01-31 09:14:08.945 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:10.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:10.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:10 np0005603609 nova_compute[221550]: 2026-01-31 09:14:10.966 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:12.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:12.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:13 np0005603609 nova_compute[221550]: 2026-01-31 09:14:13.947 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:14.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:14.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:15 np0005603609 nova_compute[221550]: 2026-01-31 09:14:15.970 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:14:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:16.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:14:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:16.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:18.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:18.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:18 np0005603609 nova_compute[221550]: 2026-01-31 09:14:18.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:18 np0005603609 nova_compute[221550]: 2026-01-31 09:14:18.987 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:19 np0005603609 nova_compute[221550]: 2026-01-31 09:14:19.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:20 np0005603609 ovn_controller[130359]: 2026-01-31T09:14:20Z|01036|memory_trim|INFO|Detected inactivity (last active 30020 ms ago): trimming memory
Jan 31 04:14:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:20.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:14:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:20.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:14:20 np0005603609 nova_compute[221550]: 2026-01-31 09:14:20.974 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:22.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:22.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:23 np0005603609 nova_compute[221550]: 2026-01-31 09:14:23.990 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:24.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:24.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:25 np0005603609 nova_compute[221550]: 2026-01-31 09:14:25.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:25 np0005603609 nova_compute[221550]: 2026-01-31 09:14:25.693 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:14:25 np0005603609 nova_compute[221550]: 2026-01-31 09:14:25.693 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:14:25 np0005603609 nova_compute[221550]: 2026-01-31 09:14:25.693 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:14:25 np0005603609 nova_compute[221550]: 2026-01-31 09:14:25.694 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:14:25 np0005603609 nova_compute[221550]: 2026-01-31 09:14:25.694 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:14:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e424 e424: 3 total, 3 up, 3 in
Jan 31 04:14:25 np0005603609 nova_compute[221550]: 2026-01-31 09:14:25.977 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:14:26 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/620014776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:14:26 np0005603609 nova_compute[221550]: 2026-01-31 09:14:26.099 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:14:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:26.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:26 np0005603609 nova_compute[221550]: 2026-01-31 09:14:26.208 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:14:26 np0005603609 nova_compute[221550]: 2026-01-31 09:14:26.210 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4165MB free_disk=20.988109588623047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:14:26 np0005603609 nova_compute[221550]: 2026-01-31 09:14:26.210 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:14:26 np0005603609 nova_compute[221550]: 2026-01-31 09:14:26.210 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:14:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:26.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:26 np0005603609 nova_compute[221550]: 2026-01-31 09:14:26.362 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:14:26 np0005603609 nova_compute[221550]: 2026-01-31 09:14:26.362 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:14:26 np0005603609 nova_compute[221550]: 2026-01-31 09:14:26.383 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 04:14:26 np0005603609 nova_compute[221550]: 2026-01-31 09:14:26.416 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 04:14:26 np0005603609 nova_compute[221550]: 2026-01-31 09:14:26.417 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 04:14:26 np0005603609 nova_compute[221550]: 2026-01-31 09:14:26.443 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 04:14:26 np0005603609 nova_compute[221550]: 2026-01-31 09:14:26.493 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 04:14:26 np0005603609 nova_compute[221550]: 2026-01-31 09:14:26.525 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:14:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:14:26 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3477090387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:14:26 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e425 e425: 3 total, 3 up, 3 in
Jan 31 04:14:26 np0005603609 nova_compute[221550]: 2026-01-31 09:14:26.978 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:14:26 np0005603609 nova_compute[221550]: 2026-01-31 09:14:26.984 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:14:27 np0005603609 nova_compute[221550]: 2026-01-31 09:14:27.003 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:14:27 np0005603609 nova_compute[221550]: 2026-01-31 09:14:27.005 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:14:27 np0005603609 nova_compute[221550]: 2026-01-31 09:14:27.006 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:14:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:28.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:14:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:28.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:14:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:14:28.956 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=114, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=113) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:14:28 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:14:28.957 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:14:28 np0005603609 nova_compute[221550]: 2026-01-31 09:14:28.958 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:28 np0005603609 nova_compute[221550]: 2026-01-31 09:14:28.992 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:30 np0005603609 nova_compute[221550]: 2026-01-31 09:14:30.007 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:30.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:14:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:30.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:14:30 np0005603609 nova_compute[221550]: 2026-01-31 09:14:30.981 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:31 np0005603609 nova_compute[221550]: 2026-01-31 09:14:31.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:31 np0005603609 nova_compute[221550]: 2026-01-31 09:14:31.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:14:31 np0005603609 nova_compute[221550]: 2026-01-31 09:14:31.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:14:31 np0005603609 nova_compute[221550]: 2026-01-31 09:14:31.793 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:14:31 np0005603609 nova_compute[221550]: 2026-01-31 09:14:31.793 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:31 np0005603609 nova_compute[221550]: 2026-01-31 09:14:31.794 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:14:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 04:14:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:32.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 04:14:32 np0005603609 podman[323396]: 2026-01-31 09:14:32.234976608 +0000 UTC m=+0.110487401 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 04:14:32 np0005603609 podman[323395]: 2026-01-31 09:14:32.248758096 +0000 UTC m=+0.122557807 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 04:14:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:14:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:32.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:14:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:33 np0005603609 nova_compute[221550]: 2026-01-31 09:14:33.994 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:34.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:14:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:34.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:14:34 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:14:34.959 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '114'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:14:35 np0005603609 nova_compute[221550]: 2026-01-31 09:14:35.985 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:36.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:36.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:14:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:38.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:14:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:38.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:39 np0005603609 nova_compute[221550]: 2026-01-31 09:14:39.007 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:14:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:40.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:14:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:14:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:40.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:14:40 np0005603609 nova_compute[221550]: 2026-01-31 09:14:40.989 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:41 np0005603609 nova_compute[221550]: 2026-01-31 09:14:41.789 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:42.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:14:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:42.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:14:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:44 np0005603609 nova_compute[221550]: 2026-01-31 09:14:44.010 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:44.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:44.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:44 np0005603609 nova_compute[221550]: 2026-01-31 09:14:44.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:45 np0005603609 nova_compute[221550]: 2026-01-31 09:14:45.993 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:46.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:46.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:47 np0005603609 nova_compute[221550]: 2026-01-31 09:14:47.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:48.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:48.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:49 np0005603609 nova_compute[221550]: 2026-01-31 09:14:49.013 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:50.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:50.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:50 np0005603609 systemd-logind[823]: New session 66 of user zuul.
Jan 31 04:14:50 np0005603609 systemd[1]: Started Session 66 of User zuul.
Jan 31 04:14:50 np0005603609 nova_compute[221550]: 2026-01-31 09:14:50.997 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:52.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:52.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:14:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2147898005' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:14:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:14:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2147898005' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:14:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:54 np0005603609 nova_compute[221550]: 2026-01-31 09:14:54.016 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:54.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:54.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:56 np0005603609 nova_compute[221550]: 2026-01-31 09:14:56.001 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:56.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:56.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:57 np0005603609 ovs-vsctl[323726]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 04:14:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:14:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:58.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:14:58 np0005603609 virtqemud[221292]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 04:14:58 np0005603609 virtqemud[221292]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 04:14:58 np0005603609 virtqemud[221292]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 04:14:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:14:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:58.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:58 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:58 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: cache status {prefix=cache status} (starting...)
Jan 31 04:14:58 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:14:58 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: client ls {prefix=client ls} (starting...)
Jan 31 04:14:58 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:14:58 np0005603609 lvm[324075]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:14:58 np0005603609 lvm[324075]: VG ceph_vg0 finished
Jan 31 04:14:59 np0005603609 nova_compute[221550]: 2026-01-31 09:14:59.018 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:59 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 04:14:59 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:14:59 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 04:14:59 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:14:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 04:14:59 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1754209629' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 04:14:59 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 04:14:59 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:14:59 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 04:14:59 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:14:59 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 04:14:59 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:14:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:14:59 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3081761994' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:14:59 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 04:14:59 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:15:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:00.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:00 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 04:15:00 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:15:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 31 04:15:00 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3338682382' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 04:15:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:00.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 31 04:15:00 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2102815293' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 04:15:00 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 04:15:00 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:15:00 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: ops {prefix=ops} (starting...)
Jan 31 04:15:00 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:15:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 31 04:15:00 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1846108260' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 04:15:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 31 04:15:00 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2417167780' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 04:15:01 np0005603609 nova_compute[221550]: 2026-01-31 09:15:01.003 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 04:15:01 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/109357756' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 04:15:01 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: session ls {prefix=session ls} (starting...)
Jan 31 04:15:01 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:15:01 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: status {prefix=status} (starting...)
Jan 31 04:15:01 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 04:15:01 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2668804060' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 04:15:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 04:15:02 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1922402219' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 04:15:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:02.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 04:15:02 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3047504095' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 04:15:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:02.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 04:15:02 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3175809376' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 04:15:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 31 04:15:02 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1890310869' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 04:15:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 31 04:15:02 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/549006704' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 04:15:02 np0005603609 podman[324574]: 2026-01-31 09:15:02.896311429 +0000 UTC m=+0.077748254 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 04:15:02 np0005603609 podman[324584]: 2026-01-31 09:15:02.904418059 +0000 UTC m=+0.082672816 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 04:15:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 04:15:03 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3547050634' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 04:15:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 31 04:15:03 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3062583683' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 04:15:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 31 04:15:03 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/488891734' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 04:15:03 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 31 04:15:03 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4009751160' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 04:15:04 np0005603609 nova_compute[221550]: 2026-01-31 09:15:04.018 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:04.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:04.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 04:15:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3646074436' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f207a28000 session 0x55f200892960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.789494514s of 15.867616653s, submitted: 9
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20809dc00 session 0x55f2036b2b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435781632 unmapped: 70131712 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4581323 data_alloc: 218103808 data_used: 24449024
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435781632 unmapped: 70131712 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a19ca000/0x0/0x1bfc00000, data 0x3e18a68/0x4014000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435781632 unmapped: 70131712 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435781632 unmapped: 70131712 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435789824 unmapped: 70123520 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a19ca000/0x0/0x1bfc00000, data 0x3e18a68/0x4014000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435789824 unmapped: 70123520 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4599971 data_alloc: 218103808 data_used: 27082752
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a19ca000/0x0/0x1bfc00000, data 0x3e18a68/0x4014000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437182464 unmapped: 68730880 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437182464 unmapped: 68730880 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437182464 unmapped: 68730880 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437182464 unmapped: 68730880 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f201485400 session 0x55f203385680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a19ca000/0x0/0x1bfc00000, data 0x3e18a68/0x4014000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a19ca000/0x0/0x1bfc00000, data 0x3e18a68/0x4014000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437198848 unmapped: 68714496 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.639733315s of 10.707028389s, submitted: 14
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4665280 data_alloc: 234881024 data_used: 35549184
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437198848 unmapped: 68714496 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437526528 unmapped: 68386816 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438018048 unmapped: 67895296 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a19ca000/0x0/0x1bfc00000, data 0x3e18a68/0x4014000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438018048 unmapped: 67895296 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438018048 unmapped: 67895296 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4709440 data_alloc: 234881024 data_used: 41398272
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441442304 unmapped: 64471040 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441532416 unmapped: 64380928 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441565184 unmapped: 64348160 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441565184 unmapped: 64348160 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a15d1000/0x0/0x1bfc00000, data 0x4211a68/0x440d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441565184 unmapped: 64348160 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4758390 data_alloc: 234881024 data_used: 42356736
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441565184 unmapped: 64348160 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441565184 unmapped: 64348160 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.034734726s of 12.291275024s, submitted: 38
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444006400 unmapped: 61906944 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f2016943c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab1c00 session 0x55f202f32780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444014592 unmapped: 61898752 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a0f12000/0x0/0x1bfc00000, data 0x48d0a68/0x4acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443981824 unmapped: 61931520 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f207a28000 session 0x55f201694000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4600849 data_alloc: 234881024 data_used: 31690752
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 61915136 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a1e4d000/0x0/0x1bfc00000, data 0x3996a58/0x3b91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 61915136 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 61915136 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 61915136 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 61915136 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4601601 data_alloc: 234881024 data_used: 31690752
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a1e4a000/0x0/0x1bfc00000, data 0x3999a58/0x3b94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 61915136 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab3000 session 0x55f200b42d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202403800 session 0x55f202f67680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 61915136 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 61915136 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 61915136 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 61915136 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4601469 data_alloc: 234881024 data_used: 31690752
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 61915136 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a1e4a000/0x0/0x1bfc00000, data 0x3999a58/0x3b94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 61915136 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 61915136 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 61915136 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.566085815s of 17.381101608s, submitted: 120
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 61915136 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f203710c00 session 0x55f200ae25a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4601141 data_alloc: 234881024 data_used: 31690752
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f200b0e5a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440852480 unmapped: 65060864 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f202753e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440852480 unmapped: 65060864 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2b0e000/0x0/0x1bfc00000, data 0x2aa99e6/0x2ca2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2b0e000/0x0/0x1bfc00000, data 0x2aa99e6/0x2ca2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440852480 unmapped: 65060864 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440852480 unmapped: 65060864 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440852480 unmapped: 65060864 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4443585 data_alloc: 218103808 data_used: 25333760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2b0e000/0x0/0x1bfc00000, data 0x2aa99e6/0x2ca2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440852480 unmapped: 65060864 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440852480 unmapped: 65060864 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4443717 data_alloc: 218103808 data_used: 25333760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2b0e000/0x0/0x1bfc00000, data 0x2aa99e6/0x2ca2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2b0e000/0x0/0x1bfc00000, data 0x2aa99e6/0x2ca2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4443717 data_alloc: 218103808 data_used: 25333760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2b0e000/0x0/0x1bfc00000, data 0x2aa99e6/0x2ca2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4443717 data_alloc: 218103808 data_used: 25333760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2b0e000/0x0/0x1bfc00000, data 0x2aa99e6/0x2ca2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440860672 unmapped: 65052672 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4443717 data_alloc: 218103808 data_used: 25333760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440868864 unmapped: 65044480 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab1c00 session 0x55f203692780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.812017441s of 26.971031189s, submitted: 47
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab3000 session 0x55f2008925a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440868864 unmapped: 65044480 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 434143232 unmapped: 71770112 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f203172780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 434151424 unmapped: 71761920 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 434151424 unmapped: 71761920 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a39e2000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4302902 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 434151424 unmapped: 71761920 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 434151424 unmapped: 71761920 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a39e2000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 434151424 unmapped: 71761920 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 434151424 unmapped: 71761920 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a39e2000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 434151424 unmapped: 71761920 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4302902 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 59K writes, 233K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.04 MB/s#012Cumulative WAL: 59K writes, 21K syncs, 2.73 writes per sync, written: 0.22 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4314 writes, 16K keys, 4314 commit groups, 1.0 writes per commit group, ingest: 16.57 MB, 0.03 MB/s#012Interval WAL: 4315 writes, 1680 syncs, 2.57 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f1fef0cf30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f1fef0cf30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f202f665a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab1c00 session 0x55f20335da40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f203710c00 session 0x55f20276b0e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f207a28000 session 0x55f200b0e000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f20109a5a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 70672384 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 70672384 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a36e1000/0x0/0x1bfc00000, data 0x2103a48/0x22fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 70672384 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 70672384 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 70672384 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4327956 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 70672384 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a36e1000/0x0/0x1bfc00000, data 0x2103a48/0x22fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 70672384 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435240960 unmapped: 70672384 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 70664192 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a36e1000/0x0/0x1bfc00000, data 0x2103a48/0x22fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 70664192 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4327956 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 70664192 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 70664192 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a36e1000/0x0/0x1bfc00000, data 0x2103a48/0x22fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 70664192 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 70664192 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a36e1000/0x0/0x1bfc00000, data 0x2103a48/0x22fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 70664192 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4327956 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 70664192 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 70664192 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a36e1000/0x0/0x1bfc00000, data 0x2103a48/0x22fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 70664192 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 70664192 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 70664192 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4327956 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 70664192 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a36e1000/0x0/0x1bfc00000, data 0x2103a48/0x22fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 70664192 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f2025dab40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 70664192 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab1c00 session 0x55f202ef32c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435249152 unmapped: 70664192 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f203710c00 session 0x55f203693860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.095344543s of 33.317989349s, submitted: 61
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20809dc00 session 0x55f20126bc20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435093504 unmapped: 70819840 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4331115 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a36df000/0x0/0x1bfc00000, data 0x2103a7b/0x22ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435093504 unmapped: 70819840 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435093504 unmapped: 70819840 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353355 data_alloc: 218103808 data_used: 21233664
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a36df000/0x0/0x1bfc00000, data 0x2103a7b/0x22ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a36df000/0x0/0x1bfc00000, data 0x2103a7b/0x22ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353355 data_alloc: 218103808 data_used: 21233664
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.581322670s of 12.654575348s, submitted: 6
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4360937 data_alloc: 218103808 data_used: 21237760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3643000/0x0/0x1bfc00000, data 0x219fa7b/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3643000/0x0/0x1bfc00000, data 0x219fa7b/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4360937 data_alloc: 218103808 data_used: 21237760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3643000/0x0/0x1bfc00000, data 0x219fa7b/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3643000/0x0/0x1bfc00000, data 0x219fa7b/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3643000/0x0/0x1bfc00000, data 0x219fa7b/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4360937 data_alloc: 218103808 data_used: 21237760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 70885376 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4360937 data_alloc: 218103808 data_used: 21237760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3643000/0x0/0x1bfc00000, data 0x219fa7b/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435036160 unmapped: 70877184 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435036160 unmapped: 70877184 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435036160 unmapped: 70877184 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435036160 unmapped: 70877184 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435036160 unmapped: 70877184 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3643000/0x0/0x1bfc00000, data 0x219fa7b/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4360937 data_alloc: 218103808 data_used: 21237760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435036160 unmapped: 70877184 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435044352 unmapped: 70868992 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435044352 unmapped: 70868992 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3643000/0x0/0x1bfc00000, data 0x219fa7b/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435052544 unmapped: 70860800 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3643000/0x0/0x1bfc00000, data 0x219fa7b/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3643000/0x0/0x1bfc00000, data 0x219fa7b/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435052544 unmapped: 70860800 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4360937 data_alloc: 218103808 data_used: 21237760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435052544 unmapped: 70860800 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3643000/0x0/0x1bfc00000, data 0x219fa7b/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435052544 unmapped: 70860800 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435052544 unmapped: 70860800 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435052544 unmapped: 70860800 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.179656982s of 32.247810364s, submitted: 23
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3643000/0x0/0x1bfc00000, data 0x219fa7b/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435060736 unmapped: 70852608 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4360937 data_alloc: 218103808 data_used: 21237760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3643000/0x0/0x1bfc00000, data 0x219fa7b/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435060736 unmapped: 70852608 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435060736 unmapped: 70852608 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435060736 unmapped: 70852608 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3643000/0x0/0x1bfc00000, data 0x219fa7b/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435068928 unmapped: 70844416 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435068928 unmapped: 70844416 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4360937 data_alloc: 218103808 data_used: 21237760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435068928 unmapped: 70844416 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3643000/0x0/0x1bfc00000, data 0x219fa7b/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435068928 unmapped: 70844416 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435077120 unmapped: 70836224 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435077120 unmapped: 70836224 heap: 505913344 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab1c00 session 0x55f202e92f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f203710c00 session 0x55f20335de00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20e436c00 session 0x55f203172f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3643000/0x0/0x1bfc00000, data 0x219fa7b/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f204764800 session 0x55f2008681e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.346018791s of 10.368324280s, submitted: 1
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f2038df800 session 0x55f2031723c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435290112 unmapped: 74301440 heap: 509591552 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab1c00 session 0x55f2028e9c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f203710c00 session 0x55f20051e3c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f204764800 session 0x55f2036ca5a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20e436c00 session 0x55f201695e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4414541 data_alloc: 218103808 data_used: 21241856
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435290112 unmapped: 74301440 heap: 509591552 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435290112 unmapped: 74301440 heap: 509591552 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435290112 unmapped: 74301440 heap: 509591552 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2fa0000/0x0/0x1bfc00000, data 0x2841a8b/0x2a3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435290112 unmapped: 74301440 heap: 509591552 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435290112 unmapped: 74301440 heap: 509591552 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4414541 data_alloc: 218103808 data_used: 21241856
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435298304 unmapped: 74293248 heap: 509591552 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435306496 unmapped: 74285056 heap: 509591552 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435306496 unmapped: 74285056 heap: 509591552 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2fa0000/0x0/0x1bfc00000, data 0x2841a8b/0x2a3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435314688 unmapped: 74276864 heap: 509591552 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435314688 unmapped: 74276864 heap: 509591552 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4414541 data_alloc: 218103808 data_used: 21241856
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435314688 unmapped: 74276864 heap: 509591552 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435314688 unmapped: 74276864 heap: 509591552 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.904721260s of 13.146965981s, submitted: 10
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 435314688 unmapped: 74276864 heap: 509591552 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f2026ca800 session 0x55f203751e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab1c00 session 0x55f2037505a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f203710c00 session 0x55f200094960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f204764800 session 0x55f20121ed20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20e436c00 session 0x55f2036cbe00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f209494000 session 0x55f203693e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 436748288 unmapped: 72843264 heap: 509591552 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2aca000/0x0/0x1bfc00000, data 0x2d16aed/0x2f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f203710c00 session 0x55f2027512c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 436994048 unmapped: 76800000 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab1c00 session 0x55f2036cb860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f204764800 session 0x55f203692d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20e436c00 session 0x55f2031734a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4528779 data_alloc: 218103808 data_used: 21241856
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202ed0c00 session 0x55f2015bda40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab1c00 session 0x55f202f670e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437002240 unmapped: 76791808 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202ed0c00 session 0x55f2017252c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437018624 unmapped: 76775424 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f203710c00 session 0x55f203693c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 436944896 unmapped: 76849152 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 436944896 unmapped: 76849152 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2256000/0x0/0x1bfc00000, data 0x3589b10/0x3788000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 436903936 unmapped: 76890112 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541484 data_alloc: 218103808 data_used: 22929408
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202d2d400 session 0x55f203172f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437346304 unmapped: 76447744 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20510c400 session 0x55f20335de00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437346304 unmapped: 76447744 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20510c400 session 0x55f20126bc20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.350460529s of 10.049124718s, submitted: 56
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437346304 unmapped: 76447744 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab1c00 session 0x55f203693860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437501952 unmapped: 76292096 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f203710c00 session 0x55f202ef21e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2230000/0x0/0x1bfc00000, data 0x35adb43/0x37ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437501952 unmapped: 76292096 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4586128 data_alloc: 218103808 data_used: 28004352
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20164c000 session 0x55f202f67c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2230000/0x0/0x1bfc00000, data 0x35adb43/0x37ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437501952 unmapped: 76292096 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437501952 unmapped: 76292096 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20272d000 session 0x55f2036b2780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab1c00 session 0x55f203184d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 437501952 unmapped: 76292096 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438124544 unmapped: 75669504 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a222e000/0x0/0x1bfc00000, data 0x35adb76/0x37b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438124544 unmapped: 75669504 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4623943 data_alloc: 234881024 data_used: 32702464
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 439566336 unmapped: 74227712 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441647104 unmapped: 72146944 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.675230026s of 10.012818336s, submitted: 103
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442761216 unmapped: 71032832 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a135d000/0x0/0x1bfc00000, data 0x406eb76/0x4271000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,1,1])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442875904 unmapped: 70918144 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442916864 unmapped: 70877184 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4768727 data_alloc: 234881024 data_used: 39923712
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202d2d400 session 0x55f202f665a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202ed0c00 session 0x55f201695680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442916864 unmapped: 70877184 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442916864 unmapped: 70877184 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a135d000/0x0/0x1bfc00000, data 0x406eb76/0x4271000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a135d000/0x0/0x1bfc00000, data 0x406eb76/0x4271000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442916864 unmapped: 70877184 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442916864 unmapped: 70877184 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442916864 unmapped: 70877184 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4770359 data_alloc: 234881024 data_used: 39923712
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442916864 unmapped: 70877184 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a0f6d000/0x0/0x1bfc00000, data 0x445eb76/0x4661000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,1,0,1])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446382080 unmapped: 67411968 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.522171021s of 10.068110466s, submitted: 326
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446554112 unmapped: 67239936 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446996480 unmapped: 66797568 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a08a7000/0x0/0x1bfc00000, data 0x4b24b76/0x4d27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446996480 unmapped: 66797568 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4858491 data_alloc: 234881024 data_used: 40976384
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446996480 unmapped: 66797568 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446996480 unmapped: 66797568 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a08a7000/0x0/0x1bfc00000, data 0x4b24b76/0x4d27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446996480 unmapped: 66797568 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446996480 unmapped: 66797568 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446988288 unmapped: 66805760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4857199 data_alloc: 234881024 data_used: 40980480
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446988288 unmapped: 66805760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a08a4000/0x0/0x1bfc00000, data 0x4b27b76/0x4d2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a089e000/0x0/0x1bfc00000, data 0x4b2db76/0x4d30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446988288 unmapped: 66805760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446988288 unmapped: 66805760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a089e000/0x0/0x1bfc00000, data 0x4b2db76/0x4d30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.837713242s of 11.133482933s, submitted: 27
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446988288 unmapped: 66805760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448069632 unmapped: 65724416 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a03b9000/0x0/0x1bfc00000, data 0x5012b76/0x5215000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4894979 data_alloc: 234881024 data_used: 41005056
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20510c400 session 0x55f201693a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202d28c00 session 0x55f202f66b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447807488 unmapped: 65986560 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447807488 unmapped: 65986560 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a03b9000/0x0/0x1bfc00000, data 0x5012b76/0x5215000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447807488 unmapped: 65986560 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20164c000 session 0x55f20051f0e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f203710c00 session 0x55f2008925a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447807488 unmapped: 65986560 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202d28c00 session 0x55f20335c780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442548224 unmapped: 71245824 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4683568 data_alloc: 234881024 data_used: 31268864
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a16bf000/0x0/0x1bfc00000, data 0x3d0db43/0x3f0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442548224 unmapped: 71245824 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442548224 unmapped: 71245824 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442548224 unmapped: 71245824 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a16bf000/0x0/0x1bfc00000, data 0x3d0db43/0x3f0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.869043350s of 10.069545746s, submitted: 97
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443564032 unmapped: 70230016 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444301312 unmapped: 69492736 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4729638 data_alloc: 234881024 data_used: 31252480
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446611456 unmapped: 67182592 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447873024 unmapped: 65921024 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202ed0c00 session 0x55f203751c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20510c400 session 0x55f2016b2960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20164c000 session 0x55f202753e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202d28c00 session 0x55f202751860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202ed0c00 session 0x55f20055d860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab1c00 session 0x55f2036930e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202d2d400 session 0x55f2028e85a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446611456 unmapped: 67182592 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab1c00 session 0x55f202f665a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446652416 unmapped: 67141632 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a0e98000/0x0/0x1bfc00000, data 0x4535b43/0x4736000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446652416 unmapped: 67141632 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a154d000/0x0/0x1bfc00000, data 0x348baae/0x3689000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4594335 data_alloc: 218103808 data_used: 26537984
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446652416 unmapped: 67141632 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446652416 unmapped: 67141632 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446652416 unmapped: 67141632 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446652416 unmapped: 67141632 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446652416 unmapped: 67141632 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4594335 data_alloc: 218103808 data_used: 26537984
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446652416 unmapped: 67141632 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a154d000/0x0/0x1bfc00000, data 0x348baae/0x3689000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.050030708s of 13.224179268s, submitted: 161
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20164c000 session 0x55f203751e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f2025dba40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f20171c780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446652416 unmapped: 67141632 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446652416 unmapped: 67141632 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202ed0c00 session 0x55f202ef2000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446652416 unmapped: 67141632 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a1f45000/0x0/0x1bfc00000, data 0x348baae/0x3689000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446660608 unmapped: 67133440 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4586439 data_alloc: 218103808 data_used: 26537984
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446660608 unmapped: 67133440 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a1f45000/0x0/0x1bfc00000, data 0x348baae/0x3689000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446660608 unmapped: 67133440 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f2017225a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a1f45000/0x0/0x1bfc00000, data 0x348baae/0x3689000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1,0,2])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446668800 unmapped: 67125248 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a1f44000/0x0/0x1bfc00000, data 0x348babe/0x368a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1,0,1,2])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446668800 unmapped: 67125248 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446668800 unmapped: 67125248 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4588053 data_alloc: 218103808 data_used: 26537984
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 5.201527596s
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 5.201527596s
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.201743126s, txc = 0x55f201413500
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446668800 unmapped: 67125248 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f203172780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 0.652445555s of 10.016473770s, submitted: 41
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202d28c00 session 0x55f201695e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443662336 unmapped: 70131712 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a22e1000/0x0/0x1bfc00000, data 0x30efa29/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443662336 unmapped: 70131712 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443662336 unmapped: 70131712 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443678720 unmapped: 70115328 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4538486 data_alloc: 218103808 data_used: 25112576
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a22e1000/0x0/0x1bfc00000, data 0x30efa29/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443678720 unmapped: 70115328 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443678720 unmapped: 70115328 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443678720 unmapped: 70115328 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443678720 unmapped: 70115328 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443678720 unmapped: 70115328 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4538486 data_alloc: 218103808 data_used: 25112576
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443678720 unmapped: 70115328 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a22e1000/0x0/0x1bfc00000, data 0x30efa29/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443678720 unmapped: 70115328 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443678720 unmapped: 70115328 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443678720 unmapped: 70115328 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.720011711s of 12.738268852s, submitted: 6
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a22e1000/0x0/0x1bfc00000, data 0x30efa29/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443678720 unmapped: 70115328 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4538966 data_alloc: 218103808 data_used: 25116672
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444112896 unmapped: 69681152 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444497920 unmapped: 69296128 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444497920 unmapped: 69296128 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2186000/0x0/0x1bfc00000, data 0x324ba29/0x3447000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444497920 unmapped: 69296128 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444497920 unmapped: 69296128 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4554760 data_alloc: 218103808 data_used: 25178112
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444375040 unmapped: 69419008 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2181000/0x0/0x1bfc00000, data 0x3251a29/0x344d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444375040 unmapped: 69419008 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab1c00 session 0x55f203750b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20164c000 session 0x55f200b42b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444391424 unmapped: 69402624 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444399616 unmapped: 69394432 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f20171d680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444399616 unmapped: 69394432 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4527463 data_alloc: 218103808 data_used: 25014272
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2436000/0x0/0x1bfc00000, data 0x2f9da19/0x3198000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444399616 unmapped: 69394432 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2436000/0x0/0x1bfc00000, data 0x2f9da19/0x3198000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.704683304s of 11.929160118s, submitted: 50
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444399616 unmapped: 69394432 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444399616 unmapped: 69394432 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444407808 unmapped: 69386240 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2433000/0x0/0x1bfc00000, data 0x2fa0a19/0x319b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444407808 unmapped: 69386240 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4527891 data_alloc: 218103808 data_used: 25014272
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2433000/0x0/0x1bfc00000, data 0x2fa0a19/0x319b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444407808 unmapped: 69386240 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444407808 unmapped: 69386240 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444407808 unmapped: 69386240 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2433000/0x0/0x1bfc00000, data 0x2fa0a19/0x319b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444416000 unmapped: 69378048 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444416000 unmapped: 69378048 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4527891 data_alloc: 218103808 data_used: 25014272
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444416000 unmapped: 69378048 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444416000 unmapped: 69378048 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444416000 unmapped: 69378048 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444416000 unmapped: 69378048 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2433000/0x0/0x1bfc00000, data 0x2fa0a19/0x319b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.300314903s of 13.311690331s, submitted: 2
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445464576 unmapped: 68329472 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4527859 data_alloc: 218103808 data_used: 25014272
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f204764800 session 0x55f20126a1e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20e436c00 session 0x55f203185860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445464576 unmapped: 68329472 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f200b0f2c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x1e03a09/0x1ffd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438288384 unmapped: 75505664 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438288384 unmapped: 75505664 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438288384 unmapped: 75505664 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438288384 unmapped: 75505664 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4349781 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438288384 unmapped: 75505664 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438288384 unmapped: 75505664 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438288384 unmapped: 75505664 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438288384 unmapped: 75505664 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438288384 unmapped: 75505664 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4349781 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438288384 unmapped: 75505664 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438288384 unmapped: 75505664 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438288384 unmapped: 75505664 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438288384 unmapped: 75505664 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438288384 unmapped: 75505664 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4349781 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438288384 unmapped: 75505664 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438288384 unmapped: 75505664 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438296576 unmapped: 75497472 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438296576 unmapped: 75497472 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438296576 unmapped: 75497472 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4349781 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438296576 unmapped: 75497472 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438304768 unmapped: 75489280 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438312960 unmapped: 75481088 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438312960 unmapped: 75481088 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438312960 unmapped: 75481088 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4349781 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438312960 unmapped: 75481088 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438321152 unmapped: 75472896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438321152 unmapped: 75472896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438321152 unmapped: 75472896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438321152 unmapped: 75472896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4349781 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438321152 unmapped: 75472896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438321152 unmapped: 75472896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438321152 unmapped: 75472896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438329344 unmapped: 75464704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438329344 unmapped: 75464704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4349781 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438329344 unmapped: 75464704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438329344 unmapped: 75464704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438329344 unmapped: 75464704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438329344 unmapped: 75464704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438329344 unmapped: 75464704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4349781 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438329344 unmapped: 75464704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438337536 unmapped: 75456512 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438337536 unmapped: 75456512 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438337536 unmapped: 75456512 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438337536 unmapped: 75456512 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4349781 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438337536 unmapped: 75456512 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x1e039e6/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438337536 unmapped: 75456512 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438345728 unmapped: 75448320 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438345728 unmapped: 75448320 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438353920 unmapped: 75440128 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4349781 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.467552185s of 50.791564941s, submitted: 29
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab1c00 session 0x55f200b0e5a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f20109ba40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f202752f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f204764800 session 0x55f202ef21e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20e436c00 session 0x55f203693c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3035000/0x0/0x1bfc00000, data 0x23a09e6/0x2599000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397317 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3035000/0x0/0x1bfc00000, data 0x23a09e6/0x2599000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438714368 unmapped: 75079680 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438714368 unmapped: 75079680 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438714368 unmapped: 75079680 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438714368 unmapped: 75079680 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438714368 unmapped: 75079680 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397317 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3035000/0x0/0x1bfc00000, data 0x23a09e6/0x2599000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3035000/0x0/0x1bfc00000, data 0x23a09e6/0x2599000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397317 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202d28c00 session 0x55f2028e8780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f20051e960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3035000/0x0/0x1bfc00000, data 0x23a09e6/0x2599000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f20055cf00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.561635971s of 18.616905212s, submitted: 11
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f204764800 session 0x55f2031843c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4400456 data_alloc: 218103808 data_used: 18087936
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3033000/0x0/0x1bfc00000, data 0x23a0a19/0x259b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3033000/0x0/0x1bfc00000, data 0x23a0a19/0x259b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4441096 data_alloc: 218103808 data_used: 23687168
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3033000/0x0/0x1bfc00000, data 0x23a0a19/0x259b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4441096 data_alloc: 218103808 data_used: 23687168
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.537538528s of 12.552680016s, submitted: 5
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 439066624 unmapped: 74727424 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a25b0000/0x0/0x1bfc00000, data 0x2e23a19/0x301e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440401920 unmapped: 73392128 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a250d000/0x0/0x1bfc00000, data 0x2ec6a19/0x30c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440401920 unmapped: 73392128 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a250d000/0x0/0x1bfc00000, data 0x2ec6a19/0x30c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440401920 unmapped: 73392128 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4537392 data_alloc: 218103808 data_used: 24784896
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f205d21400 session 0x55f202f33680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202d2c000 session 0x55f201722d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f2025db680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f203172f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440541184 unmapped: 73252864 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2413000/0x0/0x1bfc00000, data 0x2fbfa29/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f204764800 session 0x55f202f321e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f205d21400 session 0x55f2016b3e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f203774000 session 0x55f2037501e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f20335dc20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f202737c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440541184 unmapped: 73252864 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440541184 unmapped: 73252864 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440541184 unmapped: 73252864 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440541184 unmapped: 73252864 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23f2000/0x0/0x1bfc00000, data 0x2fe0a29/0x31dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4545826 data_alloc: 218103808 data_used: 24784896
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440541184 unmapped: 73252864 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440541184 unmapped: 73252864 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440549376 unmapped: 73244672 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23f2000/0x0/0x1bfc00000, data 0x2fe0a29/0x31dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440549376 unmapped: 73244672 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.877882957s of 13.225532532s, submitted: 119
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4545962 data_alloc: 218103808 data_used: 24784896
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f204764800 session 0x55f202d40d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23e4000/0x0/0x1bfc00000, data 0x2feea29/0x31ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f205d21400 session 0x55f20051e780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23e4000/0x0/0x1bfc00000, data 0x2feea29/0x31ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202743400 session 0x55f202e93680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f20276a1e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4546122 data_alloc: 218103808 data_used: 24788992
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23e1000/0x0/0x1bfc00000, data 0x2ff1a29/0x31ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23e1000/0x0/0x1bfc00000, data 0x2ff1a29/0x31ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440483840 unmapped: 73310208 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20164c800 session 0x55f200095a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440483840 unmapped: 73310208 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4546502 data_alloc: 218103808 data_used: 24788992
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20164e400 session 0x55f200ae3e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f21401f000 session 0x55f2036ca960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440483840 unmapped: 73310208 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23e1000/0x0/0x1bfc00000, data 0x2ff1a29/0x31ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440483840 unmapped: 73310208 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440287232 unmapped: 73506816 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440369152 unmapped: 73424896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440369152 unmapped: 73424896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4553222 data_alloc: 218103808 data_used: 25632768
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440369152 unmapped: 73424896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440369152 unmapped: 73424896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23e1000/0x0/0x1bfc00000, data 0x2ff1a29/0x31ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440369152 unmapped: 73424896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.388780594s of 18.688554764s, submitted: 5
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440377344 unmapped: 73416704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440377344 unmapped: 73416704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4553574 data_alloc: 218103808 data_used: 25632768
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440377344 unmapped: 73416704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23e1000/0x0/0x1bfc00000, data 0x2ff1a29/0x31ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440377344 unmapped: 73416704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440377344 unmapped: 73416704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440385536 unmapped: 73408512 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23e1000/0x0/0x1bfc00000, data 0x2ff1a29/0x31ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,2,0,0,0,0,13])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444809216 unmapped: 68984832 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4599318 data_alloc: 218103808 data_used: 26689536
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444940288 unmapped: 68853760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444940288 unmapped: 68853760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f21401f000 session 0x55f201693a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20268d000 session 0x55f202f670e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f2026cb400 session 0x55f2000950e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f2092a4800 session 0x55f203184d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f20171cd20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20268d000 session 0x55f201723c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f2026cb400 session 0x55f203692b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f21401f000 session 0x55f202757680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20a487800 session 0x55f202ef32c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444194816 unmapped: 69599232 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20e436c00 session 0x55f200094000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f203710c00 session 0x55f202f661e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.830982208s of 10.382411957s, submitted: 119
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444203008 unmapped: 69591040 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20a487800 session 0x55f2025dab40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a178b000/0x0/0x1bfc00000, data 0x3c46a39/0x3e43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a288e000/0x0/0x1bfc00000, data 0x2b45a06/0x2d40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444203008 unmapped: 69591040 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4490016 data_alloc: 218103808 data_used: 19951616
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a288e000/0x0/0x1bfc00000, data 0x2b45a06/0x2d40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444203008 unmapped: 69591040 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a288b000/0x0/0x1bfc00000, data 0x2b48a06/0x2d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4490948 data_alloc: 218103808 data_used: 19951616
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f20109a5a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a288b000/0x0/0x1bfc00000, data 0x2b48a06/0x2d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20268d000 session 0x55f203385680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20268d000 session 0x55f2025b9860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.296791077s of 11.565423012s, submitted: 30
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f2036cba40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4493755 data_alloc: 218103808 data_used: 19951616
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a288a000/0x0/0x1bfc00000, data 0x2b48a16/0x2d44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444219392 unmapped: 69574656 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442851328 unmapped: 70942720 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f202750960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f204764800 session 0x55f201722d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20288e400 session 0x55f200094b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4476949 data_alloc: 218103808 data_used: 24272896
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2ec2000/0x0/0x1bfc00000, data 0x2511a06/0x270c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4476949 data_alloc: 218103808 data_used: 24272896
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2ec2000/0x0/0x1bfc00000, data 0x2511a06/0x270c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.816506386s of 11.953717232s, submitted: 22
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446660608 unmapped: 67133440 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445833216 unmapped: 67960832 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2797000/0x0/0x1bfc00000, data 0x2c34a06/0x2e2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445833216 unmapped: 67960832 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4556815 data_alloc: 218103808 data_used: 25219072
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445833216 unmapped: 67960832 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445833216 unmapped: 67960832 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445833216 unmapped: 67960832 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445767680 unmapped: 68026368 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2797000/0x0/0x1bfc00000, data 0x2c34a06/0x2e2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445767680 unmapped: 68026368 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a279c000/0x0/0x1bfc00000, data 0x2c37a06/0x2e32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4551743 data_alloc: 218103808 data_used: 25219072
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445767680 unmapped: 68026368 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445767680 unmapped: 68026368 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445767680 unmapped: 68026368 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a279c000/0x0/0x1bfc00000, data 0x2c37a06/0x2e32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445767680 unmapped: 68026368 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a279c000/0x0/0x1bfc00000, data 0x2c37a06/0x2e32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a279c000/0x0/0x1bfc00000, data 0x2c37a06/0x2e32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445767680 unmapped: 68026368 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4551743 data_alloc: 218103808 data_used: 25219072
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445767680 unmapped: 68026368 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.776427269s of 14.190032005s, submitted: 101
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445784064 unmapped: 68009984 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445784064 unmapped: 68009984 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445784064 unmapped: 68009984 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f20335d860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a15fb000/0x0/0x1bfc00000, data 0x2c38a06/0x2e33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446832640 unmapped: 66961408 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4552359 data_alloc: 218103808 data_used: 25227264
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 385 ms_handle_reset con 0x55f200ab0800 session 0x55f2008921e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445800448 unmapped: 67993600 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 385 ms_handle_reset con 0x55f204764800 session 0x55f203750b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 385 ms_handle_reset con 0x55f20268d000 session 0x55f203184d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 65495040 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 385 heartbeat osd_stat(store_statfs(0x1a15f7000/0x0/0x1bfc00000, data 0x2c3a6c1/0x2e37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,10])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 385 heartbeat osd_stat(store_statfs(0x1a15f7000/0x0/0x1bfc00000, data 0x2c3a6c1/0x2e37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [0,0,0,0,0,0,1,2,5])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464240640 unmapped: 57958400 heap: 522199040 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 385 ms_handle_reset con 0x55f20264dc00 session 0x55f202752b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451747840 unmapped: 74432512 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 385 handle_osd_map epochs [386,386], i have 386, src has [1,386]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 386 ms_handle_reset con 0x55f200941800 session 0x55f202736d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 74416128 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f200ab0800 session 0x55f2028e8d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4767617 data_alloc: 234881024 data_used: 31432704
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451772416 unmapped: 74407936 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451772416 unmapped: 74407936 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a008c000/0x0/0x1bfc00000, data 0x41a0fe3/0x43a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451772416 unmapped: 74407936 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.085830688s of 12.269598007s, submitted: 111
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f20268d000 session 0x55f2016943c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f204764800 session 0x55f2008681e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f20809c800 session 0x55f202739e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448528384 unmapped: 77651968 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448528384 unmapped: 77651968 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4756097 data_alloc: 234881024 data_used: 31432704
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448536576 unmapped: 77643776 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f200941800 session 0x55f203384000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f200ab0800 session 0x55f20271b680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f20268d000 session 0x55f202f67860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f204764800 session 0x55f2008694a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f209495800 session 0x55f20271ab40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f200941800 session 0x55f202f67e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f200ab0800 session 0x55f202756f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f20268d000 session 0x55f202753a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f204764800 session 0x55f203185680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a008e000/0x0/0x1bfc00000, data 0x41a0fe3/0x43a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448536576 unmapped: 77643776 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448536576 unmapped: 77643776 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f20a8e4000 session 0x55f203185e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448544768 unmapped: 77635584 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448544768 unmapped: 77635584 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4811596 data_alloc: 234881024 data_used: 31440896
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448544768 unmapped: 77635584 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19fa73000/0x0/0x1bfc00000, data 0x47b9b22/0x49ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448544768 unmapped: 77635584 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448544768 unmapped: 77635584 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19fa73000/0x0/0x1bfc00000, data 0x47b9b22/0x49ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448552960 unmapped: 77627392 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19fa73000/0x0/0x1bfc00000, data 0x47b9b22/0x49ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448552960 unmapped: 77627392 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f20164e800 session 0x55f20335d4a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4811596 data_alloc: 234881024 data_used: 31440896
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f20164e800 session 0x55f202e93c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f200941800 session 0x55f202736b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f200ab0800 session 0x55f202f33860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.338602066s of 12.479089737s, submitted: 37
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f204764800 session 0x55f202f665a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f20268d000 session 0x55f202d40d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f20268d000 session 0x55f202f321e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f200941800 session 0x55f203172f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f200ab0800 session 0x55f202ef3e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448561152 unmapped: 77619200 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19fa74000/0x0/0x1bfc00000, data 0x47b9b22/0x49ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448561152 unmapped: 77619200 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f20164e800 session 0x55f2036b2000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448585728 unmapped: 77594624 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448585728 unmapped: 77594624 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448585728 unmapped: 77594624 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4815940 data_alloc: 234881024 data_used: 31444992
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448593920 unmapped: 77586432 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19fa73000/0x0/0x1bfc00000, data 0x47b9b45/0x49bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448593920 unmapped: 77586432 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448593920 unmapped: 77586432 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f206feac00 session 0x55f202f32780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f206feac00 session 0x55f2025dba40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448593920 unmapped: 77586432 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448593920 unmapped: 77586432 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4850500 data_alloc: 234881024 data_used: 33796096
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f200941800 session 0x55f201693860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f200ab0800 session 0x55f203693a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448749568 unmapped: 77430784 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.652714729s of 10.741259575s, submitted: 22
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448757760 unmapped: 77422592 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19fa4f000/0x0/0x1bfc00000, data 0x47ddb45/0x49df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448733184 unmapped: 77447168 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454172672 unmapped: 72007680 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454172672 unmapped: 72007680 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4941498 data_alloc: 251658240 data_used: 46231552
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19fa4f000/0x0/0x1bfc00000, data 0x47ddb45/0x49df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454180864 unmapped: 71999488 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454778880 unmapped: 71401472 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454828032 unmapped: 71352320 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454828032 unmapped: 71352320 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19f50e000/0x0/0x1bfc00000, data 0x4d1db45/0x4f1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454828032 unmapped: 71352320 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4994396 data_alloc: 251658240 data_used: 46354432
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454828032 unmapped: 71352320 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454828032 unmapped: 71352320 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454828032 unmapped: 71352320 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.024875641s of 12.330525398s, submitted: 31
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455442432 unmapped: 70737920 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455540736 unmapped: 70639616 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5014001 data_alloc: 251658240 data_used: 48156672
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19f459000/0x0/0x1bfc00000, data 0x4dd3b45/0x4fd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455540736 unmapped: 70639616 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455540736 unmapped: 70639616 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455540736 unmapped: 70639616 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f204764800 session 0x55f202ef3e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f201271c00 session 0x55f202753e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455557120 unmapped: 70623232 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f200941800 session 0x55f2036b25a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455573504 unmapped: 70606848 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4885187 data_alloc: 234881024 data_used: 42782720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19ffb0000/0x0/0x1bfc00000, data 0x427cb22/0x447d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455581696 unmapped: 70598656 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455581696 unmapped: 70598656 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19ffb0000/0x0/0x1bfc00000, data 0x427cb22/0x447d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455581696 unmapped: 70598656 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455581696 unmapped: 70598656 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19ffb0000/0x0/0x1bfc00000, data 0x427cb22/0x447d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455581696 unmapped: 70598656 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4885187 data_alloc: 234881024 data_used: 42782720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.179315567s of 11.731339455s, submitted: 66
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455581696 unmapped: 70598656 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455688192 unmapped: 70492160 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455688192 unmapped: 70492160 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455696384 unmapped: 70483968 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455696384 unmapped: 70483968 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4889619 data_alloc: 234881024 data_used: 42999808
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19ffb1000/0x0/0x1bfc00000, data 0x427cb22/0x447d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f200ab0800 session 0x55f20276b4a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455704576 unmapped: 70475776 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 388 handle_osd_map epochs [389,389], i have 389, src has [1,389]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 389 ms_handle_reset con 0x55f204764800 session 0x55f20109ba40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 389 ms_handle_reset con 0x55f20a487000 session 0x55f202757680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 389 ms_handle_reset con 0x55f206feac00 session 0x55f2036ca780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 389 heartbeat osd_stat(store_statfs(0x19ffad000/0x0/0x1bfc00000, data 0x427e77b/0x4480000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 389 ms_handle_reset con 0x55f200941800 session 0x55f202756f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459005952 unmapped: 88186880 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457465856 unmapped: 89726976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 390 ms_handle_reset con 0x55f200ab0800 session 0x55f2025b9c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458530816 unmapped: 88662016 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 390 heartbeat osd_stat(store_statfs(0x19cf62000/0x0/0x1bfc00000, data 0x72c6438/0x74ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 391 ms_handle_reset con 0x55f204764800 session 0x55f2027574a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 391 heartbeat osd_stat(store_statfs(0x19cf62000/0x0/0x1bfc00000, data 0x72c6438/0x74ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458588160 unmapped: 88604672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5257327 data_alloc: 251658240 data_used: 46071808
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458588160 unmapped: 88604672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 391 ms_handle_reset con 0x55f20a487000 session 0x55f2036b21e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 391 ms_handle_reset con 0x55f214020c00 session 0x55f200868f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 391 ms_handle_reset con 0x55f200941800 session 0x55f2036b3a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 391 heartbeat osd_stat(store_statfs(0x19cf5f000/0x0/0x1bfc00000, data 0x72c810f/0x74ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458596352 unmapped: 88596480 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.594075203s of 12.305402756s, submitted: 49
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 88555520 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 392 heartbeat osd_stat(store_statfs(0x19cf5d000/0x0/0x1bfc00000, data 0x72c9d66/0x74cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 392 ms_handle_reset con 0x55f200ab0800 session 0x55f20335d2c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 88547328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 88547328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4931344 data_alloc: 251658240 data_used: 46071808
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 392 heartbeat osd_stat(store_statfs(0x19ffa4000/0x0/0x1bfc00000, data 0x4283d66/0x4489000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 392 heartbeat osd_stat(store_statfs(0x19ffa4000/0x0/0x1bfc00000, data 0x4283d66/0x4489000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 88547328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 88547328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 88547328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458661888 unmapped: 88530944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 393 ms_handle_reset con 0x55f20164e800 session 0x55f201725e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 393 ms_handle_reset con 0x55f20268d000 session 0x55f2025b9e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461045760 unmapped: 86147072 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4959254 data_alloc: 251658240 data_used: 46071808
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 393 ms_handle_reset con 0x55f204764800 session 0x55f203384000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462110720 unmapped: 85082112 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 393 heartbeat osd_stat(store_statfs(0x19fe4f000/0x0/0x1bfc00000, data 0x41ab8dd/0x43b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462151680 unmapped: 85041152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.917167664s of 10.137438774s, submitted: 84
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 393 ms_handle_reset con 0x55f204764800 session 0x55f2031730e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459866112 unmapped: 87326720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459866112 unmapped: 87326720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459866112 unmapped: 87326720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4918989 data_alloc: 251658240 data_used: 45215744
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459874304 unmapped: 87318528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f200941800 session 0x55f203185680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448585728 unmapped: 98607104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 394 heartbeat osd_stat(store_statfs(0x1a11ca000/0x0/0x1bfc00000, data 0x2c4a537/0x2e52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448585728 unmapped: 98607104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448585728 unmapped: 98607104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448585728 unmapped: 98607104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4642101 data_alloc: 234881024 data_used: 24956928
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f200ab0800 session 0x55f20121fc20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f20164e800 session 0x55f202750960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f20268d000 session 0x55f2037505a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f200941800 session 0x55f202f66d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f200ab0800 session 0x55f202f66b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f20164e800 session 0x55f200b430e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f204764800 session 0x55f2015bda40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f20a487000 session 0x55f202f33860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f200941800 session 0x55f2036923c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448610304 unmapped: 98582528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 394 heartbeat osd_stat(store_statfs(0x1a0a24000/0x0/0x1bfc00000, data 0x33f1547/0x35fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447373312 unmapped: 99819520 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 394 heartbeat osd_stat(store_statfs(0x1a0a24000/0x0/0x1bfc00000, data 0x33f1547/0x35fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.741806030s of 10.135712624s, submitted: 84
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f203710c00 session 0x55f2031734a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f20a487800 session 0x55f2025da000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447373312 unmapped: 99819520 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f200ab0800 session 0x55f202751860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441516032 unmapped: 105676800 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a1857000/0x0/0x1bfc00000, data 0x25be066/0x27c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441516032 unmapped: 105676800 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4511305 data_alloc: 218103808 data_used: 12959744
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441516032 unmapped: 105676800 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441516032 unmapped: 105676800 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441516032 unmapped: 105676800 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f20164e800 session 0x55f2027574a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441516032 unmapped: 105676800 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f200941800 session 0x55f2025b9c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a1857000/0x0/0x1bfc00000, data 0x25be066/0x27c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a1857000/0x0/0x1bfc00000, data 0x25be066/0x27c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441516032 unmapped: 105676800 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4511305 data_alloc: 218103808 data_used: 12959744
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f200ab0800 session 0x55f202756f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f203710c00 session 0x55f2036ca780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441819136 unmapped: 105373696 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441819136 unmapped: 105373696 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441827328 unmapped: 105365504 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441827328 unmapped: 105365504 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a1833000/0x0/0x1bfc00000, data 0x25e2076/0x27eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441827328 unmapped: 105365504 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4564431 data_alloc: 218103808 data_used: 19857408
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441827328 unmapped: 105365504 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441827328 unmapped: 105365504 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441827328 unmapped: 105365504 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f20809ec00 session 0x55f201695680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f202d2dc00 session 0x55f20335d860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f200941800 session 0x55f202f67c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f200ab0800 session 0x55f203172b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.575592995s of 15.742922783s, submitted: 70
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f202d2dc00 session 0x55f201695a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f203710c00 session 0x55f202736d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f20809ec00 session 0x55f20271b680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f200941800 session 0x55f202f33c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f200ab0800 session 0x55f200869860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442146816 unmapped: 105046016 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442146816 unmapped: 105046016 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4615387 data_alloc: 218103808 data_used: 19857408
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a1259000/0x0/0x1bfc00000, data 0x2bbc076/0x2dc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442146816 unmapped: 105046016 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442146816 unmapped: 105046016 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442146816 unmapped: 105046016 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a1259000/0x0/0x1bfc00000, data 0x2bbc076/0x2dc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442146816 unmapped: 105046016 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f202d2dc00 session 0x55f203184780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444301312 unmapped: 102891520 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f203710c00 session 0x55f202f32960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4648189 data_alloc: 218103808 data_used: 20058112
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a1239000/0x0/0x1bfc00000, data 0x2bdc076/0x2de5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f20809ec00 session 0x55f2016950e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f200941800 session 0x55f20274fe00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444301312 unmapped: 102891520 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444391424 unmapped: 102801408 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445415424 unmapped: 101777408 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4709328 data_alloc: 234881024 data_used: 24592384
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a0e74000/0x0/0x1bfc00000, data 0x2f9e0a9/0x31a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.254899025s of 13.528295517s, submitted: 61
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4706656 data_alloc: 234881024 data_used: 24596480
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a0e72000/0x0/0x1bfc00000, data 0x2fa10a9/0x31ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447430656 unmapped: 99762176 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449904640 unmapped: 97288192 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449970176 unmapped: 97222656 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4775348 data_alloc: 234881024 data_used: 25903104
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x19f4b2000/0x0/0x1bfc00000, data 0x37c10a9/0x39cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449978368 unmapped: 97214464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449978368 unmapped: 97214464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f203710c00 session 0x55f1ffce6780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x19f4ae000/0x0/0x1bfc00000, data 0x37c30a9/0x39ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449978368 unmapped: 97214464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449978368 unmapped: 97214464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.387397766s of 12.696340561s, submitted: 78
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 396 ms_handle_reset con 0x55f20809ec00 session 0x55f2036b21e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449978368 unmapped: 97214464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4780218 data_alloc: 234881024 data_used: 25911296
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 396 heartbeat osd_stat(store_statfs(0x19f4aa000/0x0/0x1bfc00000, data 0x37c4d02/0x39d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449978368 unmapped: 97214464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 396 ms_handle_reset con 0x55f20288f800 session 0x55f20171c960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449978368 unmapped: 97214464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 397 ms_handle_reset con 0x55f20274d800 session 0x55f2036ca000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 397 ms_handle_reset con 0x55f200941800 session 0x55f202737c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 397 ms_handle_reset con 0x55f20274d800 session 0x55f202d41a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 475037696 unmapped: 72155136 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 397 ms_handle_reset con 0x55f20288f800 session 0x55f2031841e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 397 heartbeat osd_stat(store_statfs(0x19e01d000/0x0/0x1bfc00000, data 0x4c5395b/0x4e61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452313088 unmapped: 94879744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 398 ms_handle_reset con 0x55f202ed1800 session 0x55f2015bda40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452321280 unmapped: 94871552 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4964365 data_alloc: 234881024 data_used: 31469568
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 399 ms_handle_reset con 0x55f202402400 session 0x55f2025db680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452329472 unmapped: 94863360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452329472 unmapped: 94863360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 399 ms_handle_reset con 0x55f200941800 session 0x55f200095680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 399 ms_handle_reset con 0x55f20274d800 session 0x55f2017243c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 399 ms_handle_reset con 0x55f20288f800 session 0x55f202739c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 399 heartbeat osd_stat(store_statfs(0x19e017000/0x0/0x1bfc00000, data 0x4c5727d/0x4e67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449495040 unmapped: 97697792 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 399 heartbeat osd_stat(store_statfs(0x19e017000/0x0/0x1bfc00000, data 0x4c5727d/0x4e67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449495040 unmapped: 97697792 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 399 ms_handle_reset con 0x55f202c5b400 session 0x55f2025b8960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 399 ms_handle_reset con 0x55f202ed1800 session 0x55f2033850e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 399 ms_handle_reset con 0x55f202ed1800 session 0x55f2036ca960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 399 heartbeat osd_stat(store_statfs(0x19e017000/0x0/0x1bfc00000, data 0x4c5727d/0x4e67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 97673216 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4957963 data_alloc: 234881024 data_used: 31469568
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 97673216 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 97673216 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 97673216 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.282118797s of 13.623093605s, submitted: 66
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 97673216 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19e013000/0x0/0x1bfc00000, data 0x4c58dbc/0x4e6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f200941800 session 0x55f202f66b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f20288f800 session 0x55f202ef3c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f20274d800 session 0x55f203184b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f202c5b400 session 0x55f2033845a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449536000 unmapped: 97656832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f202c5b400 session 0x55f203751a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963611 data_alloc: 234881024 data_used: 31477760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f200941800 session 0x55f20335c000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f20274d800 session 0x55f202e93a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f203775000 session 0x55f202739c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f20e437000 session 0x55f2025db680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f200941800 session 0x55f2031841e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f20274d800 session 0x55f2036b21e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19d78e000/0x0/0x1bfc00000, data 0x54dddbc/0x56ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5034672 data_alloc: 234881024 data_used: 31477760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19d78e000/0x0/0x1bfc00000, data 0x54dddbc/0x56ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.655982971s of 11.790694237s, submitted: 39
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5033616 data_alloc: 234881024 data_used: 31473664
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f202c5b400 session 0x55f202f33c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f203775000 session 0x55f20271b680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19d78f000/0x0/0x1bfc00000, data 0x54dddbc/0x56ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5034168 data_alloc: 234881024 data_used: 31539200
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451461120 unmapped: 95731712 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19d78f000/0x0/0x1bfc00000, data 0x54dddbc/0x56ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5150808 data_alloc: 251658240 data_used: 45867008
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19d78f000/0x0/0x1bfc00000, data 0x54dddbc/0x56ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19d78f000/0x0/0x1bfc00000, data 0x54dddbc/0x56ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5150808 data_alloc: 251658240 data_used: 45867008
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19d78f000/0x0/0x1bfc00000, data 0x54dddbc/0x56ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.328330994s of 17.342882156s, submitted: 4
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455442432 unmapped: 91750400 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455737344 unmapped: 91455488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451870720 unmapped: 95322112 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5272717 data_alloc: 251658240 data_used: 48615424
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451870720 unmapped: 95322112 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19cbcd000/0x0/0x1bfc00000, data 0x609fdbc/0x62b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19cbcd000/0x0/0x1bfc00000, data 0x609fdbc/0x62b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451870720 unmapped: 95322112 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451870720 unmapped: 95322112 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19cbcd000/0x0/0x1bfc00000, data 0x609fdbc/0x62b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452935680 unmapped: 94257152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f203711400 session 0x55f201695a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f20288f400 session 0x55f2008685a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f203711400 session 0x55f2025b9860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449036288 unmapped: 98156544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5089835 data_alloc: 234881024 data_used: 39350272
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449036288 unmapped: 98156544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449036288 unmapped: 98156544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19da81000/0x0/0x1bfc00000, data 0x51ebdbc/0x53fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449036288 unmapped: 98156544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449036288 unmapped: 98156544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.088488579s of 12.624095917s, submitted: 96
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5090363 data_alloc: 234881024 data_used: 39350272
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19da81000/0x0/0x1bfc00000, data 0x51ebdbc/0x53fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5090363 data_alloc: 234881024 data_used: 39350272
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f200ab0800 session 0x55f2016941e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f202d2dc00 session 0x55f2036b23c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450093056 unmapped: 97099776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19da81000/0x0/0x1bfc00000, data 0x51ebdbc/0x53fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450093056 unmapped: 97099776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450093056 unmapped: 97099776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19da81000/0x0/0x1bfc00000, data 0x51ebdbc/0x53fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f200941800 session 0x55f2031723c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 96935936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4935110 data_alloc: 234881024 data_used: 32653312
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 96935936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.516612053s of 12.381522179s, submitted: 29
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 96935936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 96935936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19e87c000/0x0/0x1bfc00000, data 0x43f1d89/0x4601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 96935936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 96935936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4935110 data_alloc: 234881024 data_used: 32653312
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 96935936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19e87c000/0x0/0x1bfc00000, data 0x43f1d89/0x4601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 96935936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450281472 unmapped: 96911360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450281472 unmapped: 96911360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450281472 unmapped: 96911360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4944442 data_alloc: 234881024 data_used: 33980416
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19e877000/0x0/0x1bfc00000, data 0x43f6d89/0x4606000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450281472 unmapped: 96911360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450281472 unmapped: 96911360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450281472 unmapped: 96911360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19e877000/0x0/0x1bfc00000, data 0x43f6d89/0x4606000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450281472 unmapped: 96911360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f200ab0800 session 0x55f2031725a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.455163956s of 12.743838310s, submitted: 4
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4944266 data_alloc: 234881024 data_used: 33980416
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f2010c3c00 session 0x55f202750f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f205c03000 session 0x55f20335d860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f20288f400 session 0x55f20335dc20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19e878000/0x0/0x1bfc00000, data 0x43f6d89/0x4606000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4897430 data_alloc: 234881024 data_used: 33878016
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19ee10000/0x0/0x1bfc00000, data 0x3e5ed89/0x406e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19ee10000/0x0/0x1bfc00000, data 0x3e5ed89/0x406e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f202d2dc00 session 0x55f2028e9a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4897430 data_alloc: 234881024 data_used: 33878016
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.547045708s of 10.706357002s, submitted: 32
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f200ab0800 session 0x55f202ef2d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19ee10000/0x0/0x1bfc00000, data 0x3e5ed89/0x406e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 401 ms_handle_reset con 0x55f2010c3c00 session 0x55f2036b2000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449830912 unmapped: 97361920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 401 ms_handle_reset con 0x55f20288f400 session 0x55f2027523c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449830912 unmapped: 97361920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 401 heartbeat osd_stat(store_statfs(0x1a0299000/0x0/0x1bfc00000, data 0x29d3a36/0x2be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 401 ms_handle_reset con 0x55f205c03000 session 0x55f2017243c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449830912 unmapped: 97361920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4676517 data_alloc: 218103808 data_used: 22364160
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a0296000/0x0/0x1bfc00000, data 0x29d56ff/0x2be7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449830912 unmapped: 97361920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449830912 unmapped: 97361920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a0296000/0x0/0x1bfc00000, data 0x29d56ff/0x2be7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.1 total, 600.0 interval
Cumulative writes: 63K writes, 249K keys, 63K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.04 MB/s
Cumulative WAL: 63K writes, 23K syncs, 2.71 writes per sync, written: 0.24 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4106 writes, 15K keys, 4106 commit groups, 1.0 writes per commit group, ingest: 15.38 MB, 0.03 MB/s
Interval WAL: 4106 writes, 1694 syncs, 2.42 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449830912 unmapped: 97361920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a0296000/0x0/0x1bfc00000, data 0x29d56ff/0x2be7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 402 handle_osd_map epochs [403,403], i have 403, src has [1,403]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449839104 unmapped: 97353728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 403 ms_handle_reset con 0x55f20a487800 session 0x55f2036b25a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 403 ms_handle_reset con 0x55f204764800 session 0x55f203693a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 403 heartbeat osd_stat(store_statfs(0x1a0292000/0x0/0x1bfc00000, data 0x29d725a/0x2bea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449855488 unmapped: 97337344 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4679267 data_alloc: 218103808 data_used: 22364160
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.596619606s of 10.162866592s, submitted: 95
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443441152 unmapped: 103751680 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 403 ms_handle_reset con 0x55f200ab0800 session 0x55f200b43860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443441152 unmapped: 103751680 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443441152 unmapped: 103751680 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e44000/0x0/0x1bfc00000, data 0x1e26d79/0x2039000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4526142 data_alloc: 218103808 data_used: 12759040
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e44000/0x0/0x1bfc00000, data 0x1e26d79/0x2039000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f2010c3c00 session 0x55f2036b2000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4524382 data_alloc: 218103808 data_used: 12754944
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f20288f400 session 0x55f2028e9a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e44000/0x0/0x1bfc00000, data 0x1e26d79/0x2039000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e46000/0x0/0x1bfc00000, data 0x1e26d6a/0x2038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4524382 data_alloc: 218103808 data_used: 12754944
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e46000/0x0/0x1bfc00000, data 0x1e26d6a/0x2038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443457536 unmapped: 103735296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4524382 data_alloc: 218103808 data_used: 12754944
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e46000/0x0/0x1bfc00000, data 0x1e26d6a/0x2038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443457536 unmapped: 103735296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443457536 unmapped: 103735296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e46000/0x0/0x1bfc00000, data 0x1e26d6a/0x2038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443457536 unmapped: 103735296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443457536 unmapped: 103735296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e46000/0x0/0x1bfc00000, data 0x1e26d6a/0x2038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443457536 unmapped: 103735296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4524382 data_alloc: 218103808 data_used: 12754944
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443457536 unmapped: 103735296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443457536 unmapped: 103735296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443465728 unmapped: 103727104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443465728 unmapped: 103727104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e46000/0x0/0x1bfc00000, data 0x1e26d6a/0x2038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443465728 unmapped: 103727104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4524382 data_alloc: 218103808 data_used: 12754944
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443465728 unmapped: 103727104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443465728 unmapped: 103727104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443465728 unmapped: 103727104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443473920 unmapped: 103718912 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443473920 unmapped: 103718912 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4524382 data_alloc: 218103808 data_used: 12754944
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e46000/0x0/0x1bfc00000, data 0x1e26d6a/0x2038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443473920 unmapped: 103718912 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.465164185s of 35.908718109s, submitted: 41
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e46000/0x0/0x1bfc00000, data 0x1e26d6a/0x2038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f205c03000 session 0x55f2008685a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f205c03000 session 0x55f2031841e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f200ab0800 session 0x55f2025db680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f2010c3c00 session 0x55f202739c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f20288f400 session 0x55f20335c000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443482112 unmapped: 103710720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443482112 unmapped: 103710720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443482112 unmapped: 103710720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443482112 unmapped: 103710720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4533409 data_alloc: 218103808 data_used: 12754944
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443482112 unmapped: 103710720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443482112 unmapped: 103710720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443482112 unmapped: 103710720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443482112 unmapped: 103710720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443490304 unmapped: 103702528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4533409 data_alloc: 218103808 data_used: 12754944
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443490304 unmapped: 103702528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443490304 unmapped: 103702528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443490304 unmapped: 103702528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443490304 unmapped: 103702528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443490304 unmapped: 103702528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4533569 data_alloc: 218103808 data_used: 12759040
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443490304 unmapped: 103702528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443498496 unmapped: 103694336 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443498496 unmapped: 103694336 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443498496 unmapped: 103694336 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f204764800 session 0x55f203184b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443498496 unmapped: 103694336 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4533569 data_alloc: 218103808 data_used: 12759040
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f204764800 session 0x55f202ef3c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443498496 unmapped: 103694336 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f200ab0800 session 0x55f2036ca960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443498496 unmapped: 103694336 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f2010c3c00 session 0x55f2033850e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443498496 unmapped: 103694336 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443498496 unmapped: 103694336 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443506688 unmapped: 103686144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4537569 data_alloc: 218103808 data_used: 13287424
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443506688 unmapped: 103686144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443506688 unmapped: 103686144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443506688 unmapped: 103686144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443514880 unmapped: 103677952 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443514880 unmapped: 103677952 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4537569 data_alloc: 218103808 data_used: 13287424
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443514880 unmapped: 103677952 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443514880 unmapped: 103677952 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443523072 unmapped: 103669760 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443523072 unmapped: 103669760 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443523072 unmapped: 103669760 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4537569 data_alloc: 218103808 data_used: 13287424
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.371913910s of 34.493495941s, submitted: 22
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445251584 unmapped: 101941248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445251584 unmapped: 101941248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445120512 unmapped: 102072320 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445120512 unmapped: 102072320 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445120512 unmapped: 102072320 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600855 data_alloc: 218103808 data_used: 13512704
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a05f5000/0x0/0x1bfc00000, data 0x2676dcc/0x2889000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445120512 unmapped: 102072320 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445128704 unmapped: 102064128 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445128704 unmapped: 102064128 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445128704 unmapped: 102064128 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445128704 unmapped: 102064128 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600839 data_alloc: 218103808 data_used: 13516800
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445136896 unmapped: 102055936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a05f2000/0x0/0x1bfc00000, data 0x2679dcc/0x288c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445136896 unmapped: 102055936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445136896 unmapped: 102055936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445145088 unmapped: 102047744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445145088 unmapped: 102047744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600839 data_alloc: 218103808 data_used: 13516800
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a05f2000/0x0/0x1bfc00000, data 0x2679dcc/0x288c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445145088 unmapped: 102047744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445145088 unmapped: 102047744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445145088 unmapped: 102047744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445145088 unmapped: 102047744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a05f2000/0x0/0x1bfc00000, data 0x2679dcc/0x288c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445153280 unmapped: 102039552 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600839 data_alloc: 218103808 data_used: 13516800
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a05f2000/0x0/0x1bfc00000, data 0x2679dcc/0x288c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a05f2000/0x0/0x1bfc00000, data 0x2679dcc/0x288c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [5])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445153280 unmapped: 102039552 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445161472 unmapped: 102031360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445161472 unmapped: 102031360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f203711400 session 0x55f2025b9e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448839680 unmapped: 98353152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448839680 unmapped: 98353152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4614119 data_alloc: 234881024 data_used: 17383424
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448839680 unmapped: 98353152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.575277328s of 25.811300278s, submitted: 74
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f20274d800 session 0x55f2017234a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a05f1000/0x0/0x1bfc00000, data 0x2679e2e/0x288d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448839680 unmapped: 98353152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448839680 unmapped: 98353152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448839680 unmapped: 98353152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448847872 unmapped: 98344960 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4615781 data_alloc: 234881024 data_used: 17383424
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f200ab0800 session 0x55f2027514a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448847872 unmapped: 98344960 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450240512 unmapped: 96952320 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f2010c3c00 session 0x55f201722d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f203711400 session 0x55f2016b2000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0373000/0x0/0x1bfc00000, data 0x28f7e2e/0x2b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450215936 unmapped: 96976896 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450215936 unmapped: 96976896 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450215936 unmapped: 96976896 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4640989 data_alloc: 234881024 data_used: 17383424
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450215936 unmapped: 96976896 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450215936 unmapped: 96976896 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0373000/0x0/0x1bfc00000, data 0x28f7e2e/0x2b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.533443451s of 11.683992386s, submitted: 45
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450191360 unmapped: 97001472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 405 handle_osd_map epochs [406,406], i have 406, src has [1,406]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 406 heartbeat osd_stat(store_statfs(0x1a036e000/0x0/0x1bfc00000, data 0x28f9a8f/0x2b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [0,0,1])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 406 ms_handle_reset con 0x55f204764800 session 0x55f20171cf00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448929792 unmapped: 98263040 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448946176 unmapped: 98246656 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4653679 data_alloc: 234881024 data_used: 17399808
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 407 ms_handle_reset con 0x55f203767800 session 0x55f2015bd2c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a0253000/0x0/0x1bfc00000, data 0x2a113bc/0x2c29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449724416 unmapped: 97468416 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f203775000 session 0x55f202ef2960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f202c5b400 session 0x55f2036b2780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449806336 unmapped: 97386496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 408 heartbeat osd_stat(store_statfs(0x19fc7b000/0x0/0x1bfc00000, data 0x2fea031/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449847296 unmapped: 97345536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f200ab0800 session 0x55f202739e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f2010c3c00 session 0x55f202f32780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f203711400 session 0x55f203384960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f200ab0800 session 0x55f203384780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f2010c3c00 session 0x55f202ef3680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450109440 unmapped: 97083392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f202c5b400 session 0x55f200094f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450076672 unmapped: 97116160 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4753373 data_alloc: 234881024 data_used: 17399808
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f204764800 session 0x55f200b125a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449306624 unmapped: 97886208 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f20ef09c00 session 0x55f2025b8000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 408 heartbeat osd_stat(store_statfs(0x19f900000/0x0/0x1bfc00000, data 0x33630a3/0x357e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 408 handle_osd_map epochs [409,409], i have 409, src has [1,409]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449306624 unmapped: 97886208 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 409 ms_handle_reset con 0x55f205c03c00 session 0x55f2037503c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 409 ms_handle_reset con 0x55f200ab0800 session 0x55f20121e3c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449298432 unmapped: 97894400 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f203775000 session 0x55f202f674a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 99057664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f8f8000/0x0/0x1bfc00000, data 0x3366857/0x3584000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 99057664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4761017 data_alloc: 234881024 data_used: 17408000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f8f8000/0x0/0x1bfc00000, data 0x3366857/0x3584000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f8f8000/0x0/0x1bfc00000, data 0x3366857/0x3584000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 99057664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f8f8000/0x0/0x1bfc00000, data 0x3366857/0x3584000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f8f8000/0x0/0x1bfc00000, data 0x3366857/0x3584000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 99057664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.020772934s of 14.548928261s, submitted: 349
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f2010c3c00 session 0x55f202e923c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446914560 unmapped: 100278272 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f8d5000/0x0/0x1bfc00000, data 0x338a87a/0x35a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446914560 unmapped: 100278272 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 99590144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4788342 data_alloc: 234881024 data_used: 20750336
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 99590144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 99590144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f8d5000/0x0/0x1bfc00000, data 0x338a87a/0x35a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 99590144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 99590144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 99590144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4788342 data_alloc: 234881024 data_used: 20750336
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 99590144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 99590144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f8d4000/0x0/0x1bfc00000, data 0x338b87a/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447610880 unmapped: 99581952 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447610880 unmapped: 99581952 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f203767400 session 0x55f200b0e000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.310371399s of 12.341070175s, submitted: 7
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453107712 unmapped: 94085120 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894434 data_alloc: 234881024 data_used: 20783104
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f203767400 session 0x55f20171d680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19eb8e000/0x0/0x1bfc00000, data 0x40d187a/0x42f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f200ab0800 session 0x55f201692d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452583424 unmapped: 94609408 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f2010c3c00 session 0x55f203185680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19eae3000/0x0/0x1bfc00000, data 0x41798ad/0x439a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453656576 unmapped: 93536256 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453656576 unmapped: 93536256 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4913878 data_alloc: 234881024 data_used: 21114880
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453656576 unmapped: 93536256 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19eae3000/0x0/0x1bfc00000, data 0x41798ad/0x439a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4915646 data_alloc: 234881024 data_used: 21114880
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19eabf000/0x0/0x1bfc00000, data 0x419e8ad/0x43bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.402293205s of 13.857833862s, submitted: 140
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452616192 unmapped: 94576640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f202c5b400 session 0x55f2036cb680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f204764800 session 0x55f20051e3c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452673536 unmapped: 94519296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4770860 data_alloc: 234881024 data_used: 17530880
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f200ab0800 session 0x55f2025dba40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 94511104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 94511104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fa17000/0x0/0x1bfc00000, data 0x3246828/0x3465000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 94511104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 94511104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 94511104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4767962 data_alloc: 234881024 data_used: 17416192
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 94511104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 94511104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 94511104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fa17000/0x0/0x1bfc00000, data 0x3246828/0x3465000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452689920 unmapped: 94502912 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452689920 unmapped: 94502912 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4767962 data_alloc: 234881024 data_used: 17416192
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fa17000/0x0/0x1bfc00000, data 0x3246828/0x3465000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452689920 unmapped: 94502912 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.871665955s of 13.112590790s, submitted: 70
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f203775000 session 0x55f201694b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f205c03c00 session 0x55f202ef3c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452689920 unmapped: 94502912 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f2010c3c00 session 0x55f201694780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452698112 unmapped: 94494720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452698112 unmapped: 94494720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f60a000/0x0/0x1bfc00000, data 0x32467f5/0x3463000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452698112 unmapped: 94494720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4764617 data_alloc: 234881024 data_used: 17412096
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f60a000/0x0/0x1bfc00000, data 0x32467f5/0x3463000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f20288f400 session 0x55f203173e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f205c03000 session 0x55f2033852c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452698112 unmapped: 94494720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f20288f400 session 0x55f2037514a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452706304 unmapped: 94486528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452706304 unmapped: 94486528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452730880 unmapped: 94461952 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fe62000/0x0/0x1bfc00000, data 0x29ef793/0x2c0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452730880 unmapped: 94461952 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4694709 data_alloc: 234881024 data_used: 16662528
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452747264 unmapped: 94445568 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.015648842s of 10.203917503s, submitted: 52
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 411 ms_handle_reset con 0x55f200ab0800 session 0x55f20335c1e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 411 ms_handle_reset con 0x55f2010c3c00 session 0x55f200892f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452763648 unmapped: 94429184 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452763648 unmapped: 94429184 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452763648 unmapped: 94429184 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 412 ms_handle_reset con 0x55f203775000 session 0x55f201724f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 412 heartbeat osd_stat(store_statfs(0x1a079f000/0x0/0x1bfc00000, data 0x20b10f9/0x22ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453836800 unmapped: 93356032 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627855 data_alloc: 234881024 data_used: 16666624
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 413 ms_handle_reset con 0x55f205c03c00 session 0x55f2036cbc20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 413 ms_handle_reset con 0x55f203775000 session 0x55f203750000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453844992 unmapped: 93347840 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453844992 unmapped: 93347840 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453844992 unmapped: 93347840 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453844992 unmapped: 93347840 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0807000/0x0/0x1bfc00000, data 0x1e36d60/0x2054000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453853184 unmapped: 93339648 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4610289 data_alloc: 234881024 data_used: 16666624
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453853184 unmapped: 93339648 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0807000/0x0/0x1bfc00000, data 0x1e36d60/0x2054000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453853184 unmapped: 93339648 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.563073158s of 10.798902512s, submitted: 72
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454926336 unmapped: 92266496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454926336 unmapped: 92266496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454926336 unmapped: 92266496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612399 data_alloc: 234881024 data_used: 16666624
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454934528 unmapped: 92258304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454934528 unmapped: 92258304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454934528 unmapped: 92258304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454934528 unmapped: 92258304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454934528 unmapped: 92258304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612399 data_alloc: 234881024 data_used: 16666624
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454950912 unmapped: 92241920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454950912 unmapped: 92241920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454950912 unmapped: 92241920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454950912 unmapped: 92241920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454959104 unmapped: 92233728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612399 data_alloc: 234881024 data_used: 16666624
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454959104 unmapped: 92233728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454959104 unmapped: 92233728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454959104 unmapped: 92233728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454959104 unmapped: 92233728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454959104 unmapped: 92233728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612399 data_alloc: 234881024 data_used: 16666624
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454959104 unmapped: 92233728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454959104 unmapped: 92233728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454967296 unmapped: 92225536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454967296 unmapped: 92225536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454967296 unmapped: 92225536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612399 data_alloc: 234881024 data_used: 16666624
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454967296 unmapped: 92225536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454967296 unmapped: 92225536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454967296 unmapped: 92225536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454975488 unmapped: 92217344 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454975488 unmapped: 92217344 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612399 data_alloc: 234881024 data_used: 16666624
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454983680 unmapped: 92209152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454983680 unmapped: 92209152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454983680 unmapped: 92209152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454983680 unmapped: 92209152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454983680 unmapped: 92209152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612399 data_alloc: 234881024 data_used: 16666624
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454983680 unmapped: 92209152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454983680 unmapped: 92209152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454983680 unmapped: 92209152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455000064 unmapped: 92192768 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455000064 unmapped: 92192768 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612399 data_alloc: 234881024 data_used: 16666624
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455000064 unmapped: 92192768 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455000064 unmapped: 92192768 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455008256 unmapped: 92184576 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455008256 unmapped: 92184576 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.162109375s of 42.173114777s, submitted: 22
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f202ef3860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f200b130e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f2016b2b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f20121f4a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f203750d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455385088 unmapped: 91807744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4660095 data_alloc: 234881024 data_used: 16666624
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455385088 unmapped: 91807744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a043b000/0x0/0x1bfc00000, data 0x24148d7/0x2633000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455385088 unmapped: 91807744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455385088 unmapped: 91807744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455385088 unmapped: 91807744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455385088 unmapped: 91807744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4660255 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f2025dbc20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203775000 session 0x55f200b0f2c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455385088 unmapped: 91807744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f205c03c00 session 0x55f20171c960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f200869860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455385088 unmapped: 91807744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a043b000/0x0/0x1bfc00000, data 0x24148d7/0x2633000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455401472 unmapped: 91791360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a043b000/0x0/0x1bfc00000, data 0x24148d7/0x2633000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455401472 unmapped: 91791360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455401472 unmapped: 91791360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4665626 data_alloc: 234881024 data_used: 17285120
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456237056 unmapped: 90955776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a043b000/0x0/0x1bfc00000, data 0x24148d7/0x2633000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456237056 unmapped: 90955776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456237056 unmapped: 90955776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a043b000/0x0/0x1bfc00000, data 0x24148d7/0x2633000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456237056 unmapped: 90955776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f200b13860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.005203247s of 15.191963196s, submitted: 17
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f202751860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456237056 unmapped: 90955776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4617078 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203775000 session 0x55f202f32000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a17000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4616487 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a17000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a17000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4616487 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a17000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456261632 unmapped: 90931200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456261632 unmapped: 90931200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a17000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456261632 unmapped: 90931200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456261632 unmapped: 90931200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4616487 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456261632 unmapped: 90931200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456261632 unmapped: 90931200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456261632 unmapped: 90931200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a17000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456261632 unmapped: 90931200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456269824 unmapped: 90923008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4616487 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456269824 unmapped: 90923008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456269824 unmapped: 90923008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a17000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456269824 unmapped: 90923008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456269824 unmapped: 90923008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a17000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456269824 unmapped: 90923008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4616487 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456269824 unmapped: 90923008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456278016 unmapped: 90914816 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f205c03000 session 0x55f20121e960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f205c03000 session 0x55f20335c5a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f20335c780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f203384000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.268827438s of 28.641456604s, submitted: 27
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456278016 unmapped: 90914816 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f2017243c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203775000 session 0x55f201722000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203775000 session 0x55f2015bd2c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456515584 unmapped: 90677248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f202ef21e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f20276b860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0263000/0x0/0x1bfc00000, data 0x25eb8e7/0x280b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456515584 unmapped: 90677248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4684809 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0263000/0x0/0x1bfc00000, data 0x25eb8e7/0x280b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456515584 unmapped: 90677248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0263000/0x0/0x1bfc00000, data 0x25eb8e7/0x280b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456515584 unmapped: 90677248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456515584 unmapped: 90677248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0263000/0x0/0x1bfc00000, data 0x25eb8e7/0x280b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456515584 unmapped: 90677248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456515584 unmapped: 90677248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4684809 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0263000/0x0/0x1bfc00000, data 0x25eb8e7/0x280b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456523776 unmapped: 90669056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456523776 unmapped: 90669056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0263000/0x0/0x1bfc00000, data 0x25eb8e7/0x280b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456523776 unmapped: 90669056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456523776 unmapped: 90669056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.077899933s of 11.461033821s, submitted: 18
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f202751860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456523776 unmapped: 90669056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4684878 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456523776 unmapped: 90669056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456523776 unmapped: 90669056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457359360 unmapped: 89833472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0262000/0x0/0x1bfc00000, data 0x25eb90a/0x280c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457359360 unmapped: 89833472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457359360 unmapped: 89833472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4733038 data_alloc: 234881024 data_used: 23408640
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457359360 unmapped: 89833472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0262000/0x0/0x1bfc00000, data 0x25eb90a/0x280c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457359360 unmapped: 89833472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0262000/0x0/0x1bfc00000, data 0x25eb90a/0x280c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457359360 unmapped: 89833472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f205c03000 session 0x55f20171c960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f2025dbc20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457359360 unmapped: 89833472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.829244614s of 10.177327156s, submitted: 4
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457359360 unmapped: 89833472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4732112 data_alloc: 234881024 data_used: 23408640
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451477504 unmapped: 95715328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388fa/0x2058000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451477504 unmapped: 95715328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f205c03000 session 0x55f202f332c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451477504 unmapped: 95715328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451477504 unmapped: 95715328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451485696 unmapped: 95707136 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624342 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451485696 unmapped: 95707136 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624342 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624342 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624342 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624342 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 95682560 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 95682560 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 95682560 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 95682560 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 95682560 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624342 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 95682560 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 95682560 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451518464 unmapped: 95674368 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451518464 unmapped: 95674368 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451518464 unmapped: 95674368 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624342 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451518464 unmapped: 95674368 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451518464 unmapped: 95674368 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451518464 unmapped: 95674368 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451518464 unmapped: 95674368 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451526656 unmapped: 95666176 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624342 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451526656 unmapped: 95666176 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451534848 unmapped: 95657984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451534848 unmapped: 95657984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f2037514a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f200b0f2c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f2008692c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f2031723c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451534848 unmapped: 95657984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.333183289s of 44.744827271s, submitted: 24
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f20335cb40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f2031721e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f202e92b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4705227 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f205c03000 session 0x55f202ef2000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f201693860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0091000/0x0/0x1bfc00000, data 0x27bd8e7/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4705003 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f202ef3c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f202f32780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0091000/0x0/0x1bfc00000, data 0x27bd8e7/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f201695e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203775000 session 0x55f200b13860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 95879168 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4769618 data_alloc: 234881024 data_used: 24059904
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a10cf000/0x0/0x1bfc00000, data 0x27bd91a/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a10cf000/0x0/0x1bfc00000, data 0x27bd91a/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a10cf000/0x0/0x1bfc00000, data 0x27bd91a/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4769618 data_alloc: 234881024 data_used: 24059904
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a10cf000/0x0/0x1bfc00000, data 0x27bd91a/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452567040 unmapped: 94625792 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.283365250s of 20.507976532s, submitted: 32
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4801728 data_alloc: 234881024 data_used: 24309760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0d9c000/0x0/0x1bfc00000, data 0x2ae891a/0x2d0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454991872 unmapped: 92200960 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455090176 unmapped: 92102656 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455090176 unmapped: 92102656 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0d38000/0x0/0x1bfc00000, data 0x2b3d91a/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455090176 unmapped: 92102656 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455090176 unmapped: 92102656 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4814422 data_alloc: 234881024 data_used: 24125440
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455090176 unmapped: 92102656 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0d38000/0x0/0x1bfc00000, data 0x2b3d91a/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455090176 unmapped: 92102656 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0d38000/0x0/0x1bfc00000, data 0x2b3d91a/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455098368 unmapped: 92094464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455098368 unmapped: 92094464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455098368 unmapped: 92094464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0d38000/0x0/0x1bfc00000, data 0x2b3d91a/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4814438 data_alloc: 234881024 data_used: 24125440
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455098368 unmapped: 92094464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0d38000/0x0/0x1bfc00000, data 0x2b3d91a/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455098368 unmapped: 92094464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455098368 unmapped: 92094464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455098368 unmapped: 92094464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.434202194s of 14.602894783s, submitted: 54
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203775000 session 0x55f2028e8000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454426624 unmapped: 92766208 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f202756960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f2027510e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4638623 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388fa/0x2058000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4638623 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4638623 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4638623 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4638623 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450076672 unmapped: 97116160 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450076672 unmapped: 97116160 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450076672 unmapped: 97116160 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450076672 unmapped: 97116160 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450076672 unmapped: 97116160 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4638623 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.156669617s of 30.342002869s, submitted: 46
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4714460 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f200869860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f200b0e1e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f202d40d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f20126bc20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f203184f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a11a2000/0x0/0x1bfc00000, data 0x26ec939/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a11a2000/0x0/0x1bfc00000, data 0x26ec939/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4714476 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f203184960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203775000 session 0x55f2025b81e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f2016945a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a11a2000/0x0/0x1bfc00000, data 0x26ec939/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f203384960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450150400 unmapped: 97042432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4768236 data_alloc: 234881024 data_used: 24272896
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a11a2000/0x0/0x1bfc00000, data 0x26ec939/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a11a2000/0x0/0x1bfc00000, data 0x26ec939/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4768236 data_alloc: 234881024 data_used: 24272896
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a11a2000/0x0/0x1bfc00000, data 0x26ec939/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.087032318s of 20.319116592s, submitted: 41
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4820192 data_alloc: 234881024 data_used: 24297472
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453926912 unmapped: 93265920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454107136 unmapped: 93085696 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a09d3000/0x0/0x1bfc00000, data 0x2ebb939/0x30db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454344704 unmapped: 92848128 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0378000/0x0/0x1bfc00000, data 0x3515939/0x3735000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203767400 session 0x55f2008694a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f202753a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20bcd7800 session 0x55f202f32b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455450624 unmapped: 91742208 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f2037501e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f203385c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455450624 unmapped: 91742208 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f202f33c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203767400 session 0x55f203693860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4891608 data_alloc: 234881024 data_used: 25812992
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455450624 unmapped: 91742208 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20bcd7800 session 0x55f2015bcf00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f2036ca3c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455467008 unmapped: 91725824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a036f000/0x0/0x1bfc00000, data 0x351e939/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455467008 unmapped: 91725824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455573504 unmapped: 91619328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a036f000/0x0/0x1bfc00000, data 0x351e95c/0x373f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4931021 data_alloc: 251658240 data_used: 31932416
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a034f000/0x0/0x1bfc00000, data 0x353e95c/0x375f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4931021 data_alloc: 251658240 data_used: 31932416
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a034f000/0x0/0x1bfc00000, data 0x353e95c/0x375f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a034f000/0x0/0x1bfc00000, data 0x353e95c/0x375f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.620735168s of 19.012517929s, submitted: 116
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457244672 unmapped: 89948160 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x19fb5a000/0x0/0x1bfc00000, data 0x3d3395c/0x3f54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4999435 data_alloc: 251658240 data_used: 32567296
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457252864 unmapped: 89939968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x19fb5a000/0x0/0x1bfc00000, data 0x3d3395c/0x3f54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457269248 unmapped: 89923584 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457269248 unmapped: 89923584 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457269248 unmapped: 89923584 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x19fb49000/0x0/0x1bfc00000, data 0x3d4495c/0x3f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457269248 unmapped: 89923584 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5011871 data_alloc: 251658240 data_used: 32710656
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457269248 unmapped: 89923584 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457269248 unmapped: 89923584 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x19fb49000/0x0/0x1bfc00000, data 0x3d4495c/0x3f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457269248 unmapped: 89923584 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 89915392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x19fb49000/0x0/0x1bfc00000, data 0x3d4495c/0x3f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 89915392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x19fb49000/0x0/0x1bfc00000, data 0x3d4495c/0x3f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5012831 data_alloc: 251658240 data_used: 32780288
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 89915392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 89915392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 89915392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.378579140s of 13.722282410s, submitted: 41
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x19fb49000/0x0/0x1bfc00000, data 0x3d4495c/0x3f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 89915392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f2036cb4a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f2036cb0e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457285632 unmapped: 89907200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4852779 data_alloc: 234881024 data_used: 25817088
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457293824 unmapped: 89899008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203767400 session 0x55f202f66780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457293824 unmapped: 89899008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457293824 unmapped: 89899008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a091a000/0x0/0x1bfc00000, data 0x2f73939/0x3193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457293824 unmapped: 89899008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a091a000/0x0/0x1bfc00000, data 0x2f73939/0x3193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 89890816 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f202d40d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f200869860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4851771 data_alloc: 234881024 data_used: 25817088
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 89890816 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f2027510e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4660897 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4660897 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457326592 unmapped: 89866240 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457326592 unmapped: 89866240 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457326592 unmapped: 89866240 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457326592 unmapped: 89866240 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4660897 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457334784 unmapped: 89858048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457334784 unmapped: 89858048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457334784 unmapped: 89858048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457334784 unmapped: 89858048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457334784 unmapped: 89858048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4660897 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457334784 unmapped: 89858048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457334784 unmapped: 89858048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457342976 unmapped: 89849856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457342976 unmapped: 89849856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457342976 unmapped: 89849856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4660897 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457342976 unmapped: 89849856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457342976 unmapped: 89849856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457342976 unmapped: 89849856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457342976 unmapped: 89849856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457342976 unmapped: 89849856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4660897 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f20335dc20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f2008690e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203767400 session 0x55f20055d2c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f2037512c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.199741364s of 42.029754639s, submitted: 62
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f2027514a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f20171d0e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f201722d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203766000 session 0x55f2016930e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f202e92f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1047000/0x0/0x1bfc00000, data 0x28478e7/0x2a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4746486 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4746486 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1047000/0x0/0x1bfc00000, data 0x28478e7/0x2a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f202ef3e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1047000/0x0/0x1bfc00000, data 0x28478e7/0x2a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1047000/0x0/0x1bfc00000, data 0x28478e7/0x2a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4811898 data_alloc: 234881024 data_used: 25923584
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1047000/0x0/0x1bfc00000, data 0x28478e7/0x2a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1047000/0x0/0x1bfc00000, data 0x28478e7/0x2a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4811898 data_alloc: 234881024 data_used: 25923584
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.717470169s of 18.874071121s, submitted: 30
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1047000/0x0/0x1bfc00000, data 0x28478e7/0x2a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460619776 unmapped: 86573056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4879482 data_alloc: 234881024 data_used: 25952256
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462323712 unmapped: 84869120 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0aed000/0x0/0x1bfc00000, data 0x2d998e7/0x2fb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462790656 unmapped: 84402176 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0884000/0x0/0x1bfc00000, data 0x2ffc8e7/0x321c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461733888 unmapped: 85458944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0857000/0x0/0x1bfc00000, data 0x302f8e7/0x324f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461733888 unmapped: 85458944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461733888 unmapped: 85458944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4882830 data_alloc: 234881024 data_used: 26583040
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461733888 unmapped: 85458944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0857000/0x0/0x1bfc00000, data 0x302f8e7/0x324f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461733888 unmapped: 85458944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0857000/0x0/0x1bfc00000, data 0x302f8e7/0x324f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461733888 unmapped: 85458944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461733888 unmapped: 85458944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461742080 unmapped: 85450752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4878302 data_alloc: 234881024 data_used: 26583040
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461742080 unmapped: 85450752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461742080 unmapped: 85450752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a085c000/0x0/0x1bfc00000, data 0x30328e7/0x3252000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461742080 unmapped: 85450752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.940708160s of 15.051081657s, submitted: 93
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f2025dbc20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f202f332c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461742080 unmapped: 85450752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2038df400 session 0x55f2031734a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456318976 unmapped: 90873856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456318976 unmapped: 90873856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456318976 unmapped: 90873856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456318976 unmapped: 90873856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456318976 unmapped: 90873856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456318976 unmapped: 90873856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456318976 unmapped: 90873856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456318976 unmapped: 90873856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456327168 unmapped: 90865664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456327168 unmapped: 90865664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456327168 unmapped: 90865664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456335360 unmapped: 90857472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456335360 unmapped: 90857472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456335360 unmapped: 90857472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2038df400 session 0x55f20051e3c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f20276a1e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f2036b2960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f200ae25a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 64.340438843s of 64.467369080s, submitted: 35
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464347136 unmapped: 82845696 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,4,13])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f202736b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f200ae2d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f202739c20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f20171c960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f203185680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451108864 unmapped: 96083968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451108864 unmapped: 96083968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4769250 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451117056 unmapped: 96075776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 66K writes, 258K keys, 66K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.04 MB/s#012Cumulative WAL: 66K writes, 24K syncs, 2.70 writes per sync, written: 0.25 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2663 writes, 9401 keys, 2663 commit groups, 1.0 writes per commit group, ingest: 8.75 MB, 0.01 MB/s#012Interval WAL: 2663 writes, 1081 syncs, 2.46 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451117056 unmapped: 96075776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0e6f000/0x0/0x1bfc00000, data 0x2a1f8e7/0x2c3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451117056 unmapped: 96075776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451117056 unmapped: 96075776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2038df400 session 0x55f20276b860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451117056 unmapped: 96075776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f2008692c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0e6f000/0x0/0x1bfc00000, data 0x2a1f8e7/0x2c3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f2027523c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4769250 data_alloc: 234881024 data_used: 16670720
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451117056 unmapped: 96075776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f20126ba40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451117056 unmapped: 96075776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451125248 unmapped: 96067584 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4851574 data_alloc: 234881024 data_used: 27856896
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0e4b000/0x0/0x1bfc00000, data 0x2a438e7/0x2c63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f205d21400 session 0x55f202f33680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: mgrc ms_handle_reset ms_handle_reset con 0x55f2026ba400
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3465938080
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3465938080,v1:192.168.122.100:6801/3465938080]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: mgrc handle_mgr_configure stats_period=5
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20809c400 session 0x55f2031843c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202d2d000 session 0x55f200094960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4851574 data_alloc: 234881024 data_used: 27856896
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0e4b000/0x0/0x1bfc00000, data 0x2a438e7/0x2c63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.212234497s of 21.114337921s, submitted: 20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455450624 unmapped: 91742208 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457195520 unmapped: 89997312 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0884000/0x0/0x1bfc00000, data 0x30028e7/0x3222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,11])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4914978 data_alloc: 234881024 data_used: 29282304
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458571776 unmapped: 88621056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459014144 unmapped: 88178688 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a07ab000/0x0/0x1bfc00000, data 0x30e08e7/0x3300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,5])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88170496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88170496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0793000/0x0/0x1bfc00000, data 0x30f28e7/0x3312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88170496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4926090 data_alloc: 234881024 data_used: 29192192
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88170496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88170496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0793000/0x0/0x1bfc00000, data 0x30f28e7/0x3312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88170496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.808871269s of 10.092613220s, submitted: 109
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88170496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459030528 unmapped: 88162304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4925154 data_alloc: 234881024 data_used: 29188096
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459030528 unmapped: 88162304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459030528 unmapped: 88162304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f2036932c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2026bbc00 session 0x55f202f66000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0797000/0x0/0x1bfc00000, data 0x30f78e7/0x3317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459030528 unmapped: 88162304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f202e92f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459030528 unmapped: 88162304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 414 handle_osd_map epochs [415,415], i have 415, src has [1,415]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 415 ms_handle_reset con 0x55f2010c3c00 session 0x55f20271a1e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459030528 unmapped: 88162304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 415 ms_handle_reset con 0x55f202c5b400 session 0x55f202750d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 415 ms_handle_reset con 0x55f202742400 session 0x55f202e92d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5094863 data_alloc: 251658240 data_used: 33964032
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 488497152 unmapped: 66535424 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 415 heartbeat osd_stat(store_statfs(0x19f884000/0x0/0x1bfc00000, data 0x4007550/0x4229000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,2])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 471924736 unmapped: 83107840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 415 ms_handle_reset con 0x55f202d2d000 session 0x55f202738960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466010112 unmapped: 89022464 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 415 ms_handle_reset con 0x55f202d2d000 session 0x55f2036cad20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.584544182s of 10.057537079s, submitted: 73
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466034688 unmapped: 88997888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 416 heartbeat osd_stat(store_statfs(0x19eff7000/0x0/0x1bfc00000, data 0x48931fd/0x4ab6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466042880 unmapped: 88989696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 416 handle_osd_map epochs [416,417], i have 417, src has [1,417]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 416 handle_osd_map epochs [417,417], i have 417, src has [1,417]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5139189 data_alloc: 251658240 data_used: 36315136
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463626240 unmapped: 91406336 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 417 ms_handle_reset con 0x55f200ab0800 session 0x55f20055d860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463642624 unmapped: 91389952 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 417 ms_handle_reset con 0x55f2010c3c00 session 0x55f203172d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 417 ms_handle_reset con 0x55f202742400 session 0x55f20335de00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 417 ms_handle_reset con 0x55f202c5b400 session 0x55f202e925a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463650816 unmapped: 91381760 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463650816 unmapped: 91381760 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463650816 unmapped: 91381760 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 417 heartbeat osd_stat(store_statfs(0x19eff4000/0x0/0x1bfc00000, data 0x4894e72/0x4ab9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5138965 data_alloc: 251658240 data_used: 36315136
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463650816 unmapped: 91381760 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 417 ms_handle_reset con 0x55f200ab0800 session 0x55f2025da1e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a02b5000/0x0/0x1bfc00000, data 0x35d5e62/0x37f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 417 handle_osd_map epochs [418,418], i have 418, src has [1,418]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4885991 data_alloc: 234881024 data_used: 16699392
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a02b1000/0x0/0x1bfc00000, data 0x35d79a1/0x37fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a02b1000/0x0/0x1bfc00000, data 0x35d79a1/0x37fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4886151 data_alloc: 234881024 data_used: 16703488
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f2010c3c00 session 0x55f2036cad20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f202742400 session 0x55f202738960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f202d2d000 session 0x55f202e92d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f203774c00 session 0x55f20271a1e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.356239319s of 18.161268234s, submitted: 86
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460070912 unmapped: 94961664 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f200ab0800 session 0x55f200094960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f203774c00 session 0x55f202f66000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f2010c3c00 session 0x55f20126ba40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f202742400 session 0x55f20171c960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f202d2d000 session 0x55f200ae25a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f200ab0800 session 0x55f202f332c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f2010c3c00 session 0x55f203751680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 481771520 unmapped: 73261056 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f202742400 session 0x55f202750d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f203774c00 session 0x55f2036925a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 481771520 unmapped: 73261056 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f681000/0x0/0x1bfc00000, data 0x42079b1/0x442d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f20809f400 session 0x55f20126a000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f200ab0800 session 0x55f2036b2960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 481771520 unmapped: 73261056 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850475 data_alloc: 234881024 data_used: 15556608
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 419 ms_handle_reset con 0x55f203774c00 session 0x55f2033845a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 474718208 unmapped: 80314368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 419 heartbeat osd_stat(store_statfs(0x19f67e000/0x0/0x1bfc00000, data 0x420964e/0x442f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 474718208 unmapped: 80314368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e16000/0x0/0x1bfc00000, data 0x2a7164e/0x2c97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 474718208 unmapped: 80314368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 474718208 unmapped: 80314368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e16000/0x0/0x1bfc00000, data 0x2a7164e/0x2c97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 474726400 unmapped: 80306176 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e16000/0x0/0x1bfc00000, data 0x2a7164e/0x2c97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4861019 data_alloc: 234881024 data_used: 17170432
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 474726400 unmapped: 80306176 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 474726400 unmapped: 80306176 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.985949516s of 10.376070976s, submitted: 75
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 419 handle_osd_map epochs [420,420], i have 420, src has [1,420]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0e13000/0x0/0x1bfc00000, data 0x2a7318d/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0e13000/0x0/0x1bfc00000, data 0x2a7318d/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4849737 data_alloc: 234881024 data_used: 17190912
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0e14000/0x0/0x1bfc00000, data 0x2a7318d/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4865765 data_alloc: 234881024 data_used: 18698240
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0e14000/0x0/0x1bfc00000, data 0x2a7318d/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 85688320 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 85630976 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 85630976 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0e0e000/0x0/0x1bfc00000, data 0x2a7918d/0x2ca0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4874267 data_alloc: 234881024 data_used: 19820544
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 85630976 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 85630976 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.212800980s of 14.984619141s, submitted: 21
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2010c3c00 session 0x55f202738780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f202742400 session 0x55f2036b2d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 85630976 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0e0e000/0x0/0x1bfc00000, data 0x2a7918d/0x2ca0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 85630976 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0e0e000/0x0/0x1bfc00000, data 0x2a7918d/0x2ca0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,0,0,0,2])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 85622784 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4728193 data_alloc: 234881024 data_used: 13365248
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463888384 unmapped: 91144192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463888384 unmapped: 91144192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20272d400 session 0x55f200ae2d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1a45000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463896576 unmapped: 91136000 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463896576 unmapped: 91136000 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4728193 data_alloc: 234881024 data_used: 13365248
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1a45000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4728193 data_alloc: 234881024 data_used: 13365248
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1a45000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4728193 data_alloc: 234881024 data_used: 13365248
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20272d400 session 0x55f203750000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1a45000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464396288 unmapped: 90636288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1a45000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464396288 unmapped: 90636288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.411104202s of 21.417512894s, submitted: 36
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f207a28800 session 0x55f2028e85a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464396288 unmapped: 90636288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1a45000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464396288 unmapped: 90636288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4731233 data_alloc: 234881024 data_used: 17756160
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464396288 unmapped: 90636288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f205c02c00 session 0x55f20271ba40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cac00 session 0x55f202ef34a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464420864 unmapped: 90611712 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20e436c00 session 0x55f200095a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464420864 unmapped: 90611712 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a10ab000/0x0/0x1bfc00000, data 0x27dc1df/0x2a03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464420864 unmapped: 90611712 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464396288 unmapped: 90636288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4806002 data_alloc: 234881024 data_used: 17764352
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464404480 unmapped: 90628096 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a10ab000/0x0/0x1bfc00000, data 0x27dc1df/0x2a03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464404480 unmapped: 90628096 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464420864 unmapped: 90611712 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a10ab000/0x0/0x1bfc00000, data 0x27dc1df/0x2a03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464420864 unmapped: 90611712 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.411631584s of 11.095273018s, submitted: 97
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464429056 unmapped: 90603520 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4806322 data_alloc: 234881024 data_used: 17772544
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464453632 unmapped: 90578944 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 90562560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cac00 session 0x55f20109a780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20272d400 session 0x55f200ae3860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464527360 unmapped: 90505216 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0c9b000/0x0/0x1bfc00000, data 0x27dc1df/0x2a03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f205c02c00 session 0x55f203693680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f207a28800 session 0x55f2016921e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464838656 unmapped: 90193920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0c76000/0x0/0x1bfc00000, data 0x28001ef/0x2a28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464838656 unmapped: 90193920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4883905 data_alloc: 234881024 data_used: 26583040
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465379328 unmapped: 89653248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465379328 unmapped: 89653248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0c76000/0x0/0x1bfc00000, data 0x28001ef/0x2a28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465379328 unmapped: 89653248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465379328 unmapped: 89653248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465379328 unmapped: 89653248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4883905 data_alloc: 234881024 data_used: 26583040
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465379328 unmapped: 89653248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465379328 unmapped: 89653248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0c76000/0x0/0x1bfc00000, data 0x28001ef/0x2a28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465387520 unmapped: 89645056 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465387520 unmapped: 89645056 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465387520 unmapped: 89645056 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.062004089s of 16.160535812s, submitted: 245
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4888009 data_alloc: 234881024 data_used: 26611712
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 467968000 unmapped: 87064576 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0895000/0x0/0x1bfc00000, data 0x2be11ef/0x2e09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468287488 unmapped: 86745088 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468287488 unmapped: 86745088 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469434368 unmapped: 85598208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03a0000/0x0/0x1bfc00000, data 0x30d61ef/0x32fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469671936 unmapped: 85360640 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0398000/0x0/0x1bfc00000, data 0x30dc1ef/0x3304000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [0,0,0,0,0,9])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969591 data_alloc: 234881024 data_used: 26976256
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469827584 unmapped: 85204992 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469827584 unmapped: 85204992 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469827584 unmapped: 85204992 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469827584 unmapped: 85204992 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0386000/0x0/0x1bfc00000, data 0x30e81ef/0x3310000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469827584 unmapped: 85204992 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0386000/0x0/0x1bfc00000, data 0x30e81ef/0x3310000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969591 data_alloc: 234881024 data_used: 26976256
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469827584 unmapped: 85204992 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0386000/0x0/0x1bfc00000, data 0x30e81ef/0x3310000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.132133484s of 11.083799362s, submitted: 75
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469852160 unmapped: 85180416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469852160 unmapped: 85180416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469852160 unmapped: 85180416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cb400 session 0x55f2017225a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20809e000 session 0x55f2028e9680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469852160 unmapped: 85180416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4965851 data_alloc: 234881024 data_used: 26963968
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469852160 unmapped: 85180416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a038e000/0x0/0x1bfc00000, data 0x30e81ef/0x3310000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469852160 unmapped: 85180416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cb400 session 0x55f2036b23c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4956794 data_alloc: 234881024 data_used: 26853376
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b3000/0x0/0x1bfc00000, data 0x30c41df/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b3000/0x0/0x1bfc00000, data 0x30c41df/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b3000/0x0/0x1bfc00000, data 0x30c41df/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4956794 data_alloc: 234881024 data_used: 26853376
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b3000/0x0/0x1bfc00000, data 0x30c41df/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b3000/0x0/0x1bfc00000, data 0x30c41df/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4956794 data_alloc: 234881024 data_used: 26853376
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cac00 session 0x55f20335c1e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20272d400 session 0x55f2016934a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f205c02c00 session 0x55f203185860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.229480743s of 21.423732758s, submitted: 50
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b3000/0x0/0x1bfc00000, data 0x30c41df/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [0,0,0,1])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f205c02c00 session 0x55f203692f00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4960484 data_alloc: 234881024 data_used: 26988544
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b1000/0x0/0x1bfc00000, data 0x30c4212/0x32ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4960484 data_alloc: 234881024 data_used: 26988544
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b1000/0x0/0x1bfc00000, data 0x30c4212/0x32ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b1000/0x0/0x1bfc00000, data 0x30c4212/0x32ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4961284 data_alloc: 234881024 data_used: 27009024
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.082525253s of 13.132267952s, submitted: 5
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b1000/0x0/0x1bfc00000, data 0x30c4212/0x32ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969372 data_alloc: 234881024 data_used: 27381760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b1000/0x0/0x1bfc00000, data 0x30c4212/0x32ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969634 data_alloc: 234881024 data_used: 27381760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03ac000/0x0/0x1bfc00000, data 0x30c9212/0x32f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03ac000/0x0/0x1bfc00000, data 0x30c9212/0x32f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.205702782s of 12.661606789s, submitted: 20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969886 data_alloc: 234881024 data_used: 27381760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03ab000/0x0/0x1bfc00000, data 0x30ca212/0x32f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469901312 unmapped: 85131264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469901312 unmapped: 85131264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03ab000/0x0/0x1bfc00000, data 0x30ca212/0x32f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469901312 unmapped: 85131264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969886 data_alloc: 234881024 data_used: 27381760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469901312 unmapped: 85131264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469901312 unmapped: 85131264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03aa000/0x0/0x1bfc00000, data 0x30ca212/0x32f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469950464 unmapped: 85082112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469950464 unmapped: 85082112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469950464 unmapped: 85082112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03aa000/0x0/0x1bfc00000, data 0x30ca212/0x32f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cac00 session 0x55f2036cb860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.574435234s of 11.623891830s, submitted: 9
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cb400 session 0x55f200869a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03aa000/0x0/0x1bfc00000, data 0x30ca212/0x32f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4978358 data_alloc: 251658240 data_used: 29290496
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469950464 unmapped: 85082112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20272d400 session 0x55f202ef3a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03ac000/0x0/0x1bfc00000, data 0x30ca1df/0x32f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469958656 unmapped: 85073920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469958656 unmapped: 85073920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03ac000/0x0/0x1bfc00000, data 0x30ca1df/0x32f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469958656 unmapped: 85073920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20809e000 session 0x55f20121fc20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03ac000/0x0/0x1bfc00000, data 0x30ca1df/0x32f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 90152960 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 90152960 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 90152960 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464887808 unmapped: 90144768 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464887808 unmapped: 90144768 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464896000 unmapped: 90136576 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464896000 unmapped: 90136576 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 90128384 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 90128384 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 90128384 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 90128384 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 90128384 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 90128384 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 90128384 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 90128384 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 90112000 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 90112000 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 90112000 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 90112000 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 90112000 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 90112000 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464928768 unmapped: 90103808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464928768 unmapped: 90103808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464936960 unmapped: 90095616 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 87.651184082s of 88.030509949s, submitted: 40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464936960 unmapped: 90095616 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464936960 unmapped: 90095616 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cac00 session 0x55f203184d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464936960 unmapped: 90095616 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4750184 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464936960 unmapped: 90095616 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464936960 unmapped: 90095616 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 90087424 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 90087424 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464953344 unmapped: 90079232 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4750184 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464953344 unmapped: 90079232 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464953344 unmapped: 90079232 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464961536 unmapped: 90071040 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464961536 unmapped: 90071040 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464961536 unmapped: 90071040 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4750184 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464961536 unmapped: 90071040 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464986112 unmapped: 90046464 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4750184 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4750184 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4750184 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465018880 unmapped: 90013696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465018880 unmapped: 90013696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465027072 unmapped: 90005504 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465027072 unmapped: 90005504 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465027072 unmapped: 90005504 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.297000885s of 32.376239777s, submitted: 7
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cb400 session 0x55f200ae3860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4752013 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1633000/0x0/0x1bfc00000, data 0x1e431af/0x206b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465043456 unmapped: 89989120 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1633000/0x0/0x1bfc00000, data 0x1e431af/0x206b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465051648 unmapped: 89980928 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465051648 unmapped: 89980928 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465051648 unmapped: 89980928 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465051648 unmapped: 89980928 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4752305 data_alloc: 234881024 data_used: 16138240
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465059840 unmapped: 89972736 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465068032 unmapped: 89964544 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1633000/0x0/0x1bfc00000, data 0x1e431af/0x206b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465068032 unmapped: 89964544 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465068032 unmapped: 89964544 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465076224 unmapped: 89956352 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20272d400 session 0x55f200095a40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.952750206s of 10.143758774s, submitted: 5
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f205c02c00 session 0x55f202ef34a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1633000/0x0/0x1bfc00000, data 0x1e431af/0x206b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4752261 data_alloc: 234881024 data_used: 16138240
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f207a28800 session 0x55f202738780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465084416 unmapped: 89948160 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465084416 unmapped: 89948160 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465084416 unmapped: 89948160 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1633000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1633000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465084416 unmapped: 89948160 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465084416 unmapped: 89948160 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4751429 data_alloc: 234881024 data_used: 16138240
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465084416 unmapped: 89948160 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f207a28800 session 0x55f202750d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 89931776 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465117184 unmapped: 89915392 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cac00 session 0x55f200ae25a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465117184 unmapped: 89915392 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1635000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465117184 unmapped: 89915392 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4749782 data_alloc: 234881024 data_used: 16134144
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465117184 unmapped: 89915392 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465125376 unmapped: 89907200 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465125376 unmapped: 89907200 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1635000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465125376 unmapped: 89907200 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464633856 unmapped: 90398720 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1635000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cb400 session 0x55f20051e3c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4749622 data_alloc: 234881024 data_used: 16195584
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 91594752 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 91594752 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.894033432s of 17.817049026s, submitted: 36
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463446016 unmapped: 91586560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f205c02c00 session 0x55f200b0f2c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463454208 unmapped: 91578368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1635000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463454208 unmapped: 91578368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4749622 data_alloc: 234881024 data_used: 16195584
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463454208 unmapped: 91578368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463454208 unmapped: 91578368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1635000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463454208 unmapped: 91578368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463470592 unmapped: 91561984 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2074c0000 session 0x55f2027523c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463470592 unmapped: 91561984 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1635000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4827502 data_alloc: 234881024 data_used: 16195584
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 91234304 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2074c0000 session 0x55f200b0e5a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 91234304 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0cc4000/0x0/0x1bfc00000, data 0x27b417d/0x29da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 91234304 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.619276047s of 10.872257233s, submitted: 25
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 91226112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f2026cac00 session 0x55f2000945a0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 91226112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4832436 data_alloc: 234881024 data_used: 16203776
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 91226112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e38/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 91226112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 91217920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 91217920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 91217920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4832436 data_alloc: 234881024 data_used: 16203776
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463822848 unmapped: 91209728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463822848 unmapped: 91209728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e38/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 91201536 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e38/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 91201536 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 91201536 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.353907585s of 11.393359184s, submitted: 2
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4833208 data_alloc: 234881024 data_used: 16203776
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f205c02c00 session 0x55f201692d20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f2026cb400 session 0x55f2036b3860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463839232 unmapped: 91193344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e48/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463839232 unmapped: 91193344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463839232 unmapped: 91193344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463839232 unmapped: 91193344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 91185152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4833208 data_alloc: 234881024 data_used: 16203776
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 91185152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e48/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 91185152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 91185152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 91185152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f207a28800 session 0x55f20126ba40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463855616 unmapped: 91176960 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4836101 data_alloc: 234881024 data_used: 16306176
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463732736 unmapped: 91299840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465395712 unmapped: 89636864 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e48/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465395712 unmapped: 89636864 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465395712 unmapped: 89636864 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.736437798s of 14.802865982s, submitted: 8
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f207a28800 session 0x55f2036b2780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f2026cac00 session 0x55f200ae21e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465395712 unmapped: 89636864 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e48/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4897569 data_alloc: 234881024 data_used: 24899584
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f2026cb400 session 0x55f2036cb680
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4896977 data_alloc: 234881024 data_used: 24899584
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f205c02c00 session 0x55f2025dab40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e48/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e48/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 04:15:04 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2823572616' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f2074c0000 session 0x55f202736b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cc0000/0x0/0x1bfc00000, data 0x27b5e38/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4896097 data_alloc: 234881024 data_used: 24895488
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f2026cac00 session 0x55f200094960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.990265846s of 11.232155800s, submitted: 25
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f2026cb400 session 0x55f200ae2000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a0cbd000/0x0/0x1bfc00000, data 0x27b7a83/0x29e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466534400 unmapped: 88498176 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466534400 unmapped: 88498176 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a0cbd000/0x0/0x1bfc00000, data 0x27b7a83/0x29e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466534400 unmapped: 88498176 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 422 ms_handle_reset con 0x55f205c02c00 session 0x55f2015bd0e0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4764813 data_alloc: 234881024 data_used: 16211968
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463716352 unmapped: 91316224 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463716352 unmapped: 91316224 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463716352 unmapped: 91316224 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463716352 unmapped: 91316224 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a162f000/0x0/0x1bfc00000, data 0x1e46a83/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463716352 unmapped: 91316224 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4765133 data_alloc: 234881024 data_used: 16211968
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 422 ms_handle_reset con 0x55f207a28800 session 0x55f2036b2000
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 93372416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a162f000/0x0/0x1bfc00000, data 0x1e46a83/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.252751350s of 10.051544189s, submitted: 25
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 93372416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 93372416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 ms_handle_reset con 0x55f200754000 session 0x55f202756780
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a162a000/0x0/0x1bfc00000, data 0x1e485d2/0x2073000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 93372416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 93372416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4771455 data_alloc: 234881024 data_used: 16228352
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 ms_handle_reset con 0x55f200754000 session 0x55f20055cf00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 93372416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 93372416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a162a000/0x0/0x1bfc00000, data 0x1e485d2/0x2073000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 93372416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 ms_handle_reset con 0x55f2026cac00 session 0x55f2036cba40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462725120 unmapped: 92307456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 ms_handle_reset con 0x55f2026cb400 session 0x55f202ef3e00
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 ms_handle_reset con 0x55f205c02c00 session 0x55f200869860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4872789 data_alloc: 234881024 data_used: 16228352
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4872789 data_alloc: 234881024 data_used: 16228352
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4872789 data_alloc: 234881024 data_used: 16228352
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462741504 unmapped: 92291072 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462749696 unmapped: 92282880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4872789 data_alloc: 234881024 data_used: 16228352
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462749696 unmapped: 92282880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462749696 unmapped: 92282880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462749696 unmapped: 92282880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 ms_handle_reset con 0x55f207a28800 session 0x55f203184b40
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462749696 unmapped: 92282880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.758771896s of 28.318550110s, submitted: 66
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462749696 unmapped: 92282880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4884121 data_alloc: 234881024 data_used: 17805312
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462946304 unmapped: 92086272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963961 data_alloc: 251658240 data_used: 29081600
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963961 data_alloc: 251658240 data_used: 29081600
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464003072 unmapped: 91029504 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.377329826s of 12.535032272s, submitted: 1
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 470155264 unmapped: 84877312 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468131840 unmapped: 86900736 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468131840 unmapped: 86900736 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb8e000/0x0/0x1bfc00000, data 0x3744634/0x3970000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [0,0,0,0,0,1])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468131840 unmapped: 86900736 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5065939 data_alloc: 251658240 data_used: 29847552
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb66000/0x0/0x1bfc00000, data 0x3764634/0x3990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5077075 data_alloc: 251658240 data_used: 29913088
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb66000/0x0/0x1bfc00000, data 0x3764634/0x3990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb66000/0x0/0x1bfc00000, data 0x3764634/0x3990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5077075 data_alloc: 251658240 data_used: 29913088
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb66000/0x0/0x1bfc00000, data 0x3764634/0x3990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb66000/0x0/0x1bfc00000, data 0x3764634/0x3990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb66000/0x0/0x1bfc00000, data 0x3764634/0x3990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5077075 data_alloc: 251658240 data_used: 29913088
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb66000/0x0/0x1bfc00000, data 0x3764634/0x3990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 85647360 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5077075 data_alloc: 251658240 data_used: 29913088
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 85647360 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb66000/0x0/0x1bfc00000, data 0x3764634/0x3990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 85647360 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 85647360 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.568981171s of 27.033638000s, submitted: 105
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 85622784 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb6d000/0x0/0x1bfc00000, data 0x3764643/0x3991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 423 handle_osd_map epochs [424,424], i have 424, src has [1,424]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469434368 unmapped: 85598208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 424 ms_handle_reset con 0x55f2026cac00 session 0x55f2027523c0
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5078242 data_alloc: 251658240 data_used: 29925376
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 424 handle_osd_map epochs [425,425], i have 425, src has [1,425]
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 85573632 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 ms_handle_reset con 0x55f2026cb400 session 0x55f20171c960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb65000/0x0/0x1bfc00000, data 0x3767f13/0x3998000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 85573632 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb65000/0x0/0x1bfc00000, data 0x3767f13/0x3998000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469475328 unmapped: 85557248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469475328 unmapped: 85557248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469475328 unmapped: 85557248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5082536 data_alloc: 251658240 data_used: 29937664
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469483520 unmapped: 85549056 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469483520 unmapped: 85549056 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb64000/0x0/0x1bfc00000, data 0x376cf13/0x399a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469491712 unmapped: 85540864 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb64000/0x0/0x1bfc00000, data 0x376cf13/0x399a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469491712 unmapped: 85540864 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469491712 unmapped: 85540864 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5084622 data_alloc: 251658240 data_used: 29937664
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469499904 unmapped: 85532672 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb64000/0x0/0x1bfc00000, data 0x376cf13/0x399a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.265442848s of 13.599176407s, submitted: 20
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 85524480 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 ms_handle_reset con 0x55f20268c400 session 0x55f20121e960
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 ms_handle_reset con 0x55f205c02c00 session 0x55f200ae3860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 85524480 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 85524480 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 85524480 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5087476 data_alloc: 251658240 data_used: 29941760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 85524480 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb63000/0x0/0x1bfc00000, data 0x376cf23/0x399b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 85524480 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 85524480 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb63000/0x0/0x1bfc00000, data 0x376cf23/0x399b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb63000/0x0/0x1bfc00000, data 0x376cf23/0x399b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 85516288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 85516288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5087476 data_alloc: 251658240 data_used: 29941760
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 85516288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.287868500s of 10.076254845s, submitted: 23
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469532672 unmapped: 85499904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 ms_handle_reset con 0x55f208ba6c00 session 0x55f2036cb860
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb49000/0x0/0x1bfc00000, data 0x37b3f23/0x39b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [0,0,0,0,0,1])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469540864 unmapped: 85491712 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469557248 unmapped: 85475328 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5108020 data_alloc: 251658240 data_used: 30113792
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb49000/0x0/0x1bfc00000, data 0x37b3f23/0x39b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb49000/0x0/0x1bfc00000, data 0x37b3f23/0x39b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5108020 data_alloc: 251658240 data_used: 30113792
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb49000/0x0/0x1bfc00000, data 0x37b3f23/0x39b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb49000/0x0/0x1bfc00000, data 0x37b3f23/0x39b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.508153915s of 13.553033829s, submitted: 2
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5110375 data_alloc: 251658240 data_used: 30294016
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb49000/0x0/0x1bfc00000, data 0x37b3f23/0x39b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: do_command 'config diff' '{prefix=config diff}'
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469139456 unmapped: 85893120 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: do_command 'config show' '{prefix=config show}'
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468336640 unmapped: 86695936 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468754432 unmapped: 86278144 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:15:04 np0005603609 ceph-osd[79083]: do_command 'log dump' '{prefix=log dump}'
Jan 31 04:15:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 04:15:05 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3690361327' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 04:15:05 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:15:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 04:15:05 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2939993999' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 04:15:06 np0005603609 nova_compute[221550]: 2026-01-31 09:15:06.060 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:15:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:06.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:15:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:06.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 04:15:06 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3362502088' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 04:15:06 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:15:06 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:15:06 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 31 04:15:06 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2656656714' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 04:15:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:15:07.570 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:15:07.571 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:15:07.571 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:07 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 31 04:15:07 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4287032663' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 04:15:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 31 04:15:08 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4269053121' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 04:15:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:08.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 31 04:15:08 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3246896467' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 04:15:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:08.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 31 04:15:08 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/366387459' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 04:15:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 31 04:15:08 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3227798696' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 04:15:08 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 31 04:15:08 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4091371072' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 04:15:09 np0005603609 nova_compute[221550]: 2026-01-31 09:15:09.020 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:09 np0005603609 systemd[1]: Starting Hostname Service...
Jan 31 04:15:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 31 04:15:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/278558995' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 04:15:09 np0005603609 systemd[1]: Started Hostname Service.
Jan 31 04:15:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 31 04:15:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/786834206' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 04:15:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 31 04:15:09 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/367686405' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 04:15:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 31 04:15:10 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2280146538' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 04:15:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 31 04:15:10 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/44827191' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 04:15:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:10.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:10.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 31 04:15:10 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1117598229' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 04:15:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 31 04:15:10 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/859707252' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 04:15:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 31 04:15:10 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2195936456' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2741872118' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 04:15:11 np0005603609 nova_compute[221550]: 2026-01-31 09:15:11.062 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #211. Immutable memtables: 0.
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:15:11.612904) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 211
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911612936, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 1227, "num_deletes": 257, "total_data_size": 2362225, "memory_usage": 2395296, "flush_reason": "Manual Compaction"}
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #212: started
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911626874, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 212, "file_size": 1557821, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 101485, "largest_seqno": 102707, "table_properties": {"data_size": 1552111, "index_size": 2913, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14514, "raw_average_key_size": 21, "raw_value_size": 1539829, "raw_average_value_size": 2228, "num_data_blocks": 127, "num_entries": 691, "num_filter_entries": 691, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850832, "oldest_key_time": 1769850832, "file_creation_time": 1769850911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 14022 microseconds, and 2870 cpu microseconds.
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:15:11.626925) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #212: 1557821 bytes OK
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:15:11.626946) [db/memtable_list.cc:519] [default] Level-0 commit table #212 started
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:15:11.629358) [db/memtable_list.cc:722] [default] Level-0 commit table #212: memtable #1 done
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:15:11.629378) EVENT_LOG_v1 {"time_micros": 1769850911629372, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:15:11.629421) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 2355911, prev total WAL file size 2355911, number of live WAL files 2.
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000208.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:15:11.630022) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303230' seq:72057594037927935, type:22 .. '6C6F676D0034323733' seq:0, type:0; will stop at (end)
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [212(1521KB)], [210(11MB)]
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911630090, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [212], "files_L6": [210], "score": -1, "input_data_size": 13908517, "oldest_snapshot_seqno": -1}
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #213: 12025 keys, 13785322 bytes, temperature: kUnknown
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911795117, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 213, "file_size": 13785322, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13710950, "index_size": 43219, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30085, "raw_key_size": 318825, "raw_average_key_size": 26, "raw_value_size": 13504099, "raw_average_value_size": 1123, "num_data_blocks": 1635, "num_entries": 12025, "num_filter_entries": 12025, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769850911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 213, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:15:11.795383) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 13785322 bytes
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:15:11.796853) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 84.2 rd, 83.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 11.8 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(17.8) write-amplify(8.8) OK, records in: 12556, records dropped: 531 output_compression: NoCompression
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:15:11.796889) EVENT_LOG_v1 {"time_micros": 1769850911796870, "job": 136, "event": "compaction_finished", "compaction_time_micros": 165136, "compaction_time_cpu_micros": 24702, "output_level": 6, "num_output_files": 1, "total_output_size": 13785322, "num_input_records": 12556, "num_output_records": 12025, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911797314, "job": 136, "event": "table_file_deletion", "file_number": 212}
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000210.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911798237, "job": 136, "event": "table_file_deletion", "file_number": 210}
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:15:11.629821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:15:11.798354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:15:11.798361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:15:11.798363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:15:11.798365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:15:11 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:15:11.798367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:15:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:12.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:12.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:12 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 31 04:15:12 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3123822868' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 04:15:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 31 04:15:13 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1680094024' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 04:15:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:15:13 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:15:13 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 04:15:13 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3985781755' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 04:15:14 np0005603609 nova_compute[221550]: 2026-01-31 09:15:14.022 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 31 04:15:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4113116959' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 04:15:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:14.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:14.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 04:15:14 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 04:15:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 04:15:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 04:15:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 04:15:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 04:15:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 04:15:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 04:15:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 31 04:15:15 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1026374254' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 04:15:16 np0005603609 nova_compute[221550]: 2026-01-31 09:15:16.066 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:15:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:16.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:15:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:16.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:16 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 31 04:15:16 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2530731165' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 04:15:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Jan 31 04:15:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3364397964' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 04:15:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Jan 31 04:15:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/422475968' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 04:15:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:18.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:18.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:18 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:19 np0005603609 nova_compute[221550]: 2026-01-31 09:15:19.053 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Jan 31 04:15:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3083985766' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 31 04:15:19 np0005603609 nova_compute[221550]: 2026-01-31 09:15:19.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 04:15:19 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 04:15:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 04:15:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 04:15:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:20.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:20.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Jan 31 04:15:20 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/772685924' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 31 04:15:20 np0005603609 nova_compute[221550]: 2026-01-31 09:15:20.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:21 np0005603609 nova_compute[221550]: 2026-01-31 09:15:21.069 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:22.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:22.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:22 np0005603609 ovs-appctl[328121]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 31 04:15:22 np0005603609 ovs-appctl[328127]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 31 04:15:22 np0005603609 ovs-appctl[328135]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 31 04:15:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Jan 31 04:15:23 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3639415667' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 31 04:15:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:23 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Jan 31 04:15:23 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/19574536' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 31 04:15:24 np0005603609 ovn_controller[130359]: 2026-01-31T09:15:24Z|01037|memory_trim|INFO|Detected inactivity (last active 30021 ms ago): trimming memory
Jan 31 04:15:24 np0005603609 nova_compute[221550]: 2026-01-31 09:15:24.055 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:24.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:24 np0005603609 ceph-mds[84837]: mds.beacon.cephfs.compute-1.dqeaqy missed beacon ack from the monitors
Jan 31 04:15:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:24.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Jan 31 04:15:25 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2338748616' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 31 04:15:26 np0005603609 nova_compute[221550]: 2026-01-31 09:15:26.072 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:26.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:26.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:26 np0005603609 nova_compute[221550]: 2026-01-31 09:15:26.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:26 np0005603609 nova_compute[221550]: 2026-01-31 09:15:26.688 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:26 np0005603609 nova_compute[221550]: 2026-01-31 09:15:26.689 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:26 np0005603609 nova_compute[221550]: 2026-01-31 09:15:26.689 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:26 np0005603609 nova_compute[221550]: 2026-01-31 09:15:26.689 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:15:26 np0005603609 nova_compute[221550]: 2026-01-31 09:15:26.689 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:15:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:15:27 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/162892977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:15:27 np0005603609 nova_compute[221550]: 2026-01-31 09:15:27.336 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:15:27 np0005603609 nova_compute[221550]: 2026-01-31 09:15:27.470 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:15:27 np0005603609 nova_compute[221550]: 2026-01-31 09:15:27.471 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4005MB free_disk=20.98794174194336GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:15:27 np0005603609 nova_compute[221550]: 2026-01-31 09:15:27.471 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:27 np0005603609 nova_compute[221550]: 2026-01-31 09:15:27.471 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Jan 31 04:15:27 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/703663555' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 31 04:15:27 np0005603609 nova_compute[221550]: 2026-01-31 09:15:27.551 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:15:27 np0005603609 nova_compute[221550]: 2026-01-31 09:15:27.551 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:15:27 np0005603609 nova_compute[221550]: 2026-01-31 09:15:27.777 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:15:27 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Jan 31 04:15:27 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4153647404' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 31 04:15:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Jan 31 04:15:28 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2469722022' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 31 04:15:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:28.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:28 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:15:28 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2043893363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:15:28 np0005603609 nova_compute[221550]: 2026-01-31 09:15:28.252 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:15:28 np0005603609 nova_compute[221550]: 2026-01-31 09:15:28.258 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:15:28 np0005603609 nova_compute[221550]: 2026-01-31 09:15:28.288 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:15:28 np0005603609 nova_compute[221550]: 2026-01-31 09:15:28.290 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:15:28 np0005603609 nova_compute[221550]: 2026-01-31 09:15:28.290 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:28.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:29 np0005603609 nova_compute[221550]: 2026-01-31 09:15:29.058 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:30.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:30.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Jan 31 04:15:30 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4126448503' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 31 04:15:31 np0005603609 nova_compute[221550]: 2026-01-31 09:15:31.075 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Jan 31 04:15:31 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2428046179' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 31 04:15:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:15:31.222 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=115, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=114) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:15:31 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:15:31.223 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:15:31 np0005603609 nova_compute[221550]: 2026-01-31 09:15:31.261 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 04:15:32 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1597342083' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 04:15:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:32.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:32 np0005603609 nova_compute[221550]: 2026-01-31 09:15:32.292 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:32 np0005603609 nova_compute[221550]: 2026-01-31 09:15:32.292 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:15:32 np0005603609 nova_compute[221550]: 2026-01-31 09:15:32.293 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:15:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:32.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:32 np0005603609 nova_compute[221550]: 2026-01-31 09:15:32.519 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:15:32 np0005603609 nova_compute[221550]: 2026-01-31 09:15:32.519 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Jan 31 04:15:32 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2724833909' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 31 04:15:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Jan 31 04:15:32 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4018119954' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 31 04:15:33 np0005603609 podman[329341]: 2026-01-31 09:15:33.180700336 +0000 UTC m=+0.053437516 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 04:15:33 np0005603609 podman[329340]: 2026-01-31 09:15:33.206318267 +0000 UTC m=+0.078598416 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 04:15:33 np0005603609 nova_compute[221550]: 2026-01-31 09:15:33.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:33 np0005603609 nova_compute[221550]: 2026-01-31 09:15:33.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:15:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:34 np0005603609 nova_compute[221550]: 2026-01-31 09:15:34.058 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 04:15:34 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4247018269' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 04:15:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:34.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:34.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Jan 31 04:15:34 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3223520930' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 31 04:15:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Jan 31 04:15:34 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3675144590' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 31 04:15:35 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:15:35.224 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '115'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:15:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Jan 31 04:15:35 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/329207965' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 31 04:15:35 np0005603609 virtqemud[221292]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 04:15:36 np0005603609 nova_compute[221550]: 2026-01-31 09:15:36.078 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:36.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:36.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:36 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Jan 31 04:15:36 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/251849106' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 31 04:15:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Jan 31 04:15:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3073995165' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 31 04:15:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Jan 31 04:15:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/17899084' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 31 04:15:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:38.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:38 np0005603609 systemd[1]: Starting Time & Date Service...
Jan 31 04:15:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:38.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:38 np0005603609 systemd[1]: Started Time & Date Service.
Jan 31 04:15:38 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Jan 31 04:15:38 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/730697016' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 31 04:15:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:39 np0005603609 nova_compute[221550]: 2026-01-31 09:15:39.060 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Jan 31 04:15:39 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2699539177' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 31 04:15:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 04:15:40 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3189746421' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 04:15:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:40.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:40.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Jan 31 04:15:40 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2542830996' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 31 04:15:41 np0005603609 nova_compute[221550]: 2026-01-31 09:15:41.081 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 04:15:41 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3260021682' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 04:15:41 np0005603609 nova_compute[221550]: 2026-01-31 09:15:41.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e426 e426: 3 total, 3 up, 3 in
Jan 31 04:15:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Jan 31 04:15:42 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4210385020' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 31 04:15:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:42.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:42.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:44 np0005603609 nova_compute[221550]: 2026-01-31 09:15:44.062 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:44.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:44.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:44 np0005603609 nova_compute[221550]: 2026-01-31 09:15:44.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:46 np0005603609 nova_compute[221550]: 2026-01-31 09:15:46.087 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:46.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:46.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:47 np0005603609 nova_compute[221550]: 2026-01-31 09:15:47.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:48.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:48.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:49 np0005603609 nova_compute[221550]: 2026-01-31 09:15:49.064 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e427 e427: 3 total, 3 up, 3 in
Jan 31 04:15:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:50.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:50.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:51 np0005603609 nova_compute[221550]: 2026-01-31 09:15:51.090 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:52.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:52.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e428 e428: 3 total, 3 up, 3 in
Jan 31 04:15:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:15:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2514494485' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:15:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:15:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2514494485' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:15:54 np0005603609 nova_compute[221550]: 2026-01-31 09:15:54.066 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:54.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:54.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:56 np0005603609 nova_compute[221550]: 2026-01-31 09:15:56.094 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:56.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:56.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:58.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:15:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:15:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:58.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:15:59 np0005603609 nova_compute[221550]: 2026-01-31 09:15:59.103 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 e429: 3 total, 3 up, 3 in
Jan 31 04:16:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:00.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:00.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:01 np0005603609 nova_compute[221550]: 2026-01-31 09:16:01.100 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:02.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:02.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:03 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #214. Immutable memtables: 0.
Jan 31 04:16:03 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:16:03.202257) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:16:03 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 137] Flushing memtable with next log file: 214
Jan 31 04:16:03 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850963202292, "job": 137, "event": "flush_started", "num_memtables": 1, "num_entries": 994, "num_deletes": 253, "total_data_size": 1661930, "memory_usage": 1683232, "flush_reason": "Manual Compaction"}
Jan 31 04:16:03 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 137] Level-0 flush table #215: started
Jan 31 04:16:03 np0005603609 podman[330389]: 2026-01-31 09:16:03.45038542 +0000 UTC m=+0.066183870 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 31 04:16:03 np0005603609 podman[330390]: 2026-01-31 09:16:03.461623476 +0000 UTC m=+0.075414147 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:16:04 np0005603609 nova_compute[221550]: 2026-01-31 09:16:04.105 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:04.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:16:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:04.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:16:04 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850964428530, "cf_name": "default", "job": 137, "event": "table_file_creation", "file_number": 215, "file_size": 1095439, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 102713, "largest_seqno": 103701, "table_properties": {"data_size": 1090209, "index_size": 2433, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 15100, "raw_average_key_size": 22, "raw_value_size": 1078572, "raw_average_value_size": 1600, "num_data_blocks": 104, "num_entries": 674, "num_filter_entries": 674, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850912, "oldest_key_time": 1769850912, "file_creation_time": 1769850963, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:16:04 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 137] Flush lasted 1226386 microseconds, and 2943 cpu microseconds.
Jan 31 04:16:04 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:16:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:16:04.428627) [db/flush_job.cc:967] [default] [JOB 137] Level-0 flush table #215: 1095439 bytes OK
Jan 31 04:16:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:16:04.428665) [db/memtable_list.cc:519] [default] Level-0 commit table #215 started
Jan 31 04:16:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:16:04.627487) [db/memtable_list.cc:722] [default] Level-0 commit table #215: memtable #1 done
Jan 31 04:16:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:16:04.627538) EVENT_LOG_v1 {"time_micros": 1769850964627527, "job": 137, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:16:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:16:04.627564) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:16:04 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 137] Try to delete WAL files size 1656255, prev total WAL file size 1671698, number of live WAL files 2.
Jan 31 04:16:04 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000211.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:16:04 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:16:04.628444) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039303336' seq:72057594037927935, type:22 .. '7061786F730039323838' seq:0, type:0; will stop at (end)
Jan 31 04:16:04 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 138] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:16:04 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 137 Base level 0, inputs: [215(1069KB)], [213(13MB)]
Jan 31 04:16:04 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850964628528, "job": 138, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [215], "files_L6": [213], "score": -1, "input_data_size": 14880761, "oldest_snapshot_seqno": -1}
Jan 31 04:16:06 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 138] Generated table #216: 12177 keys, 12882075 bytes, temperature: kUnknown
Jan 31 04:16:06 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850966022179, "cf_name": "default", "job": 138, "event": "table_file_creation", "file_number": 216, "file_size": 12882075, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12807572, "index_size": 42966, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30469, "raw_key_size": 324218, "raw_average_key_size": 26, "raw_value_size": 12599110, "raw_average_value_size": 1034, "num_data_blocks": 1615, "num_entries": 12177, "num_filter_entries": 12177, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769850964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 216, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:16:06 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:16:06 np0005603609 nova_compute[221550]: 2026-01-31 09:16:06.103 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:06.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:06.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:06 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:16:06.022421) [db/compaction/compaction_job.cc:1663] [default] [JOB 138] Compacted 1@0 + 1@6 files to L6 => 12882075 bytes
Jan 31 04:16:06 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:16:06.448547) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 10.7 rd, 9.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.1 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(25.3) write-amplify(11.8) OK, records in: 12699, records dropped: 522 output_compression: NoCompression
Jan 31 04:16:06 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:16:06.448585) EVENT_LOG_v1 {"time_micros": 1769850966448568, "job": 138, "event": "compaction_finished", "compaction_time_micros": 1393718, "compaction_time_cpu_micros": 40713, "output_level": 6, "num_output_files": 1, "total_output_size": 12882075, "num_input_records": 12699, "num_output_records": 12177, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:16:06 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:16:06 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850966448818, "job": 138, "event": "table_file_deletion", "file_number": 215}
Jan 31 04:16:06 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000213.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:16:06 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850966449853, "job": 138, "event": "table_file_deletion", "file_number": 213}
Jan 31 04:16:06 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:16:04.628281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:16:06 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:16:06.449985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:16:06 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:16:06.449991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:16:06 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:16:06.449994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:16:06 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:16:06.449996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:16:06 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:16:06.449998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:16:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:16:07.572 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:16:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:16:07.572 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:16:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:16:07.572 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:16:07 np0005603609 nova_compute[221550]: 2026-01-31 09:16:07.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:08.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:08.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:08 np0005603609 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 04:16:08 np0005603609 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 04:16:09 np0005603609 nova_compute[221550]: 2026-01-31 09:16:09.107 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:10.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:10.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:11 np0005603609 nova_compute[221550]: 2026-01-31 09:16:11.158 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:12.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:16:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:12.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:16:14 np0005603609 nova_compute[221550]: 2026-01-31 09:16:14.109 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:14.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:14.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:14 np0005603609 podman[330609]: 2026-01-31 09:16:14.456827346 +0000 UTC m=+0.919726111 container exec 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 04:16:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:15 np0005603609 podman[330609]: 2026-01-31 09:16:15.483760184 +0000 UTC m=+1.946658969 container exec_died 92426b64c0917a790421dd8a6e2c33c7f626ec7b0b768ad6a93c6a3e3fb3e95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:16:16 np0005603609 nova_compute[221550]: 2026-01-31 09:16:16.161 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:16.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:16 np0005603609 ovn_controller[130359]: 2026-01-31T09:16:16Z|01038|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Jan 31 04:16:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:16.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:17 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:16:17 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:16:17 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:16:17 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:16:17 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:16:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:18.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:18.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:19 np0005603609 nova_compute[221550]: 2026-01-31 09:16:19.110 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:19 np0005603609 nova_compute[221550]: 2026-01-31 09:16:19.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:20.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:20.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:21 np0005603609 nova_compute[221550]: 2026-01-31 09:16:21.203 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:22.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:22.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:22 np0005603609 nova_compute[221550]: 2026-01-31 09:16:22.660 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:24 np0005603609 nova_compute[221550]: 2026-01-31 09:16:24.112 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:24.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:24 np0005603609 systemd[1]: session-66.scope: Deactivated successfully.
Jan 31 04:16:24 np0005603609 systemd[1]: session-66.scope: Consumed 2min 35.683s CPU time, 1.0G memory peak, read 453.7M from disk, written 335.4M to disk.
Jan 31 04:16:24 np0005603609 systemd-logind[823]: Session 66 logged out. Waiting for processes to exit.
Jan 31 04:16:24 np0005603609 systemd-logind[823]: Removed session 66.
Jan 31 04:16:24 np0005603609 systemd-logind[823]: New session 67 of user zuul.
Jan 31 04:16:24 np0005603609 systemd[1]: Started Session 67 of User zuul.
Jan 31 04:16:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:24.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:24 np0005603609 systemd[1]: session-67.scope: Deactivated successfully.
Jan 31 04:16:25 np0005603609 systemd-logind[823]: Session 67 logged out. Waiting for processes to exit.
Jan 31 04:16:25 np0005603609 systemd-logind[823]: Removed session 67.
Jan 31 04:16:25 np0005603609 systemd-logind[823]: New session 68 of user zuul.
Jan 31 04:16:25 np0005603609 systemd[1]: Started Session 68 of User zuul.
Jan 31 04:16:25 np0005603609 systemd[1]: session-68.scope: Deactivated successfully.
Jan 31 04:16:25 np0005603609 systemd-logind[823]: Session 68 logged out. Waiting for processes to exit.
Jan 31 04:16:25 np0005603609 systemd-logind[823]: Removed session 68.
Jan 31 04:16:26 np0005603609 nova_compute[221550]: 2026-01-31 09:16:26.206 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:26.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:26.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:16:26 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:16:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:28.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:28.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:28 np0005603609 nova_compute[221550]: 2026-01-31 09:16:28.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:28 np0005603609 nova_compute[221550]: 2026-01-31 09:16:28.688 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:16:28 np0005603609 nova_compute[221550]: 2026-01-31 09:16:28.689 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:16:28 np0005603609 nova_compute[221550]: 2026-01-31 09:16:28.689 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:16:28 np0005603609 nova_compute[221550]: 2026-01-31 09:16:28.689 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:16:28 np0005603609 nova_compute[221550]: 2026-01-31 09:16:28.690 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:16:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:16:29 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/305204523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:16:29 np0005603609 nova_compute[221550]: 2026-01-31 09:16:29.115 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:29 np0005603609 nova_compute[221550]: 2026-01-31 09:16:29.128 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:16:29 np0005603609 nova_compute[221550]: 2026-01-31 09:16:29.263 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:16:29 np0005603609 nova_compute[221550]: 2026-01-31 09:16:29.264 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4071MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:16:29 np0005603609 nova_compute[221550]: 2026-01-31 09:16:29.264 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:16:29 np0005603609 nova_compute[221550]: 2026-01-31 09:16:29.264 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:16:29 np0005603609 nova_compute[221550]: 2026-01-31 09:16:29.377 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:16:29 np0005603609 nova_compute[221550]: 2026-01-31 09:16:29.377 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:16:29 np0005603609 nova_compute[221550]: 2026-01-31 09:16:29.473 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:16:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:16:29 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3815745731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:16:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:16:29 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 68K writes, 267K keys, 68K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.03 MB/s#012Cumulative WAL: 68K writes, 25K syncs, 2.69 writes per sync, written: 0.25 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2606 writes, 8519 keys, 2606 commit groups, 1.0 writes per commit group, ingest: 7.73 MB, 0.01 MB/s#012Interval WAL: 2606 writes, 1074 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 04:16:29 np0005603609 nova_compute[221550]: 2026-01-31 09:16:29.888 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:16:29 np0005603609 nova_compute[221550]: 2026-01-31 09:16:29.893 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:16:29 np0005603609 nova_compute[221550]: 2026-01-31 09:16:29.924 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:16:29 np0005603609 nova_compute[221550]: 2026-01-31 09:16:29.926 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:16:29 np0005603609 nova_compute[221550]: 2026-01-31 09:16:29.927 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:16:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:30.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:30.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:31 np0005603609 nova_compute[221550]: 2026-01-31 09:16:31.210 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:16:31 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3152122720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:16:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:32.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:32.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:32 np0005603609 nova_compute[221550]: 2026-01-31 09:16:32.927 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:32 np0005603609 nova_compute[221550]: 2026-01-31 09:16:32.928 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:16:32 np0005603609 nova_compute[221550]: 2026-01-31 09:16:32.928 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:16:32 np0005603609 nova_compute[221550]: 2026-01-31 09:16:32.946 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:16:32 np0005603609 nova_compute[221550]: 2026-01-31 09:16:32.947 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:33 np0005603609 nova_compute[221550]: 2026-01-31 09:16:33.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:33 np0005603609 nova_compute[221550]: 2026-01-31 09:16:33.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:16:34 np0005603609 nova_compute[221550]: 2026-01-31 09:16:34.116 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:34 np0005603609 podman[331020]: 2026-01-31 09:16:34.157573323 +0000 UTC m=+0.044368523 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 04:16:34 np0005603609 podman[331019]: 2026-01-31 09:16:34.180661472 +0000 UTC m=+0.067500083 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:16:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:34.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:34.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:36 np0005603609 nova_compute[221550]: 2026-01-31 09:16:36.213 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:36.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:36.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:38.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:38.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:39 np0005603609 nova_compute[221550]: 2026-01-31 09:16:39.119 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:40.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:40.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:41 np0005603609 nova_compute[221550]: 2026-01-31 09:16:41.217 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:42.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:42.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:42 np0005603609 nova_compute[221550]: 2026-01-31 09:16:42.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:44 np0005603609 nova_compute[221550]: 2026-01-31 09:16:44.120 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:44.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:44.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:46 np0005603609 nova_compute[221550]: 2026-01-31 09:16:46.221 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:46.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:46.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:46 np0005603609 nova_compute[221550]: 2026-01-31 09:16:46.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:47 np0005603609 nova_compute[221550]: 2026-01-31 09:16:47.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:48.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:48.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:49 np0005603609 nova_compute[221550]: 2026-01-31 09:16:49.123 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:50.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:50.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:16:51.020 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=116, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=115) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:16:51 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:16:51.021 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:16:51 np0005603609 nova_compute[221550]: 2026-01-31 09:16:51.022 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:51 np0005603609 nova_compute[221550]: 2026-01-31 09:16:51.223 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:52.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:52.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:16:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3871755717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:16:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:16:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3871755717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:16:54 np0005603609 nova_compute[221550]: 2026-01-31 09:16:54.124 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:54.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:54.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:56 np0005603609 nova_compute[221550]: 2026-01-31 09:16:56.227 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:56.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:16:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:56.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:16:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:58.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:16:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:58.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:59 np0005603609 nova_compute[221550]: 2026-01-31 09:16:59.125 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:17:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:00.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:17:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:00.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:01 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:17:01.023 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '116'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:17:01 np0005603609 nova_compute[221550]: 2026-01-31 09:17:01.230 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:02 np0005603609 ovn_controller[130359]: 2026-01-31T09:17:02Z|01039|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 04:17:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:17:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:02.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:17:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:02.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:17:02 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.3 total, 600.0 interval#012Cumulative writes: 20K writes, 104K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s#012Cumulative WAL: 20K writes, 20K syncs, 1.00 writes per sync, written: 0.21 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1564 writes, 8009 keys, 1564 commit groups, 1.0 writes per commit group, ingest: 16.28 MB, 0.03 MB/s#012Interval WAL: 1564 writes, 1564 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     31.0      4.17              0.29        69    0.060       0      0       0.0       0.0#012  L6      1/0   12.29 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.7     69.0     59.5     12.30              1.69        68    0.181    581K    36K       0.0       0.0#012 Sum      1/0   12.29 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.7     51.5     52.3     16.47              1.98       137    0.120    581K    36K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.2     21.5     21.9      4.01              0.21        12    0.334     73K   3110       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0     69.0     59.5     12.30              1.69        68    0.181    581K    36K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     31.2      4.14              0.29        68    0.061       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7800.3 total, 600.0 interval#012Flush(GB): cumulative 0.126, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.84 GB write, 0.11 MB/s write, 0.83 GB read, 0.11 MB/s read, 16.5 seconds#012Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.08 GB read, 0.14 MB/s read, 4.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619ad8ed1f0#2 capacity: 304.00 MB usage: 92.23 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000709 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(5726,88.16 MB,29.0003%) FilterBlock(137,1.57 MB,0.514839%) IndexBlock(137,2.51 MB,0.824281%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 04:17:04 np0005603609 nova_compute[221550]: 2026-01-31 09:17:04.126 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:04.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:04.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:05 np0005603609 podman[331067]: 2026-01-31 09:17:05.160227199 +0000 UTC m=+0.045359887 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:17:05 np0005603609 podman[331066]: 2026-01-31 09:17:05.177775131 +0000 UTC m=+0.064342085 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 04:17:06 np0005603609 nova_compute[221550]: 2026-01-31 09:17:06.232 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:06.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:06.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:17:07.573 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:17:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:17:07.573 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:17:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:17:07.573 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:17:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:17:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:08.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:17:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:17:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:08.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:17:09 np0005603609 nova_compute[221550]: 2026-01-31 09:17:09.129 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:10.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:10.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:11 np0005603609 nova_compute[221550]: 2026-01-31 09:17:11.236 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:12.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:12.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:14 np0005603609 nova_compute[221550]: 2026-01-31 09:17:14.132 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:14.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 04:17:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:14.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 04:17:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:16 np0005603609 nova_compute[221550]: 2026-01-31 09:17:16.240 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:17:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:16.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:17:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:16.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:18.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:17:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:18.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:17:19 np0005603609 nova_compute[221550]: 2026-01-31 09:17:19.132 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:20.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:17:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:20.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:17:20 np0005603609 nova_compute[221550]: 2026-01-31 09:17:20.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:21 np0005603609 nova_compute[221550]: 2026-01-31 09:17:21.244 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:22.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:17:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:22.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:17:23 np0005603609 nova_compute[221550]: 2026-01-31 09:17:23.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:24 np0005603609 nova_compute[221550]: 2026-01-31 09:17:24.134 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:24.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:17:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:24.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:17:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:26 np0005603609 nova_compute[221550]: 2026-01-31 09:17:26.248 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:26.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:26.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:17:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:28.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:17:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:28.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:28 np0005603609 nova_compute[221550]: 2026-01-31 09:17:28.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:28 np0005603609 nova_compute[221550]: 2026-01-31 09:17:28.811 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:17:28 np0005603609 nova_compute[221550]: 2026-01-31 09:17:28.812 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:17:28 np0005603609 nova_compute[221550]: 2026-01-31 09:17:28.812 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:17:28 np0005603609 nova_compute[221550]: 2026-01-31 09:17:28.812 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:17:28 np0005603609 nova_compute[221550]: 2026-01-31 09:17:28.812 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:17:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:17:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:17:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:17:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:17:29 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:17:29 np0005603609 nova_compute[221550]: 2026-01-31 09:17:29.135 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:17:29 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/695105816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:17:29 np0005603609 nova_compute[221550]: 2026-01-31 09:17:29.215 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:17:29 np0005603609 nova_compute[221550]: 2026-01-31 09:17:29.348 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:17:29 np0005603609 nova_compute[221550]: 2026-01-31 09:17:29.349 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4113MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:17:29 np0005603609 nova_compute[221550]: 2026-01-31 09:17:29.349 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:17:29 np0005603609 nova_compute[221550]: 2026-01-31 09:17:29.349 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:17:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:17:29.398 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=117, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=116) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:17:29 np0005603609 nova_compute[221550]: 2026-01-31 09:17:29.398 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:29 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:17:29.399 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:17:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:17:29 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/35238300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:17:29 np0005603609 nova_compute[221550]: 2026-01-31 09:17:29.730 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:17:29 np0005603609 nova_compute[221550]: 2026-01-31 09:17:29.730 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:17:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:29 np0005603609 nova_compute[221550]: 2026-01-31 09:17:29.799 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:17:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:17:30 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2282457959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:17:30 np0005603609 nova_compute[221550]: 2026-01-31 09:17:30.230 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:17:30 np0005603609 nova_compute[221550]: 2026-01-31 09:17:30.235 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:17:30 np0005603609 nova_compute[221550]: 2026-01-31 09:17:30.265 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:17:30 np0005603609 nova_compute[221550]: 2026-01-31 09:17:30.267 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:17:30 np0005603609 nova_compute[221550]: 2026-01-31 09:17:30.267 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:17:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:30.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:30.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:31 np0005603609 nova_compute[221550]: 2026-01-31 09:17:31.252 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:32.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:32.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:33 np0005603609 nova_compute[221550]: 2026-01-31 09:17:33.267 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:33 np0005603609 nova_compute[221550]: 2026-01-31 09:17:33.268 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:17:33 np0005603609 nova_compute[221550]: 2026-01-31 09:17:33.268 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:17:33 np0005603609 nova_compute[221550]: 2026-01-31 09:17:33.294 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:17:33 np0005603609 nova_compute[221550]: 2026-01-31 09:17:33.294 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:33 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:17:33.401 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '117'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:17:34 np0005603609 nova_compute[221550]: 2026-01-31 09:17:34.136 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:34.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:34.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:34 np0005603609 nova_compute[221550]: 2026-01-31 09:17:34.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:34 np0005603609 nova_compute[221550]: 2026-01-31 09:17:34.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:17:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:36 np0005603609 podman[331289]: 2026-01-31 09:17:36.155162766 +0000 UTC m=+0.038614822 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 04:17:36 np0005603609 podman[331288]: 2026-01-31 09:17:36.174248206 +0000 UTC m=+0.059393324 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 04:17:36 np0005603609 nova_compute[221550]: 2026-01-31 09:17:36.254 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:36.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:17:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:36.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:17:36 np0005603609 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 04:17:38 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:17:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:38.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:38.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:39 np0005603609 nova_compute[221550]: 2026-01-31 09:17:39.139 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:39 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:17:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:40.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:40.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:41 np0005603609 nova_compute[221550]: 2026-01-31 09:17:41.256 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:42.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:17:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:42.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:17:43 np0005603609 nova_compute[221550]: 2026-01-31 09:17:43.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:44 np0005603609 nova_compute[221550]: 2026-01-31 09:17:44.141 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:44.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:44.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:46 np0005603609 nova_compute[221550]: 2026-01-31 09:17:46.258 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:46.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:46.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:47 np0005603609 nova_compute[221550]: 2026-01-31 09:17:47.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:17:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:48.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:17:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:17:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:48.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:17:48 np0005603609 nova_compute[221550]: 2026-01-31 09:17:48.603 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:48 np0005603609 nova_compute[221550]: 2026-01-31 09:17:48.684 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:49 np0005603609 nova_compute[221550]: 2026-01-31 09:17:49.143 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:49 np0005603609 nova_compute[221550]: 2026-01-31 09:17:49.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:49 np0005603609 nova_compute[221550]: 2026-01-31 09:17:49.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 04:17:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:50.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:50.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:51 np0005603609 nova_compute[221550]: 2026-01-31 09:17:51.262 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:52.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:52.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:54 np0005603609 nova_compute[221550]: 2026-01-31 09:17:54.144 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:54.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:54.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:56 np0005603609 nova_compute[221550]: 2026-01-31 09:17:56.265 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:56.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:56.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:56 np0005603609 nova_compute[221550]: 2026-01-31 09:17:56.711 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:56 np0005603609 nova_compute[221550]: 2026-01-31 09:17:56.711 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 04:17:56 np0005603609 nova_compute[221550]: 2026-01-31 09:17:56.738 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 04:17:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:58.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:17:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:17:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:58.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:17:59 np0005603609 nova_compute[221550]: 2026-01-31 09:17:59.147 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:00.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:00.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:01 np0005603609 nova_compute[221550]: 2026-01-31 09:18:01.268 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:02.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:02.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:04 np0005603609 nova_compute[221550]: 2026-01-31 09:18:04.149 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:04.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:04.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:06 np0005603609 nova_compute[221550]: 2026-01-31 09:18:06.271 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:06.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:06.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:07 np0005603609 podman[331387]: 2026-01-31 09:18:07.17035633 +0000 UTC m=+0.052495073 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:18:07 np0005603609 podman[331386]: 2026-01-31 09:18:07.195833928 +0000 UTC m=+0.079922038 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 04:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:18:07.574 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:18:07.574 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:18:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:18:07.575 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:18:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:08.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:08.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:09 np0005603609 nova_compute[221550]: 2026-01-31 09:18:09.151 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:10.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:10.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:10 np0005603609 nova_compute[221550]: 2026-01-31 09:18:10.682 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:11 np0005603609 nova_compute[221550]: 2026-01-31 09:18:11.275 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:11 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e430 e430: 3 total, 3 up, 3 in
Jan 31 04:18:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:12.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:12.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:14 np0005603609 nova_compute[221550]: 2026-01-31 09:18:14.152 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:14.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:14.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:16 np0005603609 nova_compute[221550]: 2026-01-31 09:18:16.280 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:16.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:16.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:18.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:18.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:19 np0005603609 nova_compute[221550]: 2026-01-31 09:18:19.154 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:19 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:20.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:18:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:20.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:18:21 np0005603609 nova_compute[221550]: 2026-01-31 09:18:21.284 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:22.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:22.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:22 np0005603609 nova_compute[221550]: 2026-01-31 09:18:22.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:23 np0005603609 nova_compute[221550]: 2026-01-31 09:18:23.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:24 np0005603609 nova_compute[221550]: 2026-01-31 09:18:24.155 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:24.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:24.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:24 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:18:25 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1731910585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:18:26 np0005603609 nova_compute[221550]: 2026-01-31 09:18:26.288 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:26.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:26.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:28.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:28.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:29 np0005603609 nova_compute[221550]: 2026-01-31 09:18:29.157 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:29 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:30.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:30 np0005603609 nova_compute[221550]: 2026-01-31 09:18:30.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:18:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:30.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:18:30 np0005603609 nova_compute[221550]: 2026-01-31 09:18:30.681 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:18:30 np0005603609 nova_compute[221550]: 2026-01-31 09:18:30.681 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:18:30 np0005603609 nova_compute[221550]: 2026-01-31 09:18:30.681 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:18:30 np0005603609 nova_compute[221550]: 2026-01-31 09:18:30.682 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:18:30 np0005603609 nova_compute[221550]: 2026-01-31 09:18:30.682 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:18:31 np0005603609 nova_compute[221550]: 2026-01-31 09:18:31.292 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:31 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:18:31 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3661090237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:18:31 np0005603609 nova_compute[221550]: 2026-01-31 09:18:31.368 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:18:31 np0005603609 nova_compute[221550]: 2026-01-31 09:18:31.547 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:18:31 np0005603609 nova_compute[221550]: 2026-01-31 09:18:31.548 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4120MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:18:31 np0005603609 nova_compute[221550]: 2026-01-31 09:18:31.548 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:18:31 np0005603609 nova_compute[221550]: 2026-01-31 09:18:31.548 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:18:31 np0005603609 nova_compute[221550]: 2026-01-31 09:18:31.657 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:18:31 np0005603609 nova_compute[221550]: 2026-01-31 09:18:31.657 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:18:31 np0005603609 nova_compute[221550]: 2026-01-31 09:18:31.696 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:18:32 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:18:32 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/668408901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:18:32 np0005603609 nova_compute[221550]: 2026-01-31 09:18:32.107 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:18:32 np0005603609 nova_compute[221550]: 2026-01-31 09:18:32.114 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:18:32 np0005603609 nova_compute[221550]: 2026-01-31 09:18:32.133 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:18:32 np0005603609 nova_compute[221550]: 2026-01-31 09:18:32.136 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:18:32 np0005603609 nova_compute[221550]: 2026-01-31 09:18:32.136 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:18:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:32.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:32.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:34 np0005603609 nova_compute[221550]: 2026-01-31 09:18:34.139 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:34 np0005603609 nova_compute[221550]: 2026-01-31 09:18:34.139 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:18:34 np0005603609 nova_compute[221550]: 2026-01-31 09:18:34.139 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:18:34 np0005603609 nova_compute[221550]: 2026-01-31 09:18:34.160 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:34 np0005603609 nova_compute[221550]: 2026-01-31 09:18:34.173 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:18:34 np0005603609 nova_compute[221550]: 2026-01-31 09:18:34.173 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:34.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:34 np0005603609 nova_compute[221550]: 2026-01-31 09:18:34.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:34 np0005603609 nova_compute[221550]: 2026-01-31 09:18:34.659 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:18:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:18:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:34.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:18:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:36 np0005603609 nova_compute[221550]: 2026-01-31 09:18:36.296 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:36.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:36.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:37 np0005603609 podman[331501]: 2026-01-31 09:18:37.872125339 +0000 UTC m=+0.082041001 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 04:18:37 np0005603609 podman[331500]: 2026-01-31 09:18:37.898829756 +0000 UTC m=+0.109418694 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 04:18:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:38.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:38.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:39 np0005603609 nova_compute[221550]: 2026-01-31 09:18:39.162 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:39 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:18:39 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1275901134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:18:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:18:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:18:40 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:18:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:18:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:40.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:18:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:40.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:41 np0005603609 nova_compute[221550]: 2026-01-31 09:18:41.300 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:42.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:42.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:44 np0005603609 nova_compute[221550]: 2026-01-31 09:18:44.165 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:44.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:44 np0005603609 nova_compute[221550]: 2026-01-31 09:18:44.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:18:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:44.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:18:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:45 np0005603609 nova_compute[221550]: 2026-01-31 09:18:45.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:46 np0005603609 nova_compute[221550]: 2026-01-31 09:18:46.305 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:46.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:46.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:48.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:48 np0005603609 nova_compute[221550]: 2026-01-31 09:18:48.675 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:48.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:49 np0005603609 nova_compute[221550]: 2026-01-31 09:18:49.167 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:18:49 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:18:49 np0005603609 nova_compute[221550]: 2026-01-31 09:18:49.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:50.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:50.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:51 np0005603609 nova_compute[221550]: 2026-01-31 09:18:51.329 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:52.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:52.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:18:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1372370753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:18:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:18:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1372370753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:18:54 np0005603609 nova_compute[221550]: 2026-01-31 09:18:54.168 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:54.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:54.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:54 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:18:56.328 140058 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=118, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=117) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:18:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:18:56.329 140058 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:18:56 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:18:56.329 140058 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a8402939-fce1-46a9-9749-88c4c6334003, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '118'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:18:56 np0005603609 nova_compute[221550]: 2026-01-31 09:18:56.367 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:18:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:56.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:18:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:56.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:58.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:18:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:58.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:59 np0005603609 nova_compute[221550]: 2026-01-31 09:18:59.169 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:59 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:00.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:00.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:01 np0005603609 nova_compute[221550]: 2026-01-31 09:19:01.371 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:02.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:02.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:02 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e431 e431: 3 total, 3 up, 3 in
Jan 31 04:19:04 np0005603609 nova_compute[221550]: 2026-01-31 09:19:04.171 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:04.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:04.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:04 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:06 np0005603609 nova_compute[221550]: 2026-01-31 09:19:06.375 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:06.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:06.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:19:07.575 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:19:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:19:07.576 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:19:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:19:07.576 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:19:08 np0005603609 podman[331705]: 2026-01-31 09:19:08.180882596 +0000 UTC m=+0.060367217 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 04:19:08 np0005603609 podman[331704]: 2026-01-31 09:19:08.207125162 +0000 UTC m=+0.086697175 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:19:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:08.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:08.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:09 np0005603609 nova_compute[221550]: 2026-01-31 09:19:09.172 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:09 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 e432: 3 total, 3 up, 3 in
Jan 31 04:19:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:10.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:10.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:11 np0005603609 nova_compute[221550]: 2026-01-31 09:19:11.402 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:12.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:12.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:14 np0005603609 nova_compute[221550]: 2026-01-31 09:19:14.175 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:14.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:14.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:14 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:16 np0005603609 nova_compute[221550]: 2026-01-31 09:19:16.406 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:16.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:16.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:19:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/314481979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:19:17 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:19:17 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/314481979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:19:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:18.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:19:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:18.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:19:19 np0005603609 nova_compute[221550]: 2026-01-31 09:19:19.175 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:20.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:20.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:21 np0005603609 nova_compute[221550]: 2026-01-31 09:19:21.408 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:22.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:22.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:23 np0005603609 nova_compute[221550]: 2026-01-31 09:19:23.661 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:24 np0005603609 nova_compute[221550]: 2026-01-31 09:19:24.176 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:24.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:24 np0005603609 nova_compute[221550]: 2026-01-31 09:19:24.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:24.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:26 np0005603609 nova_compute[221550]: 2026-01-31 09:19:26.412 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:26.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:26.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:28.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:28.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:29 np0005603609 nova_compute[221550]: 2026-01-31 09:19:29.178 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:30.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:30.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:31 np0005603609 nova_compute[221550]: 2026-01-31 09:19:31.416 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:32.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:32 np0005603609 nova_compute[221550]: 2026-01-31 09:19:32.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:32 np0005603609 nova_compute[221550]: 2026-01-31 09:19:32.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:19:32 np0005603609 nova_compute[221550]: 2026-01-31 09:19:32.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:19:32 np0005603609 nova_compute[221550]: 2026-01-31 09:19:32.683 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:19:32 np0005603609 nova_compute[221550]: 2026-01-31 09:19:32.683 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:32 np0005603609 nova_compute[221550]: 2026-01-31 09:19:32.684 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:32 np0005603609 nova_compute[221550]: 2026-01-31 09:19:32.712 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:19:32 np0005603609 nova_compute[221550]: 2026-01-31 09:19:32.712 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:19:32 np0005603609 nova_compute[221550]: 2026-01-31 09:19:32.713 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:19:32 np0005603609 nova_compute[221550]: 2026-01-31 09:19:32.713 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:19:32 np0005603609 nova_compute[221550]: 2026-01-31 09:19:32.713 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:19:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:32.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:33 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:19:33 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4079400756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:19:33 np0005603609 nova_compute[221550]: 2026-01-31 09:19:33.129 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:19:33 np0005603609 nova_compute[221550]: 2026-01-31 09:19:33.258 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:19:33 np0005603609 nova_compute[221550]: 2026-01-31 09:19:33.259 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4136MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:19:33 np0005603609 nova_compute[221550]: 2026-01-31 09:19:33.259 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:19:33 np0005603609 nova_compute[221550]: 2026-01-31 09:19:33.259 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:19:33 np0005603609 nova_compute[221550]: 2026-01-31 09:19:33.335 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:19:33 np0005603609 nova_compute[221550]: 2026-01-31 09:19:33.335 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:19:33 np0005603609 nova_compute[221550]: 2026-01-31 09:19:33.647 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing inventories for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 04:19:33 np0005603609 nova_compute[221550]: 2026-01-31 09:19:33.664 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating ProviderTree inventory for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 04:19:33 np0005603609 nova_compute[221550]: 2026-01-31 09:19:33.665 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Updating inventory in ProviderTree for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 04:19:33 np0005603609 nova_compute[221550]: 2026-01-31 09:19:33.681 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing aggregate associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 04:19:33 np0005603609 nova_compute[221550]: 2026-01-31 09:19:33.705 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Refreshing trait associations for resource provider 09a2f316-8f9d-47b2-922f-864a1d14c517, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 04:19:33 np0005603609 nova_compute[221550]: 2026-01-31 09:19:33.738 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:19:34 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:19:34 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2578349354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:19:34 np0005603609 nova_compute[221550]: 2026-01-31 09:19:34.149 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:19:34 np0005603609 nova_compute[221550]: 2026-01-31 09:19:34.154 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:19:34 np0005603609 nova_compute[221550]: 2026-01-31 09:19:34.172 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:19:34 np0005603609 nova_compute[221550]: 2026-01-31 09:19:34.173 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:19:34 np0005603609 nova_compute[221550]: 2026-01-31 09:19:34.173 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:19:34 np0005603609 nova_compute[221550]: 2026-01-31 09:19:34.180 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:34.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:34.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:36 np0005603609 nova_compute[221550]: 2026-01-31 09:19:36.465 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:36.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:36.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:19:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3719793373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:19:38 np0005603609 nova_compute[221550]: 2026-01-31 09:19:38.149 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:38 np0005603609 nova_compute[221550]: 2026-01-31 09:19:38.149 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:19:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:38.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:38.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:39 np0005603609 podman[331795]: 2026-01-31 09:19:39.177090437 +0000 UTC m=+0.063020742 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 04:19:39 np0005603609 nova_compute[221550]: 2026-01-31 09:19:39.181 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:39 np0005603609 podman[331796]: 2026-01-31 09:19:39.182265435 +0000 UTC m=+0.065475983 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible)
Jan 31 04:19:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:40.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:40.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:41 np0005603609 nova_compute[221550]: 2026-01-31 09:19:41.469 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #217. Immutable memtables: 0.
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:19:42.382111) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:856] [default] [JOB 139] Flushing memtable with next log file: 217
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182382146, "job": 139, "event": "flush_started", "num_memtables": 1, "num_entries": 2380, "num_deletes": 252, "total_data_size": 5866320, "memory_usage": 5932936, "flush_reason": "Manual Compaction"}
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:885] [default] [JOB 139] Level-0 flush table #218: started
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182437097, "cf_name": "default", "job": 139, "event": "table_file_creation", "file_number": 218, "file_size": 3837073, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 103706, "largest_seqno": 106081, "table_properties": {"data_size": 3827441, "index_size": 6125, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19746, "raw_average_key_size": 20, "raw_value_size": 3808261, "raw_average_value_size": 3946, "num_data_blocks": 267, "num_entries": 965, "num_filter_entries": 965, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850964, "oldest_key_time": 1769850964, "file_creation_time": 1769851182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 139] Flush lasted 55164 microseconds, and 5417 cpu microseconds.
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:19:42.437271) [db/flush_job.cc:967] [default] [JOB 139] Level-0 flush table #218: 3837073 bytes OK
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:19:42.437293) [db/memtable_list.cc:519] [default] Level-0 commit table #218 started
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:19:42.441556) [db/memtable_list.cc:722] [default] Level-0 commit table #218: memtable #1 done
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:19:42.441602) EVENT_LOG_v1 {"time_micros": 1769851182441592, "job": 139, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:19:42.441626) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 139] Try to delete WAL files size 5855984, prev total WAL file size 5855984, number of live WAL files 2.
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000214.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:19:42.442988) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039323837' seq:72057594037927935, type:22 .. '7061786F730039353339' seq:0, type:0; will stop at (end)
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 140] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 139 Base level 0, inputs: [218(3747KB)], [216(12MB)]
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182443019, "job": 140, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [218], "files_L6": [216], "score": -1, "input_data_size": 16719148, "oldest_snapshot_seqno": -1}
Jan 31 04:19:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:42.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 140] Generated table #219: 12619 keys, 14725078 bytes, temperature: kUnknown
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182702652, "cf_name": "default", "job": 140, "event": "table_file_creation", "file_number": 219, "file_size": 14725078, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14645955, "index_size": 46498, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31557, "raw_key_size": 334192, "raw_average_key_size": 26, "raw_value_size": 14428142, "raw_average_value_size": 1143, "num_data_blocks": 1760, "num_entries": 12619, "num_filter_entries": 12619, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843222, "oldest_key_time": 0, "file_creation_time": 1769851182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4bf585ed-5b97-4c56-83f8-35fa96479e70", "db_session_id": "GJ8KZ27N7JDEP6ITHIOI", "orig_file_number": 219, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:19:42.702948) [db/compaction/compaction_job.cc:1663] [default] [JOB 140] Compacted 1@0 + 1@6 files to L6 => 14725078 bytes
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:19:42.711515) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 64.4 rd, 56.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.3 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 13142, records dropped: 523 output_compression: NoCompression
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:19:42.711531) EVENT_LOG_v1 {"time_micros": 1769851182711523, "job": 140, "event": "compaction_finished", "compaction_time_micros": 259728, "compaction_time_cpu_micros": 26682, "output_level": 6, "num_output_files": 1, "total_output_size": 14725078, "num_input_records": 13142, "num_output_records": 12619, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000218.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182711873, "job": 140, "event": "table_file_deletion", "file_number": 218}
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000216.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182713122, "job": 140, "event": "table_file_deletion", "file_number": 216}
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:19:42.442811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:19:42.713143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:19:42.713147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:19:42.713148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:19:42.713150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:42 np0005603609 ceph-mon[81667]: rocksdb: (Original Log Time 2026/01/31-09:19:42.713151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:42.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:44 np0005603609 nova_compute[221550]: 2026-01-31 09:19:44.183 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:44.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:44 np0005603609 nova_compute[221550]: 2026-01-31 09:19:44.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:44.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:46 np0005603609 nova_compute[221550]: 2026-01-31 09:19:46.473 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:19:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:46.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:19:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:46.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:48.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:48.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:49 np0005603609 nova_compute[221550]: 2026-01-31 09:19:49.185 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:49 np0005603609 nova_compute[221550]: 2026-01-31 09:19:49.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:49 np0005603609 nova_compute[221550]: 2026-01-31 09:19:49.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:50.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:50.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:51 np0005603609 nova_compute[221550]: 2026-01-31 09:19:51.477 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:52 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:19:52 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:19:52 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:19:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:52.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:52.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:53 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:19:53 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:19:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:19:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2690524856' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:19:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:19:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2690524856' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:19:54 np0005603609 nova_compute[221550]: 2026-01-31 09:19:54.188 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:19:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:54.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:19:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:54.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:56 np0005603609 nova_compute[221550]: 2026-01-31 09:19:56.480 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:56.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:19:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:56.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:19:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:58.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:58 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:19:58 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:58 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:58.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:59 np0005603609 nova_compute[221550]: 2026-01-31 09:19:59.191 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:19:59 np0005603609 ceph-mon[81667]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 04:20:00 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:20:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:00.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:20:00 np0005603609 ceph-mon[81667]: overall HEALTH_OK
Jan 31 04:20:00 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:00 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:20:00 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:00.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:20:01 np0005603609 nova_compute[221550]: 2026-01-31 09:20:01.483 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:01 np0005603609 ovn_controller[130359]: 2026-01-31T09:20:01Z|01040|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 04:20:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:20:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:02.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:20:02 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:02 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:02 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:02.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:04 np0005603609 nova_compute[221550]: 2026-01-31 09:20:04.194 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:04.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:04 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:04 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:04 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:04.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:05 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:06 np0005603609 nova_compute[221550]: 2026-01-31 09:20:06.486 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:06.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:06 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:06 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:06 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:06.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:20:07.576 140058 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:20:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:20:07.576 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:20:07 np0005603609 ovn_metadata_agent[140053]: 2026-01-31 09:20:07.577 140058 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:20:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:08.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:08 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:08 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:08 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:08.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:09 np0005603609 nova_compute[221550]: 2026-01-31 09:20:09.193 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:10 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:10 np0005603609 podman[332023]: 2026-01-31 09:20:10.152781739 +0000 UTC m=+0.041227906 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:20:10 np0005603609 podman[332022]: 2026-01-31 09:20:10.171765806 +0000 UTC m=+0.062299075 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 04:20:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:10.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:10 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:10 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:10 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:10.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:11 np0005603609 nova_compute[221550]: 2026-01-31 09:20:11.524 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:12.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:12 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:12 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:12 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:12.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:14 np0005603609 nova_compute[221550]: 2026-01-31 09:20:14.195 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:14.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:14 np0005603609 nova_compute[221550]: 2026-01-31 09:20:14.655 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:14 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:14 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 04:20:14 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:14.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 04:20:15 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:16 np0005603609 nova_compute[221550]: 2026-01-31 09:20:16.527 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:16.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:16 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:16 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:16 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:16.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:20:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:18.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:20:18 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:18 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:18 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:18.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:19 np0005603609 nova_compute[221550]: 2026-01-31 09:20:19.197 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:20 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:20.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:20 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:20 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:20 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:20.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:21 np0005603609 nova_compute[221550]: 2026-01-31 09:20:21.530 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:22.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:22 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:22 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:22 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:22.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:23 np0005603609 nova_compute[221550]: 2026-01-31 09:20:23.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:24 np0005603609 nova_compute[221550]: 2026-01-31 09:20:24.197 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:24.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:24 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:24 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:24 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:24.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:25 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:25 np0005603609 nova_compute[221550]: 2026-01-31 09:20:25.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:26 np0005603609 nova_compute[221550]: 2026-01-31 09:20:26.533 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:26.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:26 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:26 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:26 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:26.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:28.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:28 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:28 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:28 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:28.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:29 np0005603609 nova_compute[221550]: 2026-01-31 09:20:29.199 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:30 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:20:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:30.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:20:30 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:30 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:30 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:30.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:31 np0005603609 nova_compute[221550]: 2026-01-31 09:20:31.536 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:32.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:32 np0005603609 nova_compute[221550]: 2026-01-31 09:20:32.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:32 np0005603609 nova_compute[221550]: 2026-01-31 09:20:32.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:20:32 np0005603609 nova_compute[221550]: 2026-01-31 09:20:32.660 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:20:32 np0005603609 nova_compute[221550]: 2026-01-31 09:20:32.683 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:20:32 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:32 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:20:32 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:32.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:20:33 np0005603609 nova_compute[221550]: 2026-01-31 09:20:33.658 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:34 np0005603609 systemd-logind[823]: New session 69 of user zuul.
Jan 31 04:20:34 np0005603609 systemd[1]: Started Session 69 of User zuul.
Jan 31 04:20:34 np0005603609 nova_compute[221550]: 2026-01-31 09:20:34.201 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:34.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:34 np0005603609 nova_compute[221550]: 2026-01-31 09:20:34.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:34 np0005603609 nova_compute[221550]: 2026-01-31 09:20:34.688 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:20:34 np0005603609 nova_compute[221550]: 2026-01-31 09:20:34.689 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:20:34 np0005603609 nova_compute[221550]: 2026-01-31 09:20:34.689 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:20:34 np0005603609 nova_compute[221550]: 2026-01-31 09:20:34.689 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:20:34 np0005603609 nova_compute[221550]: 2026-01-31 09:20:34.690 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:20:34 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:34 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:34 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:34.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:20:35 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2450555092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:20:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:35 np0005603609 nova_compute[221550]: 2026-01-31 09:20:35.109 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:20:35 np0005603609 nova_compute[221550]: 2026-01-31 09:20:35.252 221554 WARNING nova.virt.libvirt.driver [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:20:35 np0005603609 nova_compute[221550]: 2026-01-31 09:20:35.255 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4128MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:20:35 np0005603609 nova_compute[221550]: 2026-01-31 09:20:35.256 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:20:35 np0005603609 nova_compute[221550]: 2026-01-31 09:20:35.256 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:20:35 np0005603609 nova_compute[221550]: 2026-01-31 09:20:35.497 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:20:35 np0005603609 nova_compute[221550]: 2026-01-31 09:20:35.497 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:20:35 np0005603609 nova_compute[221550]: 2026-01-31 09:20:35.546 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:20:35 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:20:35 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2351824196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:20:35 np0005603609 nova_compute[221550]: 2026-01-31 09:20:35.977 221554 DEBUG oslo_concurrency.processutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:20:35 np0005603609 nova_compute[221550]: 2026-01-31 09:20:35.982 221554 DEBUG nova.compute.provider_tree [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed in ProviderTree for provider: 09a2f316-8f9d-47b2-922f-864a1d14c517 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:20:36 np0005603609 nova_compute[221550]: 2026-01-31 09:20:36.021 221554 DEBUG nova.scheduler.client.report [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Inventory has not changed for provider 09a2f316-8f9d-47b2-922f-864a1d14c517 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:20:36 np0005603609 nova_compute[221550]: 2026-01-31 09:20:36.023 221554 DEBUG nova.compute.resource_tracker [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:20:36 np0005603609 nova_compute[221550]: 2026-01-31 09:20:36.023 221554 DEBUG oslo_concurrency.lockutils [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:20:36 np0005603609 nova_compute[221550]: 2026-01-31 09:20:36.540 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:36.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:36 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:36 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:36 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:36.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:37 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 04:20:37 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1860320840' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 04:20:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:38.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:38 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:38 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:38 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:38.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:39 np0005603609 nova_compute[221550]: 2026-01-31 09:20:39.207 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:39 np0005603609 ovs-vsctl[332403]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 04:20:40 np0005603609 nova_compute[221550]: 2026-01-31 09:20:40.024 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:40 np0005603609 nova_compute[221550]: 2026-01-31 09:20:40.024 221554 DEBUG nova.compute.manager [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:20:40 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:40 np0005603609 virtqemud[221292]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 04:20:40 np0005603609 virtqemud[221292]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 04:20:40 np0005603609 virtqemud[221292]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 04:20:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:40.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:40 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:40 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:20:40 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:40.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:20:40 np0005603609 podman[332632]: 2026-01-31 09:20:40.930349182 +0000 UTC m=+0.093573795 container health_status 94d82424ff1d933228991a6d3883d0d3edbeca44021cfb05346062f20cd34a6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 04:20:40 np0005603609 podman[332615]: 2026-01-31 09:20:40.932214678 +0000 UTC m=+0.095816720 container health_status 3c37e209d5117c41611837b9b1ada24551247fc40f5d7df8766586fb40389b6e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94-12a8c654aaa5d3e5f9bddba5e32e9e6697396d50a5e2ba40a522b51072bb0f94'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, 
managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 04:20:41 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: cache status {prefix=cache status} (starting...)
Jan 31 04:20:41 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:20:41 np0005603609 lvm[332780]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:20:41 np0005603609 lvm[332780]: VG ceph_vg0 finished
Jan 31 04:20:41 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: client ls {prefix=client ls} (starting...)
Jan 31 04:20:41 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:20:41 np0005603609 nova_compute[221550]: 2026-01-31 09:20:41.542 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:41 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 04:20:41 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:20:41 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 04:20:41 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3750243798' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 04:20:41 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 04:20:41 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:20:42 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 04:20:42 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:20:42 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 04:20:42 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:20:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:20:42 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/265012548' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:20:42 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 04:20:42 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:20:42 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 04:20:42 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:20:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:20:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:42.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:20:42 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 31 04:20:42 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1582570693' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 04:20:42 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 04:20:42 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:20:42 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 04:20:42 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:20:42 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:42 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:42 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:42.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 31 04:20:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/557018449' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 04:20:43 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: ops {prefix=ops} (starting...)
Jan 31 04:20:43 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:20:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 31 04:20:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1224619771' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 04:20:43 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 04:20:43 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3586976088' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 04:20:43 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: session ls {prefix=session ls} (starting...)
Jan 31 04:20:43 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy Can't run that command on an inactive MDS!
Jan 31 04:20:43 np0005603609 ceph-mds[84837]: mds.cephfs.compute-1.dqeaqy asok_command: status {prefix=status} (starting...)
Jan 31 04:20:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 04:20:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/212560233' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 04:20:44 np0005603609 nova_compute[221550]: 2026-01-31 09:20:44.206 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 04:20:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2415556615' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 04:20:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 04:20:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3362579220' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 04:20:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:44.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:44 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 31 04:20:44 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1256069273' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 04:20:44 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:44 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:44 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:44.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 04:20:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/662583840' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 04:20:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 31 04:20:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3302768494' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 04:20:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 31 04:20:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3709068946' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 04:20:45 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 04:20:45 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2386080269' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 04:20:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 31 04:20:46 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/480645662' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 04:20:46 np0005603609 nova_compute[221550]: 2026-01-31 09:20:46.546 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:46 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 04:20:46 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3387722562' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 04:20:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:46.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:46 np0005603609 nova_compute[221550]: 2026-01-31 09:20:46.654 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:46 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:46 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:20:46 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:46.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:20:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 04:20:47 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4221179603' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 04:20:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 04:20:47 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2047880028' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4349781 data_alloc: 218103808 data_used: 18087936
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.467552185s of 50.791564941s, submitted: 29
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab1c00 session 0x55f200b0e5a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f20109ba40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f202752f00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f204764800 session 0x55f202ef21e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20e436c00 session 0x55f203693c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3035000/0x0/0x1bfc00000, data 0x23a09e6/0x2599000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397317 data_alloc: 218103808 data_used: 18087936
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3035000/0x0/0x1bfc00000, data 0x23a09e6/0x2599000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438714368 unmapped: 75079680 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438714368 unmapped: 75079680 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438714368 unmapped: 75079680 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438714368 unmapped: 75079680 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438714368 unmapped: 75079680 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397317 data_alloc: 218103808 data_used: 18087936
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3035000/0x0/0x1bfc00000, data 0x23a09e6/0x2599000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3035000/0x0/0x1bfc00000, data 0x23a09e6/0x2599000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4397317 data_alloc: 218103808 data_used: 18087936
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202d28c00 session 0x55f2028e8780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f20051e960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3035000/0x0/0x1bfc00000, data 0x23a09e6/0x2599000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f20055cf00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.561635971s of 18.616905212s, submitted: 11
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f204764800 session 0x55f2031843c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 75071488 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4400456 data_alloc: 218103808 data_used: 18087936
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3033000/0x0/0x1bfc00000, data 0x23a0a19/0x259b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3033000/0x0/0x1bfc00000, data 0x23a0a19/0x259b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4441096 data_alloc: 218103808 data_used: 23687168
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3033000/0x0/0x1bfc00000, data 0x23a0a19/0x259b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4441096 data_alloc: 218103808 data_used: 23687168
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 75087872 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.537538528s of 12.552680016s, submitted: 5
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 439066624 unmapped: 74727424 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a25b0000/0x0/0x1bfc00000, data 0x2e23a19/0x301e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440401920 unmapped: 73392128 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a250d000/0x0/0x1bfc00000, data 0x2ec6a19/0x30c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440401920 unmapped: 73392128 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a250d000/0x0/0x1bfc00000, data 0x2ec6a19/0x30c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440401920 unmapped: 73392128 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4537392 data_alloc: 218103808 data_used: 24784896
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f205d21400 session 0x55f202f33680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202d2c000 session 0x55f201722d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f2025db680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f203172f00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440541184 unmapped: 73252864 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2413000/0x0/0x1bfc00000, data 0x2fbfa29/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f204764800 session 0x55f202f321e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f205d21400 session 0x55f2016b3e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f203774000 session 0x55f2037501e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f20335dc20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f202737c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440541184 unmapped: 73252864 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440541184 unmapped: 73252864 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440541184 unmapped: 73252864 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440541184 unmapped: 73252864 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23f2000/0x0/0x1bfc00000, data 0x2fe0a29/0x31dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4545826 data_alloc: 218103808 data_used: 24784896
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440541184 unmapped: 73252864 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440541184 unmapped: 73252864 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440549376 unmapped: 73244672 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23f2000/0x0/0x1bfc00000, data 0x2fe0a29/0x31dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440549376 unmapped: 73244672 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.877882957s of 13.225532532s, submitted: 119
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4545962 data_alloc: 218103808 data_used: 24784896
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f204764800 session 0x55f202d40d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23e4000/0x0/0x1bfc00000, data 0x2feea29/0x31ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f205d21400 session 0x55f20051e780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23e4000/0x0/0x1bfc00000, data 0x2feea29/0x31ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f202743400 session 0x55f202e93680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f20276a1e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4546122 data_alloc: 218103808 data_used: 24788992
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23e1000/0x0/0x1bfc00000, data 0x2ff1a29/0x31ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440475648 unmapped: 73318400 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23e1000/0x0/0x1bfc00000, data 0x2ff1a29/0x31ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440483840 unmapped: 73310208 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20164c800 session 0x55f200095a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440483840 unmapped: 73310208 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4546502 data_alloc: 218103808 data_used: 24788992
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20164e400 session 0x55f200ae3e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f21401f000 session 0x55f2036ca960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440483840 unmapped: 73310208 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23e1000/0x0/0x1bfc00000, data 0x2ff1a29/0x31ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440483840 unmapped: 73310208 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440287232 unmapped: 73506816 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440369152 unmapped: 73424896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440369152 unmapped: 73424896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4553222 data_alloc: 218103808 data_used: 25632768
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440369152 unmapped: 73424896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440369152 unmapped: 73424896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23e1000/0x0/0x1bfc00000, data 0x2ff1a29/0x31ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440369152 unmapped: 73424896 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.388780594s of 18.688554764s, submitted: 5
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440377344 unmapped: 73416704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440377344 unmapped: 73416704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4553574 data_alloc: 218103808 data_used: 25632768
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440377344 unmapped: 73416704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23e1000/0x0/0x1bfc00000, data 0x2ff1a29/0x31ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440377344 unmapped: 73416704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440377344 unmapped: 73416704 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 440385536 unmapped: 73408512 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a23e1000/0x0/0x1bfc00000, data 0x2ff1a29/0x31ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,2,0,0,0,0,13])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444809216 unmapped: 68984832 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4599318 data_alloc: 218103808 data_used: 26689536
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444940288 unmapped: 68853760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444940288 unmapped: 68853760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f21401f000 session 0x55f201693a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20268d000 session 0x55f202f670e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f2026cb400 session 0x55f2000950e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f2092a4800 session 0x55f203184d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f20171cd20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20268d000 session 0x55f201723c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f2026cb400 session 0x55f203692b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f21401f000 session 0x55f202757680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20a487800 session 0x55f202ef32c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444194816 unmapped: 69599232 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20e436c00 session 0x55f200094000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f203710c00 session 0x55f202f661e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.830982208s of 10.382411957s, submitted: 119
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444203008 unmapped: 69591040 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20a487800 session 0x55f2025dab40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a178b000/0x0/0x1bfc00000, data 0x3c46a39/0x3e43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a288e000/0x0/0x1bfc00000, data 0x2b45a06/0x2d40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444203008 unmapped: 69591040 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4490016 data_alloc: 218103808 data_used: 19951616
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a288e000/0x0/0x1bfc00000, data 0x2b45a06/0x2d40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444203008 unmapped: 69591040 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a288b000/0x0/0x1bfc00000, data 0x2b48a06/0x2d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4490948 data_alloc: 218103808 data_used: 19951616
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f20109a5a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a288b000/0x0/0x1bfc00000, data 0x2b48a06/0x2d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20268d000 session 0x55f203385680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20268d000 session 0x55f2025b9860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 69582848 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.296791077s of 11.565423012s, submitted: 30
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f2036cba40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4493755 data_alloc: 218103808 data_used: 19951616
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a288a000/0x0/0x1bfc00000, data 0x2b48a16/0x2d44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444219392 unmapped: 69574656 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442851328 unmapped: 70942720 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200ab0800 session 0x55f202750960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f204764800 session 0x55f201722d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f20288e400 session 0x55f200094b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4476949 data_alloc: 218103808 data_used: 24272896
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2ec2000/0x0/0x1bfc00000, data 0x2511a06/0x270c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4476949 data_alloc: 218103808 data_used: 24272896
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2ec2000/0x0/0x1bfc00000, data 0x2511a06/0x270c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.816506386s of 11.953717232s, submitted: 22
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442892288 unmapped: 70901760 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446660608 unmapped: 67133440 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445833216 unmapped: 67960832 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2797000/0x0/0x1bfc00000, data 0x2c34a06/0x2e2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445833216 unmapped: 67960832 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4556815 data_alloc: 218103808 data_used: 25219072
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445833216 unmapped: 67960832 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445833216 unmapped: 67960832 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445833216 unmapped: 67960832 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445767680 unmapped: 68026368 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a2797000/0x0/0x1bfc00000, data 0x2c34a06/0x2e2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445767680 unmapped: 68026368 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a279c000/0x0/0x1bfc00000, data 0x2c37a06/0x2e32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4551743 data_alloc: 218103808 data_used: 25219072
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445767680 unmapped: 68026368 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445767680 unmapped: 68026368 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445767680 unmapped: 68026368 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a279c000/0x0/0x1bfc00000, data 0x2c37a06/0x2e32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445767680 unmapped: 68026368 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a279c000/0x0/0x1bfc00000, data 0x2c37a06/0x2e32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a279c000/0x0/0x1bfc00000, data 0x2c37a06/0x2e32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445767680 unmapped: 68026368 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4551743 data_alloc: 218103808 data_used: 25219072
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445767680 unmapped: 68026368 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.776427269s of 14.190032005s, submitted: 101
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445784064 unmapped: 68009984 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445784064 unmapped: 68009984 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445784064 unmapped: 68009984 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 ms_handle_reset con 0x55f200941800 session 0x55f20335d860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a15fb000/0x0/0x1bfc00000, data 0x2c38a06/0x2e33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446832640 unmapped: 66961408 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4552359 data_alloc: 218103808 data_used: 25227264
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 385 ms_handle_reset con 0x55f200ab0800 session 0x55f2008921e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445800448 unmapped: 67993600 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 385 ms_handle_reset con 0x55f204764800 session 0x55f203750b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 385 ms_handle_reset con 0x55f20268d000 session 0x55f203184d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 65495040 heap: 513794048 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 385 heartbeat osd_stat(store_statfs(0x1a15f7000/0x0/0x1bfc00000, data 0x2c3a6c1/0x2e37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,10])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 385 heartbeat osd_stat(store_statfs(0x1a15f7000/0x0/0x1bfc00000, data 0x2c3a6c1/0x2e37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [0,0,0,0,0,0,1,2,5])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464240640 unmapped: 57958400 heap: 522199040 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 385 ms_handle_reset con 0x55f20264dc00 session 0x55f202752b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451747840 unmapped: 74432512 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 385 handle_osd_map epochs [386,386], i have 386, src has [1,386]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 386 ms_handle_reset con 0x55f200941800 session 0x55f202736d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 74416128 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f200ab0800 session 0x55f2028e8d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4767617 data_alloc: 234881024 data_used: 31432704
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451772416 unmapped: 74407936 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451772416 unmapped: 74407936 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a008c000/0x0/0x1bfc00000, data 0x41a0fe3/0x43a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451772416 unmapped: 74407936 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.085830688s of 12.269598007s, submitted: 111
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f20268d000 session 0x55f2016943c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f204764800 session 0x55f2008681e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f20809c800 session 0x55f202739e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448528384 unmapped: 77651968 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448528384 unmapped: 77651968 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4756097 data_alloc: 234881024 data_used: 31432704
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448536576 unmapped: 77643776 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f200941800 session 0x55f203384000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f200ab0800 session 0x55f20271b680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f20268d000 session 0x55f202f67860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f204764800 session 0x55f2008694a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f209495800 session 0x55f20271ab40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f200941800 session 0x55f202f67e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f200ab0800 session 0x55f202756f00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f20268d000 session 0x55f202753a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f204764800 session 0x55f203185680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a008e000/0x0/0x1bfc00000, data 0x41a0fe3/0x43a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448536576 unmapped: 77643776 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448536576 unmapped: 77643776 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 ms_handle_reset con 0x55f20a8e4000 session 0x55f203185e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448544768 unmapped: 77635584 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448544768 unmapped: 77635584 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4811596 data_alloc: 234881024 data_used: 31440896
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448544768 unmapped: 77635584 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19fa73000/0x0/0x1bfc00000, data 0x47b9b22/0x49ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448544768 unmapped: 77635584 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448544768 unmapped: 77635584 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19fa73000/0x0/0x1bfc00000, data 0x47b9b22/0x49ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448552960 unmapped: 77627392 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19fa73000/0x0/0x1bfc00000, data 0x47b9b22/0x49ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448552960 unmapped: 77627392 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f20164e800 session 0x55f20335d4a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4811596 data_alloc: 234881024 data_used: 31440896
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f20164e800 session 0x55f202e93c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f200941800 session 0x55f202736b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f200ab0800 session 0x55f202f33860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.338602066s of 12.479089737s, submitted: 37
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f204764800 session 0x55f202f665a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f20268d000 session 0x55f202d40d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f20268d000 session 0x55f202f321e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f200941800 session 0x55f203172f00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f200ab0800 session 0x55f202ef3e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448561152 unmapped: 77619200 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19fa74000/0x0/0x1bfc00000, data 0x47b9b22/0x49ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448561152 unmapped: 77619200 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f20164e800 session 0x55f2036b2000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448585728 unmapped: 77594624 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448585728 unmapped: 77594624 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448585728 unmapped: 77594624 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4815940 data_alloc: 234881024 data_used: 31444992
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448593920 unmapped: 77586432 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19fa73000/0x0/0x1bfc00000, data 0x47b9b45/0x49bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448593920 unmapped: 77586432 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448593920 unmapped: 77586432 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f206feac00 session 0x55f202f32780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f206feac00 session 0x55f2025dba40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448593920 unmapped: 77586432 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448593920 unmapped: 77586432 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4850500 data_alloc: 234881024 data_used: 33796096
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f200941800 session 0x55f201693860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f200ab0800 session 0x55f203693a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448749568 unmapped: 77430784 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.652714729s of 10.741259575s, submitted: 22
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448757760 unmapped: 77422592 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19fa4f000/0x0/0x1bfc00000, data 0x47ddb45/0x49df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448733184 unmapped: 77447168 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454172672 unmapped: 72007680 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454172672 unmapped: 72007680 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4941498 data_alloc: 251658240 data_used: 46231552
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19fa4f000/0x0/0x1bfc00000, data 0x47ddb45/0x49df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454180864 unmapped: 71999488 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454778880 unmapped: 71401472 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454828032 unmapped: 71352320 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454828032 unmapped: 71352320 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19f50e000/0x0/0x1bfc00000, data 0x4d1db45/0x4f1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454828032 unmapped: 71352320 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4994396 data_alloc: 251658240 data_used: 46354432
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454828032 unmapped: 71352320 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454828032 unmapped: 71352320 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454828032 unmapped: 71352320 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.024875641s of 12.330525398s, submitted: 31
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455442432 unmapped: 70737920 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455540736 unmapped: 70639616 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5014001 data_alloc: 251658240 data_used: 48156672
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19f459000/0x0/0x1bfc00000, data 0x4dd3b45/0x4fd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455540736 unmapped: 70639616 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455540736 unmapped: 70639616 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455540736 unmapped: 70639616 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f204764800 session 0x55f202ef3e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f201271c00 session 0x55f202753e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455557120 unmapped: 70623232 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f200941800 session 0x55f2036b25a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455573504 unmapped: 70606848 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4885187 data_alloc: 234881024 data_used: 42782720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19ffb0000/0x0/0x1bfc00000, data 0x427cb22/0x447d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455581696 unmapped: 70598656 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455581696 unmapped: 70598656 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19ffb0000/0x0/0x1bfc00000, data 0x427cb22/0x447d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455581696 unmapped: 70598656 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455581696 unmapped: 70598656 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19ffb0000/0x0/0x1bfc00000, data 0x427cb22/0x447d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455581696 unmapped: 70598656 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4885187 data_alloc: 234881024 data_used: 42782720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.179315567s of 11.731339455s, submitted: 66
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455581696 unmapped: 70598656 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455688192 unmapped: 70492160 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455688192 unmapped: 70492160 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455696384 unmapped: 70483968 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455696384 unmapped: 70483968 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4889619 data_alloc: 234881024 data_used: 42999808
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 heartbeat osd_stat(store_statfs(0x19ffb1000/0x0/0x1bfc00000, data 0x427cb22/0x447d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 ms_handle_reset con 0x55f200ab0800 session 0x55f20276b4a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455704576 unmapped: 70475776 heap: 526180352 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 388 handle_osd_map epochs [389,389], i have 389, src has [1,389]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 389 ms_handle_reset con 0x55f204764800 session 0x55f20109ba40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 389 ms_handle_reset con 0x55f20a487000 session 0x55f202757680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 389 ms_handle_reset con 0x55f206feac00 session 0x55f2036ca780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 389 heartbeat osd_stat(store_statfs(0x19ffad000/0x0/0x1bfc00000, data 0x427e77b/0x4480000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 389 ms_handle_reset con 0x55f200941800 session 0x55f202756f00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459005952 unmapped: 88186880 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457465856 unmapped: 89726976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 390 ms_handle_reset con 0x55f200ab0800 session 0x55f2025b9c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458530816 unmapped: 88662016 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 390 heartbeat osd_stat(store_statfs(0x19cf62000/0x0/0x1bfc00000, data 0x72c6438/0x74ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 391 ms_handle_reset con 0x55f204764800 session 0x55f2027574a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 391 heartbeat osd_stat(store_statfs(0x19cf62000/0x0/0x1bfc00000, data 0x72c6438/0x74ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458588160 unmapped: 88604672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5257327 data_alloc: 251658240 data_used: 46071808
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458588160 unmapped: 88604672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 391 ms_handle_reset con 0x55f20a487000 session 0x55f2036b21e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 391 ms_handle_reset con 0x55f214020c00 session 0x55f200868f00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 391 ms_handle_reset con 0x55f200941800 session 0x55f2036b3a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 391 heartbeat osd_stat(store_statfs(0x19cf5f000/0x0/0x1bfc00000, data 0x72c810f/0x74ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458596352 unmapped: 88596480 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.594075203s of 12.305402756s, submitted: 49
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 88555520 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 392 heartbeat osd_stat(store_statfs(0x19cf5d000/0x0/0x1bfc00000, data 0x72c9d66/0x74cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 392 ms_handle_reset con 0x55f200ab0800 session 0x55f20335d2c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 88547328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 88547328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4931344 data_alloc: 251658240 data_used: 46071808
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 392 heartbeat osd_stat(store_statfs(0x19ffa4000/0x0/0x1bfc00000, data 0x4283d66/0x4489000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 392 heartbeat osd_stat(store_statfs(0x19ffa4000/0x0/0x1bfc00000, data 0x4283d66/0x4489000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 88547328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 88547328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 88547328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458661888 unmapped: 88530944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 393 ms_handle_reset con 0x55f20164e800 session 0x55f201725e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 393 ms_handle_reset con 0x55f20268d000 session 0x55f2025b9e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461045760 unmapped: 86147072 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4959254 data_alloc: 251658240 data_used: 46071808
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 393 ms_handle_reset con 0x55f204764800 session 0x55f203384000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462110720 unmapped: 85082112 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 393 heartbeat osd_stat(store_statfs(0x19fe4f000/0x0/0x1bfc00000, data 0x41ab8dd/0x43b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462151680 unmapped: 85041152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.917167664s of 10.137438774s, submitted: 84
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 393 ms_handle_reset con 0x55f204764800 session 0x55f2031730e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459866112 unmapped: 87326720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459866112 unmapped: 87326720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459866112 unmapped: 87326720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4918989 data_alloc: 251658240 data_used: 45215744
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459874304 unmapped: 87318528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f200941800 session 0x55f203185680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448585728 unmapped: 98607104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 394 heartbeat osd_stat(store_statfs(0x1a11ca000/0x0/0x1bfc00000, data 0x2c4a537/0x2e52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448585728 unmapped: 98607104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448585728 unmapped: 98607104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448585728 unmapped: 98607104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4642101 data_alloc: 234881024 data_used: 24956928
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f200ab0800 session 0x55f20121fc20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f20164e800 session 0x55f202750960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f20268d000 session 0x55f2037505a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f200941800 session 0x55f202f66d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f200ab0800 session 0x55f202f66b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f20164e800 session 0x55f200b430e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f204764800 session 0x55f2015bda40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f20a487000 session 0x55f202f33860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f200941800 session 0x55f2036923c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448610304 unmapped: 98582528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 394 heartbeat osd_stat(store_statfs(0x1a0a24000/0x0/0x1bfc00000, data 0x33f1547/0x35fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447373312 unmapped: 99819520 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 394 heartbeat osd_stat(store_statfs(0x1a0a24000/0x0/0x1bfc00000, data 0x33f1547/0x35fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.741806030s of 10.135712624s, submitted: 84
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f203710c00 session 0x55f2031734a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 394 ms_handle_reset con 0x55f20a487800 session 0x55f2025da000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447373312 unmapped: 99819520 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f200ab0800 session 0x55f202751860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441516032 unmapped: 105676800 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a1857000/0x0/0x1bfc00000, data 0x25be066/0x27c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441516032 unmapped: 105676800 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4511305 data_alloc: 218103808 data_used: 12959744
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441516032 unmapped: 105676800 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441516032 unmapped: 105676800 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441516032 unmapped: 105676800 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f20164e800 session 0x55f2027574a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441516032 unmapped: 105676800 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f200941800 session 0x55f2025b9c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a1857000/0x0/0x1bfc00000, data 0x25be066/0x27c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a1857000/0x0/0x1bfc00000, data 0x25be066/0x27c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441516032 unmapped: 105676800 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4511305 data_alloc: 218103808 data_used: 12959744
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f200ab0800 session 0x55f202756f00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f203710c00 session 0x55f2036ca780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441819136 unmapped: 105373696 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441819136 unmapped: 105373696 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441827328 unmapped: 105365504 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441827328 unmapped: 105365504 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a1833000/0x0/0x1bfc00000, data 0x25e2076/0x27eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441827328 unmapped: 105365504 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4564431 data_alloc: 218103808 data_used: 19857408
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441827328 unmapped: 105365504 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441827328 unmapped: 105365504 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 441827328 unmapped: 105365504 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f20809ec00 session 0x55f201695680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f202d2dc00 session 0x55f20335d860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f200941800 session 0x55f202f67c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f200ab0800 session 0x55f203172b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.575592995s of 15.742922783s, submitted: 70
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f202d2dc00 session 0x55f201695a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f203710c00 session 0x55f202736d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f20809ec00 session 0x55f20271b680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f200941800 session 0x55f202f33c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f200ab0800 session 0x55f200869860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442146816 unmapped: 105046016 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442146816 unmapped: 105046016 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4615387 data_alloc: 218103808 data_used: 19857408
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a1259000/0x0/0x1bfc00000, data 0x2bbc076/0x2dc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442146816 unmapped: 105046016 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442146816 unmapped: 105046016 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442146816 unmapped: 105046016 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a1259000/0x0/0x1bfc00000, data 0x2bbc076/0x2dc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 442146816 unmapped: 105046016 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f202d2dc00 session 0x55f203184780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444301312 unmapped: 102891520 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f203710c00 session 0x55f202f32960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4648189 data_alloc: 218103808 data_used: 20058112
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a1239000/0x0/0x1bfc00000, data 0x2bdc076/0x2de5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f20809ec00 session 0x55f2016950e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f200941800 session 0x55f20274fe00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444301312 unmapped: 102891520 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 444391424 unmapped: 102801408 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445415424 unmapped: 101777408 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4709328 data_alloc: 234881024 data_used: 24592384
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a0e74000/0x0/0x1bfc00000, data 0x2f9e0a9/0x31a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.254899025s of 13.528295517s, submitted: 61
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4706656 data_alloc: 234881024 data_used: 24596480
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x1a0e72000/0x0/0x1bfc00000, data 0x2fa10a9/0x31ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445448192 unmapped: 101744640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447430656 unmapped: 99762176 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449904640 unmapped: 97288192 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449970176 unmapped: 97222656 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4775348 data_alloc: 234881024 data_used: 25903104
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x19f4b2000/0x0/0x1bfc00000, data 0x37c10a9/0x39cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449978368 unmapped: 97214464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449978368 unmapped: 97214464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 ms_handle_reset con 0x55f203710c00 session 0x55f1ffce6780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 heartbeat osd_stat(store_statfs(0x19f4ae000/0x0/0x1bfc00000, data 0x37c30a9/0x39ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449978368 unmapped: 97214464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449978368 unmapped: 97214464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.387397766s of 12.696340561s, submitted: 78
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 396 ms_handle_reset con 0x55f20809ec00 session 0x55f2036b21e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449978368 unmapped: 97214464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4780218 data_alloc: 234881024 data_used: 25911296
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 396 heartbeat osd_stat(store_statfs(0x19f4aa000/0x0/0x1bfc00000, data 0x37c4d02/0x39d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449978368 unmapped: 97214464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 396 ms_handle_reset con 0x55f20288f800 session 0x55f20171c960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449978368 unmapped: 97214464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 397 ms_handle_reset con 0x55f20274d800 session 0x55f2036ca000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 397 ms_handle_reset con 0x55f200941800 session 0x55f202737c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 397 ms_handle_reset con 0x55f20274d800 session 0x55f202d41a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 475037696 unmapped: 72155136 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 397 ms_handle_reset con 0x55f20288f800 session 0x55f2031841e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 397 heartbeat osd_stat(store_statfs(0x19e01d000/0x0/0x1bfc00000, data 0x4c5395b/0x4e61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452313088 unmapped: 94879744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 398 ms_handle_reset con 0x55f202ed1800 session 0x55f2015bda40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452321280 unmapped: 94871552 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4964365 data_alloc: 234881024 data_used: 31469568
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 399 ms_handle_reset con 0x55f202402400 session 0x55f2025db680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452329472 unmapped: 94863360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452329472 unmapped: 94863360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 399 ms_handle_reset con 0x55f200941800 session 0x55f200095680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 399 ms_handle_reset con 0x55f20274d800 session 0x55f2017243c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 399 ms_handle_reset con 0x55f20288f800 session 0x55f202739c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 399 heartbeat osd_stat(store_statfs(0x19e017000/0x0/0x1bfc00000, data 0x4c5727d/0x4e67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449495040 unmapped: 97697792 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 399 heartbeat osd_stat(store_statfs(0x19e017000/0x0/0x1bfc00000, data 0x4c5727d/0x4e67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449495040 unmapped: 97697792 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 399 ms_handle_reset con 0x55f202c5b400 session 0x55f2025b8960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 399 ms_handle_reset con 0x55f202ed1800 session 0x55f2033850e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 399 ms_handle_reset con 0x55f202ed1800 session 0x55f2036ca960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 399 heartbeat osd_stat(store_statfs(0x19e017000/0x0/0x1bfc00000, data 0x4c5727d/0x4e67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 97673216 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4957963 data_alloc: 234881024 data_used: 31469568
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 97673216 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 97673216 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 97673216 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.282118797s of 13.623093605s, submitted: 66
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 97673216 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19e013000/0x0/0x1bfc00000, data 0x4c58dbc/0x4e6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f200941800 session 0x55f202f66b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f20288f800 session 0x55f202ef3c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f20274d800 session 0x55f203184b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f202c5b400 session 0x55f2033845a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449536000 unmapped: 97656832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f202c5b400 session 0x55f203751a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963611 data_alloc: 234881024 data_used: 31477760
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f200941800 session 0x55f20335c000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f20274d800 session 0x55f202e93a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f203775000 session 0x55f202739c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f20e437000 session 0x55f2025db680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f200941800 session 0x55f2031841e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f20274d800 session 0x55f2036b21e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19d78e000/0x0/0x1bfc00000, data 0x54dddbc/0x56ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5034672 data_alloc: 234881024 data_used: 31477760
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19d78e000/0x0/0x1bfc00000, data 0x54dddbc/0x56ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.655982971s of 11.790694237s, submitted: 39
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5033616 data_alloc: 234881024 data_used: 31473664
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f202c5b400 session 0x55f202f33c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f203775000 session 0x55f20271b680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19d78f000/0x0/0x1bfc00000, data 0x54dddbc/0x56ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448299008 unmapped: 98893824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5034168 data_alloc: 234881024 data_used: 31539200
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451461120 unmapped: 95731712 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19d78f000/0x0/0x1bfc00000, data 0x54dddbc/0x56ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5150808 data_alloc: 251658240 data_used: 45867008
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19d78f000/0x0/0x1bfc00000, data 0x54dddbc/0x56ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19d78f000/0x0/0x1bfc00000, data 0x54dddbc/0x56ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5150808 data_alloc: 251658240 data_used: 45867008
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19d78f000/0x0/0x1bfc00000, data 0x54dddbc/0x56ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452050944 unmapped: 95141888 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.328330994s of 17.342882156s, submitted: 4
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455442432 unmapped: 91750400 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455737344 unmapped: 91455488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451870720 unmapped: 95322112 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5272717 data_alloc: 251658240 data_used: 48615424
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451870720 unmapped: 95322112 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19cbcd000/0x0/0x1bfc00000, data 0x609fdbc/0x62b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19cbcd000/0x0/0x1bfc00000, data 0x609fdbc/0x62b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451870720 unmapped: 95322112 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451870720 unmapped: 95322112 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19cbcd000/0x0/0x1bfc00000, data 0x609fdbc/0x62b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452935680 unmapped: 94257152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f203711400 session 0x55f201695a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f20288f400 session 0x55f2008685a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f203711400 session 0x55f2025b9860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449036288 unmapped: 98156544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5089835 data_alloc: 234881024 data_used: 39350272
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449036288 unmapped: 98156544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449036288 unmapped: 98156544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19da81000/0x0/0x1bfc00000, data 0x51ebdbc/0x53fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449036288 unmapped: 98156544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449036288 unmapped: 98156544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.088488579s of 12.624095917s, submitted: 96
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5090363 data_alloc: 234881024 data_used: 39350272
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19da81000/0x0/0x1bfc00000, data 0x51ebdbc/0x53fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5090363 data_alloc: 234881024 data_used: 39350272
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f200ab0800 session 0x55f2016941e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f202d2dc00 session 0x55f2036b23c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450093056 unmapped: 97099776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19da81000/0x0/0x1bfc00000, data 0x51ebdbc/0x53fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450093056 unmapped: 97099776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450093056 unmapped: 97099776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19da81000/0x0/0x1bfc00000, data 0x51ebdbc/0x53fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f200941800 session 0x55f2031723c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 96935936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4935110 data_alloc: 234881024 data_used: 32653312
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 96935936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.516612053s of 12.381522179s, submitted: 29
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 96935936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 96935936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19e87c000/0x0/0x1bfc00000, data 0x43f1d89/0x4601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 96935936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 96935936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4935110 data_alloc: 234881024 data_used: 32653312
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 96935936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19e87c000/0x0/0x1bfc00000, data 0x43f1d89/0x4601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 96935936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450281472 unmapped: 96911360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450281472 unmapped: 96911360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450281472 unmapped: 96911360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4944442 data_alloc: 234881024 data_used: 33980416
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19e877000/0x0/0x1bfc00000, data 0x43f6d89/0x4606000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450281472 unmapped: 96911360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450281472 unmapped: 96911360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450281472 unmapped: 96911360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19e877000/0x0/0x1bfc00000, data 0x43f6d89/0x4606000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450281472 unmapped: 96911360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f200ab0800 session 0x55f2031725a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.455163956s of 12.743838310s, submitted: 4
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4944266 data_alloc: 234881024 data_used: 33980416
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f2010c3c00 session 0x55f202750f00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f205c03000 session 0x55f20335d860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f20288f400 session 0x55f20335dc20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19e878000/0x0/0x1bfc00000, data 0x43f6d89/0x4606000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4897430 data_alloc: 234881024 data_used: 33878016
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19ee10000/0x0/0x1bfc00000, data 0x3e5ed89/0x406e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19ee10000/0x0/0x1bfc00000, data 0x3e5ed89/0x406e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f202d2dc00 session 0x55f2028e9a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4897430 data_alloc: 234881024 data_used: 33878016
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.547045708s of 10.706357002s, submitted: 32
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 ms_handle_reset con 0x55f200ab0800 session 0x55f202ef2d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 heartbeat osd_stat(store_statfs(0x19ee10000/0x0/0x1bfc00000, data 0x3e5ed89/0x406e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450297856 unmapped: 96894976 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 401 ms_handle_reset con 0x55f2010c3c00 session 0x55f2036b2000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449830912 unmapped: 97361920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 401 ms_handle_reset con 0x55f20288f400 session 0x55f2027523c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449830912 unmapped: 97361920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 401 heartbeat osd_stat(store_statfs(0x1a0299000/0x0/0x1bfc00000, data 0x29d3a36/0x2be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 401 ms_handle_reset con 0x55f205c03000 session 0x55f2017243c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449830912 unmapped: 97361920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4676517 data_alloc: 218103808 data_used: 22364160
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a0296000/0x0/0x1bfc00000, data 0x29d56ff/0x2be7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449830912 unmapped: 97361920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449830912 unmapped: 97361920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a0296000/0x0/0x1bfc00000, data 0x29d56ff/0x2be7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.1 total, 600.0 interval
Cumulative writes: 63K writes, 249K keys, 63K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.04 MB/s
Cumulative WAL: 63K writes, 23K syncs, 2.71 writes per sync, written: 0.24 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4106 writes, 15K keys, 4106 commit groups, 1.0 writes per commit group, ingest: 15.38 MB, 0.03 MB/s
Interval WAL: 4106 writes, 1694 syncs, 2.42 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449830912 unmapped: 97361920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 402 heartbeat osd_stat(store_statfs(0x1a0296000/0x0/0x1bfc00000, data 0x29d56ff/0x2be7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 402 handle_osd_map epochs [403,403], i have 403, src has [1,403]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449839104 unmapped: 97353728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 403 ms_handle_reset con 0x55f20a487800 session 0x55f2036b25a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 403 ms_handle_reset con 0x55f204764800 session 0x55f203693a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 403 heartbeat osd_stat(store_statfs(0x1a0292000/0x0/0x1bfc00000, data 0x29d725a/0x2bea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449855488 unmapped: 97337344 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4679267 data_alloc: 218103808 data_used: 22364160
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.596619606s of 10.162866592s, submitted: 95
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443441152 unmapped: 103751680 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 403 ms_handle_reset con 0x55f200ab0800 session 0x55f200b43860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443441152 unmapped: 103751680 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443441152 unmapped: 103751680 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e44000/0x0/0x1bfc00000, data 0x1e26d79/0x2039000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4526142 data_alloc: 218103808 data_used: 12759040
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e44000/0x0/0x1bfc00000, data 0x1e26d79/0x2039000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f2010c3c00 session 0x55f2036b2000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4524382 data_alloc: 218103808 data_used: 12754944
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f20288f400 session 0x55f2028e9a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e44000/0x0/0x1bfc00000, data 0x1e26d79/0x2039000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e46000/0x0/0x1bfc00000, data 0x1e26d6a/0x2038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4524382 data_alloc: 218103808 data_used: 12754944
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e46000/0x0/0x1bfc00000, data 0x1e26d6a/0x2038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443449344 unmapped: 103743488 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443457536 unmapped: 103735296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4524382 data_alloc: 218103808 data_used: 12754944
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e46000/0x0/0x1bfc00000, data 0x1e26d6a/0x2038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443457536 unmapped: 103735296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443457536 unmapped: 103735296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e46000/0x0/0x1bfc00000, data 0x1e26d6a/0x2038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443457536 unmapped: 103735296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443457536 unmapped: 103735296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e46000/0x0/0x1bfc00000, data 0x1e26d6a/0x2038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443457536 unmapped: 103735296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4524382 data_alloc: 218103808 data_used: 12754944
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443457536 unmapped: 103735296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443457536 unmapped: 103735296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443465728 unmapped: 103727104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443465728 unmapped: 103727104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e46000/0x0/0x1bfc00000, data 0x1e26d6a/0x2038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443465728 unmapped: 103727104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4524382 data_alloc: 218103808 data_used: 12754944
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443465728 unmapped: 103727104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443465728 unmapped: 103727104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443465728 unmapped: 103727104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443473920 unmapped: 103718912 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443473920 unmapped: 103718912 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4524382 data_alloc: 218103808 data_used: 12754944
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e46000/0x0/0x1bfc00000, data 0x1e26d6a/0x2038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443473920 unmapped: 103718912 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.465164185s of 35.908718109s, submitted: 41
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0e46000/0x0/0x1bfc00000, data 0x1e26d6a/0x2038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f205c03000 session 0x55f2008685a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f205c03000 session 0x55f2031841e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f200ab0800 session 0x55f2025db680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f2010c3c00 session 0x55f202739c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f20288f400 session 0x55f20335c000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443482112 unmapped: 103710720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443482112 unmapped: 103710720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443482112 unmapped: 103710720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443482112 unmapped: 103710720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4533409 data_alloc: 218103808 data_used: 12754944
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443482112 unmapped: 103710720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443482112 unmapped: 103710720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443482112 unmapped: 103710720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443482112 unmapped: 103710720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443490304 unmapped: 103702528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4533409 data_alloc: 218103808 data_used: 12754944
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443490304 unmapped: 103702528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443490304 unmapped: 103702528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443490304 unmapped: 103702528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443490304 unmapped: 103702528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443490304 unmapped: 103702528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4533569 data_alloc: 218103808 data_used: 12759040
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443490304 unmapped: 103702528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443498496 unmapped: 103694336 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443498496 unmapped: 103694336 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443498496 unmapped: 103694336 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f204764800 session 0x55f203184b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443498496 unmapped: 103694336 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4533569 data_alloc: 218103808 data_used: 12759040
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f204764800 session 0x55f202ef3c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443498496 unmapped: 103694336 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f200ab0800 session 0x55f2036ca960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443498496 unmapped: 103694336 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f2010c3c00 session 0x55f2033850e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443498496 unmapped: 103694336 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443498496 unmapped: 103694336 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443506688 unmapped: 103686144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4537569 data_alloc: 218103808 data_used: 13287424
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443506688 unmapped: 103686144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443506688 unmapped: 103686144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443506688 unmapped: 103686144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443514880 unmapped: 103677952 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443514880 unmapped: 103677952 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4537569 data_alloc: 218103808 data_used: 13287424
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443514880 unmapped: 103677952 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443514880 unmapped: 103677952 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443523072 unmapped: 103669760 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443523072 unmapped: 103669760 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0dc3000/0x0/0x1bfc00000, data 0x1ea8dcc/0x20bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 443523072 unmapped: 103669760 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4537569 data_alloc: 218103808 data_used: 13287424
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.371913910s of 34.493495941s, submitted: 22
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445251584 unmapped: 101941248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445251584 unmapped: 101941248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445120512 unmapped: 102072320 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445120512 unmapped: 102072320 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445120512 unmapped: 102072320 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600855 data_alloc: 218103808 data_used: 13512704
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a05f5000/0x0/0x1bfc00000, data 0x2676dcc/0x2889000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445120512 unmapped: 102072320 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445128704 unmapped: 102064128 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445128704 unmapped: 102064128 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445128704 unmapped: 102064128 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445128704 unmapped: 102064128 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600839 data_alloc: 218103808 data_used: 13516800
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445136896 unmapped: 102055936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a05f2000/0x0/0x1bfc00000, data 0x2679dcc/0x288c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445136896 unmapped: 102055936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445136896 unmapped: 102055936 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445145088 unmapped: 102047744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445145088 unmapped: 102047744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600839 data_alloc: 218103808 data_used: 13516800
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a05f2000/0x0/0x1bfc00000, data 0x2679dcc/0x288c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445145088 unmapped: 102047744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445145088 unmapped: 102047744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445145088 unmapped: 102047744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445145088 unmapped: 102047744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a05f2000/0x0/0x1bfc00000, data 0x2679dcc/0x288c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445153280 unmapped: 102039552 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600839 data_alloc: 218103808 data_used: 13516800
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a05f2000/0x0/0x1bfc00000, data 0x2679dcc/0x288c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a05f2000/0x0/0x1bfc00000, data 0x2679dcc/0x288c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [5])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445153280 unmapped: 102039552 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445161472 unmapped: 102031360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 445161472 unmapped: 102031360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f203711400 session 0x55f2025b9e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448839680 unmapped: 98353152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448839680 unmapped: 98353152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4614119 data_alloc: 234881024 data_used: 17383424
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448839680 unmapped: 98353152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.575277328s of 25.811300278s, submitted: 74
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f20274d800 session 0x55f2017234a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a05f1000/0x0/0x1bfc00000, data 0x2679e2e/0x288d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448839680 unmapped: 98353152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448839680 unmapped: 98353152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448839680 unmapped: 98353152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448847872 unmapped: 98344960 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4615781 data_alloc: 234881024 data_used: 17383424
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f200ab0800 session 0x55f2027514a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448847872 unmapped: 98344960 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450240512 unmapped: 96952320 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f2010c3c00 session 0x55f201722d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 ms_handle_reset con 0x55f203711400 session 0x55f2016b2000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0373000/0x0/0x1bfc00000, data 0x28f7e2e/0x2b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450215936 unmapped: 96976896 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450215936 unmapped: 96976896 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450215936 unmapped: 96976896 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4640989 data_alloc: 234881024 data_used: 17383424
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450215936 unmapped: 96976896 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450215936 unmapped: 96976896 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 heartbeat osd_stat(store_statfs(0x1a0373000/0x0/0x1bfc00000, data 0x28f7e2e/0x2b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.533443451s of 11.683992386s, submitted: 45
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450191360 unmapped: 97001472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 405 handle_osd_map epochs [406,406], i have 406, src has [1,406]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 406 heartbeat osd_stat(store_statfs(0x1a036e000/0x0/0x1bfc00000, data 0x28f9a8f/0x2b0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [0,0,1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 406 ms_handle_reset con 0x55f204764800 session 0x55f20171cf00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448929792 unmapped: 98263040 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448946176 unmapped: 98246656 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4653679 data_alloc: 234881024 data_used: 17399808
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 407 ms_handle_reset con 0x55f203767800 session 0x55f2015bd2c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 407 heartbeat osd_stat(store_statfs(0x1a0253000/0x0/0x1bfc00000, data 0x2a113bc/0x2c29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449724416 unmapped: 97468416 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f203775000 session 0x55f202ef2960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f202c5b400 session 0x55f2036b2780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449806336 unmapped: 97386496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 408 heartbeat osd_stat(store_statfs(0x19fc7b000/0x0/0x1bfc00000, data 0x2fea031/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449847296 unmapped: 97345536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f200ab0800 session 0x55f202739e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f2010c3c00 session 0x55f202f32780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f203711400 session 0x55f203384960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f200ab0800 session 0x55f203384780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f2010c3c00 session 0x55f202ef3680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450109440 unmapped: 97083392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f202c5b400 session 0x55f200094f00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450076672 unmapped: 97116160 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4753373 data_alloc: 234881024 data_used: 17399808
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f204764800 session 0x55f200b125a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449306624 unmapped: 97886208 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 408 ms_handle_reset con 0x55f20ef09c00 session 0x55f2025b8000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 408 heartbeat osd_stat(store_statfs(0x19f900000/0x0/0x1bfc00000, data 0x33630a3/0x357e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 408 handle_osd_map epochs [409,409], i have 409, src has [1,409]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449306624 unmapped: 97886208 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 409 ms_handle_reset con 0x55f205c03c00 session 0x55f2037503c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 409 ms_handle_reset con 0x55f200ab0800 session 0x55f20121e3c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 449298432 unmapped: 97894400 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f203775000 session 0x55f202f674a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 99057664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f8f8000/0x0/0x1bfc00000, data 0x3366857/0x3584000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 99057664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4761017 data_alloc: 234881024 data_used: 17408000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f8f8000/0x0/0x1bfc00000, data 0x3366857/0x3584000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f8f8000/0x0/0x1bfc00000, data 0x3366857/0x3584000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 99057664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f8f8000/0x0/0x1bfc00000, data 0x3366857/0x3584000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f8f8000/0x0/0x1bfc00000, data 0x3366857/0x3584000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 448135168 unmapped: 99057664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.020772934s of 14.548928261s, submitted: 349
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f2010c3c00 session 0x55f202e923c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446914560 unmapped: 100278272 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f8d5000/0x0/0x1bfc00000, data 0x338a87a/0x35a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 446914560 unmapped: 100278272 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 99590144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4788342 data_alloc: 234881024 data_used: 20750336
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 99590144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 99590144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f8d5000/0x0/0x1bfc00000, data 0x338a87a/0x35a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 99590144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 99590144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 99590144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4788342 data_alloc: 234881024 data_used: 20750336
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 99590144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447602688 unmapped: 99590144 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f8d4000/0x0/0x1bfc00000, data 0x338b87a/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447610880 unmapped: 99581952 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 447610880 unmapped: 99581952 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f203767400 session 0x55f200b0e000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.310371399s of 12.341070175s, submitted: 7
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453107712 unmapped: 94085120 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4894434 data_alloc: 234881024 data_used: 20783104
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f203767400 session 0x55f20171d680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19eb8e000/0x0/0x1bfc00000, data 0x40d187a/0x42f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f200ab0800 session 0x55f201692d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452583424 unmapped: 94609408 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f2010c3c00 session 0x55f203185680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19eae3000/0x0/0x1bfc00000, data 0x41798ad/0x439a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453656576 unmapped: 93536256 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453656576 unmapped: 93536256 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4913878 data_alloc: 234881024 data_used: 21114880
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453656576 unmapped: 93536256 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19eae3000/0x0/0x1bfc00000, data 0x41798ad/0x439a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4915646 data_alloc: 234881024 data_used: 21114880
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452608000 unmapped: 94584832 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19eabf000/0x0/0x1bfc00000, data 0x419e8ad/0x43bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.402293205s of 13.857833862s, submitted: 140
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452616192 unmapped: 94576640 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f202c5b400 session 0x55f2036cb680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f204764800 session 0x55f20051e3c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452673536 unmapped: 94519296 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4770860 data_alloc: 234881024 data_used: 17530880
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f200ab0800 session 0x55f2025dba40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 94511104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 94511104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fa17000/0x0/0x1bfc00000, data 0x3246828/0x3465000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 94511104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 94511104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 94511104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4767962 data_alloc: 234881024 data_used: 17416192
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 94511104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 94511104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452681728 unmapped: 94511104 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fa17000/0x0/0x1bfc00000, data 0x3246828/0x3465000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452689920 unmapped: 94502912 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452689920 unmapped: 94502912 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4767962 data_alloc: 234881024 data_used: 17416192
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fa17000/0x0/0x1bfc00000, data 0x3246828/0x3465000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452689920 unmapped: 94502912 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.871665955s of 13.112590790s, submitted: 70
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f203775000 session 0x55f201694b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f205c03c00 session 0x55f202ef3c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452689920 unmapped: 94502912 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f2010c3c00 session 0x55f201694780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452698112 unmapped: 94494720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452698112 unmapped: 94494720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f60a000/0x0/0x1bfc00000, data 0x32467f5/0x3463000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452698112 unmapped: 94494720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4764617 data_alloc: 234881024 data_used: 17412096
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19f60a000/0x0/0x1bfc00000, data 0x32467f5/0x3463000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f20288f400 session 0x55f203173e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f205c03000 session 0x55f2033852c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452698112 unmapped: 94494720 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 ms_handle_reset con 0x55f20288f400 session 0x55f2037514a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452706304 unmapped: 94486528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452706304 unmapped: 94486528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452730880 unmapped: 94461952 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 heartbeat osd_stat(store_statfs(0x19fe62000/0x0/0x1bfc00000, data 0x29ef793/0x2c0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452730880 unmapped: 94461952 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4694709 data_alloc: 234881024 data_used: 16662528
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452747264 unmapped: 94445568 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.015648842s of 10.203917503s, submitted: 52
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 411 ms_handle_reset con 0x55f200ab0800 session 0x55f20335c1e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 411 ms_handle_reset con 0x55f2010c3c00 session 0x55f200892f00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452763648 unmapped: 94429184 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452763648 unmapped: 94429184 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452763648 unmapped: 94429184 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 412 ms_handle_reset con 0x55f203775000 session 0x55f201724f00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 412 heartbeat osd_stat(store_statfs(0x1a079f000/0x0/0x1bfc00000, data 0x20b10f9/0x22ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453836800 unmapped: 93356032 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627855 data_alloc: 234881024 data_used: 16666624
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 413 ms_handle_reset con 0x55f205c03c00 session 0x55f2036cbc20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 413 ms_handle_reset con 0x55f203775000 session 0x55f203750000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453844992 unmapped: 93347840 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453844992 unmapped: 93347840 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453844992 unmapped: 93347840 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453844992 unmapped: 93347840 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0807000/0x0/0x1bfc00000, data 0x1e36d60/0x2054000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453853184 unmapped: 93339648 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4610289 data_alloc: 234881024 data_used: 16666624
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453853184 unmapped: 93339648 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0807000/0x0/0x1bfc00000, data 0x1e36d60/0x2054000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453853184 unmapped: 93339648 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.563073158s of 10.798902512s, submitted: 72
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454926336 unmapped: 92266496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454926336 unmapped: 92266496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454926336 unmapped: 92266496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612399 data_alloc: 234881024 data_used: 16666624
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454934528 unmapped: 92258304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454934528 unmapped: 92258304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454934528 unmapped: 92258304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454934528 unmapped: 92258304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454934528 unmapped: 92258304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612399 data_alloc: 234881024 data_used: 16666624
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454950912 unmapped: 92241920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454950912 unmapped: 92241920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454950912 unmapped: 92241920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454950912 unmapped: 92241920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454959104 unmapped: 92233728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612399 data_alloc: 234881024 data_used: 16666624
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454959104 unmapped: 92233728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454959104 unmapped: 92233728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454959104 unmapped: 92233728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454959104 unmapped: 92233728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454959104 unmapped: 92233728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612399 data_alloc: 234881024 data_used: 16666624
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454959104 unmapped: 92233728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454959104 unmapped: 92233728 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454967296 unmapped: 92225536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454967296 unmapped: 92225536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454967296 unmapped: 92225536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612399 data_alloc: 234881024 data_used: 16666624
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454967296 unmapped: 92225536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454967296 unmapped: 92225536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454967296 unmapped: 92225536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454975488 unmapped: 92217344 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454975488 unmapped: 92217344 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612399 data_alloc: 234881024 data_used: 16666624
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454983680 unmapped: 92209152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454983680 unmapped: 92209152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454983680 unmapped: 92209152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454983680 unmapped: 92209152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454983680 unmapped: 92209152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612399 data_alloc: 234881024 data_used: 16666624
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454983680 unmapped: 92209152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454983680 unmapped: 92209152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454983680 unmapped: 92209152 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455000064 unmapped: 92192768 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455000064 unmapped: 92192768 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612399 data_alloc: 234881024 data_used: 16666624
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455000064 unmapped: 92192768 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455000064 unmapped: 92192768 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455008256 unmapped: 92184576 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455008256 unmapped: 92184576 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.162109375s of 42.173114777s, submitted: 22
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f202ef3860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f200b130e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f2016b2b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f20121f4a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f203750d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455385088 unmapped: 91807744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4660095 data_alloc: 234881024 data_used: 16666624
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455385088 unmapped: 91807744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a043b000/0x0/0x1bfc00000, data 0x24148d7/0x2633000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455385088 unmapped: 91807744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455385088 unmapped: 91807744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455385088 unmapped: 91807744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455385088 unmapped: 91807744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4660255 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f2025dbc20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203775000 session 0x55f200b0f2c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455385088 unmapped: 91807744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f205c03c00 session 0x55f20171c960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f200869860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455385088 unmapped: 91807744 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a043b000/0x0/0x1bfc00000, data 0x24148d7/0x2633000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455401472 unmapped: 91791360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a043b000/0x0/0x1bfc00000, data 0x24148d7/0x2633000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455401472 unmapped: 91791360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455401472 unmapped: 91791360 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4665626 data_alloc: 234881024 data_used: 17285120
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456237056 unmapped: 90955776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a043b000/0x0/0x1bfc00000, data 0x24148d7/0x2633000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456237056 unmapped: 90955776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456237056 unmapped: 90955776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a043b000/0x0/0x1bfc00000, data 0x24148d7/0x2633000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456237056 unmapped: 90955776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f200b13860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.005203247s of 15.191963196s, submitted: 17
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f202751860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456237056 unmapped: 90955776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4617078 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203775000 session 0x55f202f32000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a17000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4616487 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a17000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a17000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4616487 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a17000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456253440 unmapped: 90939392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456261632 unmapped: 90931200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456261632 unmapped: 90931200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a17000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456261632 unmapped: 90931200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456261632 unmapped: 90931200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4616487 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456261632 unmapped: 90931200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456261632 unmapped: 90931200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456261632 unmapped: 90931200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a17000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456261632 unmapped: 90931200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456269824 unmapped: 90923008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4616487 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456269824 unmapped: 90923008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456269824 unmapped: 90923008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a17000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456269824 unmapped: 90923008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456269824 unmapped: 90923008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a17000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456269824 unmapped: 90923008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4616487 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456269824 unmapped: 90923008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456278016 unmapped: 90914816 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f205c03000 session 0x55f20121e960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f205c03000 session 0x55f20335c5a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f20335c780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f203384000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.268827438s of 28.641456604s, submitted: 27
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456278016 unmapped: 90914816 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f2017243c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203775000 session 0x55f201722000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203775000 session 0x55f2015bd2c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456515584 unmapped: 90677248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f202ef21e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f20276b860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0263000/0x0/0x1bfc00000, data 0x25eb8e7/0x280b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456515584 unmapped: 90677248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4684809 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0263000/0x0/0x1bfc00000, data 0x25eb8e7/0x280b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456515584 unmapped: 90677248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0263000/0x0/0x1bfc00000, data 0x25eb8e7/0x280b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456515584 unmapped: 90677248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456515584 unmapped: 90677248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0263000/0x0/0x1bfc00000, data 0x25eb8e7/0x280b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456515584 unmapped: 90677248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456515584 unmapped: 90677248 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4684809 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0263000/0x0/0x1bfc00000, data 0x25eb8e7/0x280b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456523776 unmapped: 90669056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456523776 unmapped: 90669056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0263000/0x0/0x1bfc00000, data 0x25eb8e7/0x280b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456523776 unmapped: 90669056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456523776 unmapped: 90669056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.077899933s of 11.461033821s, submitted: 18
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f202751860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456523776 unmapped: 90669056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4684878 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456523776 unmapped: 90669056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456523776 unmapped: 90669056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457359360 unmapped: 89833472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0262000/0x0/0x1bfc00000, data 0x25eb90a/0x280c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457359360 unmapped: 89833472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457359360 unmapped: 89833472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4733038 data_alloc: 234881024 data_used: 23408640
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457359360 unmapped: 89833472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0262000/0x0/0x1bfc00000, data 0x25eb90a/0x280c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457359360 unmapped: 89833472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0262000/0x0/0x1bfc00000, data 0x25eb90a/0x280c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457359360 unmapped: 89833472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f205c03000 session 0x55f20171c960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f2025dbc20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457359360 unmapped: 89833472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.829244614s of 10.177327156s, submitted: 4
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457359360 unmapped: 89833472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4732112 data_alloc: 234881024 data_used: 23408640
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451477504 unmapped: 95715328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388fa/0x2058000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451477504 unmapped: 95715328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f205c03000 session 0x55f202f332c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451477504 unmapped: 95715328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451477504 unmapped: 95715328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451485696 unmapped: 95707136 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624342 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451485696 unmapped: 95707136 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624342 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624342 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451493888 unmapped: 95698944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624342 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451502080 unmapped: 95690752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624342 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 95682560 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 95682560 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 95682560 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 95682560 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 95682560 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624342 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 95682560 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451510272 unmapped: 95682560 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451518464 unmapped: 95674368 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451518464 unmapped: 95674368 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451518464 unmapped: 95674368 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624342 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451518464 unmapped: 95674368 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451518464 unmapped: 95674368 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451518464 unmapped: 95674368 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451518464 unmapped: 95674368 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451526656 unmapped: 95666176 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624342 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451526656 unmapped: 95666176 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451534848 unmapped: 95657984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451534848 unmapped: 95657984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0a16000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f2037514a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f200b0f2c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f2008692c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f2031723c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451534848 unmapped: 95657984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.333183289s of 44.744827271s, submitted: 24
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f20335cb40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f2031721e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f202e92b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4705227 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f205c03000 session 0x55f202ef2000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f201693860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0091000/0x0/0x1bfc00000, data 0x27bd8e7/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4705003 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f202ef3c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f202f32780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0091000/0x0/0x1bfc00000, data 0x27bd8e7/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f201695e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203775000 session 0x55f200b13860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451764224 unmapped: 95428608 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451313664 unmapped: 95879168 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4769618 data_alloc: 234881024 data_used: 24059904
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a10cf000/0x0/0x1bfc00000, data 0x27bd91a/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a10cf000/0x0/0x1bfc00000, data 0x27bd91a/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a10cf000/0x0/0x1bfc00000, data 0x27bd91a/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4769618 data_alloc: 234881024 data_used: 24059904
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452558848 unmapped: 94633984 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a10cf000/0x0/0x1bfc00000, data 0x27bd91a/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 452567040 unmapped: 94625792 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.283365250s of 20.507976532s, submitted: 32
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4801728 data_alloc: 234881024 data_used: 24309760
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0d9c000/0x0/0x1bfc00000, data 0x2ae891a/0x2d0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454991872 unmapped: 92200960 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455090176 unmapped: 92102656 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455090176 unmapped: 92102656 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0d38000/0x0/0x1bfc00000, data 0x2b3d91a/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455090176 unmapped: 92102656 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455090176 unmapped: 92102656 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4814422 data_alloc: 234881024 data_used: 24125440
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455090176 unmapped: 92102656 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0d38000/0x0/0x1bfc00000, data 0x2b3d91a/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455090176 unmapped: 92102656 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0d38000/0x0/0x1bfc00000, data 0x2b3d91a/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455098368 unmapped: 92094464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455098368 unmapped: 92094464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455098368 unmapped: 92094464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0d38000/0x0/0x1bfc00000, data 0x2b3d91a/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4814438 data_alloc: 234881024 data_used: 24125440
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455098368 unmapped: 92094464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0d38000/0x0/0x1bfc00000, data 0x2b3d91a/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455098368 unmapped: 92094464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455098368 unmapped: 92094464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455098368 unmapped: 92094464 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.434202194s of 14.602894783s, submitted: 54
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203775000 session 0x55f2028e8000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454426624 unmapped: 92766208 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f202756960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f2027510e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4638623 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388fa/0x2058000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4638623 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450060288 unmapped: 97132544 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4638623 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4638623 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450068480 unmapped: 97124352 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4638623 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450076672 unmapped: 97116160 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450076672 unmapped: 97116160 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450076672 unmapped: 97116160 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450076672 unmapped: 97116160 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450076672 unmapped: 97116160 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4638623 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a56000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.156669617s of 30.342002869s, submitted: 46
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4714460 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f200869860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f200b0e1e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f202d40d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f20126bc20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f203184f00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a11a2000/0x0/0x1bfc00000, data 0x26ec939/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a11a2000/0x0/0x1bfc00000, data 0x26ec939/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4714476 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f203184960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203775000 session 0x55f2025b81e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f2016945a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a11a2000/0x0/0x1bfc00000, data 0x26ec939/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f203384960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450396160 unmapped: 96796672 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450150400 unmapped: 97042432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4768236 data_alloc: 234881024 data_used: 24272896
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a11a2000/0x0/0x1bfc00000, data 0x26ec939/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a11a2000/0x0/0x1bfc00000, data 0x26ec939/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4768236 data_alloc: 234881024 data_used: 24272896
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a11a2000/0x0/0x1bfc00000, data 0x26ec939/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 450084864 unmapped: 97107968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.087032318s of 20.319116592s, submitted: 41
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4820192 data_alloc: 234881024 data_used: 24297472
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453926912 unmapped: 93265920 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454107136 unmapped: 93085696 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a09d3000/0x0/0x1bfc00000, data 0x2ebb939/0x30db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 454344704 unmapped: 92848128 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0378000/0x0/0x1bfc00000, data 0x3515939/0x3735000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203767400 session 0x55f2008694a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f202753a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20bcd7800 session 0x55f202f32b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455450624 unmapped: 91742208 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f2037501e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f203385c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455450624 unmapped: 91742208 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f202f33c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203767400 session 0x55f203693860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4891608 data_alloc: 234881024 data_used: 25812992
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455450624 unmapped: 91742208 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20bcd7800 session 0x55f2015bcf00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f2036ca3c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455467008 unmapped: 91725824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a036f000/0x0/0x1bfc00000, data 0x351e939/0x373e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455467008 unmapped: 91725824 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455573504 unmapped: 91619328 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a036f000/0x0/0x1bfc00000, data 0x351e95c/0x373f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4931021 data_alloc: 251658240 data_used: 31932416
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a034f000/0x0/0x1bfc00000, data 0x353e95c/0x375f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4931021 data_alloc: 251658240 data_used: 31932416
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a034f000/0x0/0x1bfc00000, data 0x353e95c/0x375f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a034f000/0x0/0x1bfc00000, data 0x353e95c/0x375f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 91414528 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.620735168s of 19.012517929s, submitted: 116
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457244672 unmapped: 89948160 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x19fb5a000/0x0/0x1bfc00000, data 0x3d3395c/0x3f54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4999435 data_alloc: 251658240 data_used: 32567296
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457252864 unmapped: 89939968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x19fb5a000/0x0/0x1bfc00000, data 0x3d3395c/0x3f54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457269248 unmapped: 89923584 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457269248 unmapped: 89923584 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457269248 unmapped: 89923584 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x19fb49000/0x0/0x1bfc00000, data 0x3d4495c/0x3f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457269248 unmapped: 89923584 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5011871 data_alloc: 251658240 data_used: 32710656
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457269248 unmapped: 89923584 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457269248 unmapped: 89923584 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x19fb49000/0x0/0x1bfc00000, data 0x3d4495c/0x3f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457269248 unmapped: 89923584 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 89915392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x19fb49000/0x0/0x1bfc00000, data 0x3d4495c/0x3f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 89915392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x19fb49000/0x0/0x1bfc00000, data 0x3d4495c/0x3f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5012831 data_alloc: 251658240 data_used: 32780288
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 89915392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 89915392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 89915392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.378579140s of 13.722282410s, submitted: 41
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x19fb49000/0x0/0x1bfc00000, data 0x3d4495c/0x3f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 89915392 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f2036cb4a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f2036cb0e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457285632 unmapped: 89907200 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4852779 data_alloc: 234881024 data_used: 25817088
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457293824 unmapped: 89899008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203767400 session 0x55f202f66780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457293824 unmapped: 89899008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457293824 unmapped: 89899008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a091a000/0x0/0x1bfc00000, data 0x2f73939/0x3193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457293824 unmapped: 89899008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a091a000/0x0/0x1bfc00000, data 0x2f73939/0x3193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 89890816 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20288f400 session 0x55f202d40d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f200869860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4851771 data_alloc: 234881024 data_used: 25817088
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 89890816 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f2027510e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4660897 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4660897 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457318400 unmapped: 89874432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457326592 unmapped: 89866240 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457326592 unmapped: 89866240 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457326592 unmapped: 89866240 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457326592 unmapped: 89866240 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4660897 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457334784 unmapped: 89858048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457334784 unmapped: 89858048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457334784 unmapped: 89858048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457334784 unmapped: 89858048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457334784 unmapped: 89858048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4660897 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457334784 unmapped: 89858048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457334784 unmapped: 89858048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457342976 unmapped: 89849856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457342976 unmapped: 89849856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457342976 unmapped: 89849856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4660897 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457342976 unmapped: 89849856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457342976 unmapped: 89849856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457342976 unmapped: 89849856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457342976 unmapped: 89849856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457342976 unmapped: 89849856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4660897 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1375000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f20335dc20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f2008690e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203767400 session 0x55f20055d2c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f2037512c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.199741364s of 42.029754639s, submitted: 62
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f2027514a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f20171d0e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f201722d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f203766000 session 0x55f2016930e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f202e92f00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1047000/0x0/0x1bfc00000, data 0x28478e7/0x2a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4746486 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4746486 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1047000/0x0/0x1bfc00000, data 0x28478e7/0x2a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f202ef3e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457351168 unmapped: 89841664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1047000/0x0/0x1bfc00000, data 0x28478e7/0x2a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1047000/0x0/0x1bfc00000, data 0x28478e7/0x2a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4811898 data_alloc: 234881024 data_used: 25923584
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1047000/0x0/0x1bfc00000, data 0x28478e7/0x2a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1047000/0x0/0x1bfc00000, data 0x28478e7/0x2a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4811898 data_alloc: 234881024 data_used: 25923584
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.717470169s of 18.874071121s, submitted: 30
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1047000/0x0/0x1bfc00000, data 0x28478e7/0x2a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89153536 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460619776 unmapped: 86573056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4879482 data_alloc: 234881024 data_used: 25952256
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462323712 unmapped: 84869120 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0aed000/0x0/0x1bfc00000, data 0x2d998e7/0x2fb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462790656 unmapped: 84402176 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0884000/0x0/0x1bfc00000, data 0x2ffc8e7/0x321c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461733888 unmapped: 85458944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0857000/0x0/0x1bfc00000, data 0x302f8e7/0x324f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461733888 unmapped: 85458944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461733888 unmapped: 85458944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4882830 data_alloc: 234881024 data_used: 26583040
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461733888 unmapped: 85458944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0857000/0x0/0x1bfc00000, data 0x302f8e7/0x324f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461733888 unmapped: 85458944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0857000/0x0/0x1bfc00000, data 0x302f8e7/0x324f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461733888 unmapped: 85458944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461733888 unmapped: 85458944 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461742080 unmapped: 85450752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4878302 data_alloc: 234881024 data_used: 26583040
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461742080 unmapped: 85450752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461742080 unmapped: 85450752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a085c000/0x0/0x1bfc00000, data 0x30328e7/0x3252000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461742080 unmapped: 85450752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.940708160s of 15.051081657s, submitted: 93
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f2025dbc20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f202f332c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461742080 unmapped: 85450752 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2038df400 session 0x55f2031734a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456294400 unmapped: 90898432 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456310784 unmapped: 90882048 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456318976 unmapped: 90873856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456318976 unmapped: 90873856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456318976 unmapped: 90873856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456318976 unmapped: 90873856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456318976 unmapped: 90873856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456318976 unmapped: 90873856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456318976 unmapped: 90873856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456318976 unmapped: 90873856 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456327168 unmapped: 90865664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456327168 unmapped: 90865664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456327168 unmapped: 90865664 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456335360 unmapped: 90857472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4672364 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456335360 unmapped: 90857472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456335360 unmapped: 90857472 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2038df400 session 0x55f20051e3c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f20276a1e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f2036b2960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f200ae25a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 64.340438843s of 64.467369080s, submitted: 35
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464347136 unmapped: 82845696 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a1a57000/0x0/0x1bfc00000, data 0x1e388d7/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,4,13])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f202736b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f200ae2d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f202739c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f20171c960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f203185680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451108864 unmapped: 96083968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451108864 unmapped: 96083968 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4769250 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451117056 unmapped: 96075776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 66K writes, 258K keys, 66K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.04 MB/s#012Cumulative WAL: 66K writes, 24K syncs, 2.70 writes per sync, written: 0.25 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2663 writes, 9401 keys, 2663 commit groups, 1.0 writes per commit group, ingest: 8.75 MB, 0.01 MB/s#012Interval WAL: 2663 writes, 1081 syncs, 2.46 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451117056 unmapped: 96075776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0e6f000/0x0/0x1bfc00000, data 0x2a1f8e7/0x2c3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451117056 unmapped: 96075776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451117056 unmapped: 96075776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2038df400 session 0x55f20276b860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451117056 unmapped: 96075776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f2008692c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0e6f000/0x0/0x1bfc00000, data 0x2a1f8e7/0x2c3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2010c3c00 session 0x55f2027523c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4769250 data_alloc: 234881024 data_used: 16670720
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451117056 unmapped: 96075776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202742400 session 0x55f20126ba40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451117056 unmapped: 96075776 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 451125248 unmapped: 96067584 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4851574 data_alloc: 234881024 data_used: 27856896
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0e4b000/0x0/0x1bfc00000, data 0x2a438e7/0x2c63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f205d21400 session 0x55f202f33680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: mgrc ms_handle_reset ms_handle_reset con 0x55f2026ba400
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3465938080
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3465938080,v1:192.168.122.100:6801/3465938080]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: mgrc handle_mgr_configure stats_period=5
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f20809c400 session 0x55f2031843c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202d2d000 session 0x55f200094960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4851574 data_alloc: 234881024 data_used: 27856896
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0e4b000/0x0/0x1bfc00000, data 0x2a438e7/0x2c63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 453197824 unmapped: 93995008 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.212234497s of 21.114337921s, submitted: 20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455450624 unmapped: 91742208 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 457195520 unmapped: 89997312 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0884000/0x0/0x1bfc00000, data 0x30028e7/0x3222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,11])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4914978 data_alloc: 234881024 data_used: 29282304
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458571776 unmapped: 88621056 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459014144 unmapped: 88178688 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a07ab000/0x0/0x1bfc00000, data 0x30e08e7/0x3300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,5])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88170496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88170496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0793000/0x0/0x1bfc00000, data 0x30f28e7/0x3312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88170496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4926090 data_alloc: 234881024 data_used: 29192192
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88170496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88170496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0793000/0x0/0x1bfc00000, data 0x30f28e7/0x3312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88170496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.808871269s of 10.092613220s, submitted: 109
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88170496 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459030528 unmapped: 88162304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4925154 data_alloc: 234881024 data_used: 29188096
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459030528 unmapped: 88162304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459030528 unmapped: 88162304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f202c5b400 session 0x55f2036932c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f2026bbc00 session 0x55f202f66000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0797000/0x0/0x1bfc00000, data 0x30f78e7/0x3317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459030528 unmapped: 88162304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 ms_handle_reset con 0x55f200ab0800 session 0x55f202e92f00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459030528 unmapped: 88162304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 414 handle_osd_map epochs [415,415], i have 415, src has [1,415]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 415 ms_handle_reset con 0x55f2010c3c00 session 0x55f20271a1e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459030528 unmapped: 88162304 heap: 547192832 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 415 ms_handle_reset con 0x55f202c5b400 session 0x55f202750d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 415 ms_handle_reset con 0x55f202742400 session 0x55f202e92d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5094863 data_alloc: 251658240 data_used: 33964032
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 488497152 unmapped: 66535424 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 415 heartbeat osd_stat(store_statfs(0x19f884000/0x0/0x1bfc00000, data 0x4007550/0x4229000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,2])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 471924736 unmapped: 83107840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 415 ms_handle_reset con 0x55f202d2d000 session 0x55f202738960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466010112 unmapped: 89022464 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 415 ms_handle_reset con 0x55f202d2d000 session 0x55f2036cad20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.584544182s of 10.057537079s, submitted: 73
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466034688 unmapped: 88997888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 416 heartbeat osd_stat(store_statfs(0x19eff7000/0x0/0x1bfc00000, data 0x48931fd/0x4ab6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466042880 unmapped: 88989696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 416 handle_osd_map epochs [416,417], i have 417, src has [1,417]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 416 handle_osd_map epochs [417,417], i have 417, src has [1,417]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5139189 data_alloc: 251658240 data_used: 36315136
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463626240 unmapped: 91406336 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 417 ms_handle_reset con 0x55f200ab0800 session 0x55f20055d860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463642624 unmapped: 91389952 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 417 ms_handle_reset con 0x55f2010c3c00 session 0x55f203172d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 417 ms_handle_reset con 0x55f202742400 session 0x55f20335de00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 417 ms_handle_reset con 0x55f202c5b400 session 0x55f202e925a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463650816 unmapped: 91381760 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463650816 unmapped: 91381760 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463650816 unmapped: 91381760 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 417 heartbeat osd_stat(store_statfs(0x19eff4000/0x0/0x1bfc00000, data 0x4894e72/0x4ab9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5138965 data_alloc: 251658240 data_used: 36315136
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463650816 unmapped: 91381760 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 417 ms_handle_reset con 0x55f200ab0800 session 0x55f2025da1e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a02b5000/0x0/0x1bfc00000, data 0x35d5e62/0x37f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 417 handle_osd_map epochs [418,418], i have 418, src has [1,418]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4885991 data_alloc: 234881024 data_used: 16699392
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a02b1000/0x0/0x1bfc00000, data 0x35d79a1/0x37fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a02b1000/0x0/0x1bfc00000, data 0x35d79a1/0x37fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4886151 data_alloc: 234881024 data_used: 16703488
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460062720 unmapped: 94969856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f2010c3c00 session 0x55f2036cad20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f202742400 session 0x55f202738960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f202d2d000 session 0x55f202e92d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f203774c00 session 0x55f20271a1e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.356239319s of 18.161268234s, submitted: 86
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460070912 unmapped: 94961664 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f200ab0800 session 0x55f200094960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f203774c00 session 0x55f202f66000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f2010c3c00 session 0x55f20126ba40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f202742400 session 0x55f20171c960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f202d2d000 session 0x55f200ae25a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f200ab0800 session 0x55f202f332c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f2010c3c00 session 0x55f203751680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 481771520 unmapped: 73261056 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f202742400 session 0x55f202750d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f203774c00 session 0x55f2036925a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 481771520 unmapped: 73261056 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f681000/0x0/0x1bfc00000, data 0x42079b1/0x442d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f20809f400 session 0x55f20126a000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 ms_handle_reset con 0x55f200ab0800 session 0x55f2036b2960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 481771520 unmapped: 73261056 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850475 data_alloc: 234881024 data_used: 15556608
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 419 ms_handle_reset con 0x55f203774c00 session 0x55f2033845a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 474718208 unmapped: 80314368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 419 heartbeat osd_stat(store_statfs(0x19f67e000/0x0/0x1bfc00000, data 0x420964e/0x442f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 474718208 unmapped: 80314368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e16000/0x0/0x1bfc00000, data 0x2a7164e/0x2c97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 474718208 unmapped: 80314368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 474718208 unmapped: 80314368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e16000/0x0/0x1bfc00000, data 0x2a7164e/0x2c97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 474726400 unmapped: 80306176 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0e16000/0x0/0x1bfc00000, data 0x2a7164e/0x2c97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4861019 data_alloc: 234881024 data_used: 17170432
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 474726400 unmapped: 80306176 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 474726400 unmapped: 80306176 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.985949516s of 10.376070976s, submitted: 75
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 419 handle_osd_map epochs [420,420], i have 420, src has [1,420]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0e13000/0x0/0x1bfc00000, data 0x2a7318d/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0e13000/0x0/0x1bfc00000, data 0x2a7318d/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4849737 data_alloc: 234881024 data_used: 17190912
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0e14000/0x0/0x1bfc00000, data 0x2a7318d/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4865765 data_alloc: 234881024 data_used: 18698240
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 85704704 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0e14000/0x0/0x1bfc00000, data 0x2a7318d/0x2c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 85688320 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 85630976 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 85630976 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0e0e000/0x0/0x1bfc00000, data 0x2a7918d/0x2ca0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4874267 data_alloc: 234881024 data_used: 19820544
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 85630976 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 85630976 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.212800980s of 14.984619141s, submitted: 21
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2010c3c00 session 0x55f202738780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f202742400 session 0x55f2036b2d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 85630976 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0e0e000/0x0/0x1bfc00000, data 0x2a7918d/0x2ca0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 85630976 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0e0e000/0x0/0x1bfc00000, data 0x2a7918d/0x2ca0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,0,0,0,2])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 85622784 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4728193 data_alloc: 234881024 data_used: 13365248
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463888384 unmapped: 91144192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463888384 unmapped: 91144192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20272d400 session 0x55f200ae2d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1a45000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463896576 unmapped: 91136000 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463896576 unmapped: 91136000 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4728193 data_alloc: 234881024 data_used: 13365248
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1a45000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4728193 data_alloc: 234881024 data_used: 13365248
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1a45000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4728193 data_alloc: 234881024 data_used: 13365248
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463904768 unmapped: 91127808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20272d400 session 0x55f203750000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1a45000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464396288 unmapped: 90636288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1a45000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464396288 unmapped: 90636288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.411104202s of 21.417512894s, submitted: 36
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f207a28800 session 0x55f2028e85a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464396288 unmapped: 90636288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1a45000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464396288 unmapped: 90636288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4731233 data_alloc: 234881024 data_used: 17756160
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464396288 unmapped: 90636288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f205c02c00 session 0x55f20271ba40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cac00 session 0x55f202ef34a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464420864 unmapped: 90611712 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20e436c00 session 0x55f200095a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464420864 unmapped: 90611712 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a10ab000/0x0/0x1bfc00000, data 0x27dc1df/0x2a03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464420864 unmapped: 90611712 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464396288 unmapped: 90636288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4806002 data_alloc: 234881024 data_used: 17764352
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464404480 unmapped: 90628096 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a10ab000/0x0/0x1bfc00000, data 0x27dc1df/0x2a03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464404480 unmapped: 90628096 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464420864 unmapped: 90611712 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a10ab000/0x0/0x1bfc00000, data 0x27dc1df/0x2a03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464420864 unmapped: 90611712 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.411631584s of 11.095273018s, submitted: 97
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464429056 unmapped: 90603520 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4806322 data_alloc: 234881024 data_used: 17772544
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464453632 unmapped: 90578944 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464470016 unmapped: 90562560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cac00 session 0x55f20109a780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20272d400 session 0x55f200ae3860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464527360 unmapped: 90505216 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0c9b000/0x0/0x1bfc00000, data 0x27dc1df/0x2a03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f205c02c00 session 0x55f203693680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f207a28800 session 0x55f2016921e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464838656 unmapped: 90193920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0c76000/0x0/0x1bfc00000, data 0x28001ef/0x2a28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464838656 unmapped: 90193920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4883905 data_alloc: 234881024 data_used: 26583040
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465379328 unmapped: 89653248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465379328 unmapped: 89653248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0c76000/0x0/0x1bfc00000, data 0x28001ef/0x2a28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465379328 unmapped: 89653248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465379328 unmapped: 89653248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465379328 unmapped: 89653248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4883905 data_alloc: 234881024 data_used: 26583040
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465379328 unmapped: 89653248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465379328 unmapped: 89653248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0c76000/0x0/0x1bfc00000, data 0x28001ef/0x2a28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465387520 unmapped: 89645056 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465387520 unmapped: 89645056 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465387520 unmapped: 89645056 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.062004089s of 16.160535812s, submitted: 245
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4888009 data_alloc: 234881024 data_used: 26611712
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 467968000 unmapped: 87064576 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0895000/0x0/0x1bfc00000, data 0x2be11ef/0x2e09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468287488 unmapped: 86745088 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468287488 unmapped: 86745088 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469434368 unmapped: 85598208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03a0000/0x0/0x1bfc00000, data 0x30d61ef/0x32fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469671936 unmapped: 85360640 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0398000/0x0/0x1bfc00000, data 0x30dc1ef/0x3304000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [0,0,0,0,0,9])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969591 data_alloc: 234881024 data_used: 26976256
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469827584 unmapped: 85204992 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469827584 unmapped: 85204992 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469827584 unmapped: 85204992 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469827584 unmapped: 85204992 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0386000/0x0/0x1bfc00000, data 0x30e81ef/0x3310000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469827584 unmapped: 85204992 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0386000/0x0/0x1bfc00000, data 0x30e81ef/0x3310000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969591 data_alloc: 234881024 data_used: 26976256
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469827584 unmapped: 85204992 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0386000/0x0/0x1bfc00000, data 0x30e81ef/0x3310000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.132133484s of 11.083799362s, submitted: 75
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469852160 unmapped: 85180416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469852160 unmapped: 85180416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469852160 unmapped: 85180416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cb400 session 0x55f2017225a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20809e000 session 0x55f2028e9680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469852160 unmapped: 85180416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4965851 data_alloc: 234881024 data_used: 26963968
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469852160 unmapped: 85180416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a038e000/0x0/0x1bfc00000, data 0x30e81ef/0x3310000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469852160 unmapped: 85180416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cb400 session 0x55f2036b23c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4956794 data_alloc: 234881024 data_used: 26853376
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b3000/0x0/0x1bfc00000, data 0x30c41df/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b3000/0x0/0x1bfc00000, data 0x30c41df/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b3000/0x0/0x1bfc00000, data 0x30c41df/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4956794 data_alloc: 234881024 data_used: 26853376
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b3000/0x0/0x1bfc00000, data 0x30c41df/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b3000/0x0/0x1bfc00000, data 0x30c41df/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4956794 data_alloc: 234881024 data_used: 26853376
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cac00 session 0x55f20335c1e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20272d400 session 0x55f2016934a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f205c02c00 session 0x55f203185860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469876736 unmapped: 85155840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.229480743s of 21.423732758s, submitted: 50
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b3000/0x0/0x1bfc00000, data 0x30c41df/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [0,0,0,1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f205c02c00 session 0x55f203692f00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4960484 data_alloc: 234881024 data_used: 26988544
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b1000/0x0/0x1bfc00000, data 0x30c4212/0x32ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4960484 data_alloc: 234881024 data_used: 26988544
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b1000/0x0/0x1bfc00000, data 0x30c4212/0x32ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 85147648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b1000/0x0/0x1bfc00000, data 0x30c4212/0x32ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4961284 data_alloc: 234881024 data_used: 27009024
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.082525253s of 13.132267952s, submitted: 5
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b1000/0x0/0x1bfc00000, data 0x30c4212/0x32ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969372 data_alloc: 234881024 data_used: 27381760
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03b1000/0x0/0x1bfc00000, data 0x30c4212/0x32ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969634 data_alloc: 234881024 data_used: 27381760
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03ac000/0x0/0x1bfc00000, data 0x30c9212/0x32f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03ac000/0x0/0x1bfc00000, data 0x30c9212/0x32f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.205702782s of 12.661606789s, submitted: 20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969886 data_alloc: 234881024 data_used: 27381760
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469893120 unmapped: 85139456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03ab000/0x0/0x1bfc00000, data 0x30ca212/0x32f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469901312 unmapped: 85131264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469901312 unmapped: 85131264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03ab000/0x0/0x1bfc00000, data 0x30ca212/0x32f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469901312 unmapped: 85131264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969886 data_alloc: 234881024 data_used: 27381760
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469901312 unmapped: 85131264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469901312 unmapped: 85131264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03aa000/0x0/0x1bfc00000, data 0x30ca212/0x32f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469950464 unmapped: 85082112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469950464 unmapped: 85082112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469950464 unmapped: 85082112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03aa000/0x0/0x1bfc00000, data 0x30ca212/0x32f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cac00 session 0x55f2036cb860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.574435234s of 11.623891830s, submitted: 9
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cb400 session 0x55f200869a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03aa000/0x0/0x1bfc00000, data 0x30ca212/0x32f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4978358 data_alloc: 251658240 data_used: 29290496
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469950464 unmapped: 85082112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20272d400 session 0x55f202ef3a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03ac000/0x0/0x1bfc00000, data 0x30ca1df/0x32f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469958656 unmapped: 85073920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469958656 unmapped: 85073920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03ac000/0x0/0x1bfc00000, data 0x30ca1df/0x32f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469958656 unmapped: 85073920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20809e000 session 0x55f20121fc20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a03ac000/0x0/0x1bfc00000, data 0x30ca1df/0x32f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464846848 unmapped: 90185728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464863232 unmapped: 90169344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464871424 unmapped: 90161152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 90152960 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 90152960 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464879616 unmapped: 90152960 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464887808 unmapped: 90144768 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464887808 unmapped: 90144768 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464896000 unmapped: 90136576 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464896000 unmapped: 90136576 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 90128384 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 90128384 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 90128384 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 90128384 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 90128384 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 90128384 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 90128384 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464904192 unmapped: 90128384 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464912384 unmapped: 90120192 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 90112000 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 90112000 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 90112000 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 90112000 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 90112000 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464920576 unmapped: 90112000 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464928768 unmapped: 90103808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747213 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464928768 unmapped: 90103808 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464936960 unmapped: 90095616 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 87.651184082s of 88.030509949s, submitted: 40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464936960 unmapped: 90095616 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464936960 unmapped: 90095616 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cac00 session 0x55f203184d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464936960 unmapped: 90095616 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4750184 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464936960 unmapped: 90095616 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464936960 unmapped: 90095616 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 90087424 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464945152 unmapped: 90087424 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464953344 unmapped: 90079232 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4750184 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464953344 unmapped: 90079232 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464953344 unmapped: 90079232 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464961536 unmapped: 90071040 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464961536 unmapped: 90071040 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464961536 unmapped: 90071040 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4750184 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464961536 unmapped: 90071040 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464986112 unmapped: 90046464 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4750184 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4750184 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1634000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4750184 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465018880 unmapped: 90013696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465018880 unmapped: 90013696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465027072 unmapped: 90005504 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465027072 unmapped: 90005504 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465027072 unmapped: 90005504 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.297000885s of 32.376239777s, submitted: 7
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cb400 session 0x55f200ae3860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4752013 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1633000/0x0/0x1bfc00000, data 0x1e431af/0x206b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465043456 unmapped: 89989120 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1633000/0x0/0x1bfc00000, data 0x1e431af/0x206b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465051648 unmapped: 89980928 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465051648 unmapped: 89980928 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465051648 unmapped: 89980928 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465051648 unmapped: 89980928 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4752305 data_alloc: 234881024 data_used: 16138240
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465059840 unmapped: 89972736 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465068032 unmapped: 89964544 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1633000/0x0/0x1bfc00000, data 0x1e431af/0x206b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465068032 unmapped: 89964544 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465068032 unmapped: 89964544 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465076224 unmapped: 89956352 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f20272d400 session 0x55f200095a40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.952750206s of 10.143758774s, submitted: 5
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f205c02c00 session 0x55f202ef34a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1633000/0x0/0x1bfc00000, data 0x1e431af/0x206b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4752261 data_alloc: 234881024 data_used: 16138240
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f207a28800 session 0x55f202738780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465084416 unmapped: 89948160 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465084416 unmapped: 89948160 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465084416 unmapped: 89948160 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1633000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1633000/0x0/0x1bfc00000, data 0x1e4318c/0x206a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465084416 unmapped: 89948160 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465084416 unmapped: 89948160 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4751429 data_alloc: 234881024 data_used: 16138240
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465084416 unmapped: 89948160 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f207a28800 session 0x55f202750d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 89931776 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465117184 unmapped: 89915392 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cac00 session 0x55f200ae25a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465117184 unmapped: 89915392 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1635000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465117184 unmapped: 89915392 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4749782 data_alloc: 234881024 data_used: 16134144
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465117184 unmapped: 89915392 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465125376 unmapped: 89907200 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465125376 unmapped: 89907200 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1635000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465125376 unmapped: 89907200 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464633856 unmapped: 90398720 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1635000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2026cb400 session 0x55f20051e3c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4749622 data_alloc: 234881024 data_used: 16195584
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 91594752 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463437824 unmapped: 91594752 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.894033432s of 17.817049026s, submitted: 36
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463446016 unmapped: 91586560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f205c02c00 session 0x55f200b0f2c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463454208 unmapped: 91578368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1635000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463454208 unmapped: 91578368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4749622 data_alloc: 234881024 data_used: 16195584
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463454208 unmapped: 91578368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463454208 unmapped: 91578368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1635000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463454208 unmapped: 91578368 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463470592 unmapped: 91561984 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2074c0000 session 0x55f2027523c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463470592 unmapped: 91561984 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a1635000/0x0/0x1bfc00000, data 0x1e4317d/0x2069000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4827502 data_alloc: 234881024 data_used: 16195584
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 91234304 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 ms_handle_reset con 0x55f2074c0000 session 0x55f200b0e5a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 91234304 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 heartbeat osd_stat(store_statfs(0x1a0cc4000/0x0/0x1bfc00000, data 0x27b417d/0x29da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463798272 unmapped: 91234304 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.619276047s of 10.872257233s, submitted: 25
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 91226112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f2026cac00 session 0x55f2000945a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 91226112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4832436 data_alloc: 234881024 data_used: 16203776
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 91226112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e38/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463806464 unmapped: 91226112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 91217920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 91217920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463814656 unmapped: 91217920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4832436 data_alloc: 234881024 data_used: 16203776
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463822848 unmapped: 91209728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463822848 unmapped: 91209728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e38/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 91201536 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e38/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 91201536 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463831040 unmapped: 91201536 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.353907585s of 11.393359184s, submitted: 2
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4833208 data_alloc: 234881024 data_used: 16203776
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f205c02c00 session 0x55f201692d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f2026cb400 session 0x55f2036b3860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463839232 unmapped: 91193344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e48/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463839232 unmapped: 91193344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463839232 unmapped: 91193344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463839232 unmapped: 91193344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 91185152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4833208 data_alloc: 234881024 data_used: 16203776
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 91185152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e48/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 91185152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 91185152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463847424 unmapped: 91185152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f207a28800 session 0x55f20126ba40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463855616 unmapped: 91176960 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4836101 data_alloc: 234881024 data_used: 16306176
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463732736 unmapped: 91299840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465395712 unmapped: 89636864 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e48/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465395712 unmapped: 89636864 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465395712 unmapped: 89636864 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.736437798s of 14.802865982s, submitted: 8
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f207a28800 session 0x55f2036b2780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f2026cac00 session 0x55f200ae21e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465395712 unmapped: 89636864 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e48/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4897569 data_alloc: 234881024 data_used: 24899584
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f2026cb400 session 0x55f2036cb680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4896977 data_alloc: 234881024 data_used: 24899584
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f205c02c00 session 0x55f2025dab40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e48/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cbf000/0x0/0x1bfc00000, data 0x27b5e48/0x29df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f2074c0000 session 0x55f202736b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 heartbeat osd_stat(store_statfs(0x1a0cc0000/0x0/0x1bfc00000, data 0x27b5e38/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4896097 data_alloc: 234881024 data_used: 24895488
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f2026cac00 session 0x55f200094960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.990265846s of 11.232155800s, submitted: 25
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466460672 unmapped: 88571904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 ms_handle_reset con 0x55f2026cb400 session 0x55f200ae2000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a0cbd000/0x0/0x1bfc00000, data 0x27b7a83/0x29e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466534400 unmapped: 88498176 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466534400 unmapped: 88498176 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a0cbd000/0x0/0x1bfc00000, data 0x27b7a83/0x29e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466534400 unmapped: 88498176 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 422 ms_handle_reset con 0x55f205c02c00 session 0x55f2015bd0e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4764813 data_alloc: 234881024 data_used: 16211968
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463716352 unmapped: 91316224 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463716352 unmapped: 91316224 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463716352 unmapped: 91316224 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463716352 unmapped: 91316224 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a162f000/0x0/0x1bfc00000, data 0x1e46a83/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463716352 unmapped: 91316224 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4765133 data_alloc: 234881024 data_used: 16211968
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 422 ms_handle_reset con 0x55f207a28800 session 0x55f2036b2000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 93372416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 422 heartbeat osd_stat(store_statfs(0x1a162f000/0x0/0x1bfc00000, data 0x1e46a83/0x206f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.252751350s of 10.051544189s, submitted: 25
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 93372416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 93372416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 ms_handle_reset con 0x55f200754000 session 0x55f202756780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a162a000/0x0/0x1bfc00000, data 0x1e485d2/0x2073000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 93372416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 93372416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4771455 data_alloc: 234881024 data_used: 16228352
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 ms_handle_reset con 0x55f200754000 session 0x55f20055cf00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 93372416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 93372416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a162a000/0x0/0x1bfc00000, data 0x1e485d2/0x2073000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461660160 unmapped: 93372416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 ms_handle_reset con 0x55f2026cac00 session 0x55f2036cba40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462725120 unmapped: 92307456 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 ms_handle_reset con 0x55f2026cb400 session 0x55f202ef3e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 ms_handle_reset con 0x55f205c02c00 session 0x55f200869860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4872789 data_alloc: 234881024 data_used: 16228352
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4872789 data_alloc: 234881024 data_used: 16228352
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4872789 data_alloc: 234881024 data_used: 16228352
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462733312 unmapped: 92299264 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462741504 unmapped: 92291072 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462749696 unmapped: 92282880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4872789 data_alloc: 234881024 data_used: 16228352
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462749696 unmapped: 92282880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462749696 unmapped: 92282880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462749696 unmapped: 92282880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 ms_handle_reset con 0x55f207a28800 session 0x55f203184b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462749696 unmapped: 92282880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.758771896s of 28.318550110s, submitted: 66
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462749696 unmapped: 92282880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4884121 data_alloc: 234881024 data_used: 17805312
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462946304 unmapped: 92086272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963961 data_alloc: 251658240 data_used: 29081600
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463994880 unmapped: 91037696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x1a09d5000/0x0/0x1bfc00000, data 0x2a9d634/0x2cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963961 data_alloc: 251658240 data_used: 29081600
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464003072 unmapped: 91029504 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.377329826s of 12.535032272s, submitted: 1
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 470155264 unmapped: 84877312 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468131840 unmapped: 86900736 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468131840 unmapped: 86900736 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb8e000/0x0/0x1bfc00000, data 0x3744634/0x3970000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [0,0,0,0,0,1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468131840 unmapped: 86900736 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5065939 data_alloc: 251658240 data_used: 29847552
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb66000/0x0/0x1bfc00000, data 0x3764634/0x3990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5077075 data_alloc: 251658240 data_used: 29913088
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb66000/0x0/0x1bfc00000, data 0x3764634/0x3990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb66000/0x0/0x1bfc00000, data 0x3764634/0x3990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5077075 data_alloc: 251658240 data_used: 29913088
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb66000/0x0/0x1bfc00000, data 0x3764634/0x3990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb66000/0x0/0x1bfc00000, data 0x3764634/0x3990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb66000/0x0/0x1bfc00000, data 0x3764634/0x3990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5077075 data_alloc: 251658240 data_used: 29913088
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb66000/0x0/0x1bfc00000, data 0x3764634/0x3990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 85655552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 85647360 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5077075 data_alloc: 251658240 data_used: 29913088
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 85647360 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb66000/0x0/0x1bfc00000, data 0x3764634/0x3990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 85647360 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 85647360 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.568981171s of 27.033638000s, submitted: 105
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 85622784 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 heartbeat osd_stat(store_statfs(0x19eb6d000/0x0/0x1bfc00000, data 0x3764643/0x3991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 423 handle_osd_map epochs [424,424], i have 424, src has [1,424]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469434368 unmapped: 85598208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 424 ms_handle_reset con 0x55f2026cac00 session 0x55f2027523c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5078242 data_alloc: 251658240 data_used: 29925376
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 424 handle_osd_map epochs [425,425], i have 425, src has [1,425]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 85573632 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 ms_handle_reset con 0x55f2026cb400 session 0x55f20171c960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb65000/0x0/0x1bfc00000, data 0x3767f13/0x3998000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469458944 unmapped: 85573632 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb65000/0x0/0x1bfc00000, data 0x3767f13/0x3998000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469475328 unmapped: 85557248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469475328 unmapped: 85557248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469475328 unmapped: 85557248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5082536 data_alloc: 251658240 data_used: 29937664
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469483520 unmapped: 85549056 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469483520 unmapped: 85549056 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb64000/0x0/0x1bfc00000, data 0x376cf13/0x399a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469491712 unmapped: 85540864 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb64000/0x0/0x1bfc00000, data 0x376cf13/0x399a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469491712 unmapped: 85540864 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469491712 unmapped: 85540864 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5084622 data_alloc: 251658240 data_used: 29937664
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469499904 unmapped: 85532672 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb64000/0x0/0x1bfc00000, data 0x376cf13/0x399a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.265442848s of 13.599176407s, submitted: 20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 85524480 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 ms_handle_reset con 0x55f20268c400 session 0x55f20121e960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 ms_handle_reset con 0x55f205c02c00 session 0x55f200ae3860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 85524480 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 85524480 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 85524480 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5087476 data_alloc: 251658240 data_used: 29941760
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 85524480 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb63000/0x0/0x1bfc00000, data 0x376cf23/0x399b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 85524480 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469508096 unmapped: 85524480 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb63000/0x0/0x1bfc00000, data 0x376cf23/0x399b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb63000/0x0/0x1bfc00000, data 0x376cf23/0x399b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 85516288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 85516288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5087476 data_alloc: 251658240 data_used: 29941760
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469516288 unmapped: 85516288 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.287868500s of 10.076254845s, submitted: 23
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469532672 unmapped: 85499904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 ms_handle_reset con 0x55f208ba6c00 session 0x55f2036cb860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb49000/0x0/0x1bfc00000, data 0x37b3f23/0x39b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [0,0,0,0,0,1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469540864 unmapped: 85491712 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469557248 unmapped: 85475328 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5108020 data_alloc: 251658240 data_used: 30113792
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb49000/0x0/0x1bfc00000, data 0x37b3f23/0x39b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb49000/0x0/0x1bfc00000, data 0x37b3f23/0x39b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5108020 data_alloc: 251658240 data_used: 30113792
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb49000/0x0/0x1bfc00000, data 0x37b3f23/0x39b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb49000/0x0/0x1bfc00000, data 0x37b3f23/0x39b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469590016 unmapped: 85442560 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.508153915s of 13.553033829s, submitted: 2
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5110375 data_alloc: 251658240 data_used: 30294016
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb49000/0x0/0x1bfc00000, data 0x37b3f23/0x39b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'config diff' '{prefix=config diff}'
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 469139456 unmapped: 85893120 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'config show' '{prefix=config show}'
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468336640 unmapped: 86695936 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468754432 unmapped: 86278144 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'log dump' '{prefix=log dump}'
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb02000/0x0/0x1bfc00000, data 0x37faf23/0x39fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'perf dump' '{prefix=perf dump}'
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468836352 unmapped: 86196224 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'perf schema' '{prefix=perf schema}'
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb02000/0x0/0x1bfc00000, data 0x37faf23/0x39fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468402176 unmapped: 86630400 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5120471 data_alloc: 251658240 data_used: 30994432
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468402176 unmapped: 86630400 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468402176 unmapped: 86630400 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468402176 unmapped: 86630400 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468402176 unmapped: 86630400 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb02000/0x0/0x1bfc00000, data 0x37faf23/0x39fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468402176 unmapped: 86630400 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5121591 data_alloc: 251658240 data_used: 31395840
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468402176 unmapped: 86630400 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468410368 unmapped: 86622208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468410368 unmapped: 86622208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468410368 unmapped: 86622208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb02000/0x0/0x1bfc00000, data 0x37faf23/0x39fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468410368 unmapped: 86622208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5121591 data_alloc: 251658240 data_used: 31395840
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468410368 unmapped: 86622208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb02000/0x0/0x1bfc00000, data 0x37faf23/0x39fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468410368 unmapped: 86622208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468410368 unmapped: 86622208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468410368 unmapped: 86622208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb02000/0x0/0x1bfc00000, data 0x37faf23/0x39fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468410368 unmapped: 86622208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5121591 data_alloc: 251658240 data_used: 31395840
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468410368 unmapped: 86622208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468410368 unmapped: 86622208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468410368 unmapped: 86622208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468410368 unmapped: 86622208 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb02000/0x0/0x1bfc00000, data 0x37faf23/0x39fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468418560 unmapped: 86614016 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5121591 data_alloc: 251658240 data_used: 31395840
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468418560 unmapped: 86614016 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468418560 unmapped: 86614016 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 ms_handle_reset con 0x55f20268c400 session 0x55f203185860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468418560 unmapped: 86614016 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.803838730s of 27.999240875s, submitted: 4
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 ms_handle_reset con 0x55f2026cac00 session 0x55f200b12b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468418560 unmapped: 86614016 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb02000/0x0/0x1bfc00000, data 0x37faf23/0x39fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468418560 unmapped: 86614016 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5121175 data_alloc: 251658240 data_used: 31395840
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468418560 unmapped: 86614016 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468418560 unmapped: 86614016 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 ms_handle_reset con 0x55f2026cb400 session 0x55f202e932c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468418560 unmapped: 86614016 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468418560 unmapped: 86614016 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 ms_handle_reset con 0x55f205c02c00 session 0x55f203692f00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468418560 unmapped: 86614016 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eb02000/0x0/0x1bfc00000, data 0x37faf23/0x39fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5121175 data_alloc: 251658240 data_used: 31395840
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468426752 unmapped: 86605824 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468426752 unmapped: 86605824 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 ms_handle_reset con 0x55f202814c00 session 0x55f20126ba40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468426752 unmapped: 86605824 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.370082855s of 10.327819824s, submitted: 30
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 ms_handle_reset con 0x55f20268c400 session 0x55f201725e00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468434944 unmapped: 86597632 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468434944 unmapped: 86597632 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 426 ms_handle_reset con 0x55f2026cac00 session 0x55f203384d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5114711 data_alloc: 251658240 data_used: 31285248
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468443136 unmapped: 86589440 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 426 heartbeat osd_stat(store_statfs(0x19eb61000/0x0/0x1bfc00000, data 0x376abc0/0x399c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 426 ms_handle_reset con 0x55f207a28800 session 0x55f203184780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 426 ms_handle_reset con 0x55f200754000 session 0x55f203185c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468443136 unmapped: 86589440 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468443136 unmapped: 86589440 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 426 ms_handle_reset con 0x55f2026cb400 session 0x55f200892960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468443136 unmapped: 86589440 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 426 heartbeat osd_stat(store_statfs(0x19eb62000/0x0/0x1bfc00000, data 0x376abc0/0x399c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 468443136 unmapped: 86589440 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5113535 data_alloc: 251658240 data_used: 31285248
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 426 ms_handle_reset con 0x55f2026cb400 session 0x55f2016b23c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456409088 unmapped: 98623488 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 426 ms_handle_reset con 0x55f200754000 session 0x55f201722000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 426 heartbeat osd_stat(store_statfs(0x19eb62000/0x0/0x1bfc00000, data 0x376abc0/0x399c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456409088 unmapped: 98623488 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456409088 unmapped: 98623488 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.566922188s of 10.051142693s, submitted: 77
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456417280 unmapped: 98615296 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456417280 unmapped: 98615296 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 427 heartbeat osd_stat(store_statfs(0x1a047c000/0x0/0x1bfc00000, data 0x1e4f68d/0x2080000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4808971 data_alloc: 234881024 data_used: 16269312
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456417280 unmapped: 98615296 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456417280 unmapped: 98615296 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456417280 unmapped: 98615296 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456417280 unmapped: 98615296 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 428 ms_handle_reset con 0x55f20268c400 session 0x55f202e923c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456417280 unmapped: 98615296 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a047d000/0x0/0x1bfc00000, data 0x1e5130d/0x2081000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4809425 data_alloc: 234881024 data_used: 16269312
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456417280 unmapped: 98615296 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a047d000/0x0/0x1bfc00000, data 0x1e5130d/0x2081000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456425472 unmapped: 98607104 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456425472 unmapped: 98607104 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.098086357s of 10.012334824s, submitted: 38
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 428 handle_osd_map epochs [429,429], i have 429, src has [1,429]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456425472 unmapped: 98607104 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456425472 unmapped: 98607104 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4813599 data_alloc: 234881024 data_used: 16277504
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a0479000/0x0/0x1bfc00000, data 0x1e52e4c/0x2084000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456425472 unmapped: 98607104 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456425472 unmapped: 98607104 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a0479000/0x0/0x1bfc00000, data 0x1e52e4c/0x2084000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456425472 unmapped: 98607104 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456425472 unmapped: 98607104 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a0479000/0x0/0x1bfc00000, data 0x1e52e4c/0x2084000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456425472 unmapped: 98607104 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4813599 data_alloc: 234881024 data_used: 16277504
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456425472 unmapped: 98607104 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a0479000/0x0/0x1bfc00000, data 0x1e52e4c/0x2084000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 99254272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 ms_handle_reset con 0x55f2026cac00 session 0x55f202739860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 99254272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a0479000/0x0/0x1bfc00000, data 0x1e52e4c/0x2084000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 99254272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.977624893s of 11.272768974s, submitted: 14
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455778304 unmapped: 99254272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 ms_handle_reset con 0x55f2026ca800 session 0x55f20171c780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4817895 data_alloc: 234881024 data_used: 16277504
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455786496 unmapped: 99246080 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a0479000/0x0/0x1bfc00000, data 0x1e52eae/0x2085000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455786496 unmapped: 99246080 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455786496 unmapped: 99246080 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455786496 unmapped: 99246080 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 ms_handle_reset con 0x55f2026ca800 session 0x55f20121ed20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455819264 unmapped: 99213312 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 ms_handle_reset con 0x55f200754000 session 0x55f202e92d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816453 data_alloc: 234881024 data_used: 16277504
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 455843840 unmapped: 99188736 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a031d000/0x0/0x1bfc00000, data 0x1faee75/0x21e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 ms_handle_reset con 0x55f20268c400 session 0x55f203384b40
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456376320 unmapped: 98656256 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 ms_handle_reset con 0x55f2026cac00 session 0x55f20335c960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456376320 unmapped: 98656256 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456376320 unmapped: 98656256 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f7d3000/0x0/0x1bfc00000, data 0x2af8eae/0x2d2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456376320 unmapped: 98656256 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4917755 data_alloc: 234881024 data_used: 16285696
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456376320 unmapped: 98656256 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456384512 unmapped: 98648064 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456384512 unmapped: 98648064 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f7d3000/0x0/0x1bfc00000, data 0x2af8eae/0x2d2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456384512 unmapped: 98648064 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456384512 unmapped: 98648064 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4917755 data_alloc: 234881024 data_used: 16285696
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456384512 unmapped: 98648064 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f7d3000/0x0/0x1bfc00000, data 0x2af8eae/0x2d2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456392704 unmapped: 98639872 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456392704 unmapped: 98639872 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 68K writes, 267K keys, 68K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.03 MB/s#012Cumulative WAL: 68K writes, 25K syncs, 2.69 writes per sync, written: 0.25 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2606 writes, 8519 keys, 2606 commit groups, 1.0 writes per commit group, ingest: 7.73 MB, 0.01 MB/s#012Interval WAL: 2606 writes, 1074 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456392704 unmapped: 98639872 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456392704 unmapped: 98639872 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4917755 data_alloc: 234881024 data_used: 16285696
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f7d3000/0x0/0x1bfc00000, data 0x2af8eae/0x2d2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456392704 unmapped: 98639872 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456392704 unmapped: 98639872 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456392704 unmapped: 98639872 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456392704 unmapped: 98639872 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 ms_handle_reset con 0x55f2026cb400 session 0x55f2036cbe00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456392704 unmapped: 98639872 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 ms_handle_reset con 0x55f200754000 session 0x55f202752d20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4917755 data_alloc: 234881024 data_used: 16285696
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456409088 unmapped: 98623488 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f7d3000/0x0/0x1bfc00000, data 0x2af8eae/0x2d2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f7d3000/0x0/0x1bfc00000, data 0x2af8eae/0x2d2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456409088 unmapped: 98623488 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 ms_handle_reset con 0x55f20268c400 session 0x55f200b0e960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.332698822s of 27.433200836s, submitted: 77
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 ms_handle_reset con 0x55f2026ca800 session 0x55f203750000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456417280 unmapped: 98615296 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 456351744 unmapped: 98680832 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459563008 unmapped: 95469568 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5002989 data_alloc: 234881024 data_used: 28221440
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459563008 unmapped: 95469568 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f7d2000/0x0/0x1bfc00000, data 0x2af8ebe/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459563008 unmapped: 95469568 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459563008 unmapped: 95469568 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f7d2000/0x0/0x1bfc00000, data 0x2af8ebe/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f7d2000/0x0/0x1bfc00000, data 0x2af8ebe/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459563008 unmapped: 95469568 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459563008 unmapped: 95469568 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5002989 data_alloc: 234881024 data_used: 28221440
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459563008 unmapped: 95469568 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f7d2000/0x0/0x1bfc00000, data 0x2af8ebe/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459563008 unmapped: 95469568 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459571200 unmapped: 95461376 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459571200 unmapped: 95461376 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.522733688s of 12.535221100s, submitted: 2
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 459571200 unmapped: 95461376 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5025799 data_alloc: 234881024 data_used: 28258304
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 460627968 unmapped: 94404608 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f4b6000/0x0/0x1bfc00000, data 0x2e14ebe/0x3048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 461045760 unmapped: 93986816 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 462913536 unmapped: 92119040 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463192064 unmapped: 91840512 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5063937 data_alloc: 234881024 data_used: 28282880
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f169000/0x0/0x1bfc00000, data 0x3160ebe/0x3394000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5063937 data_alloc: 234881024 data_used: 28282880
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.690249443s of 10.945927620s, submitted: 88
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f147000/0x0/0x1bfc00000, data 0x3183ebe/0x33b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f147000/0x0/0x1bfc00000, data 0x3183ebe/0x33b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 464994304 unmapped: 90038272 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 ms_handle_reset con 0x55f2026cac00 session 0x55f202d41680
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5061957 data_alloc: 234881024 data_used: 28295168
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 ms_handle_reset con 0x55f2038dfc00 session 0x55f201693860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f133000/0x0/0x1bfc00000, data 0x3196ebe/0x33ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465010688 unmapped: 90021888 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 ms_handle_reset con 0x55f2038dfc00 session 0x55f2000954a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465018880 unmapped: 90013696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5059952 data_alloc: 234881024 data_used: 28295168
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465018880 unmapped: 90013696 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465027072 unmapped: 90005504 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f135000/0x0/0x1bfc00000, data 0x3196eae/0x33c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465027072 unmapped: 90005504 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465027072 unmapped: 90005504 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f135000/0x0/0x1bfc00000, data 0x3196eae/0x33c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465035264 unmapped: 89997312 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5059952 data_alloc: 234881024 data_used: 28295168
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465035264 unmapped: 89997312 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465035264 unmapped: 89997312 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465035264 unmapped: 89997312 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465035264 unmapped: 89997312 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465035264 unmapped: 89997312 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5059952 data_alloc: 234881024 data_used: 28295168
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f135000/0x0/0x1bfc00000, data 0x3196eae/0x33c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465035264 unmapped: 89997312 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f135000/0x0/0x1bfc00000, data 0x3196eae/0x33c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465035264 unmapped: 89997312 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465043456 unmapped: 89989120 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f135000/0x0/0x1bfc00000, data 0x3196eae/0x33c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465043456 unmapped: 89989120 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f135000/0x0/0x1bfc00000, data 0x3196eae/0x33c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465043456 unmapped: 89989120 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5059952 data_alloc: 234881024 data_used: 28295168
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465043456 unmapped: 89989120 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465043456 unmapped: 89989120 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465043456 unmapped: 89989120 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465043456 unmapped: 89989120 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f135000/0x0/0x1bfc00000, data 0x3196eae/0x33c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465043456 unmapped: 89989120 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5059952 data_alloc: 234881024 data_used: 28295168
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465051648 unmapped: 89980928 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465051648 unmapped: 89980928 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f135000/0x0/0x1bfc00000, data 0x3196eae/0x33c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465059840 unmapped: 89972736 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465059840 unmapped: 89972736 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465059840 unmapped: 89972736 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5059952 data_alloc: 234881024 data_used: 28295168
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f135000/0x0/0x1bfc00000, data 0x3196eae/0x33c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465059840 unmapped: 89972736 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f135000/0x0/0x1bfc00000, data 0x3196eae/0x33c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465059840 unmapped: 89972736 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465068032 unmapped: 89964544 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f135000/0x0/0x1bfc00000, data 0x3196eae/0x33c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465068032 unmapped: 89964544 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.921092987s of 38.980014801s, submitted: 19
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465068032 unmapped: 89964544 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5061781 data_alloc: 234881024 data_used: 28295168
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 ms_handle_reset con 0x55f200754000 session 0x55f200b0e1e0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f134000/0x0/0x1bfc00000, data 0x3196ed1/0x33ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465084416 unmapped: 89948160 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465084416 unmapped: 89948160 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465092608 unmapped: 89939968 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 89931776 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f134000/0x0/0x1bfc00000, data 0x3196ed1/0x33ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 89931776 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5063017 data_alloc: 234881024 data_used: 28385280
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 89931776 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 89931776 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 89931776 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 89931776 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f134000/0x0/0x1bfc00000, data 0x3196ed1/0x33ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 89931776 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5063017 data_alloc: 234881024 data_used: 28385280
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f134000/0x0/0x1bfc00000, data 0x3196ed1/0x33ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 89931776 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 89931776 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465100800 unmapped: 89931776 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.489473343s of 13.902825356s, submitted: 6
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f134000/0x0/0x1bfc00000, data 0x3196ed1/0x33ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 89923584 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465108992 unmapped: 89923584 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5077053 data_alloc: 251658240 data_used: 29573120
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 89882624 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 89882624 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 89882624 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 89882624 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f132000/0x0/0x1bfc00000, data 0x3196ed1/0x33ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 89882624 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f132000/0x0/0x1bfc00000, data 0x3196ed1/0x33ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5077053 data_alloc: 251658240 data_used: 29573120
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 89882624 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 89882624 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 89882624 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 89882624 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 89882624 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5077053 data_alloc: 251658240 data_used: 29573120
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f132000/0x0/0x1bfc00000, data 0x3196ed1/0x33ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 89882624 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 89882624 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f132000/0x0/0x1bfc00000, data 0x3196ed1/0x33ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465149952 unmapped: 89882624 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 heartbeat osd_stat(store_statfs(0x19f132000/0x0/0x1bfc00000, data 0x3196ed1/0x33ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.403772354s of 15.253164291s, submitted: 6
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465158144 unmapped: 89874432 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465174528 unmapped: 89858048 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 ms_handle_reset con 0x55f2026cac00 session 0x55f2008925a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5085571 data_alloc: 251658240 data_used: 30171136
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465174528 unmapped: 89858048 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 heartbeat osd_stat(store_statfs(0x19f12f000/0x0/0x1bfc00000, data 0x3198b8c/0x33ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465182720 unmapped: 89849856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 ms_handle_reset con 0x55f203767800 session 0x55f202d405a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 ms_handle_reset con 0x55f20510c000 session 0x55f20121f4a0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 heartbeat osd_stat(store_statfs(0x19f130000/0x0/0x1bfc00000, data 0x3198b8c/0x33ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465182720 unmapped: 89849856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465182720 unmapped: 89849856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465182720 unmapped: 89849856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5086197 data_alloc: 251658240 data_used: 30175232
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 heartbeat osd_stat(store_statfs(0x19f130000/0x0/0x1bfc00000, data 0x3198b8c/0x33ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465182720 unmapped: 89849856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465182720 unmapped: 89849856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465182720 unmapped: 89849856 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 heartbeat osd_stat(store_statfs(0x19f130000/0x0/0x1bfc00000, data 0x3198b8c/0x33ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465190912 unmapped: 89841664 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.858416557s of 11.539748192s, submitted: 17
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465190912 unmapped: 89841664 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5086133 data_alloc: 251658240 data_used: 30179328
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465190912 unmapped: 89841664 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465190912 unmapped: 89841664 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465190912 unmapped: 89841664 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 heartbeat osd_stat(store_statfs(0x19f130000/0x0/0x1bfc00000, data 0x3198b8c/0x33ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465264640 unmapped: 89767936 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465264640 unmapped: 89767936 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 heartbeat osd_stat(store_statfs(0x19f12a000/0x0/0x1bfc00000, data 0x31a2b8c/0x33d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5089411 data_alloc: 251658240 data_used: 30175232
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 heartbeat osd_stat(store_statfs(0x19f12a000/0x0/0x1bfc00000, data 0x31a2b8c/0x33d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [0,0,0,0,0,2,0,0,1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465281024 unmapped: 89751552 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465289216 unmapped: 89743360 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 heartbeat osd_stat(store_statfs(0x19f12a000/0x0/0x1bfc00000, data 0x31a2b8c/0x33d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [0,0,0,0,1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465289216 unmapped: 89743360 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465305600 unmapped: 89726976 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.671881199s of 10.080826759s, submitted: 85
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465346560 unmapped: 89686016 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5107509 data_alloc: 251658240 data_used: 30175232
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465362944 unmapped: 89669632 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 ms_handle_reset con 0x55f200754000 session 0x55f200ae2000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465362944 unmapped: 89669632 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 heartbeat osd_stat(store_statfs(0x19f129000/0x0/0x1bfc00000, data 0x33ceb8c/0x33d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [0,0,0,0,0,1,0,1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465379328 unmapped: 89653248 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465395712 unmapped: 89636864 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465444864 unmapped: 89587712 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5109220 data_alloc: 251658240 data_used: 30183424
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465461248 unmapped: 89571328 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465461248 unmapped: 89571328 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465461248 unmapped: 89571328 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 heartbeat osd_stat(store_statfs(0x19f129000/0x0/0x1bfc00000, data 0x33ceb8c/0x33d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465461248 unmapped: 89571328 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465461248 unmapped: 89571328 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5109220 data_alloc: 251658240 data_used: 30183424
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465469440 unmapped: 89563136 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465469440 unmapped: 89563136 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 heartbeat osd_stat(store_statfs(0x19f129000/0x0/0x1bfc00000, data 0x33ceb8c/0x33d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465469440 unmapped: 89563136 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465469440 unmapped: 89563136 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.364985466s of 14.528287888s, submitted: 161
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 465469440 unmapped: 89563136 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5162623 data_alloc: 251658240 data_used: 31657984
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 467509248 unmapped: 87523328 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 88154112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 88154112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 heartbeat osd_stat(store_statfs(0x19ec5b000/0x0/0x1bfc00000, data 0x389cb8c/0x38a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 88154112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 88154112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5160383 data_alloc: 251658240 data_used: 31719424
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 88154112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 heartbeat osd_stat(store_statfs(0x19ec5b000/0x0/0x1bfc00000, data 0x389cb8c/0x38a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 88154112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 88154112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 88154112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 ms_handle_reset con 0x55f2026cac00 session 0x55f203185860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 ms_handle_reset con 0x55f203767800 session 0x55f202750960
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.838685989s of 10.071201324s, submitted: 7
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463691776 unmapped: 91340800 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 ms_handle_reset con 0x55f2038dfc00 session 0x55f2016b3860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 heartbeat osd_stat(store_statfs(0x19ec5b000/0x0/0x1bfc00000, data 0x389cb8c/0x38a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5167072 data_alloc: 251658240 data_used: 31731712
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463691776 unmapped: 91340800 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463691776 unmapped: 91340800 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 ms_handle_reset con 0x55f204765400 session 0x55f202e932c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463708160 unmapped: 91324416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 ms_handle_reset con 0x55f200754000 session 0x55f203184780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463708160 unmapped: 91324416 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 ms_handle_reset con 0x55f2026cac00 session 0x55f203751c20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463724544 unmapped: 91308032 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5128818 data_alloc: 251658240 data_used: 31649792
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 431 ms_handle_reset con 0x55f203767800 session 0x55f20335dc20
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463724544 unmapped: 91308032 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 431 heartbeat osd_stat(store_statfs(0x19f126000/0x0/0x1bfc00000, data 0x3591b2a/0x33d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463724544 unmapped: 91308032 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 431 heartbeat osd_stat(store_statfs(0x19f128000/0x0/0x1bfc00000, data 0x319f7d7/0x33d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 431 ms_handle_reset con 0x55f20268c400 session 0x55f2015bd2c0
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 431 ms_handle_reset con 0x55f2026ca800 session 0x55f202f66000
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463724544 unmapped: 91308032 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 431 heartbeat osd_stat(store_statfs(0x19f128000/0x0/0x1bfc00000, data 0x319f7d7/0x33d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [0,0,1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463724544 unmapped: 91308032 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.822802544s of 10.120443344s, submitted: 84
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463724544 unmapped: 91308032 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5112504 data_alloc: 251658240 data_used: 31653888
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463724544 unmapped: 91308032 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 431 ms_handle_reset con 0x55f2026ca800 session 0x55f20109be00
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 431 heartbeat osd_stat(store_statfs(0x19f129000/0x0/0x1bfc00000, data 0x319f7b4/0x33d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463724544 unmapped: 91308032 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463732736 unmapped: 91299840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463732736 unmapped: 91299840 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 ms_handle_reset con 0x55f200754000 session 0x55f202ef3860
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463740928 unmapped: 91291648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5115173 data_alloc: 251658240 data_used: 31666176
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463740928 unmapped: 91291648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 463740928 unmapped: 91291648 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x19ed17000/0x0/0x1bfc00000, data 0x27192f3/0x294f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [0,0,0,0,0,2,0,0,3])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 96387072 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x19ed17000/0x0/0x1bfc00000, data 0x27192f3/0x294f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 96387072 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.918573856s of 10.116190910s, submitted: 60
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0060000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 ms_handle_reset con 0x55f20268c400 session 0x55f202738780
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 96387072 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850386 data_alloc: 234881024 data_used: 16310272
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 96387072 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 96387072 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 96387072 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 96387072 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 96387072 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458653696 unmapped: 96378880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458653696 unmapped: 96378880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458653696 unmapped: 96378880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458653696 unmapped: 96378880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458653696 unmapped: 96378880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458653696 unmapped: 96378880 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458661888 unmapped: 96370688 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458661888 unmapped: 96370688 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458661888 unmapped: 96370688 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458661888 unmapped: 96370688 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 96362496 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458678272 unmapped: 96354304 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458678272 unmapped: 96354304 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458678272 unmapped: 96354304 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458678272 unmapped: 96354304 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458678272 unmapped: 96354304 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458678272 unmapped: 96354304 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458678272 unmapped: 96354304 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458678272 unmapped: 96354304 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458678272 unmapped: 96354304 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458678272 unmapped: 96354304 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458686464 unmapped: 96346112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458686464 unmapped: 96346112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458686464 unmapped: 96346112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458686464 unmapped: 96346112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458686464 unmapped: 96346112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458686464 unmapped: 96346112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458686464 unmapped: 96346112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458686464 unmapped: 96346112 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458694656 unmapped: 96337920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458694656 unmapped: 96337920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458694656 unmapped: 96337920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458694656 unmapped: 96337920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458694656 unmapped: 96337920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458694656 unmapped: 96337920 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458702848 unmapped: 96329728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458702848 unmapped: 96329728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458702848 unmapped: 96329728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458702848 unmapped: 96329728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458702848 unmapped: 96329728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458702848 unmapped: 96329728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458702848 unmapped: 96329728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458702848 unmapped: 96329728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458702848 unmapped: 96329728 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458719232 unmapped: 96313344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458719232 unmapped: 96313344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458719232 unmapped: 96313344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458719232 unmapped: 96313344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458719232 unmapped: 96313344 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458727424 unmapped: 96305152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458727424 unmapped: 96305152 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458735616 unmapped: 96296960 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458735616 unmapped: 96296960 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458735616 unmapped: 96296960 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458735616 unmapped: 96296960 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458743808 unmapped: 96288768 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458743808 unmapped: 96288768 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458743808 unmapped: 96288768 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458743808 unmapped: 96288768 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: bluestore.MempoolThread(0x55f1fefebb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4850546 data_alloc: 234881024 data_used: 16314368
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458743808 unmapped: 96288768 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'config diff' '{prefix=config diff}'
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458522624 unmapped: 96509952 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'config show' '{prefix=config show}'
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458547200 unmapped: 96485376 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0061000/0x0/0x1bfc00000, data 0x1e58291/0x208d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: prioritycache tune_memory target: 4294967296 mapped: 458268672 unmapped: 96763904 heap: 555032576 old mem: 2845415833 new mem: 2845415833
Jan 31 04:20:47 np0005603609 ceph-osd[79083]: do_command 'log dump' '{prefix=log dump}'
Jan 31 04:20:47 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 04:20:47 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1707424259' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 04:20:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 04:20:48 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3621906018' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 04:20:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 04:20:48 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1791372070' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 04:20:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:48.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:48 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:48 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:48 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:48.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:48 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 04:20:48 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2293239702' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 04:20:49 np0005603609 nova_compute[221550]: 2026-01-31 09:20:49.238 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:49 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 31 04:20:49 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3710960062' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 04:20:49 np0005603609 nova_compute[221550]: 2026-01-31 09:20:49.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:49 np0005603609 nova_compute[221550]: 2026-01-31 09:20:49.689 221554 DEBUG oslo_concurrency.processutils [None req-bd2c07e4-843a-4507-9f7a-8063f84b952d 7236ad21d7ab4000b7ab6db9df93bca9 f1803bf3df964a3f90dda65daa6f9a53 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:20:49 np0005603609 nova_compute[221550]: 2026-01-31 09:20:49.715 221554 DEBUG oslo_concurrency.processutils [None req-bd2c07e4-843a-4507-9f7a-8063f84b952d 7236ad21d7ab4000b7ab6db9df93bca9 f1803bf3df964a3f90dda65daa6f9a53 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:20:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:20:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:50.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:20:50 np0005603609 nova_compute[221550]: 2026-01-31 09:20:50.659 221554 DEBUG oslo_service.periodic_task [None req-c1eec2b4-7226-4a9f-9f9b-dbf5329b52d1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:20:50 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 31 04:20:50 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2362632188' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 04:20:50 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:50 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:20:50 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:50.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:20:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 31 04:20:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/486569988' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 04:20:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 31 04:20:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1377901725' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 04:20:51 np0005603609 nova_compute[221550]: 2026-01-31 09:20:51.550 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:20:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 31 04:20:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1184234516' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 04:20:51 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 31 04:20:51 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2731074068' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 04:20:51 np0005603609 systemd[1]: Starting Hostname Service...
Jan 31 04:20:51 np0005603609 systemd[1]: Started Hostname Service.
Jan 31 04:20:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 31 04:20:52 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/440537423' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 04:20:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 31 04:20:52 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4221942119' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 04:20:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 31 04:20:52 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/423536003' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 04:20:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 31 04:20:52 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3247337375' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 04:20:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:52.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 31 04:20:52 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/377591572' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 04:20:52 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 31 04:20:52 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/725191915' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 04:20:52 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:52 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:52 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:52.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 31 04:20:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/329836358' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 04:20:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 31 04:20:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3369620064' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 04:20:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 31 04:20:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1767818276' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 04:20:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 31 04:20:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2512753890' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 04:20:53 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 31 04:20:53 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3092234834' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 04:20:54 np0005603609 nova_compute[221550]: 2026-01-31 09:20:54.241 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:20:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:54.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:54 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:54 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 04:20:54 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:54.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 04:20:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:55 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 31 04:20:55 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2130671381' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 04:20:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 31 04:20:56 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1562061555' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 04:20:56 np0005603609 nova_compute[221550]: 2026-01-31 09:20:56.553 221554 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:20:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:56.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:56 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 04:20:56 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1463004898' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 04:20:56 np0005603609 radosgw[84443]: ====== starting new request req=0x7f4fe56576f0 =====
Jan 31 04:20:56 np0005603609 radosgw[84443]: ====== req done req=0x7f4fe56576f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:56 np0005603609 radosgw[84443]: beast: 0x7f4fe56576f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:56.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:57 np0005603609 ceph-mon[81667]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 31 04:20:57 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/834676227' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 04:20:57 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 04:20:57 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 04:20:57 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 04:20:57 np0005603609 ceph-mon[81667]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
